+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ [[ windows2016-release =~ openshift-.* ]]
+ [[ windows2016-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/08/08 20:54:30 Waiting for host: 192.168.66.101:22
2018/08/08 20:54:33 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/08 20:54:41 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/08 20:54:46 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s
2018/08/08 20:54:51 Connected to tcp://192.168.66.101:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0808 20:54:52.093604 1301 feature_gate.go:230] feature gates: &{map[]}
I0808 20:54:52.188899 1301 kernel_validator.go:81] Validating kernel version
I0808 20:54:52.189072 1301 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 54.508277 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:b23d4773885feff6d5f75b17273a26c94229fffba4ec6d57354ed66bb6b6acfc

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node/node01 untainted
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml
storageclass.storage.k8s.io/local created
configmap/local-storage-config created
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created
clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created
role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created
rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created
serviceaccount/local-storage-admin created
daemonset.extensions/local-volume-provisioner created
2018/08/08 20:56:07 Waiting for host: 192.168.66.102:22
2018/08/08 20:56:10 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/08/08 20:56:18 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/08/08 20:56:23 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s
2018/08/08 20:56:28 Connected to tcp://192.168.66.102:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0808 20:56:29.361269 1301 kernel_validator.go:81] Validating kernel version
I0808 20:56:29.362144 1301 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 38739968 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    56s       v1.11.0
node02    Ready     <none>    20s       v1.11.0
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    57s       v1.11.0
node02    Ready     <none>    21s       v1.11.0
+ make cluster-sync
./cluster/build.sh
Building ...
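The node-readiness gate traced above reduces to: retry the node listing until kubectl answers, then require that no node reports NotReady. A minimal standalone sketch of that check, assuming the cluster/kubectl.sh wrapper seen in the trace and an invented retry budget (the real script keeps its own counters):

#!/usr/bin/env bash
# Sketch of the readiness gate above: poll the node list until kubectl
# responds and no node is NotReady, then print the final node table.
retries=30
while true; do
    # cluster/kubectl.sh wraps kubectl with the provider kubeconfig (see trace).
    if nodes=$(cluster/kubectl.sh get nodes --no-headers 2>/dev/null) \
       && ! grep -q NotReady <<<"$nodes"; then
        echo 'Nodes are ready:'
        cluster/kubectl.sh get nodes
        break
    fi
    retries=$((retries - 1))
    [ "$retries" -gt 0 ] || { echo 'nodes never became Ready' >&2; exit 1; }
    sleep 10
done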
Untagged: localhost:33162/kubevirt/virt-controller:devel Untagged: localhost:33162/kubevirt/virt-controller@sha256:4c287e7e86ad874f6db6857fcc2b6f893df9207b40b496b242807925f01a39d1 Deleted: sha256:9d32115769414766b6e958be5b2607848191964717aa2674118a0cc9d4e933e9 Deleted: sha256:aade4ebee6553a92e882cb22512ea2db626b9e6c1f5bd0a190b59fc2873f84da Deleted: sha256:439c36b8d1c0df42ea905e69cff962a500291b62dedbab2a90b54d5fd3b88f57 Deleted: sha256:c90eee0b0863c6bf2079fb3bc5b5ab4c1fa90823d530e26875972a7860c45837 Untagged: localhost:33162/kubevirt/virt-launcher:devel Untagged: localhost:33162/kubevirt/virt-launcher@sha256:7717f3feff9c0a8bfda86f6fd7cd280d7fdf40c9dccc858b02d504a822f6fe93 Deleted: sha256:64dea213be1a422463d54387116586fba58fabeb3a0dd378fde9ba09441aa290 Deleted: sha256:028f42fe23ff08c3a329141cd62811212b69ed4aab6e58b2651bdeb4b94b20d9 Deleted: sha256:03b36cfe2ec3361b838aaa5cc7432399f95a5cfa2f080e9f608c3b95b1f46725 Deleted: sha256:bbfa73f59f548432f8c246ab9ed9c2c843e31a4a4556df67ba66aae71f6c8fcc Deleted: sha256:7cd7b2955287773841725fb151918fc680f78ccf9299214eebf9ff88ada1a537 Deleted: sha256:c1bea895a82999e7834e8858a43a5f4ca6761772c476bb9fb05cdab8a12828a5 Deleted: sha256:cbcc595a9ab4ba02ae6e3273eb7df140a3950c276064cacd4ae6c0e2f5838c74 Deleted: sha256:c7e625815d5b78c83e7e93d53601222e8eb748be69b7fddaec40722be65551c7 Deleted: sha256:b5f2c77775741f04862f843ceeee35810b5415d55814aabf9d1133490b3509a8 Deleted: sha256:545f368bbf6b76ea5d59803ea4df2fa44020fec683d152c67f47accc6298c654 Deleted: sha256:d072d749266d43752401d10821bfc5549530e42fc99d2fc248d86d89bce95da2 Deleted: sha256:25a06d344b2c3ef1d7aa4edeed364c4386fcec3f47b136da4a2f530e1c36a87e Untagged: localhost:33162/kubevirt/virt-handler:devel Untagged: localhost:33162/kubevirt/virt-handler@sha256:f9d7924a58823f0f49699a032e6c590fe36b95e512b18078a921f70cbbb91a78 Deleted: sha256:03c09c074c70b112c378f326ee77d537ca09033c7dbc8f96e2299e71a6664e4d Deleted: sha256:f939f034230feab4f3cb2665d89d860b464560cf6640b5ff241506ef6d531a4b Deleted: sha256:b4b83f7ff8b463a3c1b10f93d2de3e693e079cac42bbda69adbc9d5be2f459ee Deleted: sha256:547468d8e880445a37552cf8031135524c47977490d1aefc8ba8b8a1602331bf Untagged: localhost:33162/kubevirt/virt-api:devel Untagged: localhost:33162/kubevirt/virt-api@sha256:07415c4364e1a76d192a0636035d6aa94ff38f104da4c428852330b1807fd2d7 Deleted: sha256:3f26bb6e9349c652c76a14e289070e7707384614bbfae4f6de9ae167824072f9 Deleted: sha256:9865e1ba1fa2efb7b4fe86c28382dc55d237fcc236f8c78a583ba19d694c5437 Deleted: sha256:a167eeaadc41b64e677c58a898a5d74d01ea41fd7d919d8b9315414b673cdf25 Deleted: sha256:1f20075269401628b0cf17eac3fcd023158027ea3843a12115a4bfb09d7f8166 Untagged: localhost:33162/kubevirt/subresource-access-test:devel Untagged: localhost:33162/kubevirt/subresource-access-test@sha256:4b02c289fd6d7119cb290b232f27245c22137da943502861807380a0f7f8ac03 Deleted: sha256:14c9ce47eea20ef24c689a9062f66f54be268b79ee255965705a3386cbddd6a5 Deleted: sha256:b1a016d2cbeb676aa59cb8a386e4f40fb5960b36f1dbf55d534f92762d121210 Deleted: sha256:6eacfb1af242c7339855c1ecd62fac1c43c0092a3316398bdbaa3bad3ccf52c8 Deleted: sha256:bcd49215f2a9db51964c874dedcf9511c123614d847ac842d3db9ab6571a1d33 Untagged: localhost:33162/kubevirt/example-hook-sidecar:devel Untagged: localhost:33162/kubevirt/example-hook-sidecar@sha256:26886dd375f93cc12ad3ae5be1d78668decd640f70b64903baa59dce3d8bf4cf Deleted: sha256:402edfc3198e68d059b7938cac0f6f455db31fb75d68b9c82f0b1875daf3cc7f Deleted: sha256:b9fc7e3ab287887e25a14ebfc81cc8249dfb11596804fc97fa33e0982f30aea2 Deleted: 
sha256:d21a51628feffc9faf835e74bb447b21c08f478e6ea67e9267f28926347770cc Deleted: sha256:7ea359c4bc1946bc0cc2050a06bd98400a4d06d6c68cddaf1fdcbc75aaa1feb8 Sending build context to Docker daemon 5.632 kB Step 1/12 : FROM fedora:28 ---> cc510acfcd70 Step 2/12 : ENV LIBVIRT_VERSION 4.2.0 ---> Using cache ---> dcc6695ef4c0 Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo ---> Using cache ---> 724322f9fbd0 Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all ---> Using cache ---> f0fe3aa82c4c Step 5/12 : ENV GIMME_GO_VERSION 1.10 ---> Using cache ---> 7908885d79a9 Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 42679488ba0f Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 3e90be7b7e02 Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf ---> Using cache ---> 777bd794ff51 Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.10 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.10 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.10 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install ---> Running in 62be46b5263e go version go1.10 linux/amd64 Cloning into '/go/src/mvdan.cc/sh'... Note: checking out 'v2.5.0'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b HEAD is now at 5f66499 all: bump to 2.5.0 Switched to a new branch 'release-1.10' Branch 'release-1.10' set up to track remote branch 'release-1.10' from 'origin'. Already on 'release-1.10' Your branch is up to date with 'origin/release-1.10'. Already on 'release-1.10' Your branch is up to date with 'origin/release-1.10'. Note: checking out '1643683e1b54a9e88ad26d98f81400c8c9d9f4f9'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. 
If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b HEAD is now at 1643683 Add godoc badge (#444)  ---> bb3956924136 Removing intermediate container 62be46b5263e Step 10/12 : RUN pip install j2cli ---> Running in b442aae62fbf WARNING: Running pip install with root privileges is generally not a good idea. Try `pip install --user` instead. Collecting j2cli Downloading https://files.pythonhosted.org/packages/6a/fb/c67a5da25bc7f5fd840727ea742748df981ee425350cc33d57ed7e2cc78d/j2cli-0.3.1_0-py2-none-any.whl Collecting jinja2>=2.7.2 (from j2cli) Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB) Collecting MarkupSafe>=0.23 (from jinja2>=2.7.2->j2cli) Downloading https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz Installing collected packages: MarkupSafe, jinja2, j2cli Running setup.py install for MarkupSafe: started Running setup.py install for MarkupSafe: finished with status 'done' Successfully installed MarkupSafe-1.0 j2cli-0.3.1-0 jinja2-2.10 ---> dd6016de5626 Removing intermediate container b442aae62fbf Step 11/12 : ADD entrypoint.sh /entrypoint.sh ---> 81725140769a Removing intermediate container d1337f994def Step 12/12 : ENTRYPOINT /entrypoint.sh ---> Running in b64e9bbb07e2 ---> 920dc21718b4 Removing intermediate container b64e9bbb07e2 Successfully built 920dc21718b4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh Sending build context to Docker daemon 5.632 kB Step 1/12 : FROM fedora:28 ---> cc510acfcd70 Step 2/12 : ENV LIBVIRT_VERSION 4.2.0 ---> Using cache ---> dcc6695ef4c0 Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo ---> Using cache ---> 724322f9fbd0 Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all ---> Using cache ---> f0fe3aa82c4c Step 5/12 : ENV GIMME_GO_VERSION 1.10 ---> Using cache ---> 7908885d79a9 Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 42679488ba0f Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 3e90be7b7e02 Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf ---> Using cache ---> 777bd794ff51 Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d 
k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.10 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.10 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.10 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install ---> Using cache ---> bb3956924136 Step 10/12 : RUN pip install j2cli ---> Using cache ---> dd6016de5626 Step 11/12 : ADD entrypoint.sh /entrypoint.sh ---> Using cache ---> 81725140769a Step 12/12 : ENTRYPOINT /entrypoint.sh ---> Using cache ---> 920dc21718b4 Successfully built 920dc21718b4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 40.39 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 84570f0bf244 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> 4b8efcbf3461 Step 5/8 : USER 1001 ---> Using cache ---> c49257f2ff48 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> 2724cb212027 Removing intermediate container 585ee96467b0 Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 550b25423ab7 ---> ca8e4848800e Removing intermediate container 550b25423ab7 Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "virt-controller" '' ---> Running in c4cf3a15c718 ---> 78f6b1529d71 Removing intermediate container c4cf3a15c718 Successfully built 78f6b1529d71 Sending build context to Docker daemon 43.31 MB Step 1/10 : FROM kubevirt/libvirt:4.2.0 ---> 5f0bfe81a3e0 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> c1e65e6c8241 Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 4c20d196c128 Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> acfaa439ae4a Removing intermediate container 0dbfe59b9b97 Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> 573bc7cb497f Removing intermediate container 94aa6966e291 Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in 08618d133aaa  ---> 219d70ebe923 Removing intermediate container 08618d133aaa Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in 1a49b1fb628b  ---> 66845b11838f Removing intermediate container 1a49b1fb628b Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> 185a35e2db82 Removing intermediate container 72cc59f46700 Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in 90abd40a782a ---> cf3930d0b651 Removing intermediate container 90abd40a782a Step 10/10 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "virt-launcher" '' ---> Running in 4760f3b2e585 ---> 59de4ef5a722 Removing intermediate container 4760f3b2e585 Successfully built 59de4ef5a722 Sending build context to Docker daemon 41.69 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 
3265a3c6f899 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> 7e1e4e6e3c95 Removing intermediate container 218e280049b6 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in c62c6fc04cd7 ---> 0b3df47b4728 Removing intermediate container c62c6fc04cd7 Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "virt-handler" '' ---> Running in d0fdb5737316 ---> 83a00ce28752 Removing intermediate container d0fdb5737316 Successfully built 83a00ce28752 Sending build context to Docker daemon 38.84 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 6f2134b876af Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> d5ef0239bf68 Step 5/8 : USER 1001 ---> Using cache ---> 233000b2d9b5 Step 6/8 : COPY virt-api /usr/bin/virt-api ---> 80a272f97d84 Removing intermediate container 345910a258c1 Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in 863bfc6cef35 ---> c835810f0452 Removing intermediate container 863bfc6cef35 Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "virt-api" '' ---> Running in 2437bb223e20 ---> 2889e3f3f635 Removing intermediate container 2437bb223e20 Successfully built 2889e3f3f635 Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/7 : ENV container docker ---> Using cache ---> 3fe7db912524 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 06d762a67408 Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> 3876d185cf84 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> 1fb50ce9b78f Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-windows2016-release2" '' ---> Using cache ---> 1b3b27237ad4 Successfully built 1b3b27237ad4 Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/5 : ENV container docker ---> Using cache ---> 3fe7db912524 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 6bc4f549313f Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "vm-killer" '' ---> Using cache ---> 4d3c35709578 Successfully built 4d3c35709578 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 68f33cf86aab Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 9ef1c0ce5d24 Step 3/7 : ENV container docker ---> Using cache ---> 9ad55e41ed61 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 17a81fda7c2b Step 5/7 : ADD entry-point.sh / ---> Using cache ---> 681d01e165e6 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> a79815fe82d9 Step 7/7 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "registry-disk-v1alpha" '' ---> Using cache ---> 01c4b8a10474 Successfully built 01c4b8a10474 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33547/kubevirt/registry-disk-v1alpha:devel ---> 01c4b8a10474 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 05aed96d86e5 Step 3/4 : RUN curl 
https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 0c789bc44ebe Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release2" '' ---> Using cache ---> b5af01da1cf6 Successfully built b5af01da1cf6 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33547/kubevirt/registry-disk-v1alpha:devel ---> 01c4b8a10474 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d1691b3c6397 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> a867409b41c7 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release2" '' ---> Using cache ---> a58eed679ce2 Successfully built a58eed679ce2 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33547/kubevirt/registry-disk-v1alpha:devel ---> 01c4b8a10474 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d1691b3c6397 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> ffa69f199094 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release2" '' ---> Using cache ---> 19afad248297 Successfully built 19afad248297 Sending build context to Docker daemon 35.59 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> deebe9dc06da Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 4094ce77e412 Step 5/8 : USER 1001 ---> Using cache ---> ba694520e9a4 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> d77019f41e6f Removing intermediate container 2a563f39db13 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in fc2b6d0726eb ---> e036fc735b61 Removing intermediate container fc2b6d0726eb Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "subresource-access-test" '' ---> Running in 19022e588c1b ---> f0506ef7bc96 Removing intermediate container 19022e588c1b Successfully built f0506ef7bc96 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/9 : ENV container docker ---> Using cache ---> 3fe7db912524 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> e0cf52293e57 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 8c031086e8cb Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 0f6dd31de4d3 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 6a702eb79a95 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> bed79012c9f3 Step 9/9 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "winrmcli" '' ---> Using cache ---> 307d3ac58d04 Successfully built 307d3ac58d04 Sending build context to Docker daemon 36.8 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> cc296a71da13 Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> 4d0b6fb90183 Removing intermediate 
container b55436b1725c Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in 5b9e9c006bb4 ---> 6ee13bed5ee4 Removing intermediate container 5b9e9c006bb4 Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-windows2016-release2" '' ---> Running in ee059e7219de ---> 44efc7496283 Removing intermediate container ee059e7219de Successfully built 44efc7496283 hack/build-docker.sh push The push refers to a repository [localhost:33547/kubevirt/virt-controller] e33fda06a2f4: Preparing 915a0c3e3f5f: Preparing 891e1e4ef82a: Preparing 915a0c3e3f5f: Pushed e33fda06a2f4: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:d0b6abb51351a3021a179be07e6cc9553785c58a601bdfe86720ddfd40b1101c size: 949 The push refers to a repository [localhost:33547/kubevirt/virt-launcher] a757c908a74e: Preparing 74368430bab0: Preparing 6e9785a97dba: Preparing 19323b907fa4: Preparing fe750f2b0c5b: Preparing 5379fb5d8cce: Preparing da38cf808aa5: Preparing b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing 5eefb9960a36: Preparing 891e1e4ef82a: Preparing da38cf808aa5: Waiting 5379fb5d8cce: Waiting b83399358a92: Waiting 186d8b3e4fd8: Waiting fa6154170bf5: Waiting 5eefb9960a36: Waiting 74368430bab0: Pushed a757c908a74e: Pushed 19323b907fa4: Pushed b83399358a92: Pushed da38cf808aa5: Pushed 186d8b3e4fd8: Pushed fa6154170bf5: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller 6e9785a97dba: Pushed 5379fb5d8cce: Pushed fe750f2b0c5b: Pushed 5eefb9960a36: Pushed devel: digest: sha256:89eddb561cd3f22f3b9e8cddbed431f8e5d63aeb7250e9acc4a3c86e775de2f5 size: 2828 The push refers to a repository [localhost:33547/kubevirt/virt-handler] 33cc9b4768f8: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher 33cc9b4768f8: Pushed devel: digest: sha256:baacac32691699d143148762bfbd514a9eb21bfdfb9e4ae306c70bdc3fa43a80 size: 741 The push refers to a repository [localhost:33547/kubevirt/virt-api] 5cde9d1bee3a: Preparing 7cc07c574d2a: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler 7cc07c574d2a: Pushed 5cde9d1bee3a: Pushed devel: digest: sha256:ad639dd16302638fa0ebef65f4132b65ff8e7f51aef7ab15a347b07ac8e76844 size: 948 The push refers to a repository [localhost:33547/kubevirt/disks-images-provider] 1548fa7b1c9e: Preparing a7621d2cf364: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 1548fa7b1c9e: Pushed a7621d2cf364: Pushed devel: digest: sha256:d1e04ccb207fce5ec5cf2070e46f5fcd791bf5d9940c234d737d537697e98e11 size: 948 The push refers to a repository [localhost:33547/kubevirt/vm-killer] 3c31f9f8d755: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider 3c31f9f8d755: Pushed devel: digest: sha256:23514e9bb58b187085ff3d46c138d84c96d483023102f31cb8e89f022cae8d29 size: 740 The push refers to a repository [localhost:33547/kubevirt/registry-disk-v1alpha] c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing c66b9a220e25: Pushed 4662bbc21c2d: Pushed 25edbec0eaea: Pushed devel: digest: sha256:65fc69563851d1c5f6bcbf2616441834ad1f294758a1c63c4a033987c24f6921 size: 948 The push refers to a repository [localhost:33547/kubevirt/cirros-registry-disk-demo] ff776dc1f8e1: Preparing c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing c66b9a220e25: Mounted from kubevirt/registry-disk-v1alpha 25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha 4662bbc21c2d: Mounted from kubevirt/registry-disk-v1alpha ff776dc1f8e1: Pushed devel: digest: 
sha256:35ac14a90415143af3c8a766a4080b3952c2254268bda99241b333a522d21ec9 size: 1160 The push refers to a repository [localhost:33547/kubevirt/fedora-cloud-registry-disk-demo] 21e772fe647f: Preparing c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing c66b9a220e25: Mounted from kubevirt/cirros-registry-disk-demo 4662bbc21c2d: Mounted from kubevirt/cirros-registry-disk-demo 25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo 21e772fe647f: Pushed devel: digest: sha256:2af5fdf2c2617af7e6dac2d02e473b952d600701e6d11cf5f0ce012d6ad4fb46 size: 1161 The push refers to a repository [localhost:33547/kubevirt/alpine-registry-disk-demo] ec917ce1d686: Preparing c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing c66b9a220e25: Mounted from kubevirt/fedora-cloud-registry-disk-demo 25edbec0eaea: Mounted from kubevirt/fedora-cloud-registry-disk-demo 4662bbc21c2d: Mounted from kubevirt/fedora-cloud-registry-disk-demo ec917ce1d686: Pushed devel: digest: sha256:f8536c7203b443d52b11af17660023492feb927b79874afeb6e761285f5be6e8 size: 1160 The push refers to a repository [localhost:33547/kubevirt/subresource-access-test] 9276dd5053a1: Preparing 7e69243e781e: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer 7e69243e781e: Pushed 9276dd5053a1: Pushed devel: digest: sha256:acbc1d8d78c98bac8c273e7c49080ef66bf692dd0e632bc31dfcf9fdf5d71b32 size: 948 The push refers to a repository [localhost:33547/kubevirt/winrmcli] a117c61a5658: Preparing c9df4405017d: Preparing 99bb32247f65: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test a117c61a5658: Pushed 99bb32247f65: Pushed c9df4405017d: Pushed devel: digest: sha256:af778f3f0966af252e608ac96b334411df70554eaf54bbdae2099df9723927f8 size: 1165 The push refers to a repository [localhost:33547/kubevirt/example-hook-sidecar] c6d76cfe5f13: Preparing 39bae602f753: Preparing c6d76cfe5f13: Pushed 39bae602f753: Pushed devel: digest: sha256:2eedcec7576b2f293fd9f8ca5c4a4f6859e6d1d08bf1c7e33a82c96efee38731 size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z 
kubevirt-functional-tests-windows2016-release ']' ++ provider_prefix=kubevirt-functional-tests-windows2016-release2 ++ job_prefix=kubevirt-functional-tests-windows2016-release2 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-201-gdafbaa8 ++ KUBEVIRT_VERSION=v0.7.0-201-gdafbaa8 + source cluster/k8s-1.11.0/provider.sh ++ set -e ++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace image_pull_policy ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system +++ image_pull_policy=IfNotPresent ++ test -f hack/config-provider-k8s-1.11.0.sh ++ source hack/config-provider-k8s-1.11.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl +++ docker_prefix=localhost:33547/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace image_pull_policy + echo 'Cleaning up ...' Cleaning up ... 
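The clean-up pass traced below removes every kubevirt.io-labelled object from the default and kube-system namespaces before redeploying. Condensed into a sketch (the resource list is abbreviated; the kubeconfig and kubectl paths are taken from the trace above):

# Condensed sketch of the label-driven clean-up traced below.
export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
namespaces=(default kube-system)
resources='apiservices deployment rs services validatingwebhookconfiguration
           secrets pv pvc ds customresourcedefinitions pods clusterrolebinding
           rolebinding roles clusterroles serviceaccounts'
for ns in "${namespaces[@]}"; do
    for res in $resources; do
        # Everything KubeVirt deploys carries a kubevirt.io label.
        cluster/k8s-1.11.0/.kubectl -n "$ns" delete "$res" -l kubevirt.io
    done
done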
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pods -l kubevirt.io No resources found + 
_kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ wc -l ++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ cluster/k8s-1.11.0/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io No resources found. Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export 
KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ cluster/k8s-1.11.0/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io No resources found. 
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-windows2016-release ']' ++ provider_prefix=kubevirt-functional-tests-windows2016-release2 ++ job_prefix=kubevirt-functional-tests-windows2016-release2 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-201-gdafbaa8 ++ KUBEVIRT_VERSION=v0.7.0-201-gdafbaa8 + source cluster/k8s-1.11.0/provider.sh ++ set -e ++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace image_pull_policy ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system +++ image_pull_policy=IfNotPresent ++ test -f hack/config-provider-k8s-1.11.0.sh ++ source hack/config-provider-k8s-1.11.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ 
kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl +++ docker_prefix=localhost:33547/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace image_pull_policy + echo 'Deploying ...' Deploying ... + [[ -z windows2016-release ]] + [[ windows2016-release =~ .*-dev ]] + [[ windows2016-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io/kubevirt.io:admin created clusterrole.rbac.authorization.k8s.io/kubevirt.io:edit created clusterrole.rbac.authorization.k8s.io/kubevirt.io:view created serviceaccount/kubevirt-apiserver created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver-auth-delegator created rolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created role.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrole.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrole.rbac.authorization.k8s.io/kubevirt-controller created serviceaccount/kubevirt-controller created serviceaccount/kubevirt-privileged created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-privileged-cluster-admin created clusterrole.rbac.authorization.k8s.io/kubevirt.io:default created clusterrolebinding.rbac.authorization.k8s.io/kubevirt.io:default created service/virt-api created deployment.extensions/virt-api created deployment.extensions/virt-controller created daemonset.extensions/virt-handler created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstances.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancereplicasets.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancepresets.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachines.kubevirt.io created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim/disk-alpine created 
persistentvolume/host-path-disk-alpine created
persistentvolumeclaim/disk-custom created
persistentvolume/host-path-disk-custom created
daemonset.extensions/disks-images-provider created
serviceaccount/kubevirt-testing created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-testing-cluster-admin created
+ [[ k8s-1.11.0 =~ os-* ]]
+ echo Done
Done
+ namespaces=(kube-system default)
+ [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]]
+ timeout=300
+ sample=30
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n 'virt-api-7d79975b94-bd5c9 0/1 ContainerCreating 0 2s
virt-api-7d79975b94-fmwwx 0/1 ContainerCreating 0 2s
virt-controller-67dcdd8464-6hmkf 0/1 ContainerCreating 0 2s
virt-controller-67dcdd8464-v4gxm 0/1 ContainerCreating 0 2s
virt-handler-6h4gj 0/1 ContainerCreating 0 2s
virt-handler-h52dd 0/1 ContainerCreating 0 2s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ grep -v Running
+ cluster/kubectl.sh get pods -n kube-system --no-headers
disks-images-provider-b4r5r 0/1 Pending 0 0s
virt-api-7d79975b94-bd5c9 0/1 ContainerCreating 0 2s
virt-api-7d79975b94-fmwwx 0/1 ContainerCreating 0 2s
virt-controller-67dcdd8464-6hmkf 0/1 ContainerCreating 0 2s
virt-controller-67dcdd8464-v4gxm 0/1 ContainerCreating 0 2s
virt-handler-6h4gj 0/1 ContainerCreating 0 2s
virt-handler-h52dd 0/1 ContainerCreating 0 2s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-6ppxc           1/1     Running   0          15m
coredns-78fcdf6894-7qjc2           1/1     Running   0          15m
disks-images-provider-b4r5r        1/1     Running   0          32s
disks-images-provider-ljrbs        1/1     Running   0          32s
etcd-node01                        1/1     Running   0          15m
kube-apiserver-node01              1/1     Running   0          15m
kube-controller-manager-node01     1/1     Running   0          15m
kube-flannel-ds-6vg5h              1/1     Running   0          15m
kube-flannel-ds-kn542              1/1     Running   0          15m
kube-proxy-fgpfl                   1/1     Running   0          15m
kube-proxy-s9wv7                   1/1     Running   0          15m
kube-scheduler-node01              1/1     Running   0          15m
virt-api-7d79975b94-bd5c9          1/1     Running   0          34s
virt-api-7d79975b94-fmwwx          1/1     Running   1          34s
virt-controller-67dcdd8464-6hmkf   1/1     Running   0          34s
virt-controller-67dcdd8464-v4gxm   1/1     Running   0          34s
virt-handler-6h4gj                 1/1     Running   0          34s
virt-handler-h52dd                 1/1     Running   0          34s
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n default --no-headers
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
NAME                             READY   STATUS    RESTARTS   AGE
local-volume-provisioner-b7qbz   1/1     Running   0          15m
local-volume-provisioner-xqm69   1/1     Running   0          15m
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml'
+ [[ windows2016-release =~ windows.* ]]
+ [[ -d /home/nfs/images/windows2016 ]]
+ kubectl create -f -
+ cluster/kubectl.sh create -f -
persistentvolume/disk-windows created
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml --ginkgo.focus=Windows'
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml --ginkgo.focus=Windows'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
Sending build context to Docker daemon 5.632 kB
Step 1/12 : FROM fedora:28
 ---> cc510acfcd70
Step 2/12 : ENV LIBVIRT_VERSION 4.2.0
 ---> Using cache
 ---> dcc6695ef4c0
Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
 ---> Using cache
 ---> 724322f9fbd0
Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all
 ---> Using cache
 ---> f0fe3aa82c4c
Step 5/12 : ENV GIMME_GO_VERSION 1.10
 ---> Using cache
 ---> 7908885d79a9
Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 42679488ba0f
Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 3e90be7b7e02
Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf
 ---> Using cache
 ---> 777bd794ff51
Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.10 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.10 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.10 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install
 ---> Using cache
 ---> bb3956924136
Step 10/12 : RUN pip install j2cli
 ---> Using cache
 ---> dd6016de5626
Step 11/12 : ADD entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 81725140769a
Step 12/12 : ENTRYPOINT /entrypoint.sh
 ---> Using cache
 ---> 920dc21718b4
Successfully built 920dc21718b4
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test
hack/functests.sh
Running Suite: Tests Suite
==========================
Random Seed: 1533762746
Will run 6 of 152 specs
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST:17.758 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to start a vmi
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133
------------------------------
• [SLOW TEST:18.927 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to stop a running vmi
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139
------------------------------
• [SLOW TEST:229.546 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
    should have correct UUID
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192
------------------------------
• [SLOW TEST:213.781 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
    should have pod IP
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208
------------------------------
• [SLOW TEST:21.558 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
    should succeed to start a vmi
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242
------------------------------
Pod name: disks-images-provider-b4r5r
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-ljrbs
Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-7d79975b94-bd5c9
Pod phase: Running
level=info timestamp=2018-08-08T21:20:31.510430Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/08 21:20:34 http: TLS handshake error from 10.244.1.1:43840: EOF
level=info timestamp=2018-08-08T21:20:37.622737Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/08 21:20:44 http: TLS handshake error from 10.244.1.1:43846: EOF
level=info timestamp=2018-08-08T21:20:50.193962Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-08-08T21:20:51.028424Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-08-08T21:20:51.029983Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/08 21:20:54 http: TLS
handshake error from 10.244.1.1:43852: EOF level=info timestamp=2018-08-08T21:21:01.567872Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/08 21:21:04 http: TLS handshake error from 10.244.1.1:43858: EOF level=info timestamp=2018-08-08T21:21:04.798764Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-08T21:21:04.801369Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-08T21:21:07.851406Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-08T21:21:08.628894Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/08 21:21:14 http: TLS handshake error from 10.244.1.1:43864: EOF Pod name: virt-api-7d79975b94-fmwwx Pod phase: Running level=info timestamp=2018-08-08T21:19:27.219820Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/08 21:19:34 http: TLS handshake error from 10.244.0.1:41490: EOF 2018/08/08 21:19:44 http: TLS handshake error from 10.244.0.1:41552: EOF 2018/08/08 21:19:54 http: TLS handshake error from 10.244.0.1:41612: EOF level=info timestamp=2018-08-08T21:19:57.313764Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/08 21:20:04 http: TLS handshake error from 10.244.0.1:41674: EOF 2018/08/08 21:20:14 http: TLS handshake error from 10.244.0.1:41734: EOF 2018/08/08 21:20:24 http: TLS handshake error from 10.244.0.1:41794: EOF level=info timestamp=2018-08-08T21:20:27.299503Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/08 21:20:34 http: TLS handshake error from 10.244.0.1:41854: EOF 2018/08/08 21:20:44 http: TLS handshake error from 10.244.0.1:41914: EOF 2018/08/08 21:20:54 http: TLS handshake error from 10.244.0.1:41974: EOF level=info timestamp=2018-08-08T21:20:57.337930Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/08 21:21:04 http: TLS handshake error from 10.244.0.1:42034: EOF 2018/08/08 21:21:14 http: TLS handshake error from 10.244.0.1:42094: EOF Pod name: virt-controller-67dcdd8464-6hmkf Pod phase: Running level=info timestamp=2018-08-08T21:11:40.255308Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-67dcdd8464-v4gxm Pod phase: Running level=info timestamp=2018-08-08T21:12:45.727327Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi57c7k248jdxkzzgm4mf4z4gk4tm9m84wnxnskl2mlsq7gqswwbbtpgcqm8g6pzg kind= uid=cbb0cdb5-9b4f-11e8-a573-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-08T21:12:45.860632Z pos=vmi.go:157 
component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi57c7k248jdxkzzgm4mf4z4gk4tm9m84wnxnskl2mlsq7gqswwbbtpgcqm8g6pzg\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi57c7k248jdxkzzgm4mf4z4gk4tm9m84wnxnskl2mlsq7gqswwbbtpgcqm8g6pzg" level=info timestamp=2018-08-08T21:12:46.026861Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi57c7k248jdxkzzgm4mf4z4gk4tm9m84wnxnskl2mlsq7gqswwbbtpgcqm8g6pzg\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi57c7k248jdxkzzgm4mf4z4gk4tm9m84wnxnskl2mlsq7gqswwbbtpgcqm8g6pzg" level=info timestamp=2018-08-08T21:13:04.711014Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiq7lqrt89x4n8ltf7b779mvsx5f9j2k99mcvndl8ftf4lnf2fsp9csv2tjmlvgl6 kind= uid=d6fbc098-9b4f-11e8-a573-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-08T21:13:04.714615Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiq7lqrt89x4n8ltf7b779mvsx5f9j2k99mcvndl8ftf4lnf2fsp9csv2tjmlvgl6 kind= uid=d6fbc098-9b4f-11e8-a573-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-08T21:16:54.361165Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm kind= uid=5fdcc45b-9b50-11e8-a573-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-08T21:16:54.362871Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm kind= uid=5fdcc45b-9b50-11e8-a573-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-08T21:16:54.452922Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm" level=info timestamp=2018-08-08T21:16:54.492709Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm" level=info timestamp=2018-08-08T21:20:28.972072Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzzmvz5clc7mcs97mj7j4j2sdq4v kind= uid=dfcdf013-9b50-11e8-a573-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-08T21:20:28.973290Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default 
name=testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzzmvz5clc7mcs97mj7j4j2sdq4v kind= uid=dfcdf013-9b50-11e8-a573-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-08T21:20:29.113529Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzzmvz5clc7mcs97mj7j4j2sdq4v\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzzmvz5clc7mcs97mj7j4j2sdq4v" level=info timestamp=2018-08-08T21:20:29.176412Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzzmvz5clc7mcs97mj7j4j2sdq4v\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzzmvz5clc7mcs97mj7j4j2sdq4v" level=info timestamp=2018-08-08T21:20:50.213650Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind= uid=ec774099-9b50-11e8-a573-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-08T21:20:50.217180Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind= uid=ec774099-9b50-11e8-a573-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-handler-6h4gj Pod phase: Running level=info timestamp=2018-08-08T21:21:09.250516Z pos=vm.go:742 component=virt-handler namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-08-08T21:21:09.250656Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd, existing: true\n" level=info timestamp=2018-08-08T21:21:09.250709Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-08-08T21:21:09.251842Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-08T21:21:09.252891Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind= uid=ec774099-9b50-11e8-a573-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-08T21:21:09.258071Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind= uid=ec774099-9b50-11e8-a573-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-08-08T21:21:09.258279Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd, existing: true\n" level=info timestamp=2018-08-08T21:21:09.258329Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-08-08T21:21:09.258391Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-08T21:21:09.258492Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind= uid=ec774099-9b50-11e8-a573-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-08T21:21:09.258678Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind= uid=ec774099-9b50-11e8-a573-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-08-08T21:21:17.332604Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd, existing: false\n" level=info timestamp=2018-08-08T21:21:17.332765Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-08T21:21:17.334036Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-08T21:21:17.334820Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-h52dd Pod phase: Running level=info timestamp=2018-08-08T21:20:28.577864Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm, existing: false\n" level=info timestamp=2018-08-08T21:20:28.578063Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-08T21:20:28.578125Z pos=vm.go:331 component=virt-handler msg="Domain status: Shutoff, reason: Destroyed\n" level=info timestamp=2018-08-08T21:20:28.578391Z pos=vm.go:358 component=virt-handler namespace=kubevirt-test-default name=testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object." level=info timestamp=2018-08-08T21:20:28.578526Z pos=vm.go:410 component=virt-handler namespace=kubevirt-test-default name=testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm kind=VirtualMachineInstance uid= msg="Processing deletion." level=info timestamp=2018-08-08T21:20:28.578870Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-08-08T21:21:14.775809Z pos=vm.go:742 component=virt-handler namespace=kubevirt-test-default name=testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-08-08T21:21:14.776804Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm, existing: false\n" level=info timestamp=2018-08-08T21:21:14.776938Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-08T21:21:14.777119Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-08T21:21:14.777475Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-08-08T21:21:14.778025Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm, existing: false\n" level=info timestamp=2018-08-08T21:21:14.778117Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-08T21:21:14.778298Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-08T21:21:14.778480Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiglqphdrwnszfbzjc6p8nht2b2g7fg7n4m5l6n5qhjfswhxhbn2v9kj87v29jgsm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzmn425 Pod phase: Running level=info timestamp=2018-08-08T21:20:49.132721Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-08T21:20:49.135718Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzzmvz5clc7mcs97mj7j4j2sdq4v kind= uid=dfcdf013-9b50-11e8-a573-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-08T21:20:49.154609Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-08T21:20:49.267024Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzzmvz5clc7mcs97mj7j4j2sdq4v kind= uid=dfcdf013-9b50-11e8-a573-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-08T21:20:49.316621Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 5d307ca9-b3ef-428c-8861-06e72d69f223: 203" level=info timestamp=2018-08-08T21:20:49.570920Z pos=manager.go:302 component=virt-launcher namespace=kubevirt-test-default name=testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzzmvz5clc7mcs97mj7j4j2sdq4v kind= uid=dfcdf013-9b50-11e8-a573-525500d15501 msg="Domain stopped." 
level=info timestamp=2018-08-08T21:20:49.571486Z pos=server.go:96 component=virt-launcher namespace=kubevirt-test-default name=testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzzmvz5clc7mcs97mj7j4j2sdq4v kind= uid=dfcdf013-9b50-11e8-a573-525500d15501 msg="Signaled vmi kill"
caught signal
level=info timestamp=2018-08-08T21:20:49.929463Z pos=monitor.go:266 component=virt-launcher msg="Received signal 15."
level=info timestamp=2018-08-08T21:20:50.311895Z pos=monitor.go:231 component=virt-launcher msg="Process 5d307ca9-b3ef-428c-8861-06e72d69f223 and pid 203 is gone!"
level=info timestamp=2018-08-08T21:20:50.320654Z pos=client.go:136 component=virt-launcher msg="Libvirt event 5 with reason 1 received"
level=info timestamp=2018-08-08T21:20:50.359622Z pos=manager.go:306 component=virt-launcher namespace=kubevirt-test-default name=testvmixtjrrpslxbnvxwss4j98wsc9qsd9mmqpbkcdzzmvz5clc7mcs97mj7j4j2sdq4v kind=VirtualMachineInstance uid= msg="Domain not running or paused, nothing to do."
level=info timestamp=2018-08-08T21:20:50.365870Z pos=client.go:119 component=virt-launcher msg="domain status: 5:2"
level=info timestamp=2018-08-08T21:20:50.368344Z pos=virt-launcher.go:233 component=virt-launcher msg="Waiting on final notifications to be sent to virt-handler."
level=info timestamp=2018-08-08T21:20:50.376948Z pos=client.go:145 component=virt-launcher msg="processed event"
• Failure [28.622 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
    should succeed to stop a vmi [It]
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250

    Expected error:
        <*errors.StatusError | 0xc420171050>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "virtualmachineinstances.kubevirt.io \"testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd\" not found",
                Reason: "NotFound",
                Details: {
                    Name: "testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd",
                    Group: "kubevirt.io",
                    Kind: "virtualmachineinstances",
                    UID: "",
                    Causes: nil,
                    RetryAfterSeconds: 0,
                },
                Code: 404,
            },
        }
        virtualmachineinstances.kubevirt.io "testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd" not found
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1508
------------------------------
STEP: Starting the vmi via kubectl command
level=info timestamp=2018-08-08T21:20:51.039225Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind=VirtualMachineInstance uid=ec774099-9b50-11e8-a573-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmdxg7z"
level=info timestamp=2018-08-08T21:21:06.625134Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind=VirtualMachineInstance uid=ec774099-9b50-11e8-a573-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmdxg7z"
level=info timestamp=2018-08-08T21:21:08.751788Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind=VirtualMachineInstance uid=ec774099-9b50-11e8-a573-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-08-08T21:21:08.803348Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi8qj6rqhrwgc24pqslrnm4dzgj4nks26fs2psmc6ckqgrwhdbqlwlsx4cc6trlgd kind=VirtualMachineInstance uid=ec774099-9b50-11e8-a573-525500d15501 msg="VirtualMachineInstance started."
STEP: Deleting the vmi via kubectl command
STEP: Checking that the vmi does not exist anymore
STEP: Checking that the vmi pod terminated
SSS
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...

Summarizing 1 Failure:

[Fail] Windows VirtualMachineInstance with kubectl command [It] should succeed to stop a vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1508

Ran 6 of 152 Specs in 539.693 seconds
FAIL! -- 5 Passed | 1 Failed | 0 Pending | 146 Skipped
--- FAIL: TestTests (539.72s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
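
Editor's note (not part of the CI output): the only failure in this run is the "with kubectl command / should succeed to stop a vmi" spec, which fails at tests/utils.go:1508 with a 404 NotFound for the test VMI while the test is verifying the stop; the deployment and the other five Windows specs passed. Reading the captured STEP trace, this looks like a timing issue in the post-delete check (the VMI is already gone when the helper fetches it) rather than a cluster or deploy problem, but that is an inference from the log, not something the log states. Below is a minimal sketch, assuming the same source tree, a still-running k8s-1.11.0 dev cluster, and the disk-windows PV created above, of how one might re-run just this spec using the FUNC_TEST_ARGS / make functest convention visible in the log; exact targets and flag handling can differ between KubeVirt revisions.

# Hedged sketch, not taken from the CI job itself.
export KUBEVIRT_PROVIDER=k8s-1.11.0
export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
# --ginkgo.focus takes a regex; \s stands in for spaces so the value survives
# being passed through make and the helper scripts without extra quoting.
export FUNC_TEST_ARGS='--ginkgo.noColor --ginkgo.focus=should\ssucceed\sto\sstop\sa\svmi'
make functest

# Inspect what the failing check could not find; once the suite has cleaned up,
# both of these listings are normally empty.
cluster/kubectl.sh get virtualmachineinstances.kubevirt.io -n kubevirt-test-default
cluster/kubectl.sh get pods -n kubevirt-test-default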