+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release + WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release + [[ windows2016-release =~ openshift-.* ]] + [[ windows2016-release =~ .*-1.10.4-.* ]] + export KUBEVIRT_PROVIDER=k8s-1.11.0 + KUBEVIRT_PROVIDER=k8s-1.11.0 + export KUBEVIRT_NUM_NODES=2 + KUBEVIRT_NUM_NODES=2 + export NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + export NAMESPACE=kube-system + NAMESPACE=kube-system + trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP + make cluster-down ./cluster/down.sh + make cluster-up ./cluster/up.sh Downloading ....... Downloading ....... Downloading ....... 2018/07/25 16:23:23 Waiting for host: 192.168.66.101:22 2018/07/25 16:23:26 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/07/25 16:23:34 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/07/25 16:23:39 Connected to tcp://192.168.66.101:22 ++ systemctl status docker ++ grep active ++ wc -l + [[ 1 -eq 0 ]] + kubeadm init --config /etc/kubernetes/kubeadm.conf [init] using Kubernetes version: v1.11.0 [preflight] running pre-flight checks I0725 16:23:40.027347 1257 feature_gate.go:230] feature gates: &{map[]} I0725 16:23:40.131925 1257 kernel_validator.go:81] Validating kernel version I0725 16:23:40.132172 1257 kernel_validator.go:96] Validating kernel config [preflight/images] Pulling images required for setting up a Kubernetes cluster [preflight/images] This might take a minute or two, depending on the speed of your internet connection [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [preflight] Activating the kubelet service [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. 
[certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1] [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated apiserver-etcd-client certificate and key. [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] this might take a minute or longer if the control plane images have to be pulled [apiclient] All control plane components are healthy after 55.007308 seconds [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster [markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''" [markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation [bootstraptoken] using token: abcdef.1234567890123456 [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. 
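
For reference, the post-init steps kubeadm prints above amount to copying the admin kubeconfig and sanity-checking the control plane. A minimal sketch for interactive use (the CI scripts below skip this and pass --kubeconfig=/etc/kubernetes/admin.conf to kubectl directly):

    # On the master (node01), as a regular user; paths are the ones kubeadm printed above.
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # Verify the control plane answers before deploying a pod network.
    kubectl get nodes
    kubectl get pods -n kube-system
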
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:bac8d3b2b0bc76ad0e18813c0f088f6a82e951d6eaff401e1ff2db40669f89ef + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node/node01 untainted + kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml storageclass.storage.k8s.io/local created configmap/local-storage-config created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created serviceaccount/local-storage-admin created daemonset.extensions/local-volume-provisioner created 2018/07/25 16:24:53 Waiting for host: 192.168.66.102:22 2018/07/25 16:24:56 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/07/25 16:25:08 Connected to tcp://192.168.66.102:22 ++ systemctl status docker ++ grep active ++ wc -l + [[ 1 -eq 0 ]] + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. Provide the missing builtin kernel ipvs support I0725 16:25:09.247225 1258 kernel_validator.go:81] Validating kernel version I0725 16:25:09.247614 1258 kernel_validator.go:96] Validating kernel config [discovery] Trying to connect to API Server "192.168.66.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [preflight] Activating the kubelet service [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap... 
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 38739968 kubectl Sending file modes: C0600 5450 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. + set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 54s v1.11.0 node02 Ready 25s v1.11.0 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep NotReady + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 55s v1.11.0 node02 Ready 26s v1.11.0 + make cluster-sync ./cluster/build.sh Building ... Untagged: localhost:34998/kubevirt/virt-controller:devel Untagged: localhost:34998/kubevirt/virt-controller@sha256:3c457b2ace44521cf2dc75a9deef2b52291f388d3f5da053abcb1cb166c3d33d Deleted: sha256:c1d79554a45435bf8ce6986bdfe4b06205b08dbb35d1a5de5c1d03aa6389d91a Untagged: localhost:34998/kubevirt/virt-launcher:devel Untagged: localhost:34998/kubevirt/virt-launcher@sha256:63a3858d4e000a5c5c166b94caf5ea48609dd40a9177022ad77c7c7682a611c5 Deleted: sha256:580e46b88d285d45d7040c3a5116b11a5fefc8515f1629aebbb5115af9955ee8 Deleted: sha256:d4bd23835de8b2e32b1c194b3ad75c8d8dba962a82d3b49a44cab94cb07c7335 Deleted: sha256:6fbf8e85abd7aadb42a73650bde49f955d04efba55c59f7ceededa064adbfcf9 Deleted: sha256:a11db76f9919709f0e969eca87740848d6bb367dd52729184252564868dc66ee Deleted: sha256:32933a430ea08e7222a6d89acc526e4093e4a97359e17309617a4861e8d69ab9 Deleted: sha256:11927f85e4925067123301e81318568dbe9f19a6d8d046f6461fc2151359c5e1 Untagged: localhost:34998/kubevirt/virt-handler:devel Untagged: localhost:34998/kubevirt/virt-handler@sha256:49a9cdbf8e2b2c399e24f891e1b603d845c9afa47b2edeffd81f80283ea0a263 Deleted: sha256:2b1d8cb9ef709cab65de8a51babfb4e25bb7cc259ee2b0dbaebf522c07cd245a Deleted: sha256:f0e68733a82c2cdd26c133e41205673782107974833ff5583790417b5017df3b Deleted: sha256:107dc8d389161b6c9bbba4254c242045f9d2373ce958b237f094763d1f9223b7 Deleted: sha256:369241508eba2f686b5e678e40fa547ccf534e277279b1db52b063a5703ce1c1 Untagged: localhost:34998/kubevirt/virt-api:devel Untagged: localhost:34998/kubevirt/virt-api@sha256:bb27313782bae8288b4518f64cd4a25ad9aafd7cbba5058c197f774e6d072d0b Deleted: sha256:20c962138aafbc21de6ec78efc6971b70ec006240e224e1236749021b5162390 Deleted: sha256:d106e11b89a9cde663da286270bac3a75a8670c170ec0694e3ed8a20e277ba99 Untagged: localhost:34998/kubevirt/subresource-access-test:devel Untagged: localhost:34998/kubevirt/subresource-access-test@sha256:883762d11b713f7899150d442f857d4143c1a67bf96887f89acd6943af2aff1b Deleted: sha256:417fb28d772d022ece6f8e4189ae874a709fed57240ffdad1f7e8179b178389b Deleted: sha256:591301e18c50fc90eabd7a883c88af21e68f1ba19c8a2aad7bb83af19c6412ba Deleted: sha256:79693ddc656f9738bc10e4726d0c246e9893e914240782343f737555e0eba438 Deleted: sha256:fa68a8b18bce35f14f2bcd3167ebf8fed2ad107bb7ec972166d5d77c4c46b206 Untagged: localhost:34998/kubevirt/example-hook-sidecar:devel Untagged: localhost:34998/kubevirt/example-hook-sidecar@sha256:286c9d9cf0f3df6dacc1f1ac87c1ab86d478feb3ea16d7aeae2a1d3ab24823a6 Deleted: 
sha256:3591458f43968c3e147aa215aaa731d23056351d96962bf1127252ff418fecc5 Deleted: sha256:6a6a58d809702ac35a609f674d0b88a0b96c8f93b432d20712c0f0e4e765deac Deleted: sha256:abbc16f60690ea1fef3923eafdc8c78a5c4cc3aebb4d9b085455cbd14e96e426 Deleted: sha256:2bffaeba0a4dd88776fe59d90f1b739e280f1edbb70d2db3cc3f776faf0b0a58 sha256:7fb8539d32771bf74786d31102b8c102fc61586b172276b4710c6944077751f4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:7fb8539d32771bf74786d31102b8c102fc61586b172276b4710c6944077751f4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 40.35 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> e9589b9dbfb3 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> 6526953b7273 Step 5/8 : USER 1001 ---> Using cache ---> 0da81e671cc6 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> 76d04b06060c Removing intermediate container 710b8aa624b6 Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 0c14bc999eae ---> 5c65ebbab99c Removing intermediate container 0c14bc999eae Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-controller" '' ---> Running in dbcf6037e37f ---> dd48f730e585 Removing intermediate container dbcf6037e37f Successfully built dd48f730e585 Sending build context to Docker daemon 42.63 MB Step 1/10 : FROM kubevirt/libvirt:4.2.0 ---> 5f0bfe81a3e0 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 8826ac178c51 Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 5eb474bfa821 Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> aab3a6e8f72d Removing intermediate container a895e52ae8e6 Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> d2c2fbdc9d44 Removing intermediate container 70d325f71987 Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in 1c8b4a9ce7af  ---> 8dcdf809e1a8 Removing intermediate container 1c8b4a9ce7af Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in e2e815bac0cd  ---> 1e9f37d4b2ed Removing intermediate container e2e815bac0cd Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> 5266571b9596 Removing intermediate container 372b9caaa174 Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in d0d6eee4926f ---> 1508d6e5deb4 Removing intermediate container d0d6eee4926f Step 10/10 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-launcher" '' ---> Running in 67e814725426 ---> ecd6de892918 Removing intermediate container 67e814725426 Successfully built ecd6de892918 Sending build context to Docker daemon 41.65 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/5 : COPY 
virt-handler /usr/bin/virt-handler ---> 314bee5544fe Removing intermediate container d92fa54e7de6 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 4f629232dff3 ---> ee848a59bc36 Removing intermediate container 4f629232dff3 Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-handler" '' ---> Running in 5649020bd659 ---> 06ad6f67fa95 Removing intermediate container 5649020bd659 Successfully built 06ad6f67fa95 Sending build context to Docker daemon 38.75 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 1a58ff1483fa Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 87e30c5b4065 Step 5/8 : USER 1001 ---> Using cache ---> e889af541bd0 Step 6/8 : COPY virt-api /usr/bin/virt-api ---> 2aff57b26b5a Removing intermediate container 10d7b61bb514 Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in fa472f2ae55e ---> 0b9d4aa03200 Removing intermediate container fa472f2ae55e Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-api" '' ---> Running in 6f22714c4417 ---> a361f4855582 Removing intermediate container 6f22714c4417 Successfully built a361f4855582 Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/7 : ENV container docker ---> Using cache ---> 6e6b2ef85e92 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 8e1d737ded1f Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> 104e48aa676f Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> 4ed9f69e6653 Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-windows2016-release1" '' ---> Using cache ---> 77c86ff74c6d Successfully built 77c86ff74c6d Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/5 : ENV container docker ---> Using cache ---> 6e6b2ef85e92 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> d130857891a9 Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "vm-killer" '' ---> Using cache ---> b4da3b400e4d Successfully built b4da3b400e4d Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 496290160351 Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 3b36b527fef8 Step 3/7 : ENV container docker ---> Using cache ---> b3ada414d649 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 337be6171fcb Step 5/7 : ADD entry-point.sh / ---> Using cache ---> a98a961fa5a1 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 19baf5d1aab8 Step 7/7 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "registry-disk-v1alpha" '' ---> Using cache ---> 0d6c7f341fe9 Successfully built 0d6c7f341fe9 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:35269/kubevirt/registry-disk-v1alpha:devel ---> 0d6c7f341fe9 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> a4871cf183de Step 3/4 : RUN curl 
https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 0e4f791782ad Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release1" '' ---> Using cache ---> dfd9f994021f Successfully built dfd9f994021f Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:35269/kubevirt/registry-disk-v1alpha:devel ---> 0d6c7f341fe9 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 233db98e28cd Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> 547ce1770d8a Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release1" '' ---> Using cache ---> d7d7ea78f525 Successfully built d7d7ea78f525 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:35269/kubevirt/registry-disk-v1alpha:devel ---> 0d6c7f341fe9 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 233db98e28cd Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> 77b1631b1170 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release1" '' ---> Using cache ---> 0531d7a08d7f Successfully built 0531d7a08d7f Sending build context to Docker daemon 35.56 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> f9cd90a6a0ef Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> df6f2d83c1d6 Step 5/8 : USER 1001 ---> Using cache ---> 56a7b7e6b8ff Step 6/8 : COPY subresource-access-test /subresource-access-test ---> 42498b1ebd2d Removing intermediate container e14527acdc70 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in 1ce4bcc9ff0a ---> a5ded43888b2 Removing intermediate container 1ce4bcc9ff0a Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "subresource-access-test" '' ---> Running in 269db01c0665 ---> 281a112a1eb9 Removing intermediate container 269db01c0665 Successfully built 281a112a1eb9 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/9 : ENV container docker ---> Using cache ---> 6e6b2ef85e92 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> c1e9e769c4ba Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 6729c465203a Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 2aee087083e8 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> e3795172dd73 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 0de2fc4b917f Step 9/9 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "winrmcli" '' ---> Using cache ---> 2cea076f18ca Successfully built 2cea076f18ca Sending build context to Docker daemon 36.77 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b730b4ed65df Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> 2018e93ceec6 Removing intermediate 
container 0dd3e96c0da9 Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in dc4d1276cc4a ---> 9192dc99a61a Removing intermediate container dc4d1276cc4a Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-windows2016-release1" '' ---> Running in 090c6630369e ---> 504c7134754f Removing intermediate container 090c6630369e Successfully built 504c7134754f hack/build-docker.sh push The push refers to a repository [localhost:35269/kubevirt/virt-controller] 4ecd0ff1b661: Preparing ff9b9e61b9df: Preparing 891e1e4ef82a: Preparing ff9b9e61b9df: Pushed 4ecd0ff1b661: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:ec6ab9203c5d0c03c255323c5cce6c36b529e9b83b26146ff9e6e603676a2b12 size: 949 The push refers to a repository [localhost:35269/kubevirt/virt-launcher] c6565a98ebfe: Preparing 06c2a5086034: Preparing f902a27ceb43: Preparing 09ff2669fd9d: Preparing 8b539d998b34: Preparing cfcba35fba84: Preparing da38cf808aa5: Preparing b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing 5eefb9960a36: Preparing 891e1e4ef82a: Preparing da38cf808aa5: Waiting b83399358a92: Waiting cfcba35fba84: Waiting 186d8b3e4fd8: Waiting fa6154170bf5: Waiting 06c2a5086034: Pushed 09ff2669fd9d: Pushed c6565a98ebfe: Pushed da38cf808aa5: Pushed b83399358a92: Pushed 186d8b3e4fd8: Pushed fa6154170bf5: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller f902a27ceb43: Pushed cfcba35fba84: Pushed 8b539d998b34: Pushed 5eefb9960a36: Pushed devel: digest: sha256:4763869550ccfe0a23d7c078bcb9fad4f44ab249d30bc34872cc205e8c88eaff size: 2828 The push refers to a repository [localhost:35269/kubevirt/virt-handler] 71f9744b6740: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher 71f9744b6740: Pushed devel: digest: sha256:f5344b59aad10b6bf236ed2009c0c63079ab482f90502b890399d2870e1c3d97 size: 741 The push refers to a repository [localhost:35269/kubevirt/virt-api] 4e45dc963453: Preparing 5f1414e2d326: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler 5f1414e2d326: Pushed 4e45dc963453: Pushed devel: digest: sha256:d6557f59ff24aa4510b46a9509d05a11bf2aa774e0284638fccfa835e3b71e24 size: 948 The push refers to a repository [localhost:35269/kubevirt/disks-images-provider] 2e0da09ca39e: Preparing 4fe8becbb60f: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 2e0da09ca39e: Pushed 4fe8becbb60f: Pushed devel: digest: sha256:bd6e9fb05ed82b7dd73a5bfed57ff4dbc10e2c0a538244997fe9bb5c1ca09487 size: 948 The push refers to a repository [localhost:35269/kubevirt/vm-killer] 7b031fa3032f: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider 7b031fa3032f: Pushed devel: digest: sha256:0d90aa40e027e8600ba18981253cb08d2a3d74acc77d3ff6c621e620c6d245ea size: 740 The push refers to a repository [localhost:35269/kubevirt/registry-disk-v1alpha] bfd12fa374fa: Preparing 18ac8ad2aee9: Preparing 132d61a890c5: Preparing bfd12fa374fa: Pushed 18ac8ad2aee9: Pushed 132d61a890c5: Pushed devel: digest: sha256:09420d40f7e092de2cdfc557e52ac4cd0fee89ebd886a63eda211eee46286fb4 size: 948 The push refers to a repository [localhost:35269/kubevirt/cirros-registry-disk-demo] 9a4067e46b70: Preparing bfd12fa374fa: Preparing 18ac8ad2aee9: Preparing 132d61a890c5: Preparing 132d61a890c5: Mounted from kubevirt/registry-disk-v1alpha bfd12fa374fa: Mounted from kubevirt/registry-disk-v1alpha 18ac8ad2aee9: Mounted from kubevirt/registry-disk-v1alpha 9a4067e46b70: Pushed devel: digest: 
sha256:878c6d2d0510d1448ba43f1765367277ed572b75d8791205562ff10bb8906820 size: 1160 The push refers to a repository [localhost:35269/kubevirt/fedora-cloud-registry-disk-demo] 1141e4bd5ffe: Preparing bfd12fa374fa: Preparing 18ac8ad2aee9: Preparing 132d61a890c5: Preparing 132d61a890c5: Mounted from kubevirt/cirros-registry-disk-demo bfd12fa374fa: Mounted from kubevirt/cirros-registry-disk-demo 18ac8ad2aee9: Mounted from kubevirt/cirros-registry-disk-demo 1141e4bd5ffe: Pushed devel: digest: sha256:f3733f640f232f6ae225932b7e6639b7e7cff0f9091f7bec2e1b07e07a1816a2 size: 1161 The push refers to a repository [localhost:35269/kubevirt/alpine-registry-disk-demo] d8c9c4d998cc: Preparing bfd12fa374fa: Preparing 18ac8ad2aee9: Preparing 132d61a890c5: Preparing 18ac8ad2aee9: Mounted from kubevirt/fedora-cloud-registry-disk-demo 132d61a890c5: Mounted from kubevirt/fedora-cloud-registry-disk-demo bfd12fa374fa: Mounted from kubevirt/fedora-cloud-registry-disk-demo d8c9c4d998cc: Pushed devel: digest: sha256:f54e4034503b5833442b0a32ac6f18b66f21d92cfc58c7e6d7266e48f33885e5 size: 1160 The push refers to a repository [localhost:35269/kubevirt/subresource-access-test] 58389ac028b3: Preparing 3c1237181850: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer 3c1237181850: Pushed 58389ac028b3: Pushed devel: digest: sha256:d7f508e38c35ecf615428fa71ffc0b100e0da092913e3c1a1bcb89a569fdda81 size: 948 The push refers to a repository [localhost:35269/kubevirt/winrmcli] bf2bff760365: Preparing 589098974698: Preparing 6e22155a44ef: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test bf2bff760365: Pushed 6e22155a44ef: Pushed 589098974698: Pushed devel: digest: sha256:eb3c86754dbc24a4a74d1f03b8368151e2e09e2b5f6db6b0b329164f7e316f7a size: 1165 The push refers to a repository [localhost:35269/kubevirt/example-hook-sidecar] 2582b191c57a: Preparing 39bae602f753: Preparing 2582b191c57a: Pushed 39bae602f753: Pushed devel: digest: sha256:cc186754f80c6b214193b42375d9cf781c0c52d06467c571242f5d46e6d238a2 size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z 
kubevirt-functional-tests-windows2016-release ']' ++ provider_prefix=kubevirt-functional-tests-windows2016-release1 ++ job_prefix=kubevirt-functional-tests-windows2016-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-111-g0e78e8d ++ KUBEVIRT_VERSION=v0.7.0-111-g0e78e8d + source cluster/k8s-1.11.0/provider.sh ++ set -e ++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.11.0.sh ++ source hack/config-provider-k8s-1.11.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl +++ docker_prefix=localhost:35269/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
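
hack/config.sh, traced above, builds its settings in layers: defaults first, then provider-specific overrides, then an optional local override, before exporting the result. A condensed sketch of that pattern (values shown are the ones visible in the trace; the real files carry more settings):

    # hack/config.sh, simplified: later files override earlier ones.
    unset binaries docker_images docker_prefix docker_tag master_ip network_provider kubeconfig namespace
    source hack/config-default.sh                                # docker_tag=latest, namespace=kube-system, ...
    test -f "hack/config-provider-${KUBEVIRT_PROVIDER}.sh" && \
        source "hack/config-provider-${KUBEVIRT_PROVIDER}.sh"   # docker_tag=devel, docker_prefix=localhost:35269/kubevirt, ...
    test -f hack/config-local.sh && source hack/config-local.sh # optional developer overrides (absent in this run)
    export binaries docker_images docker_prefix docker_tag master_ip network_provider kubeconfig namespace
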
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pods -l kubevirt.io No resources found + 
_kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ wc -l ++ cluster/k8s-1.11.0/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io No resources found. Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export 
KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ cluster/k8s-1.11.0/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io No resources found. 
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-windows2016-release ']' ++ provider_prefix=kubevirt-functional-tests-windows2016-release1 ++ job_prefix=kubevirt-functional-tests-windows2016-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-111-g0e78e8d ++ KUBEVIRT_VERSION=v0.7.0-111-g0e78e8d + source cluster/k8s-1.11.0/provider.sh ++ set -e ++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.11.0.sh ++ source hack/config-provider-k8s-1.11.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig +++ 
kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl +++ docker_prefix=localhost:35269/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... + [[ -z windows2016-release ]] + [[ windows2016-release =~ .*-dev ]] + [[ windows2016-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io/kubevirt.io:admin created clusterrole.rbac.authorization.k8s.io/kubevirt.io:edit created clusterrole.rbac.authorization.k8s.io/kubevirt.io:view created serviceaccount/kubevirt-apiserver created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver-auth-delegator created rolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created role.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrole.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrole.rbac.authorization.k8s.io/kubevirt-controller created serviceaccount/kubevirt-controller created serviceaccount/kubevirt-privileged created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller-cluster-admin created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-privileged-cluster-admin created clusterrole.rbac.authorization.k8s.io/kubevirt.io:default created clusterrolebinding.rbac.authorization.k8s.io/kubevirt.io:default created service/virt-api created deployment.extensions/virt-api created deployment.extensions/virt-controller created daemonset.extensions/virt-handler created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstances.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancereplicasets.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancepresets.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachines.kubevirt.io created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim/disk-alpine created persistentvolume/host-path-disk-alpine created 
persistentvolumeclaim/disk-custom created persistentvolume/host-path-disk-custom created daemonset.extensions/disks-images-provider created serviceaccount/kubevirt-testing created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-testing-cluster-admin created + [[ k8s-1.11.0 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-bcc6b587d-crt72 0/1 ContainerCreating 0 2s virt-api-bcc6b587d-qpjzd 0/1 ContainerCreating 0 2s virt-controller-67dcdd8464-8vhqd 0/1 ContainerCreating 0 2s virt-controller-67dcdd8464-khdcn 0/1 ContainerCreating 0 2s virt-handler-gvb9v 0/1 ContainerCreating 0 2s virt-handler-w9nbc 0/1 ContainerCreating 0 2s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running virt-api-bcc6b587d-crt72 0/1 ContainerCreating 0 3s virt-api-bcc6b587d-qpjzd 0/1 ContainerCreating 0 3s virt-controller-67dcdd8464-8vhqd 0/1 ContainerCreating 0 3s virt-controller-67dcdd8464-khdcn 0/1 ContainerCreating 0 3s virt-handler-gvb9v 0/1 ContainerCreating 0 3s virt-handler-w9nbc 0/1 ContainerCreating 0 3s + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n 'false false' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
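
The waiting logic traced here is a plain polling loop: every 30 seconds it first checks that no pod in the namespace reports a phase other than Running, then that every containerStatuses[*].ready field is true, and it gives up after 300 seconds. A condensed sketch of that loop, per namespace:

    namespace=kube-system timeout=300 sample=30
    current_time=0
    # Phase 1: wait until no pod is outside the Running phase.
    while [ -n "$(kubectl get pods -n "$namespace" --no-headers | grep -v Running)" ]; do
        echo "Waiting for kubevirt pods to enter the Running state ..."
        sleep $sample; current_time=$((current_time + sample))
        [ $current_time -gt $timeout ] && exit 1
    done
    # Phase 2: wait until every container in every pod reports ready=true.
    current_time=0
    while [ -n "$(kubectl get pods -n "$namespace" '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers | grep false)" ]; do
        echo "Waiting for KubeVirt containers to become ready ..."
        sleep $sample; current_time=$((current_time + sample))
        [ $current_time -gt $timeout ] && exit 1
    done
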
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + grep false false false + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false + '[' -n '' ']' + kubectl get pods -n kube-system + cluster/kubectl.sh get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-78fcdf6894-6zbc4 1/1 Running 0 14m coredns-78fcdf6894-wsfb7 1/1 Running 0 14m disks-images-provider-4k62s 1/1 Running 0 1m disks-images-provider-jrlhr 1/1 Running 0 1m etcd-node01 1/1 Running 0 14m kube-apiserver-node01 1/1 Running 0 13m kube-controller-manager-node01 1/1 Running 0 13m kube-flannel-ds-hxlnb 1/1 Running 0 14m kube-flannel-ds-zzkmh 1/1 Running 1 14m kube-proxy-bbl92 1/1 Running 0 14m kube-proxy-lv8f9 1/1 Running 0 14m kube-scheduler-node01 1/1 Running 0 13m virt-api-bcc6b587d-crt72 1/1 Running 0 1m virt-api-bcc6b587d-qpjzd 1/1 Running 1 1m virt-controller-67dcdd8464-8vhqd 1/1 Running 0 1m virt-controller-67dcdd8464-khdcn 1/1 Running 0 1m virt-handler-gvb9v 1/1 Running 0 1m virt-handler-w9nbc 1/1 Running 0 1m + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n default --no-headers ++ cluster/kubectl.sh get pods -n default --no-headers ++ grep -v Running + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n default + cluster/kubectl.sh get pods -n default NAME READY STATUS RESTARTS AGE local-volume-provisioner-w2pvq 1/1 Running 0 14m local-volume-provisioner-z4mhw 1/1 Running 0 14m + kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml' + [[ windows2016-release =~ windows.* ]] + [[ -d /home/nfs/images/windows2016 ]] + kubectl create -f - + cluster/kubectl.sh create -f - persistentvolume/disk-windows created + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml --ginkgo.focus=Windows' + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml --ginkgo.focus=Windows' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:7fb8539d32771bf74786d31102b8c102fc61586b172276b4710c6944077751f4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 Compiling tests... 
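
The persistentvolume/disk-windows object above is created from a manifest piped to "kubectl create -f -", so the manifest itself never appears in the log; only the PV name and the NFS image directory (NFS_WINDOWS_DIR=/home/nfs/images/windows2016, exported at the top of this log) are visible. A hypothetical sketch of what such an NFS-backed PV could look like; the server address, size, and export path are placeholders, not values from this run:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: disk-windows
    spec:
      capacity:
        storage: 30Gi             # placeholder size
      accessModes:
        - ReadWriteOnce
      nfs:
        server: <nfs-server-ip>   # placeholder
        path: <nfs-export-path>   # placeholder
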
compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1532536797 Will run 6 of 145 specs SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS Pod name: disks-images-provider-4k62s Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-jrlhr Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-crt72 Pod phase: Running 2018/07/25 16:45:14 http: TLS handshake error from 10.244.1.1:50342: EOF level=info timestamp=2018-07-25T16:45:15.110197Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T16:45:18.878263Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T16:45:24.152298Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 16:45:24 http: TLS handshake error from 10.244.1.1:50348: EOF 2018/07/25 16:45:34 http: TLS handshake error from 10.244.1.1:50354: EOF level=info timestamp=2018-07-25T16:45:40.094813Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T16:45:40.238850Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 16:45:44 http: TLS handshake error from 10.244.1.1:50360: EOF level=info timestamp=2018-07-25T16:45:44.982475Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T16:45:48.998469Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T16:45:52.481650Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T16:45:52.486422Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T16:45:54.229872Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 16:45:54 http: TLS handshake error from 10.244.1.1:50366: EOF Pod name: virt-api-bcc6b587d-qpjzd Pod phase: Running 2018/07/25 16:43:32 http: TLS handshake error from 10.244.0.1:33590: EOF 2018/07/25 16:43:42 http: TLS handshake error from 10.244.0.1:33650: EOF 2018/07/25 16:43:52 http: TLS handshake error from 10.244.0.1:33710: EOF 2018/07/25 16:44:02 http: TLS handshake error from 10.244.0.1:33770: EOF 2018/07/25 16:44:12 http: TLS handshake error from 10.244.0.1:33830: EOF 2018/07/25 16:44:22 http: TLS handshake error from 10.244.0.1:33890: EOF 2018/07/25 16:44:32 http: TLS handshake error from 
10.244.0.1:33950: EOF 2018/07/25 16:44:42 http: TLS handshake error from 10.244.0.1:34010: EOF 2018/07/25 16:44:52 http: TLS handshake error from 10.244.0.1:34070: EOF 2018/07/25 16:45:02 http: TLS handshake error from 10.244.0.1:34130: EOF 2018/07/25 16:45:12 http: TLS handshake error from 10.244.0.1:34190: EOF 2018/07/25 16:45:22 http: TLS handshake error from 10.244.0.1:34250: EOF 2018/07/25 16:45:32 http: TLS handshake error from 10.244.0.1:34310: EOF 2018/07/25 16:45:42 http: TLS handshake error from 10.244.0.1:34370: EOF 2018/07/25 16:45:52 http: TLS handshake error from 10.244.0.1:34430: EOF Pod name: virt-controller-67dcdd8464-8vhqd Pod phase: Running level=info timestamp=2018-07-25T16:38:22.544695Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 level=info timestamp=2018-07-25T16:38:22.602551Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-07-25T16:38:22.602627Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-07-25T16:38:22.602647Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-07-25T16:38:22.602673Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-07-25T16:38:22.602699Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer configMapInformer" level=info timestamp=2018-07-25T16:38:22.602716Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmInformer" level=info timestamp=2018-07-25T16:38:22.602732Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmiInformer" level=info timestamp=2018-07-25T16:38:22.602794Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-07-25T16:38:22.613664Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-07-25T16:38:22.614086Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-07-25T16:38:22.614556Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-07-25T16:38:22.616750Z pos=preset.go:71 component=virt-controller service=http msg="Starting Virtual Machine Initializer." 
level=info timestamp=2018-07-25T16:39:58.581089Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi46lnp kind= uid=5e520fca-9029-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:39:58.583345Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi46lnp kind= uid=5e520fca-9029-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-khdcn Pod phase: Running level=info timestamp=2018-07-25T16:38:28.075743Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-gvb9v Pod phase: Running level=info timestamp=2018-07-25T16:38:39.724486Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-07-25T16:38:39.751281Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-25T16:38:39.753342Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:40.001052Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:40.354240Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:40.383060Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-handler-w9nbc Pod phase: Running level=info timestamp=2018-07-25T16:38:28.259871Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-25T16:38:28.278453Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-25T16:38:28.284064Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:28.384755Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:28.394595Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:28.395902Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmi46lnp-46htm Pod phase: Pending ------------------------------ • Failure [360.818 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to start a vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133 Timed out after 180.038s. 
Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049 ------------------------------ level=info timestamp=2018-07-25T16:40:01.345963Z pos=utils.go:243 component=tests msg="Created virtual machine pod virt-launcher-testvmi46lnp-46htm" Pod name: disks-images-provider-4k62s Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-jrlhr Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-crt72 Pod phase: Running level=info timestamp=2018-07-25T16:51:10.985761Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T16:51:11.054816Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 16:51:14 http: TLS handshake error from 10.244.1.1:50558: EOF level=info timestamp=2018-07-25T16:51:15.111091Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T16:51:20.028801Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 16:51:24 http: TLS handshake error from 10.244.1.1:50564: EOF level=info timestamp=2018-07-25T16:51:25.682404Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 16:51:34 http: TLS handshake error from 10.244.1.1:50570: EOF level=info timestamp=2018-07-25T16:51:41.063373Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T16:51:41.131516Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 16:51:44 http: TLS handshake error from 10.244.1.1:50576: EOF level=info timestamp=2018-07-25T16:51:45.023186Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T16:51:50.123796Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 16:51:54 http: TLS handshake error from 10.244.1.1:50582: EOF level=info timestamp=2018-07-25T16:51:55.836706Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-api-bcc6b587d-qpjzd Pod phase: Running 2018/07/25 16:49:32 http: TLS handshake error from 10.244.0.1:35754: EOF 2018/07/25 16:49:42 http: TLS handshake error from 10.244.0.1:35814: EOF 2018/07/25 16:49:52 http: TLS handshake error from 10.244.0.1:35874: EOF 2018/07/25 16:50:02 http: TLS handshake error from 10.244.0.1:35934: EOF 2018/07/25 16:50:12 http: TLS handshake error from 
10.244.0.1:35994: EOF 2018/07/25 16:50:22 http: TLS handshake error from 10.244.0.1:36054: EOF 2018/07/25 16:50:32 http: TLS handshake error from 10.244.0.1:36114: EOF 2018/07/25 16:50:42 http: TLS handshake error from 10.244.0.1:36174: EOF 2018/07/25 16:50:52 http: TLS handshake error from 10.244.0.1:36234: EOF 2018/07/25 16:51:02 http: TLS handshake error from 10.244.0.1:36294: EOF 2018/07/25 16:51:12 http: TLS handshake error from 10.244.0.1:36354: EOF 2018/07/25 16:51:22 http: TLS handshake error from 10.244.0.1:36414: EOF 2018/07/25 16:51:32 http: TLS handshake error from 10.244.0.1:36474: EOF 2018/07/25 16:51:42 http: TLS handshake error from 10.244.0.1:36534: EOF 2018/07/25 16:51:52 http: TLS handshake error from 10.244.0.1:36594: EOF Pod name: virt-controller-67dcdd8464-8vhqd Pod phase: Running level=info timestamp=2018-07-25T16:38:22.602699Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer configMapInformer" level=info timestamp=2018-07-25T16:38:22.602716Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmInformer" level=info timestamp=2018-07-25T16:38:22.602732Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmiInformer" level=info timestamp=2018-07-25T16:38:22.602794Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-07-25T16:38:22.613664Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-07-25T16:38:22.614086Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-07-25T16:38:22.614556Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-07-25T16:38:22.616750Z pos=preset.go:71 component=virt-controller service=http msg="Starting Virtual Machine Initializer." 
level=info timestamp=2018-07-25T16:39:58.581089Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi46lnp kind= uid=5e520fca-9029-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:39:58.583345Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi46lnp kind= uid=5e520fca-9029-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:45:59.533054Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmignsr4 kind= uid=3576d20b-902a-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:45:59.536027Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmignsr4 kind= uid=3576d20b-902a-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:45:59.672814Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:45:59.713301Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:45:59.738578Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" Pod name: virt-controller-67dcdd8464-khdcn Pod phase: Running level=info timestamp=2018-07-25T16:38:28.075743Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-gvb9v Pod phase: Running level=info timestamp=2018-07-25T16:38:39.724486Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-07-25T16:38:39.751281Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-25T16:38:39.753342Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:40.001052Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:40.354240Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:40.383060Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-handler-w9nbc Pod phase: Running level=info timestamp=2018-07-25T16:38:28.259871Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-25T16:38:28.278453Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." 
level=info timestamp=2018-07-25T16:38:28.284064Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:28.384755Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:28.394595Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:28.395902Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmignsr4-ll5ql Pod phase: Pending • Failure [360.794 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to stop a running vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139 Timed out after 180.012s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049 ------------------------------ STEP: Starting the vmi level=info timestamp=2018-07-25T16:46:00.582379Z pos=utils.go:243 component=tests msg="Created virtual machine pod virt-launcher-testvmignsr4-ll5ql" Pod name: disks-images-provider-4k62s Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-jrlhr Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-crt72 Pod phase: Running 2018/07/25 16:57:14 http: TLS handshake error from 10.244.1.1:50774: EOF level=info timestamp=2018-07-25T16:57:15.118655Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T16:57:21.406879Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 16:57:24 http: TLS handshake error from 10.244.1.1:50780: EOF level=info timestamp=2018-07-25T16:57:27.433485Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 16:57:34 http: TLS handshake error from 10.244.1.1:50786: EOF level=info timestamp=2018-07-25T16:57:41.931915Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T16:57:42.053854Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 16:57:44 http: TLS handshake error from 10.244.1.1:50792: EOF level=info timestamp=2018-07-25T16:57:45.080589Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T16:57:51.500326Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T16:57:52.821629Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T16:57:52.825662Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET 
url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/25 16:57:54 http: TLS handshake error from 10.244.1.1:50798: EOF level=info timestamp=2018-07-25T16:57:57.578466Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-api-bcc6b587d-qpjzd Pod phase: Running 2018/07/25 16:55:32 http: TLS handshake error from 10.244.0.1:37914: EOF 2018/07/25 16:55:42 http: TLS handshake error from 10.244.0.1:37974: EOF 2018/07/25 16:55:52 http: TLS handshake error from 10.244.0.1:38034: EOF 2018/07/25 16:56:02 http: TLS handshake error from 10.244.0.1:38094: EOF 2018/07/25 16:56:12 http: TLS handshake error from 10.244.0.1:38154: EOF 2018/07/25 16:56:22 http: TLS handshake error from 10.244.0.1:38214: EOF 2018/07/25 16:56:32 http: TLS handshake error from 10.244.0.1:38274: EOF 2018/07/25 16:56:42 http: TLS handshake error from 10.244.0.1:38334: EOF 2018/07/25 16:56:52 http: TLS handshake error from 10.244.0.1:38394: EOF 2018/07/25 16:57:02 http: TLS handshake error from 10.244.0.1:38454: EOF 2018/07/25 16:57:12 http: TLS handshake error from 10.244.0.1:38514: EOF 2018/07/25 16:57:22 http: TLS handshake error from 10.244.0.1:38574: EOF 2018/07/25 16:57:32 http: TLS handshake error from 10.244.0.1:38634: EOF 2018/07/25 16:57:42 http: TLS handshake error from 10.244.0.1:38694: EOF 2018/07/25 16:57:52 http: TLS handshake error from 10.244.0.1:38754: EOF Pod name: virt-controller-67dcdd8464-8vhqd Pod phase: Running level=info timestamp=2018-07-25T16:38:22.602794Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-07-25T16:38:22.613664Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-07-25T16:38:22.614086Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-07-25T16:38:22.614556Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-07-25T16:38:22.616750Z pos=preset.go:71 component=virt-controller service=http msg="Starting Virtual Machine Initializer." 
level=info timestamp=2018-07-25T16:39:58.581089Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi46lnp kind= uid=5e520fca-9029-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:39:58.583345Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi46lnp kind= uid=5e520fca-9029-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:45:59.533054Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmignsr4 kind= uid=3576d20b-902a-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:45:59.536027Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmignsr4 kind= uid=3576d20b-902a-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:45:59.672814Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:45:59.713301Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:45:59.738578Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:52:00.210843Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmignsr4, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3576d20b-902a-11e8-910a-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:52:00.433134Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmix4dng kind= uid=0c92994b-902b-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:52:00.434048Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmix4dng kind= uid=0c92994b-902b-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-khdcn Pod phase: Running level=info timestamp=2018-07-25T16:38:28.075743Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-gvb9v Pod phase: Running level=info timestamp=2018-07-25T16:38:39.724486Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info 
timestamp=2018-07-25T16:38:39.751281Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-25T16:38:39.753342Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:40.001052Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:40.354240Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:40.383060Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-handler-w9nbc Pod phase: Running level=info timestamp=2018-07-25T16:38:28.259871Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-25T16:38:28.278453Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-25T16:38:28.284064Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:28.384755Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:28.394595Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:28.395902Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmix4dng-wvfz7 Pod phase: Pending • Failure in Spec Setup (BeforeEach) [360.820 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have correct UUID [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192 Timed out after 180.015s. 
Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049 ------------------------------ STEP: Creating winrm-cli pod for the future use STEP: Starting the windows VirtualMachineInstance level=info timestamp=2018-07-25T16:52:01.568787Z pos=utils.go:243 component=tests msg="Created virtual machine pod virt-launcher-testvmix4dng-wvfz7" Pod name: disks-images-provider-4k62s Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-jrlhr Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-crt72 Pod phase: Running level=info timestamp=2018-07-25T17:03:12.763948Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T17:03:12.878676Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:03:14 http: TLS handshake error from 10.244.1.1:50990: EOF level=info timestamp=2018-07-25T17:03:15.017067Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T17:03:22.530438Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:03:24 http: TLS handshake error from 10.244.1.1:50996: EOF level=info timestamp=2018-07-25T17:03:29.213778Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:03:34 http: TLS handshake error from 10.244.1.1:51002: EOF level=info timestamp=2018-07-25T17:03:42.822318Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T17:03:42.931499Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:03:44 http: TLS handshake error from 10.244.1.1:51008: EOF level=info timestamp=2018-07-25T17:03:45.013914Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T17:03:52.619784Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:03:54 http: TLS handshake error from 10.244.1.1:51014: EOF level=info timestamp=2018-07-25T17:03:59.368900Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-api-bcc6b587d-qpjzd Pod phase: Running 2018/07/25 17:01:32 http: TLS handshake error from 10.244.0.1:40074: EOF 2018/07/25 17:01:42 http: TLS handshake error from 10.244.0.1:40134: EOF 2018/07/25 17:01:52 http: TLS handshake error from 10.244.0.1:40194: EOF 2018/07/25 17:02:02 http: TLS 
handshake error from 10.244.0.1:40254: EOF 2018/07/25 17:02:12 http: TLS handshake error from 10.244.0.1:40314: EOF 2018/07/25 17:02:22 http: TLS handshake error from 10.244.0.1:40374: EOF 2018/07/25 17:02:32 http: TLS handshake error from 10.244.0.1:40434: EOF 2018/07/25 17:02:42 http: TLS handshake error from 10.244.0.1:40494: EOF 2018/07/25 17:02:52 http: TLS handshake error from 10.244.0.1:40554: EOF 2018/07/25 17:03:02 http: TLS handshake error from 10.244.0.1:40614: EOF 2018/07/25 17:03:12 http: TLS handshake error from 10.244.0.1:40674: EOF 2018/07/25 17:03:22 http: TLS handshake error from 10.244.0.1:40734: EOF 2018/07/25 17:03:32 http: TLS handshake error from 10.244.0.1:40794: EOF 2018/07/25 17:03:42 http: TLS handshake error from 10.244.0.1:40854: EOF 2018/07/25 17:03:52 http: TLS handshake error from 10.244.0.1:40914: EOF Pod name: virt-controller-67dcdd8464-8vhqd Pod phase: Running level=info timestamp=2018-07-25T16:38:22.614556Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-07-25T16:38:22.616750Z pos=preset.go:71 component=virt-controller service=http msg="Starting Virtual Machine Initializer." level=info timestamp=2018-07-25T16:39:58.581089Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi46lnp kind= uid=5e520fca-9029-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:39:58.583345Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi46lnp kind= uid=5e520fca-9029-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:45:59.533054Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmignsr4 kind= uid=3576d20b-902a-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:45:59.536027Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmignsr4 kind= uid=3576d20b-902a-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:45:59.672814Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:45:59.713301Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:45:59.738578Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:52:00.210843Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": StorageError: invalid 
object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmignsr4, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3576d20b-902a-11e8-910a-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:52:00.433134Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmix4dng kind= uid=0c92994b-902b-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:52:00.434048Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmix4dng kind= uid=0c92994b-902b-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:58:01.188254Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibgqcm kind= uid=e39f5b3a-902b-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:58:01.188700Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibgqcm kind= uid=e39f5b3a-902b-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:58:01.254271Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmibgqcm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmibgqcm" Pod name: virt-controller-67dcdd8464-khdcn Pod phase: Running level=info timestamp=2018-07-25T16:38:28.075743Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-gvb9v Pod phase: Running level=info timestamp=2018-07-25T16:38:39.724486Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-07-25T16:38:39.751281Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-25T16:38:39.753342Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:40.001052Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:40.354240Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:40.383060Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-handler-w9nbc Pod phase: Running level=info timestamp=2018-07-25T16:38:28.259871Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-25T16:38:28.278453Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." 
level=info timestamp=2018-07-25T16:38:28.284064Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:28.384755Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:28.394595Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:28.395902Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmibgqcm-zkcpw Pod phase: Pending • Failure in Spec Setup (BeforeEach) [360.711 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have pod IP [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208 Timed out after 180.010s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049 ------------------------------ STEP: Creating winrm-cli pod for the future use STEP: Starting the windows VirtualMachineInstance level=info timestamp=2018-07-25T16:58:02.204063Z pos=utils.go:243 component=tests msg="Created virtual machine pod virt-launcher-testvmibgqcm-zkcpw" Pod name: disks-images-provider-4k62s Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-jrlhr Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-crt72 Pod phase: Running 2018/07/25 17:07:14 http: TLS handshake error from 10.244.1.1:51134: EOF level=info timestamp=2018-07-25T17:07:15.290928Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T17:07:23.273903Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:07:24 http: TLS handshake error from 10.244.1.1:51140: EOF level=info timestamp=2018-07-25T17:07:30.365484Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:07:34 http: TLS handshake error from 10.244.1.1:51146: EOF level=info timestamp=2018-07-25T17:07:43.376736Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T17:07:43.496618Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:07:44 http: TLS handshake error from 10.244.1.1:51152: EOF level=info timestamp=2018-07-25T17:07:45.420274Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T17:07:51.886340Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T17:07:51.890410Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 
level=info timestamp=2018-07-25T17:07:53.365019Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:07:54 http: TLS handshake error from 10.244.1.1:51158: EOF level=info timestamp=2018-07-25T17:08:00.525566Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-api-bcc6b587d-qpjzd Pod phase: Running 2018/07/25 17:05:42 http: TLS handshake error from 10.244.0.1:41574: EOF 2018/07/25 17:05:52 http: TLS handshake error from 10.244.0.1:41634: EOF 2018/07/25 17:06:02 http: TLS handshake error from 10.244.0.1:41694: EOF 2018/07/25 17:06:12 http: TLS handshake error from 10.244.0.1:41754: EOF 2018/07/25 17:06:22 http: TLS handshake error from 10.244.0.1:41814: EOF 2018/07/25 17:06:32 http: TLS handshake error from 10.244.0.1:41874: EOF 2018/07/25 17:06:42 http: TLS handshake error from 10.244.0.1:41934: EOF 2018/07/25 17:06:52 http: TLS handshake error from 10.244.0.1:41994: EOF 2018/07/25 17:07:02 http: TLS handshake error from 10.244.0.1:42054: EOF 2018/07/25 17:07:12 http: TLS handshake error from 10.244.0.1:42114: EOF 2018/07/25 17:07:22 http: TLS handshake error from 10.244.0.1:42174: EOF 2018/07/25 17:07:32 http: TLS handshake error from 10.244.0.1:42234: EOF 2018/07/25 17:07:42 http: TLS handshake error from 10.244.0.1:42294: EOF 2018/07/25 17:07:52 http: TLS handshake error from 10.244.0.1:42354: EOF 2018/07/25 17:08:02 http: TLS handshake error from 10.244.0.1:42414: EOF Pod name: virt-controller-67dcdd8464-8vhqd Pod phase: Running level=info timestamp=2018-07-25T16:39:58.583345Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi46lnp kind= uid=5e520fca-9029-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:45:59.533054Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmignsr4 kind= uid=3576d20b-902a-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:45:59.536027Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmignsr4 kind= uid=3576d20b-902a-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:45:59.672814Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:45:59.713301Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:45:59.738578Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" 
msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:52:00.210843Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmignsr4, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3576d20b-902a-11e8-910a-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:52:00.433134Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmix4dng kind= uid=0c92994b-902b-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:52:00.434048Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmix4dng kind= uid=0c92994b-902b-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:58:01.188254Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibgqcm kind= uid=e39f5b3a-902b-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:58:01.188700Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibgqcm kind= uid=e39f5b3a-902b-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:58:01.254271Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmibgqcm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmibgqcm" level=info timestamp=2018-07-25T17:04:03.020766Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7ngt kind= uid=bb366eac-902c-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T17:04:03.024307Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7ngt kind= uid=bb366eac-902c-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T17:04:03.276430Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmib7ngt\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmib7ngt" Pod name: virt-controller-67dcdd8464-khdcn Pod phase: Running level=info timestamp=2018-07-25T16:38:28.075743Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-gvb9v Pod phase: Running level=info timestamp=2018-07-25T16:38:39.724486Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-07-25T16:38:39.751281Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." 
level=info timestamp=2018-07-25T16:38:39.753342Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:40.001052Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:40.354240Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:40.383060Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-handler-w9nbc Pod phase: Running level=info timestamp=2018-07-25T16:38:28.259871Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-25T16:38:28.278453Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-25T16:38:28.284064Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:28.384755Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:28.394595Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:28.395902Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmib7ngt-9vsg7 Pod phase: Pending • Failure [241.776 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to start a vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242 Timed out after 120.013s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049 ------------------------------ STEP: Starting the vmi via kubectl command level=info timestamp=2018-07-25T17:04:04.085589Z pos=utils.go:243 component=tests msg="Created virtual machine pod virt-launcher-testvmib7ngt-9vsg7" Pod name: disks-images-provider-4k62s Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-jrlhr Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-crt72 Pod phase: Running level=info timestamp=2018-07-25T17:11:14.081680Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:11:14 http: TLS handshake error from 10.244.1.1:51278: EOF level=info timestamp=2018-07-25T17:11:15.399880Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T17:11:24.050941Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:11:24 http: TLS handshake error from 10.244.1.1:51284: EOF level=info timestamp=2018-07-25T17:11:31.640607Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:11:34 http: TLS handshake error from 10.244.1.1:51290: EOF level=info timestamp=2018-07-25T17:11:44.002071Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET 
url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T17:11:44.207517Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:11:44 http: TLS handshake error from 10.244.1.1:51296: EOF level=info timestamp=2018-07-25T17:11:45.263683Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T17:11:54.098794Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:11:54 http: TLS handshake error from 10.244.1.1:51302: EOF level=info timestamp=2018-07-25T17:12:01.776809Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:12:04 http: TLS handshake error from 10.244.1.1:51308: EOF Pod name: virt-api-bcc6b587d-qpjzd Pod phase: Running 2018/07/25 17:09:42 http: TLS handshake error from 10.244.0.1:43014: EOF 2018/07/25 17:09:52 http: TLS handshake error from 10.244.0.1:43074: EOF 2018/07/25 17:10:02 http: TLS handshake error from 10.244.0.1:43134: EOF 2018/07/25 17:10:12 http: TLS handshake error from 10.244.0.1:43194: EOF 2018/07/25 17:10:22 http: TLS handshake error from 10.244.0.1:43254: EOF 2018/07/25 17:10:32 http: TLS handshake error from 10.244.0.1:43314: EOF 2018/07/25 17:10:42 http: TLS handshake error from 10.244.0.1:43374: EOF 2018/07/25 17:10:52 http: TLS handshake error from 10.244.0.1:43434: EOF 2018/07/25 17:11:02 http: TLS handshake error from 10.244.0.1:43494: EOF 2018/07/25 17:11:12 http: TLS handshake error from 10.244.0.1:43554: EOF 2018/07/25 17:11:22 http: TLS handshake error from 10.244.0.1:43614: EOF 2018/07/25 17:11:32 http: TLS handshake error from 10.244.0.1:43674: EOF 2018/07/25 17:11:42 http: TLS handshake error from 10.244.0.1:43734: EOF 2018/07/25 17:11:52 http: TLS handshake error from 10.244.0.1:43794: EOF 2018/07/25 17:12:02 http: TLS handshake error from 10.244.0.1:43854: EOF Pod name: virt-controller-67dcdd8464-8vhqd Pod phase: Running level=info timestamp=2018-07-25T16:45:59.672814Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:45:59.713301Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:45:59.738578Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info 
timestamp=2018-07-25T16:52:00.210843Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmignsr4\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmignsr4, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3576d20b-902a-11e8-910a-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmignsr4" level=info timestamp=2018-07-25T16:52:00.433134Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmix4dng kind= uid=0c92994b-902b-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:52:00.434048Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmix4dng kind= uid=0c92994b-902b-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:58:01.188254Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibgqcm kind= uid=e39f5b3a-902b-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T16:58:01.188700Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibgqcm kind= uid=e39f5b3a-902b-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T16:58:01.254271Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmibgqcm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmibgqcm" level=info timestamp=2018-07-25T17:04:03.020766Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7ngt kind= uid=bb366eac-902c-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T17:04:03.024307Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7ngt kind= uid=bb366eac-902c-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T17:04:03.276430Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmib7ngt\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmib7ngt" level=info timestamp=2018-07-25T17:08:04.321157Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimnn7w kind= uid=4b19b9c0-902d-11e8-910a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T17:08:04.322911Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimnn7w kind= uid=4b19b9c0-902d-11e8-910a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T17:08:04.499748Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmimnn7w\": the object has been modified; please apply your changes to the latest version and try 
again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmimnn7w" Pod name: virt-controller-67dcdd8464-khdcn Pod phase: Running level=info timestamp=2018-07-25T16:38:28.075743Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-gvb9v Pod phase: Running level=info timestamp=2018-07-25T16:38:39.724486Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-07-25T16:38:39.751281Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-25T16:38:39.753342Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:40.001052Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:40.354240Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:40.383060Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-handler-w9nbc Pod phase: Running level=info timestamp=2018-07-25T16:38:28.259871Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-25T16:38:28.278453Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-25T16:38:28.284064Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-25T16:38:28.384755Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-25T16:38:28.394595Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-25T16:38:28.395902Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmimnn7w-pk9cq Pod phase: Pending • Failure [241.347 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to stop a vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250 Timed out after 120.013s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049 ------------------------------ STEP: Starting the vmi via kubectl command level=info timestamp=2018-07-25T17:08:05.333893Z pos=utils.go:243 component=tests msg="Created virtual machine pod virt-launcher-testvmimnn7w-pk9cq" SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS Waiting for namespace kubevirt-test-default to be removed, this can take a while ... Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ... 
Summarizing 6 Failures:

[Fail] Windows VirtualMachineInstance [It] should succeed to start a vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049

[Fail] Windows VirtualMachineInstance [It] should succeed to stop a running vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049

[Fail] Windows VirtualMachineInstance with winrm connection [BeforeEach] should have correct UUID
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049

[Fail] Windows VirtualMachineInstance with winrm connection [BeforeEach] should have pod IP
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049

[Fail] Windows VirtualMachineInstance with kubectl command [It] should succeed to start a vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049

[Fail] Windows VirtualMachineInstance with kubectl command [It] should succeed to stop a vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1049

Ran 6 of 145 Specs in 1934.048 seconds
FAIL! -- 0 Passed | 6 Failed | 0 Pending | 139 Skipped
--- FAIL: TestTests (1934.07s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
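Note (not part of this run): all six failures report "Timed out ... Expected : false to equal : true" at tests/utils.go:1049 while the corresponding virt-launcher pods were still in the Pending phase, which suggests the VMIs never started to schedule rather than a Windows-specific problem. The commands below are a possible triage starting point against a cluster that is still up; the pod and PV names are taken from the log above, the checks themselves are only suggestions.

# Why did the launcher pod never leave Pending? (scheduling / volume events)
kubectl describe pod -n kubevirt-test-default virt-launcher-testvmi46lnp-46htm
# The Windows disk PV created earlier in this run (presumably backed by the NFS image dir)
kubectl get pv disk-windows -o yaml
# If the VMIs reference the disk through a claim, check that it bound
kubectl get pvc -n kubevirt-test-default
# Taints or resource pressure that could keep launcher pods unschedulable
kubectl describe nodes | grep -A5 -i taint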