+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release + WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release + [[ windows2016-release =~ openshift-.* ]] + [[ windows2016-release =~ .*-1.10.4-.* ]] + export KUBEVIRT_PROVIDER=k8s-1.11.0 + KUBEVIRT_PROVIDER=k8s-1.11.0 + export KUBEVIRT_NUM_NODES=2 + KUBEVIRT_NUM_NODES=2 + export NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + export NAMESPACE=kube-system + NAMESPACE=kube-system + trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP + make cluster-down ./cluster/down.sh + make cluster-up ./cluster/up.sh Downloading ....... Downloading ....... Downloading ....... 2018/08/03 20:51:13 Waiting for host: 192.168.66.101:22 2018/08/03 20:51:16 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/08/03 20:51:24 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/08/03 20:51:29 Connected to tcp://192.168.66.101:22 ++ systemctl status docker ++ grep active ++ wc -l + [[ 0 -eq 0 ]] + sleep 2 ++ systemctl status docker ++ wc -l ++ grep active + [[ 1 -eq 0 ]] + kubeadm init --config /etc/kubernetes/kubeadm.conf [init] using Kubernetes version: v1.11.0 [preflight] running pre-flight checks I0803 20:51:31.944844 1262 feature_gate.go:230] feature gates: &{map[]} I0803 20:51:32.024657 1262 kernel_validator.go:81] Validating kernel version I0803 20:51:32.024799 1262 kernel_validator.go:96] Validating kernel config [preflight/images] Pulling images required for setting up a Kubernetes cluster [preflight/images] This might take a minute or two, depending on the speed of your internet connection [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [preflight] Activating the kubelet service [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1] [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated apiserver-etcd-client certificate and key. 
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] this might take a minute or longer if the control plane images have to be pulled [apiclient] All control plane components are healthy after 52.511754 seconds [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster [markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''" [markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation [bootstraptoken] using token: abcdef.1234567890123456 [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. 
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:f2743e7c293d60d145a351c9fb1e8a92b633d9a594dfac3274101a23fd8c35e7 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node/node01 untainted + kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml storageclass.storage.k8s.io/local created configmap/local-storage-config created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created serviceaccount/local-storage-admin created daemonset.extensions/local-volume-provisioner created 2018/08/03 20:52:42 Waiting for host: 192.168.66.102:22 2018/08/03 20:52:45 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/03 20:52:57 Connected to tcp://192.168.66.102:22 ++ systemctl status docker ++ wc -l ++ grep active + [[ 1 -eq 0 ]] + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. Provide the missing builtin kernel ipvs support I0803 20:52:57.873464 1258 kernel_validator.go:81] Validating kernel version I0803 20:52:57.873781 1258 kernel_validator.go:96] Validating kernel config [discovery] Trying to connect to API Server "192.168.66.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [preflight] Activating the kubelet service [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap... 
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 38739968 kubectl Sending file modes: C0600 5450 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. + set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 48s v1.11.0 node02 Ready 21s v1.11.0 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep NotReady + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 49s v1.11.0 node02 Ready 22s v1.11.0 + make cluster-sync ./cluster/build.sh Building ... Untagged: localhost:33282/kubevirt/virt-controller:devel Untagged: localhost:33282/kubevirt/virt-controller@sha256:89ca06e9264ddedd11f68907c0238d3d2d28a3ffba43b43b78581b212b475f9f Deleted: sha256:92d9210e5bcc91828fe9d4be1453592ebdb08f0a72221eb6f94fbacbb377d6a1 Deleted: sha256:87d9cd69804550bc6a47129adc6b9162039a4077545afa53f36b21232103194e Deleted: sha256:f3c976070fe8d9d258cc655863c2e1a02be890cf8856a4f62dae001f06c0e5b1 Deleted: sha256:54059220c971d27d550f7a6df0e9c5aa40fd748008d824eb8a7cf7e3b295b1f6 Untagged: localhost:33282/kubevirt/virt-launcher:devel Untagged: localhost:33282/kubevirt/virt-launcher@sha256:353fbc505755088c7ebd4add72649fdf905cd4d10be2dc6f429dd01028edbfed Deleted: sha256:cf76884281e42a2651c852d24e4a65b1600095196aa07c2928285eb17ebbd102 Deleted: sha256:91d1e17def947cda45c0b48a36ecff1ceccb2201ce9dd6dcd36b540305e2cec8 Deleted: sha256:666a4cd98dc3fbb722a4ec2fd927f9ff74806f65c10bb88700ae12f55913e7c3 Deleted: sha256:c660a8ca913ceed79e94c2f24c03b98f27f0f2ec676dd7118c6db9b00918478e Deleted: sha256:14e377b9434cc106c0096d3746b70c36579669a9a418ebab1975bf3439ba62e6 Deleted: sha256:ab15c0dfefe394ad3cd0c491be71d7356b07c6dd9cb93df3366b39967cffb58f Deleted: sha256:982de3652196d5742a349e3eeaa7efa2060fabb0e815081d040213a757043aa1 Deleted: sha256:5c8ab09b4e6b269fa96f9d0167acd40310c4d9ea160d9c993716c9fde4d0ac6f Deleted: sha256:0655fd19b75fe8def01f64ed3ee4015e71dca946dbba382599bafea385957a42 Deleted: sha256:2e1e6691cfbfee54b8ab41093d638ec794101043c5eb1695d607dbac097ef6fe Deleted: sha256:3a6295103e032e0a60f493800e0eb605cb2a1e2ced2a43a02fbccdf669d986b2 Deleted: sha256:35bfb8581e5e7feaea2cda04f8f177b0ef8c6493b1083365a3f37cbb49c62e32 Untagged: localhost:33282/kubevirt/virt-handler:devel Untagged: localhost:33282/kubevirt/virt-handler@sha256:f920daea2fcab151bcb78ccdf090ae106925b8a65dde4ff9dca8d7a9602b4316 Deleted: sha256:bc12973c37805a20facf1ac19b379c254fa9d494db9a7c3dce2648d4757021c9 Deleted: sha256:6d2470daf84b7542ed3de2e4eac88f00a1a61b9c214db18453e6a80ab96d2dff Deleted: sha256:26ee346859d45be29a3cb1d1ada35c0897592b4e8f21a99af4801d6ee1f08b9e Deleted: sha256:496fb4038bfb97fdc417d83b285e3203e0944468d00a351def91afc2182824a2 Untagged: localhost:33282/kubevirt/virt-api:devel Untagged: localhost:33282/kubevirt/virt-api@sha256:5c1a79dad641a40f6dfda029a0ed7c2d7c512a7432b5cb44bbc4721d72d4400d Deleted: sha256:9276109d12424970a1013be388952ce9c41cd714f023caf6a7eb59f5e6b55806 Deleted: sha256:2ecc97c2ad34afb9a49e89309ea33642492bcad09aa07f1aad9b979258268a17 Deleted: 
sha256:300c585e5e85ff83dc5778bdbc415d5c91b2f24f684e6adbbe25ecfe1ecbd147 Deleted: sha256:cc065716145b85251c3776e8e1f68574078dc2638262c7b6c99d2c3963c6e256 Untagged: localhost:33282/kubevirt/subresource-access-test:devel Untagged: localhost:33282/kubevirt/subresource-access-test@sha256:2a4b47ec9bce47d79d249b6ae3c76cb1cb984c366efb6b0690229b4bb8f38c52 Deleted: sha256:1579116d87439d11bdaf210a0ddb5b54d3a9fbc7f0ffbf43ac5c93cbca59c638 Deleted: sha256:3c177ea764efb1cf63e74ba2189af27520c83f01d722a3dba29678bee55bd8f3 Deleted: sha256:ab197c1ebdf2902ed5726191c390c385a33671d7cdd617a8de1b1da2edd62d5e Deleted: sha256:0d9632e5cfe937a06ff5df7956f4d1db47b2dcc61bb05cbfe57036a56def1d54 Untagged: localhost:33282/kubevirt/example-hook-sidecar:devel Untagged: localhost:33282/kubevirt/example-hook-sidecar@sha256:39b0908acad00838f9af68d856390a02d1611114f7c88052337c5159e5c0bbf0 Deleted: sha256:fca864fae983d42d6e7698638a0fabd992505dca6212acc4fdc602c24c4ba4e2 Deleted: sha256:a31d918d2b69de8d77319ac3c2218a4e060a00c3e4cb4e8af546513fedc008d5 Deleted: sha256:d3fa96dfdd099b8240cc662d4315227efb309e1d265882c26a95d5dda505cee5 Deleted: sha256:7a70de952f3a3aab55df6c4f38dc905123845cb7c0520c97e65cdca463f7b4fa sha256:b69a3f94b2043cd36cc41eb5d9446480e0a640962e468ab72c3cc51f2b89386a go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:b69a3f94b2043cd36cc41eb5d9446480e0a640962e468ab72c3cc51f2b89386a go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... 
compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 40.39 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 84570f0bf244 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> 4b8efcbf3461 Step 5/8 : USER 1001 ---> Using cache ---> c49257f2ff48 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> 9ca4c4a42da6 Removing intermediate container d45342880e9d Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 3f6ed57d3eca ---> 6d5e3400ac24 Removing intermediate container 3f6ed57d3eca Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-controller" '' ---> Running in ae1d16419704 ---> b685e8b4a390 Removing intermediate container ae1d16419704 Successfully built b685e8b4a390 Sending build context to Docker daemon 43.32 MB Step 1/10 : FROM kubevirt/libvirt:4.2.0 ---> 5f0bfe81a3e0 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> c1e65e6c8241 Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 4c20d196c128 Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> e39198f15710 Removing intermediate container e858d12a91ca Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> aaee6873f45f Removing intermediate container 201d56ecf50c Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in ccfa2e8249a4  ---> 2825d0867765 Removing intermediate container ccfa2e8249a4 Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in 18f5f5b9cbd4  ---> a9a828714c92 Removing intermediate container 18f5f5b9cbd4 Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> 637ad074ecff Removing intermediate container 83215d13d95d Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in f726ec08cd1a ---> 4923f2d51453 Removing intermediate container f726ec08cd1a Step 10/10 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-launcher" '' ---> Running in 661638c2f29e ---> a3418641cc8a Removing intermediate container 661638c2f29e Successfully built a3418641cc8a Sending build context to Docker daemon 38.45 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> c219d19b0f0c Removing intermediate container c6c673116a92 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 5d4bf493c808 ---> ef9f65a5b9a3 Removing intermediate container 5d4bf493c808 Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-handler" '' ---> Running in 672eb95bfb66 ---> 703db8cbfd8f Removing intermediate container 672eb95bfb66 Successfully built 703db8cbfd8f Sending build context to Docker daemon 38.81 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 6f2134b876af Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> d5ef0239bf68 Step 5/8 : USER 1001 ---> Using cache ---> 233000b2d9b5 Step 6/8 : COPY virt-api /usr/bin/virt-api ---> beea466e23d4 Removing intermediate 
container 078e3f8c9090 Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in b4eed2ed6e85 ---> 16efadc21d15 Removing intermediate container b4eed2ed6e85 Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-api" '' ---> Running in 2af541ac603b ---> ced47558ae77 Removing intermediate container 2af541ac603b Successfully built ced47558ae77 Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/7 : ENV container docker ---> Using cache ---> 3fe7db912524 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 06d762a67408 Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> 3876d185cf84 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> 1fb50ce9b78f Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-windows2016-release1" '' ---> Using cache ---> ad0640d6a94a Successfully built ad0640d6a94a Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/5 : ENV container docker ---> Using cache ---> 3fe7db912524 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 6bc4f549313f Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "vm-killer" '' ---> Using cache ---> d1936042d584 Successfully built d1936042d584 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 68f33cf86aab Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 9ef1c0ce5d24 Step 3/7 : ENV container docker ---> Using cache ---> 9ad55e41ed61 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 17a81fda7c2b Step 5/7 : ADD entry-point.sh / ---> Using cache ---> 681d01e165e6 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> a79815fe82d9 Step 7/7 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "registry-disk-v1alpha" '' ---> Using cache ---> 6ef2fe0ba069 Successfully built 6ef2fe0ba069 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33342/kubevirt/registry-disk-v1alpha:devel ---> 6ef2fe0ba069 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 01615351ca4e Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 81ca76c46679 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release1" '' ---> Using cache ---> c448af5e3322 Successfully built c448af5e3322 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33342/kubevirt/registry-disk-v1alpha:devel ---> 6ef2fe0ba069 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d330eefdd757 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> d4f7cb7b1be2 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release1" '' ---> Using cache ---> c74218398637 Successfully built c74218398637 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM 
localhost:33342/kubevirt/registry-disk-v1alpha:devel ---> 6ef2fe0ba069 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d330eefdd757 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> 3696cd7aa2d3 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release1" '' ---> Using cache ---> c5b23ac9de78 Successfully built c5b23ac9de78 Sending build context to Docker daemon 35.59 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> deebe9dc06da Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 4094ce77e412 Step 5/8 : USER 1001 ---> Using cache ---> ba694520e9a4 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> 5f9d04d5f3e9 Removing intermediate container 3b987d753b8b Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in 65fb9f90897a ---> 1888f2864ee0 Removing intermediate container 65fb9f90897a Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "subresource-access-test" '' ---> Running in f8a48b2d9761 ---> 8464b4d64105 Removing intermediate container f8a48b2d9761 Successfully built 8464b4d64105 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/9 : ENV container docker ---> Using cache ---> 3fe7db912524 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> e0cf52293e57 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 8c031086e8cb Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 0f6dd31de4d3 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 6a702eb79a95 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> bed79012c9f3 Step 9/9 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "winrmcli" '' ---> Using cache ---> dd5c5c7f0ce2 Successfully built dd5c5c7f0ce2 Sending build context to Docker daemon 36.8 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> cc296a71da13 Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> c000f992e44b Removing intermediate container 3c6c2a765cdb Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in 4537b7054cee ---> a52ca18b1c80 Removing intermediate container 4537b7054cee Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-windows2016-release1" '' ---> Running in 414f18c6fc28 ---> b46cde38d596 Removing intermediate container 414f18c6fc28 Successfully built b46cde38d596 hack/build-docker.sh push The push refers to a repository [localhost:33342/kubevirt/virt-controller] b2618eec54c8: Preparing 915a0c3e3f5f: Preparing 891e1e4ef82a: Preparing 915a0c3e3f5f: Pushed b2618eec54c8: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:e0eb6d0512743627fac7723320df259aefeef104ece7bae02df13b001c4af11e size: 949 The push refers to a repository [localhost:33342/kubevirt/virt-launcher] d8f9f39872a7: Preparing 84cf81ff76b8: Preparing 25b9f8faa8cb: Preparing 6515b8889720: Preparing 52b1c5ff9b83: Preparing 5379fb5d8cce: Preparing da38cf808aa5: Preparing 
b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing 5eefb9960a36: Preparing 891e1e4ef82a: Preparing b83399358a92: Waiting 5379fb5d8cce: Waiting da38cf808aa5: Waiting 186d8b3e4fd8: Waiting 891e1e4ef82a: Waiting fa6154170bf5: Waiting 5eefb9960a36: Waiting 84cf81ff76b8: Pushed d8f9f39872a7: Pushed 6515b8889720: Pushed da38cf808aa5: Pushed b83399358a92: Pushed fa6154170bf5: Pushed 186d8b3e4fd8: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller 25b9f8faa8cb: Pushed 5379fb5d8cce: Pushed 52b1c5ff9b83: Pushed 5eefb9960a36: Pushed devel: digest: sha256:59707e714f5b9ea2f28f338664b4ac34607dc37c7f4ddbd86f72710dafca4dbd size: 2828 The push refers to a repository [localhost:33342/kubevirt/virt-handler] 89191d0749ed: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher 89191d0749ed: Pushed devel: digest: sha256:1c55037ebca943eb2d62864121dc87f468b409f4d6013c88145d397f904bbc22 size: 740 The push refers to a repository [localhost:33342/kubevirt/virt-api] d94394c386c8: Preparing 7cc07c574d2a: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler 7cc07c574d2a: Pushed d94394c386c8: Pushed devel: digest: sha256:c4e41cbb4b3f0b58f1ff514d8fb7da09312e551dde4fce3c0e4d32e349c6899e size: 948 The push refers to a repository [localhost:33342/kubevirt/disks-images-provider] 1548fa7b1c9e: Preparing a7621d2cf364: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 1548fa7b1c9e: Pushed a7621d2cf364: Pushed devel: digest: sha256:d273f6da472de0e04913d3468b8efabc603cdb07dec7d2ff3559414c226fceef size: 948 The push refers to a repository [localhost:33342/kubevirt/vm-killer] 3c31f9f8d755: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider 3c31f9f8d755: Pushed devel: digest: sha256:a6dc30b5b25246ac485e30d2aaee9c098c1e68191ce390cce3f0d8d4e1ad9328 size: 740 The push refers to a repository [localhost:33342/kubevirt/registry-disk-v1alpha] c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing c66b9a220e25: Pushed 4662bbc21c2d: Pushed 25edbec0eaea: Pushed devel: digest: sha256:983fa47e2a9f84477bd28f2f1c36f24812001a833dca5b4ae9a4d436a2d2564c size: 948 The push refers to a repository [localhost:33342/kubevirt/cirros-registry-disk-demo] 8081bd2f2d51: Preparing c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing 25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha 4662bbc21c2d: Mounted from kubevirt/registry-disk-v1alpha c66b9a220e25: Mounted from kubevirt/registry-disk-v1alpha 8081bd2f2d51: Pushed devel: digest: sha256:90cff06e4e356cc860429e715d7eb65570de321773a692851fd7888f39a0e2b0 size: 1160 The push refers to a repository [localhost:33342/kubevirt/fedora-cloud-registry-disk-demo] fa1881d7bf95: Preparing c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing 25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo 4662bbc21c2d: Mounted from kubevirt/cirros-registry-disk-demo c66b9a220e25: Mounted from kubevirt/cirros-registry-disk-demo fa1881d7bf95: Pushed devel: digest: sha256:18c2e2f569079fd2da55a2eb87240fe29015c8fbf293d125557e82dfb55a4cf0 size: 1161 The push refers to a repository [localhost:33342/kubevirt/alpine-registry-disk-demo] d01c36937189: Preparing c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing c66b9a220e25: Mounted from kubevirt/fedora-cloud-registry-disk-demo 4662bbc21c2d: Mounted from kubevirt/fedora-cloud-registry-disk-demo 25edbec0eaea: Mounted from 
kubevirt/fedora-cloud-registry-disk-demo d01c36937189: Pushed devel: digest: sha256:994d447b46abde194e1d6610f761887b06c5a3b57c80e1807cdc6138f0d20f15 size: 1160 The push refers to a repository [localhost:33342/kubevirt/subresource-access-test] ba872356f677: Preparing 7e69243e781e: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer 7e69243e781e: Pushed ba872356f677: Pushed devel: digest: sha256:2ce5dfaec6aa7f1bddb41c0fdf954ff6c3362b8a35f60db77fafdb53665d09c9 size: 948 The push refers to a repository [localhost:33342/kubevirt/winrmcli] a117c61a5658: Preparing c9df4405017d: Preparing 99bb32247f65: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test a117c61a5658: Pushed 99bb32247f65: Pushed c9df4405017d: Pushed devel: digest: sha256:8f5e5fefe668fada12ea95ecdc2c5e3a1055bf6b924ed23056d8ab448a09b6f8 size: 1165 The push refers to a repository [localhost:33342/kubevirt/example-hook-sidecar] 123f71bc2024: Preparing 39bae602f753: Preparing 123f71bc2024: Pushed 39bae602f753: Pushed devel: digest: sha256:f99a9eba1a5ad5b47301e39abbc6f01557390c69731764c14bad68a41fdbce5c size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-windows2016-release ']' ++ provider_prefix=kubevirt-functional-tests-windows2016-release1 ++ job_prefix=kubevirt-functional-tests-windows2016-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-189-g96d0165 ++ KUBEVIRT_VERSION=v0.7.0-189-g96d0165 + source cluster/k8s-1.11.0/provider.sh ++ set -e ++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip 
network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.11.0.sh ++ source hack/config-provider-k8s-1.11.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl +++ docker_prefix=localhost:33342/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... + cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n 
default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ wc -l ++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ cluster/k8s-1.11.0/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io No resources found. 
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + 
KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ cluster/k8s-1.11.0/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io No resources found. Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-windows2016-release ']' ++ provider_prefix=kubevirt-functional-tests-windows2016-release1 ++ job_prefix=kubevirt-functional-tests-windows2016-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-189-g96d0165 ++ KUBEVIRT_VERSION=v0.7.0-189-g96d0165 + source cluster/k8s-1.11.0/provider.sh ++ set -e ++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries 
docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.11.0.sh ++ source hack/config-provider-k8s-1.11.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl +++ docker_prefix=localhost:33342/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... + [[ -z windows2016-release ]] + [[ windows2016-release =~ .*-dev ]] + [[ windows2016-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io/kubevirt.io:admin created clusterrole.rbac.authorization.k8s.io/kubevirt.io:edit created clusterrole.rbac.authorization.k8s.io/kubevirt.io:view created serviceaccount/kubevirt-apiserver created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver-auth-delegator created rolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created role.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrole.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrole.rbac.authorization.k8s.io/kubevirt-controller created serviceaccount/kubevirt-controller created serviceaccount/kubevirt-privileged created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller-cluster-admin created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-privileged-cluster-admin created 
clusterrole.rbac.authorization.k8s.io/kubevirt.io:default created clusterrolebinding.rbac.authorization.k8s.io/kubevirt.io:default created service/virt-api created deployment.extensions/virt-api created deployment.extensions/virt-controller created daemonset.extensions/virt-handler created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstances.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancereplicasets.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancepresets.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachines.kubevirt.io created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim/disk-alpine created persistentvolume/host-path-disk-alpine created persistentvolumeclaim/disk-custom created persistentvolume/host-path-disk-custom created daemonset.extensions/disks-images-provider created serviceaccount/kubevirt-testing created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-testing-cluster-admin created + [[ k8s-1.11.0 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-bcc6b587d-7q9xv 0/1 ContainerCreating 0 2s virt-api-bcc6b587d-p7sqd 0/1 ContainerCreating 0 2s virt-controller-67dcdd8464-9phcp 0/1 ContainerCreating 0 3s virt-controller-67dcdd8464-rz8nl 0/1 ContainerCreating 0 2s virt-handler-9hbwq 0/1 ContainerCreating 0 3s virt-handler-xpzr8 0/1 ContainerCreating 0 2s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running disks-images-provider-7spwb 0/1 Pending 0 0s disks-images-provider-nr6h4 0/1 Pending 0 0s virt-api-bcc6b587d-7q9xv 0/1 ContainerCreating 0 3s virt-api-bcc6b587d-p7sqd 0/1 ContainerCreating 0 3s virt-controller-67dcdd8464-9phcp 0/1 ContainerCreating 0 4s virt-controller-67dcdd8464-rz8nl 0/1 ContainerCreating 0 3s virt-handler-9hbwq 0/1 ContainerCreating 0 4s virt-handler-xpzr8 0/1 ContainerCreating 0 3s + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n false ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + grep false + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers false + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n kube-system + cluster/kubectl.sh get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-78fcdf6894-z4v4f 1/1 Running 0 15m coredns-78fcdf6894-zwmnr 1/1 Running 0 15m disks-images-provider-7spwb 1/1 Running 0 1m disks-images-provider-nr6h4 1/1 Running 0 1m etcd-node01 1/1 Running 0 14m kube-apiserver-node01 1/1 Running 0 14m kube-controller-manager-node01 1/1 Running 0 14m kube-flannel-ds-dfrgm 1/1 Running 0 15m kube-flannel-ds-dn69h 1/1 Running 0 15m kube-proxy-lmmtj 1/1 Running 0 15m kube-proxy-txtgd 1/1 Running 0 15m kube-scheduler-node01 1/1 Running 0 14m virt-api-bcc6b587d-7q9xv 1/1 Running 0 1m virt-api-bcc6b587d-p7sqd 1/1 Running 0 1m virt-controller-67dcdd8464-9phcp 1/1 Running 0 1m virt-controller-67dcdd8464-rz8nl 1/1 Running 0 1m virt-handler-9hbwq 1/1 Running 0 1m virt-handler-xpzr8 1/1 Running 0 1m + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n default --no-headers ++ cluster/kubectl.sh get pods -n default --no-headers ++ grep -v Running + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n default + cluster/kubectl.sh get pods -n default NAME READY STATUS RESTARTS AGE local-volume-provisioner-n54hj 1/1 Running 0 15m local-volume-provisioner-xxrmx 1/1 Running 0 15m + kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml' + [[ windows2016-release =~ windows.* ]] + [[ -d /home/nfs/images/windows2016 ]] + kubectl create -f - + cluster/kubectl.sh create -f - persistentvolume/disk-windows created + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml --ginkgo.focus=Windows' + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml --ginkgo.focus=Windows' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:b69a3f94b2043cd36cc41eb5d9446480e0a640962e468ab72c3cc51f2b89386a go version go1.10 linux/amd64 go version go1.10 linux/amd64 Compiling tests... 
compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1533330524 Will run 6 of 148 specs SSSSSS Pod name: disks-images-provider-7spwb Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-nr6h4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-7q9xv Pod phase: Running 2018/08/03 21:12:17 http: TLS handshake error from 10.244.0.1:45852: EOF 2018/08/03 21:12:27 http: TLS handshake error from 10.244.0.1:45912: EOF 2018/08/03 21:12:37 http: TLS handshake error from 10.244.0.1:45972: EOF 2018/08/03 21:12:47 http: TLS handshake error from 10.244.0.1:46032: EOF 2018/08/03 21:12:57 http: TLS handshake error from 10.244.0.1:46092: EOF 2018/08/03 21:13:07 http: TLS handshake error from 10.244.0.1:46152: EOF 2018/08/03 21:13:17 http: TLS handshake error from 10.244.0.1:46212: EOF 2018/08/03 21:13:27 http: TLS handshake error from 10.244.0.1:46272: EOF 2018/08/03 21:13:37 http: TLS handshake error from 10.244.0.1:46332: EOF 2018/08/03 21:13:47 http: TLS handshake error from 10.244.0.1:46392: EOF 2018/08/03 21:13:57 http: TLS handshake error from 10.244.0.1:46452: EOF 2018/08/03 21:14:07 http: TLS handshake error from 10.244.0.1:46512: EOF 2018/08/03 21:14:17 http: TLS handshake error from 10.244.0.1:46572: EOF 2018/08/03 21:14:27 http: TLS handshake error from 10.244.0.1:46632: EOF 2018/08/03 21:14:37 http: TLS handshake error from 10.244.0.1:46692: EOF Pod name: virt-api-bcc6b587d-p7sqd Pod phase: Running level=info timestamp=2018-08-03T21:13:57.092188Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:13:57 http: TLS handshake error from 10.244.1.1:59228: EOF level=info timestamp=2018-08-03T21:14:00.214693Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:14:03.502918Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/03 21:14:07 http: TLS handshake error from 10.244.1.1:59234: EOF level=info timestamp=2018-08-03T21:14:07.436172Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:14:13.060412Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:14:17 http: TLS handshake error from 10.244.1.1:59240: EOF level=info timestamp=2018-08-03T21:14:27.157468Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:14:27 http: TLS handshake error from 10.244.1.1:59246: EOF level=info timestamp=2018-08-03T21:14:30.294623Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:14:33.358049Z pos=filter.go:46 component=virt-api 
remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/03 21:14:37 http: TLS handshake error from 10.244.1.1:59252: EOF level=info timestamp=2018-08-03T21:14:37.521050Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:14:43.214169Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-controller-67dcdd8464-9phcp Pod phase: Running level=info timestamp=2018-08-03T21:07:08.906535Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiInformer" level=info timestamp=2018-08-03T21:07:08.906708Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-08-03T21:07:08.906770Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-08-03T21:07:08.906812Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-08-03T21:07:08.906842Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-08-03T21:07:08.906921Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer configMapInformer" level=info timestamp=2018-08-03T21:07:08.906950Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmInformer" level=info timestamp=2018-08-03T21:07:08.906978Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer limitrangeInformer" level=info timestamp=2018-08-03T21:07:08.907058Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-08-03T21:07:08.917364Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-08-03T21:07:08.918159Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-08-03T21:07:08.919326Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-08-03T21:07:08.920074Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." 
level=info timestamp=2018-08-03T21:08:45.292020Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixhnnk kind= uid=68580c29-9761-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:08:45.293833Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixhnnk kind= uid=68580c29-9761-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-rz8nl Pod phase: Running level=info timestamp=2018-08-03T21:07:09.764641Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-9hbwq Pod phase: Running level=info timestamp=2018-08-03T21:07:12.764984Z pos=virt-handler.go:89 component=virt-handler hostname=node01 level=info timestamp=2018-08-03T21:07:12.772346Z pos=virtinformers.go:107 component=virt-handler msg="STARTING informer configMapInformer" level=info timestamp=2018-08-03T21:07:12.875163Z pos=vm.go:208 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-08-03T21:07:12.877005Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=error timestamp=2018-08-03T21:07:12.878139Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:09:00.770478Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:10:05.569316Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:11:16.917483Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:12:24.001958Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:13:45.745544Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-handler-xpzr8 Pod phase: Running level=info timestamp=2018-08-03T21:07:15.617625Z pos=virt-handler.go:89 component=virt-handler hostname=node02 level=info timestamp=2018-08-03T21:07:15.623187Z pos=virtinformers.go:107 component=virt-handler msg="STARTING informer configMapInformer" level=info timestamp=2018-08-03T21:07:15.730912Z pos=vm.go:208 component=virt-handler msg="Starting virt-handler controller." level=error timestamp=2018-08-03T21:07:15.732031Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=info timestamp=2018-08-03T21:07:15.735160Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=error timestamp=2018-08-03T21:09:04.272866Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" 
msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:10:09.160033Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:11:20.471371Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:12:27.496374Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:13:49.203513Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-launcher-testvmixhnnk-qn2kp Pod phase: Pending ------------------------------ • Failure [360.860 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to start a vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133 Timed out after 180.025s. Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034 ------------------------------ level=info timestamp=2018-08-03T21:08:46.280717Z pos=utils.go:244 component=tests namespace=kubevirt-test-default name=testvmixhnnk kind=VirtualMachineInstance uid=68580c29-9761-11e8-bce6-525500d15501 msg="Created virtual machine pod virt-launcher-testvmixhnnk-qn2kp" Pod name: disks-images-provider-7spwb Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-nr6h4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-7q9xv Pod phase: Running 2018/08/03 21:18:17 http: TLS handshake error from 10.244.0.1:48016: EOF 2018/08/03 21:18:27 http: TLS handshake error from 10.244.0.1:48076: EOF 2018/08/03 21:18:37 http: TLS handshake error from 10.244.0.1:48136: EOF 2018/08/03 21:18:47 http: TLS handshake error from 10.244.0.1:48196: EOF 2018/08/03 21:18:57 http: TLS handshake error from 10.244.0.1:48256: EOF 2018/08/03 21:19:07 http: TLS handshake error from 10.244.0.1:48316: EOF 2018/08/03 21:19:17 http: TLS handshake error from 10.244.0.1:48376: EOF 2018/08/03 21:19:27 http: TLS handshake error from 10.244.0.1:48436: EOF 2018/08/03 21:19:37 http: TLS handshake error from 10.244.0.1:48496: EOF 2018/08/03 21:19:47 http: TLS handshake error from 10.244.0.1:48556: EOF 2018/08/03 21:19:57 http: TLS handshake error from 10.244.0.1:48616: EOF 2018/08/03 21:20:07 http: TLS handshake error from 10.244.0.1:48676: EOF 2018/08/03 21:20:17 http: TLS handshake error from 10.244.0.1:48736: EOF 2018/08/03 21:20:27 http: TLS handshake error from 10.244.0.1:48796: EOF 2018/08/03 21:20:37 http: TLS handshake error from 10.244.0.1:48856: EOF Pod name: virt-api-bcc6b587d-p7sqd Pod phase: Running level=info timestamp=2018-08-03T21:20:01.027530Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:20:03.470753Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/03 21:20:07 http: TLS handshake error 
from 10.244.1.1:59450: EOF level=info timestamp=2018-08-03T21:20:08.625438Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:20:14.848606Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:20:17 http: TLS handshake error from 10.244.1.1:59456: EOF 2018/08/03 21:20:27 http: TLS handshake error from 10.244.1.1:59462: EOF level=info timestamp=2018-08-03T21:20:27.960960Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:20:31.091529Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:20:33.531871Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/03 21:20:37 http: TLS handshake error from 10.244.1.1:59468: EOF level=info timestamp=2018-08-03T21:20:38.718583Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:20:40.477568Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-03T21:20:40.481081Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-03T21:20:44.988715Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-controller-67dcdd8464-9phcp Pod phase: Running level=info timestamp=2018-08-03T21:07:08.906842Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-08-03T21:07:08.906921Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer configMapInformer" level=info timestamp=2018-08-03T21:07:08.906950Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmInformer" level=info timestamp=2018-08-03T21:07:08.906978Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer limitrangeInformer" level=info timestamp=2018-08-03T21:07:08.907058Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-08-03T21:07:08.917364Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-08-03T21:07:08.918159Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-08-03T21:07:08.919326Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." 
level=info timestamp=2018-08-03T21:07:08.920074Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." level=info timestamp=2018-08-03T21:08:45.292020Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixhnnk kind= uid=68580c29-9761-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:08:45.293833Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixhnnk kind= uid=68580c29-9761-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:14:45.947881Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixhnnk\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmixhnnk, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 68580c29-9761-11e8-bce6-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixhnnk" level=info timestamp=2018-08-03T21:14:46.080525Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirhjvc kind= uid=3f695be2-9762-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:14:46.080893Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirhjvc kind= uid=3f695be2-9762-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:14:46.196454Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirhjvc\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirhjvc" Pod name: virt-controller-67dcdd8464-rz8nl Pod phase: Running level=info timestamp=2018-08-03T21:07:09.764641Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-9hbwq Pod phase: Running level=info timestamp=2018-08-03T21:07:12.764984Z pos=virt-handler.go:89 component=virt-handler hostname=node01 level=info timestamp=2018-08-03T21:07:12.772346Z pos=virtinformers.go:107 component=virt-handler msg="STARTING informer configMapInformer" level=info timestamp=2018-08-03T21:07:12.875163Z pos=vm.go:208 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-08-03T21:07:12.877005Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=error timestamp=2018-08-03T21:07:12.878139Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:09:00.770478Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:10:05.569316Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" 
msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:11:16.917483Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:12:24.001958Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:13:45.745544Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:15:22.914995Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:17:21.569625Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:18:55.415680Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:20:15.862851Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-handler-xpzr8 Pod phase: Running level=info timestamp=2018-08-03T21:07:15.617625Z pos=virt-handler.go:89 component=virt-handler hostname=node02 level=info timestamp=2018-08-03T21:07:15.623187Z pos=virtinformers.go:107 component=virt-handler msg="STARTING informer configMapInformer" level=info timestamp=2018-08-03T21:07:15.730912Z pos=vm.go:208 component=virt-handler msg="Starting virt-handler controller." level=error timestamp=2018-08-03T21:07:15.732031Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=info timestamp=2018-08-03T21:07:15.735160Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=error timestamp=2018-08-03T21:09:04.272866Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:10:09.160033Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:11:20.471371Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:12:27.496374Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:13:49.203513Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" 
msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:15:26.336176Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:17:24.942285Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:18:58.742262Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:20:19.147972Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-launcher-testvmirhjvc-vk4xq Pod phase: Pending • Failure [360.745 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to stop a running vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139 Timed out after 180.014s. Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034 ------------------------------ STEP: Starting the vmi level=info timestamp=2018-08-03T21:14:46.909287Z pos=utils.go:244 component=tests namespace=kubevirt-test-default name=testvmirhjvc kind=VirtualMachineInstance uid=3f695be2-9762-11e8-bce6-525500d15501 msg="Created virtual machine pod virt-launcher-testvmirhjvc-vk4xq" Pod name: disks-images-provider-7spwb Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-nr6h4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-7q9xv Pod phase: Running 2018/08/03 21:24:27 http: TLS handshake error from 10.244.0.1:50236: EOF 2018/08/03 21:24:37 http: TLS handshake error from 10.244.0.1:50296: EOF 2018/08/03 21:24:47 http: TLS handshake error from 10.244.0.1:50356: EOF 2018/08/03 21:24:57 http: TLS handshake error from 10.244.0.1:50416: EOF 2018/08/03 21:25:07 http: TLS handshake error from 10.244.0.1:50476: EOF 2018/08/03 21:25:17 http: TLS handshake error from 10.244.0.1:50536: EOF 2018/08/03 21:25:27 http: TLS handshake error from 10.244.0.1:50596: EOF 2018/08/03 21:25:37 http: TLS handshake error from 10.244.0.1:50656: EOF 2018/08/03 21:25:47 http: TLS handshake error from 10.244.0.1:50716: EOF 2018/08/03 21:25:57 http: TLS handshake error from 10.244.0.1:50776: EOF 2018/08/03 21:26:07 http: TLS handshake error from 10.244.0.1:50836: EOF 2018/08/03 21:26:17 http: TLS handshake error from 10.244.0.1:50896: EOF 2018/08/03 21:26:27 http: TLS handshake error from 10.244.0.1:50956: EOF 2018/08/03 21:26:37 http: TLS handshake error from 10.244.0.1:51016: EOF 2018/08/03 21:26:47 http: TLS handshake error from 10.244.0.1:51076: EOF Pod name: virt-api-bcc6b587d-p7sqd Pod phase: Running 2018/08/03 21:25:57 http: TLS handshake error from 10.244.1.1:59660: EOF level=info timestamp=2018-08-03T21:25:58.867827Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:26:01.933558Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET 
url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:26:03.412670Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/03 21:26:07 http: TLS handshake error from 10.244.1.1:59666: EOF level=info timestamp=2018-08-03T21:26:09.787795Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:26:16.830466Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:26:17 http: TLS handshake error from 10.244.1.1:59672: EOF 2018/08/03 21:26:27 http: TLS handshake error from 10.244.1.1:59678: EOF level=info timestamp=2018-08-03T21:26:28.926578Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:26:31.986780Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:26:33.357712Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/03 21:26:37 http: TLS handshake error from 10.244.1.1:59684: EOF level=info timestamp=2018-08-03T21:26:39.885146Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:26:46.987148Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-controller-67dcdd8464-9phcp Pod phase: Running level=info timestamp=2018-08-03T21:07:08.917364Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-08-03T21:07:08.918159Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-08-03T21:07:08.919326Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-08-03T21:07:08.920074Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." 
level=info timestamp=2018-08-03T21:08:45.292020Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixhnnk kind= uid=68580c29-9761-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:08:45.293833Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixhnnk kind= uid=68580c29-9761-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:14:45.947881Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixhnnk\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmixhnnk, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 68580c29-9761-11e8-bce6-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixhnnk" level=info timestamp=2018-08-03T21:14:46.080525Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirhjvc kind= uid=3f695be2-9762-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:14:46.080893Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirhjvc kind= uid=3f695be2-9762-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:14:46.196454Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirhjvc\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirhjvc" level=info timestamp=2018-08-03T21:20:46.904898Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminlz2l kind= uid=167a27ae-9763-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:20:46.905626Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminlz2l kind= uid=167a27ae-9763-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:20:46.995061Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:20:47.023605Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:20:47.043840Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" Pod name: virt-controller-67dcdd8464-rz8nl 
Pod phase: Running level=info timestamp=2018-08-03T21:07:09.764641Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-9hbwq Pod phase: Running level=info timestamp=2018-08-03T21:07:12.877005Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=error timestamp=2018-08-03T21:07:12.878139Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:09:00.770478Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:10:05.569316Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:11:16.917483Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:12:24.001958Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:13:45.745544Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:15:22.914995Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:17:21.569625Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:18:55.415680Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:20:15.862851Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:21:37.009087Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:23:25.980891Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:24:40.696714Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:26:21.861675Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" 
msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-handler-xpzr8 Pod phase: Running level=error timestamp=2018-08-03T21:07:15.732031Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=info timestamp=2018-08-03T21:07:15.735160Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=error timestamp=2018-08-03T21:09:04.272866Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:10:09.160033Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:11:20.471371Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:12:27.496374Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:13:49.203513Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:15:26.336176Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:17:24.942285Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:18:58.742262Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:20:19.147972Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:21:40.307870Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:23:29.235534Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:24:43.914030Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:26:25.058957Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" 
msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-launcher-testvminlz2l-lglh7 Pod phase: Pending • Failure in Spec Setup (BeforeEach) [360.676 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have correct UUID [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192 Timed out after 180.061s. Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034 ------------------------------ STEP: Creating winrm-cli pod for the future use STEP: Starting the windows VirtualMachineInstance level=info timestamp=2018-08-03T21:20:47.697759Z pos=utils.go:244 component=tests namespace=kubevirt-test-default name=testvminlz2l kind=VirtualMachineInstance uid=167a27ae-9763-11e8-bce6-525500d15501 msg="Created virtual machine pod virt-launcher-testvminlz2l-lglh7" Pod name: disks-images-provider-7spwb Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-nr6h4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-7q9xv Pod phase: Running 2018/08/03 21:30:27 http: TLS handshake error from 10.244.0.1:52396: EOF 2018/08/03 21:30:37 http: TLS handshake error from 10.244.0.1:52456: EOF 2018/08/03 21:30:47 http: TLS handshake error from 10.244.0.1:52516: EOF 2018/08/03 21:30:57 http: TLS handshake error from 10.244.0.1:52576: EOF 2018/08/03 21:31:07 http: TLS handshake error from 10.244.0.1:52636: EOF 2018/08/03 21:31:17 http: TLS handshake error from 10.244.0.1:52696: EOF 2018/08/03 21:31:27 http: TLS handshake error from 10.244.0.1:52756: EOF 2018/08/03 21:31:37 http: TLS handshake error from 10.244.0.1:52816: EOF 2018/08/03 21:31:47 http: TLS handshake error from 10.244.0.1:52876: EOF 2018/08/03 21:31:57 http: TLS handshake error from 10.244.0.1:52936: EOF 2018/08/03 21:32:07 http: TLS handshake error from 10.244.0.1:52996: EOF 2018/08/03 21:32:17 http: TLS handshake error from 10.244.0.1:53056: EOF 2018/08/03 21:32:27 http: TLS handshake error from 10.244.0.1:53116: EOF 2018/08/03 21:32:37 http: TLS handshake error from 10.244.0.1:53176: EOF 2018/08/03 21:32:47 http: TLS handshake error from 10.244.0.1:53236: EOF Pod name: virt-api-bcc6b587d-p7sqd Pod phase: Running 2018/08/03 21:32:07 http: TLS handshake error from 10.244.1.1:59882: EOF level=info timestamp=2018-08-03T21:32:10.834382Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:32:17 http: TLS handshake error from 10.244.1.1:59888: EOF level=info timestamp=2018-08-03T21:32:18.852881Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:32:27 http: TLS handshake error from 10.244.1.1:59894: EOF level=info timestamp=2018-08-03T21:32:29.862490Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:32:32.803267Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 
contentLength=136 level=info timestamp=2018-08-03T21:32:33.286120Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-03T21:32:33.301717Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-03T21:32:33.611688Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/03 21:32:37 http: TLS handshake error from 10.244.1.1:59900: EOF level=info timestamp=2018-08-03T21:32:40.881881Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-03T21:32:40.885480Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-03T21:32:40.925802Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:32:47 http: TLS handshake error from 10.244.1.1:59906: EOF Pod name: virt-controller-67dcdd8464-9phcp Pod phase: Running level=info timestamp=2018-08-03T21:07:08.920074Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." level=info timestamp=2018-08-03T21:08:45.292020Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixhnnk kind= uid=68580c29-9761-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:08:45.293833Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixhnnk kind= uid=68580c29-9761-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:14:45.947881Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixhnnk\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmixhnnk, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 68580c29-9761-11e8-bce6-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixhnnk" level=info timestamp=2018-08-03T21:14:46.080525Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirhjvc kind= uid=3f695be2-9762-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:14:46.080893Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirhjvc kind= uid=3f695be2-9762-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:14:46.196454Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirhjvc\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirhjvc" level=info timestamp=2018-08-03T21:20:46.904898Z pos=preset.go:142 
component=virt-controller service=http namespace=kubevirt-test-default name=testvminlz2l kind= uid=167a27ae-9763-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:20:46.905626Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminlz2l kind= uid=167a27ae-9763-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:20:46.995061Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:20:47.023605Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:20:47.043840Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:26:47.353148Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvminlz2l, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 167a27ae-9763-11e8-bce6-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:26:47.527765Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqpdfw kind= uid=ed6cc134-9763-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:26:47.528965Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqpdfw kind= uid=ed6cc134-9763-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-rz8nl Pod phase: Running level=info timestamp=2018-08-03T21:07:09.764641Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-9hbwq Pod phase: Running level=error timestamp=2018-08-03T21:10:05.569316Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:11:16.917483Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:12:24.001958Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" 
msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:13:45.745544Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:15:22.914995Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:17:21.569625Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:18:55.415680Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:20:15.862851Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:21:37.009087Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:23:25.980891Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:24:40.696714Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:26:21.861675Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:28:24.031342Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:29:45.226800Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:31:39.489640Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-handler-xpzr8 Pod phase: Running level=error timestamp=2018-08-03T21:10:09.160033Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:11:20.471371Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:12:27.496374Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:13:49.203513Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" 
msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:15:26.336176Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:17:24.942285Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:18:58.742262Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:20:19.147972Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:21:40.307870Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:23:29.235534Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:24:43.914030Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:26:25.058957Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:28:27.195139Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:29:48.340408Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:31:42.547943Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-launcher-testvmiqpdfw-vqqtk Pod phase: Pending • Failure in Spec Setup (BeforeEach) [360.680 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have pod IP [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208 Timed out after 180.012s. 
Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034 ------------------------------ STEP: Creating winrm-cli pod for the future use STEP: Starting the windows VirtualMachineInstance level=info timestamp=2018-08-03T21:26:48.382731Z pos=utils.go:244 component=tests namespace=kubevirt-test-default name=testvmiqpdfw kind=VirtualMachineInstance uid=ed6cc134-9763-11e8-bce6-525500d15501 msg="Created virtual machine pod virt-launcher-testvmiqpdfw-vqqtk" Pod name: disks-images-provider-7spwb Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-nr6h4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-7q9xv Pod phase: Running 2018/08/03 21:34:27 http: TLS handshake error from 10.244.0.1:53836: EOF 2018/08/03 21:34:37 http: TLS handshake error from 10.244.0.1:53896: EOF 2018/08/03 21:34:47 http: TLS handshake error from 10.244.0.1:53956: EOF 2018/08/03 21:34:57 http: TLS handshake error from 10.244.0.1:54016: EOF 2018/08/03 21:35:07 http: TLS handshake error from 10.244.0.1:54076: EOF 2018/08/03 21:35:17 http: TLS handshake error from 10.244.0.1:54136: EOF 2018/08/03 21:35:27 http: TLS handshake error from 10.244.0.1:54196: EOF 2018/08/03 21:35:37 http: TLS handshake error from 10.244.0.1:54256: EOF 2018/08/03 21:35:47 http: TLS handshake error from 10.244.0.1:54316: EOF 2018/08/03 21:35:57 http: TLS handshake error from 10.244.0.1:54376: EOF 2018/08/03 21:36:07 http: TLS handshake error from 10.244.0.1:54436: EOF 2018/08/03 21:36:17 http: TLS handshake error from 10.244.0.1:54496: EOF 2018/08/03 21:36:27 http: TLS handshake error from 10.244.0.1:54556: EOF 2018/08/03 21:36:37 http: TLS handshake error from 10.244.0.1:54616: EOF 2018/08/03 21:36:47 http: TLS handshake error from 10.244.0.1:54676: EOF Pod name: virt-api-bcc6b587d-p7sqd Pod phase: Running 2018/08/03 21:35:57 http: TLS handshake error from 10.244.1.1:60020: EOF level=info timestamp=2018-08-03T21:36:00.363803Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:36:03.262542Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:36:03.600304Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/03 21:36:07 http: TLS handshake error from 10.244.1.1:60026: EOF level=info timestamp=2018-08-03T21:36:11.887672Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:36:17 http: TLS handshake error from 10.244.1.1:60032: EOF level=info timestamp=2018-08-03T21:36:20.036020Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:36:27 http: TLS handshake error from 10.244.1.1:60038: EOF level=info timestamp=2018-08-03T21:36:30.434726Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" 
proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:36:33.331341Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:36:33.568546Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/03 21:36:37 http: TLS handshake error from 10.244.1.1:60044: EOF level=info timestamp=2018-08-03T21:36:41.989179Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:36:47 http: TLS handshake error from 10.244.1.1:60050: EOF Pod name: virt-controller-67dcdd8464-9phcp Pod phase: Running level=info timestamp=2018-08-03T21:08:45.293833Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixhnnk kind= uid=68580c29-9761-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:14:45.947881Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixhnnk\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmixhnnk, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 68580c29-9761-11e8-bce6-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixhnnk" level=info timestamp=2018-08-03T21:14:46.080525Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirhjvc kind= uid=3f695be2-9762-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:14:46.080893Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirhjvc kind= uid=3f695be2-9762-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:14:46.196454Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirhjvc\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirhjvc" level=info timestamp=2018-08-03T21:20:46.904898Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminlz2l kind= uid=167a27ae-9763-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:20:46.905626Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminlz2l kind= uid=167a27ae-9763-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:20:46.995061Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:20:47.023605Z pos=vmi.go:157 component=virt-controller service=http 
reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:20:47.043840Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:26:47.353148Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvminlz2l, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 167a27ae-9763-11e8-bce6-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:26:47.527765Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqpdfw kind= uid=ed6cc134-9763-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:26:47.528965Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqpdfw kind= uid=ed6cc134-9763-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:32:49.225385Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmismjs4 kind= uid=c50303e8-9764-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:32:49.226759Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmismjs4 kind= uid=c50303e8-9764-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-rz8nl Pod phase: Running level=info timestamp=2018-08-03T21:07:09.764641Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-9hbwq Pod phase: Running level=error timestamp=2018-08-03T21:13:45.745544Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:15:22.914995Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:17:21.569625Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:18:55.415680Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:20:15.862851Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" 
msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:21:37.009087Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:23:25.980891Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:24:40.696714Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:26:21.861675Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:28:24.031342Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:29:45.226800Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:31:39.489640Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:33:41.825576Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:35:32.027450Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:36:34.131746Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-handler-xpzr8 Pod phase: Running level=error timestamp=2018-08-03T21:13:49.203513Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:15:26.336176Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:17:24.942285Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:18:58.742262Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:20:19.147972Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:21:40.307870Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" 
msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:23:29.235534Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:24:43.914030Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:26:25.058957Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:28:27.195139Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:29:48.340408Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:31:42.547943Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:33:44.909076Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:35:35.120313Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:36:37.204911Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-launcher-testvmismjs4-ql2xb Pod phase: Pending • Failure [241.663 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to start a vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242 Timed out after 120.014s. 
Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034 ------------------------------ STEP: Starting the vmi via kubectl command level=info timestamp=2018-08-03T21:32:50.090311Z pos=utils.go:244 component=tests namespace=kubevirt-test-default name=testvmismjs4 kind=VirtualMachineInstance uid=c50303e8-9764-11e8-bce6-525500d15501 msg="Created virtual machine pod virt-launcher-testvmismjs4-ql2xb" Pod name: disks-images-provider-7spwb Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-nr6h4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-7q9xv Pod phase: Running 2018/08/03 21:38:27 http: TLS handshake error from 10.244.0.1:55276: EOF 2018/08/03 21:38:37 http: TLS handshake error from 10.244.0.1:55336: EOF 2018/08/03 21:38:47 http: TLS handshake error from 10.244.0.1:55396: EOF 2018/08/03 21:38:57 http: TLS handshake error from 10.244.0.1:55456: EOF 2018/08/03 21:39:07 http: TLS handshake error from 10.244.0.1:55516: EOF 2018/08/03 21:39:17 http: TLS handshake error from 10.244.0.1:55576: EOF 2018/08/03 21:39:27 http: TLS handshake error from 10.244.0.1:55636: EOF 2018/08/03 21:39:37 http: TLS handshake error from 10.244.0.1:55696: EOF 2018/08/03 21:39:47 http: TLS handshake error from 10.244.0.1:55756: EOF 2018/08/03 21:39:57 http: TLS handshake error from 10.244.0.1:55816: EOF 2018/08/03 21:40:07 http: TLS handshake error from 10.244.0.1:55876: EOF 2018/08/03 21:40:17 http: TLS handshake error from 10.244.0.1:55936: EOF 2018/08/03 21:40:27 http: TLS handshake error from 10.244.0.1:55996: EOF 2018/08/03 21:40:37 http: TLS handshake error from 10.244.0.1:56056: EOF 2018/08/03 21:40:47 http: TLS handshake error from 10.244.0.1:56116: EOF Pod name: virt-api-bcc6b587d-p7sqd Pod phase: Running level=info timestamp=2018-08-03T21:40:03.489864Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-03T21:40:03.956951Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:40:07 http: TLS handshake error from 10.244.1.1:60170: EOF level=info timestamp=2018-08-03T21:40:12.748260Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:40:17 http: TLS handshake error from 10.244.1.1:60176: EOF level=info timestamp=2018-08-03T21:40:21.199575Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:40:27 http: TLS handshake error from 10.244.1.1:60182: EOF level=info timestamp=2018-08-03T21:40:31.065506Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-03T21:40:33.447367Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-03T21:40:34.022106Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET 
url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:40:37 http: TLS handshake error from 10.244.1.1:60188: EOF level=info timestamp=2018-08-03T21:40:41.267761Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-03T21:40:41.270996Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-03T21:40:42.836869Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/03 21:40:47 http: TLS handshake error from 10.244.1.1:60194: EOF Pod name: virt-controller-67dcdd8464-9phcp Pod phase: Running level=info timestamp=2018-08-03T21:14:46.080893Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirhjvc kind= uid=3f695be2-9762-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:14:46.196454Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirhjvc\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirhjvc" level=info timestamp=2018-08-03T21:20:46.904898Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminlz2l kind= uid=167a27ae-9763-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:20:46.905626Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminlz2l kind= uid=167a27ae-9763-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:20:46.995061Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:20:47.023605Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:20:47.043840Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:26:47.353148Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminlz2l\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvminlz2l, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: 
UID in precondition: 167a27ae-9763-11e8-bce6-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminlz2l" level=info timestamp=2018-08-03T21:26:47.527765Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqpdfw kind= uid=ed6cc134-9763-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:26:47.528965Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqpdfw kind= uid=ed6cc134-9763-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:32:49.225385Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmismjs4 kind= uid=c50303e8-9764-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:32:49.226759Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmismjs4 kind= uid=c50303e8-9764-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-03T21:36:49.827529Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmismjs4\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmismjs4, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c50303e8-9764-11e8-bce6-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmismjs4" level=info timestamp=2018-08-03T21:36:50.619530Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv5p6j kind= uid=54e44f4f-9765-11e8-bce6-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-03T21:36:50.620003Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv5p6j kind= uid=54e44f4f-9765-11e8-bce6-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-rz8nl Pod phase: Running level=info timestamp=2018-08-03T21:07:09.764641Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-9hbwq Pod phase: Running level=error timestamp=2018-08-03T21:17:21.569625Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:18:55.415680Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:20:15.862851Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:21:37.009087Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:23:25.980891Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" 
msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:24:40.696714Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:26:21.861675Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:28:24.031342Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:29:45.226800Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:31:39.489640Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:33:41.825576Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:35:32.027450Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:36:34.131746Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:38:17.933316Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:40:28.221760Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-handler-xpzr8 Pod phase: Running level=error timestamp=2018-08-03T21:17:24.942285Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:18:58.742262Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:20:19.147972Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:21:40.307870Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:23:29.235534Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:24:43.914030Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" 
msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:26:25.058957Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:28:27.195139Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:29:48.340408Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:31:42.547943Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:33:44.909076Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:35:35.120313Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:36:37.204911Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:38:20.955412Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" level=error timestamp=2018-08-03T21:40:31.199882Z pos=health.go:55 component=virt-handler reason="tun device does not show up in /proc/misc, is the module loaded?" msg="Check for mandatory device /dev/net/tun failed" Pod name: virt-launcher-testvmiv5p6j-d2dh7 Pod phase: Pending • Failure [241.409 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to stop a vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250 Timed out after 120.011s. Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034 ------------------------------ STEP: Starting the vmi via kubectl command level=info timestamp=2018-08-03T21:36:51.429168Z pos=utils.go:244 component=tests namespace=kubevirt-test-default name=testvmiv5p6j kind=VirtualMachineInstance uid=54e44f4f-9765-11e8-bce6-525500d15501 msg="Created virtual machine pod virt-launcher-testvmiv5p6j-d2dh7" SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS Waiting for namespace kubevirt-test-default to be removed, this can take a while ... Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ... 
Summarizing 6 Failures:

[Fail] Windows VirtualMachineInstance [It] should succeed to start a vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034

[Fail] Windows VirtualMachineInstance [It] should succeed to stop a running vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034

[Fail] Windows VirtualMachineInstance with winrm connection [BeforeEach] should have correct UUID
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034

[Fail] Windows VirtualMachineInstance with winrm connection [BeforeEach] should have pod IP
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034

[Fail] Windows VirtualMachineInstance with kubectl command [It] should succeed to start a vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034

[Fail] Windows VirtualMachineInstance with kubectl command [It] should succeed to stop a vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1034

Ran 6 of 148 Specs in 1933.271 seconds
FAIL! -- 0 Passed | 6 Failed | 0 Pending | 142 Skipped
--- FAIL: TestTests (1933.29s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
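All six failures time out in tests/utils.go:1034 waiting for the VMI to reach the Running phase, so the most useful artifacts for triage are the VMI status and the scheduling events of the Pending virt-launcher pods. A hedged sketch, assuming the test namespace still exists (it is deleted at the end of the run), that KubeVirt components are deployed to kube-system, and that virt-handler pods carry the usual kubevirt.io=virt-handler label:

# Why did the VMIs never reach Running?
kubectl -n kubevirt-test-default get vmis
kubectl -n kubevirt-test-default describe pod virt-launcher-testvmiv5p6j-d2dh7

# Cross-check the node-side component that reported the /dev/net/tun failures.
kubectl -n kube-system logs -l kubevirt.io=virt-handler --tail=50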