+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ [[ windows2016-release =~ openshift-.* ]]
+ [[ windows2016-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/07/27 13:53:08 Waiting for host: 192.168.66.101:22
2018/07/27 13:53:11 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/27 13:53:19 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/27 13:53:24 Connected to tcp://192.168.66.101:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0727 13:53:24.821720 1255 feature_gate.go:230] feature gates: &{map[]}
I0727 13:53:24.905714 1255 kernel_validator.go:81] Validating kernel version
I0727 13:53:24.905926 1255 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 52.007485 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:cab74240838f0674bb32fe8718484ad73c65d543428e9519927b0669402aedf4 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node/node01 untainted + kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml storageclass.storage.k8s.io/local created configmap/local-storage-config created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created serviceaccount/local-storage-admin created daemonset.extensions/local-volume-provisioner created 2018/07/27 13:54:34 Waiting for host: 192.168.66.102:22 2018/07/27 13:54:37 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/07/27 13:54:45 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/07/27 13:54:50 Connected to tcp://192.168.66.102:22 ++ systemctl status docker ++ grep active ++ wc -l + [[ 1 -eq 0 ]] + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. 
Provide the missing builtin kernel ipvs support I0727 13:54:51.004620 1260 kernel_validator.go:81] Validating kernel version I0727 13:54:51.005165 1260 kernel_validator.go:96] Validating kernel config [discovery] Trying to connect to API Server "192.168.66.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [preflight] Activating the kubelet service [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap... [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 38739968 kubectl Sending file modes: C0600 5454 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. + set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 51s v1.11.0 node02 Ready 21s v1.11.0 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep NotReady + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 52s v1.11.0 node02 Ready 22s v1.11.0 + make cluster-sync ./cluster/build.sh Building ... 
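For reference, the node readiness check traced above reduces to the following shape; this is a paraphrased sketch rather than the verbatim contents of the CI scripts, and the retry loop and 5-second interval are assumptions:

    # Wait until no node reports NotReady, then print the node table.
    # cluster/kubectl.sh wraps kubectl with the provider's kubeconfig.
    set +e
    while cluster/kubectl.sh get nodes --no-headers | grep -q NotReady; do
        echo 'Waiting for nodes to become Ready ...'
        sleep 5
    done
    set -e
    echo 'Nodes are ready:'
    cluster/kubectl.sh get nodes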
Untagged: localhost:32935/kubevirt/virt-controller:devel Untagged: localhost:32935/kubevirt/virt-controller@sha256:287f65f2ef3d6ce39c0a0525da67ca5555896dbd10ec62cf1542b2397f60320d Deleted: sha256:c5b054c48081efd9737941c9a26db6b5ec9675fe9bc4abdedbd86a1ebbd06dac Deleted: sha256:19c4e2a938491df9f2ced32bd04e6f0cc204de6f0a14f01c59c9dd75e4798fb8 Deleted: sha256:9603c7bd05e92e20ffe3ce09c624326ff2e808e343fa738f8871f06b96ca08a8 Deleted: sha256:6ac8a39a0a7b9785909160952ec60fb37324fd2a6554764580513303f39560a8 Untagged: localhost:32935/kubevirt/virt-launcher:devel Untagged: localhost:32935/kubevirt/virt-launcher@sha256:94a12847db9f4d93c14815788dff512de2f6dcbdc2ed4961e6b953712be66e51 Deleted: sha256:c078acb51aa3fd2dd0cf722c8b1c9094cad7ac4e9a00d6872c238cd3a15ce5d6 Deleted: sha256:3353de0677ce519d99cebc6bedd434305cbc0b143e8734408d5da9b78ac5bb3e Deleted: sha256:2383e62de9289dea88051dc67b484ddd5a4009ebb142cdd260565bb5ab6d209f Deleted: sha256:0a55cb40743f2dc3fdfd85679430867d341ba549b6e17f81ac730c9fa825c480 Deleted: sha256:52868ab1fa457e82afa565bfe308f257b8e7eee57fc56a290cabdce1dd058813 Deleted: sha256:140a5f14be2e8edd6e712e82198e1ee081add8cd1b154258019384bd580d088a Deleted: sha256:6084d30503ee3170409556b26d64aa9ae852e4161eb1b6c0377c54a325b7ad4f Deleted: sha256:cf226161f96c19720c7a1158456856790daf501c34b7fb41433c52105172b9e8 Deleted: sha256:f763f4cbcb4e3365b73b706dcd4f8ad08954c0ef59b36446faed9915cb8fb166 Deleted: sha256:a1db4742cb951633f5cbe672ef0178a00517713659d71f526034c9980bd30870 Deleted: sha256:41538416a0160c9a791540dd6f956af1bf8e68da2240438b2c52b75f4731f846 Deleted: sha256:c172a7aad05f54fb6241d411c2c63bbf0bebf43660b0d0f69f82ff9053c2a8a8 Untagged: localhost:32935/kubevirt/virt-handler:devel Untagged: localhost:32935/kubevirt/virt-handler@sha256:6d44b12fe2f7bbc1f5d8337ffce49fb2b9b66b45dd2807e8030a0523d87759a6 Deleted: sha256:c519eff54452ba090a2012f617b50e56dcc122acae3664e05d792386716bf527 Deleted: sha256:3fc6aa4d97feaedd591a0f2044fed193102086e116e26c1c4de0ed1cb843bbe8 Deleted: sha256:194e9667e7e5a3caf5ee357fd9369ecb97c5624de1db371a66b50713e0354307 Deleted: sha256:b3b788515c34120333da497063bd12fa0eb5a4d4c903d4f168e8b857d3a8ea94 Untagged: localhost:32935/kubevirt/virt-api:devel Untagged: localhost:32935/kubevirt/virt-api@sha256:1f65fedd80621e4e0eca297352dddb8610b95d74bea7b4d210254f96fa042942 Deleted: sha256:0c3b252200d8d9e6b35f498bebc86af200e0e4f203d63e6087e95656c77db34d Deleted: sha256:7fbc06ec724eedd9df9430cc6ee9c3d47f1ff731d9e0921a82fd6a8c2c75df20 Deleted: sha256:b072cdff0d9b2e21082862387517443aee02d40849a5ef6bf85d82bcaaedefbf Deleted: sha256:2b0d7136985a28fc2381968a2ea48905fa9a74cd9d9070210cecef026d6357b3 Untagged: localhost:32935/kubevirt/subresource-access-test:devel Untagged: localhost:32935/kubevirt/subresource-access-test@sha256:b09498c902ae1ae129ad4374ead6333d6a67fdba287c23ecff72b2bc791e118a Deleted: sha256:723d1945d8d3c94182c57132b09d752c2780c69ffd1b87f0742dfeeb0b6d88c2 Deleted: sha256:28d70ada9fb42c7739296d894a6fe283c60a763bf6df6c584bd93b4c5f18e265 Deleted: sha256:7b806ade6736fa9dcabe2591b81512c4686fd756e008e741c16b3f8f71bb8fe3 Deleted: sha256:16f8ed0e52e1e0dc0b0bfe0d641445e396b8a852484c9001d3ce34c885e20073 Untagged: localhost:32935/kubevirt/example-hook-sidecar:devel Untagged: localhost:32935/kubevirt/example-hook-sidecar@sha256:2d527f62dd3af91d6ccabd6d95ca43e879445a1e1df9725db7da9bc272f49757 Deleted: sha256:84d47f2e3336cbb552607b587725035a67fa84f445f0d0816540bea73d9006b3 Deleted: sha256:93e3da5385c20e3d954c5c7c55c40dd5abbf1f015141bfff7501bcf9ff4edba6 Deleted: 
sha256:9b8dd15d78bd77a71d8189c306f5e99e8e8bd4f2748a4113ddc60f32963f8ccd Deleted: sha256:6c0bdcc263bc5d8b433df25155155448b1cd739ef8b4922c2ec83bfab3d29758 sha256:b69a3f94b2043cd36cc41eb5d9446480e0a640962e468ab72c3cc51f2b89386a go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:b69a3f94b2043cd36cc41eb5d9446480e0a640962e468ab72c3cc51f2b89386a go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 40.39 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 84570f0bf244 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> 4b8efcbf3461 Step 5/8 : USER 1001 ---> Using cache ---> c49257f2ff48 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> 30659ed93276 Removing intermediate container a9ba9ab65fe2 Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in c9d8753b58e2 ---> dec300d148f4 Removing intermediate container c9d8753b58e2 Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "virt-controller" '' ---> Running in 34661bd48c45 ---> 76b3d27dbcb8 Removing intermediate container 34661bd48c45 Successfully built 76b3d27dbcb8 Sending build context to Docker daemon 43.3 MB Step 1/10 : FROM kubevirt/libvirt:4.2.0 ---> 5f0bfe81a3e0 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> c1e65e6c8241 Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 4c20d196c128 Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> eaaa66e8ea62 Removing intermediate container 1d3f2c0dccc5 Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> d89965cfadb7 Removing intermediate container 2d12f6c09404 Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in c8f36a68ef68  ---> ca2a8141cb32 Removing intermediate container c8f36a68ef68 Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in 1afc618f92a1  ---> 85307c13b4af Removing intermediate container 1afc618f92a1 Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> 7284d0a5b22f Removing intermediate container 52eb6ff380ba Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in 4b5efe8fcef8 ---> aafba6f6d841 Removing intermediate container 4b5efe8fcef8 Step 10/10 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "virt-launcher" '' ---> Running in b1989e4c8cec ---> d4f34dcbce55 Removing intermediate container b1989e4c8cec Successfully built d4f34dcbce55 Sending build context to Docker daemon 41.67 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> e3ad5501af58 Removing intermediate container cc98b9573dd9 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 336ab58b85f3 
---> 6472ffffbf31 Removing intermediate container 336ab58b85f3 Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "virt-handler" '' ---> Running in 1009f0a5c8a8 ---> becc4b7ecdfc Removing intermediate container 1009f0a5c8a8 Successfully built becc4b7ecdfc Sending build context to Docker daemon 38.81 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 6f2134b876af Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> d5ef0239bf68 Step 5/8 : USER 1001 ---> Using cache ---> 233000b2d9b5 Step 6/8 : COPY virt-api /usr/bin/virt-api ---> d69eeb8984b8 Removing intermediate container 511c4edb11d7 Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in 4547081610e4 ---> 816c5185fdcb Removing intermediate container 4547081610e4 Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "virt-api" '' ---> Running in 5c53ff41eff5 ---> a5221411eb65 Removing intermediate container 5c53ff41eff5 Successfully built a5221411eb65 Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/7 : ENV container docker ---> Using cache ---> 3fe7db912524 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 06d762a67408 Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> 3876d185cf84 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> 1fb50ce9b78f Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-windows2016-release2" '' ---> Using cache ---> 1b3b27237ad4 Successfully built 1b3b27237ad4 Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/5 : ENV container docker ---> Using cache ---> 3fe7db912524 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 6bc4f549313f Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "vm-killer" '' ---> Using cache ---> 4d3c35709578 Successfully built 4d3c35709578 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 68f33cf86aab Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 9ef1c0ce5d24 Step 3/7 : ENV container docker ---> Using cache ---> 9ad55e41ed61 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 17a81fda7c2b Step 5/7 : ADD entry-point.sh / ---> Using cache ---> 681d01e165e6 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> a79815fe82d9 Step 7/7 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "registry-disk-v1alpha" '' ---> Using cache ---> 01c4b8a10474 Successfully built 01c4b8a10474 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33074/kubevirt/registry-disk-v1alpha:devel ---> 01c4b8a10474 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 05aed96d86e5 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 0c789bc44ebe Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release2" '' ---> Using 
cache ---> b5af01da1cf6 Successfully built b5af01da1cf6 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33074/kubevirt/registry-disk-v1alpha:devel ---> 01c4b8a10474 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d1691b3c6397 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> a867409b41c7 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release2" '' ---> Using cache ---> a58eed679ce2 Successfully built a58eed679ce2 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33074/kubevirt/registry-disk-v1alpha:devel ---> 01c4b8a10474 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d1691b3c6397 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> ffa69f199094 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release2" '' ---> Using cache ---> 19afad248297 Successfully built 19afad248297 Sending build context to Docker daemon 35.59 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> deebe9dc06da Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 4094ce77e412 Step 5/8 : USER 1001 ---> Using cache ---> ba694520e9a4 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> ad6f0470dbd8 Removing intermediate container 073cc10e7a34 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in f3ebd1cb2505 ---> 67a37f9f309d Removing intermediate container f3ebd1cb2505 Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "subresource-access-test" '' ---> Running in 4e43a558a802 ---> 984c987577d1 Removing intermediate container 4e43a558a802 Successfully built 984c987577d1 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3265a3c6f899 Step 3/9 : ENV container docker ---> Using cache ---> 3fe7db912524 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> e0cf52293e57 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 8c031086e8cb Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 0f6dd31de4d3 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 6a702eb79a95 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> bed79012c9f3 Step 9/9 : LABEL "kubevirt-functional-tests-windows2016-release2" '' "winrmcli" '' ---> Using cache ---> 307d3ac58d04 Successfully built 307d3ac58d04 Sending build context to Docker daemon 36.79 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> cc296a71da13 Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> aaebebb5c895 Removing intermediate container 893aaafeeb07 Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in 495262ac8e87 ---> 7026d96453fb Removing intermediate container 495262ac8e87 Step 5/5 : LABEL "example-hook-sidecar" '' 
"kubevirt-functional-tests-windows2016-release2" '' ---> Running in d058c5aa34ce ---> 4dcc0ceb4bd3 Removing intermediate container d058c5aa34ce Successfully built 4dcc0ceb4bd3 hack/build-docker.sh push The push refers to a repository [localhost:33074/kubevirt/virt-controller] 3f314d796df0: Preparing 915a0c3e3f5f: Preparing 891e1e4ef82a: Preparing 915a0c3e3f5f: Pushed 3f314d796df0: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:0ab8354fca1a430474df9104d742bbbcb5623a9ebbcecc4304500efc1bbc43fa size: 949 The push refers to a repository [localhost:33074/kubevirt/virt-launcher] 7d1e6b7efadf: Preparing 648563d944f3: Preparing 879d29e2de50: Preparing 6ed8f292a513: Preparing d739759a9773: Preparing 5379fb5d8cce: Preparing da38cf808aa5: Preparing b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing 5eefb9960a36: Preparing 891e1e4ef82a: Preparing 186d8b3e4fd8: Waiting 5379fb5d8cce: Waiting 5eefb9960a36: Waiting da38cf808aa5: Waiting b83399358a92: Waiting 891e1e4ef82a: Waiting fa6154170bf5: Waiting 648563d944f3: Pushed 6ed8f292a513: Pushed 7d1e6b7efadf: Pushed b83399358a92: Pushed da38cf808aa5: Pushed fa6154170bf5: Pushed 186d8b3e4fd8: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller 879d29e2de50: Pushed 5379fb5d8cce: Pushed d739759a9773: Pushed 5eefb9960a36: Pushed devel: digest: sha256:1b00d750abe699be99eef5733fcd74d17c5f0ebaaaa7197a75ad36d2670c61f7 size: 2828 The push refers to a repository [localhost:33074/kubevirt/virt-handler] a10e98bc8c3a: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher a10e98bc8c3a: Pushed devel: digest: sha256:ed1239e271428e1ca2a27a9e092dbac2421a3727f7fba424744f0ae5f362d287 size: 741 The push refers to a repository [localhost:33074/kubevirt/virt-api] 60952b42d88d: Preparing 7cc07c574d2a: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler 7cc07c574d2a: Pushed 60952b42d88d: Pushed devel: digest: sha256:ba4a6fea1b034107f5ad5e9d728f8f67aa6a958d13fc22375fafd1bcf40378d4 size: 948 The push refers to a repository [localhost:33074/kubevirt/disks-images-provider] 1548fa7b1c9e: Preparing a7621d2cf364: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 1548fa7b1c9e: Pushed a7621d2cf364: Pushed devel: digest: sha256:d1e04ccb207fce5ec5cf2070e46f5fcd791bf5d9940c234d737d537697e98e11 size: 948 The push refers to a repository [localhost:33074/kubevirt/vm-killer] 3c31f9f8d755: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider 3c31f9f8d755: Pushed devel: digest: sha256:23514e9bb58b187085ff3d46c138d84c96d483023102f31cb8e89f022cae8d29 size: 740 The push refers to a repository [localhost:33074/kubevirt/registry-disk-v1alpha] c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing c66b9a220e25: Pushed 4662bbc21c2d: Pushed 25edbec0eaea: Pushed devel: digest: sha256:65fc69563851d1c5f6bcbf2616441834ad1f294758a1c63c4a033987c24f6921 size: 948 The push refers to a repository [localhost:33074/kubevirt/cirros-registry-disk-demo] ff776dc1f8e1: Preparing c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing 25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha c66b9a220e25: Mounted from kubevirt/registry-disk-v1alpha 4662bbc21c2d: Mounted from kubevirt/registry-disk-v1alpha ff776dc1f8e1: Pushed devel: digest: sha256:35ac14a90415143af3c8a766a4080b3952c2254268bda99241b333a522d21ec9 size: 1160 The push refers to a repository [localhost:33074/kubevirt/fedora-cloud-registry-disk-demo] 21e772fe647f: Preparing 
c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing c66b9a220e25: Mounted from kubevirt/cirros-registry-disk-demo 4662bbc21c2d: Mounted from kubevirt/cirros-registry-disk-demo 25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo 21e772fe647f: Pushed devel: digest: sha256:2af5fdf2c2617af7e6dac2d02e473b952d600701e6d11cf5f0ce012d6ad4fb46 size: 1161 The push refers to a repository [localhost:33074/kubevirt/alpine-registry-disk-demo] ec917ce1d686: Preparing c66b9a220e25: Preparing 4662bbc21c2d: Preparing 25edbec0eaea: Preparing c66b9a220e25: Mounted from kubevirt/fedora-cloud-registry-disk-demo 4662bbc21c2d: Mounted from kubevirt/fedora-cloud-registry-disk-demo 25edbec0eaea: Mounted from kubevirt/fedora-cloud-registry-disk-demo ec917ce1d686: Pushed devel: digest: sha256:f8536c7203b443d52b11af17660023492feb927b79874afeb6e761285f5be6e8 size: 1160 The push refers to a repository [localhost:33074/kubevirt/subresource-access-test] 1b7f86b9502b: Preparing 7e69243e781e: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer 7e69243e781e: Pushed 1b7f86b9502b: Pushed devel: digest: sha256:247e79f99057f1cbc63d3d61da41eaa0700577ec4c81036742f397d405706adb size: 948 The push refers to a repository [localhost:33074/kubevirt/winrmcli] a117c61a5658: Preparing c9df4405017d: Preparing 99bb32247f65: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test a117c61a5658: Pushed 99bb32247f65: Pushed c9df4405017d: Pushed devel: digest: sha256:af778f3f0966af252e608ac96b334411df70554eaf54bbdae2099df9723927f8 size: 1165 The push refers to a repository [localhost:33074/kubevirt/example-hook-sidecar] 300b037ff8c3: Preparing 39bae602f753: Preparing 300b037ff8c3: Pushed 39bae602f753: Pushed devel: digest: sha256:43061e593dc7a9bee7da821b28c77242e9d3cbee7236b00ef325f2441688aca6 size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-windows2016-release ']' ++ provider_prefix=kubevirt-functional-tests-windows2016-release2 ++ job_prefix=kubevirt-functional-tests-windows2016-release2 +++ kubevirt_version +++ '[' -n '' ']' +++ 
'[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-148-g69f12fe ++ KUBEVIRT_VERSION=v0.7.0-148-g69f12fe + source cluster/k8s-1.11.0/provider.sh ++ set -e ++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.11.0.sh ++ source hack/config-provider-k8s-1.11.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl +++ docker_prefix=localhost:33074/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
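The hack/config.sh sourcing traced above layers a provider-specific override on top of the defaults. In effect (variable values copied from the trace; the file contents are paraphrased, not quoted from the repository):

    # hack/config-default.sh -- defaults
    docker_prefix=kubevirt
    docker_tag=latest
    master_ip=192.168.200.2
    network_provider=flannel
    namespace=kube-system

    # hack/config-provider-k8s-1.11.0.sh -- overrides for the ephemeral provider
    master_ip=127.0.0.1
    docker_tag=devel
    docker_prefix=localhost:33074/kubevirt
    manifest_docker_prefix=registry:5000/kubevirt
    # kubeconfig and kubectl point at cluster/k8s-1.11.0/.kubeconfig and
    # cluster/k8s-1.11.0/.kubectl under the checkout.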
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pods -l kubevirt.io No resources found + 
_kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ cluster/k8s-1.11.0/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io No resources found. Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export 
KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ wc -l ++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ cluster/k8s-1.11.0/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io No resources found. 
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-windows2016-release ']' ++ provider_prefix=kubevirt-functional-tests-windows2016-release2 ++ job_prefix=kubevirt-functional-tests-windows2016-release2 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-148-g69f12fe ++ KUBEVIRT_VERSION=v0.7.0-148-g69f12fe + source cluster/k8s-1.11.0/provider.sh ++ set -e ++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.11.0.sh ++ source hack/config-provider-k8s-1.11.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig +++ 
kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl +++ docker_prefix=localhost:33074/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... + [[ -z windows2016-release ]] + [[ windows2016-release =~ .*-dev ]] + [[ windows2016-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io/kubevirt.io:admin created clusterrole.rbac.authorization.k8s.io/kubevirt.io:edit created clusterrole.rbac.authorization.k8s.io/kubevirt.io:view created serviceaccount/kubevirt-apiserver created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver-auth-delegator created rolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created role.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrole.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrole.rbac.authorization.k8s.io/kubevirt-controller created serviceaccount/kubevirt-controller created serviceaccount/kubevirt-privileged created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller-cluster-admin created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-privileged-cluster-admin created clusterrole.rbac.authorization.k8s.io/kubevirt.io:default created clusterrolebinding.rbac.authorization.k8s.io/kubevirt.io:default created service/virt-api created deployment.extensions/virt-api created deployment.extensions/virt-controller created daemonset.extensions/virt-handler created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstances.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancereplicasets.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancepresets.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachines.kubevirt.io created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim/disk-alpine created persistentvolume/host-path-disk-alpine created 
persistentvolumeclaim/disk-custom created persistentvolume/host-path-disk-custom created daemonset.extensions/disks-images-provider created serviceaccount/kubevirt-testing created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-testing-cluster-admin created + [[ k8s-1.11.0 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-bcc6b587d-4wwt4 0/1 ContainerCreating 0 3s virt-api-bcc6b587d-7lbfg 0/1 ContainerCreating 0 3s virt-controller-67dcdd8464-pbp9x 0/1 ContainerCreating 0 3s virt-controller-67dcdd8464-xxkgv 0/1 ContainerCreating 0 3s virt-handler-4wgkz 0/1 ContainerCreating 0 3s virt-handler-fjdzm 0/1 ContainerCreating 0 3s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + grep -v Running + cluster/kubectl.sh get pods -n kube-system --no-headers virt-api-bcc6b587d-4wwt4 0/1 ContainerCreating 0 4s virt-api-bcc6b587d-7lbfg 0/1 ContainerCreating 0 4s virt-controller-67dcdd8464-pbp9x 0/1 ContainerCreating 0 4s virt-controller-67dcdd8464-xxkgv 0/1 ContainerCreating 0 4s virt-handler-4wgkz 0/1 ContainerCreating 0 4s virt-handler-fjdzm 0/1 ContainerCreating 0 4s + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n false ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
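The wait that follows polls container readiness on a 300-second budget in 30-second steps. Schematically (a sketch assembled from the values visible in the trace; the exact loop body and timeout handling are assumptions):

    # Poll until no kube-system container reports ready=false.
    timeout=300
    sample=30
    current_time=0
    while cluster/kubectl.sh get pods -n kube-system \
            '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers \
            | grep -q false; do
        sleep "$sample"
        current_time=$((current_time + sample))
        if [ "$current_time" -gt "$timeout" ]; then
            echo 'Timed out waiting for KubeVirt containers to become ready'
            exit 1
        fi
    done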
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ grep false
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
false
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-8fvnp           1/1       Running   0          14m
coredns-78fcdf6894-sc55g           1/1       Running   0          14m
disks-images-provider-dwrwc        1/1       Running   0          1m
disks-images-provider-zrdsn        1/1       Running   0          1m
etcd-node01                        1/1       Running   0          14m
kube-apiserver-node01              1/1       Running   0          14m
kube-controller-manager-node01     1/1       Running   0          14m
kube-flannel-ds-68m67              1/1       Running   0          14m
kube-flannel-ds-fgwcw              1/1       Running   0          14m
kube-proxy-gz7jc                   1/1       Running   0          14m
kube-proxy-sn46c                   1/1       Running   0          14m
kube-scheduler-node01              1/1       Running   0          14m
virt-api-bcc6b587d-4wwt4           1/1       Running   1          1m
virt-api-bcc6b587d-7lbfg           1/1       Running   0          1m
virt-controller-67dcdd8464-pbp9x   1/1       Running   0          1m
virt-controller-67dcdd8464-xxkgv   1/1       Running   0          1m
virt-handler-4wgkz                 1/1       Running   0          1m
virt-handler-fjdzm                 1/1       Running   0          1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ cluster/kubectl.sh get pods -n default --no-headers
++ grep -v Running
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
NAME                             READY     STATUS    RESTARTS   AGE
local-volume-provisioner-4q42l   1/1       Running   0          14m
local-volume-provisioner-pg9zd   1/1       Running   0          14m
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml'
+ [[ windows2016-release =~ windows.* ]]
+ [[ -d /home/nfs/images/windows2016 ]]
+ kubectl create -f -
+ cluster/kubectl.sh create -f -
persistentvolume/disk-windows created
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml --ginkgo.focus=Windows'
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/junit.xml --ginkgo.focus=Windows'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:b69a3f94b2043cd36cc41eb5d9446480e0a640962e468ab72c3cc51f2b89386a
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1532700605 Will run 6 of 149 specs SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS Pod name: disks-images-provider-dwrwc Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-zrdsn Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-4wwt4 Pod phase: Running 2018/07/27 14:15:17 http: TLS handshake error from 10.244.1.1:53400: EOF level=info timestamp=2018-07-27T14:15:19.439586Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:15:20.550110Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:15:27 http: TLS handshake error from 10.244.1.1:53406: EOF level=info timestamp=2018-07-27T14:15:29.463377Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:15:33.902971Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-27T14:15:33.906723Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-27T14:15:35.462026Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:15:37 http: TLS handshake error from 10.244.1.1:53412: EOF 2018/07/27 14:15:47 http: TLS handshake error from 10.244.1.1:53418: EOF level=info timestamp=2018-07-27T14:15:49.497228Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:15:50.619779Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:15:57 http: TLS handshake error from 10.244.1.1:53424: EOF level=info timestamp=2018-07-27T14:15:59.545963Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:16:05.591646Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-api-bcc6b587d-7lbfg Pod phase: Running level=info timestamp=2018-07-27T14:14:25.902807Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-27T14:14:26.102223Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ 
proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:14:33 http: TLS handshake error from 10.244.0.1:53338: EOF 2018/07/27 14:14:43 http: TLS handshake error from 10.244.0.1:53398: EOF 2018/07/27 14:14:53 http: TLS handshake error from 10.244.0.1:53458: EOF level=info timestamp=2018-07-27T14:14:55.991022Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:15:03 http: TLS handshake error from 10.244.0.1:53518: EOF 2018/07/27 14:15:13 http: TLS handshake error from 10.244.0.1:53578: EOF 2018/07/27 14:15:23 http: TLS handshake error from 10.244.0.1:53638: EOF level=info timestamp=2018-07-27T14:15:26.042873Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:15:33 http: TLS handshake error from 10.244.0.1:53698: EOF 2018/07/27 14:15:43 http: TLS handshake error from 10.244.0.1:53758: EOF 2018/07/27 14:15:53 http: TLS handshake error from 10.244.0.1:53818: EOF level=info timestamp=2018-07-27T14:15:56.042301Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:16:03 http: TLS handshake error from 10.244.0.1:53878: EOF Pod name: virt-controller-67dcdd8464-pbp9x Pod phase: Running level=info timestamp=2018-07-27T14:08:30.245635Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmInformer" level=info timestamp=2018-07-27T14:08:30.245748Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer limitrangeInformer" level=info timestamp=2018-07-27T14:08:30.245778Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiInformer" level=info timestamp=2018-07-27T14:08:30.245797Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-07-27T14:08:30.249765Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-07-27T14:08:30.249890Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-07-27T14:08:30.250409Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-07-27T14:08:30.250448Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer configMapInformer" level=info timestamp=2018-07-27T14:08:30.250809Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-07-27T14:08:30.253146Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-07-27T14:08:30.253912Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-07-27T14:08:30.254355Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." level=info timestamp=2018-07-27T14:08:30.260222Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." 
level=info timestamp=2018-07-27T14:10:05.644862Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7lrc kind= uid=c2cb69cd-91a6-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:10:05.649313Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7lrc kind= uid=c2cb69cd-91a6-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-xxkgv Pod phase: Running level=info timestamp=2018-07-27T14:08:31.703591Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-4wgkz Pod phase: Running level=info timestamp=2018-07-27T14:08:33.437302Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-27T14:08:33.443518Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-27T14:08:33.444401Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-27T14:08:33.545114Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-27T14:08:33.561769Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" level=info timestamp=2018-07-27T14:08:33.564483Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" Pod name: virt-handler-fjdzm Pod phase: Running level=info timestamp=2018-07-27T14:08:40.851273Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-07-27T14:08:40.859029Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-27T14:08:40.860184Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-27T14:08:41.199801Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-27T14:08:44.777607Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-27T14:08:44.778906Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmib7lrc-s9bx9 Pod phase: Pending ------------------------------ • Failure [360.685 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to start a vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133 Timed out after 180.011s. 
Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070 ------------------------------ level=info timestamp=2018-07-27T14:10:06.756569Z pos=utils.go:245 component=tests msg="Created virtual machine pod virt-launcher-testvmib7lrc-s9bx9" Pod name: disks-images-provider-dwrwc Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-zrdsn Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-4wwt4 Pod phase: Running level=info timestamp=2018-07-27T14:21:00.567702Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:21:07.082091Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:21:07 http: TLS handshake error from 10.244.1.1:53610: EOF 2018/07/27 14:21:17 http: TLS handshake error from 10.244.1.1:53616: EOF level=info timestamp=2018-07-27T14:21:20.248315Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:21:21.327393Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:21:27 http: TLS handshake error from 10.244.1.1:53622: EOF level=info timestamp=2018-07-27T14:21:30.654045Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:21:37.246633Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:21:37 http: TLS handshake error from 10.244.1.1:53628: EOF 2018/07/27 14:21:47 http: TLS handshake error from 10.244.1.1:53634: EOF level=info timestamp=2018-07-27T14:21:50.332602Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:21:51.387532Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:21:57 http: TLS handshake error from 10.244.1.1:53640: EOF level=info timestamp=2018-07-27T14:22:00.769747Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-api-bcc6b587d-7lbfg Pod phase: Running 2018/07/27 14:20:23 http: TLS handshake error from 10.244.0.1:55442: EOF level=info timestamp=2018-07-27T14:20:26.074759Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:20:33 http: TLS handshake error from 10.244.0.1:55502: EOF 2018/07/27 14:20:43 http: 
TLS handshake error from 10.244.0.1:55562: EOF 2018/07/27 14:20:53 http: TLS handshake error from 10.244.0.1:55622: EOF level=info timestamp=2018-07-27T14:20:55.960085Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:21:03 http: TLS handshake error from 10.244.0.1:55682: EOF 2018/07/27 14:21:13 http: TLS handshake error from 10.244.0.1:55742: EOF 2018/07/27 14:21:23 http: TLS handshake error from 10.244.0.1:55802: EOF level=info timestamp=2018-07-27T14:21:26.070087Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:21:33 http: TLS handshake error from 10.244.0.1:55862: EOF 2018/07/27 14:21:43 http: TLS handshake error from 10.244.0.1:55922: EOF 2018/07/27 14:21:53 http: TLS handshake error from 10.244.0.1:55982: EOF level=info timestamp=2018-07-27T14:21:55.970858Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:22:03 http: TLS handshake error from 10.244.0.1:56042: EOF Pod name: virt-controller-67dcdd8464-pbp9x Pod phase: Running level=info timestamp=2018-07-27T14:08:30.245797Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-07-27T14:08:30.249765Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-07-27T14:08:30.249890Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-07-27T14:08:30.250409Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-07-27T14:08:30.250448Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer configMapInformer" level=info timestamp=2018-07-27T14:08:30.250809Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-07-27T14:08:30.253146Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-07-27T14:08:30.253912Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-07-27T14:08:30.254355Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." level=info timestamp=2018-07-27T14:08:30.260222Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." 
level=info timestamp=2018-07-27T14:10:05.644862Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7lrc kind= uid=c2cb69cd-91a6-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:10:05.649313Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7lrc kind= uid=c2cb69cd-91a6-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:16:06.457184Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi2n2g6 kind= uid=99e10136-91a7-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:16:06.457900Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi2n2g6 kind= uid=99e10136-91a7-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:16:06.517957Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi2n2g6\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi2n2g6" Pod name: virt-controller-67dcdd8464-xxkgv Pod phase: Running level=info timestamp=2018-07-27T14:08:31.703591Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-4wgkz Pod phase: Running level=info timestamp=2018-07-27T14:08:33.437302Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-27T14:08:33.443518Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-27T14:08:33.444401Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-27T14:08:33.545114Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-27T14:08:33.561769Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" level=info timestamp=2018-07-27T14:08:33.564483Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" Pod name: virt-handler-fjdzm Pod phase: Running level=info timestamp=2018-07-27T14:08:40.851273Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-07-27T14:08:40.859029Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-27T14:08:40.860184Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-27T14:08:41.199801Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-27T14:08:44.777607Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-27T14:08:44.778906Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmi2n2g6-cb2rx Pod phase: Pending • Failure [360.690 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to stop a running vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139 Timed out after 180.010s. 
Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070 ------------------------------ STEP: Starting the vmi level=info timestamp=2018-07-27T14:16:07.367545Z pos=utils.go:245 component=tests msg="Created virtual machine pod virt-launcher-testvmi2n2g6-cb2rx" Pod name: disks-images-provider-dwrwc Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-zrdsn Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-4wwt4 Pod phase: Running level=info timestamp=2018-07-27T14:27:08.739222Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:27:17 http: TLS handshake error from 10.244.1.1:53832: EOF level=info timestamp=2018-07-27T14:27:21.091678Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:27:22.201476Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:27:27 http: TLS handshake error from 10.244.1.1:53838: EOF level=info timestamp=2018-07-27T14:27:31.702806Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:27:33.360180Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-27T14:27:33.364170Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:27:37 http: TLS handshake error from 10.244.1.1:53844: EOF level=info timestamp=2018-07-27T14:27:38.868573Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:27:47 http: TLS handshake error from 10.244.1.1:53850: EOF level=info timestamp=2018-07-27T14:27:51.165597Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:27:52.281513Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:27:57 http: TLS handshake error from 10.244.1.1:53856: EOF level=info timestamp=2018-07-27T14:28:01.818497Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-api-bcc6b587d-7lbfg Pod phase: Running 2018/07/27 14:26:23 http: TLS handshake error from 10.244.0.1:57602: EOF level=info timestamp=2018-07-27T14:26:25.989551Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 
contentLength=19 2018/07/27 14:26:33 http: TLS handshake error from 10.244.0.1:57662: EOF 2018/07/27 14:26:43 http: TLS handshake error from 10.244.0.1:57722: EOF 2018/07/27 14:26:53 http: TLS handshake error from 10.244.0.1:57782: EOF level=info timestamp=2018-07-27T14:26:56.047561Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:27:03 http: TLS handshake error from 10.244.0.1:57842: EOF 2018/07/27 14:27:13 http: TLS handshake error from 10.244.0.1:57902: EOF 2018/07/27 14:27:23 http: TLS handshake error from 10.244.0.1:57962: EOF level=info timestamp=2018-07-27T14:27:26.046019Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:27:33 http: TLS handshake error from 10.244.0.1:58022: EOF 2018/07/27 14:27:43 http: TLS handshake error from 10.244.0.1:58082: EOF 2018/07/27 14:27:53 http: TLS handshake error from 10.244.0.1:58142: EOF level=info timestamp=2018-07-27T14:27:56.007230Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:28:03 http: TLS handshake error from 10.244.0.1:58202: EOF Pod name: virt-controller-67dcdd8464-pbp9x Pod phase: Running level=info timestamp=2018-07-27T14:08:30.250409Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-07-27T14:08:30.250448Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer configMapInformer" level=info timestamp=2018-07-27T14:08:30.250809Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-07-27T14:08:30.253146Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-07-27T14:08:30.253912Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-07-27T14:08:30.254355Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." level=info timestamp=2018-07-27T14:08:30.260222Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." 
level=info timestamp=2018-07-27T14:10:05.644862Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7lrc kind= uid=c2cb69cd-91a6-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:10:05.649313Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7lrc kind= uid=c2cb69cd-91a6-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:16:06.457184Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi2n2g6 kind= uid=99e10136-91a7-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:16:06.457900Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi2n2g6 kind= uid=99e10136-91a7-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:16:06.517957Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi2n2g6\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi2n2g6" level=info timestamp=2018-07-27T14:22:07.022082Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi2n2g6\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi2n2g6, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 99e10136-91a7-11e8-afd3-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi2n2g6" level=info timestamp=2018-07-27T14:22:07.285450Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6vl46 kind= uid=70f1aea5-91a8-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:22:07.285983Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6vl46 kind= uid=70f1aea5-91a8-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-xxkgv Pod phase: Running level=info timestamp=2018-07-27T14:08:31.703591Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-4wgkz Pod phase: Running level=info timestamp=2018-07-27T14:08:33.437302Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-27T14:08:33.443518Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." 
level=info timestamp=2018-07-27T14:08:33.444401Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-27T14:08:33.545114Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-27T14:08:33.561769Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" level=info timestamp=2018-07-27T14:08:33.564483Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" Pod name: virt-handler-fjdzm Pod phase: Running level=info timestamp=2018-07-27T14:08:40.851273Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-07-27T14:08:40.859029Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-27T14:08:40.860184Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-27T14:08:41.199801Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-27T14:08:44.777607Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-27T14:08:44.778906Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmi6vl46-58hq4 Pod phase: Pending • Failure in Spec Setup (BeforeEach) [360.784 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have correct UUID [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192 Timed out after 180.011s. Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070 ------------------------------ STEP: Creating winrm-cli pod for the future use STEP: Starting the windows VirtualMachineInstance level=info timestamp=2018-07-27T14:22:08.208496Z pos=utils.go:245 component=tests msg="Created virtual machine pod virt-launcher-testvmi6vl46-58hq4" Pod name: disks-images-provider-dwrwc Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-zrdsn Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-4wwt4 Pod phase: Running 2018/07/27 14:33:07 http: TLS handshake error from 10.244.1.1:54042: EOF level=info timestamp=2018-07-27T14:33:10.538527Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:33:17 http: TLS handshake error from 10.244.1.1:54048: EOF level=info timestamp=2018-07-27T14:33:21.916482Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:33:22.989245Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:33:27 http: TLS handshake error from 10.244.1.1:54054: EOF level=info timestamp=2018-07-27T14:33:32.905481Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 
statusCode=200 contentLength=136 2018/07/27 14:33:37 http: TLS handshake error from 10.244.1.1:54060: EOF level=info timestamp=2018-07-27T14:33:40.676969Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:33:47 http: TLS handshake error from 10.244.1.1:54066: EOF level=info timestamp=2018-07-27T14:33:51.969192Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:33:53.042376Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:33:57 http: TLS handshake error from 10.244.1.1:54072: EOF level=info timestamp=2018-07-27T14:34:02.995680Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:34:07 http: TLS handshake error from 10.244.1.1:54078: EOF Pod name: virt-api-bcc6b587d-7lbfg Pod phase: Running 2018/07/27 14:32:23 http: TLS handshake error from 10.244.0.1:59762: EOF level=info timestamp=2018-07-27T14:32:26.057556Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:32:33 http: TLS handshake error from 10.244.0.1:59822: EOF 2018/07/27 14:32:43 http: TLS handshake error from 10.244.0.1:59882: EOF 2018/07/27 14:32:53 http: TLS handshake error from 10.244.0.1:59942: EOF level=info timestamp=2018-07-27T14:32:56.010748Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:33:03 http: TLS handshake error from 10.244.0.1:60002: EOF 2018/07/27 14:33:13 http: TLS handshake error from 10.244.0.1:60062: EOF 2018/07/27 14:33:23 http: TLS handshake error from 10.244.0.1:60122: EOF level=info timestamp=2018-07-27T14:33:26.082853Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:33:33 http: TLS handshake error from 10.244.0.1:60182: EOF 2018/07/27 14:33:43 http: TLS handshake error from 10.244.0.1:60242: EOF 2018/07/27 14:33:53 http: TLS handshake error from 10.244.0.1:60302: EOF level=info timestamp=2018-07-27T14:33:56.095304Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:34:03 http: TLS handshake error from 10.244.0.1:60362: EOF Pod name: virt-controller-67dcdd8464-pbp9x Pod phase: Running level=info timestamp=2018-07-27T14:08:30.253146Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-07-27T14:08:30.253912Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-07-27T14:08:30.254355Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." level=info timestamp=2018-07-27T14:08:30.260222Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." 
level=info timestamp=2018-07-27T14:10:05.644862Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7lrc kind= uid=c2cb69cd-91a6-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:10:05.649313Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7lrc kind= uid=c2cb69cd-91a6-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:16:06.457184Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi2n2g6 kind= uid=99e10136-91a7-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:16:06.457900Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi2n2g6 kind= uid=99e10136-91a7-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:16:06.517957Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi2n2g6\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi2n2g6" level=info timestamp=2018-07-27T14:22:07.022082Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi2n2g6\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi2n2g6, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 99e10136-91a7-11e8-afd3-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi2n2g6" level=info timestamp=2018-07-27T14:22:07.285450Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6vl46 kind= uid=70f1aea5-91a8-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:22:07.285983Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6vl46 kind= uid=70f1aea5-91a8-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:28:07.838968Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi6vl46\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi6vl46, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 70f1aea5-91a8-11e8-afd3-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi6vl46" level=info timestamp=2018-07-27T14:28:08.047908Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwmcw5 kind= uid=47fbbab4-91a9-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:28:08.048352Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwmcw5 kind= uid=47fbbab4-91a9-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-xxkgv Pod phase: Running level=info 
timestamp=2018-07-27T14:08:31.703591Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-4wgkz Pod phase: Running level=info timestamp=2018-07-27T14:08:33.437302Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-27T14:08:33.443518Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-27T14:08:33.444401Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-27T14:08:33.545114Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-27T14:08:33.561769Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" level=info timestamp=2018-07-27T14:08:33.564483Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" Pod name: virt-handler-fjdzm Pod phase: Running level=info timestamp=2018-07-27T14:08:40.851273Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-07-27T14:08:40.859029Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-27T14:08:40.860184Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-27T14:08:41.199801Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-27T14:08:44.777607Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-27T14:08:44.778906Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmiwmcw5-sczm4 Pod phase: Pending • Failure in Spec Setup (BeforeEach) [360.748 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have pod IP [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208 Timed out after 180.011s. 
Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070 ------------------------------ STEP: Creating winrm-cli pod for the future use STEP: Starting the windows VirtualMachineInstance level=info timestamp=2018-07-27T14:28:08.882519Z pos=utils.go:245 component=tests msg="Created virtual machine pod virt-launcher-testvmiwmcw5-sczm4" Pod name: disks-images-provider-dwrwc Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-zrdsn Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-4wwt4 Pod phase: Running 2018/07/27 14:37:17 http: TLS handshake error from 10.244.1.1:54192: EOF level=info timestamp=2018-07-27T14:37:22.465977Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:37:23.528655Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:37:27 http: TLS handshake error from 10.244.1.1:54198: EOF level=info timestamp=2018-07-27T14:37:33.603908Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:37:33.659573Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-27T14:37:33.662727Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:37:37 http: TLS handshake error from 10.244.1.1:54204: EOF level=info timestamp=2018-07-27T14:37:41.936530Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:37:47 http: TLS handshake error from 10.244.1.1:54210: EOF level=info timestamp=2018-07-27T14:37:52.522078Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:37:53.626477Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:37:57 http: TLS handshake error from 10.244.1.1:54216: EOF level=info timestamp=2018-07-27T14:38:03.694776Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:38:07 http: TLS handshake error from 10.244.1.1:54222: EOF Pod name: virt-api-bcc6b587d-7lbfg Pod phase: Running 2018/07/27 14:36:23 http: TLS handshake error from 10.244.0.1:32970: EOF level=info timestamp=2018-07-27T14:36:25.997849Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:36:33 http: TLS handshake error from 10.244.0.1:33030: EOF 2018/07/27 
14:36:43 http: TLS handshake error from 10.244.0.1:33090: EOF 2018/07/27 14:36:53 http: TLS handshake error from 10.244.0.1:33150: EOF level=info timestamp=2018-07-27T14:36:55.992587Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:37:03 http: TLS handshake error from 10.244.0.1:33210: EOF 2018/07/27 14:37:13 http: TLS handshake error from 10.244.0.1:33270: EOF 2018/07/27 14:37:23 http: TLS handshake error from 10.244.0.1:33330: EOF level=info timestamp=2018-07-27T14:37:25.993575Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:37:33 http: TLS handshake error from 10.244.0.1:33390: EOF 2018/07/27 14:37:43 http: TLS handshake error from 10.244.0.1:33450: EOF 2018/07/27 14:37:53 http: TLS handshake error from 10.244.0.1:33510: EOF level=info timestamp=2018-07-27T14:37:56.077228Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 14:38:03 http: TLS handshake error from 10.244.0.1:33570: EOF Pod name: virt-controller-67dcdd8464-pbp9x Pod phase: Running level=info timestamp=2018-07-27T14:10:05.644862Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7lrc kind= uid=c2cb69cd-91a6-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:10:05.649313Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7lrc kind= uid=c2cb69cd-91a6-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:16:06.457184Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi2n2g6 kind= uid=99e10136-91a7-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:16:06.457900Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi2n2g6 kind= uid=99e10136-91a7-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:16:06.517957Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi2n2g6\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi2n2g6" level=info timestamp=2018-07-27T14:22:07.022082Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi2n2g6\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi2n2g6, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 99e10136-91a7-11e8-afd3-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi2n2g6" level=info timestamp=2018-07-27T14:22:07.285450Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6vl46 kind= uid=70f1aea5-91a8-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:22:07.285983Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default 
name=testvmi6vl46 kind= uid=70f1aea5-91a8-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:28:07.838968Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi6vl46\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi6vl46, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 70f1aea5-91a8-11e8-afd3-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi6vl46" level=info timestamp=2018-07-27T14:28:08.047908Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwmcw5 kind= uid=47fbbab4-91a9-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:28:08.048352Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwmcw5 kind= uid=47fbbab4-91a9-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:34:08.593079Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwmcw5\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiwmcw5, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 47fbbab4-91a9-11e8-afd3-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwmcw5" level=info timestamp=2018-07-27T14:34:09.732707Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4x62w kind= uid=1f8fa267-91aa-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T14:34:09.733227Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4x62w kind= uid=1f8fa267-91aa-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T14:34:10.016191Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi4x62w\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi4x62w" Pod name: virt-controller-67dcdd8464-xxkgv Pod phase: Running level=info timestamp=2018-07-27T14:08:31.703591Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-4wgkz Pod phase: Running level=info timestamp=2018-07-27T14:08:33.437302Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-27T14:08:33.443518Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." 
level=info timestamp=2018-07-27T14:08:33.444401Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-27T14:08:33.545114Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-27T14:08:33.561769Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" level=info timestamp=2018-07-27T14:08:33.564483Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" Pod name: virt-handler-fjdzm Pod phase: Running level=info timestamp=2018-07-27T14:08:40.851273Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-07-27T14:08:40.859029Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-27T14:08:40.860184Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-27T14:08:41.199801Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-27T14:08:44.777607Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-27T14:08:44.778906Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmi4x62w-r4fx7 Pod phase: Pending • Failure [241.685 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to start a vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242 Timed out after 120.015s. Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070 ------------------------------ STEP: Starting the vmi via kubectl command level=info timestamp=2018-07-27T14:34:10.779525Z pos=utils.go:245 component=tests msg="Created virtual machine pod virt-launcher-testvmi4x62w-r4fx7" Pod name: disks-images-provider-dwrwc Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-zrdsn Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-4wwt4 Pod phase: Running 2018/07/27 14:41:07 http: TLS handshake error from 10.244.1.1:54330: EOF level=info timestamp=2018-07-27T14:41:13.212348Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:41:17 http: TLS handshake error from 10.244.1.1:54336: EOF level=info timestamp=2018-07-27T14:41:22.990257Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T14:41:24.111477Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:41:27 http: TLS handshake error from 10.244.1.1:54342: EOF level=info timestamp=2018-07-27T14:41:34.314270Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 14:41:37 http: TLS handshake error from 
10.244.1.1:54348: EOF
level=info timestamp=2018-07-27T14:41:43.376686Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/27 14:41:47 http: TLS handshake error from 10.244.1.1:54354: EOF
level=info timestamp=2018-07-27T14:41:53.058327Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-27T14:41:54.171569Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/27 14:41:57 http: TLS handshake error from 10.244.1.1:54360: EOF
level=info timestamp=2018-07-27T14:42:04.394039Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/27 14:42:07 http: TLS handshake error from 10.244.1.1:54366: EOF

Pod name: virt-api-bcc6b587d-7lbfg
Pod phase: Running
2018/07/27 14:40:23 http: TLS handshake error from 10.244.0.1:34410: EOF
level=info timestamp=2018-07-27T14:40:26.071024Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/27 14:40:33 http: TLS handshake error from 10.244.0.1:34470: EOF
2018/07/27 14:40:43 http: TLS handshake error from 10.244.0.1:34530: EOF
2018/07/27 14:40:53 http: TLS handshake error from 10.244.0.1:34590: EOF
level=info timestamp=2018-07-27T14:40:56.008985Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/27 14:41:03 http: TLS handshake error from 10.244.0.1:34650: EOF
2018/07/27 14:41:13 http: TLS handshake error from 10.244.0.1:34710: EOF
2018/07/27 14:41:23 http: TLS handshake error from 10.244.0.1:34770: EOF
level=info timestamp=2018-07-27T14:41:26.027748Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/27 14:41:33 http: TLS handshake error from 10.244.0.1:34830: EOF
2018/07/27 14:41:43 http: TLS handshake error from 10.244.0.1:34890: EOF
2018/07/27 14:41:53 http: TLS handshake error from 10.244.0.1:34950: EOF
level=info timestamp=2018-07-27T14:41:55.997233Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/27 14:42:03 http: TLS handshake error from 10.244.0.1:35010: EOF

Pod name: virt-controller-67dcdd8464-pbp9x
Pod phase: Running
level=info timestamp=2018-07-27T14:22:07.022082Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi2n2g6\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi2n2g6, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 99e10136-91a7-11e8-afd3-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi2n2g6"
level=info timestamp=2018-07-27T14:22:07.285450Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6vl46 kind= uid=70f1aea5-91a8-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-27T14:22:07.285983Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6vl46 kind= uid=70f1aea5-91a8-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-27T14:28:07.838968Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi6vl46\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi6vl46, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 70f1aea5-91a8-11e8-afd3-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi6vl46"
level=info timestamp=2018-07-27T14:28:08.047908Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwmcw5 kind= uid=47fbbab4-91a9-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-27T14:28:08.048352Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwmcw5 kind= uid=47fbbab4-91a9-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-27T14:34:08.593079Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwmcw5\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiwmcw5, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 47fbbab4-91a9-11e8-afd3-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwmcw5"
level=info timestamp=2018-07-27T14:34:09.732707Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4x62w kind= uid=1f8fa267-91aa-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-27T14:34:09.733227Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4x62w kind= uid=1f8fa267-91aa-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-27T14:34:10.016191Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi4x62w\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi4x62w"
level=info timestamp=2018-07-27T14:38:10.270022Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi4x62w\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi4x62w, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 1f8fa267-91aa-11e8-afd3-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi4x62w"
level=info timestamp=2018-07-27T14:38:11.058761Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikss2s kind= uid=af681faa-91aa-11e8-afd3-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-27T14:38:11.059282Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikss2s kind= uid=af681faa-91aa-11e8-afd3-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-27T14:38:11.272255Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmikss2s\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmikss2s"
level=info timestamp=2018-07-27T14:38:11.322984Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmikss2s\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmikss2s"

Pod name: virt-controller-67dcdd8464-xxkgv
Pod phase: Running
level=info timestamp=2018-07-27T14:08:31.703591Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-4wgkz
Pod phase: Running
level=info timestamp=2018-07-27T14:08:33.437302Z pos=virt-handler.go:87 component=virt-handler hostname=node02
level=info timestamp=2018-07-27T14:08:33.443518Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller."
level=info timestamp=2018-07-27T14:08:33.444401Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains"
level=info timestamp=2018-07-27T14:08:33.545114Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller"
level=info timestamp=2018-07-27T14:08:33.561769Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started"
level=info timestamp=2018-07-27T14:08:33.564483Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started"

Pod name: virt-handler-fjdzm
Pod phase: Running
level=info timestamp=2018-07-27T14:08:40.851273Z pos=virt-handler.go:87 component=virt-handler hostname=node01
level=info timestamp=2018-07-27T14:08:40.859029Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller."
level=info timestamp=2018-07-27T14:08:40.860184Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains"
level=info timestamp=2018-07-27T14:08:41.199801Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller"
level=info timestamp=2018-07-27T14:08:44.777607Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started"
level=info timestamp=2018-07-27T14:08:44.778906Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started"

Pod name: virt-launcher-testvmikss2s-9gkgk
Pod phase: Pending

• Failure [241.321 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
    should succeed to stop a vmi [It]
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250

    Timed out after 120.011s.
    Timed out waiting for VMI to enter Running phase
    Expected
      : false
    to equal
      : true

    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070
------------------------------
STEP: Starting the vmi via kubectl command
level=info timestamp=2018-07-27T14:38:11.981158Z pos=utils.go:245 component=tests msg="Created virtual machine pod virt-launcher-testvmikss2s-9gkgk"
SSSSSSSSSSSSSSSSS

Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...

Summarizing 6 Failures:

[Fail] Windows VirtualMachineInstance [It] should succeed to start a vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070

[Fail] Windows VirtualMachineInstance [It] should succeed to stop a running vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070

[Fail] Windows VirtualMachineInstance with winrm connection [BeforeEach] should have correct UUID
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070

[Fail] Windows VirtualMachineInstance with winrm connection [BeforeEach] should have pod IP
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070

[Fail] Windows VirtualMachineInstance with kubectl command [It] should succeed to start a vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070

[Fail] Windows VirtualMachineInstance with kubectl command [It] should succeed to stop a vmi
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1070

Ran 6 of 149 Specs in 1934.170 seconds
FAIL! -- 0 Passed | 6 Failed | 0 Pending | 143 Skipped
--- FAIL: TestTests (1934.19s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
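
Note on the repeated failure at tests/utils.go:1070 ("Timed out waiting for VMI to enter Running phase"): a wait of this kind typically polls the VirtualMachineInstance status until it reports the Running phase or the timeout expires. The sketch below is a minimal, hypothetical illustration using Gomega's Eventually; the helper getVMIPhase and the 120s/2s timings are assumptions, and this is not the actual helper from tests/utils.go.

package tests

import (
	"time"

	. "github.com/onsi/gomega"
)

// getVMIPhase is a stand-in for fetching the VirtualMachineInstance from the
// API server and returning vmi.Status.Phase; it is hypothetical, not the
// helper used by tests/utils.go.
func getVMIPhase(namespace, name string) string {
	return "Scheduling" // a real implementation would query the cluster
}

// waitForVMIRunning polls the VMI phase until it becomes "Running" or the
// timeout expires -- the kind of wait whose expiry produces the
// "Timed out waiting for VMI to enter Running phase" failure above.
// It assumes a Ginkgo/Gomega test context (RegisterFailHandler already set).
func waitForVMIRunning(namespace, name string) {
	Eventually(func() string {
		return getVMIPhase(namespace, name)
	}, 120*time.Second, 2*time.Second).Should(Equal("Running"))
}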
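
A second pattern visible in the virt-controller log above is the "the object has been modified; please apply your changes to the latest version and try again" message, an ordinary optimistic-concurrency conflict from the API server, which the controller handles by re-enqueuing the VirtualMachineInstance (the msg="reenqueuing VirtualMachineInstance ..." lines). A client updating such an object directly would typically retry on conflict; the following is a minimal sketch using client-go's retry.RetryOnConflict, where the updateVMI helper is hypothetical and not KubeVirt code.

package main

import (
	"fmt"

	"k8s.io/client-go/util/retry"
)

// updateVMI stands in for re-reading the latest VirtualMachineInstance,
// applying a change, and calling Update; it is hypothetical, not KubeVirt
// code. It should return the API error unchanged so conflicts are detected.
func updateVMI() error {
	return nil
}

func main() {
	// RetryOnConflict re-runs the update whenever the API server answers with
	// a 409 conflict ("the object has been modified; please apply your changes
	// to the latest version and try again"), using the default backoff.
	if err := retry.RetryOnConflict(retry.DefaultRetry, updateVMI); err != nil {
		fmt.Println("update failed:", err)
	}
}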