+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ [[ windows2016-release =~ openshift-.* ]]
+ [[ windows2016-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/08/12 10:44:35 Waiting for host: 192.168.66.101:22
2018/08/12 10:44:38 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/12 10:44:46 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/12 10:44:51 Connected to tcp://192.168.66.101:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0812 10:44:52.456603 1293 feature_gate.go:230] feature gates: &{map[]}
I0812 10:44:52.552202 1293 kernel_validator.go:81] Validating kernel version
I0812 10:44:52.552984 1293 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 57.007898 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
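kubeadm prints the bootstrap token and CA-cert hash exactly once, in the join command a few lines below. If that output is lost, the command can be regenerated on the master later; a minimal sketch, assuming kubeadm v1.11 and the default admin.conf path:

  # Mint a fresh bootstrap token and print a complete,
  # ready-to-run 'kubeadm join' command for it.
  export KUBECONFIG=/etc/kubernetes/admin.conf
  kubeadm token create --print-join-command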
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:751523c73d262aa8329b30afc5d4893f131dae68debaf590c48db271ad030e18 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node/node01 untainted + kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml storageclass.storage.k8s.io/local created configmap/local-storage-config created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created serviceaccount/local-storage-admin created daemonset.extensions/local-volume-provisioner created 2018/08/12 10:46:08 Waiting for host: 192.168.66.102:22 2018/08/12 10:46:11 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/12 10:46:19 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/12 10:46:24 Connected to tcp://192.168.66.102:22 ++ systemctl status docker ++ grep active ++ wc -l + [[ 0 -eq 0 ]] + sleep 2 ++ systemctl status docker ++ grep active ++ wc -l + [[ 1 -eq 0 ]] + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. 
Provide the missing builtin kernel ipvs support I0812 10:46:27.199810 1295 kernel_validator.go:81] Validating kernel version I0812 10:46:27.201116 1295 kernel_validator.go:96] Validating kernel config [discovery] Trying to connect to API Server "192.168.66.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [preflight] Activating the kubelet service [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap... [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 38739968 kubectl Sending file modes: C0600 5454 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. + set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 54s v1.11.0 node02 Ready 22s v1.11.0 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep NotReady + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 55s v1.11.0 node02 Ready 23s v1.11.0 + make cluster-sync ./cluster/build.sh Building ... 
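The readiness gate a few lines above works by grepping `kubectl get nodes --no-headers` for NotReady rows, with the wrapper's exit code tracked separately (kubectl_rc); note it only inspects nodes that have already registered. A minimal standalone sketch of the same pattern with an explicit timeout (the 60-second bound is illustrative, not taken from the script):

  # Poll until no registered node reports NotReady, or give up after ~60s.
  for attempt in $(seq 1 12); do
    if ! kubectl get nodes --no-headers | grep -q NotReady; then
      echo 'Nodes are ready:'
      kubectl get nodes
      exit 0
    fi
    sleep 5
  done
  echo 'Timed out waiting for nodes to become Ready' >&2
  exit 1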
Untagged: localhost:33740/kubevirt/virt-controller:devel
Untagged: localhost:33740/kubevirt/virt-controller@sha256:62b84aeed65214517380568d2ca7bd4db8db23db68aa9216c1b43ebcb1f730eb
Deleted: sha256:72bfd04e7c4e2493bd5114fb5872007c8a39ed2b60621040b5c8208d863aac0f
Deleted: sha256:29a20c0d87245ffc01da8b6dc4654930a77038cec801ae738efa901b94da08ef
Deleted: sha256:c608522a7707dac05fbc2e35be615c497a70ba0001d632f0393f5d6b135d7d6f
Deleted: sha256:220f0e9420ae663340c0d079002ff4bde86a68a159edcaf6f2a6aaefcd2055a7
Untagged: localhost:33740/kubevirt/virt-launcher:devel
Untagged: localhost:33740/kubevirt/virt-launcher@sha256:e2db416f296c9c4038edbb74b269b3d3d82f464d4653d0deafcfac298963148f
Deleted: sha256:74efc6ebe3e07548786316803d710c10270ac918ade28c8cf1faa51c32bd08f4
Deleted: sha256:16f128a4817c40c4d8bac267b5c6147fd57cf7f9b5c1eb73c6474c8b81aa6812
Deleted: sha256:6ef6e0c1519139d76ddc4c450c9cb1c412c18adf58c3520c9eb5e332d3e5a6d3
Deleted: sha256:1062fa1d6dce1fe96ca52d9f4492ba5d505e3a858fd016cbbbef0da52c1172f1
Deleted: sha256:fed680fcc294b5485fcfa3c6625950b9693f0fdbc7338096f03b8e8c291431e6
Deleted: sha256:e8d216cb110a1b4a2a3af3d708f9192f8dcabd80cecd5cafe284f03e6ece4383
Deleted: sha256:74781ccb7112598141cea255cb1ff8e721e9bd665bce1d341be2ea783d253e05
Deleted: sha256:badc929fced7e6db002ad41e9a6253af1648fa38c5c13bf5217b028ed7ee3b3a
Deleted: sha256:9625480e4984348ca8e744c350f6dbbc12fb2751a73188424fe70d884fa27e0d
Deleted: sha256:a2526a3d572dba1ca0301392392693961e71a6758c3c8c5dba8c41f0a4c0eff9
Deleted: sha256:5939c02b49378168bc2bb547fe0e1127ec2d25049b08f3970c0ef55fd83dddaa
Deleted: sha256:611058b4b07ffeb9ddd2615fc12b3b977d295a8ad998ce09d1777a4779071258
Untagged: localhost:33740/kubevirt/virt-handler:devel
Untagged: localhost:33740/kubevirt/virt-handler@sha256:efb929f98b3fac962f4fa98fb3fc4555521a21bf9abd20239ecd0d3ed8bc1c61
Deleted: sha256:3aa628b0cf1a95ec0657a52a40e9dfa951714a38ec4118a14dec3a4477991d60
Deleted: sha256:bf3bfd3f9efb6495a6983625c86ed379be497b7897172fe0b1be425b13f860f4
Deleted: sha256:9b883fa669186050784c055ba4576a83032372ac4d32cff2a65902c330485cab
Deleted: sha256:6aadf2623c967b4446250c512e5ce1c2d6b475c1729b277ee4dfb65b4e30288a
Untagged: localhost:33740/kubevirt/virt-api:devel
Untagged: localhost:33740/kubevirt/virt-api@sha256:c9f54c0376d4ed7cd1aa3b7817a0bee05ee9113f2f87c136a7b0176a282ee0d3
Deleted: sha256:c19ffe04fbd71e5bea9eb88bcdfff45ac3bfc1ace114b50311059a3c4f53c83c
Deleted: sha256:b2690bc808472f96156e60f4425636fda17becfc3fea8adc10654dbda23bd959
Deleted: sha256:6f931980cd4ee9fba83dbba75794a467fdd56b19d82b73713b8e517dcd3a3196
Deleted: sha256:6b4681b1e1794c5244cf20dd4cc07551fdeaa1c8100cb12fcc8e8a9b0979c165
Untagged: localhost:33740/kubevirt/subresource-access-test:devel
Untagged: localhost:33740/kubevirt/subresource-access-test@sha256:139c1e80f9e67c398eef470b2000f7702fb05b660d4519f9d47e275a8f87305a
Deleted: sha256:79ffc5dcd50bbc6e3b4911cd6139c1c9d420ca903b6ab327f8a968d0515cbf69
Deleted: sha256:ef9afaaeb784db8bf2c6f48a79c7e5ed9c389e566fd31e0f94e160a6cc4f5fa4
Deleted: sha256:a32085d5a9db85de37861492a54f72d9f7d64ead3199cfd3ac0ac76a665f310a
Deleted: sha256:dd6799bf7702e6e4829e55c81b1c38f03014532b9f8a2273dd9903aa8225842c
sha256:0bec083f1c9e66baa107940ad909778a562a54db14b6a64ab117f1897e0697dd
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
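hack/dockerized runs the check and build steps inside a builder container, so the CI host only needs Docker and the source tree rather than a matching Go toolchain. A rough sketch of the pattern, with the builder image name as a placeholder (not taken from this log):

  # Run the build inside a throwaway container, mounting the source tree
  # at the GOPATH location the build scripts expect.
  # NOTE: "kubevirt/builder:latest" is an illustrative image name.
  docker run --rm \
    -v "$(pwd)":/root/go/src/kubevirt.io/kubevirt \
    -w /root/go/src/kubevirt.io/kubevirt \
    kubevirt/builder:latest \
    bash -c './hack/check.sh && ./hack/build-go.sh install'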
sha256:0bec083f1c9e66baa107940ad909778a562a54db14b6a64ab117f1897e0697dd
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 36.22 MB
Step 1/8 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> cc296a71da13
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
 ---> Using cache
 ---> 2822515a72a1
Step 4/8 : WORKDIR /home/virt-controller
 ---> Using cache
 ---> 0e522f373cd1
Step 5/8 : USER 1001
 ---> Using cache
 ---> c94440fde013
Step 6/8 : COPY virt-controller /usr/bin/virt-controller
 ---> 2e775d5184cc
Removing intermediate container 9bd52ae25127
Step 7/8 : ENTRYPOINT /usr/bin/virt-controller
 ---> Running in 8c91abfede8c
 ---> 47fa90ba7004
Removing intermediate container 8c91abfede8c
Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-controller" ''
 ---> Running in aa08706cf60d
 ---> 25b543c12939
Removing intermediate container aa08706cf60d
Successfully built 25b543c12939
Sending build context to Docker daemon 38.16 MB
Step 1/9 : FROM kubevirt/libvirt:3.7.0
 ---> c4e262d2dc3c
Step 2/9 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> a3eeea6a19f7
Step 3/9 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
 ---> Using cache
 ---> 4e81f3f1ccb3
Step 4/9 : COPY virt-launcher /usr/bin/virt-launcher
 ---> 4689922c2410
Removing intermediate container 2fca758869a5
Step 5/9 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt
 ---> 8ae8daf7531f
Removing intermediate container b9043e372689
Step 6/9 : RUN mkdir -p /usr/share/kubevirt/virt-launcher
 ---> Running in bc23cccdf627
 ---> ad67d20038f8
Removing intermediate container bc23cccdf627
Step 7/9 : COPY entrypoint.sh libvirtd.sh sh.sh sock-connector /usr/share/kubevirt/virt-launcher/
 ---> cf8f09f89f4c
Removing intermediate container 4f258440d47c
Step 8/9 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh
 ---> Running in 2003009237e0
 ---> a7781468e512
Removing intermediate container 2003009237e0
Step 9/9 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-launcher" ''
 ---> Running in a1481e17983b
 ---> 7a6669a6ed8c
Removing intermediate container a1481e17983b
Successfully built 7a6669a6ed8c
Sending build context to Docker daemon 36.76 MB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> cc296a71da13
Step 3/5 : COPY virt-handler /usr/bin/virt-handler
 ---> 735ed9f2cf65
Removing intermediate container 2bb90788cbe6
Step 4/5 : ENTRYPOINT /usr/bin/virt-handler
 ---> Running in f50966297628
 ---> 555c3a18749a
Removing intermediate container f50966297628
Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-handler" ''
 ---> Running in 50bf1e9634ab
 ---> 1ea6fbfcd270
Removing intermediate container 50bf1e9634ab
Successfully built 1ea6fbfcd270
Sending build context to Docker daemon 36.97 MB
Step 1/8 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> cc296a71da13
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api
 ---> Using cache
 ---> de2e6a870490
Step 4/8 : WORKDIR /home/virt-api
 ---> Using cache
 ---> a8e6f7bd6c45
Step 5/8 : USER 1001
 ---> Using cache
 ---> 38bfe789bc15
Step 6/8 : COPY virt-api /usr/bin/virt-api
 ---> a1eb5e50f479
Removing intermediate container 2ea73122289d
Step 7/8 : ENTRYPOINT /usr/bin/virt-api
 ---> Running in 3c6e56c55e51
 ---> 749501331ac8
Removing intermediate container 3c6e56c55e51
Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "virt-api" ''
 ---> Running in 6dd1418053a0
 ---> 4fd77b3b71ad
Removing intermediate container 6dd1418053a0
Successfully built 4fd77b3b71ad
Sending build context to Docker daemon 4.096 kB
Step 1/7 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/7 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> cc296a71da13
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 95752cc0f6e3
Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img
 ---> Using cache
 ---> 874ab482e353
Step 5/7 : ADD entrypoint.sh /
 ---> Using cache
 ---> 1547eaa29e05
Step 6/7 : CMD /entrypoint.sh
 ---> Using cache
 ---> 57014bb60130
Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-windows2016-release1" ''
 ---> Using cache
 ---> 878790a02614
Successfully built 878790a02614
Sending build context to Docker daemon 2.56 kB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> cc296a71da13
Step 3/5 : ENV container docker
 ---> Using cache
 ---> 95752cc0f6e3
Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all
 ---> Using cache
 ---> 58c89175cab8
Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "vm-killer" ''
 ---> Using cache
 ---> 2149bf4b5e2a
Successfully built 2149bf4b5e2a
Sending build context to Docker daemon 5.12 kB
Step 1/7 : FROM debian:sid
 ---> 68f33cf86aab
Step 2/7 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 9ef1c0ce5d24
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 9ad55e41ed61
Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 17a81fda7c2b
Step 5/7 : ADD entry-point.sh /
 ---> Using cache
 ---> 681d01e165e6
Step 6/7 : CMD /entry-point.sh
 ---> Using cache
 ---> a79815fe82d9
Step 7/7 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "registry-disk-v1alpha" ''
 ---> Using cache
 ---> 6ef2fe0ba069
Successfully built 6ef2fe0ba069
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33837/kubevirt/registry-disk-v1alpha:devel
 ---> 6ef2fe0ba069
Step 2/4 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 01615351ca4e
Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img
 ---> Using cache
 ---> 81ca76c46679
Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release1" ''
 ---> Using cache
 ---> c448af5e3322
Successfully built c448af5e3322
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33837/kubevirt/registry-disk-v1alpha:devel
 ---> 6ef2fe0ba069
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> d330eefdd757
Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2
 ---> Using cache
 ---> d4f7cb7b1be2
Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release1" ''
 ---> Using cache
 ---> c74218398637
Successfully built c74218398637
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33837/kubevirt/registry-disk-v1alpha:devel
 ---> 6ef2fe0ba069
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> d330eefdd757
Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso
 ---> Using cache
 ---> 3696cd7aa2d3
Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release1" ''
 ---> Using cache
 ---> c5b23ac9de78
Successfully built c5b23ac9de78
Sending build context to Docker daemon 34.03 MB
Step 1/8 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> cc296a71da13
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl
 ---> Using cache
 ---> 43dc6378f9e6
Step 4/8 : WORKDIR /home/virtctl
 ---> Using cache
 ---> e29a9816bce9
Step 5/8 : USER 1001
 ---> Using cache
 ---> 0a3ae9e18938
Step 6/8 : COPY subresource-access-test /subresource-access-test
 ---> 4c74b7d9b982
Removing intermediate container 1d74f1e708d5
Step 7/8 : ENTRYPOINT /subresource-access-test
 ---> Running in 82b54e285b51
 ---> 4b20592525c5
Removing intermediate container 82b54e285b51
Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "subresource-access-test" ''
 ---> Running in d62a8d69d2c6
 ---> 7747081dfe8e
Removing intermediate container d62a8d69d2c6
Successfully built 7747081dfe8e
Sending build context to Docker daemon 3.072 kB
Step 1/9 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/9 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> cc296a71da13
Step 3/9 : ENV container docker
 ---> Using cache
 ---> 95752cc0f6e3
Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all
 ---> Using cache
 ---> 23feff5b1edf
Step 5/9 : ENV GIMME_GO_VERSION 1.9.2
 ---> Using cache
 ---> 4770990ed1fc
Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 0aaacf980e26
Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 894fa40f5dab
Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli
 ---> Using cache
 ---> 0b5a3650fce1
Step 9/9 : LABEL "kubevirt-functional-tests-windows2016-release1" '' "winrmcli" ''
 ---> Using cache
 ---> d602fc206b26
Successfully built d602fc206b26
hack/build-docker.sh push
The push refers to a repository [localhost:33837/kubevirt/virt-controller]
d4ae950b0c01: Preparing
7d5157818b5e: Preparing
39bae602f753: Preparing
7d5157818b5e: Pushed
d4ae950b0c01: Pushed
39bae602f753: Pushed
devel: digest: sha256:7968e56d2a3a1fa3e79144dbada9696e199e5877135f06f25c23efcfbd18989b size: 948
The push refers to a repository [localhost:33837/kubevirt/virt-launcher]
dd52552407f2: Preparing
ab3e3a5ba038: Preparing
bb50353cc274: Preparing
2e088fec3d97: Preparing
496ac61b2571: Preparing
9e20b26113ea: Preparing
a1a99db27cd1: Preparing
ec5be2616f4d: Preparing
ffcfbc9458ac: Preparing
68e0ce966da1: Preparing
39bae602f753: Preparing
39bae602f753: Waiting
68e0ce966da1: Waiting
ffcfbc9458ac: Waiting
9e20b26113ea: Waiting
a1a99db27cd1: Waiting
ab3e3a5ba038: Pushed
bb50353cc274: Pushed
dd52552407f2: Pushed
9e20b26113ea: Pushed
a1a99db27cd1: Pushed
ec5be2616f4d: Pushed
ffcfbc9458ac: Pushed
39bae602f753: Mounted from kubevirt/virt-controller
496ac61b2571: Pushed
2e088fec3d97: Pushed
68e0ce966da1: Pushed
devel: digest: sha256:f8edba07c42908fc3f7b1bd03138304fae0a40fdf171684f9bf9b138a9bf45e4 size: 2617
The push refers to a repository [localhost:33837/kubevirt/virt-handler]
f69443600926: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/virt-launcher
f69443600926: Pushed
devel: digest: sha256:69421c4babef38fff7f6ce613dbade1c3c0a07c5976c063f67a7f5f3dd02fa49 size: 740
The push refers to a repository [localhost:33837/kubevirt/virt-api]
9d916add75b7: Preparing
8e02b41e2e47: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/virt-handler
8e02b41e2e47: Pushed
9d916add75b7: Pushed
devel: digest: sha256:783783ca15c52336db1ec4b29b317bc75c77db31f1976c03361d4fd7b24ad924 size: 948
The push refers to a repository [localhost:33837/kubevirt/disks-images-provider]
6b28f682fe68: Preparing
60fe073622c0: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/virt-api
6b28f682fe68: Pushed
60fe073622c0: Pushed
devel: digest: sha256:56400ffa381057ce00f915e6b7fb54d85a952adb991ff04e7b75bebe28a6e6d8 size: 948
The push refers to a repository [localhost:33837/kubevirt/vm-killer]
e8d99376fb84: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/disks-images-provider
e8d99376fb84: Pushed
devel: digest: sha256:3d14457a859efe7a75722b772a97063e5b7e3a45f71cc884f5b37bbd56b6e411 size: 740
The push refers to a repository [localhost:33837/kubevirt/registry-disk-v1alpha]
c66b9a220e25: Preparing
4662bbc21c2d: Preparing
25edbec0eaea: Preparing
c66b9a220e25: Pushed
4662bbc21c2d: Pushed
25edbec0eaea: Pushed
devel: digest: sha256:983fa47e2a9f84477bd28f2f1c36f24812001a833dca5b4ae9a4d436a2d2564c size: 948
The push refers to a repository [localhost:33837/kubevirt/cirros-registry-disk-demo]
8081bd2f2d51: Preparing
c66b9a220e25: Preparing
4662bbc21c2d: Preparing
25edbec0eaea: Preparing
c66b9a220e25: Mounted from kubevirt/registry-disk-v1alpha
25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha
4662bbc21c2d: Mounted from kubevirt/registry-disk-v1alpha
8081bd2f2d51: Pushed
devel: digest: sha256:90cff06e4e356cc860429e715d7eb65570de321773a692851fd7888f39a0e2b0 size: 1160
The push refers to a repository [localhost:33837/kubevirt/fedora-cloud-registry-disk-demo]
fa1881d7bf95: Preparing
c66b9a220e25: Preparing
4662bbc21c2d: Preparing
25edbec0eaea: Preparing
4662bbc21c2d: Mounted from kubevirt/cirros-registry-disk-demo
c66b9a220e25: Mounted from kubevirt/cirros-registry-disk-demo
25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo
fa1881d7bf95: Pushed
devel: digest: sha256:18c2e2f569079fd2da55a2eb87240fe29015c8fbf293d125557e82dfb55a4cf0 size: 1161
The push refers to a repository [localhost:33837/kubevirt/alpine-registry-disk-demo]
d01c36937189: Preparing
c66b9a220e25: Preparing
4662bbc21c2d: Preparing
25edbec0eaea: Preparing
4662bbc21c2d: Mounted from kubevirt/fedora-cloud-registry-disk-demo
c66b9a220e25: Mounted from kubevirt/fedora-cloud-registry-disk-demo
25edbec0eaea: Mounted from kubevirt/fedora-cloud-registry-disk-demo
d01c36937189: Pushed
devel: digest: sha256:994d447b46abde194e1d6610f761887b06c5a3b57c80e1807cdc6138f0d20f15 size: 1160
The push refers to a repository [localhost:33837/kubevirt/subresource-access-test]
2b919d354e61: Preparing
e0bbf15a89b8: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/vm-killer
e0bbf15a89b8: Pushed
2b919d354e61: Pushed
devel: digest: sha256:423581967b0928d36e7082dbc6a5b952eacd806f1088fb34e7563a3068906c4c size: 948
The push refers to a repository [localhost:33837/kubevirt/winrmcli]
e874f7647382: Preparing
2b23514ed94b: Preparing
479fc5763b8f: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/subresource-access-test
e874f7647382: Pushed
479fc5763b8f: Pushed
2b23514ed94b: Pushed
devel: digest: sha256:0526f07c8a0ccaa61dc26ad864a275f24c01e3e8192176c6e3836db31e8bdf82 size: 1165
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt'
Done
./cluster/clean.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_NUM_NODES=2
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-windows2016-release ']'
++ provider_prefix=kubevirt-functional-tests-windows2016-release1
++ job_prefix=kubevirt-functional-tests-windows2016-release1
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.6.3-4-gfe14b97
++ KUBEVIRT_VERSION=v0.6.3-4-gfe14b97
+ source cluster/k8s-1.11.0/provider.sh
++ set -e
++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ kubeconfig=cluster/vagrant/.kubeconfig
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.11.0.sh
++ source hack/config-provider-k8s-1.11.0.sh
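The `+++` lines that follow are the trace of that provider file. hack/config.sh layers its settings: config-default.sh first, then an optional per-provider override, then an optional local file, before exporting the results. A condensed sketch of the pattern visible in this trace (file names as they appear above and below):

  # Defaults, then provider overrides, then local overrides if present.
  source hack/config-default.sh
  test -f "hack/config-provider-${KUBEVIRT_PROVIDER}.sh" && source "hack/config-provider-${KUBEVIRT_PROVIDER}.sh"
  test -f hack/config-local.sh && source hack/config-local.sh
  export binaries docker_images docker_prefix docker_tag master_ip network_provider kubeconfig namespace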
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl
+++ docker_prefix=localhost:33837/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Cleaning up ...'
Cleaning up ...
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers
+ grep foregroundDeleteVirtualMachine
+ read p
error: the server doesn't have a resource type "vmis"
+ _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0
No resources found
+ namespaces=(default ${namespace})
+ for i in '${namespaces[@]}'
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete deployment -l kubevirt.io
No resources found
+ _kubectl -n default delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete rs -l kubevirt.io
No resources found
+ _kubectl -n default delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete services -l kubevirt.io
No resources found
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n default delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete secrets -l kubevirt.io
No resources found
+ _kubectl -n default delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pv -l kubevirt.io
No resources found
+ _kubectl -n default delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pvc -l kubevirt.io
No resources found
+ _kubectl -n default delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete ds -l kubevirt.io
No resources found
+ _kubectl -n default delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n default delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pods -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete roles -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n default delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io
++ wc -l
++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ cluster/k8s-1.11.0/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io
No resources found.
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ for i in '${namespaces[@]}'
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete deployment -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete rs -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete services -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete secrets -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pv -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pvc -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete ds -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pods -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete roles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
++ wc -l
++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ cluster/k8s-1.11.0/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
No resources found.
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ sleep 2
+ echo Done
Done
./cluster/deploy.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_NUM_NODES=2
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-windows2016-release ']'
++ provider_prefix=kubevirt-functional-tests-windows2016-release1
++ job_prefix=kubevirt-functional-tests-windows2016-release1
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.6.3-4-gfe14b97
++ KUBEVIRT_VERSION=v0.6.3-4-gfe14b97
+ source cluster/k8s-1.11.0/provider.sh
++ set -e
++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ kubeconfig=cluster/vagrant/.kubeconfig
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.11.0.sh
++ source hack/config-provider-k8s-1.11.0.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl
+++ docker_prefix=localhost:33837/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Deploying ...'
Deploying ...
+ [[ -z windows2016-release ]]
+ [[ windows2016-release =~ .*-dev ]]
+ [[ windows2016-release =~ .*-release ]]
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]]
+ continue
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]]
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
clusterrole.rbac.authorization.k8s.io/kubevirt.io:admin created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:edit created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:view created
serviceaccount/kubevirt-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver-auth-delegator created
rolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created
role.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrole.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrole.rbac.authorization.k8s.io/kubevirt-controller created
serviceaccount/kubevirt-controller created
serviceaccount/kubevirt-privileged created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller-cluster-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-privileged-cluster-admin created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:default created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt.io:default created
service/virt-api created
deployment.extensions/virt-api created
deployment.extensions/virt-controller created
daemonset.extensions/virt-handler created
error: error validating "/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml": error validating data: [ValidationError(CustomResourceDefinition.status): missing required field "conditions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionStatus, ValidationError(CustomResourceDefinition.status): missing required field "storedVersions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionStatus]; if you choose to ignore these errors, turn validation off with --validate=false
make: *** [cluster-deploy] Error 1
+ make cluster-down
./cluster/down.sh
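The deploy stops at kubectl's client-side schema validation: a CustomResourceDefinition in kubevirt.yaml carries a status block that lacks the conditions and storedVersions fields the v1beta1 CustomResourceDefinitionStatus schema marks as required, so the offending object is rejected before it reaches the server, leaving the manifest half-applied (every resource listed above was created). As the error message itself suggests, one workaround is to retry with client-side validation disabled; a sketch, acceptable only if the manifest is otherwise trusted:

  # Skip client-side schema validation; the API server still
  # performs its own validation on admission.
  cluster/k8s-1.11.0/.kubectl create -f _out/manifests/release/kubevirt.yaml --validate=false

The trailing make cluster-down is not part of the failing target: it is the EXIT trap installed at the top of this run firing after make exits nonzero, tearing the ephemeral cluster back down. (Listing SIGSTOP in that trap has no effect, since SIGSTOP can never be caught; EXIT plus SIGINT/SIGTERM already cover the useful cases.)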