+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ [[ windows2016-release =~ openshift-.* ]]
+ [[ windows2016-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/08/08 15:54:21 Waiting for host: 192.168.66.101:22
2018/08/08 15:54:24 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/08 15:54:32 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/08 15:54:37 Connected to tcp://192.168.66.101:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 0 -eq 0 ]]
+ sleep 2
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0808 15:54:40.098631    1296 feature_gate.go:230] feature gates: &{map[]}
I0808 15:54:40.202944    1296 kernel_validator.go:81] Validating kernel version
I0808 15:54:40.203888    1296 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 34.006950 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:4ef712955e4bff6a210d7907882e7b309d39810849a6aa77c18642c54853a9e3 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node/node01 untainted + kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml storageclass.storage.k8s.io/local created configmap/local-storage-config created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created serviceaccount/local-storage-admin created daemonset.extensions/local-volume-provisioner created 2018/08/08 15:55:36 Waiting for host: 192.168.66.102:22 2018/08/08 15:55:39 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/08 15:55:47 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/08 15:55:52 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s 2018/08/08 15:55:57 Connected to tcp://192.168.66.102:22 ++ systemctl status docker ++ grep active ++ wc -l + [[ 1 -eq 0 ]] + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. 
 2. Provide the missing builtin kernel ipvs support
I0808 15:55:58.710155    1299 kernel_validator.go:81] Validating kernel version
I0808 15:55:58.710794    1299 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 38739968 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    56s       v1.11.0
node02    Ready     <none>    21s       v1.11.0
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    57s       v1.11.0
node02    Ready     <none>    22s       v1.11.0
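Editor's note: the readiness gate traced above reduces to the following shell logic. This is a sketch reconstructed purely from the 'set -x' trace, not the verbatim contents of cluster/up.sh, which may differ:

    # List nodes; bail out if kubectl itself fails.
    set +e
    cluster/kubectl.sh get nodes --no-headers
    kubectl_rc=$?
    if [ "$kubectl_rc" -ne 0 ]; then
        echo "Failed to list nodes"
        exit 1
    fi
    # Bail out if any node still reports NotReady.
    if [ -n "$(cluster/kubectl.sh get nodes --no-headers | grep NotReady)" ]; then
        echo "Some nodes are not ready yet"
        exit 1
    fi
    set -e
    echo 'Nodes are ready:'
    cluster/kubectl.sh get nodes

Both checks pass here (return code 0, no NotReady rows), so the job proceeds to the build phase.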
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33136/kubevirt/virt-controller:devel
Untagged: localhost:33136/kubevirt/virt-controller@sha256:cbd3ffbaa1ca2a0168fe2dc806951b844e170569fb7490ae00d5cc8874b5b84b
Deleted: sha256:e54457465e2d44a77ebba346bf5d7fb8fe407c0981e9acfef6f482381eb9893e
Deleted: sha256:0b02bb8320f43369519bdaa8f67229a70d5311eaffbf1e75332157505b320d88
Deleted: sha256:a68a48eb34ed5a409080a20bbb611c26142e9dc7693d1d800b187b3dcb5578a9
Deleted: sha256:3e935dab0ab8941c0c7d5eae3147f7b6f025648b5f40bc525d7417cc5d7f0a5f
Untagged: localhost:33136/kubevirt/virt-launcher:devel
Untagged: localhost:33136/kubevirt/virt-launcher@sha256:eb89fa1394e1b0f69286f8d03556b8caed80fe67f12a8ee64770859c1c4fb517
Deleted: sha256:d3d7f9ebf908a9db7c00ac7b9b15713ada54deac1c96005177909600422762a9
Deleted: sha256:055554e935d463939de83dd3cdcdd198cc0c10bae3b19d6af839654bd5fb917c
Deleted: sha256:c0a272cad1a13853cbc728f4e694140e7a64da00f6afcd332a2993fca83b7920
Deleted: sha256:27fce8641017eecd8ab7d29ad4216052e2b2adfc27739163813e473290e43d5d
Deleted: sha256:f928a7f7b58049a2ff3f10bc50e74528dce3554338d039c33ca672084b3b9ee4
Deleted: sha256:f3d0a5fab7db8f28a5d21d5d776213ec5cb01a583a33c9df70bb684c055ded68
Deleted: sha256:c1c1314108d33dfee3421677c1382c070e25220a6079e689d286abcbbc3f9ecd
Deleted: sha256:0fc0f966b04e41376cc51d633f0f69cd5f70113827da08e317e4af80e9e33714
Deleted: sha256:bc4d45518141e8a275fef14632fc03e6fb8fd6d5e7e022b4f64fea7fccb7171e
Deleted: sha256:b07419e06b3a083acec58cb428d346bc5100bc4f738d38e0cfc4c304ea0be539
Deleted: sha256:a5791a7daf244f01cd8b54f63b4e7ae495c342478b2214b591c7edaaff95062c
Deleted: sha256:768e97aa45ce1dfd5fcf052d746fb8924006894a0f13f2472a30f62d3c519f29
Untagged: localhost:33136/kubevirt/virt-handler:devel
Untagged: localhost:33136/kubevirt/virt-handler@sha256:b8e85c799325a3ec0d6dc49fa292e9b82a00cd139a64d2b672df9d47da53319d
Deleted: sha256:247b20382980c7342904028776f120e377ea2483c79186b4a68c80d3457f589f
Deleted: sha256:acbabef1b373389c34707db7bec6f78e07c3a9b68306eae6a64fe1a5685fbcaa
Deleted: sha256:f0ce2809b68a981a0e2f48bf0170fe9366eeb68c76299a7ed3abfbb0ad11aaab
Deleted: sha256:c1885cb3d13dcdbc121de70c2c9655e99d21e775eca003e3c27212c198160bb0
Untagged: localhost:33136/kubevirt/virt-api:devel
Untagged: localhost:33136/kubevirt/virt-api@sha256:42fb629830c31fcf39a32fb775d74e2670fa9d3cf56d3cdc3f0178d456363e04
Deleted: sha256:33e663e99ef7ce2c19d3ede7e38f8636379aa30ba8f4d960a2742e1c1cdc151c
Deleted: sha256:3c9e2ee2a20d76dd317624b3b83d83a9055c1e49f9f9a97529f30c21ecfded99
Deleted: sha256:d8628f1e5450447766916d3bdc1a39d2945acd932aa75787629cf193aec0f98a
Deleted: sha256:200c9f06c16cfc2193780b75d973d1c6e3e8c6e77f271ac779a282844afa5562
Untagged: localhost:33136/kubevirt/subresource-access-test:devel
Untagged: localhost:33136/kubevirt/subresource-access-test@sha256:a8cadfe0c25a8085753dbc546c9f08428835495f2e0c644545575a57f567a2ea
Deleted: sha256:027e6aa37ea56f825081619fdbe700ff4746fb681aaa09a9dea0a72a1c73f1b7
Deleted: sha256:f9c6a8359459a5996ed943f46294e26df38d9629d5becacd71d9ade154f1fec7
Deleted: sha256:ef056e1bc54a804c70bee1c82bf253db12c87b25b6e7ae36028be68c440248bb
Deleted: sha256:2b825a177f7ef31d9d528c17371ed4bdfcfa14ecf7e7556de04c414a1897afdb
Untagged: localhost:33136/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33136/kubevirt/example-hook-sidecar@sha256:a79c3035912d3f69c9816b03465a410fc148d9996aa705ee9f9cfd748fc5ffde
Deleted: sha256:16a102937d253f1ef02757bb4c63fa97a58f97cb1d2d2e38e999c7b142e18083
Deleted: sha256:e0b0adba55e06f6a154e55e7a12b0dd649f817d586319667b6051e39ac8bfd09
Deleted: sha256:1b018f381d05b86b3de2873d19aeb3cb843f1da6f50abb6ebcedfb3743e81fdc
Deleted: sha256:377418a79ff77db8b630d7a2e7ac8d79280f3ec9309635a2106220bd1b17cb1e
Sending build context to Docker daemon 5.632 kB
Step 1/12 : FROM fedora:28
 ---> cc510acfcd70
Step 2/12 : ENV LIBVIRT_VERSION 4.2.0
 ---> Using cache
 ---> dcc6695ef4c0
Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
 ---> Using cache
 ---> 724322f9fbd0
Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all
 ---> Using cache
 ---> f0fe3aa82c4c
Step 5/12 : ENV GIMME_GO_VERSION 1.10
 ---> Using cache
 ---> 7908885d79a9
Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 42679488ba0f
Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 3e90be7b7e02
Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf
 ---> Using cache
 ---> 777bd794ff51
Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install
 ---> Using cache
 ---> 1118529380fd
Step 10/12 : RUN pip install j2cli
 ---> Using cache
 ---> 811419e85d66
Step 11/12 : ADD entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 826bb62508db
Step 12/12 : ENTRYPOINT /entrypoint.sh
 ---> Using cache
 ---> b69a3f94b204
Successfully built b69a3f94b204
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
Sending build context to Docker daemon 5.632 kB
Step 1/12 : FROM fedora:28
 ---> cc510acfcd70
Step 2/12 : ENV LIBVIRT_VERSION 4.2.0
 ---> Using cache
 ---> dcc6695ef4c0
Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
 ---> Using cache
 ---> 724322f9fbd0
Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all
 ---> Using cache
 ---> f0fe3aa82c4c
Step 5/12 : ENV GIMME_GO_VERSION 1.10
 ---> Using cache
 ---> 7908885d79a9
Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 42679488ba0f
Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 3e90be7b7e02
Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf
 ---> Using cache
 ---> 777bd794ff51
Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install
 ---> Using cache
 ---> 1118529380fd
Step 10/12 : RUN pip install j2cli
 ---> Using cache
 ---> 811419e85d66
Step 11/12 : ADD entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 826bb62508db
Step 12/12 : ENTRYPOINT /entrypoint.sh
 ---> Using cache
 ---> b69a3f94b204
Successfully built b69a3f94b204
go version go1.10 linux/amd64
go version go1.10 linux/amd64
# kubevirt.io/kubevirt/tests_test
tests/vmi_networking_test.go:470:4: undefined: waitUntilVMIReady
make[1]: *** [build] Error 2
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh
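Editor's note: the run fails during cluster-build, not cluster bring-up. The functional-test package kubevirt.io/kubevirt/tests_test references waitUntilVMIReady at tests/vmi_networking_test.go:470, but no such identifier is defined in scope, so the Go compile aborts and the EXIT trap installed at the top of the job runs make cluster-down. A minimal way to reproduce the compile error outside the CI harness would be (hypothetical commands; the checkout path is taken from the make output above):

    # Reproduce the "undefined: waitUntilVMIReady" failure locally.
    cd /var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt
    # go vet type-checks the package, including its _test.go files,
    # and surfaces the undefined identifier without running anything.
    go vet ./tests/...
    # Locate the call sites that lack a matching definition.
    grep -rn 'waitUntilVMIReady' tests/

The likely fix is on the source side (defining the helper in the tests package or restoring a deleted one), not in the cluster setup, which completed cleanly.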