+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ [[ windows2016-release =~ openshift-.* ]]
+ [[ windows2016-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/08/07 19:22:14 Waiting for host: 192.168.66.101:22
2018/08/07 19:22:17 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/07 19:22:25 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/07 19:22:30 Connected to tcp://192.168.66.101:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 0 -eq 0 ]]
+ sleep 2
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0807 19:22:32.573661 1298 feature_gate.go:230] feature gates: &{map[]}
I0807 19:22:32.663904 1298 kernel_validator.go:81] Validating kernel version
I0807 19:22:32.664262 1298 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 57.012877 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:81dbaad81492753d372342aec9920a60bbd6302d28e32bb44f99fe232ec27efa

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node/node01 untainted
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml
storageclass.storage.k8s.io/local created
configmap/local-storage-config created
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created
clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created
role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created
rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created
serviceaccount/local-storage-admin created
daemonset.extensions/local-volume-provisioner created
2018/08/07 19:23:49 Waiting for host: 192.168.66.102:22
2018/08/07 19:23:52 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/08/07 19:24:00 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/08/07 19:24:05 Connected to tcp://192.168.66.102:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 0 -eq 0 ]]
+ sleep 2
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
I0807 19:24:08.155499 1295 kernel_validator.go:81] Validating kernel version
I0807 19:24:08.156046 1295 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 38739968 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
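The docker checks above (`systemctl status docker | grep active | wc -l`, retried after `sleep 2`) follow a probe-then-sleep pattern that both nodes' setup runs through. A minimal generic sketch of that pattern; the helper name, retry count, and interval are illustrative, not taken from the up.sh script, which inlines the loop:

```shell
#!/usr/bin/env bash
# Retry a probe command until it succeeds or the attempts run out,
# mirroring the "check, sleep 2, check again" shape in the trace above.
wait_until() {
    local max_tries=$1 interval=$2; shift 2
    local try
    for ((try = 1; try <= max_tries; try++)); do
        "$@" && return 0
        sleep "$interval"
    done
    return 1
}

# Equivalent of the logged docker check (commented out so the sketch
# runs on machines without systemd):
# wait_until 30 2 sh -c '[ "$(systemctl status docker | grep -c active)" -gt 0 ]'
```

The same shape covers the `Waiting for host: ...:22` dialer earlier in the log, which sleeps 5s between connection attempts instead of 2s.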
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 57s v1.11.0
node02 Ready 21s v1.11.0
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME STATUS ROLES AGE VERSION
node01 Ready master 58s v1.11.0
node02 Ready 22s v1.11.0
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33258/kubevirt/virt-controller:devel
Untagged: localhost:33258/kubevirt/virt-controller@sha256:03bb0b92871fcbc4a4525d912862aeb5a0494bc68ae588b3e362cf290294a317
Deleted: sha256:9f00bd47f9e24281a6a0ac04978e931629310f32ed2895ac772f96efb157900c
Deleted: sha256:570fddba598fac14e9c2595eb35ed6376c81a7228fd758a3498374700c398b43
Deleted: sha256:cbb5842da6878c7deb6010152ce13ca9a083703ed343a733ad01549e1067e489
Deleted: sha256:40bf27c9dee197db8f1c7c0349b0c16da0e588378a50a7d8cd6fb398f6ae0d1b
Untagged: localhost:33258/kubevirt/virt-launcher:devel
Untagged: localhost:33258/kubevirt/virt-launcher@sha256:3da12ba1c6239941b500bc19f7527a154dd16518b86e453f610405825cd5d88c
Deleted: sha256:6c08af52e2a3e3a6014a24d9ccf7f04bce45250dffe6449858b4de74617332e5
Deleted: sha256:110719abba14f1384291773b7228e20f9bed6290a9ad947304ad0873e591ab2c
Deleted: sha256:78698bea85e70286b5e22a7a3a88031f7ec1b99228ee008322fdf736c682e715
Deleted: sha256:686d11cfe4dae5ee3a17cbe65752d4cce1045cc45d9d6b2f1425ac234c577362
Deleted: sha256:dd8819700c9a30e9615773a1054562abb2018507258489c4d2cd596dac1d7ac5
Deleted: sha256:a206704de7b62937aa17b59904998e21b796be2091fcc1d59bdd06750d9821f3
Deleted: sha256:610b4e481e528475b0d848dd5eb30f7aa87a6e22af7d64d3aedbf5ccd43eb906
Deleted: sha256:39d376b577422de551ab2cfc57f5eadaffcc17fb90e1893ee8c1de44e5aab6b5
Deleted: sha256:f56f1faf5bed110144abe2c1070884ebc9c8ca3d6f234f0cac33a5d1023c7948
Deleted: sha256:f114cfb5effd25e63a907455a7225f3415f8d410e3a458702a9b210539236bf7
Deleted: sha256:be163d93fd8439b8d6e47da2624a99ddef86bbe00560429640afdb9d0dc20512
Deleted: sha256:0520fbc82ee3a4157a284ae8e04862fb6ea6c229df14ea231ee124bfef4d0ab5
Untagged: localhost:33258/kubevirt/virt-handler:devel
Untagged: localhost:33258/kubevirt/virt-handler@sha256:193b2f24658cbb2503617505490427b9bbc598d9948403a3cafee2e06c1090b1
Deleted: sha256:ba8d0305987a60d155631813b45494a7aaf4694025979268ecfc8239662546f1
Deleted: sha256:9a3edd7521c85948ff8a6b00cb9151c8625bbcb3007cd6f8d7b85bcd2aeae7c1
Deleted: sha256:88283b08dce8508c601ba2eb61fe06e618358acc7e1b97e0575e463dd74f0ddf
Deleted: sha256:b397a55b94f58fed5d6e6ce6b57610a3083b6ed446c9390416eacee657912db0
Untagged: localhost:33258/kubevirt/virt-api:devel
Untagged: localhost:33258/kubevirt/virt-api@sha256:e9fe5488d20957fdedb306083fc734c0d05f9ea9d7271502777f1a45dc032df0
Deleted: sha256:865a361b92c0245e21a4e51dc3ed0e775651cf234f1a4100fa7808f52da8c3f8
Deleted: sha256:e0ea14f27c014cfe170f0c9d4cb33da17302acc3b8677f0974cfc4f48108cde3
Deleted: sha256:f8af5c547aac5746dd957aef5f8f1511dd24ef60c6d853b720d7911840664c63
Deleted: sha256:83e635f6b0a6c37a5e1e1c412dd09b4c30a302f9f5d3e34e8b88b1d777352983
Untagged: localhost:33258/kubevirt/subresource-access-test:devel
Untagged: localhost:33258/kubevirt/subresource-access-test@sha256:2436ec4d195882e4a809dfbbaf434cefbef3d0e690cd64770d7074b97cdc43c8
Deleted: sha256:941a8c0a1cbdc0578c7a4aa4bd49ddd0bd79eb640065a69021bbd1323bf0687d
Deleted: sha256:16c153efad0ad29640ebae6bfe6328a38f3f0de075d691a6bdaa5e9abb45f594
Deleted: sha256:91913bddbfbe5efa0ee281b1893e4787a3a9b7778c90a3c996c1eb7593a756e5
Deleted: sha256:733594a445c2b7cacf584d34ed9b43c2971b1feb5c5a2115eaae4b10a1d96a5d
Untagged: localhost:33258/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33258/kubevirt/example-hook-sidecar@sha256:febdd7f7de22dbb584f7541cc96d17dba9480cdab97842933c8102c410d9e283
Deleted: sha256:c496bf2306a668048a28c1105e313d780298fd6e250e00b31f50bae55d5aa6d2
Deleted: sha256:c31b253d5ae79f21eaa416afaed9f53523271c4bba0d613cf10bbbefdb71bca9
Deleted: sha256:9d57804874846b92d55a55bace870402197139cccac6f70e6daa630c05d9dce2
Deleted: sha256:53ac87ba297035ebe5f7f3f2d03ef82f6e40f6b83c689d91a5e4d230e9eeb212
Sending build context to Docker daemon 5.632 kB
Step 1/12 : FROM fedora:28
 ---> cc510acfcd70
Step 2/12 : ENV LIBVIRT_VERSION 4.2.0
 ---> Using cache
 ---> dcc6695ef4c0
Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
 ---> Using cache
 ---> 724322f9fbd0
Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all
 ---> Using cache
 ---> f0fe3aa82c4c
Step 5/12 : ENV GIMME_GO_VERSION 1.10
 ---> Using cache
 ---> 7908885d79a9
Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 42679488ba0f
Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 3e90be7b7e02
Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf
 ---> Using cache
 ---> 777bd794ff51
Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install
 ---> Using cache
 ---> 1118529380fd
Step 10/12 : RUN pip install j2cli
 ---> Using cache
 ---> 811419e85d66
Step 11/12 : ADD entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 826bb62508db
Step 12/12 : ENTRYPOINT /entrypoint.sh
 ---> Using cache
 ---> b69a3f94b204
Successfully built b69a3f94b204
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
Sending build context to Docker daemon 5.632 kB
Step 1/12 : FROM fedora:28
 ---> cc510acfcd70
Step 2/12 : ENV LIBVIRT_VERSION 4.2.0
 ---> Using cache
 ---> dcc6695ef4c0
Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
 ---> Using cache
 ---> 724322f9fbd0
Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all
 ---> Using cache
 ---> f0fe3aa82c4c
Step 5/12 : ENV GIMME_GO_VERSION 1.10
 ---> Using cache
 ---> 7908885d79a9
Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 42679488ba0f
Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 3e90be7b7e02
Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf
 ---> Using cache
 ---> 777bd794ff51
Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install
 ---> Using cache
 ---> 1118529380fd
Step 10/12 : RUN pip install j2cli
 ---> Using cache
 ---> 811419e85d66
Step 11/12 : ADD entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 826bb62508db
Step 12/12 : ENTRYPOINT /entrypoint.sh
 ---> Using cache
 ---> b69a3f94b204
Successfully built b69a3f94b204
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
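The readiness gate earlier in the trace pipes `kubectl get nodes --no-headers` through `grep NotReady` and only proceeds when the match list is empty. That check can be sketched as a pure text filter, so it is testable without a cluster; the sample node table below is illustrative, not taken from this run:

```shell
#!/usr/bin/env bash
# Print the names of nodes whose STATUS column is anything other than
# "Ready", given `kubectl get nodes --no-headers`-style input on stdin.
# Empty output is the "all nodes ready" condition the CI script waits for.
not_ready_nodes() {
    awk '$2 != "Ready" { print $1 }'
}

# Illustrative input; a live run would use:
#   cluster/kubectl.sh get nodes --no-headers | not_ready_nodes
not_ready_nodes <<'EOF'
node01 Ready master 57s v1.11.0
node02 NotReady <none> 21s v1.11.0
EOF
```

Matching on the STATUS field rather than `grep NotReady` over the whole line also avoids false positives from node names or labels that happen to contain the string.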