+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release@2
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release@2
+ [[ windows2016-release =~ openshift-.* ]]
+ [[ windows2016-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/08/10 07:34:44 Waiting for host: 192.168.66.101:22
2018/08/10 07:34:47 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/10 07:34:55 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/10 07:35:00 Connected to tcp://192.168.66.101:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0810 07:35:01.523186 1292 feature_gate.go:230] feature gates: &{map[]}
I0810 07:35:01.624611 1292 kernel_validator.go:81] Validating kernel version
I0810 07:35:01.625236 1292 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
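The "Waiting for host … Problem with dial … Sleeping 5s" lines above come from a dial-and-retry loop that polls the node's SSH port until it answers. A minimal bash sketch of the same pattern (the `wait_for_host` name and the retry/interval defaults are illustrative, not taken from the CI scripts):

```shell
#!/usr/bin/env bash
# Poll host:port until a TCP connect succeeds, sleeping between attempts,
# mirroring the "Problem with dial ... Sleeping 5s" loop in the log.
wait_for_host() {
    local host=$1 port=$2 retries=${3:-60} interval=${4:-5}
    local i
    for ((i = 0; i < retries; i++)); do
        # bash's /dev/tcp/<host>/<port> pseudo-file attempts a TCP connect on open;
        # timeout(1) bounds the attempt so an unroutable host does not hang.
        if timeout 1 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
            echo "Connected to tcp://${host}:${port}"
            return 0
        fi
        echo "Problem with dial: ${host}:${port}. Sleeping ${interval}s"
        sleep "$interval"
    done
    return 1
}
```

Usage would be e.g. `wait_for_host 192.168.66.101 22` before attempting ssh/scp against the freshly booted VM.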
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 55.012266 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:98c0c4772fabbd106861eaa1a52de527e0bca2c9c15fc3faebb6f7985f46b6ed

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node/node01 untainted
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml
storageclass.storage.k8s.io/local created
configmap/local-storage-config created
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created
clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding
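The join command printed above pins the cluster CA via `--discovery-token-ca-cert-hash`, the SHA-256 of the DER-encoded public key of the CA certificate. If that hash is lost, it can be recomputed on the master from `ca.crt`; this is the recipe documented for kubeadm, wrapped in a small helper (the `ca_cert_hash` name and default path are illustrative):

```shell
# Recompute the sha256 discovery hash for a cluster CA certificate:
# extract the public key, DER-encode it, and hash the result.
ca_cert_hash() {
    local cert=${1:-/etc/kubernetes/pki/ca.crt}
    printf 'sha256:%s\n' "$(openssl x509 -pubkey -in "$cert" \
        | openssl pkey -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex \
        | sed 's/^.* //')"   # strip the "(stdin)= " prefix from dgst output
}
```

The output plugs straight into `kubeadm join <api-server>:6443 --token <token> --discovery-token-ca-cert-hash $(ca_cert_hash)`.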
created
role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created
rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created
serviceaccount/local-storage-admin created
daemonset.extensions/local-volume-provisioner created
2018/08/10 07:36:17 Waiting for host: 192.168.66.102:22
2018/08/10 07:36:20 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/08/10 07:36:28 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/08/10 07:36:34 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s
2018/08/10 07:36:39 Connected to tcp://192.168.66.102:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0810 07:36:41.099236 1299 kernel_validator.go:81] Validating kernel version
I0810 07:36:41.099675 1299 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 38739968 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
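The preflight warning above is kubeadm noticing that the IPVS kernel modules are not loaded, so kube-proxy would fall back to iptables mode. One way to script that check is to diff kubeadm's required list against the loaded modules; a sketch, with the `missing_ipvs_modules` helper name being illustrative (actually loading the modules afterwards needs root and `modprobe`):

```shell
# Print which of kubeadm's required IPVS modules are absent from a
# loaded-module list supplied on stdin, one module name per line.
missing_ipvs_modules() {
    local loaded m
    loaded=$(cat)   # read the module list once; grep per-module below
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
        grep -qx "$m" <<<"$loaded" || echo "$m"
    done
}
```

A typical invocation would be `lsmod | awk 'NR>1{print $1}' | missing_ipvs_modules | xargs -r -n1 sudo modprobe` to load whatever is missing.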
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 1m v1.11.0
node02 Ready 26s v1.11.0
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME STATUS ROLES AGE VERSION
node01 Ready master 1m v1.11.0
node02 Ready 27s v1.11.0
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33486/kubevirt/virt-controller:devel
Untagged: localhost:33486/kubevirt/virt-controller@sha256:1b68f2eb71d114a3b107db788e86acd0e2f51962940b6185ca7f242b67416423
Deleted: sha256:31b8e42d754a26ba44dd6e8e757bb9737de8a899cb31cfd7256007d1e307a8db
Deleted: sha256:26d10556375c7c0efc994a967f1a4b0a5e914b8994e842692846037e9e3f0e07
Deleted: sha256:8293164588aeca49e54f8259eda16ae6a620f45587ad5613f0b4ea26231532c6
Deleted: sha256:dd759402183b96efffa556c06b08c0555103a7cdf3c4e6091d5be468af6c591f
Untagged: localhost:33486/kubevirt/virt-launcher:devel
Untagged: localhost:33486/kubevirt/virt-launcher@sha256:1442dff1ab88b5132f22565eb73607d09b4251e8d3fd971d03b92765e46cd219
Deleted: sha256:1f6f30af2bacfb5c152cd309785b22829aca416c658d75f3176e0a499ff76e5c
Deleted: sha256:0b49ec2128e2053f165ba51ad7c978d066bed903a7c1038e215c34e7b1cfe177
Deleted: sha256:a0c07d0acbe6b74b63cef0b131eda8a12d88b7c307bfd7cd86bc145c2fbb193c
Deleted: sha256:f4754acc8125910cbc0694b40dc8fe3d9880de48f4f9ab415783bf401dc9ba11
Deleted: sha256:7b1daf24dd0af0494e990e41dbbf973688e5d9021f0e8052a8c1402031f92b96
Deleted: sha256:d1203244ba28de850a737129daffa65142dcda9b371ebfeeae0bfd5b673ea573
Deleted: sha256:ce6aacd377c5f8ae4b7f07ae9aeb808a7d8e5dea2f7f19bf82c988105d389abb
Deleted: sha256:a2518bfd64aae3bd482e9d0121bdf2d68547a0679a5867e66beb7dc71e90073e
Deleted: sha256:64c7e3ccf6fbe2e66d77810b7c9e7a84e966bcdaff8704e4563240bfb17321d3
Deleted:
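The readiness check above disables errexit, greps `kubectl get nodes --no-headers` for any `NotReady` entry, and only proceeds once none remain. The same idea generalizes to a retrying helper that waits until a command's output is free of a given marker (the `wait_until_clear` name and parameters are illustrative, not the script's own):

```shell
# Retry a command until its output is non-empty and no longer contains
# $marker, e.g.: wait_until_clear NotReady 30 5 kubectl get nodes --no-headers
wait_until_clear() {
    local marker=$1 retries=$2 interval=$3; shift 3
    local i out
    for ((i = 0; i < retries; i++)); do
        out=$("$@") || true                  # tolerate transient command failures
        if [ -n "$out" ] && ! grep -q "$marker" <<<"$out"; then
            return 0                         # marker gone, e.g. all nodes Ready
        fi
        sleep "$interval"
    done
    return 1
}
```

Checking the command's own exit code as well (as the log does with `kubectl_rc`) guards against treating an API-server outage as "ready".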
sha256:83059ab1724ad4ae470e4e5bc8826dd9395af4e785de00b57b9574439afc0adc
Deleted: sha256:2cc48a6e6bd0e259c327fdc0cfc6c92e34b2a3b3ffbb9c2216273f4359f4a463
Deleted: sha256:5a9823bcb4e67fe08e40e707cba686cbfbad99fa7449cf5e027aad1c4c7074f2
Untagged: localhost:33486/kubevirt/virt-handler:devel
Untagged: localhost:33486/kubevirt/virt-handler@sha256:66f7e210c902ef74f7319aa89550e67768df2f449ffe155a489db7611ec12d87
Deleted: sha256:0bb075a7848b7b9a90e1bf6790bcd237b8719d487ea6e0fb64cffba35e315f07
Deleted: sha256:17cf9b9301b47b53749ae8c6cfd7bb98df110ddb7570674e34dec6799a31ed4b
Deleted: sha256:4a0c4a098260f861c4e3713dcb149170511401905ee066bac43317dcf2efcd69
Deleted: sha256:c3348558db1525303bb9699e41a7a2fca197e8b6993a0f24b6ef42f578934d4d
Untagged: localhost:33486/kubevirt/virt-api:devel
Untagged: localhost:33486/kubevirt/virt-api@sha256:6268dd2413853d05f86b9e910b0a8bd429b1eccfee5a190098f35c4837c53ad4
Deleted: sha256:c6a1c2562cc11a40439ba371a701907a70adc4791a4dd4ff89789a315e8b2658
Deleted: sha256:15aa0f98b65355df2bb39d858cf2c68db6fe36163f08eabc9ccb6fd7c9c6b3bd
Deleted: sha256:06a3fa0c5d03f8b0912f215aa523182f15711b1867e0f08a5153e6a05b8251e3
Deleted: sha256:2f76629ccdba08824d09cbbdac2845323f022503224a2f2a15bbd2b5fcc5f591
Untagged: localhost:33486/kubevirt/subresource-access-test:devel
Untagged: localhost:33486/kubevirt/subresource-access-test@sha256:c1b547043ea0ad6f60bad8b683a14f14d2fc7ae2063958ca7883809b75f5057c
Deleted: sha256:89305f774715a45dc6618d1e8c8f4b321e356326bd9bfb9a3b28d0b8997ca983
Deleted: sha256:dee337cd33f18f92f2c9cd8d00b58bfb77bcc0a76d8eac090b9e66ef4f54e463
Deleted: sha256:e3d683df0a8fb5ec99c93beb88154cf348fda97e2ec5ee342af94e4e75cd5499
Deleted: sha256:24e2b157a0d2836e6acf66101df2476c3bb81f2aa6af8505d0b3cf59d1b645b2
Untagged: localhost:33486/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33486/kubevirt/example-hook-sidecar@sha256:5a66aa3fca2eb3e71891fa2cc8fb7f73ae91eb32bf84e8e99f8ae994ff79fe37
Deleted: sha256:af43d76e1542ab8415841e79d7aeec45978bd0a3b45839d3e544056ba103e00f
Deleted: sha256:0202be2301435d7b923ea34aba78404e4ac7882c99f79d14a8eb5ddf25814ab0
Deleted: sha256:889ae9b6c7ca35b00acbd116afdbccb0c92d021494d778f331ca4ffb382210c2
Deleted: sha256:55f67922bae47821532d90cd51cd27209cd22b3a2695408b0226380770856278
Sending build context to Docker daemon 5.632 kB
Step 1/12 : FROM fedora:28
 ---> cc510acfcd70
Step 2/12 : ENV LIBVIRT_VERSION 4.2.0
 ---> Using cache
 ---> dcc6695ef4c0
Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
 ---> Using cache
 ---> 724322f9fbd0
Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all
 ---> Using cache
 ---> f0fe3aa82c4c
Step 5/12 : ENV GIMME_GO_VERSION 1.10
 ---> Using cache
 ---> 7908885d79a9
Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 42679488ba0f
Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 3e90be7b7e02
Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf
 ---> Using cache
 ---> 777bd794ff51
Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install
 ---> Using cache
 ---> 1118529380fd
Step 10/12 : RUN pip install j2cli
 ---> Using cache
 ---> 811419e85d66
Step 11/12 : ADD entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 826bb62508db
Step 12/12 : ENTRYPOINT /entrypoint.sh
 ---> Using cache
 ---> b69a3f94b204
Successfully built b69a3f94b204
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release@2/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
Sending build context to Docker daemon 5.632 kB
Step 1/12 : FROM fedora:28
 ---> cc510acfcd70
Step 2/12 : ENV LIBVIRT_VERSION 4.2.0
 ---> Using cache
 ---> dcc6695ef4c0
Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
 ---> Using cache
 ---> 724322f9fbd0
Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all
 ---> Using cache
 ---> f0fe3aa82c4c
Step 5/12 : ENV GIMME_GO_VERSION 1.10
 ---> Using cache
 ---> 7908885d79a9
Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 42679488ba0f
Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 3e90be7b7e02
Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf
 ---> Using cache
 ---> 777bd794ff51
Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install
 ---> Using cache
 ---> 1118529380fd
Step 10/12 : RUN pip install j2cli
 ---> Using cache
 ---> 811419e85d66
Step 11/12 : ADD entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 826bb62508db
Step 12/12 : ENTRYPOINT /entrypoint.sh
 ---> Using cache
 ---> b69a3f94b204
Successfully built b69a3f94b204
go version go1.10 linux/amd64
go version go1.10 linux/amd64