+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release@2
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release@2
+ [[ windows2016-release =~ openshift-.* ]]
+ [[ windows2016-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/08/01 13:00:15 Waiting for host: 192.168.66.101:22
2018/08/01 13:00:18 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/01 13:00:26 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/01 13:00:31 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s
2018/08/01 13:00:36 Connected to tcp://192.168.66.101:22
++ systemctl status docker
++ wc -l
++ grep active
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0801 13:00:37.513165    1257 feature_gate.go:230] feature gates: &{map[]}
I0801 13:00:37.666189    1257 kernel_validator.go:81] Validating kernel version
I0801 13:00:37.666428    1257 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 60.011064 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:68584d4cda4a1e7a42af8a3626aa67dcaa34bf5d16ba088d0e91decdff508c93 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node/node01 untainted + kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml storageclass.storage.k8s.io/local created configmap/local-storage-config created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created serviceaccount/local-storage-admin created daemonset.extensions/local-volume-provisioner created 2018/08/01 13:01:58 Waiting for host: 192.168.66.102:22 2018/08/01 13:02:01 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/01 13:02:09 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/01 13:02:15 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s 2018/08/01 13:02:20 Connected to tcp://192.168.66.102:22 ++ wc -l ++ systemctl status docker ++ grep active + [[ 1 -eq 0 ]] + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. 
Provide the missing builtin kernel ipvs support I0801 13:02:20.910059 1269 kernel_validator.go:81] Validating kernel version I0801 13:02:20.910409 1269 kernel_validator.go:96] Validating kernel config [discovery] Trying to connect to API Server "192.168.66.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [preflight] Activating the kubelet service [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap... [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 38739968 kubectl Sending file modes: C0600 5450 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. + set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 1m v1.11.0 node02 Ready 29s v1.11.0 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep NotReady + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 1m v1.11.0 node02 Ready 30s v1.11.0 + make cluster-sync ./cluster/build.sh Building ... 
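The readiness gate in the trace above is a plain shell loop: it retries kubectl get nodes --no-headers, greps the output for NotReady, and only proceeds once nothing matches. For anyone folding the same check into Go tooling, here is a minimal sketch using client-go; the kubeconfig path, the 5-minute timeout, and the pre-context List signature (consistent with the go1.10-era toolchain seen later in this log) are assumptions, not part of the CI scripts:

    package main

    import (
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForNodesReady polls the API server until every node reports the
    // Ready condition, mirroring the NotReady grep in the shell loop above.
    func waitForNodesReady(client kubernetes.Interface, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		nodes, err := client.CoreV1().Nodes().List(metav1.ListOptions{})
    		if err == nil && len(nodes.Items) > 0 {
    			allReady := true
    			for _, node := range nodes.Items {
    				ready := false
    				for _, cond := range node.Status.Conditions {
    					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
    						ready = true
    					}
    				}
    				allReady = allReady && ready
    			}
    			if allReady {
    				return nil
    			}
    		}
    		time.Sleep(5 * time.Second) // same 5s backoff the provisioning log uses
    	}
    	return fmt.Errorf("timed out waiting for all nodes to become Ready")
    }

    func main() {
    	// /etc/kubernetes/admin.conf is the kubeconfig the log copies off node01.
    	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    	if err != nil {
    		panic(err)
    	}
    	if err := waitForNodesReady(kubernetes.NewForConfigOrDie(config), 5*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("Nodes are ready:")
    }

In this run the shell variant succeeds (both nodes report Ready within about a minute), so the job proceeds to the build step.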
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33047/kubevirt/virt-controller:devel
Untagged: localhost:33047/kubevirt/virt-controller@sha256:eeb9265585a0bc4d879f9eb570fd3df7b0c9b5237a912baa37a134d5fddfce53
Deleted: sha256:ab966a1c5f5e2719dc30e0f48746f0e49711fcb1b8fe120b7998866ce877b89c
Deleted: sha256:507759a1161eee0b944e57d98d1dfc3fde1c5ee2fa8cabd04a07e49a043eebdd
Deleted: sha256:5b0abaf353bec545ff98ca84daa6ff5615f06ddc57bea6962d51899f16e4047e
Deleted: sha256:f0a7eacc6ab530c5bcc09e405b01d2c9498f074b69417860b5b9f820f25918a8
Untagged: localhost:33047/kubevirt/virt-launcher:devel
Untagged: localhost:33047/kubevirt/virt-launcher@sha256:6ed79854202d99ef63c8393cf45f6a0c8c9a9e6219061010d2468586f91399b3
Deleted: sha256:c1994fca353a000925d24e4c5a5a3f579a3d798fc7c92c234395665996e5cb7e
Deleted: sha256:7b1b52c5b309786b56f881a90adfe7971e5f7b40db76347feb5781b377161bc3
Deleted: sha256:7ce4e11420dac1bd87304cead507092d0b693beac4186d0dc76a834a929c33eb
Deleted: sha256:42d4bd49dfbfa88aebed8f0e6e02747bddb59d9138736f2833d0cf9d22556b72
Deleted: sha256:697a25e7cec9f68d2ebf0f0691fa1805828cc66e95773b3ddf2224cc43e861ca
Deleted: sha256:03c2899aafb6c94255caadaa6852d3cde9629b1607aec3d3c54412244b26ae0a
Deleted: sha256:ab85a7ab124228bd59c6d97f298b6a718e3bcc83f8613fe08b8a8a3352f20963
Deleted: sha256:03d893cae7db744870472a5a154c93c3d76053cea3ba5ca90a9bfd7a73202273
Deleted: sha256:ee5d1d053e9a81fffead730fa1ff21f72c78dcd21512fdd12c21aba200ff7198
Deleted: sha256:66d8f433c93cc27b3c0e927f4ae3e4e8d916f315d0dd2d0488f1ce62428f3641
Deleted: sha256:5688d73c01836e82e7629955bf0a2b945141ca08319e2faf1b647e1966258cf9
Deleted: sha256:a18c95021a21caace93b635efc217386009109ebe8557cc2066462649fb61461
Untagged: localhost:33047/kubevirt/virt-handler:devel
Untagged: localhost:33047/kubevirt/virt-handler@sha256:e1d81e3d36e6e3c61d3df135cb699773f11ec670b03c7ea3b60666b1be7ce244
Deleted: sha256:7844e8ae32783e9bbb36a9ae1d726006379013f8b38dd7265531a6a1b6f88821
Deleted: sha256:80bd99c850f8ab5b5a4e4f898994304209156d569a7c730b358c303a53e5e418
Deleted: sha256:9962a9888a4489cecdb820f16c5e4514f83f252cca3493230b34c30c4baff792
Deleted: sha256:aaff45bb01c5c6ddb967905bacb10d8be9ea71622b61227a60619d0b04ff5e3f
Untagged: localhost:33047/kubevirt/virt-api:devel
Untagged: localhost:33047/kubevirt/virt-api@sha256:8dd925cee657f27c86f9314dfeaef5eb34f1abdfaee25176bf459af79314b002
Deleted: sha256:87197f6deef1b964c5170713c893df19f4887b0fff298cccbc845b6d9ed12a5c
Deleted: sha256:9f46f0e700fcaf4d0e9f25c227662c9c7a8ea71c772818c261e69a8cc26df36b
Deleted: sha256:6c40850ab3f14a2a6f1016f851cda605c5e80904db8ca21a5956101bd0031160
Deleted: sha256:6b7eb30c5a2a94955adefeff8e700fc0b7696e018b610cc71945df54a64372b8
Untagged: localhost:33047/kubevirt/subresource-access-test:devel
Untagged: localhost:33047/kubevirt/subresource-access-test@sha256:48765700f03bffca6e8812b946ef33be6adedfd43bbf0c2759b4d74fcb12c3a6
Deleted: sha256:214059f2df1bd32caea3018b381c64590a5d969567ca94e418b26ed161c68ae2
Deleted: sha256:62b8c6ec940ef9221ab23802f57a2a0245aee74b93878bd6088a97212de9a367
Deleted: sha256:8b415637f0b5cb90da9de9943856ef29bde6796a9b6607ed3d4a3de7595775f3
Deleted: sha256:cf455f5678531f1c3b379c71a25fac970c63a4661c006d19a6048f88c567aa74
Untagged: localhost:33034/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33034/kubevirt/example-hook-sidecar@sha256:e51cda53b1e919917bf004de4992c5279a6586bf476ff22b8ba788c86bf3e131
Deleted: sha256:0f3f62a1dc035b93b83d47687fee50601777c04c6e1862e5e126340ff752cf94
Deleted: sha256:6dc9c5aae270f3435e51e3397d64932dff738976a2e369152e9e7289a10fd519
Deleted: sha256:008751890b8305ef27b248760ffba945d182f4643178db57104fa9a41ff913c5
Deleted: sha256:30f96ac6be218da3183a441ff86da3028a0694d7bcc9d3d56baa682fe08bc897
sha256:b69a3f94b2043cd36cc41eb5d9446480e0a640962e468ab72c3cc51f2b89386a
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release@2/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:b69a3f94b2043cd36cc41eb5d9446480e0a640962e468ab72c3cc51f2b89386a
go version go1.10 linux/amd64
go version go1.10 linux/amd64
# kubevirt.io/kubevirt/tests_test
tests/vmi_networking_test.go:470:4: undefined: waitUntilVMIReady
make[1]: *** [build] Error 2
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release@2/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh
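The job does not fail on cluster provisioning: both nodes came up, and the final make cluster-down is just the EXIT trap firing. The actual failure is the compile error above, where tests/vmi_networking_test.go references a helper named waitUntilVMIReady that is undefined at this revision, so make cluster-sync aborts before any tests run. For orientation only, here is a hypothetical sketch of what such a helper typically does in the KubeVirt functional tests: poll the VMI until it reaches the Running phase. Every name and signature below is an assumption for illustration, not the upstream fix:

    package tests_test

    import (
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    	v1 "kubevirt.io/kubevirt/pkg/api/v1"
    	"kubevirt.io/kubevirt/pkg/kubecli"
    )

    // waitUntilVMIReady is a hypothetical reconstruction of the missing helper:
    // poll the VirtualMachineInstance until it reports the Running phase or the
    // timeout expires. Upstream variants additionally wait for a console login;
    // that part is omitted here.
    func waitUntilVMIReady(virtClient kubecli.KubevirtClient, vmi *v1.VirtualMachineInstance, timeout time.Duration) (*v1.VirtualMachineInstance, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		fetched, err := virtClient.VirtualMachineInstance(vmi.Namespace).Get(vmi.Name, &metav1.GetOptions{})
    		if err != nil {
    			return nil, err
    		}
    		if fetched.Status.Phase == v1.Running {
    			return fetched, nil
    		}
    		time.Sleep(time.Second)
    	}
    	return nil, fmt.Errorf("timed out waiting for VMI %s/%s to become ready", vmi.Namespace, vmi.Name)
    }

Whatever its real shape, the fix belongs in the tests package; the teardown that follows in the log is expected cleanup, not a second failure.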