+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release@2
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release@2
+ [[ windows2016-release =~ openshift-.* ]]
+ [[ windows2016-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/08/02 21:20:49 Waiting for host: 192.168.66.101:22
2018/08/02 21:20:52 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/02 21:21:00 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/02 21:21:05 Connected to tcp://192.168.66.101:22
++ grep active
++ systemctl status docker
++ wc -l
+ [[ 0 -eq 0 ]]
+ sleep 2
++ systemctl status docker
++ wc -l
++ grep active
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0802 21:21:08.409700    1259 feature_gate.go:230] feature gates: &{map[]}
I0802 21:21:08.497848    1259 kernel_validator.go:81] Validating kernel version
I0802 21:21:08.498067    1259 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 54.505839 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:1f4fdc00e812683035b8e637a41f2a678715cc5749ec428ad4fc07bf5f8f0dba + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node/node01 untainted + kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml storageclass.storage.k8s.io/local created configmap/local-storage-config created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created serviceaccount/local-storage-admin created daemonset.extensions/local-volume-provisioner created 2018/08/02 21:22:24 Waiting for host: 192.168.66.102:22 2018/08/02 21:22:27 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/02 21:22:35 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/02 21:22:40 Connected to tcp://192.168.66.102:22 ++ grep active ++ systemctl status docker ++ wc -l + [[ 0 -eq 0 ]] + sleep 2 ++ systemctl status docker ++ grep active ++ wc -l + [[ 1 -eq 0 ]] + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. 
Provide the missing builtin kernel ipvs support
I0802 21:22:44.961021    1263 kernel_validator.go:81] Validating kernel version
I0802 21:22:44.961477    1263 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 38739968 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01   Ready    master   1m    v1.11.0
node02   Ready    <none>   39s   v1.11.0
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    1m        v1.11.0
node02    Ready     <none>    39s       v1.11.0
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33146/kubevirt/virt-controller:devel
Untagged: localhost:33146/kubevirt/virt-controller@sha256:6906495b271f7fda4c32b94d4460eecb91d9132b2927773f6d31a7444b74660c
Deleted: sha256:b44e22b2fbef40a5c9cb62faf9c4184d0470b15a1fc9587536b4fd596df975fd
Deleted: sha256:d5f2119e15042982aa215e1dbc7dd4b979900402497c034f4d6ffcab1b8b3066
Deleted: sha256:1de41a2b20e2b05db01dbb4a5b24d7d16de44a97b4f074028399f5323efb6070
Deleted: sha256:61d31efc3c02323c941f9579698a96c632ef6ddc773c73345b3e43c714bea930
Untagged: localhost:33146/kubevirt/virt-launcher:devel
Untagged: localhost:33146/kubevirt/virt-launcher@sha256:923f8a385bdef58d13b6cf711bac8d21256880301ac7f1936b6822a4e20cfae2
Deleted: sha256:c9b4db6d3b316399c5499829a4c9c9e3ddc82e76f9659af0feb9c4ad74172619
Deleted: sha256:ff36daef95bf6caf21d3a59c63bb806b5d3b49f3e3d7439a3b179f084a7ee014
Deleted: sha256:687ef97bc320a1480f10ad52e6eb3e823616e24e63e72f031fbd7d82202755ab
Deleted: sha256:5904e0f6d40b7d8e2ad02fc2cf7c3050101d7ca42fd1798709dcf1e6a51aedcd
Deleted: sha256:05781f3813335a75ce9ebd82a0422a7eca1d5a1db6c115323358e0ba7e9cce84
Deleted: sha256:ac10ffa4ae5c478803f1ea289e51e77044b5cfd3d311b11d73969bf715f3de2d
Deleted: sha256:5453f93c87ff8e86dbcb84f7b5cbdae651bca79a69ff68373ef19e613944f6b0
Deleted: sha256:c5e24d64879a8ed8426e4a8f23d8596dc652c1c8fbf865369cf6a81bfe5fe5c6
Deleted: sha256:0eaa4165a1aed8d5187adc4f37196efe165ebc44b75c3cb83e1abc5289cd5914
Deleted: sha256:60ac2b1a7ccf8c916a46c069a52df74a95f73c0461b7befa81e4efdbc01023b9
Untagged: localhost:33146/kubevirt/virt-handler:devel
Untagged: localhost:33146/kubevirt/virt-handler@sha256:a13235d85c627a2e9ded7e4c8ce29e842b0da4fb4a6c33123222bd9f4dd3f949
Deleted: sha256:3702fc0912cca50d561cadef906a24747f301f9271e92601eba73b3884ba6465
Deleted: sha256:68a6851075312720fa9025782013d848bc648ba54cdf06712ea57f374b5931c9
Deleted: sha256:eb37a72da95ab5116e6a3c998291a5076db3d21e95db882330ec66bb8424712d
Deleted: sha256:ce9ad0654926acb9563cf9d802b1f9ed8d2ffe17369752caeeb7c7514104866f
Untagged: localhost:33146/kubevirt/virt-api:devel
Untagged: localhost:33146/kubevirt/virt-api@sha256:7dc69a7b3137959e344e9c52853af156454a04a577057609d1bf3413fffeae6c
Deleted: sha256:bc5bcc59ab78cc3bf83aa21c7ff9724e5f82148e9ae602933cbdc8a85dfaf548
Deleted: sha256:64c01e8334cfb0b744cbe51847c2634cec5aa5b1e657671ed766fee90dcddd0e
Deleted: sha256:c218b51ce544a6d4e204b0393246bfdcb6ebd03eae6ab82b9fa765a3179bc912
Deleted: sha256:745b29d946575c0eaa28f5b2af5caacff9a6e72ff17c153d834cd76e9a57af4c
Untagged: localhost:33146/kubevirt/subresource-access-test:devel
Untagged: localhost:33146/kubevirt/subresource-access-test@sha256:2d10c2dfcd7648a518a8d30b5028026559f06f98685b5ff283d585f74baaef62
Deleted: sha256:3483530755371a42a98d8b388ea2f0883cc5bfa035f0e794696285aa11b13b58
Deleted: sha256:e87f590dd9036b46f20fd97fca5b8c1657687c0554247478a1f3c8db88b09d10
Deleted: sha256:b15c7d335e5fce3a4e3ac3162a6796cffedf360ab435209554566b040f5885a5
Deleted: sha256:b3e32fa87cd7de1953fc37c6baa73aa77e36dc1c0e3f6dde920a63de6f63bdd7
Untagged: localhost:33122/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33122/kubevirt/example-hook-sidecar@sha256:02f46166deaac98633b275fc8079b87477a85d400d2c3ad117a5f5fcb3521756
Deleted: sha256:5979fc3a723b89ee5cae6b4e26860f1999177bcaafb136f16c8bd573df7ed842
Deleted: sha256:f25ead9942baeb8b10647ca9e0d9f752f50b909aeac8b82ea4f5ae8656577d1c
Deleted: sha256:c61fd7f987f6fb5bb64717be1966d201caed20016b60fb0f18b3d7973164310a
Deleted: sha256:db4af286cf65236ff49c1a9a4722b05fe539185935d4373e3ad13b7a108138e9
sha256:b69a3f94b2043cd36cc41eb5d9446480e0a640962e468ab72c3cc51f2b89386a
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release@2/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:b69a3f94b2043cd36cc41eb5d9446480e0a640962e468ab72c3cc51f2b89386a
go version go1.10 linux/amd64
go version go1.10 linux/amd64
# kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap
pkg/virt-launcher/virtwrap/manager.go:144:54: vmi.Spec.Domain.CPU.PlacementPolicy undefined (type *"kubevirt.io/kubevirt/pkg/api/v1".CPU has no field or method PlacementPolicy)
# kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap
pkg/virt-launcher/virtwrap/manager.go:144:54: vmi.Spec.Domain.CPU.PlacementPolicy undefined (type *"kubevirt.io/kubevirt/pkg/api/v1".CPU has no field or method PlacementPolicy)
make[1]: *** [build] Error 2
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release@2/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh
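
Analysis: the job failed in make cluster-sync with a Go compile error, not an infrastructure problem. pkg/virt-launcher/virtwrap/manager.go line 144 dereferences vmi.Spec.Domain.CPU.PlacementPolicy, but the CPU type in kubevirt.io/kubevirt/pkg/api/v1 in this tree declares no such field, so the change under test appears to be missing its API-side counterpart. A minimal Go sketch of the mismatch follows; only the field name PlacementPolicy comes from the error text, while the struct shape and the string type are illustrative assumptions, not KubeVirt's actual API.

    // Sketch only: illustrates the compile error above, not KubeVirt's
    // real pkg/api/v1 source.
    package v1

    // CPU as the compiler sees it in this tree: there is no PlacementPolicy
    // member, so manager.go:144's vmi.Spec.Domain.CPU.PlacementPolicy
    // cannot resolve and the build stops with Error 2.
    type CPU struct {
        Cores uint32 `json:"cores,omitempty"` // illustrative field
    }

    // For virtwrap to build, the referenced field would have to exist on
    // this struct, e.g. (hypothetical name from the error text, type assumed):
    //
    //     PlacementPolicy string `json:"placementPolicy,omitempty"`

Once the build fails, the EXIT trap installed at the top of the log fires and runs make cluster-down, which is why the transcript ends with ./cluster/down.sh.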