+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release
+ [[ k8s-1.10.4-release =~ openshift-.* ]]
+ [[ k8s-1.10.4-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.4
+ KUBEVIRT_PROVIDER=k8s-1.10.4
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/08/02 22:17:49 Waiting for host: 192.168.66.101:22
2018/08/02 22:17:52 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/02 22:18:04 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 26.503389 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:0b03bc075b58fa27f9ddb677592c7a95e12e80c4609b2b8856553b30aed2a10a

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/08/02 22:18:46 Waiting for host: 192.168.66.102:22
2018/08/02 22:18:49 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/08/02 22:19:01 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39611920 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
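At this point the provider cluster is up: kubeadm initialized node01 as the master, the flannel CNI manifest was applied, the master taint was removed so workloads can schedule on node01, and node02 joined with the bootstrap token. The two 'Cluster "kubernetes" set.' lines are kubectl's confirmation that a cluster entry was written into a local kubeconfig from the copied admin.conf. A minimal sketch of driving the same provider from a KubeVirt checkout (the wrapper script and variables are the ones already exported above; the commands shown are only an example, not part of this job's output):

    export KUBEVIRT_PROVIDER=k8s-1.10.4
    export KUBEVIRT_NUM_NODES=2
    make cluster-up                  # bring up the same two-node test cluster
    cluster/kubectl.sh get nodes     # kubectl against that cluster, as used in the next step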
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    35s       v1.10.4
node02    NotReady  <none>    10s       v1.10.4
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n 'node02 NotReady <none> 10s v1.10.4' ']'
+ echo 'Waiting for all nodes to become ready ...'
Waiting for all nodes to become ready ...
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    36s       v1.10.4
node02    Ready     <none>    11s       v1.10.4
+ kubectl_rc=0
+ sleep 10
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    46s       v1.10.4
node02    Ready     <none>    21s       v1.10.4
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33041/kubevirt/virt-controller:devel
Untagged: localhost:33041/kubevirt/virt-controller@sha256:e4cb76fbbe4968113edce05476147e891ad8510d0ed7dff907e24b9630ba98f8
Deleted: sha256:0cc77b243bfa2943c4215cd46ba89dc68d9004b2403b9eb1e9f8916ea1163bd8
Deleted: sha256:aa3ef0484a9f2b87313d9b18d81f4fd9962b6ec1ab89aa129234e6393cabbd3a
Deleted: sha256:62c9aa15d3c07ac3541c32e1de8edcd989e6cd141b57b3f2350434515e1c01bb
Deleted: sha256:a6f1970acbbeeacd4344167e10ddfc98dd1677332249eaca3fdb612cf40f79cb
Untagged: localhost:33041/kubevirt/virt-launcher:devel
Untagged: localhost:33041/kubevirt/virt-launcher@sha256:1ae30c9688d2c325b0035d13467f99831c881a64d1077965fba375ba25845be6
Deleted: sha256:a05d3cb292639fdf93154e545c911083f26f05f155728dbfe9adedad42af95d0
Deleted: sha256:c8d380e3869dbd0a74736169735ce160fc6858c4287bfdef7803cc66f5d4ba89
Deleted: sha256:86f24956859000f9ca75cac56f65a71fcf46d0e44a698be08d0a9e9e6903fc9a
Deleted: sha256:ab9d1f37bb9d9b5535391e463be54335d0c344d60f1a61912270562deb8f451b
Deleted: sha256:080eee6a96a911498122b7596871ee911f1e4c43773081a7878d88fbe46472f9
Deleted: sha256:01e34ae72d0026285fb4b298fbb8f584928afee2b067f0061988b86a1ddab890
Deleted: sha256:bba6db3fd86568be11271870fcc722b19096650813969260a4bdfe24ad4e5719
Deleted: sha256:fccb9306ddbea78415412d53a2ba59a25d7ecdf5292662a24615ca3b2bfffd65
Deleted: sha256:d1c2c23a864a934b251f48076f1b41f72dd31a3534ae6a2e0b174615d0376fb8
Deleted: sha256:8d126f1d5ced1e8421c058309d42f3a39292f9dd60a67b0a8831cbb37cedb361
Deleted: sha256:b1fe6719f73a77ab01cd211c1c1ae451494da78fe9cf20e93037b1633b4e0ef3
Deleted: sha256:db54bc0705551c7e02383cb020f017d9ed595068fdd096e82fc9c1c2857b7527
Untagged: localhost:33041/kubevirt/virt-handler:devel
Untagged: localhost:33041/kubevirt/virt-handler@sha256:b34e0f33bc16a08165c8ba98c8dde004b998b4786cc4a2f632b7feee0e471542
Deleted: sha256:79d77d21cf537a25586a10369c912f6052a5b17797eb9536382d2cc2527fb6c8
Deleted: sha256:a137c28aacb80d696204c7a1f8c002d271d3e772fdd12178ae4dbd2679acf543
Deleted: sha256:b4ec11c887fd684baac481f01186e93e82208acf10bc093970c21cee4a64edc5
Deleted: sha256:77f4d61820e9baa1e179c940a34c51ce9c7cd296b74fa385d5e106d73f492c7a
Untagged: localhost:33041/kubevirt/virt-api:devel
Untagged: localhost:33041/kubevirt/virt-api@sha256:365f2cfbc9f8fcda66e72a518d369b76ebcc4708e3f23634929f47fc59af479f
Deleted: sha256:7f34f989d2b327316f4604d6f5e2bc3fdeee5dd20c80fb077b0f486b3937e689
Deleted: sha256:91a053eb27890c5924ce06c9a340352b01053ee1a5e02cfec385155c70d0fc14
Deleted: sha256:0ad02c5fe958e95e6960d61824440832eaca801947d2d9e335b17fd68630333c
Deleted: sha256:bad42b5bc589a61a28b850c866a544a56b7fa505178f4ebd8da61f00225ac088
Untagged: localhost:33041/kubevirt/subresource-access-test:devel
Untagged: localhost:33041/kubevirt/subresource-access-test@sha256:03a81db04bd5afc2bc3d761806d89ec1152a020682ad7ed04100d9e892741be1
Deleted: sha256:a39af2c4629081aff49e0ec5bbbb709145244a824dd4e988b6e8a83d4478afbc
Deleted: sha256:cfa70404d3b0ab5790101a73ed4ea5ae70eb6646aa872fcf31b6185941f404af
Deleted: sha256:8b79d869464fcd16cdb171d9ba9cf7401e778924af3a6fb0192b266e7bb9e830
Deleted: sha256:8621634803a08dee80e346f027602506ad06898a2676218ab15dd91bb05c4ad4
Untagged: localhost:33041/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33041/kubevirt/example-hook-sidecar@sha256:039f24ac72804541cc0921dc1bf1df1fee49b99e170b180f0a0b3f240a16c81d
Deleted: sha256:9cab8342e050786f07fdc29f658d50bbda45be92804314896771b700525f6dd5
Deleted: sha256:258b762a80c63a2a62f5f4d11027e98fc6ec55df3f2234d99f152bb72a2677c9
Deleted: sha256:666f1609e82f65e6060fc75e5e2fecdd47e3754f4cab9021d81918115310a105
Deleted: sha256:cb1671ac350fd7b27687c410576cb7a6b3c8d79a1c3af229897ca2f49f57e7aa
sha256:559a45ac63f40982ccce3a1b80cb62788566f2032c847ad9c45ee993eb9c48d4
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:559a45ac63f40982ccce3a1b80cb62788566f2032c847ad9c45ee993eb9c48d4
go version go1.10 linux/amd64
go version go1.10 linux/amd64
# kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap
pkg/virt-launcher/virtwrap/manager.go:144:54: vmi.Spec.Domain.CPU.PlacementPolicy undefined (type *"kubevirt.io/kubevirt/pkg/api/v1".CPU has no field or method PlacementPolicy)
# kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap
pkg/virt-launcher/virtwrap/manager.go:144:54: vmi.Spec.Domain.CPU.PlacementPolicy undefined (type *"kubevirt.io/kubevirt/pkg/api/v1".CPU has no field or method PlacementPolicy)
make[1]: *** [build] Error 2
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh
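The job fails before any functional tests run: make cluster-sync first rebuilds the KubeVirt binaries via hack/dockerized, and the Go compiler rejects pkg/virt-launcher/virtwrap/manager.go:144 because it references vmi.Spec.Domain.CPU.PlacementPolicy, a field the v1.CPU API type in this tree does not define. The build step exits with Error 2, cluster-build fails, and the EXIT trap set at the top of the job tears the provider down with make cluster-down. A minimal local reproduction sketch, assuming the same checkout and Go toolchain (the grep path for the v1 API package is an assumption, not taken from this log):

    cd /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt
    # Is PlacementPolicy defined anywhere in the v1 API package? (path assumed)
    grep -rn "PlacementPolicy" pkg/api/v1/ || echo "PlacementPolicy not defined in pkg/api/v1"
    # Reproduce the same compile error without the full dockerized build
    go build ./pkg/virt-launcher/virtwrap/...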