+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/07 07:57:33 Waiting for host: 192.168.66.101:22
2018/07/07 07:57:36 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/07 07:57:44 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/07 07:57:49 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 25.503895 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:ebbfb61a65c9b8cc26fc1d6b25ca30e5878f1a5430cf51d18844825e758e1eaf

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/07 07:58:33 Waiting for host: 192.168.66.102:22
2018/07/07 07:58:36 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/07 07:58:44 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s
2018/07/07 07:58:49 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    36s    v1.10.3
node02    Ready     <none>    13s    v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    36s       v1.10.3
node02    Ready     <none>    13s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33778/kubevirt/virt-controller:devel
Untagged: localhost:33778/kubevirt/virt-controller@sha256:f023f2e3d4dc6fa4b7a88aff4b3e4ed9d19936d05bfe16897fda2e511b270429
Deleted: sha256:a60e784d7aaa9618110b2fc344d9c5015a1c75ec6a502dfc8565cbace5ef4655
Deleted: sha256:a0144e25bc567381d69477f21bcd81a0cd8b0f95d1f08ce536e4082613452ecb
Deleted: sha256:5409a52e0fb56657db1f93098cd4745c9014224acd0d53b2b898910e87083245
Deleted: sha256:4adfc4f292f5705a40bd36fb6b71284f4aa80fbceab3b9e2d3fc5661fd34332e
Untagged: localhost:33778/kubevirt/virt-launcher:devel
Untagged: localhost:33778/kubevirt/virt-launcher@sha256:42db95a89b784ba7930bbd2de217ea0bfe4f583b197bf7d614ecb1927c033c88
Deleted: sha256:afaf9f841c76c20ae72e39787e131919adb5d73df221b21567c3bb0c2016bdb0
Deleted: sha256:3a446bb20a703d18dbbb6bd3fa41d07d4baf5a482fa85335b91714b683d95c6b
Deleted: sha256:8efb61317a95626ecfa01569886260a8e1341a10bd4ba144e2fe269da0003c2d
Deleted: sha256:690023836ea5b54d3506e5033e07860566e69bfd7d65a26315ca10d5096ef3b8
Deleted: sha256:2bfbbf93743933662ab5daabf86027ee7fec2873bb4f16695271764c9d9480e1
Deleted: sha256:ce1e7b9a14ce42221fd54eee87eed5c93495cf9dc1906fb9d100d1a23b10141a
Deleted: sha256:789d86788522021ba5168df10b754909f6625637fb62227c7707ee4bd3444a68
Deleted: sha256:8e4b909126385fcb41d33c7b86b438fc4a15a9cd65b02c66cfa7dd9488cb4ae2
Deleted: sha256:a075b127c2365a8b498aafdee4504241fb0a742b4fbda8b1404de3fbf3e2b5cb
Deleted: sha256:c9e493ae261bcb2a67a6731c6732dec272288d2fb26666e3a004e3739a13aac9
Deleted: sha256:395216c099e06853b0bf15d9357557f28f26401bd13c96540bc8afe87612395f
Deleted: sha256:5db70167dd8b74662be326dee42b56e2150c6338283aeee156c71e7ae27070ba
Untagged: localhost:33778/kubevirt/virt-handler:devel
Untagged: localhost:33778/kubevirt/virt-handler@sha256:c3aa149f5be30d02ac39d3d1d274494d2dbc1cfafbc27d84b0e911b0e01dd75b
Deleted: sha256:e9fa978f92befcd05e0a1fe92c3167c0cb000297c0de8186ec8b3f3992014e71
Deleted: sha256:8e3dedaa91b97ed74ffff6090ede4a181bcb85d05334921156b30bf7420ab3a1
Deleted: sha256:63de6161f6f462001b1bd9e9ae70a7f7849461f4a8e14c81d7c5b301a4290c81
Deleted: sha256:8503d3d4e08414c94d401964f10aad392bd23af5250ede04c8e889b6a590cf19
Untagged: localhost:33778/kubevirt/virt-api:devel
Untagged: localhost:33778/kubevirt/virt-api@sha256:9126f631f4f5457a6728a8547b08a95100e19421c2eb9ee58056e468d3f32c9a
Deleted: sha256:1cde7c6aec3af71a401a4828524dc64392961b7c84515db6a2367a985469d4f9
Deleted: sha256:f378c1cd3bd24c0f4d9dcd0e233d61cb16f1c376b060fe3299019ee6b2156d45
Deleted: sha256:51cb8a509d4985a554608a4fa45675b143bee0c4c5db1874fdac1616344515d9
Deleted: sha256:399d0e018c8bc23f1f1c52621607f774ed5d96ec0b9ad29ed03c7eb1bbe6950d
Untagged: localhost:33778/kubevirt/subresource-access-test:devel
Untagged: localhost:33778/kubevirt/subresource-access-test@sha256:c2af11930f85b300a5833b54583ce560748bfab3fbd3274ea6d4bff997606bd7
Deleted: sha256:4dca54d97107f2c10556dceef0222393d49de98793d0c696824518fc9bc4537a
Deleted: sha256:3270eb691d1a378473bf8836d714d42c8cd2059178e9b017257c133cf4064cb7
Deleted: sha256:752a6457ca35c3a44b6059217bd9e61343e9c9bb8ea118d6af13fd57ffb52231
Deleted: sha256:daffb5903898d4a461ca29f3da9c92aaab36807f5912265cadfedc0082dde292
sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2
go version go1.10 linux/amd64
Waiting for rsyncd to be ready
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2
go version go1.10 linux/amd64
go version go1.10 linux/amd64
# kubevirt.io/kubevirt/pkg/virt-controller/services_test
pkg/virt-controller/services/template_test.go:164:22: undefined: "kubevirt.io/kubevirt/pkg/api/v1".Toleration
make[1]: *** [build] Error 2
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh
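Note: the failure above is a Go compile error, not a test failure. template_test.go references a Toleration type that is not defined in kubevirt.io/kubevirt/pkg/api/v1 at the revision being built, so the test code and the API package are out of sync. A minimal sketch of the kind of type the test presumably expects, assuming it mirrors k8s.io/api/core/v1.Toleration (an illustrative assumption, not the actual KubeVirt definition):

package v1

// Toleration sketches the type the failing test references; it mirrors
// k8s.io/api/core/v1.Toleration. The real definition in
// kubevirt.io/kubevirt/pkg/api/v1 may differ.
type Toleration struct {
	Key      string `json:"key,omitempty"`      // taint key the toleration applies to
	Operator string `json:"operator,omitempty"` // "Exists" or "Equal"
	Value    string `json:"value,omitempty"`    // taint value matched when Operator is "Equal"
	Effect   string `json:"effect,omitempty"`   // "NoSchedule", "PreferNoSchedule" or "NoExecute"
}

Because the build fails, make cluster-sync aborts and the EXIT trap installed at the top of the script runs make cluster-down, which is why the log ends with the cluster being torn down. (Incidentally, listing SIGSTOP in that trap is a no-op, since SIGSTOP cannot be caught; EXIT, SIGINT, and SIGTERM do the work.)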