+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release + WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release + [[ k8s-1.10.3-release =~ openshift-.* ]] + [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]] + export KUBEVIRT_PROVIDER=k8s-1.10.3 + KUBEVIRT_PROVIDER=k8s-1.10.3 + export KUBEVIRT_NUM_NODES=2 + KUBEVIRT_NUM_NODES=2 + export NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + export NAMESPACE=kube-system + NAMESPACE=kube-system + trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP + make cluster-down ./cluster/down.sh + make cluster-up ./cluster/up.sh Downloading ....... Downloading ....... 2018/07/18 16:38:03 Waiting for host: 192.168.66.101:22 2018/07/18 16:38:06 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/07/18 16:38:14 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/07/18 16:38:19 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s 2018/07/18 16:38:24 Connected to tcp://192.168.66.101:22 + kubeadm init --config /etc/kubernetes/kubeadm.conf [init] Using Kubernetes version: v1.10.3 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks. [WARNING FileExisting-crictl]: crictl not found in system path Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version. [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1] [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated apiserver-etcd-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. 
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests". [init] This might take a minute or longer if the control plane images have to be pulled. [apiclient] All control plane components are healthy after 26.006996 seconds [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [markmaster] Will mark node node01 as master by adding a label and a taint [markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master="" [bootstraptoken] Using token: abcdef.1234567890123456 [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:91f1cf813b1cfe65441f82f1494de5ec4f8e74db3dc11cfd018154d261800432 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io "flannel" created clusterrolebinding.rbac.authorization.k8s.io "flannel" created serviceaccount "flannel" created configmap "kube-flannel-cfg" created daemonset.extensions "kube-flannel-ds" created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node "node01" untainted 2018/07/18 16:39:04 Waiting for host: 192.168.66.102:22 2018/07/18 16:39:07 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/07/18 16:39:15 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. 
Sleeping 5s 2018/07/18 16:39:20 Connected to tcp://192.168.66.102:22 + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] Running pre-flight checks. [discovery] Trying to connect to API Server "192.168.66.101:6443" [WARNING FileExisting-crictl]: crictl not found in system path Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 39588992 kubectl Sending file modes: C0600 5450 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. + set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 37s v1.10.3 node02 Ready 11s v1.10.3 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep NotReady + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 38s v1.10.3 node02 Ready 12s v1.10.3 + make cluster-sync ./cluster/build.sh Building ... sha256:eac86de70a4e6cb392340c5eb3c9e29aa4eee64229c68e6e8a3ba9514fb773e5 go version go1.10 linux/amd64 Waiting for rsyncd to be ready go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:eac86de70a4e6cb392340c5eb3c9e29aa4eee64229c68e6e8a3ba9514fb773e5 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... 
compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 38.81 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 401035e513d8 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 138d4f372f95 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> a5be079f2ad5 Step 5/8 : USER 1001 ---> Using cache ---> a8da3331f8c9 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> c752014396ba Removing intermediate container 2f36da36ed91 Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 47d6b5fb0b5c ---> 5a9e159d112e Removing intermediate container 47d6b5fb0b5c Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-controller" '' ---> Running in de67cc1dbf45 ---> 8cdca659af4a Removing intermediate container de67cc1dbf45 Successfully built 8cdca659af4a Sending build context to Docker daemon 41.03 MB Step 1/10 : FROM kubevirt/libvirt:4.2.0 ---> 5f0bfe81a3e0 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dc5562afdf06 Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 67916fb6391a Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> 8ab76a438618 Removing intermediate container 9c890dd04c60 Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> edfee84e4d77 Removing intermediate container dea425a6ccb4 Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in c878993baffd  ---> 54440840dec4 Removing intermediate container c878993baffd Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in 9f1f57cc9b4f  ---> 35858d82fd23 Removing intermediate container 9f1f57cc9b4f Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> 133a97a498bd Removing intermediate container 2ad180a68f1b Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in 2737da3da299 ---> 406c2844b773 Removing intermediate container 2737da3da299 Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-launcher" '' ---> Running in 7bd0905ad4fa ---> 4d7fb8006e8e Removing intermediate container 7bd0905ad4fa Successfully built 4d7fb8006e8e Sending build context to Docker daemon 40.1 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 401035e513d8 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> bd2925a010c1 Removing intermediate container 97127b3c92d7 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in e225088cbdf0 ---> d84d6294e7bd Removing intermediate container e225088cbdf0 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-handler" '' ---> Running in 09c86076b6dc ---> 29f1a1378ba9 Removing intermediate container 09c86076b6dc Successfully built 29f1a1378ba9 Sending build context to Docker daemon 37.03 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 401035e513d8 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 425f1b8d360c Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 109325fd6af7 Step 5/8 : USER 1001 ---> Using cache ---> e638e9684a2f Step 6/8 : COPY virt-api /usr/bin/virt-api ---> f0cfff0830c7 Removing intermediate 
container 18b35fca5595 Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in e6785f78cf2a ---> 5f16dd27ff24 Removing intermediate container e6785f78cf2a Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-api" '' ---> Running in 5cbc3d7a94cd ---> 43ba0eee39cc Removing intermediate container 5cbc3d7a94cd Successfully built 43ba0eee39cc Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 401035e513d8 Step 3/7 : ENV container docker ---> Using cache ---> c41fed4a1333 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 940d88594d2e Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> 923b84390ce2 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> e9ddd62d459f Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Running in 71ab0eba6ca9 ---> ebfc6baa924f Removing intermediate container 71ab0eba6ca9 Successfully built ebfc6baa924f Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 401035e513d8 Step 3/5 : ENV container docker ---> Using cache ---> c41fed4a1333 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 944f01c7c457 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "vm-killer" '' ---> Running in 6292856b8029 ---> 564f60145eec Removing intermediate container 6292856b8029 Successfully built 564f60145eec Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 68f33cf86aab Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 760b7aedd755 Step 3/7 : ENV container docker ---> Using cache ---> 242765a70aa0 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> b671cb63e24f Step 5/7 : ADD entry-point.sh / ---> Using cache ---> 96395ae20289 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 281b61469fe1 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "registry-disk-v1alpha" '' ---> Running in 1c30df7e4266 ---> 7b3f47899de6 Removing intermediate container 1c30df7e4266 Successfully built 7b3f47899de6 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32803/kubevirt/registry-disk-v1alpha:devel ---> 7b3f47899de6 Step 2/4 : MAINTAINER "David Vossel" \ ---> Running in 829bdb112815 ---> 0d2ff9e9fde7 Removing intermediate container 829bdb112815 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Running in 1a3886dc79a2  % Total % Received % Xferd Average Speed Time Time Time Current  Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 47 12.1M 47 5904k 0 0 4193k 0 0:00:02 0:00:01 0:00:01 4193k 100 12.1M 100 12.1M 0 0 7737k 0 0:00:01 0:00:01 --:--:-- 7732k  ---> bef4deb81929 Removing intermediate container 1a3886dc79a2 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Running in 53122a8c4331 ---> 228d40c35eee Removing intermediate container 53122a8c4331 Successfully built 228d40c35eee Sending build context to Docker daemon 2.56 kB 
Step 1/4 : FROM localhost:32803/kubevirt/registry-disk-v1alpha:devel ---> 7b3f47899de6 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Running in b21848efeb29 ---> 3088e1b84541 Removing intermediate container b21848efeb29 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Running in 24ca5f32bd76
[curl progress meter trimmed: 221M transferred in 0:03:10, ~1188k/s average]
 ---> 035dad61e16a Removing intermediate container 24ca5f32bd76 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Running in 60a9c97190b2 ---> bbe4bc2d8db4 Removing intermediate container 60a9c97190b2 Successfully built bbe4bc2d8db4
Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32803/kubevirt/registry-disk-v1alpha:devel ---> 7b3f47899de6 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3088e1b84541 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Running in 920c8ff53804
[curl progress meter trimmed: 37.0M transferred in 0:00:06, ~5741k/s average]
 ---> 611a90f402f8 Removing intermediate container 920c8ff53804 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Running in b33570aea582 ---> bc8a08c383b0 Removing intermediate container b33570aea582 Successfully built bc8a08c383b0
Sending build context to Docker daemon 34.04 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 401035e513d8 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 8ded2e37f9da Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 2baf0c61c4e7 Step 5/8 : USER 1001 ---> Using cache ---> cddb35bbdd8e Step 6/8 : COPY subresource-access-test /subresource-access-test ---> Using cache ---> 0854b469fae5 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Using cache ---> ea3ecc9d1585 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "subresource-access-test" '' ---> Running in 36693643bf87 ---> 40688242ef1a Removing intermediate container 36693643bf87 Successfully built 40688242ef1a
Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER
"The KubeVirt Project" ---> Using cache ---> 401035e513d8 Step 3/9 : ENV container docker ---> Using cache ---> c41fed4a1333 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 4f9ac85fbee5 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 788ec0618eab Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> cc3ff134b422 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> cd908bbed6a4 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 1630fb4c77d9 Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "winrmcli" '' ---> Running in af207bc2ae41 ---> 4f163532f416 Removing intermediate container af207bc2ae41 Successfully built 4f163532f416 Sending build context to Docker daemon 35.17 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 43cfafb0eafc Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> Using cache ---> 24aadc05dfbe Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Using cache ---> c9cf905c0985 Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Running in 9f1d9bbd7ca8 ---> db0b08c577e2 Removing intermediate container 9f1d9bbd7ca8 Successfully built db0b08c577e2 hack/build-docker.sh push The push refers to a repository [localhost:32803/kubevirt/virt-controller] 0b983f21d044: Preparing 291a040d9067: Preparing 891e1e4ef82a: Preparing 291a040d9067: Pushed 0b983f21d044: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:b6c78ac2bcf870ffe5677f9cc9f6541a8c58412b634abeaa1c40e71977141ced size: 949 The push refers to a repository [localhost:32803/kubevirt/virt-launcher] 138707ca6c8d: Preparing 949778acc009: Preparing b783fbc2c036: Preparing 673b4219a745: Preparing 728677d519c3: Preparing 03cf24bfe08c: Preparing da38cf808aa5: Preparing b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing 5eefb9960a36: Preparing 891e1e4ef82a: Preparing 186d8b3e4fd8: Waiting fa6154170bf5: Waiting 5eefb9960a36: Waiting da38cf808aa5: Waiting 891e1e4ef82a: Waiting 03cf24bfe08c: Waiting b83399358a92: Waiting 728677d519c3: Waiting 949778acc009: Pushed 673b4219a745: Pushed 138707ca6c8d: Pushed da38cf808aa5: Pushed b83399358a92: Pushed 186d8b3e4fd8: Pushed fa6154170bf5: Pushed b783fbc2c036: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller 03cf24bfe08c: Pushed 728677d519c3: Pushed 5eefb9960a36: Pushed devel: digest: sha256:11ba2081ab219ff514ce8c774cc2f814f583a9ce0c8c4c6237c2450832fd4cef size: 2828 The push refers to a repository [localhost:32803/kubevirt/virt-handler] 53fe731619ad: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher 53fe731619ad: Pushed devel: digest: sha256:a4e3fbffbcd31d4cfa50fdcd6b70e01117c44d3d298f6d2586545ed281cd6969 size: 741 The push refers to a repository [localhost:32803/kubevirt/virt-api] 941d0cbe51d8: Preparing c1418c9009fc: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler c1418c9009fc: Pushed 941d0cbe51d8: Pushed devel: digest: sha256:522d0cb88ff9ccd8aa88ad15172fe266aeda050e46dba07f30a0d9729c7770af size: 948 The push refers to a repository [localhost:32803/kubevirt/disks-images-provider] 080f4f9db6ce: Preparing 7270498e55cc: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 080f4f9db6ce: 
Pushed 7270498e55cc: Pushed devel: digest: sha256:3f2440c465beba43218dc9ea3ca12f8e22685243572becc738adfef92f403abd size: 948 The push refers to a repository [localhost:32803/kubevirt/vm-killer] 68a997c47b9c: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider 68a997c47b9c: Pushed devel: digest: sha256:9a4c51ce3e5ea3f0a743dbc4149ef784e8cb77151df3ae366de817e76a1c57b6 size: 740 The push refers to a repository [localhost:32803/kubevirt/registry-disk-v1alpha] 0905ff81ba68: Preparing 0be79cca88bb: Preparing 25edbec0eaea: Preparing 0905ff81ba68: Pushed 0be79cca88bb: Pushed 25edbec0eaea: Pushed devel: digest: sha256:86cfa9a94fe121fea9116c748337cfb3b4cd74438312ffc7280d1673c004d0c9 size: 948 The push refers to a repository [localhost:32803/kubevirt/cirros-registry-disk-demo] 714024d1627b: Preparing 0905ff81ba68: Preparing 0be79cca88bb: Preparing 25edbec0eaea: Preparing 25edbec0eaea: Waiting 0905ff81ba68: Mounted from kubevirt/registry-disk-v1alpha 0be79cca88bb: Mounted from kubevirt/registry-disk-v1alpha 25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha 714024d1627b: Pushed devel: digest: sha256:4cd80be25d0f4b6225689cefeeaf95cb236c1c19743c31f7b8470cbf4cdb208b size: 1160 The push refers to a repository [localhost:32803/kubevirt/fedora-cloud-registry-disk-demo] ef67609702d0: Preparing 0905ff81ba68: Preparing 0be79cca88bb: Preparing 25edbec0eaea: Preparing 25edbec0eaea: Waiting 0905ff81ba68: Mounted from kubevirt/cirros-registry-disk-demo 0be79cca88bb: Mounted from kubevirt/cirros-registry-disk-demo 25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo ef67609702d0: Pushed devel: digest: sha256:5f1d20593519ca0b7cc5efa2ce1861a64be08b6fb9fb8c9f8c02b3e3f04c90b9 size: 1161 The push refers to a repository [localhost:32803/kubevirt/alpine-registry-disk-demo] 39e6dfc85453: Preparing 0905ff81ba68: Preparing 0be79cca88bb: Preparing 25edbec0eaea: Preparing 0905ff81ba68: Mounted from kubevirt/fedora-cloud-registry-disk-demo 0be79cca88bb: Mounted from kubevirt/fedora-cloud-registry-disk-demo 25edbec0eaea: Mounted from kubevirt/fedora-cloud-registry-disk-demo 39e6dfc85453: Pushed devel: digest: sha256:3b70315004381b7667caf143fd030781455345a00f8a4c563e30666ce836bc78 size: 1160 The push refers to a repository [localhost:32803/kubevirt/subresource-access-test] ee9ddfcdf556: Preparing f11f8a160bfe: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer f11f8a160bfe: Pushed ee9ddfcdf556: Pushed devel: digest: sha256:dcaa62531c8df4ceb0c322f154865fa8c424815cd3d5fc9056becaff68955ba8 size: 948 The push refers to a repository [localhost:32803/kubevirt/winrmcli] 19038f244d65: Preparing 40d75932eef1: Preparing 8acbb2baad2c: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test 19038f244d65: Pushed 8acbb2baad2c: Pushed 40d75932eef1: Pushed devel: digest: sha256:1c14a832904321ef160a1079c12a52822e9b176deaf7b4083148f951625f7262 size: 1165 The push refers to a repository [localhost:32803/kubevirt/example-hook-sidecar] 17b0cc85f06c: Preparing 39bae602f753: Preparing 17b0cc85f06c: Pushed 39bae602f753: Pushed devel: digest: sha256:e889bf4f6792d89f3b06224b0117889af0448ffb19828ef374e5953c604d8b33 size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ 
KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-62-g8fda10c ++ KUBEVIRT_VERSION=v0.7.0-62-g8fda10c + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:32803/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images 
docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... + cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + 
KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No 
resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ wc -l ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ 
TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-62-g8fda10c ++ KUBEVIRT_VERSION=v0.7.0-62-g8fda10c + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:32803/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... 
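The deploy trace that follows shows cluster/deploy.sh iterating over the generated release manifests, skipping anything matching "demo", and then creating the testing manifests recursively. A minimal sketch of that flow, reconstructed from the trace (the _kubectl wrapper and MANIFESTS_OUT_DIR names are taken from the log; the exact script contents are assumed):

# Sketch only -- reconstructed from the trace, not the verbatim script.
for manifest in "${MANIFESTS_OUT_DIR}/release/"*; do
    # demo content (e.g. demo-content.yaml) is skipped
    [[ $manifest =~ .*demo.* ]] && continue
    _kubectl create -f "$manifest"
done
# testing fixtures (PVs, PVCs, disks-images-provider, test service account) are applied recursively
_kubectl create -f "${MANIFESTS_OUT_DIR}/testing" -R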
+ [[ -z k8s-1.10.3-release ]] + [[ k8s-1.10.3-release =~ .*-dev ]] + [[ k8s-1.10.3-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created service "virt-api" created deployment.extensions "virt-api" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.3 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + 
+ timeout=300
+ sample=30
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n 'virt-api-7d79764579-2m8vl 0/1 ContainerCreating 0 4s virt-api-7d79764579-pz47m 0/1 ContainerCreating 0 4s virt-controller-7d57d96b65-b8scc 0/1 ContainerCreating 0 4s virt-controller-7d57d96b65-mpfvt 0/1 ContainerCreating 0 3s virt-handler-7fnts 0/1 ContainerCreating 0 4s virt-handler-klnk2 0/1 ContainerCreating 0 4s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
disks-images-provider-k88w4 0/1 Pending 0 3s
disks-images-provider-rrrv5 0/1 Pending 0 3s
virt-api-7d79764579-2m8vl 0/1 ContainerCreating 0 6s
virt-api-7d79764579-pz47m 0/1 ContainerCreating 0 6s
virt-controller-7d57d96b65-b8scc 0/1 ContainerCreating 0 6s
virt-controller-7d57d96b65-mpfvt 0/1 ContainerCreating 0 5s
virt-handler-7fnts 0/1 ContainerCreating 0 6s
virt-handler-klnk2 0/1 ContainerCreating 0 6s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
+ '[' -n 'false false false' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ grep false
+ true
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ grep false
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME READY STATUS RESTARTS AGE
disks-images-provider-k88w4 1/1 Running 0 1m
disks-images-provider-rrrv5 1/1 Running 0 1m
etcd-node01 1/1 Running 0 12m
kube-apiserver-node01 1/1 Running 0 12m
kube-controller-manager-node01 1/1 Running 0 12m
kube-dns-86f4d74b45-fq798 3/3 Running 0 12m
kube-flannel-ds-65l8x 1/1 Running 1 12m
kube-flannel-ds-cmrzv 1/1 Running 0 12m
kube-proxy-dtq55 1/1 Running 0 12m
kube-proxy-f82cj 1/1 Running 0 12m
kube-scheduler-node01 1/1 Running 0 12m
virt-api-7d79764579-2m8vl 1/1 Running 0 1m
virt-api-7d79764579-pz47m 1/1 Running 0 1m
virt-controller-7d57d96b65-b8scc 1/1 Running 0 1m
virt-controller-7d57d96b65-mpfvt 1/1 Running 0 1m
virt-handler-7fnts 1/1 Running 0 1m
virt-handler-klnk2 1/1 Running 0 1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ grep -v Running
++ kubectl get pods -n default --no-headers
++ cluster/kubectl.sh get pods -n default --no-headers
No resources found.
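The wait above is a plain polling loop: every 30 seconds it looks for pods that are not yet Running, then for containers whose ready flag is still false, and gives up after 300 seconds. A standalone sketch of the same pattern, with timeout and sample taken from the trace and the failure handling assumed:

# Sketch of the readiness polling seen above (exit-on-timeout behaviour is an assumption).
timeout=300
sample=30
for ns in kube-system default; do
    current_time=0
    # Phase 1: wait until no pod in the namespace reports a non-Running status.
    while [ -n "$(kubectl get pods -n "${ns}" --no-headers | grep -v Running)" ]; do
        echo "Waiting for kubevirt pods to enter the Running state ..."
        kubectl get pods -n "${ns}" --no-headers | grep -v Running
        sleep ${sample}
        current_time=$((current_time + sample))
        [ ${current_time} -gt ${timeout} ] && exit 1
    done
    current_time=0
    # Phase 2: wait until no container readiness flag is still "false".
    while [ -n "$(kubectl get pods -n "${ns}" '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers | grep false)" ]; do
        echo "Waiting for KubeVirt containers to become ready ..."
        sleep ${sample}
        current_time=$((current_time + sample))
        [ ${current_time} -gt ${timeout} ] && exit 1
    done
    kubectl get pods -n "${ns}"
done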
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
No resources found.
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml'
+ [[ -d /home/nfs/images/windows2016 ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:eac86de70a4e6cb392340c5eb3c9e29aa4eee64229c68e6e8a3ba9514fb773e5
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test
hack/functests.sh
Running Suite: Tests Suite
==========================
Random Seed: 1531932806
Will run 141 of 141 specs

• [SLOW TEST:114.525 seconds]
Slirp
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39
should be able to
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
VirtualMachineInstance with slirp interface
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
•
------------------------------
• [SLOW TEST:132.639 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
Starting and stopping the same VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90
with ephemeral registry disk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91
should success multiple times
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92
------------------------------
• [SLOW TEST:15.652 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
Starting a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111
with ephemeral registry disk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112
should not modify the spec on status update
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113
------------------------------
• [SLOW TEST:29.782 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
Starting multiple VMIs
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129
with ephemeral registry disk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130
should success
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131
------------------------------
•••••••••••volumedisk0 compute
------------------------------
• [SLOW TEST:56.981 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
VirtualMachineInstance definition
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55
with 3 CPU cores
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:56
should report 3 cpu cores under guest OS
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:62
------------------------------
• [SLOW TEST:16.718 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
VirtualMachineInstance definition
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55
with hugepages
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108
should consume hugepages
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
hugepages-2Mi
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
S [SKIPPING] [0.209 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
VirtualMachineInstance definition
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55
with hugepages
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108
should consume hugepages
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
hugepages-1Gi [It]
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
No node with hugepages hugepages-1Gi capacity
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:160
------------------------------
•
------------------------------
• [SLOW TEST:80.713 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
with CPU spec
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238
when CPU model defined
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:277
should report defined CPU model
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:278
------------------------------
• [SLOW TEST:80.138 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
with CPU spec
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238
when CPU model not defined
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:305
should report CPU model from libvirt capabilities
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:306
------------------------------
• [SLOW TEST:50.715 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
New VirtualMachineInstance with all supported drives
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:326
should have all the device nodes
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:349
------------------------------
• [SLOW TEST:16.226 seconds]
VNC
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46
A new VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54
with VNC connection
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62
should allow accessing the VNC device
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64
------------------------------
••
------------------------------
• [SLOW TEST:49.060 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
Starting a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
with Alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
should be successfully started
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
with Disk PVC
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:48.646 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
Starting a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
with Alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
should be successfully started
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
with CDRom PVC
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:120.813 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
Starting a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
with Alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
should be successfully started and stopped multiple times
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
with Disk PVC
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:137.368 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
Starting a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
with Alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
should be successfully started and stopped multiple times
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
with CDRom PVC
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:51.296 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
Starting a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
With an emptyDisk defined
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113
should create a writeable emptyDisk with the right capacity
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115
------------------------------
• [SLOW TEST:51.629 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
Starting a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
With an emptyDisk defined and a specified serial number
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163
should create a writeable emptyDisk with the specified serial number
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165
------------------------------
• [SLOW TEST:49.477 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
Starting a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
With ephemeral alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
should be successfully started
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207
------------------------------
• [SLOW TEST:107.821 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
Starting a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
With ephemeral alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
should not persist data
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218
------------------------------
• [SLOW TEST:139.985 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
Starting a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
With VirtualMachineInstance with two PVCs
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266
should start vmi multiple times
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278
------------------------------
• [SLOW TEST:6.321 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given a vmi
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.687 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given an vm
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.670 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given a vmi preset
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.565 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given a vmi replica set
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••
------------------------------
• [SLOW TEST:5.297 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
should scale
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
to three, to two and then to zero replicas
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:8.570 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
should scale
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
to five, to six and then to zero replicas
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••
------------------------------
• [SLOW TEST:18.208 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
should update readyReplicas once VMIs are up
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157
------------------------------
• [SLOW TEST:5.490 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
should remove VMIs once it is marked for deletion
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:169
------------------------------
•
------------------------------
• [SLOW TEST:5.578 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
should not scale when paused and scale when resume
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223
------------------------------
• [SLOW TEST:14.591 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
should remove the finished VM
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:279
------------------------------
• [SLOW TEST:52.469 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
A new VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
with a serial console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
with a cirros image
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
should return that we are running cirros
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67
------------------------------
• [SLOW TEST:54.644 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
A new VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
with a serial console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
with a fedora image
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76
should return that we are running fedora
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77
------------------------------
• [SLOW TEST:54.247 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
A new VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
with a serial console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
should be able to reconnect to console multiple times
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86
------------------------------
• [SLOW TEST:157.036 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
should be able to reach
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
the Inbound VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
•••
------------------------------
• [SLOW TEST:5.114 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
should be reachable via the propagated IP from a Pod
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
on the same node from Pod
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.083 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
should be reachable via the propagated IP from a Pod
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
on a different node from Pod
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
•••••
------------------------------
• [SLOW TEST:56.358 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
VirtualMachineInstance with custom interface model
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:379
should expose the right device type to the guest
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:380
------------------------------
•
------------------------------
• [SLOW TEST:54.947 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
VirtualMachineInstance with custom MAC address
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:413
should configure custom MAC address
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:414
------------------------------
• [SLOW TEST:57.529 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
VirtualMachineInstance with custom MAC address in non-conventional format
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:425
should configure custom MAC address
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:426
------------------------------
• [SLOW TEST:57.801 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
VirtualMachineInstance with custom MAC address and slirp interface
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:438
should configure custom MAC address
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:439
------------------------------
• [SLOW TEST:57.196 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
VirtualMachineInstance with disabled automatic attachment of interfaces
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:451
should not configure any external interfaces
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:452
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.009 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
should succeed to start a vmi [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
should succeed to stop a running vmi [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with winrm connection [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
should have correct UUID
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with winrm connection [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
should have pod IP
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with kubectl command [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
should succeed to start a vmi
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with kubectl command [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
should succeed to stop a vmi
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
Service cluster-ip-vm successfully exposed for virtualmachineinstance testvmi2nhcc
• [SLOW TEST:63.182 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
Expose service on a VM
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61
Expose ClusterIP service
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68
Should expose a Cluster IP service on a VM and connect to it
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71
------------------------------
Service node-port-vm successfully exposed for virtualmachineinstance testvmi2nhcc
• [SLOW TEST:8.201 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
Expose service on a VM
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61
Expose NodePort service
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:98
Should expose a NodePort service on a VM and connect to it
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:103
------------------------------
Service cluster-ip-udp-vm successfully exposed for virtualmachineinstance testvmibqdws
• [SLOW TEST:64.726 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
Expose UDP service on a VM
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140
Expose ClusterIP UDP service
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:147
Should expose a ClusterIP service on a VM and connect to it
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:151
------------------------------
Service node-port-udp-vm successfully exposed for virtualmachineinstance testvmibqdws
• [SLOW TEST:10.250 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
Expose UDP service on a VM
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140
Expose NodePort UDP service
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:179
Should expose a NodePort service on a VM and connect to it
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:184
------------------------------
Service cluster-ip-vmrs successfully exposed for vmirs replicaset62xxl
• [SLOW TEST:80.371 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
Expose service on a VM replica set
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:227
Expose ClusterIP service
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:260
Should create a ClusterIP service on VMRS and connect to it
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:264
------------------------------
Service cluster-ip-ovm successfully exposed for virtualmachine testvmi78m5n
• [SLOW TEST:71.350 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
Expose service on an Offline VM
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:292
Expose ClusterIP service
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:336
Connect to ClusterIP services that was set when VM was offline
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:337
------------------------------
••
------------------------------
• [SLOW TEST:18.123 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should update VirtualMachine once VMIs are up
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195
------------------------------
••
------------------------------
• [SLOW TEST:80.711 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should recreate VirtualMachineInstance if it gets deleted
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245
------------------------------
• [SLOW TEST:42.449 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265
------------------------------
• [SLOW TEST:63.549 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should stop VirtualMachineInstance if running set to false
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325
------------------------------
Received interrupt. Emitting contents of GinkgoWriter...
---------------------------------------------------------
STEP: Doing run: 0
STEP: Starting the VirtualMachineInstance
STEP: VMI has the running condition
STEP: Stopping the VirtualMachineInstance
---------------------------------------------------------
Received interrupt. Running AfterSuite...
^C again to terminate immediately
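The run was interrupted during the VirtualMachine specs, so no suite summary or JUnit output follows. Assuming make functest keeps forwarding FUNC_TEST_ARGS to the compiled Ginkgo binary as in the invocation earlier in this log, a narrower re-run of just those specs could look like the following; the --ginkgo.focus value is an illustrative choice, not part of the original job:

# Hypothetical re-run of only the interrupted VirtualMachine specs; every flag except
# --ginkgo.focus is copied from the FUNC_TEST_ARGS value seen earlier in this log.
FUNC_TEST_ARGS='--ginkgo.noColor --ginkgo.focus=VirtualMachine --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml' \
    make functest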