+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release + WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release + [[ vagrant-release =~ openshift-.* ]] + export PROVIDER=k8s-1.9.3 + PROVIDER=k8s-1.9.3 + export VAGRANT_NUM_NODES=1 + VAGRANT_NUM_NODES=1 + export NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + export NAMESPACE=kube-system + NAMESPACE=kube-system + trap '{ make cluster-down; }' EXIT + make cluster-down ./cluster/down.sh + make cluster-up ./cluster/up.sh Unable to find image 'kubevirtci/k8s-1.9.3@sha256:2f1600681800f70de293d2d35fa129bfd2c64e14ea01bab0284e4cafcc330662' locally Trying to pull repository docker.io/kubevirtci/k8s-1.9.3 ... sha256:2f1600681800f70de293d2d35fa129bfd2c64e14ea01bab0284e4cafcc330662: Pulling from docker.io/kubevirtci/k8s-1.9.3 eb359457bc37: Pulling fs layer a068d4baae47: Pulling fs layer d867b8969b5b: Pulling fs layer bc770f22e8ac: Pulling fs layer 713f9c3973ad: Pulling fs layer dc133e2c3a66: Pulling fs layer 050f9598d39b: Pulling fs layer 3baada3bf8b7: Pulling fs layer 713f9c3973ad: Waiting dc133e2c3a66: Waiting d867b8969b5b: Download complete bc770f22e8ac: Download complete 713f9c3973ad: Verifying Checksum 713f9c3973ad: Download complete dc133e2c3a66: Download complete eb359457bc37: Verifying Checksum eb359457bc37: Download complete a068d4baae47: Verifying Checksum a068d4baae47: Download complete eb359457bc37: Pull complete a068d4baae47: Pull complete d867b8969b5b: Pull complete bc770f22e8ac: Pull complete 713f9c3973ad: Pull complete dc133e2c3a66: Pull complete 3baada3bf8b7: Verifying Checksum 050f9598d39b: Verifying Checksum 050f9598d39b: Download complete 050f9598d39b: Pull complete 3baada3bf8b7: Pull complete Digest: sha256:2f1600681800f70de293d2d35fa129bfd2c64e14ea01bab0284e4cafcc330662 Status: Downloaded newer image for docker.io/kubevirtci/k8s-1.9.3@sha256:2f1600681800f70de293d2d35fa129bfd2c64e14ea01bab0284e4cafcc330662 kubevirt-functional-tests-vagrant-release0_registry WARNING: You're not using the default seccomp profile WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled kubevirt-functional-tests-vagrant-release0-node01 2018/04/06 10:32:59 Waiting for host: 192.168.66.101:22 2018/04/06 10:33:02 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/04/06 10:33:10 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/04/06 10:33:15 Connected to tcp://192.168.66.101:22 [init] Using Kubernetes version: v1.9.3 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks. [WARNING FileExisting-crictl]: crictl not found in system path [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. 
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests". [init] This might take a minute or longer if the control plane images have to be pulled. [apiclient] All control plane components are healthy after 18.504067 seconds [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [markmaster] Will mark node node01 as master by adding a label and a taint [markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master="" [bootstraptoken] Using token: abcdef.1234567890123456 [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --discovery-token-ca-cert-hash sha256:fe8f782c2c8568c47719195f1803623fde7f460f1dc48b3e0b917bd472cbb3e3 clusterrole "flannel" created clusterrolebinding "flannel" created serviceaccount "flannel" created configmap "kube-flannel-cfg" created daemonset "kube-flannel-ds" created node "node01" untainted kubevirt-functional-tests-vagrant-release0-node02 2018/04/06 10:33:45 Waiting for host: 192.168.66.102:22 2018/04/06 10:33:48 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/04/06 10:34:00 Connected to tcp://192.168.66.102:22 [preflight] Running pre-flight checks. 
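The join command, the flannel objects and the master untaint shown above are exactly what the second node consumes next. A minimal sketch of that step in shell, reusing the token and CA hash printed by kubeadm and assuming a local kube-flannel.yml (the trace only shows the objects it creates, not the manifest path):

    # On the master: install the flannel pod network and allow workloads on the master node.
    kubectl apply -f kube-flannel.yml
    kubectl taint nodes node01 node-role.kubernetes.io/master:NoSchedule-

    # On each additional node (node02 here): run the join command printed by kubeadm init.
    kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 \
        --discovery-token-ca-cert-hash sha256:fe8f782c2c8568c47719195f1803623fde7f460f1dc48b3e0b917bd472cbb3e3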
[discovery] Trying to connect to API Server "192.168.66.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [WARNING FileExisting-crictl]: crictl not found in system path [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. 2018/04/06 10:34:02 Waiting for host: 192.168.66.101:22 2018/04/06 10:34:02 Connected to tcp://192.168.66.101:22 Warning: Permanently added '[127.0.0.1]:32812' (ECDSA) to the list of known hosts. Warning: Permanently added '[127.0.0.1]:32812' (ECDSA) to the list of known hosts. Cluster "kubernetes" set. Cluster "kubernetes" set. ++ kubectl get nodes --no-headers ++ grep -v Ready ++ cluster/kubectl.sh get nodes --no-headers + '[' -n '' ']' + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 NotReady master 28s v1.9.3 node02 NotReady 2s v1.9.3 + make cluster-sync ./cluster/build.sh Building ... sha256:b18cb6d0540e8dd771c6ecf8eaa4b2884cb7d1ef29889bcfcd54070ed0067268 go version go1.9.2 linux/amd64 rsync: read error: Connection reset by peer (104) rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9] Waiting for rsyncd to be ready skipping directory . go version go1.9.2 linux/amd64 74da57d538d30a105104d6c510239439f4663428c6e84f277ac6d801a0fffa2d 74da57d538d30a105104d6c510239439f4663428c6e84f277ac6d801a0fffa2d make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && ./hack/build-go.sh install " sha256:b18cb6d0540e8dd771c6ecf8eaa4b2884cb7d1ef29889bcfcd54070ed0067268 go version go1.9.2 linux/amd64 skipping directory . go version go1.9.2 linux/amd64 Compiling tests... 
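The node check traced above (kubectl get nodes --no-headers piped through grep -v Ready, then a test on the result) is the gate before the build starts; a small sketch of it as a retry loop, where the loop and the stricter ' Ready ' match are assumptions and cluster/kubectl.sh is the provider wrapper used throughout this trace:

    # Block until every node reports Ready (the trace's one-shot check uses a plain 'grep -v Ready').
    while [ -n "$(cluster/kubectl.sh get nodes --no-headers | grep -v ' Ready ')" ]; do
        echo "Waiting for all nodes to become Ready ..."
        sleep 10
    done
    cluster/kubectl.sh get nodes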
compiled tests.test aae5e8521199ad5ef95b41aa1d23383f815ac348dd06d467d274e6e16bc0a345 aae5e8521199ad5ef95b41aa1d23383f815ac348dd06d467d274e6e16bc0a345 hack/build-docker.sh build sending incremental file list ./ Dockerfile kubernetes.repo sent 854 bytes received 53 bytes 1814.00 bytes/sec total size is 1167 speedup is 1.29 Sending build context to Docker daemon 35.91 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0c81c3a7ddef Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 26d08b2f873c Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> d73a973f334b Step 5/8 : USER 1001 ---> Using cache ---> 92d14ee3fb2e Step 6/8 : COPY virt-controller /virt-controller ---> 5a580d843023 Removing intermediate container f7f591375651 Step 7/8 : ENTRYPOINT /virt-controller ---> Running in b638f75c75a6 ---> bacb01877310 Removing intermediate container b638f75c75a6 Step 8/8 : LABEL "kubevirt-functional-tests-vagrant-release0" '' "virt-controller" '' ---> Running in 799383a58aa6 ---> 1ad915524765 Removing intermediate container 799383a58aa6 Successfully built 1ad915524765 sending incremental file list ./ Dockerfile entrypoint.sh kubevirt-sudo libvirtd.sh sock-connector sent 2970 bytes received 110 bytes 6160.00 bytes/sec total size is 4980 speedup is 1.62 Sending build context to Docker daemon 37.48 MB Step 1/13 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/13 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> fdd57f83e446 Step 3/13 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> b824b882c94e Step 4/13 : COPY sock-connector /sock-connector ---> Using cache ---> 8cbe8006a6c1 Step 5/13 : COPY virt-launcher /virt-launcher ---> 7938d1de40d5 Removing intermediate container c58dff3ab77e Step 6/13 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> 63c863eba9f3 Removing intermediate container d7e1b6c748b5 Step 7/13 : RUN chmod 0640 /etc/sudoers.d/kubevirt ---> Running in a0ee21d2d86d  ---> e821d3d5d73d Removing intermediate container a0ee21d2d86d Step 8/13 : RUN rm -f /libvirtd.sh ---> Running in c19c3153ca17  ---> d11b3fde9fcf Removing intermediate container c19c3153ca17 Step 9/13 : COPY libvirtd.sh /libvirtd.sh ---> 1d328c21cde3 Removing intermediate container 92792c111a68 Step 10/13 : RUN chmod a+x /libvirtd.sh ---> Running in ccaf74d828e6  ---> 246448ca5661 Removing intermediate container ccaf74d828e6 Step 11/13 : COPY entrypoint.sh /entrypoint.sh ---> c2a3fd3c8f1c Removing intermediate container e7b5e5723d81 Step 12/13 : ENTRYPOINT /entrypoint.sh ---> Running in ecc3c4255d21 ---> 2a5ee2be802e Removing intermediate container ecc3c4255d21 Step 13/13 : LABEL "kubevirt-functional-tests-vagrant-release0" '' "virt-launcher" '' ---> Running in fee02669e382 ---> 0867e94baf10 Removing intermediate container fee02669e382 Successfully built 0867e94baf10 sending incremental file list ./ Dockerfile sent 585 bytes received 34 bytes 1238.00 bytes/sec total size is 775 speedup is 1.25 Sending build context to Docker daemon 36.58 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0c81c3a7ddef Step 3/5 : COPY virt-handler /virt-handler ---> 778f9ed252df Removing intermediate container 99a207be5457 Step 4/5 : ENTRYPOINT /virt-handler ---> Running in bcc523320431 ---> 20b8805e405b Removing 
intermediate container bcc523320431 Step 5/5 : LABEL "kubevirt-functional-tests-vagrant-release0" '' "virt-handler" '' ---> Running in 9842abdedc21 ---> 32055fca188f Removing intermediate container 9842abdedc21 Successfully built 32055fca188f sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/cmd/iscsi-demo-target-tgtd ./ Dockerfile run-tgt.sh sent 2185 bytes received 53 bytes 4476.00 bytes/sec total size is 3992 speedup is 1.78 Sending build context to Docker daemon 6.656 kB Step 1/10 : FROM fedora:27 ---> 9110ae7f579f Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0c81c3a7ddef Step 3/10 : ENV container docker ---> Using cache ---> d0b0dc01cb5d Step 4/10 : RUN dnf -y install scsi-target-utils bzip2 e2fsprogs ---> Using cache ---> 35c00214c275 Step 5/10 : RUN mkdir -p /images ---> Using cache ---> e3e179183ea6 Step 6/10 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/1-alpine.img ---> Using cache ---> e86b61826c05 Step 7/10 : ADD run-tgt.sh / ---> Using cache ---> db2dc53efd9e Step 8/10 : EXPOSE 3260 ---> Using cache ---> f2767bc543c9 Step 9/10 : CMD /run-tgt.sh ---> Using cache ---> c066b080f396 Step 10/10 : LABEL "iscsi-demo-target-tgtd" '' "kubevirt-functional-tests-vagrant-release0" '' ---> Running in 6958c41fd958 ---> 06f1b3d5fa2b Removing intermediate container 6958c41fd958 Successfully built 06f1b3d5fa2b sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/cmd/vm-killer ./ Dockerfile sent 602 bytes received 34 bytes 1272.00 bytes/sec total size is 787 speedup is 1.24 Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0c81c3a7ddef Step 3/5 : ENV container docker ---> Using cache ---> d0b0dc01cb5d Step 4/5 : RUN dnf -y install procps-ng && dnf -y clean all ---> Using cache ---> 35c59baf794d Step 5/5 : LABEL "kubevirt-functional-tests-vagrant-release0" '' "vm-killer" '' ---> Running in 7ba12e38cb95 ---> 84c35c8ca829 Removing intermediate container 7ba12e38cb95 Successfully built 84c35c8ca829 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/cmd/registry-disk-v1alpha ./ Dockerfile entry-point.sh sent 1529 bytes received 53 bytes 3164.00 bytes/sec total size is 2482 speedup is 1.57 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> bcec0ae8107e Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> f20a31819d03 Step 3/7 : ENV container docker ---> Using cache ---> 96277a0619bb Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> f43fc40caf89 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> a7c8be03fcd6 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 3129ff620dd1 Step 7/7 : LABEL "kubevirt-functional-tests-vagrant-release0" '' "registry-disk-v1alpha" '' ---> Running in dd8efef75206 ---> 658a9e1d6b56 Removing intermediate container dd8efef75206 Successfully built 658a9e1d6b56 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/cmd/cirros-registry-disk-demo ./ Dockerfile sent 630 bytes 
received 34 bytes 1328.00 bytes/sec total size is 825 speedup is 1.24 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32811/kubevirt/registry-disk-v1alpha:devel ---> 658a9e1d6b56 Step 2/4 : MAINTAINER "David Vossel" \ ---> Running in ffec109bac21 ---> a92abd1a14a5 Removing intermediate container ffec109bac21 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Running in 26187f4069ac % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 12.1M 100 12.1M 0 0 3297k 0 0:00:03 0:00:03 --:--:-- 3296k ---> 9d7f9eb804f7 Removing intermediate container 26187f4069ac Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-vagrant-release0" '' ---> Running in f173539bb24a ---> 5d0acca60306 Removing intermediate container f173539bb24a Successfully built 5d0acca60306 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/cmd/fedora-cloud-registry-disk-demo ./ Dockerfile sent 677 bytes received 34 bytes 1422.00 bytes/sec total size is 926 speedup is 1.30 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32811/kubevirt/registry-disk-v1alpha:devel ---> 658a9e1d6b56 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Running in e864ec2c58aa ---> f550a365e03f Removing intermediate container e864ec2c58aa Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Running in 30c3f21d2c2c % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 221M 100 221M 0 0 15.6M 0 0:00:14 0:00:14 --:--:-- 21.4M ---> dbde0b60ff7d Removing intermediate container 30c3f21d2c2c Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-vagrant-release0" '' ---> Running in d5e54864771a ---> 2cfc954d3c24 Removing intermediate container d5e54864771a Successfully built 2cfc954d3c24 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/cmd/alpine-registry-disk-demo ./ Dockerfile sent 639 bytes received 34 bytes 1346.00
bytes/sec total size is 866 speedup is 1.29 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32811/kubevirt/registry-disk-v1alpha:devel ---> 658a9e1d6b56 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> f550a365e03f Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Running in 233a701e52f8 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 37.0M 100 37.0M 0 0 1345k 0 0:00:28 0:00:28 --:--:-- 1276k ---> c9f0469511f3 Removing intermediate container 233a701e52f8 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-vagrant-release0" '' ---> Running in dfa55c7007bb ---> d17ab7381643 Removing intermediate container dfa55c7007bb Successfully built d17ab7381643 hack/build-docker.sh push The push refers to a repository [localhost:32811/kubevirt/virt-controller] 0fb7f70f3d74: Preparing 2d2b47e1e58b: Preparing 39bae602f753: Preparing 2d2b47e1e58b: Pushed 0fb7f70f3d74: Pushed 39bae602f753: Pushed devel: digest: sha256:a6bcfd4e23c08b34b66741e2e54fcf029d86722a15a1e34ac244ebf19ceded81 size: 948 The push refers to a repository [localhost:32811/kubevirt/virt-launcher] 35a8f582d4bb: Preparing b28de5a2875e: Preparing b28de5a2875e: Preparing 18bd07100b7b: Preparing bc8b2318993f: Preparing d5cc67de3c8b: Preparing 097aa00d8d6b: Preparing 9170be750ee2: Preparing 91a2c9242f65: Preparing 530cc55618cd: Preparing 34fa414dfdf6: Preparing a1359dc556dd: Preparing 490c7c373332: Preparing 4b440db36f72: Preparing 39bae602f753: Preparing 530cc55618cd: Waiting 34fa414dfdf6: Waiting a1359dc556dd: Waiting 490c7c373332: Waiting 4b440db36f72: Waiting 39bae602f753: Waiting 9170be750ee2: Waiting 91a2c9242f65: Waiting d5cc67de3c8b: Pushed 18bd07100b7b:
Pushed 35a8f582d4bb: Pushed bc8b2318993f: Pushed b28de5a2875e: Pushed 9170be750ee2: Pushed 530cc55618cd: Pushed 34fa414dfdf6: Pushed a1359dc556dd: Pushed 39bae602f753: Mounted from kubevirt/virt-controller 490c7c373332: Pushed 097aa00d8d6b: Pushed 91a2c9242f65: Pushed 4b440db36f72: Pushed devel: digest: sha256:b4a9961237e5fc23478af49b3cb9ee7d84c4f99d2fefaac3b93e1bdebbebddda size: 3444 The push refers to a repository [localhost:32811/kubevirt/virt-handler] 36d3c7b68ced: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-launcher 36d3c7b68ced: Pushed devel: digest: sha256:5cc6bb8f1adde4c560abc0c89edc0a25d022641f20c08aa0698bbc72a370d184 size: 740 The push refers to a repository [localhost:32811/kubevirt/iscsi-demo-target-tgtd] 2927410cd43a: Preparing b121fc13ece8: Preparing 18dd75eb79d2: Preparing 716441edb530: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-handler 18dd75eb79d2: Pushed 2927410cd43a: Pushed b121fc13ece8: Pushed 716441edb530: Pushed devel: digest: sha256:7c951d1a1721178728af0b70808806823e49d78cea8ded0e72a0a884a0883e3e size: 1368 The push refers to a repository [localhost:32811/kubevirt/vm-killer] de7d92f6c129: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/iscsi-demo-target-tgtd de7d92f6c129: Pushed devel: digest: sha256:5a825bccfbbf02f4248964f2cc2f4ef91012041d45fe322ca9e7dc0699727f0e size: 740 The push refers to a repository [localhost:32811/kubevirt/registry-disk-v1alpha] cf42eba6bfe3: Preparing a87a1c350b94: Preparing 6709b2da72b8: Preparing cf42eba6bfe3: Pushed a87a1c350b94: Pushed 6709b2da72b8: Pushed devel: digest: sha256:d90684932517489627d942342d425fcfe6feff6d4d51629c25f222c3a7010a78 size: 948 The push refers to a repository [localhost:32811/kubevirt/cirros-registry-disk-demo] b89529e5b717: Preparing cf42eba6bfe3: Preparing a87a1c350b94: Preparing 6709b2da72b8: Preparing 6709b2da72b8: Mounted from kubevirt/registry-disk-v1alpha cf42eba6bfe3: Mounted from kubevirt/registry-disk-v1alpha a87a1c350b94: Mounted from kubevirt/registry-disk-v1alpha b89529e5b717: Pushed devel: digest: sha256:f3c9c8b9e0d59e7f8176b13338d2117a1a3c4152fb762d6370ccb0ba60edf6c3 size: 1160 The push refers to a repository [localhost:32811/kubevirt/fedora-cloud-registry-disk-demo] 006f3a3bee86: Preparing cf42eba6bfe3: Preparing a87a1c350b94: Preparing 6709b2da72b8: Preparing cf42eba6bfe3: Mounted from kubevirt/cirros-registry-disk-demo a87a1c350b94: Mounted from kubevirt/cirros-registry-disk-demo 6709b2da72b8: Mounted from kubevirt/cirros-registry-disk-demo 006f3a3bee86: Pushed devel: digest: sha256:58835b13688af4e80c011cb38ef81953de191928f0d2adbc0fa256babb7c75a3 size: 1161 The push refers to a repository [localhost:32811/kubevirt/alpine-registry-disk-demo] 41449d37bd40: Preparing cf42eba6bfe3: Preparing a87a1c350b94: Preparing 6709b2da72b8: Preparing 6709b2da72b8: Mounted from kubevirt/fedora-cloud-registry-disk-demo cf42eba6bfe3: Mounted from kubevirt/fedora-cloud-registry-disk-demo a87a1c350b94: Mounted from kubevirt/fedora-cloud-registry-disk-demo 41449d37bd40: Pushed devel: digest: sha256:340116a30afd48477365d673b4423ff6b74c15c9b55224104cbbe2c38fa5a6bc size: 1160 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt' 2018/04/06 10:41:20 Waiting for host: 192.168.66.101:22 2018/04/06 10:41:20 Connected to tcp://192.168.66.101:22 Trying to pull repository registry:5000/kubevirt/virt-controller ... 
devel: Pulling from registry:5000/kubevirt/virt-controller 2176639d844b: Pulling fs layer 15a3c1110beb: Pulling fs layer b8d158ca618a: Pulling fs layer 15a3c1110beb: Verifying Checksum 15a3c1110beb: Download complete b8d158ca618a: Verifying Checksum b8d158ca618a: Download complete 2176639d844b: Verifying Checksum 2176639d844b: Download complete 2176639d844b: Pull complete 15a3c1110beb: Pull complete b8d158ca618a: Pull complete Digest: sha256:a6bcfd4e23c08b34b66741e2e54fcf029d86722a15a1e34ac244ebf19ceded81 Trying to pull repository registry:5000/kubevirt/virt-launcher ... devel: Pulling from registry:5000/kubevirt/virt-launcher 2176639d844b: Already exists d7240bccd145: Pulling fs layer f2ef945504a7: Pulling fs layer a4b9e9eb807b: Pulling fs layer a1e80189bea5: Pulling fs layer 6cc174edcebf: Pulling fs layer 1475fa3276a7: Pulling fs layer cdad54148243: Pulling fs layer 6b08fe40b721: Pulling fs layer 26722246fd43: Pulling fs layer f573fae7425b: Pulling fs layer b2f3b6880d80: Pulling fs layer 5f31214cddc5: Pulling fs layer 8c7d9afad006: Pulling fs layer a1e80189bea5: Waiting 6cc174edcebf: Waiting 1475fa3276a7: Waiting cdad54148243: Waiting 6b08fe40b721: Waiting 26722246fd43: Waiting f573fae7425b: Waiting b2f3b6880d80: Waiting 5f31214cddc5: Waiting 8c7d9afad006: Waiting f2ef945504a7: Verifying Checksum f2ef945504a7: Download complete a4b9e9eb807b: Verifying Checksum a4b9e9eb807b: Download complete 6cc174edcebf: Verifying Checksum 6cc174edcebf: Download complete a1e80189bea5: Verifying Checksum a1e80189bea5: Download complete cdad54148243: Verifying Checksum cdad54148243: Download complete 1475fa3276a7: Verifying Checksum 1475fa3276a7: Download complete 6b08fe40b721: Verifying Checksum 6b08fe40b721: Download complete 26722246fd43: Verifying Checksum 26722246fd43: Download complete f573fae7425b: Verifying Checksum f573fae7425b: Download complete b2f3b6880d80: Verifying Checksum b2f3b6880d80: Download complete 5f31214cddc5: Verifying Checksum 5f31214cddc5: Download complete 8c7d9afad006: Verifying Checksum 8c7d9afad006: Download complete d7240bccd145: Verifying Checksum d7240bccd145: Download complete d7240bccd145: Pull complete f2ef945504a7: Pull complete a4b9e9eb807b: Pull complete a1e80189bea5: Pull complete 6cc174edcebf: Pull complete 1475fa3276a7: Pull complete cdad54148243: Pull complete 6b08fe40b721: Pull complete 26722246fd43: Pull complete f573fae7425b: Pull complete b2f3b6880d80: Pull complete 5f31214cddc5: Pull complete 8c7d9afad006: Pull complete Digest: sha256:b4a9961237e5fc23478af49b3cb9ee7d84c4f99d2fefaac3b93e1bdebbebddda Trying to pull repository registry:5000/kubevirt/virt-handler ... devel: Pulling from registry:5000/kubevirt/virt-handler 2176639d844b: Already exists 65f4bfa962ea: Pulling fs layer 65f4bfa962ea: Verifying Checksum 65f4bfa962ea: Download complete 65f4bfa962ea: Pull complete Digest: sha256:5cc6bb8f1adde4c560abc0c89edc0a25d022641f20c08aa0698bbc72a370d184 Trying to pull repository registry:5000/kubevirt/iscsi-demo-target-tgtd ... 
devel: Pulling from registry:5000/kubevirt/iscsi-demo-target-tgtd 2176639d844b: Already exists c81a49dbe8f3: Pulling fs layer 064679d2dcf5: Pulling fs layer bcaf437d3487: Pulling fs layer de7654dd0183: Pulling fs layer de7654dd0183: Waiting 064679d2dcf5: Download complete de7654dd0183: Verifying Checksum de7654dd0183: Download complete bcaf437d3487: Verifying Checksum bcaf437d3487: Download complete c81a49dbe8f3: Verifying Checksum c81a49dbe8f3: Download complete c81a49dbe8f3: Pull complete 064679d2dcf5: Pull complete bcaf437d3487: Pull complete de7654dd0183: Pull complete Digest: sha256:7c951d1a1721178728af0b70808806823e49d78cea8ded0e72a0a884a0883e3e Trying to pull repository registry:5000/kubevirt/vm-killer ... devel: Pulling from registry:5000/kubevirt/vm-killer 2176639d844b: Already exists 297a75761f45: Pulling fs layer 297a75761f45: Verifying Checksum 297a75761f45: Download complete 297a75761f45: Pull complete Digest: sha256:5a825bccfbbf02f4248964f2cc2f4ef91012041d45fe322ca9e7dc0699727f0e Trying to pull repository registry:5000/kubevirt/registry-disk-v1alpha ... devel: Pulling from registry:5000/kubevirt/registry-disk-v1alpha 2115d46e7396: Pulling fs layer 82da67e25ceb: Pulling fs layer ddbdb5d1f9b9: Pulling fs layer ddbdb5d1f9b9: Verifying Checksum ddbdb5d1f9b9: Download complete 82da67e25ceb: Verifying Checksum 82da67e25ceb: Download complete 2115d46e7396: Verifying Checksum 2115d46e7396: Download complete 2115d46e7396: Pull complete 82da67e25ceb: Pull complete ddbdb5d1f9b9: Pull complete Digest: sha256:d90684932517489627d942342d425fcfe6feff6d4d51629c25f222c3a7010a78 Trying to pull repository registry:5000/kubevirt/cirros-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/cirros-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists 9c5f288947a5: Pulling fs layer 9c5f288947a5: Verifying Checksum 9c5f288947a5: Download complete 9c5f288947a5: Pull complete Digest: sha256:f3c9c8b9e0d59e7f8176b13338d2117a1a3c4152fb762d6370ccb0ba60edf6c3 Trying to pull repository registry:5000/kubevirt/fedora-cloud-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/fedora-cloud-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists 852f38131fab: Pulling fs layer 852f38131fab: Download complete 852f38131fab: Pull complete Digest: sha256:58835b13688af4e80c011cb38ef81953de191928f0d2adbc0fa256babb7c75a3 Trying to pull repository registry:5000/kubevirt/alpine-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/alpine-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists c373beaca2de: Pulling fs layer c373beaca2de: Verifying Checksum c373beaca2de: Download complete c373beaca2de: Pull complete Digest: sha256:340116a30afd48477365d673b4423ff6b74c15c9b55224104cbbe2c38fa5a6bc 2018/04/06 10:42:31 Waiting for host: 192.168.66.101:22 2018/04/06 10:42:31 Connected to tcp://192.168.66.101:22 2018/04/06 10:42:33 Waiting for host: 192.168.66.102:22 2018/04/06 10:42:33 Connected to tcp://192.168.66.102:22 Trying to pull repository registry:5000/kubevirt/virt-controller ... 
devel: Pulling from registry:5000/kubevirt/virt-controller 2176639d844b: Pulling fs layer 15a3c1110beb: Pulling fs layer b8d158ca618a: Pulling fs layer 15a3c1110beb: Verifying Checksum 15a3c1110beb: Download complete b8d158ca618a: Verifying Checksum b8d158ca618a: Download complete 2176639d844b: Verifying Checksum 2176639d844b: Download complete 2176639d844b: Pull complete 15a3c1110beb: Pull complete b8d158ca618a: Pull complete Digest: sha256:a6bcfd4e23c08b34b66741e2e54fcf029d86722a15a1e34ac244ebf19ceded81 Trying to pull repository registry:5000/kubevirt/virt-launcher ... devel: Pulling from registry:5000/kubevirt/virt-launcher 2176639d844b: Already exists d7240bccd145: Pulling fs layer f2ef945504a7: Pulling fs layer a4b9e9eb807b: Pulling fs layer a1e80189bea5: Pulling fs layer 6cc174edcebf: Pulling fs layer 1475fa3276a7: Pulling fs layer cdad54148243: Pulling fs layer 6b08fe40b721: Pulling fs layer 26722246fd43: Pulling fs layer f573fae7425b: Pulling fs layer b2f3b6880d80: Pulling fs layer 5f31214cddc5: Pulling fs layer 8c7d9afad006: Pulling fs layer 1475fa3276a7: Waiting cdad54148243: Waiting 6b08fe40b721: Waiting 26722246fd43: Waiting f573fae7425b: Waiting b2f3b6880d80: Waiting 5f31214cddc5: Waiting 8c7d9afad006: Waiting a1e80189bea5: Waiting 6cc174edcebf: Waiting f2ef945504a7: Verifying Checksum f2ef945504a7: Download complete a4b9e9eb807b: Verifying Checksum a4b9e9eb807b: Download complete a1e80189bea5: Verifying Checksum a1e80189bea5: Download complete 6cc174edcebf: Verifying Checksum 6cc174edcebf: Download complete cdad54148243: Verifying Checksum cdad54148243: Download complete 1475fa3276a7: Verifying Checksum 1475fa3276a7: Download complete 26722246fd43: Verifying Checksum 26722246fd43: Download complete 6b08fe40b721: Verifying Checksum 6b08fe40b721: Download complete f573fae7425b: Verifying Checksum f573fae7425b: Download complete b2f3b6880d80: Verifying Checksum b2f3b6880d80: Download complete 5f31214cddc5: Verifying Checksum 5f31214cddc5: Download complete 8c7d9afad006: Verifying Checksum 8c7d9afad006: Download complete d7240bccd145: Verifying Checksum d7240bccd145: Download complete d7240bccd145: Pull complete f2ef945504a7: Pull complete a4b9e9eb807b: Pull complete a1e80189bea5: Pull complete 6cc174edcebf: Pull complete 1475fa3276a7: Pull complete cdad54148243: Pull complete 6b08fe40b721: Pull complete 26722246fd43: Pull complete f573fae7425b: Pull complete b2f3b6880d80: Pull complete 5f31214cddc5: Pull complete 8c7d9afad006: Pull complete Digest: sha256:b4a9961237e5fc23478af49b3cb9ee7d84c4f99d2fefaac3b93e1bdebbebddda Trying to pull repository registry:5000/kubevirt/virt-handler ... devel: Pulling from registry:5000/kubevirt/virt-handler 2176639d844b: Already exists 65f4bfa962ea: Pulling fs layer 65f4bfa962ea: Verifying Checksum 65f4bfa962ea: Download complete 65f4bfa962ea: Pull complete Digest: sha256:5cc6bb8f1adde4c560abc0c89edc0a25d022641f20c08aa0698bbc72a370d184 Trying to pull repository registry:5000/kubevirt/iscsi-demo-target-tgtd ... 
devel: Pulling from registry:5000/kubevirt/iscsi-demo-target-tgtd 2176639d844b: Already exists c81a49dbe8f3: Pulling fs layer 064679d2dcf5: Pulling fs layer bcaf437d3487: Pulling fs layer de7654dd0183: Pulling fs layer de7654dd0183: Waiting 064679d2dcf5: Verifying Checksum 064679d2dcf5: Download complete bcaf437d3487: Verifying Checksum bcaf437d3487: Download complete de7654dd0183: Verifying Checksum de7654dd0183: Download complete c81a49dbe8f3: Verifying Checksum c81a49dbe8f3: Download complete c81a49dbe8f3: Pull complete 064679d2dcf5: Pull complete bcaf437d3487: Pull complete de7654dd0183: Pull complete Digest: sha256:7c951d1a1721178728af0b70808806823e49d78cea8ded0e72a0a884a0883e3e Trying to pull repository registry:5000/kubevirt/vm-killer ... devel: Pulling from registry:5000/kubevirt/vm-killer 2176639d844b: Already exists 297a75761f45: Pulling fs layer 297a75761f45: Verifying Checksum 297a75761f45: Download complete 297a75761f45: Pull complete Digest: sha256:5a825bccfbbf02f4248964f2cc2f4ef91012041d45fe322ca9e7dc0699727f0e Trying to pull repository registry:5000/kubevirt/registry-disk-v1alpha ... devel: Pulling from registry:5000/kubevirt/registry-disk-v1alpha 2115d46e7396: Pulling fs layer 82da67e25ceb: Pulling fs layer ddbdb5d1f9b9: Pulling fs layer ddbdb5d1f9b9: Verifying Checksum ddbdb5d1f9b9: Download complete 82da67e25ceb: Verifying Checksum 82da67e25ceb: Download complete 2115d46e7396: Verifying Checksum 2115d46e7396: Download complete 2115d46e7396: Pull complete 82da67e25ceb: Pull complete ddbdb5d1f9b9: Pull complete Digest: sha256:d90684932517489627d942342d425fcfe6feff6d4d51629c25f222c3a7010a78 Trying to pull repository registry:5000/kubevirt/cirros-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/cirros-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists 9c5f288947a5: Pulling fs layer 9c5f288947a5: Verifying Checksum 9c5f288947a5: Download complete 9c5f288947a5: Pull complete Digest: sha256:f3c9c8b9e0d59e7f8176b13338d2117a1a3c4152fb762d6370ccb0ba60edf6c3 Trying to pull repository registry:5000/kubevirt/fedora-cloud-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/fedora-cloud-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists 852f38131fab: Pulling fs layer 852f38131fab: Verifying Checksum 852f38131fab: Download complete 852f38131fab: Pull complete Digest: sha256:58835b13688af4e80c011cb38ef81953de191928f0d2adbc0fa256babb7c75a3 Trying to pull repository registry:5000/kubevirt/alpine-registry-disk-demo ... 
devel: Pulling from registry:5000/kubevirt/alpine-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists c373beaca2de: Pulling fs layer c373beaca2de: Verifying Checksum c373beaca2de: Download complete c373beaca2de: Pull complete Digest: sha256:340116a30afd48477365d673b4423ff6b74c15c9b55224104cbbe2c38fa5a6bc 2018/04/06 10:43:53 Waiting for host: 192.168.66.102:22 2018/04/06 10:43:53 Connected to tcp://192.168.66.102:22 Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ PROVIDER=k8s-1.9.3 ++ provider_prefix=kubevirt-functional-tests-vagrant-release0 ++ job_prefix=kubevirt-functional-tests-vagrant-release0 + source cluster/k8s-1.9.3/provider.sh ++ set -e ++ image=k8s-1.9.3@sha256:2f1600681800f70de293d2d35fa129bfd2c64e14ea01bab0284e4cafcc330662 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/cli@sha256:b0023d1863338ef04fa0b8a8ee5956ae08616200d89ffd2e230668ea3deeaff4' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ PROVIDER=k8s-1.9.3 ++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.9.3.sh ++ source hack/config-provider-k8s-1.9.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig +++ docker_prefix=localhost:32811/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
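The config sourcing traced above layers a default file, a version-specific file and a per-provider override; a rough sketch of that layering in hack/config.sh, reconstructed only from the variables visible in the trace (the exact file layout is an assumption):

    # Reset everything, then let each layer override the previous one.
    unset binaries docker_images docker_prefix docker_tag manifest_templates \
          master_ip network_provider kubeconfig manifest_docker_prefix namespace

    source hack/config-default.sh                  # docker_tag=latest, master_ip=192.168.200.2, ...
    source "hack/config-${PROVIDER}.sh"            # k8s-1.9.3 specifics

    # The ephemeral-provider override points everything at the dockerized cluster:
    # master_ip=127.0.0.1, docker_tag=devel, docker_prefix=localhost:32811/kubevirt,
    # manifest_docker_prefix=registry:5000/kubevirt
    test -f "hack/config-provider-${PROVIDER}.sh" && source "hack/config-provider-${PROVIDER}.sh"
    test -f hack/config-local.sh && source hack/config-local.sh

    export binaries docker_images docker_prefix docker_tag manifest_templates \
           master_ip network_provider kubeconfig namespace

The split between docker_prefix and manifest_docker_prefix is consistent with the rest of the log: images are pushed to localhost:32811/kubevirt from the host, and the nodes pull them back as registry:5000/kubevirt.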
+ _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ cluster/k8s-1.9.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): 
customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ wc -l ++ cluster/k8s-1.9.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + 
source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ PROVIDER=k8s-1.9.3 ++ provider_prefix=kubevirt-functional-tests-vagrant-release0 ++ job_prefix=kubevirt-functional-tests-vagrant-release0 + source cluster/k8s-1.9.3/provider.sh ++ set -e ++ image=k8s-1.9.3@sha256:2f1600681800f70de293d2d35fa129bfd2c64e14ea01bab0284e4cafcc330662 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/cli@sha256:b0023d1863338ef04fa0b8a8ee5956ae08616200d89ffd2e230668ea3deeaff4' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ PROVIDER=k8s-1.9.3 ++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.9.3.sh ++ source hack/config-provider-k8s-1.9.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig +++ docker_prefix=localhost:32811/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... 
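clean.sh above and deploy.sh below both drive kubectl through the same _kubectl wrapper that keeps appearing in the trace (export KUBECONFIG, then the provider's bundled .kubectl). The label-driven cleanup can be sketched as follows; the function body mirrors the trace, while the loop over resource kinds is a condensed reconstruction:

    # kubectl pinned to the ephemeral cluster's kubeconfig, as the trace shows before every call.
    _kubectl() {
        export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
        cluster/k8s-1.9.3/.kubectl "$@"
    }

    # Delete every KubeVirt-labelled object in the default and kube-system namespaces,
    # in the same order as the cleanup trace above.
    namespaces=(default kube-system)
    for ns in "${namespaces[@]}"; do
        for kind in apiservices deployment rs services pv pvc ds \
                    customresourcedefinitions pods clusterrolebinding clusterroles serviceaccounts; do
            _kubectl -n "$ns" delete "$kind" -l kubevirt.io
        done
    done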
+ [[ -z vagrant-release ]] + [[ vagrant-release =~ .*-dev ]] + [[ vagrant-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding "kubevirt-controller" created clusterrolebinding "kubevirt-controller-cluster-admin" created clusterrolebinding "kubevirt-privileged-cluster-admin" created customresourcedefinition "virtualmachines.kubevirt.io" created customresourcedefinition "virtualmachinereplicasets.kubevirt.io" created deployment "virt-controller" created daemonset "virt-handler" created customresourcedefinition "virtualmachinepresets.kubevirt.io" created customresourcedefinition "offlinevirtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "iscsi-disk-alpine" created daemonset "iscsi-demo-target-tgtd" created serviceaccount "kubevirt-testing" created clusterrolebinding "kubevirt-testing-cluster-admin" created + '[' k8s-1.9.3 = vagrant-openshift ']' + '[' k8s-1.9.3 = os-3.9.0-alpha.4 ']' + echo Done Done ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-controller-5f7c946cc4-8lhnk 0/1 ContainerCreating 0 0s virt-controller-5f7c946cc4-mw2z2 0/1 ContainerCreating 0 0s virt-handler-nflxp 0/1 ContainerCreating 0 0s virt-handler-vl2d8 0/1 ContainerCreating 0 0s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... 
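The deploy step above skips any release manifest whose name matches .*demo.* and applies everything else, followed by the testing manifests; condensed into a sketch (MANIFESTS_OUT_DIR and _kubectl as defined earlier in the trace):

    # Apply the release manifests except the demo content, then the testing manifests recursively.
    for manifest in "${MANIFESTS_OUT_DIR}/release/"*; do
        [[ ${manifest} =~ .*demo.* ]] && continue
        _kubectl create -f "${manifest}"
    done
    _kubectl create -f "${MANIFESTS_OUT_DIR}/testing" -R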
+ kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running virt-controller-5f7c946cc4-8lhnk 0/1 ContainerCreating 0 0s virt-controller-5f7c946cc4-mw2z2 0/1 ContainerCreating 0 0s virt-handler-nflxp 0/1 ContainerCreating 0 0s virt-handler-vl2d8 0/1 ContainerCreating 0 0s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n '' ']' ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-hk2mn false iscsi-demo-target-tgtd-nr5cz' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... + kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers false iscsi-demo-target-tgtd-hk2mn false iscsi-demo-target-tgtd-nr5cz + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-hk2mn false iscsi-demo-target-tgtd-nr5cz' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... + awk '!/virt-controller/ && /false/' + kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers false iscsi-demo-target-tgtd-hk2mn false iscsi-demo-target-tgtd-nr5cz + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-hk2mn false iscsi-demo-target-tgtd-nr5cz' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' false iscsi-demo-target-tgtd-hk2mn false iscsi-demo-target-tgtd-nr5cz + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-hk2mn false iscsi-demo-target-tgtd-nr5cz' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... + kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers false iscsi-demo-target-tgtd-hk2mn false iscsi-demo-target-tgtd-nr5cz + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-hk2mn false iscsi-demo-target-tgtd-nr5cz' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... + kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers false iscsi-demo-target-tgtd-hk2mn false iscsi-demo-target-tgtd-nr5cz + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-hk2mn false iscsi-demo-target-tgtd-nr5cz' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-hk2mn
false iscsi-demo-target-tgtd-nr5cz
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n '' ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '/virt-controller/ && /true/'
++ wc -l
+ '[' 1 -lt 1 ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
etcd-node01                        1/1       Running   0          10m
iscsi-demo-target-tgtd-hk2mn       1/1       Running   1          1m
iscsi-demo-target-tgtd-nr5cz       1/1       Running   1          1m
kube-apiserver-node01              1/1       Running   0          10m
kube-controller-manager-node01     1/1       Running   0          10m
kube-dns-6f4fd4bdf-wckt5           3/3       Running   0          11m
kube-flannel-ds-4nvvc              1/1       Running   0          11m
kube-flannel-ds-wj898              1/1       Running   0          11m
kube-proxy-4rvwt                   1/1       Running   0          11m
kube-proxy-9r5p4                   1/1       Running   0          11m
kube-scheduler-node01              1/1       Running   0          10m
virt-controller-5f7c946cc4-8lhnk   1/1       Running   0          1m
virt-controller-5f7c946cc4-mw2z2   0/1       Running   0          1m
virt-handler-nflxp                 1/1       Running   0          1m
virt-handler-vl2d8                 1/1       Running   0          1m
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params=--ginkgo.noColor
+ [[ -d /home/nfs/images/windows2016 ]]
+ FUNC_TEST_ARGS=--ginkgo.noColor
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:b18cb6d0540e8dd771c6ecf8eaa4b2884cb7d1ef29889bcfcd54070ed0067268
go version go1.9.2 linux/amd64
rsync: read error: Connection reset by peer (104)
rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9]
Waiting for rsyncd to be ready
skipping directory .
go version go1.9.2 linux/amd64
Compiling tests...
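Before the test build above kicked off, the script applied one last gate: at least one virt-controller replica had to report ready (the second replica stays 0/1 in the pod listing, apparently by design), after which the pod table and the client and server versions were recorded and make functest was invoked with --ginkgo.noColor; the [[ -d ... ]] check visible above appears not to have added any Windows-specific arguments. A rough sketch of that gate and invocation follows, with the assumption that FUNC_TEST_ARGS is forwarded to the Ginkgo runner the way the flag in the trace suggests:

    # Sketch of the final readiness gate and test kick-off (names from the trace;
    # forwarding of FUNC_TEST_ARGS by make functest is an assumption).
    ready_controllers=$(cluster/kubectl.sh get pods -n kube-system \
        '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' \
        --no-headers | awk '/virt-controller/ && /true/' | wc -l)
    if [ "$ready_controllers" -lt 1 ]; then
        # The trace only shows the passing case; a real script might keep polling here.
        echo 'no ready virt-controller replica yet' >&2
        exit 1
    fi
    cluster/kubectl.sh get pods -n kube-system
    cluster/kubectl.sh version
    export FUNC_TEST_ARGS='--ginkgo.noColor'
    make functest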
compiled tests.test 4378785c38e3684046b78a783ea9470d692161ed3e0f7a6814e9e13adadff704 4378785c38e3684046b78a783ea9470d692161ed3e0f7a6814e9e13adadff704 hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1523011570 Will run 63 of 63 specs •••••••volumedisk0 compute ------------------------------ • [SLOW TEST:50.991 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:39 VM definition /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:50 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:51 should report 3 cpu cores under guest OS /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:57 ------------------------------ • [SLOW TEST:52.524 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:39 New VM with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:109 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:132 ------------------------------ • [SLOW TEST:20.269 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:45 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:56 should update OfflineVirtualMachine once VMs are up /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:145 ------------------------------ ••• ------------------------------ • [SLOW TEST:18.399 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:45 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:56 should stop VM if running set to false /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:211 ------------------------------ • [SLOW TEST:212.035 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:45 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:56 should start and stop VM multiple times /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:219 ------------------------------ • [SLOW TEST:65.395 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:45 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:56 should not update the VM spec if Running /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:232 ------------------------------ STEP: Creating new OVM, not running STEP: Starting the VM STEP: OVM has the running condition STEP: Getting the running VM STEP: Obtaining the serial console level=info timestamp=2018-04-06T10:54:09.747843Z pos=utils.go:877 component=tests namespace=kubevirt-test-default name=testvmvx4jr kind=VirtualMachine uid=b7087b64-3988-11e8-9688-525500d15501 msg="[{2 \r\n\r\n[ 0.000000] Initializing cgroup subsys cpuset\r\n[ 0.000000] Initializing cgroup subsys cpu\r\n[ 0.000000] Initializing cgroup subsys cpuacct\r\n[ 0.000000] Linux version 4.4.0-28-generic (buildd@lcy01-13) (gcc version 5.3.1 20160413 (Ubuntu 5.3.1-14ubuntu2.1) ) #47-Ubuntu SMP Fri Jun 24 10:09:13 UTC 2016 (Ubuntu 4.4.0-28.47-generic 4.4.13)\r\n[ 0.000000] Command line: LABEL=cirros-rootfs ro console=tty1 console=ttyS0\r\n[ 0.000000] KERNEL supported cpus:\r\n[ 0.000000] Intel GenuineIntel\r\n[ 0.000000] AMD AuthenticAMD\r\n[ 0.000000] Centaur CentaurHauls\r\n[ 0.000000] x86/fpu: Legacy x87 FPU detected.\r\n[ 0.000000] x86/fpu: Using 'lazy' FPU context switches.\r\n[ 0.000000] e820: BIOS-provided physical RAM map:\r\n[ 0.000000] BIOS-e820: 
[mem 0x0000000000000000-0x000000000009fbff] usable\r\n[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved\r\n[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved\r\n[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000003ddbfff] usable\r\n[ 0.000000] BIOS-e820: [mem 0x0000000003ddc000-0x0000000003dfffff] reserved\r\n[ 0.000000] BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved\r\n[ 0.000000] BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved\r\n[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved\r\n[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved\r\n[ 0.000000] NX (Execute Disable) protection: active\r\n[ 0.000000] SMBIOS 2.8 present.\r\n[ 0.000000] Hypervisor detected: KVM\r\n[ 0.000000] e820: last_pfn = 0x3ddc max_arch_pfn = 0x400000000\r\n[ 0.000000] x86/PAT: PAT not supported by CPU.\r\n[ 0.000000] found SMP MP-table at [mem 0x000f6c20-0x000f6c2f] mapped at [ffff8800000f6c20]\r\n[ 0.000000] Scanning 1 areas for low memory corruption\r\n[ 0.000000] RAMDISK: [mem 0x03916000-0x03dcbfff]\r\n[ 0.000000] ACPI: Early table checksum verification disabled\r\n[ 0.000000] ACPI: RSDP 0x00000000000F6A50 000014 (v00 BOCHS )\r\n[ 0.000000] ACPI: RSDT 0x0000000003DE20A8 000034 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)\r\n[ 0.000000] ACPI: FACP 0x0000000003DE1EC8 0000F4 (v03 BOCHS BXPCFACP 00000001 BXPC 00000001)\r\n[ 0.000000] ACPI: DSDT 0x0000000003DE0040 001E88 (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001)\r\n[ 0.000000] ACPI: FACS 0x0000000003DE0000 000040\r\n[ 0.000000] ACPI: APIC 0x0000000003DE1FBC 000078 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)\r\n[ 0.000000] ACPI: HPET 0x0000000003DE2034 000038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)\r\n[ 0.000000] ACPI: MCFG 0x0000000003DE206C 00003C (v01 BOCHS BXPCMCFG 00000001 BXPC 00000001)\r\n[ 0.000000] No NUMA configuration found\r\n[ 0.000000] Faking a node at [mem 0x0000000000000000-0x0000000003ddbfff]\r\n[ 0.000000] NODE_DATA(0) allocated [mem 0x03dd7000-0x03ddbfff]\r\n[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00\r\n[ 0.000000] kvm-clock: cpu 0, msr 0:3dd3001, primary cpu clock\r\n[ 0.000000] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns\r\n[ 0.000000] Zone ranges:\r\n[ 0.000000] DMA [mem 0x0000000000001000-0x0000000000ffffff]\r\n[ 0.000000] DMA32 [mem 0x0000000001000000-0x0000000003ddbfff]\r\n[ 0.000000] Normal empty\r\n[ 0.000000] Device empty\r\n[ 0.000000] Movable zone start for each node\r\n[ 0.000000] Early memory node ranges\r\n[ 0.000000] node 0: [mem 0x0000000000001000-0x000000000009efff]\r\n[ 0.000000] node 0: [mem 0x0000000000100000-0x0000000003ddbfff]\r\n[ 0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x0000000003ddbfff]\r\n[ 0.000000] ACPI: PM-Timer IO Port: 0x608\r\n[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])\r\n[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23\r\n[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)\r\n[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)\r\n[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)\r\n[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)\r\n[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)\r\n[ 0.000000] Using ACPI (MADT) for SMP configuration information\r\n[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000\r\n[ 0.000000] smpboot: Allowing 
1 CPUs, 0 hotplug CPUs\r\n[ 0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]\r\n[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]\r\n[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]\r\n[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]\r\n[ 0.000000] e820: [mem 0x03e00000-0xafffffff] available for PCI devices\r\n[ 0.000000] Booting paravirtualized kernel on KVM\r\n[ 0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns\r\n[ 0.000000] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:1 nr_node_ids:1\r\n[ 0.000000] PERCPU: Embedded 33 pages/cpu @ffff880003600000 s98008 r8192 d28968 u2097152\r\n[ 0.000000] KVM setup async PF for cpu 0\r\n[ 0.000000] kvm-stealtime: cpu 0, msr 360d940\r\n[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 15469\r\n[ 0.000000] Policy zone: DMA32\r\n[ 0.000000] Kernel command line: LABEL=cirros-rootfs ro console=tty1 console=ttyS0\r\n[ 0.000000] PID hash table entries: 256 (order: -1, 2048 bytes)\r\n[ 0.000000] Memory: 37156K/62952K available (8368K kernel code, 1280K rwdata, 3928K rodata, 1480K init, 1292K bss, 25796K reserved, 0K cma-reserved)\r\n[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1\r\n[ 0.000000] Hierarchical RCU implementation.\r\n[ 0.000000] \tBuild-time adjustment of leaf fanout to 64.\r\n[ 0.000000] \tRCU restricting CPUs from NR_CPUS=256 to nr_cpu_ids=1.\r\n[ 0.000000] RCU: Adjusting geometry for rcu_fanout_leaf=64, nr_cpu_ids=1\r\n[ 0.000000] NR_IRQS:16640 nr_irqs:256 16\r\n[ 0.000000] Console: colour VGA+ 80x25\r\n[ 0.000000] console [tty1] enabled\r\n[ 0.000000] console [ttyS0] enabled\r\n[ 0.000000] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns\r\n[ 0.000000] tsc: Detected 2659.996 MHz processor\r\n[ 0.016000] Calibrating delay loop (skipped) preset value.. 
5319.99 BogoMIPS (lpj=10639984)\r\n[ 0.020103] pid_max: default: 32768 minimum: 301\r\n[ 0.024108] ACPI: Core revision 20150930\r\n[ 0.031184] ACPI: 1 ACPI AML tables successfully acquired and loaded\r\n[ 0.040761] Security Framework initialized\r\n[ 0.044139] Yama: becoming mindful.\r\n[ 0.052542] AppArmor: AppArmor initialized\r\n[ 0.056504] Dentry cache hash table entries: 8192 (order: 4, 65536 bytes)\r\n[ 0.061132] Inode-cache hash table entries: 4096 (order: 3, 32768 bytes)\r\n[ 0.068363] Mount-cache hash table entries: 512 (order: 0, 4096 bytes)\r\n[ 0.072049] Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)\r\n[ 0.085801] Initializing cgroup subsys io\r\n[ 0.088130] Initializing cgroup subsys memory\r\n[ 0.092338] Initializing cgroup subsys devices\r\n[ 0.096118] Initializing cgroup subsys freezer\r\n[ 0.100057] Initializing cgroup subsys net_cls\r\n[ 0.104109] Initializing cgroup subsys perf_event\r\n[ 0.108890] Initializing cgroup subsys net_prio\r\n[ 0.112053] Initializing cgroup subsys hugetlb\r\n[ 0.116048] Initializing cgroup subsys pids\r\n[ 0.126679] CPU: Physical Processor ID: 0\r\n[ 0.128503] mce: CPU supports 10 MCE banks\r\n[ 0.133718] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0\r\n[ 0.136048] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0\r\n[ 2.935233] Freeing SMP alternatives memory: 28K (ffffffff820b4000 - ffffffff820bb000)\r\n[ 3.225683] ftrace: allocating 31920 entries in 125 pages\r\n[ 3.249219] smpboot: Max logical packages: 1\r\n[ 3.252049] smpboot: APIC(0) Converting physical 0 to logical package 0\r\n[ 3.262632] x2apic enabled\r\n[ 3.264059] Switched APIC routing to physical x2apic.\r\n[ 3.287126] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1\r\n[ 3.288000] APIC calibration not consistent with PM-Timer: 160ms instead of 100ms\r\n[ 3.288000] APIC delta adjusted to PM-Timer: 6250358 (10002539)\r\n[ 3.288806] smpboot: CPU0: Intel QEMU Virtual CPU version 2.5+ (family: 0x6, model: 0x6, stepping: 0x3)\r\n[ 3.308674] Performance Events: Broken PMU hardware detected, using software events only.\r\n[ 3.316006] Failed to access perfctr msr (MSR c2 is 0)\r\n[ 3.334600] x86: Booted up 1 node, 1 CPUs\r\n[ 3.336008] smpboot: Total of 1 processors activated (5319.99 BogoMIPS)\r\n[ 3.348149] devtmpfs: initialized\r\n[ 3.359322] evm: security.selinux\r\n[ 3.360004] evm: security.SMACK64\r\n[ 3.364004] evm: security.SMACK64EXEC\r\n[ 3.368003] evm: security.SMACK64TRANSMUTE\r\n[ 3.372010] evm: security.SMACK64MMAP\r\n[ 3.376009] evm: security.ima\r\n[ 3.380011] evm: security.capability\r\n[ 3.384911] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns\r\n[ 3.388399] pinctrl core: initialized pinctrl subsystem\r\n[ 3.394152] RTC time: 10:53:49, date: 04/06/18\r\n[ 3.398298] NET: Registered protocol family 16\r\n[ 3.405195] cpuidle: using governor ladder\r\n[ 3.408026] cpuidle: using governor menu\r\n[ 3.412030] PCCT header not found.\r\n[ 3.421041] ACPI: bus type PCI registered\r\n[ 3.424014] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5\r\n[ 3.432194] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)\r\n[ 3.436015] PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820\r\n[ 3.440483] PCI: Using configuration type 1 for base access\r\n[ 3.453626] ACPI: Added _OSI(Module Device)\r\n[ 3.456005] ACPI: Added _OSI(Processor Device)\r\n[ 3.460006] ACPI: Added _OSI(3.0 _SCP Extensions)\r\n[ 3.464004] ACPI: Added _OSI(Processor Aggregator Device)\r\n[ 
3.480623] ACPI: Interpreter enabled\r\n[ 3.484018] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\\_S1_] (20150930/hwxface-580)\r\n[ 3.496007] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\\_S2_] (20150930/hwxface-580)\r\n[ 3.508026] ACPI: (supports S0 S3 S4 S5)\r\n[ 3.512011] ACPI: Using IOAPIC for interrupt routing\r\n[ 3.516103] PCI: Using host bridge windows from ACPI; if necessary, use \"pci=nocrs\" and report a bug\r\n[ 3.529591] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])\r\n[ 3.532013] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]\r\n[ 3.536148] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]\r\n[ 3.540775] PCI host bridge to bus 0000:00\r\n[ 3.544006] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]\r\n[ 3.548006] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]\r\n[ 3.552006] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]\r\n[ 3.556007] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]\r\n[ 3.560006] pci_bus 0000:00: root bus resource [bus 00-ff]\r\n[ 3.917511] pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO\r\n[ 4.084337] pci 0000:00:02.0: PCI bridge to [bus 01]\r\n[ 4.172501] pci 0000:00:02.1: PCI bridge to [bus 02]\r\n[ 4.278959] pci 0000:00:02.2: PCI bridge to [bus 03]\r\n[ 4.335725] pci 0000:00:02.3: PCI bridge to [bus 04]\r\n[ 4.349681] pci 0000:00:02.4: PCI bridge to [bus 05]\r\n[ 4.397605] pci 0000:00:1e.0: PCI bridge to [bus 06-07] (subtractive decode)\r\n[ 4.478230] pci 0000:06:00.0: PCI bridge to [bus 07]\r\n[ 4.526006] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)\r\n[ 4.540742] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)\r\n[ 4.560025] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)\r\n[ 4.576623] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)\r\n[ 4.592578] ACPI: PCI Interrupt Link [LNKE] (IRQs 5 *10 11)\r\n[ 4.608572] ACPI: PCI Interrupt Link [LNKF] (IRQs 5 *10 11)\r\n[ 4.624563] ACPI: PCI Interrupt Link [LNKG] (IRQs 5 10 *11)\r\n[ 4.640578] ACPI: PCI Interrupt Link [LNKH] (IRQs 5 10 *11)\r\n[ 4.656277] ACPI: PCI Interrupt Link [GSIA] (IRQs *16)\r\n[ 4.668026] ACPI: PCI Interrupt Link [GSIB] (IRQs *17)\r\n[ 4.680018] ACPI: PCI Interrupt Link [GSIC] (IRQs *18)\r\n[ 4.692027] ACPI: PCI Interrupt Link [GSID] (IRQs *19)\r\n[ 4.704026] ACPI: PCI Interrupt Link [GSIE] (IRQs *20)\r\n[ 4.716028] ACPI: PCI Interrupt Link [GSIF] (IRQs *21)\r\n[ 4.728020] ACPI: PCI Interrupt Link [GSIG] (IRQs *22)\r\n[ 4.740026] ACPI: PCI Interrupt Link [GSIH] (IRQs *23)\r\n[ 4.753202] ACPI: Enabled 1 GPEs in block 00 to 3F\r\n[ 4.762274] vgaarb: setting as boot device: PCI:0000:00:01.0\r\n[ 4.764000] vgaarb: device added: PCI:0000:00:01.0,decodes=io+mem,owns=io+mem,locks=none\r\n[ 4.764013] vgaarb: loaded\r\n[ 4.768003] vgaarb: bridge control possible 0000:00:01.0\r\n[ 4.776622] SCSI subsystem initialized\r\n[ 4.787215] ACPI: bus type USB registered\r\n[ 4.788476] usbcore: registered new interface driver usbfs\r\n[ 4.792191] usbcore: registered new interface driver hub\r\n[ 4.796085] usbcore: registered new device driver usb\r\n[ 4.806839] PCI: Using ACPI for IRQ routing\r\n[ 5.847095] NetLabel: Initializing\r\n[ 5.848179] NetLabel: domain hash size = 128\r\n[ 5.852004] NetLabel: protocols = UNLABELED CIPSOv4\r\n[ 5.856453] NetLabel: unlabeled traffic allowed by default\r\n[ 5.867186] HPET: 3 timers in total, 0 timers will be used for per-cpu timer\r\n[ 5.868409] hpet0: at MMIO 0xfed00000, IRQs 
2, 8, 0\r\n[ 5.884122] hpet0: 3 comparators, 64-bit 100.000000 MHz counter\r\n[ 5.896150] clocksource: Switched to clocksource kvm-clock\r\n[ 6.025765] AppArmor: AppArmor Filesystem Enabled\r\n[ 6.047560] pnp: PnP ACPI init\r\n[ 6.065613] pnp: PnP ACPI: found 4 devices\r\n[ 6.095540] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns\r\n[ 6.175816] pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]\r\n[ 6.198883] pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]\r\n[ 6.220404] pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]\r\n[ 6.242129] pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]\r\n[ 6.263468] pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]\r\n[ 6.284620] pci 0000:00:02.0: PCI bridge to [bus 01]\r\n[ 6.303184] pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]\r\n[ 6.329869] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]\r\n[ 6.367333] pci 0000:00:02.0: bridge window [mem 0xfda00000-0xfdbfffff 64bit pref]\r\n[ 6.422773] pci 0000:00:02.1: PCI bridge to [bus 02]\r\n[ 6.445088] pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]\r\n[ 6.473921] pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]\r\n[ 6.503273] pci 0000:00:02.1: bridge window [mem 0xfd800000-0xfd9fffff 64bit pref]\r\n[ 6.544393] pci 0000:00:02.2: PCI bridge to [bus 03]\r\n[ 6.566057] pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]\r\n[ 6.595289] pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]\r\n[ 6.625159] pci 0000:00:02.2: bridge window [mem 0xfd600000-0xfd7fffff 64bit pref]\r\n[ 6.667662] pci 0000:00:02.3: PCI bridge to [bus 04]\r\n[ 6.688385] pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]\r\n[ 6.719299] pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]\r\n[ 6.748725] pci 0000:00:02.3: bridge window [mem 0xfd400000-0xfd5fffff 64bit pref]\r\n[ 6.788320] pci 0000:00:02.4: PCI bridge to [bus 05]\r\n[ 6.808888] pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]\r\n[ 6.836374] pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]\r\n[ 6.865456] pci 0000:00:02.4: bridge window [mem 0xfd200000-0xfd3fffff 64bit pref]\r\n[ 6.906793] pci 0000:06:00.0: PCI bridge to [bus 07]\r\n[ 6.927343] pci 0000:06:00.0: bridge window [io 0xc000-0xcfff]\r\n[ 6.960748] pci 0000:06:00.0: bridge window [mem 0xfdc00000-0xfddfffff]\r\n[ 6.999613] pci 0000:06:00.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]\r\n[ 7.049761] pci 0000:00:1e.0: PCI bridge to [bus 06-07]\r\n[ 7.075050] pci 0000:00:1e.0: bridge window [io 0xc000-0xcfff]\r\n[ 7.104983] pci 0000:00:1e.0: bridge window [mem 0xfdc00000-0xfdffffff]\r\n[ 7.134981] pci 0000:00:1e.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]\r\n[ 7.175973] NET: Registered protocol family 2\r\n[ 7.198902] TCP established hash table entries: 512 (order: 0, 4096 bytes)\r\n[ 7.224995] TCP bind hash table entries: 512 (order: 1, 8192 bytes)\r\n[ 7.247388] TCP: Hash tables configured (established 512 bind 512)\r\n[ 7.274303] UDP hash table entries: 256 (order: 1, 8192 bytes)\r\n[ 7.295537] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)\r\n[ 7.317880] NET: Registered protocol family 1\r\n[ 7.338819] ACPI: PCI Interrupt Link [GSIG] enabled at IRQ 22\r\n[ 7.371965] Trying to unpack rootfs image as initramfs...\r\n[ 7.771134] Freeing initrd memory: 4824K (ffff880003916000 - ffff880003dcc000)\r\n[ 7.804770] Scanning for low memory corruption every 60 seconds\r\n[ 7.830448] futex hash table entries: 256 (order: 2, 16384 bytes)\r\n[ 7.864814] audit: initializing netlink subsys 
(disabled)\r\n[ 7.896871] audit: type=2000 audit(1523012029.199:1): initialized\r\n[ 7.925370] Initialise system trusted keyring\r\n[ 7.945474] HugeTLB registered 2 MB page size, pre-allocated 0 pages\r\n[ 7.974619] zbud: loaded\r\n[ 7.990296] VFS: Disk quotas dquot_6.6.0\r\n[ 8.009365] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)\r\n[ 8.037523] fuse init (API version 7.23)\r\n[ 8.057440] Key type big_key registered\r\n[ 8.075906] Allocating IMA MOK and blacklist keyrings.\r\n[ 8.100744] Key type asymmetric registered\r\n[ 8.120465] Asymmetric key parser 'x509' registered\r\n[ 8.141234] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)\r\n[ 8.175621] io scheduler noop registered\r\n[ 8.196625] io scheduler deadline registered (default)\r\n[ 8.229456] io scheduler cfq registered\r\n[ 8.347364] pcieport 0000:00:02.0: Signaling PME through PCIe PME interrupt\r\n[ 8.381856] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt\r\n[ 8.412809] pcieport 0000:00:02.1: Signaling PME through PCIe PME interrupt\r\n[ 8.438259] pci 0000:02:00.0: Signaling PME through PCIe PME interrupt\r\n[ 8.463445] pcieport 0000:00:02.2: Signaling PME through PCIe PME interrupt\r\n[ 8.488781] pci 0000:03:00.0: Signaling PME through PCIe PME interrupt\r\n[ 8.513991] pcieport 0000:00:02.3: Signaling PME through PCIe PME interrupt\r\n[ 8.539951] pci 0000:04:00.0: Signaling PME through PCIe PME interrupt\r\n[ 8.565336] pcieport 0000:00:02.4: Signaling PME through PCIe PME interrupt\r\n[ 8.591320] pci_hotplug: PCI Hot Plug PCI Core version: 0.5\r\n[ 8.613924] pciehp 0000:00:02.0:pcie04: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep-\r\n[ 8.667812] pciehp 0000:00:02.1:pcie04: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep-\r\n[ 8.726473] pciehp 0000:00:02.2:pcie04: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep-\r\n[ 8.783260] pciehp 0000:00:02.3:pcie04: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep-\r\n[ 8.841530] pciehp 0000:00:02.4:pcie04: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep-\r\n[ 8.895007] pciehp: PCI Express Hot Plug Controller Driver version: 0.4\r\n[ 8.921891] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0\r\n[ 8.960126] ACPI: Power Button [PWRF]\r\n[ 8.981035] GHES: HEST is not enabled!\r\n[ 9.052446] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled\r\n[ 9.128361] 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A\r\n[ 9.169515] Linux agpgart interface v0.103\r\n[ 9.213107] brd: module loaded\r\n[ 9.245860] loop: module loaded\r\n[ 9.289892] vda: vda1 vda15\r\n[ 9.339166] libphy: Fixed MDIO Bus: probed\r\n[ 9.359320] tun: Universal TUN/TAP device driver, 1.6\r\n[ 9.381578] tun: (C) 1999-2004 Max Krasnyansky \r\n[ 9.407680] PPP generic driver version 2.4.2\r\n[ 9.428923] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver\r\n[ 9.454917] ehci-pci: EHCI PCI platform driver\r\n[ 9.475536] ehci-platform: EHCI generic platform driver\r\n[ 9.498183] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver\r\n[ 9.523614] ohci-pci: OHCI PCI platform driver\r\n[ 9.544808] ohci-platform: OHCI generic platform driver\r\n[ 9.566556] uhci_hcd: USB Universal Host Controller Interface driver\r\n[ 9.596551] xhci_hcd 0000:01:00.0: xHCI Host Controller\r\n[ 9.622679] xhci_hcd 
0000:01:00.0: new USB bus registered, assigned bus number 1\r\n[ 9.669993] xhci_hcd 0000:01:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x00000014\r\n[ 9.712271] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002\r\n[ 9.737214] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1\r\n[ 9.770711] usb usb1: Product: xHCI Host Controller\r\n[ 9.790873] usb usb1: Manufacturer: Linux 4.4.0-28-generic xhci-hcd\r\n[ 9.814739] usb usb1: SerialNumber: 0000:01:00.0\r\n[ 9.835427] hub 1-0:1.0: USB hub found\r\n[ 9.854982] hub 1-0:1.0: 4 ports detected\r\n[ 9.876894] xhci_hcd 0000:01:00.0: xHCI Host Controller\r\n[ 9.899355] xhci_hcd 0000:01:00.0: new USB bus registered, assigned bus number 2\r\n[ 9.935636] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.\r\n[ 9.973259] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003\r\n[ 9.999618] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1\r\n[ 10.409668] usb usb2: Product: xHCI Host Controller\r\n[ 10.433071] usb usb2: Manufacturer: Linux 4.4.0-28-generic xhci-hcd\r\n[ 10.459993] usb usb2: SerialNumber: 0000:01:00.0\r\n[ 10.482731] hub 2-0:1.0: USB hub found\r\n[ 10.502043] hub 2-0:1.0: 4 ports detected\r\n[ 10.523898] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12\r\n[ 10.569552] serio: i8042 KBD port at 0x60,0x64 irq 1\r\n[ 10.592743] serio: i8042 AUX port at 0x60,0x64 irq 12\r\n[ 10.615693] mousedev: PS/2 mouse device common for all mice\r\n[ 10.645010] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1\r\n[ 10.684699] rtc_cmos 00:00: RTC can wake from S4\r\n[ 10.711081] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0\r\n[ 10.745391] rtc_cmos 00:00: alarms up to one day, y3k, 114 bytes nvram, hpet irqs\r\n[ 10.788816] hpet1: lost 1 rtc interrupts\r\n[ 10.807166] i2c /dev entries driver\r\n[ 10.825063] device-mapper: uevent: version 1.0.3\r\n[ 10.845957] device-mapper: ioctl: 4.34.0-ioctl (2015-10-28) initialised: dm-devel@redhat.com\r\n[ 10.882919] ledtrig-cpu: registered to indicate activity on CPUs\r\n[ 10.909940] NET: Registered protocol family 10\r\n[ 10.932132] NET: Registered protocol family 17\r\n[ 10.959819] Key type dns_resolver registered\r\n[ 10.982610] microcode: CPU0 sig=0x663, pf=0x1, revision=0x1\r\n[ 11.009523] microcode: Microcode Update Driver: v2.01 , Peter Oruba\r\n[ 11.057180] registered taskstats version 1\r\n[ 11.083681] Loading compiled-in X.509 certificates\r\n[ 11.111631] Loaded X.509 cert 'Build time autogenerated kernel key: 6ea974e07bd0b30541f4d838a3b7a8a80d5ca9af'\r\n[ 11.161348] zswap: loaded using pool lzo/zbud\r\n[ 11.210440] Key type trusted registered\r\n[ 11.270637] Key type encrypted registered\r\n[ 11.287191] AppArmor: AppArmor sha1 policy hashing enabled\r\n[ 11.308132] ima: No TPM chip found, activating TPM-bypass!\r\n[ 11.328398] evm: HMAC attrs: 0x1\r\n[ 11.356289] Magic number: 14:834:880\r\n[ 11.373558] rtc_cmos 00:00: setting system clock to 2018-04-06 10:53:58 UTC (1523012038)\r\n[ 11.407156] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found\r\n[ 11.428480] EDD information not available.\r\n[ 11.458327] Freeing unused kernel memory: 1480K (ffffffff81f42000 - ffffffff820b4000)\r\n[ 11.490490] Write protecting the kernel read-only data: 14336k\r\n[ 11.531046] Freeing unused kernel memory: 1860K (ffff88000182f000 - ffff880001a00000)\r\n[ 11.564688] Freeing unused kernel memory: 168K (ffff880001dd6000 - ffff880001e00000)\r\n\r\ninfo: initramfs: up at 
11.70\r\nmodprobe: module virtio_pci not found in modules.dep\r\nmodprobe: module virtio_blk not found in modules.dep\r\nmodprobe: module virtio_net not found in modules.dep\r\n[ 12.022638] ACPI: PCI Interrupt Link [GSIE] enabled at IRQ 20\r\n[ 12.058016] ACPI: PCI Interrupt Link [GSIF] enabled at IRQ 21\r\nmodprobe: module vfat not found in modules.dep\r\nmodprobe: module nls_cp437 not found in modules.dep\r\ninfo: copying initramfs to /dev/vda1\r\ninfo: initramfs loading root from /dev/vda1\r\ninfo: /etc/init.d/rc.sysinit: up at 14.82\r\ninfo: container: none\r\nStarting logging: OK\r\nmodprobe: module virtio_pci not found in modules.dep\r\nmodprobe: module virtio_blk not found in modules.dep\r\nmodprobe: module virtio_net not found in modules.dep\r\nmodprobe: module vfat not found in modules.dep\r\nmodprobe: module nls_cp437 not found in modules.dep\r\nWARN: /etc/rc3.d/S10-load-modules failed\r\nInitializing random number generator... [ 15.801733] random: dd urandom read with 21 bits of entropy available\r\ndone.\r\nStarting acpid: OK\r\nmcb [info=/dev/vdb dev=/dev/vdb target=tmp unmount=true callback=mcu_drop_dev_arg]: mount '/dev/vdb' '-o,ro' '/tmp/nocloud.mp.QkUSGp'\r\nmcudda: fn=cp dev=/dev/vdb mp=/tmp/nocloud.mp.QkUSGp : -a /tmp/cirros-ds.jIMRLx/nocloud/raw\r\nStarting network...\r\nudhcpc (v1.23.2) started\r\nSending discover...\r\nSending select for 10.244.1.14...\r\nLease of 10.244.1.14 obtained, lease time 86313600\r\nTop of dropbear init script\r\nStarting dropbear sshd: OK\r\nGROWROOT: NOCHANGE: partition 1 is size 71647. it cannot be grown\r\n/dev/root resized successfully [took 0.09s]\r\n/run/cirros/datasource/data/user-data was not '#!' or executable\r\n=== system information ===\r\nPlatform: QEMU Standard PC (Q35 + ICH9, 2009)\r\nContainer: none\r\nArch: x86_64\r\nCPU(s): 1 @ 2659.996 MHz\r\nCores/Sockets/Threads: 1/1/1\r\nVirt-type: \r\nRAM Size: 43MB\r\nDisks:\r\nNAME MAJ:MIN SIZE LABEL MOUNTPOINT\r\nvda 253:0 46137344 \r\nvda1 253:1 36683264 cirros-rootfs /\r\nvda15 253:15 8388608 \r\nvdb 253:16 374784 cidata \r\n=== sshd host keys ===\r\n-----BEGIN SSH HOST KEY KEYS-----\r\nssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCzu3lpKD5j0KQrczZzuQX8nlfnx9mCAWwDviERB1ehQobzxXnfLkCfMKAI0Y72GJArgUzU62Rhk3xVk5PMv7+sUm8TX74HBMPqjN0ICcQLVhcAelSEwGUzXLDpNPlhj8WYVpTt6hLrEgrUvdjxLKwKy2OfemiO9nfEEcmTf9EJoRh9Hwl2Uj8gU0Ae40EiBaKwudI4k8pzWCacSNvHKucFXru9JuURJlV0Bkv4aLNxVYFOTD78XiUZN5som8jLbxnMpP3TE6RxnTTnrUhl9muj001repdgSHUBE33D7rVWZAHt7bF7JoBpGqI20h4NGxHDArcWGkfatRlrcfeq7FPD root@testvmvx4jr\r\nssh-dss AAAAB3NzaC1kc3MAAACBAJA+zGKXEShywvat8MczoN74Ys9WB2KpWfx6gVm+yz6uQedaXTsWe5hTq8TUgYewW3oSST36gNlRV4cjtOUkEjK9TKxsi8RYcv60GvGgMJoaORtJZkDqgbfFOJ8kXQ34ei6v/4ZPzkT6nxpb/AMqYYiKdxVMrGSY6hQKEmBNf9/rAAAAFQCDW6m4EmKCnd2K0Z4m1kb12KOytwAAAIANwkBOH9TPqFKEJ1uFahbOqdzAcD6LDmYVy729FPNqqJOwyN0fFFvRKG/bFJOEDlY/LUZnxa7i4zYtMlND99zIsvh7N40a+nsS0TDS7fibQpWE2bwuGTVwz7TFm3hWRXM+1k9dLkffb5DJpnHKxO1E+Vg8szI2IkEynLcSMItzqgAAAIEAi3iGH8rFhwGiwImfKv7Emy9FuQ7API6/ouWgB3iYw/UkvdqiV1TYmVP2gIeGl1OifiQaJO6HRP86sVnlvV0f0na4WhUfhZ1M67x8hVySwzex54ys07+ppcb6ifQBmdQaed8yI/IC91gGYnHIXflw2Haxk3aR8RMvNwPmMjH86xk= root@testvmvx4jr\r\n-----END SSH HOST KEY KEYS-----\r\n=== network info ===\r\nif-info: lo,up,127.0.0.1,8,,\r\nif-info: eth0,up,10.244.1.14,24,fe80::858:aff:fef4:10e/64,\r\nip-route:default via 10.244.1.1 dev eth0 \r\nip-route:10.244.0.0/16 via 10.244.1.1 dev eth0 \r\nip-route:10.244.1.0/24 dev eth0 src 10.244.1.14 \r\nip-route6:fe80::/64 dev eth0 metric 256 \r\nip-route6:unreachable default dev lo metric -1 error 
-101\r\nip-route6:ff00::/8 dev eth0 metric 256 \r\nip-route6:unreachable default dev lo metric -1 error -101\r\n=== datasource: nocloud local ===\r\ninstance-id: testvmvx4jr.kubevirt-test-default\r\nname: N/A\r\navailability-zone: N/A\r\nlocal-hostname: testvmvx4jr\r\nlaunch-index: N/A\r\n=== cirros: current=0.4.0 latest=0.4.0 uptime=21.40 ===\r\n ____ ____ ____\r\n / __/ __ ____ ____ / __ \\/ __/\r\n/ /__ / // __// __// /_/ /\\ \\ \r\n\\___//_//_/ /_/ \\____/___/ \r\n http://cirros-cloud.net\r\n\r\n\r\r\nlogin as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root. [login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.]} {4 \r\n\rtestvmvx4jr login: [testvmvx4jr login:]} {6 \r\r\nlogin as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.\r\n\rtestvmvx4jr login: cirros\r\nPassword: [Password:]} {8 []}]" STEP: Guest shutdown STEP: Testing the VM is not running STEP: OVM should run the VM again STEP: Getting the running VM • Failure [296.115 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:45 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:56 should survive guest shutdown, multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:273 Timed out after 240.000s. Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:286 ------------------------------ 2018/04/06 06:58:31 Error: VirtualMachine 'testvmqrdgz' is already running • [SLOW TEST:17.178 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:45 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:56 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:311 should start a VM once /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:312 ------------------------------ 2018/04/06 06:58:31 Error: VirtualMachine 'testvmtk4n8' is already stopped • ------------------------------ • [SLOW TEST:15.525 seconds] VNC /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:35 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46 with VNC connection /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:47 should allow accessing the VNC device /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:48 ------------------------------ • [SLOW TEST:40.921 seconds] LeaderElection /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43 Start a VM /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53 when the controller pod is not running /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54 should success /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55 ------------------------------ • [SLOW TEST:51.626 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:79 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80 should have cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:81 ------------------------------ • [SLOW TEST:163.353 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:79 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80 with injected ssh-key 
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:92 should have ssh-key under authorized keys /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:93 ------------------------------ • [SLOW TEST:57.320 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:79 with cloudInitNoCloud userData source /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:116 should process provided cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:117 ------------------------------ • [SLOW TEST:51.885 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:79 should take user-data from k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:160 ------------------------------ • [SLOW TEST:61.729 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:37 A VM with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:56 should be shut down when the watchdog expires /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:57 ------------------------------ • ------------------------------ • [SLOW TEST:16.490 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Creating a VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55 should start it /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:61 ------------------------------ • [SLOW TEST:16.037 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Creating a VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55 should attach virt-launcher to it /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:69 ------------------------------ • ------------------------------ • [SLOW TEST:15.257 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Creating a VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:99 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:100 should retry starting the VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:101 ------------------------------ • [SLOW TEST:17.998 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Creating a VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:99 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:100 should log warning and proceed once the secret is there /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:132 ------------------------------ • [SLOW TEST:56.559 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Creating a VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55 when virt-launcher crashes /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:180 should be stopped and have Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:181 ------------------------------ • [SLOW TEST:36.068 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Creating a VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55 when virt-handler crashes 
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:210 should recover and continue management /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:211 ------------------------------ S [SKIPPING] [0.090 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Creating a VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:247 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-default [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:252 ------------------------------ S [SKIPPING] [0.061 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Creating a VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:247 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-alternative [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:252 ------------------------------ • ------------------------------ • [SLOW TEST:22.627 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Delete a VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:321 with grace period greater than 0 /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:322 should run graceful shutdown /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:323 ------------------------------ • [SLOW TEST:30.122 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Killed VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:376 should be in Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:377 ------------------------------ • [SLOW TEST:25.400 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Killed VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:376 should be left alone by virt-handler /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:405 ------------------------------ STEP: Starting the VM level=info timestamp=2018-04-06T11:10:06.378475Z pos=utils.go:197 component=tests msg="VM defined." level=info timestamp=2018-04-06T11:10:06.384493Z pos=utils.go:197 component=tests msg="VM started." STEP: Stopping the VM level=info timestamp=2018-04-06T11:10:06.423843Z pos=utils.go:208 component=tests msg="VM defined." level=info timestamp=2018-04-06T11:10:06.423888Z pos=utils.go:208 component=tests msg="VM started." level=info timestamp=2018-04-06T11:10:06.455002Z pos=utils.go:208 component=tests msg="VM defined." level=info timestamp=2018-04-06T11:10:06.613909Z pos=utils.go:208 component=tests msg="VM stopping" STEP: Starting the VM level=info timestamp=2018-04-06T11:10:06.753383Z pos=utils.go:197 component=tests msg="VM defined." level=info timestamp=2018-04-06T11:10:06.753520Z pos=utils.go:197 component=tests msg="VM started." •... 
Timeout [140.077 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:42 Starting and stopping the same VM /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92 should success multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:93 Timed out /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:93 ------------------------------ STEP: Starting the VM level=info timestamp=2018-04-06T11:12:27.707130Z pos=utils.go:197 component=tests msg="VM defined." level=info timestamp=2018-04-06T11:12:27.709373Z pos=utils.go:197 component=tests msg="VM started." STEP: Checking that the VM spec did not change • Failure [16.963 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:42 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113 should not modify the spec on status update [It] /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:114 Expected error: <*errors.StatusError | 0xc4217f7680>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "virtualmachines.kubevirt.io \"testvmlwtvq\" not found", Reason: "NotFound", Details: { Name: "testvmlwtvq", Group: "kubevirt.io", Kind: "virtualmachines", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 404, }, } virtualmachines.kubevirt.io "testvmlwtvq" not found not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/utils.go:729 ------------------------------ • [SLOW TEST:24.860 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:42 Starting multiple VMs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131 should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:132 ------------------------------ • [SLOW TEST:49.863 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:34 A new VM /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a cirros image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should return that we are running cirros /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67 ------------------------------ • [SLOW TEST:67.106 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:34 A new VM /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a fedora image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:75 should return that we are running fedora /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76 ------------------------------ • [SLOW TEST:53.487 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:34 A new VM /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 should be able to reconnect to console multiple times /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:84 ------------------------------ S [SKIPPING] [55.512 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:46 
VirtualMachine with nodeNetwork definition given /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:108 should be able to reach the internet [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:110 Skip network test that requires DNS resolution in Jenkins environment /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:113 ------------------------------ • ------------------------------ • [SLOW TEST:5.035 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:46 VirtualMachine with nodeNetwork definition given /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:108 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on a different node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ ••• ------------------------------ • [SLOW TEST:5.679 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:41 should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 to four, to six and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:27.233 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:41 should update readyReplicas once VMs are up /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:102 ------------------------------ •••• ------------------------------ • [SLOW TEST:64.561 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:38 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:118 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:119 should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:120 ------------------------------ • [SLOW TEST:113.149 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:38 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:118 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:119 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:141 ------------------------------ • [SLOW TEST:54.230 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:38 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:118 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:174 should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:176 ------------------------------ • [SLOW TEST:109.993 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:38 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:118 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:174 should not persist data /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:197 ------------------------------ • [SLOW TEST:257.989 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:38 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:118 With VM with two PVCs /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:257 should start vm multiple times /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:269 ------------------------------ 
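Three specs fail in this run: the OfflineVirtualMachine guest-shutdown case and the two RegistryDisk ephemeral-disk cases, summarized below. If the RegistryDisk group needs to be reproduced in isolation, one option is Ginkgo's focus flag; the snippet is only a suggestion and assumes FUNC_TEST_ARGS reaches the compiled tests.test binary the same way --ginkgo.noColor did above, which the trace implies but does not show.

    # Hypothetical focused re-run of the failing RegistryDisk specs
    # (assumes FUNC_TEST_ARGS is passed through to the Ginkgo runner).
    export FUNC_TEST_ARGS='--ginkgo.noColor --ginkgo.focus=RegistryDisk'
    make functest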
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...

Summarizing 3 Failures:

[Fail] OfflineVirtualMachine A valid OfflineVirtualMachine given [It] should survive guest shutdown, multiple times
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:286

[Timeout...] RegistryDisk Starting and stopping the same VM with ephemeral registry disk [It] should success multiple times
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:93

[Fail] RegistryDisk Starting a VM with ephemeral registry disk [It] should not modify the spec on status update
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:729

Ran 60 of 63 Specs in 2495.933 seconds
FAIL! -- 57 Passed | 3 Failed | 0 Pending | 3 Skipped
--- FAIL: TestTests (2495.93s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
1de32fe35752
1a5203ecfbac
d10535fd984c
0f0afad48723
1de32fe35752
1a5203ecfbac
d10535fd984c
0f0afad48723
kubevirt-functional-tests-vagrant-release0-node01
kubevirt-functional-tests-vagrant-release0-node02
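The run ends by waiting for the two test namespaces to be deleted and then tearing the provider down; the container IDs printed twice above are most likely the stop and remove phases of cluster/down.sh acting on the provider containers. A minimal sketch of the namespace-cleanup wait, with the namespace names taken from the log and the polling interval chosen arbitrarily:

    # Sketch of the post-test namespace cleanup wait (namespace names from the
    # log; the 5 s interval is an arbitrary choice).
    for ns in kubevirt-test-default kubevirt-test-alternative; do
        echo "Waiting for namespace $ns to be removed, this can take a while ..."
        while cluster/kubectl.sh get namespace "$ns" >/dev/null 2>&1; do
            sleep 5
        done
    done
    make cluster-down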