+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release
+ [[ vagrant-release =~ openshift-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.9.3
+ KUBEVIRT_PROVIDER=k8s-1.9.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/05/31 17:14:28 Waiting for host: 192.168.66.101:22
2018/05/31 17:14:31 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/05/31 17:14:43 Connected to tcp://192.168.66.101:22
+ cat
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 27.006287 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --discovery-token-ca-cert-hash sha256:9cb154acfc7f6624cab8fa506ecee5197f399364349adb2a9dd7ffc2e558e3ee
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/05/31 17:15:24 Waiting for host: 192.168.66.102:22
2018/05/31 17:15:27 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/05/31 17:15:39 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 48668048 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep -v Ready
+ '[' -n '' ']'
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS     ROLES     AGE       VERSION
node01    NotReady   master    38s       v1.9.3
node02    NotReady   <none>    13s       v1.9.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:35549/kubevirt/virt-controller:devel
Untagged: localhost:35549/kubevirt/virt-controller@sha256:5dcf24b68a9d2ad9bc8f55d120432a6cb0d5f16e12355a29328263fa96967dc9
Deleted: sha256:5b0857615737937deff571bd69446bba3c0ac08cf32e6f8423afbc5a14f0997d
Deleted: sha256:1303fe1693eb4f82a52cc39f40a1d82bdb96cd08e7f84101c5704b8c2b50458a
Deleted: sha256:ddc5239bd1b3d3181744b5b1fb0fc682f982ca663b906d31f3ecee6570162558
Deleted: sha256:4d8f90370b24ef9a0b6e5ed678d771f0afcbb3c34d6a197852475aa07485cabd
Untagged: localhost:35549/kubevirt/virt-launcher:devel
Untagged: localhost:35549/kubevirt/virt-launcher@sha256:bd593b8ebe990c4ad9531528dcb9a6f1d602ea6a6cfb75fc38fc86fb9d2f9648
Deleted: sha256:d5b9a595edef9f1f5a28d88730d842f8320ac815bd64c11921052ca7ed09bf30
Deleted: sha256:81772f5cb1e4e02bd80288874d5565d742c3b853e3bb5eca15597146b8dc0421
Deleted: sha256:0f2f64c2dae90c871a43689c794bec75df6124f3bf36d0553e63aadd6ae59040
Deleted: sha256:5ce950397b56308c1d6aa17192c044f4fc90eeb5b1e77d7d895ecfead6b4f34d
Deleted: sha256:396547c81634b20261b9188472836bc21a339bbf54bbbc760e84a2e9182a89fd
Deleted: sha256:d60d0383d3c981a79f3493a1166518a05abd3ac68613157756c80e99695532ff
Deleted: sha256:b1e9ed9a0ba8b6c9bd43cf948000f0539892c5ac66aa748d63f55de954b3f9c1
Deleted: sha256:b7bd990fd2da2806228121e1d328c95e3e7c8950db18bf3bceb446f522cf4343
Deleted: sha256:ab397aadd38ad605c5dd28f94ed2b3ea90057cb22e64326eecf87fdc58988943
Deleted: sha256:2f26da2c99b2066e42d16722744a2a41e2bbb4d5eed68a941986c10f46e5276a
Deleted: sha256:c74ebaa0e79540a8adb2d9231a0b1f3011623f873e190712d2d902125b5910d0
Deleted: sha256:96bcd04cb4f7c23a5e0ca587c91a14c4ddfb78e0712bd744611883797bfd69b2
Deleted: sha256:7165f603e9bfeb19a79c66b50971cce86113d0cee41b1067aea360327263dfc5
Deleted: sha256:2df1123615099f1a24c4c4d06b8565b0e1ca4b86a4f47a503523ba018b6e664d
Deleted: sha256:b7e948723ec9a15cd97fc6d72ecfdd93f7cc970ba396404b2f76c16e90b108c1
Deleted: sha256:f3072b40ee1587962376fa3e7d830cde511206bb9848f9f379d83cea9e932269
Untagged: localhost:35549/kubevirt/virt-handler:devel
Untagged: localhost:35549/kubevirt/virt-handler@sha256:c06309c10fd647fbb3b803651e88f78e40d423ae141c896cb72a2acab218d3a7
Deleted: sha256:ae6ed3aadeb2cd364335489bc284aef230be32884cc9509a344c9ba1f6a9b6a5
Deleted: sha256:d1d03b6ad3bae71108d11928adc27ed93a38d6438a0ba34556ba049423658c62
Deleted: sha256:a96e5c32c243b4142b100e8927a711a49b745773878d5687b2cbba2bb4b97afb
Deleted: sha256:69fce7e532f517468416067c6fcd8a03a832a3d7fc58ea9cf001a0228a156a9b
Untagged: localhost:35549/kubevirt/virt-api:devel
Untagged: localhost:35549/kubevirt/virt-api@sha256:9ed5df1ce5620e82b931eb68b2c2567c4883849c5dfa7d59896cd4860ac19b3f
Deleted: sha256:94a2d7b31839d8a4ef6357fa07ffe3ea93b8f89973dff6134150e9cd856ec45d
Deleted: sha256:ba81695bb7bbbfd3d07d192123c5e96459683c4ad6e2ec50b72687e3ac6f702a
Deleted: sha256:3d112ac3ffdb878153fc5c46c4a6c95e1df45db013d4c7414bed3995a3d11947
Deleted: sha256:26b44ba4ec418d309fc31c6c50f3835cc543cd28c3404a001ef330107025e639
Untagged: localhost:35549/kubevirt/subresource-access-test:devel
Untagged: localhost:35549/kubevirt/subresource-access-test@sha256:fe056acd4f3eb142b096a1db7ca55d7abfbc5e797b0855d533a6e9731c5be639
Deleted: sha256:98adbd124fdb3a7367ba9cba2dc8a7e4ea0c1ec3aaa1173af5ffdbe7ad8b4749
Deleted: sha256:f9650c6c971a040e59a6162e44ded7e16a7c4b27f37eef200baf65479f9a7100
Deleted: sha256:4ee319da0b63ef71bf389b171888973c20c9139ea644b021f2a22241d9c4d6e5
Deleted: sha256:724c7044a7c6cfaecb37261ccabe6defea5271b997162cb3954c462e2334b607
sha256:407eaf2c96939fd4b9af031ea33c13bf9da43c98d06f62e57afeca526fcb3c1f
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:407eaf2c96939fd4b9af031ea33c13bf9da43c98d06f62e57afeca526fcb3c1f
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 36.14 MB
Step 1/8 : FROM fedora:27
---> 9110ae7f579f
Step 2/8 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
---> Using cache
---> a96d7b80d8b6
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
---> Using cache
---> 5c7d576d7c73
Step 4/8 : WORKDIR /home/virt-controller
---> Using cache
---> 83ec280c04c4
Step 5/8 : USER 1001
---> Using cache
---> 92b648073fa2
Step 6/8 : COPY virt-controller /virt-controller
---> ba132f7f1407
Removing intermediate container 99fcd601a012
Step 7/8 : ENTRYPOINT /virt-controller
---> Running in 4e4aeb961bbd
---> 123573ecae61
Removing intermediate container 4e4aeb961bbd
Step 8/8 : LABEL "kubevirt-functional-tests-vagrant-release1" '' "virt-controller" ''
---> Running in d1d309bf9224
---> c5dcd6e35f8b
Removing intermediate container d1d309bf9224
Successfully built c5dcd6e35f8b
Sending build context to Docker daemon 38.08 MB
Step 1/14 : FROM kubevirt/libvirt:3.7.0
---> 60c80c8f7523
Step 2/14 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
---> Using cache
---> 0b7dc10e33a1
Step 3/14 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
---> Using cache
---> 2a498f25374d
Step 4/14 : COPY sock-connector /sock-connector
---> Using cache
---> 366634d1d57f
Step 5/14 : COPY sh.sh /sh.sh
---> Using cache
---> 266e53471c6c
Step 6/14 : COPY virt-launcher /virt-launcher
---> 33a6e4048ab6
Removing intermediate container cd705b6b7365
Step 7/14 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt
---> 389f74506fd9
Removing intermediate container 680ddc592f3b
Step 8/14 : RUN chmod 0640 /etc/sudoers.d/kubevirt
---> Running in 20dab942189f
 ---> e73dfeb31115
Removing intermediate container 20dab942189f
Step 9/14 : RUN rm -f /libvirtd.sh
---> Running in dbfd7030221f
 ---> 119edcfed42e
Removing intermediate container dbfd7030221f
Step 10/14 : COPY libvirtd.sh /libvirtd.sh
---> 3da89d170ccb
Removing intermediate container d68f3b5bde5b
Step 11/14 : RUN chmod a+x /libvirtd.sh
---> Running in 85766da4d0fc
 ---> d55112558004
Removing intermediate container 85766da4d0fc
Step 12/14 : COPY entrypoint.sh /entrypoint.sh
---> be99131c6c3c
Removing intermediate container 36a5a09a8a1b
Step 13/14 : ENTRYPOINT /entrypoint.sh
---> Running in cc4ec3dc234d
---> 16e70b64f93e
Removing intermediate container cc4ec3dc234d
Step 14/14 : LABEL "kubevirt-functional-tests-vagrant-release1" '' "virt-launcher" ''
---> Running in ca25c33b3d88
---> ed0c887ba35a
Removing intermediate container ca25c33b3d88
Successfully built ed0c887ba35a
Sending build context to Docker daemon 36.7 MB
Step 1/5 : FROM fedora:27
---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
---> Using cache
---> a96d7b80d8b6
Step 3/5 : COPY virt-handler /virt-handler
---> 5fa2ca6da21f
Removing intermediate container 7a9b041ee8f6
Step 4/5 : ENTRYPOINT /virt-handler
---> Running in 16b8d7369c92
---> 06a819304661
Removing intermediate container 16b8d7369c92
Step 5/5 : LABEL "kubevirt-functional-tests-vagrant-release1" '' "virt-handler" ''
---> Running in ed6788d8cae1
---> 9973124556fa
Removing intermediate container ed6788d8cae1
Successfully built 9973124556fa
Sending build context to Docker daemon 36.86 MB
Step 1/8 : FROM fedora:27
---> 9110ae7f579f
Step 2/8 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
---> Using cache
---> a96d7b80d8b6
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api
---> Using cache
---> 1ee495c45665
Step 4/8 : WORKDIR /home/virt-api
---> Using cache
---> d5d529a63aa5
Step 5/8 : USER 1001
---> Using cache
---> b8cd6b01e5a1
Step 6/8 : COPY virt-api /virt-api
---> 6015f6c899c8
Removing intermediate container 9ef3636d48f2
Step 7/8 : ENTRYPOINT /virt-api
---> Running in cd9f8d265db6
---> 53a17ad61217
Removing intermediate container cd9f8d265db6
Step 8/8 : LABEL "kubevirt-functional-tests-vagrant-release1" '' "virt-api" ''
---> Running in 277554a0da41
---> 839996b8829c
Removing intermediate container 277554a0da41
Successfully built 839996b8829c
Sending build context to Docker daemon 6.656 kB
Step 1/10 : FROM fedora:27
---> 9110ae7f579f
Step 2/10 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
---> Using cache
---> a96d7b80d8b6
Step 3/10 : ENV container docker
---> Using cache
---> cc783cf25db1
Step 4/10 : RUN dnf -y install scsi-target-utils bzip2 e2fsprogs
---> Using cache
---> 93ffafb8b01d
Step 5/10 : RUN mkdir -p /images
---> Using cache
---> f6ebd00da1a0
Step 6/10 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/1-alpine.img
---> Using cache
---> 7284b3199367
Step 7/10 : ADD run-tgt.sh /
---> Using cache
---> a47d3f2e650a
Step 8/10 : EXPOSE 3260
---> Using cache
---> 970bd1025fac
Step 9/10 : CMD /run-tgt.sh
---> Using cache
---> 3cca5a9b00f8
Step 10/10 : LABEL "iscsi-demo-target-tgtd" '' "kubevirt-functional-tests-vagrant-release1" ''
---> Using cache
---> 1ae72a67896e
Successfully built 1ae72a67896e
Sending build context to Docker daemon 2.56 kB
Step 1/5 : FROM fedora:27
---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
---> Using cache
---> a96d7b80d8b6
Step 3/5 : ENV container docker
---> Using cache
---> cc783cf25db1
Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all
---> Using cache
---> f43092ff797b
Step 5/5 : LABEL "kubevirt-functional-tests-vagrant-release1" '' "vm-killer" ''
---> Using cache
---> 43e91c8d31a9
Successfully built 43e91c8d31a9
Sending build context to Docker daemon 5.12 kB
Step 1/7 : FROM debian:sid
---> bcec0ae8107e
Step 2/7 : MAINTAINER "David Vossel" \<dvossel@redhat.com\>
---> Using cache
---> eb2ecba9d79d
Step 3/7 : ENV container docker
---> Using cache
---> 7c8d23462894
Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 1121e08529fa
Step 5/7 : ADD entry-point.sh /
---> Using cache
---> 1e9b22eccc69
Step 6/7 : CMD /entry-point.sh
---> Using cache
---> 918eb49e60d7
Step 7/7 : LABEL "kubevirt-functional-tests-vagrant-release1" '' "registry-disk-v1alpha" ''
---> Using cache
---> 1c5ef3d30805
Successfully built 1c5ef3d30805
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:35867/kubevirt/registry-disk-v1alpha:devel
---> 1c5ef3d30805
Step 2/4 : MAINTAINER "David Vossel" \<dvossel@redhat.com\>
---> Using cache
---> 6d3fcc00c759
Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img
---> Using cache
---> ee115049278a
Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-vagrant-release1" ''
---> Using cache
---> 3ec5bc903c85
Successfully built 3ec5bc903c85
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:35867/kubevirt/registry-disk-v1alpha:devel
---> 1c5ef3d30805
Step 2/4 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
---> Using cache
---> c68228e2d1c7
Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2
---> Using cache
---> 906e9b5449da
Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-vagrant-release1" ''
---> Using cache
---> 2692b030a1a6
Successfully built 2692b030a1a6
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:35867/kubevirt/registry-disk-v1alpha:devel
---> 1c5ef3d30805
Step 2/4 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
---> Using cache
---> c68228e2d1c7
Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso
---> Using cache
---> d999298b0f25
Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-vagrant-release1" ''
---> Using cache
---> 353916f86329
Successfully built 353916f86329
Sending build context to Docker daemon 33.97 MB
Step 1/8 : FROM fedora:27
---> 9110ae7f579f
Step 2/8 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
---> Using cache
---> a96d7b80d8b6
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl
---> Using cache
---> a93c2ef4d06c
Step 4/8 : WORKDIR /home/virtctl
---> Using cache
---> b3278975ff14
Step 5/8 : USER 1001
---> Using cache
---> 7b9c3f06521e
Step 6/8 : COPY subresource-access-test /subresource-access-test
---> 87eee3ae70a8
Removing intermediate container 6df9584bb76e
Step 7/8 : ENTRYPOINT /subresource-access-test
---> Running in 453b2343c1c0
---> 1c5400ac2ded
Removing intermediate container 453b2343c1c0
Step 8/8 : LABEL "kubevirt-functional-tests-vagrant-release1" '' "subresource-access-test" ''
---> Running in 6081b6ec4948
---> 60d2616a2288
Removing intermediate container 6081b6ec4948
Successfully built 60d2616a2288
Sending build context to Docker daemon 3.072 kB
Step 1/9 : FROM fedora:27
---> 9110ae7f579f
Step 2/9 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
---> Using cache
---> a96d7b80d8b6
Step 3/9 : ENV container docker
---> Using cache
---> cc783cf25db1
Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all
---> Using cache
---> 1f969d60dcdb
Step 5/9 : ENV GIMME_GO_VERSION 1.9.2
---> Using cache
---> ec50d6cdb417
Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
---> Using cache
---> 481568cf019c
Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin"
---> Using cache
---> 8d12f44cea40
Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli
---> Using cache
---> 5f29a8914a5a
Step 9/9 : LABEL "kubevirt-functional-tests-vagrant-release1" '' "winrmcli" ''
---> Using cache
---> fba482dc8675
Successfully built fba482dc8675
hack/build-docker.sh push
The push refers to a repository [localhost:35867/kubevirt/virt-controller]
673af7940ceb: Preparing
711968c63dc4: Preparing
39bae602f753: Preparing
711968c63dc4: Pushed
673af7940ceb: Pushed
39bae602f753: Pushed
devel: digest: sha256:30e7fae3966862d43957a1c5dc4b1a0d12c125f428302dcbf789289620ab78c7 size: 948
The push refers to a repository [localhost:35867/kubevirt/virt-launcher]
2ac77151d505: Preparing
fa5acc0782f8: Preparing
fa5acc0782f8: Preparing
f52cd65d1725: Preparing
307b4e92567f: Preparing
e2703f52196a: Preparing
0cf87f81b863: Preparing
2a7c473fb9e1: Preparing
2ec5580bb244: Preparing
0cf87f81b863: Waiting
f5ef62c84c4d: Preparing
2a7c473fb9e1: Waiting
2ec5580bb244: Waiting
f5ef62c84c4d: Waiting
530cc55618cd: Preparing
34fa414dfdf6: Preparing
a1359dc556dd: Preparing
34fa414dfdf6: Waiting
530cc55618cd: Waiting
490c7c373332: Preparing
4b440db36f72: Preparing
a1359dc556dd: Waiting
39bae602f753: Preparing
490c7c373332: Waiting
39bae602f753: Waiting
f52cd65d1725: Pushed
307b4e92567f: Pushed
fa5acc0782f8: Pushed
e2703f52196a: Pushed
2ac77151d505: Pushed
530cc55618cd: Pushed
2a7c473fb9e1: Pushed
2ec5580bb244: Pushed
34fa414dfdf6: Pushed
490c7c373332: Pushed
a1359dc556dd: Pushed
39bae602f753: Mounted from kubevirt/virt-controller
0cf87f81b863: Pushed
f5ef62c84c4d: Pushed
4b440db36f72: Pushed
devel: digest: sha256:6d2691371d0b5c40bea9630e16e9eb38a8d01c779f969242bd445bd3067435b5 size: 3653
The push refers to a repository [localhost:35867/kubevirt/virt-handler]
de97d1227a39: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/virt-launcher
de97d1227a39: Pushed
devel: digest: sha256:1708e68d07b9313a4ecee5204b79d0a557f97793dae47b640a1b73ed236a597b size: 740
The push refers to a repository [localhost:35867/kubevirt/virt-api]
b53320748e1c: Preparing
53839c3b2a5a: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/virt-handler
53839c3b2a5a: Pushed
b53320748e1c: Pushed
devel: digest: sha256:935eb852cc92b02336191d67ef6755d4b29a6cb8d4f311b665eae54a4e3bdc7f size: 948
The push refers to a repository [localhost:35867/kubevirt/iscsi-demo-target-tgtd]
3d850ecbc9a5: Preparing
e4d54b2824d0: Preparing
c83b591c37b8: Preparing
1160d44f1b30: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/virt-api
3d850ecbc9a5: Pushed
c83b591c37b8: Pushed
e4d54b2824d0: Pushed
1160d44f1b30: Pushed
devel: digest: sha256:0d6240b5f4332a1c33e73849ae482cb4d6b0a036585b90621564ef52e7300b79 size: 1368
The push refers to a repository [localhost:35867/kubevirt/vm-killer]
151ffba76ca1: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/iscsi-demo-target-tgtd
151ffba76ca1: Pushed
devel: digest: sha256:a3c5d0fa6b6ec3f59dc1a23d95f3542f112c244fceca6ebe9a2d545fc2541620 size: 740
The push refers to a repository [localhost:35867/kubevirt/registry-disk-v1alpha]
780c7b8dc263: Preparing
9e4c3ba110cf: Preparing
6709b2da72b8: Preparing
780c7b8dc263: Pushed
9e4c3ba110cf: Pushed
6709b2da72b8: Pushed
devel: digest: sha256:9b6970f6978c83b3dc74f26111b36321af60333baf37442c12662f607764165a size: 948
The push refers to a repository [localhost:35867/kubevirt/cirros-registry-disk-demo]
d2935c098a0e: Preparing
780c7b8dc263: Preparing
9e4c3ba110cf: Preparing
6709b2da72b8: Preparing
9e4c3ba110cf: Mounted from kubevirt/registry-disk-v1alpha
6709b2da72b8: Mounted from kubevirt/registry-disk-v1alpha
780c7b8dc263: Mounted from kubevirt/registry-disk-v1alpha
d2935c098a0e: Pushed
devel: digest: sha256:d077a39b4795f214c6f2ab9cb00e05e7b0ac5f9fee96b779448db7016538da1e size: 1160
The push refers to a repository [localhost:35867/kubevirt/fedora-cloud-registry-disk-demo]
69d15c635ccb: Preparing
780c7b8dc263: Preparing
9e4c3ba110cf: Preparing
6709b2da72b8: Preparing
6709b2da72b8: Mounted from kubevirt/cirros-registry-disk-demo
9e4c3ba110cf: Mounted from kubevirt/cirros-registry-disk-demo
780c7b8dc263: Mounted from kubevirt/cirros-registry-disk-demo
69d15c635ccb: Pushed
devel: digest: sha256:6a95535dedef37a221f90096b7f5ba555c14a9e44c95ab38b92f7ba00d6c57c2 size: 1161
The push refers to a repository [localhost:35867/kubevirt/alpine-registry-disk-demo]
fa46da7b108c: Preparing
780c7b8dc263: Preparing
9e4c3ba110cf: Preparing
6709b2da72b8: Preparing
9e4c3ba110cf: Mounted from kubevirt/fedora-cloud-registry-disk-demo
780c7b8dc263: Mounted from kubevirt/fedora-cloud-registry-disk-demo
6709b2da72b8: Mounted from kubevirt/fedora-cloud-registry-disk-demo
fa46da7b108c: Pushed
devel: digest: sha256:135d1a6aac1d500dc34603cfd2ebbfd81a377f2fc05ebfdb5c3e6444f3f55c28 size: 1160
The push refers to a repository [localhost:35867/kubevirt/subresource-access-test]
96fa8cefafd4: Preparing
d583c2eb3ac0: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/vm-killer
d583c2eb3ac0: Pushed
96fa8cefafd4: Pushed
devel: digest: sha256:8d33ddb4d0b4dac6bffe58fa556a05679183c2bf646641bd5364eafc5c861959 size: 948
The push refers to a repository [localhost:35867/kubevirt/winrmcli]
3658db2c75ba: Preparing
7a99a4697526: Preparing
8146dcce8c7a: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/subresource-access-test
3658db2c75ba: Pushed
8146dcce8c7a: Pushed
7a99a4697526: Pushed
devel: digest: sha256:af76171dfde15c142952d37935ff97b1f95bd1d9b314e6df5f112f42972e67cc size: 1165
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt'
Done
./cluster/clean.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.9.3
++ KUBEVIRT_PROVIDER=k8s-1.9.3
++ KUBEVIRT_NUM_NODES=2
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-vagrant-release ']'
++ provider_prefix=kubevirt-functional-tests-vagrant-release1
++ job_prefix=kubevirt-functional-tests-vagrant-release1
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.5.1-alpha.2-42-ge0a60e2
++ KUBEVIRT_VERSION=v0.5.1-alpha.2-42-ge0a60e2
+ source cluster/k8s-1.9.3/provider.sh
++ set -e
++ image=k8s-1.9.3@sha256:265ccfeeb0352a87141d4f0f041fa8cc6409b82fe3456622f4c549ec1bfe65c0
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.9.3
++ KUBEVIRT_PROVIDER=k8s-1.9.3
++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ kubeconfig=cluster/vagrant/.kubeconfig
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.9.3.sh
++ source hack/config-provider-k8s-1.9.3.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubectl
+++ docker_prefix=localhost:35867/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Cleaning up ...'
Cleaning up ...
+ cluster/kubectl.sh get vms --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers
+ grep foregroundDeleteVirtualMachine
+ read p
the server doesn't have a resource type "vms"
+ _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0
No resources found
+ namespaces=(default ${namespace})
+ for i in '${namespaces[@]}'
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete deployment -l kubevirt.io
No resources found
+ _kubectl -n default delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete rs -l kubevirt.io
No resources found
+ _kubectl -n default delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete services -l kubevirt.io
No resources found
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n default delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete secrets -l kubevirt.io
No resources found
+ _kubectl -n default delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete pv -l kubevirt.io
No resources found
+ _kubectl -n default delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete pvc -l kubevirt.io
No resources found
+ _kubectl -n default delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete ds -l kubevirt.io
No resources found
+ _kubectl -n default delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n default delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete pods -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete roles -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n default delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n default delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io
++ wc -l
++ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
++ cluster/k8s-1.9.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ for i in '${namespaces[@]}'
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete deployment -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete rs -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete services -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete secrets -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete pv -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete pvc -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete ds -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete pods -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete roles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
++ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
++ wc -l
++ cluster/k8s-1.9.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ sleep 2
+ echo Done
Done
./cluster/deploy.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.9.3
++ KUBEVIRT_PROVIDER=k8s-1.9.3
++ KUBEVIRT_NUM_NODES=2
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-vagrant-release ']'
++ provider_prefix=kubevirt-functional-tests-vagrant-release1
++ job_prefix=kubevirt-functional-tests-vagrant-release1
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.5.1-alpha.2-42-ge0a60e2
++ KUBEVIRT_VERSION=v0.5.1-alpha.2-42-ge0a60e2
+ source cluster/k8s-1.9.3/provider.sh
++ set -e
++ image=k8s-1.9.3@sha256:265ccfeeb0352a87141d4f0f041fa8cc6409b82fe3456622f4c549ec1bfe65c0
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.9.3
++ KUBEVIRT_PROVIDER=k8s-1.9.3
++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ kubeconfig=cluster/vagrant/.kubeconfig
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.9.3.sh
++ source hack/config-provider-k8s-1.9.3.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubectl
+++ docker_prefix=localhost:35867/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Deploying ...'
Deploying ...
+ [[ -z vagrant-release ]]
+ [[ vagrant-release =~ .*-dev ]]
+ [[ vagrant-release =~ .*-release ]]
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]]
+ continue
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]]
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
clusterrole "kubevirt.io:admin" created
clusterrole "kubevirt.io:edit" created
clusterrole "kubevirt.io:view" created
serviceaccount "kubevirt-apiserver" created
clusterrolebinding "kubevirt-apiserver" created
clusterrolebinding "kubevirt-apiserver-auth-delegator" created
rolebinding "kubevirt-apiserver" created
role "kubevirt-apiserver" created
clusterrole "kubevirt-apiserver" created
clusterrole "kubevirt-controller" created
serviceaccount "kubevirt-controller" created
serviceaccount "kubevirt-privileged" created
clusterrolebinding "kubevirt-controller" created
clusterrolebinding "kubevirt-controller-cluster-admin" created
clusterrolebinding "kubevirt-privileged-cluster-admin" created
clusterrole "kubevirt.io:default" created
clusterrolebinding "kubevirt.io:default" created
service "virt-api" created
deployment "virt-api" created
deployment "virt-controller" created
daemonset "virt-handler" created
customresourcedefinition "virtualmachines.kubevirt.io" created
customresourcedefinition "virtualmachinereplicasets.kubevirt.io" created
customresourcedefinition "virtualmachinepresets.kubevirt.io" created
customresourcedefinition "offlinevirtualmachines.kubevirt.io" created
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
persistentvolumeclaim "disk-alpine" created
persistentvolume "iscsi-disk-alpine" created
persistentvolumeclaim "disk-custom" created
persistentvolume "iscsi-disk-custom" created
daemonset "iscsi-demo-target-tgtd" created
serviceaccount "kubevirt-testing" created
clusterrolebinding "kubevirt-testing-cluster-admin" created
+ '[' k8s-1.9.3 = vagrant-openshift ']'
+ [[ k8s-1.9.3 =~ os-3.9.0.* ]]
+ echo Done
Done
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n 'virt-api-fd96f94b5-7sqx6 0/1 ContainerCreating 0 2s
virt-api-fd96f94b5-b2cns 0/1 ContainerCreating 0 2s
virt-controller-5f7c946cc4-bd4rr 0/1 ContainerCreating 0 2s
virt-controller-5f7c946cc4-slg95 0/1 ContainerCreating 0 2s
virt-handler-5j6zl 0/1 ContainerCreating 0 2s
virt-handler-xkqxm 0/1 ContainerCreating 0 2s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ grep -v Running
+ cluster/kubectl.sh get pods -n kube-system --no-headers
iscsi-demo-target-tgtd-vtwc4 0/1 Pending 0 1s
iscsi-demo-target-tgtd-x7dwb 0/1 Pending 0 1s
virt-api-fd96f94b5-7sqx6 0/1 ContainerCreating 0 3s
virt-api-fd96f94b5-b2cns 0/1 ContainerCreating 0 3s
virt-controller-5f7c946cc4-bd4rr 0/1 ContainerCreating 0 3s
virt-controller-5f7c946cc4-slg95 0/1 ContainerCreating 0 3s
virt-handler-5j6zl 0/1 ContainerCreating 0 3s
virt-handler-xkqxm 0/1 ContainerCreating 0 3s
+ sleep 10
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n 'iscsi-demo-target-tgtd-vtwc4 0/1 ContainerCreating 0 14s
iscsi-demo-target-tgtd-x7dwb 0/1 ContainerCreating 0 14s
virt-api-fd96f94b5-7sqx6 0/1 ContainerCreating 0 16s
virt-handler-5j6zl 0/1 ContainerCreating 0 16s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ grep -v Running
+ cluster/kubectl.sh get pods -n kube-system --no-headers
iscsi-demo-target-tgtd-vtwc4 0/1 ContainerCreating 0 16s
iscsi-demo-target-tgtd-x7dwb 0/1 ContainerCreating 0 16s
virt-api-fd96f94b5-7sqx6 0/1 ContainerCreating 0 18s
virt-handler-5j6zl 0/1 ContainerCreating 0 18s
+ sleep 10
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n 'iscsi-demo-target-tgtd-x7dwb 0/1 ContainerCreating 0 26s
virt-api-fd96f94b5-7sqx6 0/1 ContainerCreating 0 28s
virt-handler-5j6zl 0/1 ContainerCreating 0 28s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ grep -v Running
+ cluster/kubectl.sh get pods -n kube-system --no-headers
iscsi-demo-target-tgtd-x7dwb 0/1 ContainerCreating 0 27s
virt-api-fd96f94b5-7sqx6 0/1 ContainerCreating 0 29s
virt-handler-5j6zl 0/1 ContainerCreating 0 29s
+ sleep 10
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n '' ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
+ '[' -n 'false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
+ '[' -n 'false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
false iscsi-demo-target-tgtd-vtwc4
false iscsi-demo-target-tgtd-x7dwb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n '' ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '/virt-controller/ && /true/'
++ wc -l
+ '[' 2 -lt 1 ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
etcd-node01                        1/1       Running   0          8m
iscsi-demo-target-tgtd-vtwc4       1/1       Running   1          2m
iscsi-demo-target-tgtd-x7dwb       1/1       Running   1          2m
kube-apiserver-node01              1/1       Running   0          8m
kube-controller-manager-node01     1/1       Running   0          8m
kube-dns-6f4fd4bdf-vnrcr           3/3       Running   0          9m
kube-flannel-ds-jg776              1/1       Running   0          9m
kube-flannel-ds-svtm6              1/1       Running   0          9m
kube-proxy-dz27n                   1/1       Running   0          9m
kube-proxy-m9ctz                   1/1       Running   0          9m
kube-scheduler-node01              1/1       Running   0          8m
virt-api-fd96f94b5-7sqx6           1/1       Running   0          2m
virt-api-fd96f94b5-b2cns           1/1       Running   0          2m
virt-controller-5f7c946cc4-bd4rr   1/1       Running   0          2m
virt-controller-5f7c946cc4-slg95   1/1       Running   0          2m
virt-handler-5j6zl                 1/1       Running   0          2m
virt-handler-xkqxm                 1/1       Running   0          2m
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/junit.xml'
+ [[ -d /home/nfs/images/windows2016 ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-release/junit.xml'
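The argument handling traced above can be summarized, as a hedged sketch, like this (the Windows branch body is not visible in this log and is therefore omitted):

# Hedged reconstruction of the argument handling traced above; identifiers (ginko_params,
# FUNC_TEST_ARGS, NFS_WINDOWS_DIR) are the ones visible in this log.
ginko_params="--ginkgo.noColor --junit-output=${WORKSPACE}/junit.xml"

# The log only records the directory test itself; whether it adds Windows-specific
# flags on success is not visible here and is left out.
[[ -d "$NFS_WINDOWS_DIR" ]] && echo "Windows image directory found at $NFS_WINDOWS_DIR"

FUNC_TEST_ARGS="$ginko_params"
export FUNC_TEST_ARGS
make functest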
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:407eaf2c96939fd4b9af031ea33c13bf9da43c98d06f62e57afeca526fcb3c1f
go version go1.10 linux/amd64
Waiting for rsyncd to be ready.
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test
hack/functests.sh
Running Suite: Tests Suite
==========================
Random Seed: 1527787526
Will run 109 of 109 specs
•••••••••••
------------------------------
• [SLOW TEST:72.963 seconds]
Health Monitoring
/root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:37
A VM with a watchdog device
/root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:56
should be shut down when the watchdog expires
/root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:57
------------------------------
•volumedisk0
compute
------------------------------
• [SLOW TEST:57.658 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:39
VM definition
/root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:50
with 3 CPU cores
/root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:51
should report 3 cpu cores under guest OS
/root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:57
------------------------------
• [SLOW TEST:52.420 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:39
New VM with all supported drives
/root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:110
should have all the device nodes
/root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:133
------------------------------
• [SLOW TEST:55.081 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46
A new VM
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80
with cloudInitNoCloud userDataBase64 source
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:81
should have cloud-init data
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:82
------------------------------
• [SLOW TEST:165.765 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46
A new VM
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80
with cloudInitNoCloud userDataBase64 source
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:81
with injected ssh-key
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:92
should have ssh-key under authorized keys
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:93
------------------------------
• [SLOW TEST:57.807 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46
A new VM
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80
with cloudInitNoCloud userData source
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:118
should process provided cloud-init data
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:119
------------------------------
• [SLOW TEST:52.125 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46
A new VM
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80
should take user-data from k8s secret
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:161
------------------------------
• [SLOW TEST:44.118 seconds]
LeaderElection
/root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43
Start a VM
/root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53
when the controller pod is not running
/root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54
should success
/root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55
------------------------------
••
------------------------------
• [SLOW TEST:16.660 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
A valid OfflineVirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
should update OfflineVirtualMachine once VMs are up
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:195
------------------------------
• [SLOW TEST:11.362 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
A valid OfflineVirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
should remove VM once the OVM is marked for deletion
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:204
------------------------------
• [SLOW TEST:88.369 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
A valid OfflineVirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
should recreate VM if it gets deleted
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:245
------------------------------
• [SLOW TEST:44.538 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
A valid OfflineVirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
should recreate VM if the VM's pod gets deleted
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:265
------------------------------
• [SLOW TEST:20.343 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
A valid OfflineVirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
should stop VM if running set to false
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:325
------------------------------
• [SLOW TEST:205.269 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
A valid OfflineVirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
should start and stop VM multiple times
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:333
------------------------------
• [SLOW TEST:44.405 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
A valid OfflineVirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
should not update the VM spec if Running
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:346
------------------------------
• [SLOW TEST:345.453 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
A valid OfflineVirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
should survive guest shutdown, multiple times
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:387
------------------------------
• [SLOW TEST:16.383 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
A valid OfflineVirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
Using virtctl interface
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:435
should start a VM once
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:436
------------------------------
• [SLOW TEST:23.337 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
A valid OfflineVirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
Using virtctl interface
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:435
should stop a VM once
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:467
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.054 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
should succeed to start a vm [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:132
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
should succeed to stop a running vm [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:138
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with winrm connection [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:149
should have correct UUID
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:191
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.006 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with winrm connection [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:149
should have pod IP
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:207
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.007 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with kubectl command [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:225
should succeed to start a vm
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:241
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with kubectl command [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:225
should succeed to stop a vm
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:249
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
• [SLOW TEST:140.144 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
Starting and stopping the same VM
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90
with ephemeral registry disk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91
should success multiple times
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92
------------------------------
• [SLOW TEST:16.098 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
Starting a VM
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111
with ephemeral registry disk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112
should not modify the spec on status update
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113
------------------------------
• [SLOW TEST:23.223 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
Starting multiple VMs
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129
with ephemeral registry disk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130
should success
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131
------------------------------
• [SLOW TEST:6.286 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify only admin role has access only to kubevirt-config
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:42
------------------------------
• [SLOW TEST:6.899 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given a vm
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:6.867 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given an ovm
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:6.858 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given a vm preset
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:7.037 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given a vm replica set
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:15.399 seconds]
VNC
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:37
A new VM
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:48
with VNC connection
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:49
should allow accessing the VNC device
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:50
------------------------------
• [SLOW TEST:60.464 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
Starting a VM
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
with Alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:123
should be successfully started
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
with Disk PVC
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:50.311 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
Starting a VM
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
with Alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:123
should be successfully started
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
with CDRom PVC
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:125.837 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
Starting a VM
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
with Alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:123
should be successfully started and stopped multiple times
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
with Disk PVC
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:160.313 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
Starting a VM
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
with Alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:123
should be successfully started and stopped multiple times
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
with CDRom PVC
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:51.805 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
Starting a VM
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
With an emptyDisk defined
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:182
should create a writeable emptyDisk with the right capacity
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:184
------------------------------
• [SLOW TEST:69.372 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
Starting a VM
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
With ephemeral alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:232
should be successfully started
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:234
------------------------------
• [SLOW TEST:116.382 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
Starting a VM
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
With ephemeral alpine PVC
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:232
should not persist data
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:254
------------------------------
• [SLOW TEST:132.673 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
Starting a VM
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
With VM with two PVCs
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:314
should start vm multiple times
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:326
------------------------------
•••••
------------------------------
• [SLOW TEST:15.773 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
should start it
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:65
------------------------------
• [SLOW TEST:16.100 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
should attach virt-launcher to it
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:71
------------------------------
••••
------------------------------
• [SLOW TEST:54.703 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
with boot order
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:159
should be able to boot from selected disk
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
Alpine as first boot
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:26.289 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
with boot order
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:159
should be able to boot from selected disk
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
Cirros as first boot
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:15.174 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
with user-data
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:186
without k8s secret
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:187
should retry starting the VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:188
------------------------------
• [SLOW TEST:15.044 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
with user-data
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:186
without k8s secret
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:187
should log warning and proceed once the secret is there
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:218
------------------------------
• [SLOW TEST:37.460 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
when virt-launcher crashes
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:263
should be stopped and have Failed phase
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:264
------------------------------
• [SLOW TEST:22.932 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
when virt-handler crashes
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:286
should recover and continue management
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:287
------------------------------
• Failure in Spec Setup (BeforeEach) [98.297 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
when virt-handler is not responsive
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:347
the node controller should react [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:385
Unexpected Warning event received.
Expected
<string>: Warning
not to equal
<string>: Warning
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:233
------------------------------
level=info timestamp=2018-05-31T18:08:54.479618Z pos=utils.go:231 component=tests msg="Created virtual machine pod virt-launcher-testvms7qhd-frbcc"
level=info timestamp=2018-05-31T18:09:08.510272Z pos=utils.go:231 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvms7qhd-frbcc"
level=error timestamp=2018-05-31T18:09:08.541923Z pos=utils.go:229 component=tests reason="unexpected warning event received" msg="server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')"
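For triage of this failure, commands along the following lines can surface the Warning event and the relevant pod logs; the namespace, pod name, and label selector below are illustrative, taken or assumed from the log above rather than from this job's scripts:

# Illustrative triage commands; namespace, pod name and label selector are taken from
# (or assumed from) the log above.
cluster/kubectl.sh get events -n kubevirt-test-default --sort-by=.lastTimestamp
cluster/kubectl.sh describe pod -n kubevirt-test-default virt-launcher-testvms7qhd-frbcc
cluster/kubectl.sh logs -n kube-system -l kubevirt.io=virt-handler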
S [SKIPPING] [0.213 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
with non default namespace
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:433
should log libvirt start and stop lifecycle events of the domain
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
kubevirt-test-default [It]
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
Skip log query tests for JENKINS ci test environment
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:438
------------------------------
S [SKIPPING] [0.091 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
with non default namespace
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:433
should log libvirt start and stop lifecycle events of the domain
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
kubevirt-test-alternative [It]
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
Skip log query tests for JENKINS ci test environment
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:438
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.057 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
VM Emulation Mode
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:494
should enable emulation in virt-launcher [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:514
Software emulation is not enabled on this cluster
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:510
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.070 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Creating a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
VM Emulation Mode
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:494
should be reflected in domain XML [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:551
Software emulation is not enabled on this cluster
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:510
------------------------------
• [SLOW TEST:18.434 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Delete a VM's Pod
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:610
should result in the VM moving to a finalized state
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:611
------------------------------
• [SLOW TEST:35.625 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Delete a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:642
with an active pod.
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:643
should result in pod being terminated
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:644
------------------------------
• [SLOW TEST:20.462 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Delete a VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:642
with grace period greater than 0
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:667
should run graceful shutdown
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:668
------------------------------
• [SLOW TEST:30.028 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Killed VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:719
should be in Failed phase
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:720
------------------------------
• [SLOW TEST:24.149 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
Killed VM
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:719
should be left alone by virt-handler
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:747
------------------------------
• [SLOW TEST:54.573 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:47
VirtualMachine attached to the pod network
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:143
should be able to reach
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
the Inbound VM
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.748 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:47
VirtualMachine attached to the pod network
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:143
should be able to reach
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
the internet
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.051 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:47
VirtualMachine attached to the pod network
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:143
should be reachable via the propagated IP from a Pod
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
on a different node from Pod
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
•••••
------------------------------
• [SLOW TEST:52.170 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:47
VirtualMachine with custom interface model
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:357
should expose the right device type to the guest
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:358
------------------------------
• [SLOW TEST:51.515 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
A new VM
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
with a serial console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
with a cirros image
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
should return that we are running cirros
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67
------------------------------
• [SLOW TEST:55.957 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
A new VM
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
with a serial console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
with a fedora image
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76
should return that we are running fedora
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77
------------------------------
• [SLOW TEST:53.467 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
A new VM
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
with a serial console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
should be able to reconnect to console multiple times
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86
------------------------------
• [SLOW TEST:5.287 seconds]
VirtualMachineReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
should scale
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
to three, to two and then to zero replicas
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:12.786 seconds]
VirtualMachineReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
should scale
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
to five, to six and then to zero replicas
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••
------------------------------
• [SLOW TEST:18.168 seconds]
VirtualMachineReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
should update readyReplicas once VMs are up
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157
------------------------------
••
------------------------------
• [SLOW TEST:5.452 seconds]
VirtualMachineReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
should not scale when paused and scale when resume
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223
------------------------------
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...
Summarizing 1 Failure:
[Fail] Vmlifecycle Creating a VM when virt-handler is not responsive [BeforeEach] the node controller should react
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:233
Ran 99 of 109 Specs in 3227.310 seconds
FAIL! -- 98 Passed | 1 Failed | 0 Pending | 10 Skipped --- FAIL: TestTests (3227.33s)
FAIL
make: *** [functest] Error 1
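To iterate on just the failing spec instead of the full 109-spec suite, one option, assuming FUNC_TEST_ARGS is passed through to Ginkgo as it is above, is:

# Sketch: re-run only the failing spec via a Ginkgo focus regex (the regex is an
# assumption matching the failure summary above; adjust as needed).
FUNC_TEST_ARGS='--ginkgo.noColor --ginkgo.focus=when virt-handler is not responsive' make functest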
+ make cluster-down
./cluster/down.sh