+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio
+ [[ openshift-release-crio =~ openshift-.* ]]
+ export PROVIDER=os-3.9.0
+ PROVIDER=os-3.9.0
+ [[ openshift-release-crio =~ .*-crio ]]
+ export CRIO=true
+ CRIO=true
+ export VAGRANT_NUM_NODES=1
+ VAGRANT_NUM_NODES=1
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
WARNING: You're not using the default seccomp profile
kubevirt-functional-tests-openshift-release-crio1-crio-node02
2018/04/29 07:24:22 Waiting for host: 192.168.66.102:22
2018/04/29 07:24:25 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/04/29 07:24:33 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/04/29 07:24:41 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/04/29 07:24:49 Connected to tcp://192.168.66.102:22
Removed symlink /etc/systemd/system/multi-user.target.wants/origin-master-api.service.
Removed symlink /etc/systemd/system/origin-node.service.wants/origin-master-api.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/origin-master-controllers.service.
kubevirt-functional-tests-openshift-release-crio1-crio-node01
2018/04/29 07:24:58 Waiting for host: 192.168.66.101:22
2018/04/29 07:25:01 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/29 07:25:09 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/29 07:25:17 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/29 07:25:25 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/29 07:25:30 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s
2018/04/29 07:25:35 Connected to tcp://192.168.66.101:22
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    3d        v1.9.1+a0ce1bc657
PING node02 (192.168.66.102) 56(84) bytes of data.
64 bytes from node02 (192.168.66.102): icmp_seq=1 ttl=64 time=0.868 ms

--- node02 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.868/0.868/0.868/0.000 ms
Found node02. Adding it to the inventory.
ping: node03: Name or service not known

PLAY [Populate config host groups] *********************************************

TASK [Load group name mapping variables] ***************************************
ok: [localhost]

TASK [Evaluate groups - g_etcd_hosts or g_new_etcd_hosts required] *************
skipping: [localhost]

TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] *********
skipping: [localhost]

TASK [Evaluate groups - g_node_hosts or g_new_node_hosts required] *************
skipping: [localhost]

TASK [Evaluate groups - g_lb_hosts required] ***********************************
skipping: [localhost]

TASK [Evaluate groups - g_nfs_hosts required] **********************************
skipping: [localhost]

TASK [Evaluate groups - g_nfs_hosts is single host] ****************************
skipping: [localhost]

TASK [Evaluate groups - g_glusterfs_hosts required] ****************************
skipping: [localhost]

TASK [Evaluate groups - Fail if no etcd hosts group is defined] ****************
skipping: [localhost]

TASK [Evaluate oo_all_hosts] ***************************************************
ok: [localhost] => (item=node01)
ok: [localhost] => (item=node02)

TASK [Evaluate oo_masters] *****************************************************
ok: [localhost] => (item=node01)

TASK [Evaluate oo_first_master] ************************************************
ok: [localhost]

TASK [Evaluate oo_new_etcd_to_config] ******************************************

TASK [Evaluate oo_masters_to_config] *******************************************
ok: [localhost] => (item=node01)

TASK [Evaluate oo_etcd_to_config] **********************************************
ok: [localhost] => (item=node01)

TASK [Evaluate oo_first_etcd] **************************************************
ok: [localhost]

TASK [Evaluate oo_etcd_hosts_to_upgrade] ***************************************
ok: [localhost] => (item=node01)

TASK [Evaluate oo_etcd_hosts_to_backup] ****************************************
ok: [localhost] => (item=node01)

TASK [Evaluate oo_nodes_to_config] *********************************************
ok: [localhost] => (item=node02)

TASK [Add master to oo_nodes_to_config] ****************************************
skipping: [localhost] => (item=node01)

TASK [Evaluate oo_lb_to_config] ************************************************

TASK [Evaluate oo_nfs_to_config] ***********************************************
ok: [localhost] => (item=node01)

TASK [Evaluate oo_glusterfs_to_config] *****************************************

TASK [Evaluate oo_etcd_to_migrate] *********************************************
ok: [localhost] => (item=node01)

PLAY [Ensure there are new_nodes] **********************************************

TASK [fail] ********************************************************************
skipping: [localhost]

TASK [fail] ********************************************************************
skipping: [localhost]

PLAY [Initialization Checkpoint Start] *****************************************

TASK [Set install initialization 'In Progress'] ********************************
ok: [node01]

PLAY [Populate config host groups] *********************************************

TASK [Load group name mapping variables] ***************************************
ok: [localhost]

TASK [Evaluate groups - g_etcd_hosts or g_new_etcd_hosts required] *************
skipping: [localhost]

TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] *********
skipping: [localhost]

TASK [Evaluate groups - g_node_hosts or
g_new_node_hosts required] ************* skipping: [localhost] TASK [Evaluate groups - g_lb_hosts required] *********************************** skipping: [localhost] TASK [Evaluate groups - g_nfs_hosts required] ********************************** skipping: [localhost] TASK [Evaluate groups - g_nfs_hosts is single host] **************************** skipping: [localhost] TASK [Evaluate groups - g_glusterfs_hosts required] **************************** skipping: [localhost] TASK [Evaluate groups - Fail if no etcd hosts group is defined] **************** skipping: [localhost] TASK [Evaluate oo_all_hosts] *************************************************** ok: [localhost] => (item=node01) ok: [localhost] => (item=node02) TASK [Evaluate oo_masters] ***************************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_first_master] ************************************************ ok: [localhost] TASK [Evaluate oo_new_etcd_to_config] ****************************************** TASK [Evaluate oo_masters_to_config] ******************************************* ok: [localhost] => (item=node01) TASK [Evaluate oo_etcd_to_config] ********************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_first_etcd] ************************************************** ok: [localhost] TASK [Evaluate oo_etcd_hosts_to_upgrade] *************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_etcd_hosts_to_backup] **************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_nodes_to_config] ********************************************* ok: [localhost] => (item=node02) TASK [Add master to oo_nodes_to_config] **************************************** skipping: [localhost] => (item=node01) TASK [Evaluate oo_lb_to_config] ************************************************ TASK [Evaluate oo_nfs_to_config] *********************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_glusterfs_to_config] ***************************************** TASK [Evaluate oo_etcd_to_migrate] ********************************************* ok: [localhost] => (item=node01) [WARNING]: Could not match supplied host pattern, ignoring: oo_lb_to_config PLAY [Ensure that all non-node hosts are accessible] *************************** TASK [Gathering Facts] ********************************************************* ok: [node01] PLAY [Initialize basic host facts] ********************************************* TASK [Gathering Facts] ********************************************************* ok: [node01] ok: [node02] TASK [openshift_sanitize_inventory : include_tasks] **************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml for node01, node02 TASK [openshift_sanitize_inventory : Check for usage of deprecated variables] *** ok: [node01] ok: [node02] TASK [openshift_sanitize_inventory : debug] ************************************ skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : set_stats] ******************************** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Assign deprecated variables to correct counterparts] *** included: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_logging.yml for node01, node02 included: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_metrics.yml for node01, node02 TASK 
[openshift_sanitize_inventory : conditional_set_fact] ********************* ok: [node02] ok: [node01] TASK [openshift_sanitize_inventory : set_fact] ********************************* ok: [node01] ok: [node02] TASK [openshift_sanitize_inventory : conditional_set_fact] ********************* ok: [node01] ok: [node02] TASK [openshift_sanitize_inventory : Standardize on latest variable names] ***** ok: [node01] ok: [node02] TASK [openshift_sanitize_inventory : Normalize openshift_release] ************** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : include_tasks] **************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml for node01, node02 TASK [openshift_sanitize_inventory : Ensure that openshift_use_dnsmasq is true] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure that openshift_node_dnsmasq_install_network_manager_hook is true] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : set_fact] ********************************* skipping: [node01] => (item=None) skipping: [node02] => (item=None) TASK [openshift_sanitize_inventory : Ensure that dynamic provisioning is set if using dynamic storage] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure clusterid is set along with the cloudprovider] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure ansible_service_broker_remove and ansible_service_broker_install are mutually exclusive] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure template_service_broker_remove and template_service_broker_install are mutually exclusive] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure that all requires vsphere configuration variables are set] *** skipping: [node02] skipping: [node01] TASK [Detecting Operating System from ostree_booted] *************************** ok: [node01] ok: [node02] TASK [set openshift_deployment_type if unset] ********************************** skipping: [node01] skipping: [node02] TASK [initialize_facts set fact openshift_is_atomic and openshift_is_containerized] *** ok: [node01] ok: [node02] TASK [Determine Atomic Host Docker Version] ************************************ skipping: [node01] skipping: [node02] TASK [assert atomic host docker version is 1.12 or later] ********************** skipping: [node01] skipping: [node02] PLAY [Initialize special first-master variables] ******************************* TASK [Gathering Facts] ********************************************************* ok: [node01] TASK [set_fact] **************************************************************** ok: [node01] PLAY [Disable web console if required] ***************************************** TASK [set_fact] **************************************************************** skipping: [node01] PLAY [Install packages necessary for installer] ******************************** TASK [Gathering Facts] 
********************************************************* ok: [node02] TASK [Ensure openshift-ansible installer package deps are installed] *********** ok: [node02] => (item=iproute) ok: [node02] => (item=dbus-python) ok: [node02] => (item=PyYAML) ok: [node02] => (item=python-ipaddress) ok: [node02] => (item=yum-utils) TASK [Ensure various deps for running system containers are installed] ********* skipping: [node02] => (item=atomic) skipping: [node02] => (item=ostree) skipping: [node02] => (item=runc) PLAY [Initialize cluster facts] ************************************************ TASK [Gathering Facts] ********************************************************* ok: [node02] ok: [node01] TASK [Gather Cluster facts] **************************************************** changed: [node02] ok: [node01] TASK [Set fact of no_proxy_internal_hostnames] ********************************* skipping: [node01] skipping: [node02] TASK [Initialize openshift.node.sdn_mtu] *************************************** ok: [node02] ok: [node01] PLAY [Determine openshift_version to configure on first master] **************** TASK [Gathering Facts] ********************************************************* skipping: [node01] TASK [include_role] ************************************************************ skipping: [node01] TASK [debug] ******************************************************************* skipping: [node01] PLAY [Set openshift_version for etcd, node, and master hosts] ****************** skipping: no hosts matched PLAY [Ensure the requested version packages are available.] ******************** skipping: no hosts matched PLAY [Verify Requirements] ***************************************************** TASK [Gathering Facts] ********************************************************* ok: [node01] TASK [Run variable sanity checks] ********************************************** ok: [node01] PLAY [Initialization Checkpoint End] ******************************************* TASK [Set install initialization 'Complete'] *********************************** ok: [node01] PLAY [Validate node hostnames] ************************************************* TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [Query DNS for IP address of node02] ************************************** ok: [node02] TASK [Validate openshift_hostname when defined] ******************************** skipping: [node02] TASK [Validate openshift_ip exists on node when defined] *********************** skipping: [node02] PLAY [Setup yum repositories for all hosts] ************************************ TASK [rhel_subscribe : fail] *************************************************** skipping: [node02] TASK [rhel_subscribe : Install Red Hat Subscription manager] ******************* skipping: [node02] TASK [rhel_subscribe : Is host already registered?] 
**************************** skipping: [node02] TASK [rhel_subscribe : Register host] ****************************************** skipping: [node02] TASK [rhel_subscribe : fail] *************************************************** skipping: [node02] TASK [rhel_subscribe : Determine if OpenShift Pool Already Attached] *********** skipping: [node02] TASK [rhel_subscribe : Attach to OpenShift Pool] ******************************* skipping: [node02] TASK [rhel_subscribe : include_tasks] ****************************************** skipping: [node02] TASK [openshift_repos : openshift_repos detect ostree] ************************* ok: [node02] TASK [openshift_repos : Ensure libselinux-python is installed] ***************** ok: [node02] TASK [openshift_repos : Remove openshift_additional.repo file] ***************** ok: [node02] TASK [openshift_repos : Create any additional repos that are defined] ********** TASK [openshift_repos : include_tasks] ***************************************** skipping: [node02] TASK [openshift_repos : include_tasks] ***************************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_repos/tasks/centos_repos.yml for node02 TASK [openshift_repos : Configure origin gpg keys] ***************************** ok: [node02] TASK [openshift_repos : Configure correct origin release repository] *********** ok: [node02] => (item=/usr/share/ansible/openshift-ansible/roles/openshift_repos/templates/CentOS-OpenShift-Origin.repo.j2) TASK [openshift_repos : Ensure clean repo cache in the event repos have been changed manually] *** changed: [node02] => { "msg": "First run of openshift_repos" } TASK [openshift_repos : Record that openshift_repos already ran] *************** ok: [node02] RUNNING HANDLER [openshift_repos : refresh cache] ****************************** changed: [node02] PLAY [Configure os_firewall] *************************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [os_firewall : Detecting Atomic Host Operating System] ******************** ok: [node02] TASK [os_firewall : Set fact r_os_firewall_is_atomic] ************************** ok: [node02] TASK [os_firewall : include_tasks] ********************************************* skipping: [node02] TASK [os_firewall : include_tasks] ********************************************* included: /usr/share/ansible/openshift-ansible/roles/os_firewall/tasks/iptables.yml for node02 TASK [os_firewall : Ensure firewalld service is not enabled] ******************* ok: [node02] TASK [os_firewall : Wait 10 seconds after disabling firewalld] ***************** skipping: [node02] TASK [os_firewall : Install iptables packages] ********************************* ok: [node02] => (item=iptables) ok: [node02] => (item=iptables-services) TASK [os_firewall : Start and enable iptables service] ************************* ok: [node02 -> node02] => (item=node02) TASK [os_firewall : need to pause here, otherwise the iptables service starting can sometimes cause ssh to fail] *** skipping: [node02] PLAY [create oo_hosts_containerized_managed_true host group] ******************* TASK [Gathering Facts] ********************************************************* ok: [node01] TASK [group_by] **************************************************************** ok: [node01] PLAY [oo_nodes_to_config] ****************************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK 
[container_runtime : Setup the docker-storage for overlay] **************** skipping: [node02] PLAY [create oo_hosts_containerized_managed_true host group] ******************* TASK [Gathering Facts] ********************************************************* ok: [node01] TASK [group_by] **************************************************************** ok: [node01] PLAY [oo_nodes_to_config] ****************************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [openshift_excluder : Install excluders] ********************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/install.yml for node02 TASK [openshift_excluder : Install docker excluder - yum] ********************** ok: [node02] TASK [openshift_excluder : Install docker excluder - dnf] ********************** skipping: [node02] TASK [openshift_excluder : Install openshift excluder - yum] ******************* skipping: [node02] TASK [openshift_excluder : Install openshift excluder - dnf] ******************* skipping: [node02] TASK [openshift_excluder : set_fact] ******************************************* ok: [node02] TASK [openshift_excluder : Enable excluders] *********************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml for node02 TASK [openshift_excluder : Check for docker-excluder] ************************** ok: [node02] TASK [openshift_excluder : Enable docker excluder] ***************************** changed: [node02] TASK [openshift_excluder : Check for openshift excluder] *********************** ok: [node02] TASK [openshift_excluder : Enable openshift excluder] ************************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** included: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/common/pre.yml for node02 TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Add enterprise registry, if necessary] *************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Get current installed Docker version] **************** ok: [node02] TASK [container_runtime : include_tasks] *************************************** included: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_sanity.yml for node02 TASK [container_runtime : Error out if Docker pre-installed but too old] ******* skipping: [node02] TASK [container_runtime : Error out if requested Docker is too old] ************ skipping: [node02] TASK [container_runtime : Fail if Docker version requested but downgrade is required] *** skipping: [node02] TASK [container_runtime : Error out if attempting to upgrade Docker across the 1.10 boundary] *** skipping: [node02] TASK [container_runtime : Install Docker] ************************************** skipping: [node02] TASK [container_runtime : Ensure docker.service.d directory exists] ************ ok: [node02] TASK [container_runtime : Configure Docker service unit file] ****************** ok: [node02] TASK [container_runtime : stat] ************************************************ ok: [node02] TASK [container_runtime : Set registry params] ********************************* skipping: [node02] => (item={u'reg_conf_var': u'ADD_REGISTRY', u'reg_flag': u'--add-registry', u'reg_fact_val': []}) 
skipping: [node02] => (item={u'reg_conf_var': u'BLOCK_REGISTRY', u'reg_flag': u'--block-registry', u'reg_fact_val': []}) skipping: [node02] => (item={u'reg_conf_var': u'INSECURE_REGISTRY', u'reg_flag': u'--insecure-registry', u'reg_fact_val': []}) TASK [container_runtime : Place additional/blocked/insecure registries in /etc/containers/registries.conf] *** skipping: [node02] TASK [container_runtime : Set Proxy Settings] ********************************** skipping: [node02] => (item={u'reg_conf_var': u'HTTP_PROXY', u'reg_fact_val': u''}) skipping: [node02] => (item={u'reg_conf_var': u'HTTPS_PROXY', u'reg_fact_val': u''}) skipping: [node02] => (item={u'reg_conf_var': u'NO_PROXY', u'reg_fact_val': u''}) TASK [container_runtime : Set various Docker options] ************************** ok: [node02] TASK [container_runtime : stat] ************************************************ ok: [node02] TASK [container_runtime : Configure Docker Network OPTIONS] ******************** ok: [node02] TASK [container_runtime : Detect if docker is already started] ***************** ok: [node02] TASK [container_runtime : Start the Docker service] **************************** ok: [node02] TASK [container_runtime : set_fact] ******************************************** ok: [node02] TASK [container_runtime : include_tasks] *************************************** included: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/common/post.yml for node02 TASK [container_runtime : Ensure /var/lib/containers exists] ******************* ok: [node02] TASK [container_runtime : Fix SELinux Permissions on /var/lib/containers] ****** ok: [node02] TASK [container_runtime : include_tasks] *************************************** included: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/registry_auth.yml for node02 TASK [container_runtime : Check for credentials file for registry auth] ******** skipping: [node02] TASK [container_runtime : Create credentials for docker cli registry auth] ***** skipping: [node02] TASK [container_runtime : Create credentials for docker cli registry auth (alternative)] *** skipping: [node02] TASK [container_runtime : stat the docker data dir] **************************** ok: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Fail quickly if openshift_docker_options are set] **** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Install Docker so we can use the client] ************* skipping: [node02] TASK [container_runtime : Disable Docker] ************************************** skipping: [node02] TASK [container_runtime : Ensure proxies are in the atomic.conf] *************** skipping: [node02] TASK [container_runtime : debug] *********************************************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Pre-pull Container Engine System Container image] **** skipping: [node02] TASK [container_runtime : Ensure container-engine.service.d directory exists] *** skipping: [node02] TASK [container_runtime : Ensure /etc/docker directory exists] ***************** skipping: [node02] TASK [container_runtime : Install Container Engine System Container] *********** skipping: [node02] TASK 
[container_runtime : Configure Container Engine Service File] ************* skipping: [node02] TASK [container_runtime : Configure Container Engine] ************************** skipping: [node02] TASK [container_runtime : Start the Container Engine service] ****************** skipping: [node02] TASK [container_runtime : set_fact] ******************************************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Check we are not using node as a Docker container with CRI-O] *** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** included: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/common/pre.yml for node02 TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Add enterprise registry, if necessary] *************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** included: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/common/syscontainer_packages.yml for node02 TASK [container_runtime : Ensure container-selinux is installed] *************** ok: [node02] TASK [container_runtime : Ensure atomic is installed] ************************** ok: [node02] TASK [container_runtime : Ensure runc is installed] **************************** ok: [node02] TASK [container_runtime : Check that overlay is in the kernel] ***************** changed: [node02] TASK [container_runtime : Add overlay to modprobe.d] *************************** skipping: [node02] TASK [container_runtime : Manually modprobe overlay into the kernel] *********** skipping: [node02] TASK [container_runtime : Enable and start systemd-modules-load] *************** skipping: [node02] TASK [container_runtime : Ensure proxies are in the atomic.conf] *************** included: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/common/atomic_proxy.yml for node02 TASK [container_runtime : Add http_proxy to /etc/atomic.conf] ****************** skipping: [node02] TASK [container_runtime : Add https_proxy to /etc/atomic.conf] ***************** skipping: [node02] TASK [container_runtime : Add no_proxy to /etc/atomic.conf] ******************** skipping: [node02] TASK [container_runtime : debug] *********************************************** ok: [node02] => { "l_crio_image": "docker.io/alukiano/crio:1.9.11" } TASK [container_runtime : Pre-pull CRI-O System Container image] *************** ok: [node02] TASK [container_runtime : Install CRI-O System Container] ********************** ok: [node02] TASK [container_runtime : Remove CRI-O default configuration files] ************ ok: [node02] => (item=/etc/cni/net.d/200-loopback.conf) ok: [node02] => (item=/etc/cni/net.d/100-crio-bridge.conf) TASK [container_runtime : Create the CRI-O configuration] ********************** ok: [node02] TASK [container_runtime : Ensure CNI configuration directory exists] *********** ok: [node02] TASK [container_runtime : Add iptables allow rules] **************************** ok: [node02] => (item={u'port': u'10010/tcp', u'service': u'crio'}) TASK [container_runtime : Remove iptables rules] ******************************* TASK [container_runtime : Add firewalld allow rules] *************************** skipping: [node02] => (item={u'port': u'10010/tcp', u'service': u'crio'}) TASK [container_runtime : Remove firewalld allow rules] ************************ 
TASK [container_runtime : Configure the CNI network] *************************** ok: [node02] TASK [container_runtime : Create /etc/sysconfig/crio-storage] ****************** ok: [node02] TASK [container_runtime : Create /etc/sysconfig/crio-network] ****************** ok: [node02] TASK [container_runtime : Start the CRI-O service] ***************************** ok: [node02] TASK [container_runtime : include_tasks] *************************************** included: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/common/post.yml for node02 TASK [container_runtime : Ensure /var/lib/containers exists] ******************* ok: [node02] TASK [container_runtime : Fix SELinux Permissions on /var/lib/containers] ****** ok: [node02] TASK [container_runtime : include_tasks] *************************************** included: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/registry_auth.yml for node02 TASK [container_runtime : Check for credentials file for registry auth] ******** skipping: [node02] TASK [container_runtime : Create credentials for docker cli registry auth] ***** skipping: [node02] TASK [container_runtime : Create credentials for docker cli registry auth (alternative)] *** skipping: [node02] TASK [container_runtime : stat the docker data dir] **************************** ok: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] PLAY [Determine openshift_version to configure on first master] **************** TASK [Gathering Facts] ********************************************************* ok: [node01] TASK [include_role] ************************************************************ TASK [openshift_version : Use openshift.common.version fact as version to configure if already installed] *** ok: [node01] TASK [openshift_version : include_tasks] *************************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master_rpm_version.yml for node01 TASK [openshift_version : Set rpm version to configure if openshift_pkg_version specified] *** skipping: [node01] TASK [openshift_version : Set openshift_version for rpm installation] ********** included: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/check_available_rpms.yml for node01 TASK [openshift_version : Get available origin version] ************************ ok: [node01] TASK [openshift_version : fail] ************************************************ skipping: [node01] TASK [openshift_version : set_fact] ******************************************** skipping: [node01] TASK [openshift_version : debug] *********************************************** ok: [node01] TASK [openshift_version : set_fact] ******************************************** ok: [node01] TASK [openshift_version : debug] *********************************************** skipping: [node01] TASK [openshift_version : set_fact] ******************************************** skipping: [node01] TASK [openshift_version : debug] *********************************************** ok: [node01] TASK [openshift_version : debug] *********************************************** ok: [node01] TASK [openshift_version : debug] *********************************************** ok: [node01] TASK [openshift_version : debug] *********************************************** ok: [node01] TASK [debug] ******************************************************************* ok: [node01] => { "msg": "openshift_pkg_version set to -3.9.0" } PLAY [Set 
openshift_version for etcd, node, and master hosts] ****************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [set_fact] **************************************************************** ok: [node02] PLAY [Ensure the requested version packages are available.] ******************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [include_role] ************************************************************ TASK [openshift_version : Check openshift_version for rpm installation] ******** included: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/check_available_rpms.yml for node02 TASK [openshift_version : Get available origin version] ************************ ok: [node02] TASK [openshift_version : fail] ************************************************ skipping: [node02] TASK [openshift_version : Fail if rpm version and docker image version are different] *** skipping: [node02] TASK [openshift_version : For an RPM install, abort when the release requested does not match the available version.] *** skipping: [node02] TASK [openshift_version : debug] *********************************************** ok: [node02] => { "openshift_release": "VARIABLE IS NOT DEFINED!" } TASK [openshift_version : debug] *********************************************** ok: [node02] => { "openshift_image_tag": "v3.9.0" } TASK [openshift_version : debug] *********************************************** ok: [node02] => { "openshift_pkg_version": "-3.9.0" } PLAY [Node Install Checkpoint Start] ******************************************* TASK [Set Node install 'In Progress'] ****************************************** ok: [node01] PLAY [Create OpenShift certificates for node hosts] **************************** TASK [openshift_node_certificates : Ensure CA certificate exists on openshift_ca_host] *** ok: [node02 -> node01] TASK [openshift_node_certificates : fail] ************************************** skipping: [node02] TASK [openshift_node_certificates : Check status of node certificates] ********* ok: [node02] => (item=system:node:node02.crt) ok: [node02] => (item=system:node:node02.key) ok: [node02] => (item=system:node:node02.kubeconfig) ok: [node02] => (item=ca.crt) ok: [node02] => (item=server.key) ok: [node02] => (item=server.crt) TASK [openshift_node_certificates : set_fact] ********************************** ok: [node02] TASK [openshift_node_certificates : Create openshift_generated_configs_dir if it does not exist] *** ok: [node02 -> node01] TASK [openshift_node_certificates : find] ************************************** ok: [node02 -> node01] TASK [openshift_node_certificates : Generate the node client config] *********** changed: [node02 -> node01] => (item=node02) TASK [openshift_node_certificates : Generate the node server certificate] ****** changed: [node02 -> node01] => (item=node02) TASK [openshift_node_certificates : Create a tarball of the node config directories] *** changed: [node02 -> node01] TASK [openshift_node_certificates : Retrieve the node config tarballs from the master] *** changed: [node02 -> node01] TASK [openshift_node_certificates : Ensure certificate directory exists] ******* ok: [node02] TASK [openshift_node_certificates : Unarchive the tarball on the node] ********* changed: [node02] TASK [openshift_node_certificates : Delete local temp directory] *************** ok: [node02 -> localhost] TASK [openshift_node_certificates : Copy OpenShift CA to system CA trust] ****** 
ok: [node02] => (item={u'cert': u'/etc/origin/node/ca.crt', u'id': u'openshift'}) PLAY [Disable excluders] ******************************************************* TASK [openshift_excluder : Detecting Atomic Host Operating System] ************* ok: [node02] TASK [openshift_excluder : Debug r_openshift_excluder_enable_docker_excluder] *** ok: [node02] => { "r_openshift_excluder_enable_docker_excluder": true } TASK [openshift_excluder : Debug r_openshift_excluder_enable_openshift_excluder] *** ok: [node02] => { "r_openshift_excluder_enable_openshift_excluder": true } TASK [openshift_excluder : Fail if invalid openshift_excluder_action provided] *** skipping: [node02] TASK [openshift_excluder : Fail if r_openshift_excluder_upgrade_target is not defined] *** skipping: [node02] TASK [openshift_excluder : Include main action task file] ********************** included: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/disable.yml for node02 TASK [openshift_excluder : Include verify_upgrade.yml when upgrading] ********** skipping: [node02] TASK [openshift_excluder : Disable excluders before the upgrade to remove older excluding expressions] *** included: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml for node02 TASK [openshift_excluder : Check for docker-excluder] ************************** ok: [node02] TASK [openshift_excluder : disable docker excluder] **************************** changed: [node02] TASK [openshift_excluder : Check for openshift excluder] *********************** ok: [node02] TASK [openshift_excluder : disable openshift excluder] ************************* changed: [node02] TASK [openshift_excluder : Include install.yml] ******************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/install.yml for node02 TASK [openshift_excluder : Install docker excluder - yum] ********************** skipping: [node02] TASK [openshift_excluder : Install docker excluder - dnf] ********************** skipping: [node02] TASK [openshift_excluder : Install openshift excluder - yum] ******************* skipping: [node02] TASK [openshift_excluder : Install openshift excluder - dnf] ******************* skipping: [node02] TASK [openshift_excluder : set_fact] ******************************************* skipping: [node02] TASK [openshift_excluder : Include exclude.yml] ******************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml for node02 TASK [openshift_excluder : Check for docker-excluder] ************************** ok: [node02] TASK [openshift_excluder : Enable docker excluder] ***************************** changed: [node02] TASK [openshift_excluder : Check for openshift excluder] *********************** ok: [node02] TASK [openshift_excluder : Enable openshift excluder] ************************** changed: [node02] TASK [openshift_excluder : Include unexclude.yml] ****************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml for node02 TASK [openshift_excluder : Check for docker-excluder] ************************** ok: [node02] TASK [openshift_excluder : disable docker excluder] **************************** skipping: [node02] TASK [openshift_excluder : Check for openshift excluder] *********************** ok: [node02] TASK [openshift_excluder : disable openshift excluder] ************************* changed: [node02] PLAY [Evaluate node groups] 
**************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Evaluate oo_containerized_master_nodes] ********************************** skipping: [localhost] => (item=node02) [WARNING]: Could not match supplied host pattern, ignoring: oo_containerized_master_nodes PLAY [Configure containerized nodes] ******************************************* skipping: no hosts matched PLAY [Configure nodes] ********************************************************* TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [openshift_clock : Determine if chrony is installed] ********************** [WARNING]: Consider using yum, dnf or zypper module rather than running rpm changed: [node02] TASK [openshift_clock : Install ntp package] *********************************** skipping: [node02] TASK [openshift_clock : Start and enable ntpd/chronyd] ************************* changed: [node02] TASK [openshift_cloud_provider : Set cloud provider facts] ********************* skipping: [node02] TASK [openshift_cloud_provider : Create cloudprovider config dir] ************** skipping: [node02] TASK [openshift_cloud_provider : include the defined cloud provider files] ***** skipping: [node02] TASK [openshift_node : fail] *************************************************** skipping: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/dnsmasq_install.yml for node02 TASK [openshift_node : Check for NetworkManager service] *********************** ok: [node02] TASK [openshift_node : Set fact using_network_manager] ************************* ok: [node02] TASK [openshift_node : Install dnsmasq] **************************************** ok: [node02] TASK [openshift_node : ensure origin/node directory exists] ******************** ok: [node02] => (item=/etc/origin) changed: [node02] => (item=/etc/origin/node) TASK [openshift_node : Install node-dnsmasq.conf] ****************************** ok: [node02] TASK [openshift_node : include_tasks] ****************************************** skipping: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/dnsmasq.yml for node02 TASK [openshift_node : Install dnsmasq configuration] ************************** ok: [node02] TASK [openshift_node : Deploy additional dnsmasq.conf] ************************* skipping: [node02] TASK [openshift_node : Enable dnsmasq] ***************************************** ok: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/dnsmasq/network-manager.yml for node02 TASK [openshift_node : Install network manager dispatch script] **************** ok: [node02] TASK [openshift_node : Add iptables allow rules] ******************************* ok: [node02] => (item={u'port': u'10250/tcp', u'service': u'Kubernetes kubelet'}) ok: [node02] => (item={u'port': u'80/tcp', u'service': u'http'}) ok: [node02] => (item={u'port': u'443/tcp', u'service': u'https'}) ok: [node02] => (item={u'cond': u'openshift_use_openshift_sdn | bool', u'port': u'4789/udp', u'service': u'OpenShift OVS sdn'}) skipping: [node02] => (item={u'cond': False, u'port': u'179/tcp', u'service': u'Calico BGP Port'}) skipping: [node02] => (item={u'cond': 
False, u'port': u'/tcp', u'service': u'Kubernetes service NodePort TCP'}) skipping: [node02] => (item={u'cond': False, u'port': u'/udp', u'service': u'Kubernetes service NodePort UDP'}) TASK [openshift_node : Remove iptables rules] ********************************** TASK [openshift_node : Add firewalld allow rules] ****************************** skipping: [node02] => (item={u'port': u'10250/tcp', u'service': u'Kubernetes kubelet'}) skipping: [node02] => (item={u'port': u'80/tcp', u'service': u'http'}) skipping: [node02] => (item={u'port': u'443/tcp', u'service': u'https'}) skipping: [node02] => (item={u'cond': u'openshift_use_openshift_sdn | bool', u'port': u'4789/udp', u'service': u'OpenShift OVS sdn'}) skipping: [node02] => (item={u'cond': False, u'port': u'179/tcp', u'service': u'Calico BGP Port'}) skipping: [node02] => (item={u'cond': False, u'port': u'/tcp', u'service': u'Kubernetes service NodePort TCP'}) skipping: [node02] => (item={u'cond': False, u'port': u'/udp', u'service': u'Kubernetes service NodePort UDP'}) TASK [openshift_node : Remove firewalld allow rules] *************************** TASK [openshift_node : Update journald config] ********************************* included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/journald.yml for node02 TASK [openshift_node : Checking for journald.conf] ***************************** ok: [node02] TASK [openshift_node : Create journald persistence directories] **************** ok: [node02] TASK [openshift_node : Update journald setup] ********************************** ok: [node02] => (item={u'var': u'Storage', u'val': u'persistent'}) ok: [node02] => (item={u'var': u'Compress', u'val': True}) ok: [node02] => (item={u'var': u'SyncIntervalSec', u'val': u'1s'}) ok: [node02] => (item={u'var': u'RateLimitInterval', u'val': u'1s'}) ok: [node02] => (item={u'var': u'RateLimitBurst', u'val': 10000}) ok: [node02] => (item={u'var': u'SystemMaxUse', u'val': u'8G'}) ok: [node02] => (item={u'var': u'SystemKeepFree', u'val': u'20%'}) ok: [node02] => (item={u'var': u'SystemMaxFileSize', u'val': u'10M'}) ok: [node02] => (item={u'var': u'MaxRetentionSec', u'val': u'1month'}) ok: [node02] => (item={u'var': u'MaxFileSec', u'val': u'1day'}) ok: [node02] => (item={u'var': u'ForwardToSyslog', u'val': False}) ok: [node02] => (item={u'var': u'ForwardToWall', u'val': False}) TASK [openshift_node : Restart journald] *************************************** skipping: [node02] TASK [openshift_node : Disable swap] ******************************************* ok: [node02] TASK [openshift_node : include node installer] ********************************* included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/install.yml for node02 TASK [openshift_node : Install Node package, sdn-ovs, conntrack packages] ****** ok: [node02] => (item={u'name': u'origin-node-3.9.0'}) ok: [node02] => (item={u'name': u'origin-sdn-ovs-3.9.0', u'install': True}) ok: [node02] => (item={u'name': u'conntrack-tools'}) TASK [openshift_node : Pre-pull node image when containerized] ***************** skipping: [node02] TASK [openshift_node : Restart cri-o] ****************************************** changed: [node02] TASK [openshift_node : restart NetworkManager to ensure resolv.conf is present] *** skipping: [node02] TASK [openshift_node : sysctl] ************************************************* ok: [node02] TASK [openshift_node : include_tasks] ****************************************** included: 
/usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml for node02 TASK [openshift_node : Check for credentials file for registry auth] *********** skipping: [node02] TASK [openshift_node : Create credentials for registry auth] ******************* skipping: [node02] TASK [openshift_node : Create credentials for registry auth (alternative)] ***** skipping: [node02] TASK [openshift_node : Setup ro mount of /root/.docker for containerized hosts] *** skipping: [node02] TASK [openshift_node : include standard node config] *************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/config.yml for node02 TASK [openshift_node : Install the systemd units] ****************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/systemd_units.yml for node02 TASK [openshift_node : Install Node service file] ****************************** ok: [node02] TASK [openshift_node : include node deps docker service file] ****************** skipping: [node02] TASK [openshift_node : include ovs service environment file] ******************* skipping: [node02] TASK [openshift_node : include_tasks] ****************************************** skipping: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/config/configure-node-settings.yml for node02 TASK [openshift_node : Configure Node settings] ******************************** ok: [node02] => (item={u'regex': u'^OPTIONS=', u'line': u'OPTIONS=--loglevel=2 '}) ok: [node02] => (item={u'regex': u'^CONFIG_FILE=', u'line': u'CONFIG_FILE=/etc/origin/node/node-config.yaml'}) ok: [node02] => (item={u'regex': u'^IMAGE_VERSION=', u'line': u'IMAGE_VERSION=v3.9.0'}) TASK [openshift_node : include_tasks] ****************************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/config/configure-proxy-settings.yml for node02 TASK [openshift_node : Configure Proxy Settings] ******************************* skipping: [node02] => (item={u'regex': u'^HTTP_PROXY=', u'line': u'HTTP_PROXY='}) skipping: [node02] => (item={u'regex': u'^HTTPS_PROXY=', u'line': u'HTTPS_PROXY='}) skipping: [node02] => (item={u'regex': u'^NO_PROXY=', u'line': u'NO_PROXY=[],172.30.0.0/16,10.128.0.0/14'}) TASK [openshift_node : Pull container images] ********************************** skipping: [node02] TASK [openshift_node : Start and enable openvswitch service] ******************* skipping: [node02] TASK [openshift_node : set_fact] *********************************************** ok: [node02] TASK [openshift_node : file] *************************************************** skipping: [node02] TASK [openshift_node : Create the Node config] ********************************* changed: [node02] TASK [openshift_node : Configure Node Environment Variables] ******************* TASK [openshift_node : Configure AWS Cloud Provider Settings] ****************** skipping: [node02] => (item=None) skipping: [node02] => (item=None) TASK [openshift_node : Wait for master API to become available before proceeding] *** skipping: [node02] TASK [openshift_node : Start and enable node dep] ****************************** skipping: [node02] TASK [openshift_node : Start and enable node] ********************************** ok: [node02] TASK [openshift_node : Dump logs from node service if it failed] *************** skipping: [node02] TASK [openshift_node : Abort if node failed to start] 
************************** skipping: [node02] TASK [openshift_node : set_fact] *********************************************** ok: [node02] TASK [openshift_node : NFS storage plugin configuration] *********************** included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/storage_plugins/nfs.yml for node02 TASK [openshift_node : Install NFS storage plugin dependencies] **************** ok: [node02] TASK [openshift_node : Check for existence of nfs sebooleans] ****************** ok: [node02] => (item=virt_use_nfs) ok: [node02] => (item=virt_sandbox_use_nfs) TASK [openshift_node : Set seboolean to allow nfs storage plugin access from containers] *** ok: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-29 07:33:18.840601', '_ansible_no_log': False, u'stdout': u'virt_use_nfs --> on', u'cmd': [u'getsebool', u'virt_use_nfs'], u'rc': 0, 'item': u'virt_use_nfs', u'delta': u'0:00:00.007994', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_use_nfs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_nfs --> on'], 'failed_when_result': False, u'start': u'2018-04-29 07:33:18.832607', '_ansible_ignore_errors': None, 'failed': False}) skipping: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-29 07:33:20.273053', '_ansible_no_log': False, u'stdout': u'virt_use_nfs --> on', u'cmd': [u'getsebool', u'virt_sandbox_use_nfs'], u'rc': 0, 'item': u'virt_sandbox_use_nfs', u'delta': u'0:00:00.020569', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_sandbox_use_nfs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_nfs --> on'], 'failed_when_result': False, u'start': u'2018-04-29 07:33:20.252484', '_ansible_ignore_errors': None, 'failed': False}) TASK [openshift_node : Set seboolean to allow nfs storage plugin access from containers (python 3)] *** skipping: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-29 07:33:18.840601', '_ansible_no_log': False, u'stdout': u'virt_use_nfs --> on', u'cmd': [u'getsebool', u'virt_use_nfs'], u'rc': 0, 'item': u'virt_use_nfs', u'delta': u'0:00:00.007994', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_use_nfs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_nfs --> on'], 'failed_when_result': False, u'start': u'2018-04-29 07:33:18.832607', '_ansible_ignore_errors': None, 'failed': False}) skipping: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-29 07:33:20.273053', '_ansible_no_log': False, u'stdout': u'virt_use_nfs --> on', u'cmd': [u'getsebool', u'virt_sandbox_use_nfs'], u'rc': 0, 'item': u'virt_sandbox_use_nfs', u'delta': u'0:00:00.020569', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_sandbox_use_nfs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_nfs --> on'], 
'failed_when_result': False, u'start': u'2018-04-29 07:33:20.252484', '_ansible_ignore_errors': None, 'failed': False}) TASK [openshift_node : GlusterFS storage plugin configuration] ***************** included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/storage_plugins/glusterfs.yml for node02 TASK [openshift_node : Install GlusterFS storage plugin dependencies] ********** ok: [node02] TASK [openshift_node : Check for existence of fusefs sebooleans] *************** ok: [node02] => (item=virt_use_fusefs) ok: [node02] => (item=virt_sandbox_use_fusefs) TASK [openshift_node : Set seboolean to allow gluster storage plugin access from containers] *** ok: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-29 07:33:26.254472', '_ansible_no_log': False, u'stdout': u'virt_use_fusefs --> on', u'cmd': [u'getsebool', u'virt_use_fusefs'], u'rc': 0, 'item': u'virt_use_fusefs', u'delta': u'0:00:00.014308', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_use_fusefs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_fusefs --> on'], 'failed_when_result': False, u'start': u'2018-04-29 07:33:26.240164', '_ansible_ignore_errors': None, 'failed': False}) ok: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-29 07:33:27.367727', '_ansible_no_log': False, u'stdout': u'virt_sandbox_use_fusefs --> on', u'cmd': [u'getsebool', u'virt_sandbox_use_fusefs'], u'rc': 0, 'item': u'virt_sandbox_use_fusefs', u'delta': u'0:00:00.009203', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_sandbox_use_fusefs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_sandbox_use_fusefs --> on'], 'failed_when_result': False, u'start': u'2018-04-29 07:33:27.358524', '_ansible_ignore_errors': None, 'failed': False}) TASK [openshift_node : Set seboolean to allow gluster storage plugin access from containers (python 3)] *** skipping: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-29 07:33:26.254472', '_ansible_no_log': False, u'stdout': u'virt_use_fusefs --> on', u'cmd': [u'getsebool', u'virt_use_fusefs'], u'rc': 0, 'item': u'virt_use_fusefs', u'delta': u'0:00:00.014308', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_use_fusefs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_fusefs --> on'], 'failed_when_result': False, u'start': u'2018-04-29 07:33:26.240164', '_ansible_ignore_errors': None, 'failed': False}) skipping: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-29 07:33:27.367727', '_ansible_no_log': False, u'stdout': u'virt_sandbox_use_fusefs --> on', u'cmd': [u'getsebool', u'virt_sandbox_use_fusefs'], u'rc': 0, 'item': u'virt_sandbox_use_fusefs', u'delta': u'0:00:00.009203', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_sandbox_use_fusefs', u'removes': None, u'warn': True, 
u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_sandbox_use_fusefs --> on'], 'failed_when_result': False, u'start': u'2018-04-29 07:33:27.358524', '_ansible_ignore_errors': None, 'failed': False}) TASK [openshift_node : Ceph storage plugin configuration] ********************** included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/storage_plugins/ceph.yml for node02 TASK [openshift_node : Install Ceph storage plugin dependencies] *************** ok: [node02] TASK [openshift_node : iSCSI storage plugin configuration] ********************* included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/storage_plugins/iscsi.yml for node02 TASK [openshift_node : Install iSCSI storage plugin dependencies] ************** ok: [node02] => (item=iscsi-initiator-utils) ok: [node02] => (item=device-mapper-multipath) TASK [openshift_node : restart services] *************************************** ok: [node02] => (item=multipathd) ok: [node02] => (item=rpcbind) TASK [openshift_node : Template multipath configuration] *********************** changed: [node02] TASK [openshift_node : Enable multipath] *************************************** changed: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/config/workaround-bz1331590-ovs-oom-fix.yml for node02 TASK [openshift_node : Create OpenvSwitch service.d directory] ***************** ok: [node02] TASK [openshift_node : Install OpenvSwitch service OOM fix] ******************** ok: [node02] TASK [tuned : Check for tuned package] ***************************************** ok: [node02] TASK [tuned : Set tuned OpenShift variables] *********************************** ok: [node02] TASK [tuned : Ensure directory structure exists] ******************************* ok: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'state': 'directory', 'ctime': 1524661407.2267268, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1524661407.2267268, 'owner': 'root', 'path': u'openshift', 'size': 24, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) ok: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'state': 'directory', 'ctime': 1524661407.2257268, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1524661407.2257268, 'owner': 'root', 'path': u'openshift-control-plane', 'size': 24, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) ok: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'state': 'directory', 'ctime': 1524661407.2267268, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1524661407.2267268, 'owner': 'root', 'path': u'openshift-node', 'size': 24, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) skipping: [node02] => (item={'src': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates/recommend.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'serole': 'object_r', 'ctime': 1524661407.2267268, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1523891799.0, 'owner': 'root', 'path': u'recommend.conf', 'size': 268, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) skipping: [node02] => (item={'src': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates/openshift/tuned.conf', 'group': u'root', 
'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'serole': 'object_r', 'ctime': 1524661407.2267268, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1523891799.0, 'owner': 'root', 'path': u'openshift/tuned.conf', 'size': 593, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) skipping: [node02] => (item={'src': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates/openshift-control-plane/tuned.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'serole': 'object_r', 'ctime': 1524661407.2257268, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1523891799.0, 'owner': 'root', 'path': u'openshift-control-plane/tuned.conf', 'size': 744, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) skipping: [node02] => (item={'src': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates/openshift-node/tuned.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'serole': 'object_r', 'ctime': 1524661407.2267268, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1523891799.0, 'owner': 'root', 'path': u'openshift-node/tuned.conf', 'size': 135, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) TASK [tuned : Ensure files are populated from templates] *********************** skipping: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'state': 'directory', 'ctime': 1524661407.2267268, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1524661407.2267268, 'owner': 'root', 'path': u'openshift', 'size': 24, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) skipping: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'state': 'directory', 'ctime': 1524661407.2257268, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1524661407.2257268, 'owner': 'root', 'path': u'openshift-control-plane', 'size': 24, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) skipping: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'state': 'directory', 'ctime': 1524661407.2267268, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1524661407.2267268, 'owner': 'root', 'path': u'openshift-node', 'size': 24, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) ok: [node02] => (item={'src': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates/recommend.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'serole': 'object_r', 'ctime': 1524661407.2267268, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1523891799.0, 'owner': 'root', 'path': u'recommend.conf', 'size': 268, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) ok: [node02] => (item={'src': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates/openshift/tuned.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'serole': 'object_r', 'ctime': 1524661407.2267268, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1523891799.0, 'owner': 'root', 'path': u'openshift/tuned.conf', 'size': 593, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) ok: [node02] => (item={'src': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates/openshift-control-plane/tuned.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 
'system_u', 'serole': 'object_r', 'ctime': 1524661407.2257268, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1523891799.0, 'owner': 'root', 'path': u'openshift-control-plane/tuned.conf', 'size': 744, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) ok: [node02] => (item={'src': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates/openshift-node/tuned.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'system_u', 'serole': 'object_r', 'ctime': 1524661407.2267268, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1523891799.0, 'owner': 'root', 'path': u'openshift-node/tuned.conf', 'size': 135, 'root': u'/usr/share/ansible/openshift-ansible/roles/tuned/templates', 'setype': 'usr_t'}) TASK [tuned : Make tuned use the recommended tuned profile on restart] ********* changed: [node02] => (item=/etc/tuned/active_profile) ok: [node02] => (item=/etc/tuned/profile_mode) TASK [tuned : Restart tuned service] ******************************************* changed: [node02] TASK [nickhammond.logrotate : nickhammond.logrotate | Install logrotate] ******* ok: [node02] TASK [nickhammond.logrotate : nickhammond.logrotate | Setup logrotate.d scripts] *** RUNNING HANDLER [openshift_node : restart node] ******************************** changed: [node02] PLAY [create additional node network plugin groups] **************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [group_by] **************************************************************** ok: [node02] TASK [group_by] **************************************************************** ok: [node02] TASK [group_by] **************************************************************** ok: [node02] TASK [group_by] **************************************************************** ok: [node02] TASK [group_by] **************************************************************** ok: [node02] [WARNING]: Could not match supplied host pattern, ignoring: oo_nodes_use_flannel [WARNING]: Could not match supplied host pattern, ignoring: oo_nodes_use_calico [WARNING]: Could not match supplied host pattern, ignoring: oo_nodes_use_contiv [WARNING]: Could not match supplied host pattern, ignoring: oo_nodes_use_kuryr PLAY [etcd_client node config] ************************************************* skipping: no hosts matched PLAY [Additional node config] ************************************************** skipping: no hosts matched PLAY [Additional node config] ************************************************** skipping: no hosts matched [WARNING]: Could not match supplied host pattern, ignoring: oo_nodes_use_nuage PLAY [Additional node config] ************************************************** skipping: no hosts matched PLAY [Configure Contiv masters] ************************************************ TASK [Gathering Facts] ********************************************************* ok: [node01] PLAY [Configure rest of Contiv nodes] ****************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] ok: [node01] PLAY [Configure Kuryr node] **************************************************** skipping: no hosts matched PLAY [Additional node config] ************************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [openshift_manage_node : Wait for master API to become available before proceeding] *** skipping: [node02] 
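The NFS and GlusterFS storage-plugin tasks and the tuned tasks above boil down to a few host-level settings. A minimal sketch for spot-checking them on node02 by hand (illustrative only; the boolean names and the active_profile path are the ones shown in the task output, everything else is standard SELinux/tuned tooling):
# SELinux booleans the storage plugins need; each should report "--> on"
for b in virt_use_nfs virt_sandbox_use_nfs virt_use_fusefs virt_sandbox_use_fusefs; do
    getsebool "$b"
done
# setsebool -P virt_use_nfs on    # only needed if a boolean reports "off"
# tuned should now be on the recommended profile
cat /etc/tuned/active_profile
tuned-adm active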
TASK [openshift_manage_node : Wait for Node Registration] ********************** ok: [node02 -> node01] TASK [openshift_manage_node : include_tasks] *********************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_manage_node/tasks/config.yml for node02 TASK [openshift_manage_node : Set node schedulability] ************************* ok: [node02 -> node01] TASK [openshift_manage_node : Label nodes] ************************************* changed: [node02 -> node01] TASK [Create group for deployment type] **************************************** ok: [node02] PLAY [Re-enable excluder if it was previously enabled] ************************* TASK [openshift_excluder : Detecting Atomic Host Operating System] ************* ok: [node02] TASK [openshift_excluder : Debug r_openshift_excluder_enable_docker_excluder] *** ok: [node02] => { "r_openshift_excluder_enable_docker_excluder": true } TASK [openshift_excluder : Debug r_openshift_excluder_enable_openshift_excluder] *** ok: [node02] => { "r_openshift_excluder_enable_openshift_excluder": true } TASK [openshift_excluder : Fail if invalid openshift_excluder_action provided] *** skipping: [node02] TASK [openshift_excluder : Fail if r_openshift_excluder_upgrade_target is not defined] *** skipping: [node02] TASK [openshift_excluder : Include main action task file] ********************** included: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/enable.yml for node02 TASK [openshift_excluder : Install excluders] ********************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/install.yml for node02 TASK [openshift_excluder : Install docker excluder - yum] ********************** skipping: [node02] TASK [openshift_excluder : Install docker excluder - dnf] ********************** skipping: [node02] TASK [openshift_excluder : Install openshift excluder - yum] ******************* skipping: [node02] TASK [openshift_excluder : Install openshift excluder - dnf] ******************* skipping: [node02] TASK [openshift_excluder : set_fact] ******************************************* skipping: [node02] TASK [openshift_excluder : Enable excluders] *********************************** included: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml for node02 TASK [openshift_excluder : Check for docker-excluder] ************************** ok: [node02] TASK [openshift_excluder : Enable docker excluder] ***************************** changed: [node02] TASK [openshift_excluder : Check for openshift excluder] *********************** ok: [node02] TASK [openshift_excluder : Enable openshift excluder] ************************** changed: [node02] PLAY [Node Install Checkpoint End] ********************************************* TASK [Set Node install 'Complete'] ********************************************* ok: [node01] PLAY RECAP ********************************************************************* localhost : ok=25 changed=0 unreachable=0 failed=0 node01 : ok=42 changed=0 unreachable=0 failed=0 node02 : ok=208 changed=28 unreachable=0 failed=0 INSTALLER STATUS *************************************************************** Initialization : Complete (0:01:16) Node Install : Complete (0:03:55) grep: inventory: No such file or directory PLAY [new_nodes] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [Restart openvswitch service] 
********************************************* changed: [node02] PLAY [nodes, new_nodes] ******************************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] ok: [node01] TASK [replace] ***************************************************************** changed: [node01] changed: [node02] TASK [replace] ***************************************************************** changed: [node01] changed: [node02] TASK [service] ***************************************************************** changed: [node01] changed: [node02] PLAY RECAP ********************************************************************* node01 : ok=4 changed=3 unreachable=0 failed=0 node02 : ok=6 changed=4 unreachable=0 failed=0 2018/04/29 07:35:21 Waiting for host: 192.168.66.101:22 2018/04/29 07:35:21 Connected to tcp://192.168.66.101:22 2018/04/29 07:35:23 Waiting for host: 192.168.66.101:22 2018/04/29 07:35:23 Connected to tcp://192.168.66.101:22 Warning: Permanently added '[127.0.0.1]:33064' (ECDSA) to the list of known hosts. Warning: Permanently added '[127.0.0.1]:33064' (ECDSA) to the list of known hosts. Cluster "node01:8443" set. Cluster "node01:8443" set. ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep -v Ready + '[' -n '' ']' + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 3d v1.9.1+a0ce1bc657 node02 Ready 1m v1.9.1+a0ce1bc657 + make cluster-sync ./cluster/build.sh Building ... sha256:0e817e41f9750e44335dde1be5cb34809abe48c8add43baf165907418e2e75ce go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " sha256:0e817e41f9750e44335dde1be5cb34809abe48c8add43baf165907418e2e75ce go version go1.10 linux/amd64 go version go1.10 linux/amd64 Compiling tests... 
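The node-readiness gate that ran just before make cluster-sync is easier to read unrolled. A rough reconstruction of the check from the trace above (the notready variable name is mine; the real script goes through cluster/kubectl.sh rather than plain kubectl):
notready=$(kubectl get nodes --no-headers | grep -v Ready)
if [ -n "$notready" ]; then
    echo "Nodes are not ready:"
    echo "$notready"
    exit 1
fi
echo "Nodes are ready:"
kubectl get nodes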
compiled tests.test hack/build-docker.sh build sending incremental file list ./ Dockerfile kubernetes.repo sent 854 bytes received 53 bytes 1814.00 bytes/sec total size is 1167 speedup is 1.29 Sending build context to Docker daemon 36.12 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 65d6d48cdb35 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> e1ade8663337 Step 5/8 : USER 1001 ---> Using cache ---> 2ce44d6f372a Step 6/8 : COPY virt-controller /virt-controller ---> d8c08b794384 Removing intermediate container 79f0fe280f27 Step 7/8 : ENTRYPOINT /virt-controller ---> Running in 2f9964b80db8 ---> 3e2e8306e66b Removing intermediate container 2f9964b80db8 Step 8/8 : LABEL "kubevirt-functional-tests-openshift-release-crio1" '' "virt-controller" '' ---> Running in 9bca3b263f89 ---> 122c49f0fe84 Removing intermediate container 9bca3b263f89 Successfully built 122c49f0fe84 sending incremental file list ./ Dockerfile entrypoint.sh kubevirt-sudo libvirtd.sh sh.sh sock-connector sent 3502 bytes received 129 bytes 7262.00 bytes/sec total size is 5953 speedup is 1.64 Sending build context to Docker daemon 38.06 MB Step 1/14 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/14 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d4ddb23dff45 Step 3/14 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 142a2ba860cf Step 4/14 : COPY sock-connector /sock-connector ---> Using cache ---> 02569da61faa Step 5/14 : COPY sh.sh /sh.sh ---> Using cache ---> 47d4a51575e2 Step 6/14 : COPY virt-launcher /virt-launcher ---> 898668916b96 Removing intermediate container 19332a4252a8 Step 7/14 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> e8af16a4bfb7 Removing intermediate container 5512490b10a5 Step 8/14 : RUN chmod 0640 /etc/sudoers.d/kubevirt ---> Running in aa5bd7c564a9 ---> 537a3c643d4e Removing intermediate container aa5bd7c564a9 Step 9/14 : RUN rm -f /libvirtd.sh ---> Running in 8dce752c8e62 ---> 179a0c07ac31 Removing intermediate container 8dce752c8e62 Step 10/14 : COPY libvirtd.sh /libvirtd.sh ---> 0c8dd02d527a Removing intermediate container 140f10cf6687 Step 11/14 : RUN chmod a+x /libvirtd.sh ---> Running in 057305561e20 ---> 14ab9c56c94b Removing intermediate container 057305561e20 Step 12/14 : COPY entrypoint.sh /entrypoint.sh ---> 8e1c13345608 Removing intermediate container 410448893f28 Step 13/14 : ENTRYPOINT /entrypoint.sh ---> Running in c84146c77acd ---> 201f3eb248e6 Removing intermediate container c84146c77acd Step 14/14 : LABEL "kubevirt-functional-tests-openshift-release-crio1" '' "virt-launcher" '' ---> Running in 63b28a25d2ca ---> a65c540682a8 Removing intermediate container 63b28a25d2ca Successfully built a65c540682a8 sending incremental file list ./ Dockerfile sent 585 bytes received 34 bytes 1238.00 bytes/sec total size is 775 speedup is 1.25 Sending build context to Docker daemon 36.68 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/5 : COPY virt-handler /virt-handler ---> 237f638d3a74 Removing intermediate container 6187a9f19735 Step 4/5 : ENTRYPOINT /virt-handler ---> Running in 841adfaed2e5 ---> 5bfefc432f43 Removing intermediate container 841adfaed2e5 Step 5/5 :
LABEL "kubevirt-functional-tests-openshift-release-crio1" '' "virt-handler" '' ---> Running in 1e6c882c5d3e ---> 0df9d69aba2d Removing intermediate container 1e6c882c5d3e Successfully built 0df9d69aba2d sending incremental file list ./ Dockerfile sent 646 bytes received 34 bytes 1360.00 bytes/sec total size is 876 speedup is 1.29 Sending build context to Docker daemon 36.81 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 2eeb55f39191 Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 56cea32a45d4 Step 5/8 : USER 1001 ---> Using cache ---> d121920c238b Step 6/8 : COPY virt-api /virt-api ---> 5316b8e7cf4a Removing intermediate container ad65443ff2d0 Step 7/8 : ENTRYPOINT /virt-api ---> Running in b275f3c86a68 ---> 78a18cb6f926 Removing intermediate container b275f3c86a68 Step 8/8 : LABEL "kubevirt-functional-tests-openshift-release-crio1" '' "virt-api" '' ---> Running in 71747338d1b3 ---> 3d39c586f7f8 Removing intermediate container 71747338d1b3 Successfully built 3d39c586f7f8 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/cmd/iscsi-demo-target-tgtd ./ Dockerfile run-tgt.sh sent 2185 bytes received 53 bytes 4476.00 bytes/sec total size is 3992 speedup is 1.78 Sending build context to Docker daemon 6.656 kB Step 1/10 : FROM fedora:27 ---> 9110ae7f579f Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/10 : ENV container docker ---> Using cache ---> 32cab959eac8 Step 4/10 : RUN dnf -y install scsi-target-utils bzip2 e2fsprogs ---> Using cache ---> c2339817cfe0 Step 5/10 : RUN mkdir -p /images ---> Using cache ---> a19645b68794 Step 6/10 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/1-alpine.img ---> Using cache ---> 3f0fa7f50785 Step 7/10 : ADD run-tgt.sh / ---> Using cache ---> 35ac6b299ab7 Step 8/10 : EXPOSE 3260 ---> Using cache ---> 259db1618b21 Step 9/10 : CMD /run-tgt.sh ---> Using cache ---> 4c9f18dec05a Step 10/10 : LABEL "iscsi-demo-target-tgtd" '' "kubevirt-functional-tests-openshift-release-crio1" '' ---> Running in ebf2e0f54fd8 ---> 6276c16a9c7a Removing intermediate container ebf2e0f54fd8 Successfully built 6276c16a9c7a sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/cmd/vm-killer ./ Dockerfile sent 610 bytes received 34 bytes 1288.00 bytes/sec total size is 797 speedup is 1.24 Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/5 : ENV container docker ---> Using cache ---> 32cab959eac8 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 391fa00b27f9 Step 5/5 : LABEL "kubevirt-functional-tests-openshift-release-crio1" '' "vm-killer" '' ---> Running in 0aea962f9db2 ---> e256bc673e0f Removing intermediate container 0aea962f9db2 Successfully built e256bc673e0f sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/cmd/registry-disk-v1alpha ./ Dockerfile entry-point.sh sent 1566 bytes received 53 bytes 3238.00 bytes/sec total size is 2542 
speedup is 1.57 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> bcec0ae8107e Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 6696837acee7 Step 3/7 : ENV container docker ---> Using cache ---> 2dd2b1a02be6 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> dd3c4950b5c8 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> d221e0eb5770 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 6506e61a9f41 Step 7/7 : LABEL "kubevirt-functional-tests-openshift-release-crio1" '' "registry-disk-v1alpha" '' ---> Running in 40bd7a25127b ---> dfe7c1d61642 Removing intermediate container 40bd7a25127b Successfully built dfe7c1d61642 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/cmd/cirros-registry-disk-demo ./ Dockerfile sent 630 bytes received 34 bytes 1328.00 bytes/sec total size is 825 speedup is 1.24 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33063/kubevirt/registry-disk-v1alpha:devel ---> dfe7c1d61642 Step 2/4 : MAINTAINER "David Vossel" \ ---> Running in a353009fb80a ---> 334bc3815446 Removing intermediate container a353009fb80a Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Running in eba54aa7de3f % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 12.1M 100 12.1M 0 0 3279k 0 0:00:03 0:00:03 --:--:-- 3279k ---> 62d86c6f7e6b Removing intermediate container eba54aa7de3f Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-openshift-release-crio1" '' ---> Running in e4a0467ea3b6 ---> 690340827df1 Removing intermediate container e4a0467ea3b6 Successfully built 690340827df1 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/cmd/fedora-cloud-registry-disk-demo ./ Dockerfile sent 677 bytes received 34 bytes 1422.00 bytes/sec total size is 926 speedup is 1.30 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33063/kubevirt/registry-disk-v1alpha:devel ---> dfe7c1d61642 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Running in e073176b157a ---> 4d54c2d63e7e Removing intermediate container e073176b157a Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Running in 0761522d4684 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 221M 100 221M 0 0 14.7M 0 0:00:15 0:00:15 --:--:-- 26.7M ---> d4fa7a7a0815 Removing intermediate container 0761522d4684 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-openshift-release-crio1" '' ---> Running in 3b864b20c480 ---> e19e8ab0c706 Removing intermediate container 3b864b20c480 Successfully built e19e8ab0c706 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/cmd/alpine-registry-disk-demo ./ Dockerfile sent 639 bytes received 34 bytes 1346.00 bytes/sec total size is 866 speedup is 1.29 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33063/kubevirt/registry-disk-v1alpha:devel ---> dfe7c1d61642 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 4d54c2d63e7e Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Running in 1059c25a91fe % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 37.0M 100 37.0M 0 0 5616k 0 0:00:06 0:00:06 --:--:-- 6295k ---> 62507440edf5 Removing intermediate container 1059c25a91fe Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-openshift-release-crio1" '' ---> Running in db52dc29f64b ---> 9f2069ae3270 Removing intermediate container db52dc29f64b Successfully built 9f2069ae3270 sending incremental file list ./ Dockerfile sent 660 bytes received 34 bytes 1388.00 bytes/sec total size is 918 speedup is 1.32 Sending build context to Docker daemon 33.96 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 6e6e1b7931e0 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 9d27e69a25f2 Step 5/8 : USER 1001 ---> Using cache ---> 1760a8e197af Step 6/8 : COPY subresource-access-test /subresource-access-test ---> 2ba6e30e999b Removing intermediate container ba9cfbda93d6 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in db5aaff401d3 ---> c2125d799c3a Removing intermediate container db5aaff401d3 Step 8/8 : LABEL "kubevirt-functional-tests-openshift-release-crio1" '' "subresource-access-test" '' ---> Running in 216a85508044 ---> 46dbed6289f2 Removing intermediate container 216a85508044 Successfully built 46dbed6289f2 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/cmd/winrmcli ./ Dockerfile
sent 773 bytes received 34 bytes 1614.00 bytes/sec total size is 1098 speedup is 1.36 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/9 : ENV container docker ---> Using cache ---> 32cab959eac8 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 8e034c77f534 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 28ec1d482013 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> db78d0286f58 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 7ebe54e98be4 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> a3b04c1816f5 Step 9/9 : LABEL "kubevirt-functional-tests-openshift-release-crio1" '' "winrmcli" '' ---> Running in 65f582c2c7eb ---> 059ebd80e859 Removing intermediate container 65f582c2c7eb Successfully built 059ebd80e859 hack/build-docker.sh push The push refers to a repository [localhost:33063/kubevirt/virt-controller] ea6d0b8e9956: Preparing 52069b1f5033: Preparing 39bae602f753: Preparing 52069b1f5033: Pushed ea6d0b8e9956: Pushed 39bae602f753: Pushed devel: digest: sha256:4e6d8c0ae307b2b32a82704b62c8a22d771be058b1b5b684ea29f2d592bd8e02 size: 948 The push refers to a repository [localhost:33063/kubevirt/virt-launcher] 443d59543cb8: Preparing 4ea00b83d578: Preparing 4ea00b83d578: Preparing 6da7bb09ddae: Preparing d8c053d5365e: Preparing 2b01358ac514: Preparing 61b59acafe58: Preparing 4ebc38848be0: Preparing b9fd8c21001d: Preparing 4d2f0529ab56: Preparing 530cc55618cd: Preparing 34fa414dfdf6: Preparing a1359dc556dd: Preparing 61b59acafe58: Waiting 490c7c373332: Preparing 4b440db36f72: Preparing 4ebc38848be0: Waiting 39bae602f753: Preparing 4d2f0529ab56: Waiting b9fd8c21001d: Waiting a1359dc556dd: Waiting 39bae602f753: Waiting 4b440db36f72: Waiting 490c7c373332: Waiting 34fa414dfdf6: Waiting 6da7bb09ddae: Pushed 4ea00b83d578: Pushed 443d59543cb8: Pushed d8c053d5365e: Pushed 2b01358ac514: Pushed 4ebc38848be0: Pushed b9fd8c21001d: Pushed 530cc55618cd: Pushed 34fa414dfdf6: Pushed 490c7c373332: Pushed a1359dc556dd: Pushed 39bae602f753: Mounted from kubevirt/virt-controller 4d2f0529ab56: Pushed 61b59acafe58: Pushed 4b440db36f72: Pushed devel: digest: sha256:a3b900abcba038229f01b5ed7969b04a8f3ffcb66f952af75262d8b880aee6ae size: 3653 The push refers to a repository [localhost:33063/kubevirt/virt-handler] c292ee3098d5: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-launcher c292ee3098d5: Pushed devel: digest: sha256:f1e4827ce5ff99b05d98c62232f8aacd256902fc0040c5c9c9a656b6e541841a size: 740 The push refers to a repository [localhost:33063/kubevirt/virt-api] 9e6b95a6edb0: Preparing 86b4b25303b4: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-handler 86b4b25303b4: Pushed 9e6b95a6edb0: Pushed devel: digest: sha256:cdd8969e72c287b94bfb26268e52db4ae1805d030269483e52a146c248d4018b size: 948 The push refers to a repository [localhost:33063/kubevirt/iscsi-demo-target-tgtd] 80220be9fed7: Preparing 89fef61f2c06: Preparing b18a27986676: Preparing db8a56c06e31: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-api b18a27986676: Pushed 80220be9fed7: Pushed 89fef61f2c06: Pushed db8a56c06e31: Pushed devel: digest: 
sha256:165b483903ac0d54f32de0a38848dee1f1f2578b79d19d3a1583f145a56ee2d5 size: 1368 The push refers to a repository [localhost:33063/kubevirt/vm-killer] 040d3361950b: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/iscsi-demo-target-tgtd 040d3361950b: Pushed devel: digest: sha256:161c8522909d91274a998a270ecb864534b9f72962ee80e56429aba9c11ec4f0 size: 740 The push refers to a repository [localhost:33063/kubevirt/registry-disk-v1alpha] 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 4cd98e29acca: Pushed 9beeb9a18439: Pushed 6709b2da72b8: Pushed devel: digest: sha256:cccf1a0fa063b322ce8800f115891844b594e8de745c87790bfc2998bc023c78 size: 948 The push refers to a repository [localhost:33063/kubevirt/cirros-registry-disk-demo] 4ebab680c8e4: Preparing 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 6709b2da72b8: Mounted from kubevirt/registry-disk-v1alpha 4cd98e29acca: Mounted from kubevirt/registry-disk-v1alpha 9beeb9a18439: Mounted from kubevirt/registry-disk-v1alpha 4ebab680c8e4: Pushed devel: digest: sha256:d0bec9db16c2e4117209e2470e8e911fed8b2912edef15d3c8082bf617bbeafa size: 1160 The push refers to a repository [localhost:33063/kubevirt/fedora-cloud-registry-disk-demo] 81f7c16a1519: Preparing 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 4cd98e29acca: Mounted from kubevirt/cirros-registry-disk-demo 6709b2da72b8: Mounted from kubevirt/cirros-registry-disk-demo 9beeb9a18439: Mounted from kubevirt/cirros-registry-disk-demo 81f7c16a1519: Pushed devel: digest: sha256:5938b990652ba3513cb6d80b7886e39e6adc7a9784c2b7193d5e6a3372fad33d size: 1161 The push refers to a repository [localhost:33063/kubevirt/alpine-registry-disk-demo] f93bc443981c: Preparing 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 4cd98e29acca: Mounted from kubevirt/fedora-cloud-registry-disk-demo 6709b2da72b8: Mounted from kubevirt/fedora-cloud-registry-disk-demo 9beeb9a18439: Mounted from kubevirt/fedora-cloud-registry-disk-demo f93bc443981c: Pushed devel: digest: sha256:75b24d127da52b8154e02f88d1358fa8c5f2219e083f7be148fda011e4a753c3 size: 1160 The push refers to a repository [localhost:33063/kubevirt/subresource-access-test] d2744f854711: Preparing 2c4f6b64d5e3: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/vm-killer 2c4f6b64d5e3: Pushed d2744f854711: Pushed devel: digest: sha256:c73b6167437d355dac9e4ba5c77cb3d4ff7f1d330d4789fd3b7cfac097f76193 size: 948 The push refers to a repository [localhost:33063/kubevirt/winrmcli] 161ef5381259: Preparing 2bef46eb5bf3: Preparing ac5611d25ed9: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/subresource-access-test 161ef5381259: Pushed ac5611d25ed9: Pushed 2bef46eb5bf3: Pushed devel: digest: sha256:c54d2198f4362bb7506c3ee26836a6fb13838c47d41ac7b5d6178b5534cd437a size: 1165 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt' 2018/04/29 07:49:20 Waiting for host: 192.168.66.101:22 2018/04/29 07:49:20 Connected to tcp://192.168.66.101:22 Trying to pull repository registry:5000/kubevirt/virt-controller ... 
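The build and push phases above (hack/build-docker.sh build, then hack/build-docker.sh push) amount to building each component image from its synced context under _out/cmd/ and pushing it to the ephemeral registry. A simplified sketch, not the script's actual code; the image list is abbreviated, and the prefix and tag match the config dump later in this log (docker_prefix=localhost:33063/kubevirt, docker_tag=devel):
for image in virt-controller virt-launcher virt-handler virt-api; do
    docker build -t localhost:33063/kubevirt/${image}:devel _out/cmd/${image}
    docker push localhost:33063/kubevirt/${image}:devel
done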
devel: Pulling from registry:5000/kubevirt/virt-controller 2176639d844b: Pulling fs layer b2fd69abfadb: Pulling fs layer 7b17aec6cdd9: Pulling fs layer b2fd69abfadb: Verifying Checksum b2fd69abfadb: Download complete 7b17aec6cdd9: Verifying Checksum 7b17aec6cdd9: Download complete 2176639d844b: Verifying Checksum 2176639d844b: Download complete 2176639d844b: Pull complete b2fd69abfadb: Pull complete 7b17aec6cdd9: Pull complete Digest: sha256:4e6d8c0ae307b2b32a82704b62c8a22d771be058b1b5b684ea29f2d592bd8e02 Trying to pull repository registry:5000/kubevirt/virt-launcher ... devel: Pulling from registry:5000/kubevirt/virt-launcher 2176639d844b: Already exists d7240bccd145: Pulling fs layer f2ef945504a7: Pulling fs layer a4b9e9eb807b: Pulling fs layer a1e80189bea5: Pulling fs layer 6cc174edcebf: Pulling fs layer a7576ba71334: Pulling fs layer 635c67b1c9ef: Pulling fs layer c60da62d83fb: Pulling fs layer 92814fe0e266: Pulling fs layer 60b18f716f4a: Pulling fs layer 2e4f030f3f94: Pulling fs layer a145473bb7c5: Pulling fs layer 15f8848ef768: Pulling fs layer a4eaf0d63e34: Pulling fs layer a1e80189bea5: Waiting 6cc174edcebf: Waiting a7576ba71334: Waiting 635c67b1c9ef: Waiting c60da62d83fb: Waiting 92814fe0e266: Waiting 60b18f716f4a: Waiting a145473bb7c5: Waiting 15f8848ef768: Waiting a4eaf0d63e34: Waiting 2e4f030f3f94: Waiting f2ef945504a7: Verifying Checksum f2ef945504a7: Download complete a4b9e9eb807b: Verifying Checksum a4b9e9eb807b: Download complete a1e80189bea5: Verifying Checksum a1e80189bea5: Download complete 6cc174edcebf: Verifying Checksum 6cc174edcebf: Download complete 635c67b1c9ef: Verifying Checksum 635c67b1c9ef: Download complete c60da62d83fb: Verifying Checksum c60da62d83fb: Download complete 92814fe0e266: Verifying Checksum 92814fe0e266: Download complete a7576ba71334: Verifying Checksum a7576ba71334: Download complete 60b18f716f4a: Verifying Checksum 60b18f716f4a: Download complete 2e4f030f3f94: Verifying Checksum 2e4f030f3f94: Download complete a145473bb7c5: Verifying Checksum a145473bb7c5: Download complete a4eaf0d63e34: Verifying Checksum a4eaf0d63e34: Download complete 15f8848ef768: Verifying Checksum 15f8848ef768: Download complete d7240bccd145: Download complete d7240bccd145: Pull complete f2ef945504a7: Pull complete a4b9e9eb807b: Pull complete a1e80189bea5: Pull complete 6cc174edcebf: Pull complete a7576ba71334: Pull complete 635c67b1c9ef: Pull complete c60da62d83fb: Pull complete 92814fe0e266: Pull complete 60b18f716f4a: Pull complete 2e4f030f3f94: Pull complete a145473bb7c5: Pull complete 15f8848ef768: Pull complete a4eaf0d63e34: Pull complete Digest: sha256:a3b900abcba038229f01b5ed7969b04a8f3ffcb66f952af75262d8b880aee6ae Trying to pull repository registry:5000/kubevirt/virt-handler ... devel: Pulling from registry:5000/kubevirt/virt-handler 2176639d844b: Already exists 50a34de8856d: Pulling fs layer 50a34de8856d: Verifying Checksum 50a34de8856d: Download complete 50a34de8856d: Pull complete Digest: sha256:f1e4827ce5ff99b05d98c62232f8aacd256902fc0040c5c9c9a656b6e541841a Trying to pull repository registry:5000/kubevirt/virt-api ... 
devel: Pulling from registry:5000/kubevirt/virt-api 2176639d844b: Already exists ecbe4adfb5a6: Pulling fs layer 7178a5f99836: Pulling fs layer ecbe4adfb5a6: Verifying Checksum ecbe4adfb5a6: Download complete 7178a5f99836: Verifying Checksum 7178a5f99836: Download complete ecbe4adfb5a6: Pull complete 7178a5f99836: Pull complete Digest: sha256:cdd8969e72c287b94bfb26268e52db4ae1805d030269483e52a146c248d4018b Trying to pull repository registry:5000/kubevirt/iscsi-demo-target-tgtd ... devel: Pulling from registry:5000/kubevirt/iscsi-demo-target-tgtd 2176639d844b: Already exists e41ccbba2812: Pulling fs layer 1525a0b70164: Pulling fs layer f69087ebfcf1: Pulling fs layer 4180f6dc22d7: Pulling fs layer 4180f6dc22d7: Waiting 1525a0b70164: Verifying Checksum 1525a0b70164: Download complete 4180f6dc22d7: Verifying Checksum 4180f6dc22d7: Download complete f69087ebfcf1: Verifying Checksum f69087ebfcf1: Download complete e41ccbba2812: Verifying Checksum e41ccbba2812: Download complete e41ccbba2812: Pull complete 1525a0b70164: Pull complete f69087ebfcf1: Pull complete 4180f6dc22d7: Pull complete Digest: sha256:165b483903ac0d54f32de0a38848dee1f1f2578b79d19d3a1583f145a56ee2d5 Trying to pull repository registry:5000/kubevirt/vm-killer ... devel: Pulling from registry:5000/kubevirt/vm-killer 2176639d844b: Already exists 138296e7088e: Pulling fs layer 138296e7088e: Download complete 138296e7088e: Pull complete Digest: sha256:161c8522909d91274a998a270ecb864534b9f72962ee80e56429aba9c11ec4f0 Trying to pull repository registry:5000/kubevirt/registry-disk-v1alpha ... devel: Pulling from registry:5000/kubevirt/registry-disk-v1alpha 2115d46e7396: Pulling fs layer 1d498b3a9c67: Pulling fs layer 542e5d603739: Pulling fs layer 542e5d603739: Verifying Checksum 542e5d603739: Download complete 1d498b3a9c67: Verifying Checksum 1d498b3a9c67: Download complete 2115d46e7396: Verifying Checksum 2115d46e7396: Download complete 2115d46e7396: Pull complete 1d498b3a9c67: Pull complete 542e5d603739: Pull complete Digest: sha256:cccf1a0fa063b322ce8800f115891844b594e8de745c87790bfc2998bc023c78 Trying to pull repository registry:5000/kubevirt/cirros-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/cirros-registry-disk-demo 2115d46e7396: Already exists 1d498b3a9c67: Already exists 542e5d603739: Already exists cccdb325ffb3: Pulling fs layer cccdb325ffb3: Download complete cccdb325ffb3: Pull complete Digest: sha256:d0bec9db16c2e4117209e2470e8e911fed8b2912edef15d3c8082bf617bbeafa Trying to pull repository registry:5000/kubevirt/fedora-cloud-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/fedora-cloud-registry-disk-demo 2115d46e7396: Already exists 1d498b3a9c67: Already exists 542e5d603739: Already exists c02ed0b1ad8a: Pulling fs layer c02ed0b1ad8a: Verifying Checksum c02ed0b1ad8a: Download complete c02ed0b1ad8a: Pull complete Digest: sha256:5938b990652ba3513cb6d80b7886e39e6adc7a9784c2b7193d5e6a3372fad33d Trying to pull repository registry:5000/kubevirt/alpine-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/alpine-registry-disk-demo 2115d46e7396: Already exists 1d498b3a9c67: Already exists 542e5d603739: Already exists 00ca95c94564: Pulling fs layer 00ca95c94564: Verifying Checksum 00ca95c94564: Download complete 00ca95c94564: Pull complete Digest: sha256:75b24d127da52b8154e02f88d1358fa8c5f2219e083f7be148fda011e4a753c3 Trying to pull repository registry:5000/kubevirt/subresource-access-test ... 
devel: Pulling from registry:5000/kubevirt/subresource-access-test 2176639d844b: Already exists 3e89ff6a57d9: Pulling fs layer bbe53f474925: Pulling fs layer 3e89ff6a57d9: Verifying Checksum 3e89ff6a57d9: Download complete 3e89ff6a57d9: Pull complete bbe53f474925: Download complete bbe53f474925: Pull complete Digest: sha256:c73b6167437d355dac9e4ba5c77cb3d4ff7f1d330d4789fd3b7cfac097f76193 Trying to pull repository registry:5000/kubevirt/winrmcli ... devel: Pulling from registry:5000/kubevirt/winrmcli 2176639d844b: Already exists 7c1ab5de42d5: Pulling fs layer 9391531b0959: Pulling fs layer d4e9df2eaabc: Pulling fs layer d4e9df2eaabc: Verifying Checksum d4e9df2eaabc: Download complete 7c1ab5de42d5: Verifying Checksum 7c1ab5de42d5: Download complete 9391531b0959: Verifying Checksum 9391531b0959: Download complete 7c1ab5de42d5: Pull complete 9391531b0959: Pull complete d4e9df2eaabc: Pull complete Digest: sha256:c54d2198f4362bb7506c3ee26836a6fb13838c47d41ac7b5d6178b5534cd437a 2018/04/29 07:52:59 Waiting for host: 192.168.66.101:22 2018/04/29 07:52:59 Connected to tcp://192.168.66.101:22 2018/04/29 07:53:02 Waiting for host: 192.168.66.102:22 2018/04/29 07:53:02 Connected to tcp://192.168.66.102:22 Trying to pull repository registry:5000/kubevirt/virt-controller ... devel: Pulling from registry:5000/kubevirt/virt-controller 2176639d844b: Pulling fs layer b2fd69abfadb: Pulling fs layer 7b17aec6cdd9: Pulling fs layer b2fd69abfadb: Verifying Checksum b2fd69abfadb: Download complete 7b17aec6cdd9: Verifying Checksum 7b17aec6cdd9: Download complete 2176639d844b: Verifying Checksum 2176639d844b: Download complete 2176639d844b: Pull complete b2fd69abfadb: Pull complete 7b17aec6cdd9: Pull complete Digest: sha256:4e6d8c0ae307b2b32a82704b62c8a22d771be058b1b5b684ea29f2d592bd8e02 Trying to pull repository registry:5000/kubevirt/virt-launcher ... 
devel: Pulling from registry:5000/kubevirt/virt-launcher 2176639d844b: Already exists d7240bccd145: Pulling fs layer f2ef945504a7: Pulling fs layer a4b9e9eb807b: Pulling fs layer a1e80189bea5: Pulling fs layer 6cc174edcebf: Pulling fs layer a7576ba71334: Pulling fs layer 635c67b1c9ef: Pulling fs layer c60da62d83fb: Pulling fs layer 92814fe0e266: Pulling fs layer 60b18f716f4a: Pulling fs layer 2e4f030f3f94: Pulling fs layer a145473bb7c5: Pulling fs layer 15f8848ef768: Pulling fs layer a4eaf0d63e34: Pulling fs layer a1e80189bea5: Waiting 6cc174edcebf: Waiting a7576ba71334: Waiting 635c67b1c9ef: Waiting 15f8848ef768: Waiting a4eaf0d63e34: Waiting c60da62d83fb: Waiting 92814fe0e266: Waiting 60b18f716f4a: Waiting 2e4f030f3f94: Waiting a145473bb7c5: Waiting f2ef945504a7: Verifying Checksum f2ef945504a7: Download complete a4b9e9eb807b: Verifying Checksum a4b9e9eb807b: Download complete 6cc174edcebf: Verifying Checksum 6cc174edcebf: Download complete a1e80189bea5: Verifying Checksum a1e80189bea5: Download complete 635c67b1c9ef: Verifying Checksum 635c67b1c9ef: Download complete c60da62d83fb: Verifying Checksum c60da62d83fb: Download complete a7576ba71334: Verifying Checksum a7576ba71334: Download complete 92814fe0e266: Verifying Checksum 92814fe0e266: Download complete 60b18f716f4a: Verifying Checksum 60b18f716f4a: Download complete 2e4f030f3f94: Verifying Checksum 2e4f030f3f94: Download complete a145473bb7c5: Verifying Checksum a145473bb7c5: Download complete 15f8848ef768: Verifying Checksum 15f8848ef768: Download complete a4eaf0d63e34: Verifying Checksum a4eaf0d63e34: Download complete d7240bccd145: Download complete d7240bccd145: Pull complete f2ef945504a7: Pull complete a4b9e9eb807b: Pull complete a1e80189bea5: Pull complete 6cc174edcebf: Pull complete a7576ba71334: Pull complete 635c67b1c9ef: Pull complete c60da62d83fb: Pull complete 92814fe0e266: Pull complete 60b18f716f4a: Pull complete 2e4f030f3f94: Pull complete a145473bb7c5: Pull complete 15f8848ef768: Pull complete a4eaf0d63e34: Pull complete Digest: sha256:a3b900abcba038229f01b5ed7969b04a8f3ffcb66f952af75262d8b880aee6ae Trying to pull repository registry:5000/kubevirt/virt-handler ... devel: Pulling from registry:5000/kubevirt/virt-handler 2176639d844b: Already exists 50a34de8856d: Pulling fs layer 50a34de8856d: Verifying Checksum 50a34de8856d: Download complete 50a34de8856d: Pull complete Digest: sha256:f1e4827ce5ff99b05d98c62232f8aacd256902fc0040c5c9c9a656b6e541841a Trying to pull repository registry:5000/kubevirt/virt-api ... devel: Pulling from registry:5000/kubevirt/virt-api 2176639d844b: Already exists ecbe4adfb5a6: Pulling fs layer 7178a5f99836: Pulling fs layer ecbe4adfb5a6: Verifying Checksum ecbe4adfb5a6: Download complete 7178a5f99836: Verifying Checksum 7178a5f99836: Download complete ecbe4adfb5a6: Pull complete 7178a5f99836: Pull complete Digest: sha256:cdd8969e72c287b94bfb26268e52db4ae1805d030269483e52a146c248d4018b Trying to pull repository registry:5000/kubevirt/iscsi-demo-target-tgtd ... 
devel: Pulling from registry:5000/kubevirt/iscsi-demo-target-tgtd 2176639d844b: Already exists e41ccbba2812: Pulling fs layer 1525a0b70164: Pulling fs layer f69087ebfcf1: Pulling fs layer 4180f6dc22d7: Pulling fs layer 4180f6dc22d7: Waiting 1525a0b70164: Verifying Checksum 1525a0b70164: Download complete 4180f6dc22d7: Verifying Checksum 4180f6dc22d7: Download complete f69087ebfcf1: Verifying Checksum f69087ebfcf1: Download complete e41ccbba2812: Verifying Checksum e41ccbba2812: Download complete e41ccbba2812: Pull complete 1525a0b70164: Pull complete f69087ebfcf1: Pull complete 4180f6dc22d7: Pull complete Digest: sha256:165b483903ac0d54f32de0a38848dee1f1f2578b79d19d3a1583f145a56ee2d5 Trying to pull repository registry:5000/kubevirt/vm-killer ... devel: Pulling from registry:5000/kubevirt/vm-killer 2176639d844b: Already exists 138296e7088e: Pulling fs layer 138296e7088e: Verifying Checksum 138296e7088e: Download complete 138296e7088e: Pull complete Digest: sha256:161c8522909d91274a998a270ecb864534b9f72962ee80e56429aba9c11ec4f0 Trying to pull repository registry:5000/kubevirt/registry-disk-v1alpha ... devel: Pulling from registry:5000/kubevirt/registry-disk-v1alpha 2115d46e7396: Pulling fs layer 1d498b3a9c67: Pulling fs layer 542e5d603739: Pulling fs layer 542e5d603739: Download complete 1d498b3a9c67: Verifying Checksum 1d498b3a9c67: Download complete 2115d46e7396: Verifying Checksum 2115d46e7396: Download complete 2115d46e7396: Pull complete 1d498b3a9c67: Pull complete 542e5d603739: Pull complete Digest: sha256:cccf1a0fa063b322ce8800f115891844b594e8de745c87790bfc2998bc023c78 Trying to pull repository registry:5000/kubevirt/cirros-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/cirros-registry-disk-demo 2115d46e7396: Already exists 1d498b3a9c67: Already exists 542e5d603739: Already exists cccdb325ffb3: Pulling fs layer cccdb325ffb3: Verifying Checksum cccdb325ffb3: Download complete cccdb325ffb3: Pull complete Digest: sha256:d0bec9db16c2e4117209e2470e8e911fed8b2912edef15d3c8082bf617bbeafa Trying to pull repository registry:5000/kubevirt/fedora-cloud-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/fedora-cloud-registry-disk-demo 2115d46e7396: Already exists 1d498b3a9c67: Already exists 542e5d603739: Already exists c02ed0b1ad8a: Pulling fs layer c02ed0b1ad8a: Verifying Checksum c02ed0b1ad8a: Download complete c02ed0b1ad8a: Pull complete Digest: sha256:5938b990652ba3513cb6d80b7886e39e6adc7a9784c2b7193d5e6a3372fad33d Trying to pull repository registry:5000/kubevirt/alpine-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/alpine-registry-disk-demo 2115d46e7396: Already exists 1d498b3a9c67: Already exists 542e5d603739: Already exists 00ca95c94564: Pulling fs layer 00ca95c94564: Verifying Checksum 00ca95c94564: Download complete 00ca95c94564: Pull complete Digest: sha256:75b24d127da52b8154e02f88d1358fa8c5f2219e083f7be148fda011e4a753c3 Trying to pull repository registry:5000/kubevirt/subresource-access-test ... devel: Pulling from registry:5000/kubevirt/subresource-access-test 2176639d844b: Already exists 3e89ff6a57d9: Pulling fs layer bbe53f474925: Pulling fs layer 3e89ff6a57d9: Verifying Checksum 3e89ff6a57d9: Download complete bbe53f474925: Verifying Checksum bbe53f474925: Download complete 3e89ff6a57d9: Pull complete bbe53f474925: Pull complete Digest: sha256:c73b6167437d355dac9e4ba5c77cb3d4ff7f1d330d4789fd3b7cfac097f76193 Trying to pull repository registry:5000/kubevirt/winrmcli ... 
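The pulls above can be spot-checked against the registry itself. A sketch only, assuming the ephemeral registry (registry:5000 inside the cluster, localhost:33063 on the CI host) exposes the standard Docker Registry v2 API:
curl -s http://localhost:33063/v2/_catalog        # list pushed repositories
docker images --format '{{.Repository}}:{{.Tag}}' | grep kubevirt/    # images present locally on a node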
devel: Pulling from registry:5000/kubevirt/winrmcli 2176639d844b: Already exists 7c1ab5de42d5: Pulling fs layer 9391531b0959: Pulling fs layer d4e9df2eaabc: Pulling fs layer d4e9df2eaabc: Verifying Checksum d4e9df2eaabc: Download complete 7c1ab5de42d5: Verifying Checksum 7c1ab5de42d5: Download complete 9391531b0959: Verifying Checksum 9391531b0959: Download complete 7c1ab5de42d5: Pull complete 9391531b0959: Pull complete d4e9df2eaabc: Pull complete Digest: sha256:c54d2198f4362bb7506c3ee26836a6fb13838c47d41ac7b5d6178b5534cd437a 2018/04/29 07:56:22 Waiting for host: 192.168.66.102:22 2018/04/29 07:56:22 Connected to tcp://192.168.66.102:22 Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/client-python ++ PROVIDER=os-3.9.0 ++ provider_prefix=kubevirt-functional-tests-openshift-release-crio1 ++ job_prefix=kubevirt-functional-tests-openshift-release-crio1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.5.0-alpha.1-16-ge182e75 ++ KUBEVIRT_VERSION=v0.5.0-alpha.1-16-ge182e75 + source cluster/os-3.9.0/provider.sh ++ set -e ++ image=os-3.9.0@sha256:d55fc1bef8a9ab327c5f213deec75ad9cc3a1c258593b9fd966b11bef6010bd6 ++ [[ true == \t\r\u\e ]] ++ image=os-3.9.0-crio@sha256:edea5ea811eaa150f0ae56022188d2e33872d855b84e2a8ffb5294f868b23840 ++ provider_prefix=kubevirt-functional-tests-openshift-release-crio1-crio ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/cli@sha256:b0023d1863338ef04fa0b8a8ee5956ae08616200d89ffd2e230668ea3deeaff4' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ PROVIDER=os-3.9.0 ++ source hack/config-default.sh source hack/config-os-3.9.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ 
master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-os-3.9.0.sh ++ source hack/config-provider-os-3.9.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/cluster/os-3.9.0/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/cluster/os-3.9.0/.kubectl +++ docker_prefix=localhost:33063/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... + cluster/kubectl.sh get vms --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p the server doesn't have a resource type "vms" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete pvc 
-l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/os-3.9.0/.kubeconfig ++ KUBECONFIG=cluster/os-3.9.0/.kubeconfig ++ wc -l ++ cluster/os-3.9.0/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete apiservices -l 
kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/os-3.9.0/.kubeconfig ++ KUBECONFIG=cluster/os-3.9.0/.kubeconfig ++ cluster/os-3.9.0/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ 
KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/client-python ++ PROVIDER=os-3.9.0 ++ provider_prefix=kubevirt-functional-tests-openshift-release-crio1 ++ job_prefix=kubevirt-functional-tests-openshift-release-crio1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.5.0-alpha.1-16-ge182e75 ++ KUBEVIRT_VERSION=v0.5.0-alpha.1-16-ge182e75 + source cluster/os-3.9.0/provider.sh ++ set -e ++ image=os-3.9.0@sha256:d55fc1bef8a9ab327c5f213deec75ad9cc3a1c258593b9fd966b11bef6010bd6 ++ [[ true == \t\r\u\e ]] ++ image=os-3.9.0-crio@sha256:edea5ea811eaa150f0ae56022188d2e33872d855b84e2a8ffb5294f868b23840 ++ provider_prefix=kubevirt-functional-tests-openshift-release-crio1-crio ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/cli@sha256:b0023d1863338ef04fa0b8a8ee5956ae08616200d89ffd2e230668ea3deeaff4' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ PROVIDER=os-3.9.0 ++ source hack/config-default.sh source hack/config-os-3.9.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-os-3.9.0.sh ++ source hack/config-provider-os-3.9.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/cluster/os-3.9.0/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/cluster/os-3.9.0/.kubectl +++ docker_prefix=localhost:33063/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip 
network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... + [[ -z openshift-release-crio ]] + [[ openshift-release-crio =~ .*-dev ]] + [[ openshift-release-crio =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml serviceaccount "kubevirt-apiserver" created clusterrolebinding "kubevirt-apiserver" created clusterrolebinding "kubevirt-apiserver-auth-delegator" created rolebinding "kubevirt-apiserver" created role "kubevirt-apiserver" created clusterrole "kubevirt-apiserver" created clusterrole "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding "kubevirt-controller" created clusterrolebinding "kubevirt-controller-cluster-admin" created clusterrolebinding "kubevirt-privileged-cluster-admin" created service "virt-api" created deployment "virt-api" created deployment "virt-controller" created daemonset "virt-handler" created customresourcedefinition "virtualmachines.kubevirt.io" created customresourcedefinition "virtualmachinereplicasets.kubevirt.io" created customresourcedefinition "virtualmachinepresets.kubevirt.io" created customresourcedefinition "offlinevirtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "iscsi-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "iscsi-disk-custom" created daemonset "iscsi-demo-target-tgtd" created serviceaccount "kubevirt-testing" created clusterrolebinding "kubevirt-testing-cluster-admin" created + '[' os-3.9.0 = vagrant-openshift ']' + '[' os-3.9.0 = os-3.9.0 ']' + _kubectl adm policy add-scc-to-user privileged -z kubevirt-controller -n kube-system + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl adm policy add-scc-to-user privileged -z kubevirt-controller -n kube-system scc "privileged" added to: ["system:serviceaccount:kube-system:kubevirt-controller"] + _kubectl adm policy add-scc-to-user privileged -z kubevirt-testing -n kube-system + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl adm policy add-scc-to-user privileged -z kubevirt-testing -n kube-system scc "privileged" added to: 
["system:serviceaccount:kube-system:kubevirt-testing"] + _kubectl adm policy add-scc-to-user privileged -z kubevirt-privileged -n kube-system + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl adm policy add-scc-to-user privileged -z kubevirt-privileged -n kube-system scc "privileged" added to: ["system:serviceaccount:kube-system:kubevirt-privileged"] + _kubectl adm policy add-scc-to-user privileged -z kubevirt-apiserver -n kube-system + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl adm policy add-scc-to-user privileged -z kubevirt-apiserver -n kube-system scc "privileged" added to: ["system:serviceaccount:kube-system:kubevirt-apiserver"] + _kubectl adm policy add-scc-to-user privileged admin + export KUBECONFIG=cluster/os-3.9.0/.kubeconfig + KUBECONFIG=cluster/os-3.9.0/.kubeconfig + cluster/os-3.9.0/.kubectl adm policy add-scc-to-user privileged admin scc "privileged" added to: ["admin"] + echo Done Done ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 4s iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 4s virt-api-fd96f94b5-lft7t 0/1 ContainerCreating 0 8s virt-api-fd96f94b5-vtv8s 0/1 ContainerCreating 0 8s virt-controller-5f7c946cc4-dpmx7 0/1 ContainerCreating 0 8s virt-controller-5f7c946cc4-r6ssp 0/1 ContainerCreating 0 8s virt-handler-tcdhg 0/1 ContainerCreating 0 2s virt-handler-twxqs 0/1 ContainerCreating 0 2s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 5s iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 5s virt-api-fd96f94b5-lft7t 0/1 ContainerCreating 0 9s virt-api-fd96f94b5-vtv8s 0/1 ContainerCreating 0 9s virt-controller-5f7c946cc4-dpmx7 0/1 ContainerCreating 0 9s virt-controller-5f7c946cc4-r6ssp 0/1 ContainerCreating 0 9s virt-handler-tcdhg 0/1 ContainerCreating 0 3s virt-handler-twxqs 0/1 ContainerCreating 0 3s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 16s iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 16s virt-api-fd96f94b5-lft7t 0/1 ContainerCreating 0 20s virt-api-fd96f94b5-vtv8s 0/1 ContainerCreating 0 20s virt-controller-5f7c946cc4-dpmx7 0/1 ContainerCreating 0 20s virt-controller-5f7c946cc4-r6ssp 0/1 ContainerCreating 0 20s virt-handler-tcdhg 0/1 ContainerCreating 0 14s virt-handler-twxqs 0/1 ContainerCreating 0 14s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... 
+ kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 18s iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 18s virt-api-fd96f94b5-lft7t 0/1 ContainerCreating 0 22s virt-api-fd96f94b5-vtv8s 0/1 ContainerCreating 0 22s virt-controller-5f7c946cc4-dpmx7 0/1 ContainerCreating 0 22s virt-controller-5f7c946cc4-r6ssp 0/1 ContainerCreating 0 22s virt-handler-tcdhg 0/1 ContainerCreating 0 16s virt-handler-twxqs 0/1 ContainerCreating 0 16s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 29s iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 29s virt-api-fd96f94b5-lft7t 0/1 ContainerCreating 0 33s virt-api-fd96f94b5-vtv8s 0/1 ContainerCreating 0 33s virt-controller-5f7c946cc4-dpmx7 0/1 ContainerCreating 0 33s virt-controller-5f7c946cc4-r6ssp 0/1 ContainerCreating 0 33s virt-handler-tcdhg 0/1 ContainerCreating 0 27s virt-handler-twxqs 0/1 ContainerCreating 0 27s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 31s iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 31s virt-api-fd96f94b5-lft7t 0/1 ContainerCreating 0 35s virt-api-fd96f94b5-vtv8s 0/1 ContainerCreating 0 35s virt-controller-5f7c946cc4-dpmx7 0/1 ContainerCreating 0 35s virt-controller-5f7c946cc4-r6ssp 0/1 ContainerCreating 0 35s virt-handler-tcdhg 0/1 ContainerCreating 0 29s virt-handler-twxqs 0/1 ContainerCreating 0 29s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 42s iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 42s virt-api-fd96f94b5-vtv8s 0/1 ContainerCreating 0 46s virt-controller-5f7c946cc4-dpmx7 0/1 ContainerCreating 0 46s virt-handler-tcdhg 0/1 ContainerCreating 0 40s virt-handler-twxqs 0/1 ContainerCreating 0 40s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 43s iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 43s virt-api-fd96f94b5-vtv8s 0/1 ContainerCreating 0 47s virt-handler-tcdhg 0/1 ContainerCreating 0 41s virt-handler-twxqs 0/1 ContainerCreating 0 41s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 55s iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 55s virt-handler-tcdhg 0/1 ContainerCreating 0 53s virt-handler-twxqs 0/1 ContainerCreating 0 53s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... 
+ kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 56s iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 56s virt-handler-tcdhg 0/1 ContainerCreating 0 54s virt-handler-twxqs 0/1 ContainerCreating 0 54s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 1m iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 1m virt-handler-tcdhg 0/1 ContainerCreating 0 1m virt-handler-twxqs 0/1 ContainerCreating 0 1m' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + grep -v Running + cluster/kubectl.sh get pods -n kube-system --no-headers iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 1m iscsi-demo-target-tgtd-nx7x9 0/1 ContainerCreating 0 1m virt-handler-tcdhg 0/1 ContainerCreating 0 1m virt-handler-twxqs 0/1 ContainerCreating 0 1m + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 1m virt-handler-tcdhg 0/1 ContainerCreating 0 1m virt-handler-twxqs 0/1 ContainerCreating 0 1m' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 1m virt-handler-twxqs 0/1 ContainerCreating 0 1m + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 1m virt-handler-twxqs 0/1 ContainerCreating 0 1m' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + grep -v Running + cluster/kubectl.sh get pods -n kube-system --no-headers iscsi-demo-target-tgtd-6rxjb 0/1 ContainerCreating 0 1m virt-handler-twxqs 0/1 ContainerCreating 0 1m + sleep 10 ++ grep -v Running ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n '' ']' ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-6rxjb false iscsi-demo-target-tgtd-nx7x9' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers false iscsi-demo-target-tgtd-6rxjb false iscsi-demo-target-tgtd-nx7x9 + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-6rxjb false iscsi-demo-target-tgtd-nx7x9' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... + kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' false iscsi-demo-target-tgtd-6rxjb false iscsi-demo-target-tgtd-nx7x9 + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-6rxjb false iscsi-demo-target-tgtd-nx7x9' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... + kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers false iscsi-demo-target-tgtd-6rxjb false iscsi-demo-target-tgtd-nx7x9 + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-6rxjb' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... + kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' false iscsi-demo-target-tgtd-6rxjb + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-6rxjb' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
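Once every pod reports Running, the trace switches to the container-readiness check seen above: it lists each pod's container ready flags via custom columns and keeps waiting while any pod other than virt-controller still shows false, presumably because the virt-controller replicas use leader election and only the leader is expected to report ready. A rough sketch of that logic, inferred from the trace (the actual check lives in the repo's deploy/test scripts):

  # Inferred from the trace above; details may differ from the real script.
  cols='-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name'
  # Wait while any non-virt-controller pod still has a container with ready=false.
  while [ -n "$(kubectl get pods -n kube-system "$cols" --no-headers | awk '!/virt-controller/ && /false/')" ]; do
      echo 'Waiting for KubeVirt containers to become ready ...'
      sleep 10
  done
  # Then require at least one virt-controller replica whose containers are all ready.
  ready=$(kubectl get pods -n kube-system "$cols" --no-headers | awk '/virt-controller/ && /true/' | wc -l)
  [ "$ready" -lt 1 ] && echo 'No ready virt-controller found' && exit 1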
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n '' ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '/virt-controller/ && /true/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ wc -l
+ '[' 2 -lt 1 ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
iscsi-demo-target-tgtd-6rxjb       1/1       Running   1          3m
iscsi-demo-target-tgtd-nx7x9       1/1       Running   1          3m
virt-api-fd96f94b5-lft7t           1/1       Running   0          3m
virt-api-fd96f94b5-vtv8s           1/1       Running   0          3m
virt-controller-5f7c946cc4-dpmx7   1/1       Running   0          3m
virt-controller-5f7c946cc4-r6ssp   1/1       Running   0          3m
virt-handler-tcdhg                 1/1       Running   0          3m
virt-handler-twxqs                 1/1       Running   0          3m
+ kubectl version
+ cluster/kubectl.sh version
oc v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://127.0.0.1:33060
openshift v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/junit.xml'
+ [[ -d /home/nfs/images/windows2016 ]]
+ [[ openshift-release-crio == windows ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:0e817e41f9750e44335dde1be5cb34809abe48c8add43baf165907418e2e75ce
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
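With the deployment healthy, the job exports the ginkgo arguments shown above and hands off to make functest, which builds the test binary inside the builder container and then runs hack/functests.sh. In shorthand (the path is taken from the trace; the exact Makefile wiring is an assumption):

  # Shorthand for the invocation visible in the trace above; the Makefile
  # wiring may differ slightly from this sketch.
  export FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release-crio/junit.xml'
  make functest   # runs hack/dockerized "hack/build-func-tests.sh", then hack/functests.sh

Here --ginkgo.noColor disables colored output, and --junit-output presumably points the suite at the junit.xml report the CI job collects afterwards.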
compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1524988861 Will run 90 of 90 specs •••••••••• ------------------------------ • [SLOW TEST:7.825 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 to three, to two and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:13.821 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 to five, to six and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • [SLOW TEST:82.334 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should update readyReplicas once VMs are up /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:150 ------------------------------ • [SLOW TEST:15.763 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should remove VMs once it is marked for deletion /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:162 ------------------------------ • ------------------------------ • [SLOW TEST:6.468 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should not scale when paused and scale when resume /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:216 ------------------------------ • [SLOW TEST:59.479 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:37 A VM with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:56 should be shut down when the watchdog expires /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:57 ------------------------------ • [SLOW TEST:43.150 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:48 VirtualMachine attached to the pod network /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:146 should be able to reach /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 the Inbound VM /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • ------------------------------ • [SLOW TEST:5.480 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:48 VirtualMachine attached to the pod network /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:146 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on the same node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:6.211 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:48 VirtualMachine attached to the pod network /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:146 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on a different 
node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • [SLOW TEST:5.460 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:48 VirtualMachine attached to the pod network /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:146 with a service matching the vm exposed /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:218 should be able to reach the vm based on labels specified on the vm /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:238 ------------------------------ • ------------------------------ • [SLOW TEST:9.558 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 with correct permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:49 should be allowed to access subresource endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:50 ------------------------------ • [SLOW TEST:5.176 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 Without permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:54 should not be able to access subresource endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:55 ------------------------------ volumedisk0 compute • Failure [92.741 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:39 VM definition /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:50 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:51 should report 3 cpu cores under guest OS [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:57 Unexpected Warning event recieved. Expected : Warning not to equal : Warning /root/go/src/kubevirt.io/kubevirt/tests/utils.go:226 ------------------------------ STEP: Starting a VM level=info timestamp=2018-04-29T08:05:53.197809Z pos=utils.go:224 component=tests msg="Created virtual machine pod virt-launcher-testvmhqzl2-t2wm8" level=info timestamp=2018-04-29T08:06:07.588730Z pos=utils.go:224 component=tests msg="Pod owner ship transfered to the node virt-launcher-testvmhqzl2-t2wm8" level=error timestamp=2018-04-29T08:06:07.687017Z pos=utils.go:222 component=tests reason="unexpected warning event recieved" msg="server error. 
command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')" STEP: Expecting the VM console STEP: Checking the number of CPU cores under guest OS STEP: Checking the requested amount of memory allocated for a guest • [SLOW TEST:35.676 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:39 New VM with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:110 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:133 ------------------------------ •• ------------------------------ • [SLOW TEST:20.905 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:108 should update OfflineVirtualMachine once VMs are up /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:197 ------------------------------ • [SLOW TEST:13.122 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:108 should remove VM once the OVM is marked for deletion /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:206 ------------------------------ •• ------------------------------ • [SLOW TEST:41.949 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:108 should recreate VM if the VM's pod gets deleted /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:263 ------------------------------ • [SLOW TEST:32.433 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:108 should stop VM if running set to false /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:323 ------------------------------ • [SLOW TEST:139.664 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:108 should start and stop VM multiple times /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:331 ------------------------------ • [SLOW TEST:39.861 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:108 should not update the VM spec if Running /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:344 ------------------------------ • [SLOW TEST:133.464 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:108 should survive guest shutdown, multiple times /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:385 ------------------------------ • [SLOW TEST:19.826 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:108 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:423 should start a VM once /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:424 ------------------------------ • [SLOW TEST:23.704 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:108 
Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:423 should stop a VM once /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:455 ------------------------------ • [SLOW TEST:73.591 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting and stopping the same VM /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91 should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92 ------------------------------ • [SLOW TEST:17.971 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112 should not modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113 ------------------------------ • [SLOW TEST:29.989 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting multiple VMs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130 should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131 ------------------------------ • [SLOW TEST:39.742 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:81 should have cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:82 ------------------------------ • [SLOW TEST:114.959 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:81 with injected ssh-key /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:93 should have ssh-key under authorized keys /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:94 ------------------------------ • Failure [181.042 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80 with cloudInitNoCloud userData source /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:119 should process provided cloud-init data [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:120 Unexpected Warning event recieved. Expected : Warning not to equal : Warning /root/go/src/kubevirt.io/kubevirt/tests/utils.go:226 ------------------------------ STEP: Starting a VM STEP: Waiting the VM start level=info timestamp=2018-04-29T08:20:30.549550Z pos=utils.go:224 component=tests msg="Created virtual machine pod virt-launcher-testvmb44lh-n7622" level=info timestamp=2018-04-29T08:20:46.139947Z pos=utils.go:224 component=tests msg="Pod owner ship transfered to the node virt-launcher-testvmb44lh-n7622" level=error timestamp=2018-04-29T08:20:46.235598Z pos=utils.go:222 component=tests reason="unexpected warning event recieved" msg="server error. 
command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')" STEP: executing a user-data script STEP: Expecting the VM console STEP: Checking that the VM serial console output equals to expected one • Failure [37.042 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80 should take user-data from k8s secret [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:163 Expected error: : 120000000000 expect: timer expired after 120 seconds not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:73 ------------------------------ STEP: Creating a user-data secret STEP: Starting a VM STEP: Waiting the VM start level=info timestamp=2018-04-29T08:23:31.402950Z pos=utils.go:224 component=tests msg="Created virtual machine pod virt-launcher-testvmz46tb-6w8gt" level=info timestamp=2018-04-29T08:23:46.652438Z pos=utils.go:224 component=tests msg="Pod owner ship transfered to the node virt-launcher-testvmz46tb-6w8gt" level=info timestamp=2018-04-29T08:23:48.091999Z pos=utils.go:224 component=tests msg="VM defined." level=info timestamp=2018-04-29T08:23:48.126669Z pos=utils.go:224 component=tests msg="VM started." STEP: Expecting the VM console STEP: Checking that the VM serial console output equals to expected one level=info timestamp=2018-04-29T08:24:00.090593Z pos=vm_userdata_test.go:72 component=tests namespace=kubevirt-test-default name=testvmb44lh kind=VirtualMachine uid= msg="[{0 []}]" level=info timestamp=2018-04-29T08:24:07.081466Z pos=vm_userdata_test.go:72 component=tests namespace=kubevirt-test-default name=testvmz46tb kind=VirtualMachine uid= msg="[{0 [ 0.000000] Initializing cgroup subsys cpuset\r\r\n[ 0.000000] Initializing cgroup subsys cpu\r\r\n[ 0.000000] Initializing cgroup subsys cpuacct\r\r\n[ 0.000000] Linux version 4.4.0-28-generic (buildd@lcy01-13) (gcc version 5.3.1 20160413 (Ubuntu 5.3.1-14ubuntu2.1) ) #47-Ubuntu SMP Fri Jun 24 10:09:13 UTC 2016 (Ubuntu 4.4.0-28.47-generic 4.4.13)\r\r\n[ 0.000000] Command line: LABEL=cirros-rootfs ro console=tty1 console=ttyS0\r\r\n[ 0.000000] KERNEL supported cpus:\r\r\n[ 0.000000] Intel GenuineIntel\r\r\n[ 0.000000] AMD AuthenticAMD\r\r\n[ 0.000000] Centaur CentaurHauls\r\r\n[ 0.000000] x86/fpu: Legacy x87 FPU detected.\r\r\n[ 0.000000] x86/fpu: Using 'lazy' FPU context switches.\r\r\n[ 0.000000] e820: BIOS-provided physical RAM map:\r\r\n[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable\r\r\n[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved\r\r\n[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved\r\r\n[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000003df9fff] usable\r\r\n[ 0.000000] BIOS-e820: [mem 0x0000000003dfa000-0x0000000003dfffff] reserved\r\r\n[ 0.000000] BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved\r\r\n[ 0.000000] BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved\r\r\n[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved\r\r\n[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved\r\r\n[ 0.000000] NX (Execute Disable) protection: active\r\r\n[ 0.000000] SMBIOS 2.8 present.\r\r\n[ 0.000000] Hypervisor detected: KVM\r\r\n[ 0.000000] e820: last_pfn = 0x3dfa max_arch_pfn = 0x400000000\r\r\n[ 0.000000] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WC UC- WT \r\r\n[ 0.000000] found SMP MP-table at [mem 
0x000f6c20-0x000f6c2f] mapped at [ffff8800000f6c20]\r\r\n[ 0.000000] Scanning 1 areas for low memory corruption\r\r\n[ 0.000000] RAMDISK: [mem 0x03934000-0x03de9fff]\r\r\n[ 0.000000] ACPI: Early table checksum verification disabled\r\r\n[ 0.000000] ACPI: RSDP 0x00000000000F6A60 000014 (v00 BOCHS )\r\r\n[ 0.000000] ACPI: RSDT 0x0000000003DFE430 000038 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)\r\r\n[ 0.000000] ACPI: FACP 0x0000000003DFFF80 000074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)\r\r\n[ 0.000000] ACPI: DSDT 0x0000000003DFE470 001135 (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001)\r\r\n[ 0.000000] ACPI: FACS 0x0000000003DFFF40 000040\r\r\n[ 0.000000] ACPI: SSDT 0x0000000003DFF720 000819 (v01 BOCHS BXPCSSDT 00000001 BXPC 00000001)\r\r\n[ 0.000000] ACPI: APIC 0x0000000003DFF630 000078 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)\r\r\n[ 0.000000] ACPI: HPET 0x0000000003DFF5F0 000038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)\r\r\n[ 0.000000] ACPI: MCFG 0x0000000003DFF5B0 00003C (v01 BOCHS BXPCMCFG 00000001 BXPC 00000001)\r\r\n[ 0.000000] No NUMA configuration found\r\r\n[ 0.000000] Faking a node at [mem 0x0000000000000000-0x0000000003df9fff]\r\r\n[ 0.000000] NODE_DATA(0) allocated [mem 0x03df5000-0x03df9fff]\r\r\n[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00\r\r\n[ 0.000000] kvm-clock: cpu 0, msr 0:3df1001, primary cpu clock\r\r\n[ 0.000000] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns\r\r\n[ 0.000000] Zone ranges:\r\r\n[ 0.000000] DMA [mem 0x0000000000001000-0x0000000000ffffff]\r\r\n[ 0.000000] DMA32 [mem 0x0000000001000000-0x0000000003df9fff]\r\r\n[ 0.000000] Normal empty\r\r\n[ 0.000000] Device empty\r\r\n[ 0.000000] Movable zone start for each node\r\r\n[ 0.000000] Early memory node ranges\r\r\n[ 0.000000] node 0: [mem 0x0000000000001000-0x000000000009efff]\r\r\n[ 0.000000] node 0: [mem 0x0000000000100000-0x0000000003df9fff]\r\r\n[ 0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x0000000003df9fff]\r\r\n[ 0.000000] ACPI: PM-Timer IO Port: 0xb008\r\r\n[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])\r\r\n[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23\r\r\n[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)\r\r\n[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)\r\r\n[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)\r\r\n[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)\r\r\n[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)\r\r\n[ 0.000000] Using ACPI (MADT) for SMP configuration information\r\r\n[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000\r\r\n[ 0.000000] smpboot: Allowing 1 CPUs, 0 hotplug CPUs\r\r\n[ 0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]\r\r\n[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]\r\r\n[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]\r\r\n[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]\r\r\n[ 0.000000] e820: [mem 0x03e00000-0xafffffff] available for PCI devices\r\r\n[ 0.000000] Booting paravirtualized kernel on KVM\r\r\n[ 0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns\r\r\n[ 0.000000] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:1 nr_node_ids:1\r\r\n[ 0.000000] PERCPU: Embedded 33 pages/cpu @ffff880003600000 s98008 r8192 d28968 u2097152\r\r\n[ 0.000000] KVM setup async 
PF for cpu 0\r\r\n[ 0.000000] kvm-stealtime: cpu 0, msr 360d940\r\r\n[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 15499\r\r\n[ 0.000000] Policy zone: DMA32\r\r\n[ 0.000000] Kernel command line: LABEL=cirros-rootfs ro console=tty1 console=ttyS0\r\r\n[ 0.000000] PID hash table entries: 256 (order: -1, 2048 bytes)\r\r\n[ 0.000000] Memory: 37276K/63072K available (8368K kernel code, 1280K rwdata, 3928K rodata, 1480K init, 1292K bss, 25796K reserved, 0K cma-reserved)\r\r\n[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1\r\r\n[ 0.000000] Hierarchical RCU implementation.\r\r\n[ 0.000000] \tBuild-time adjustment of leaf fanout to 64.\r\r\n[ 0.000000] \tRCU restricting CPUs from NR_CPUS=256 to nr_cpu_ids=1.\r\r\n[ 0.000000] RCU: Adjusting geometry for rcu_fanout_leaf=64, nr_cpu_ids=1\r\r\n[ 0.000000] NR_IRQS:16640 nr_irqs:256 16\r\r\n[ 0.000000] Console: colour VGA+ 80x25\r\r\n[ 0.000000] console [tty1] enabled\r\r\n[ 0.000000] console [ttyS0] enabled\r\r\n[ 0.000000] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns\r\r\n[ 0.000000] tsc: Detected 2099.998 MHz processor\r\r\n[ 0.012000] Calibrating delay loop (skipped) preset value.. 4199.99 BogoMIPS (lpj=8399992)\r\r\n[ 0.016034] pid_max: default: 32768 minimum: 301\r\r\n[ 0.020047] ACPI: Core revision 20150930\r\r\n[ 0.025552] ACPI: 2 ACPI AML tables successfully acquired and loaded\r\r\n[ 0.036079] Security Framework initialized\r\r\n[ 0.040052] Yama: becoming mindful.\r\r\n[ 0.044068] AppArmor: AppArmor initialized\r\r\n[ 0.048091] Dentry cache hash table entries: 8192 (order: 4, 65536 bytes)\r\r\n[ 0.052116] Inode-cache hash table entries: 4096 (order: 3, 32768 bytes)\r\r\n[ 0.056093] Mount-cache hash table entries: 512 (order: 0, 4096 bytes)\r\r\n[ 0.060043] Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)\r\r\n[ 0.068492] Initializing cgroup subsys io\r\r\n[ 0.072045] Initializing cgroup subsys memory\r\r\n[ 0.076050] Initializing cgroup subsys devices\r\r\n[ 0.080042] Initializing cgroup subsys freezer\r\r\n[ 0.084041] Initializing cgroup subsys net_cls\r\r\n[ 0.088041] Initializing cgroup subsys perf_event\r\r\n[ 0.092071] Initializing cgroup subsys net_prio\r\r\n[ 0.096041] Initializing cgroup subsys hugetlb\r\r\n[ 0.100039] Initializing cgroup subsys pids\r\r\n[ 0.108351] mce: CPU supports 10 MCE banks\r\r\n[ 0.113299] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0\r\r\n[ 0.116036] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0\r\r\n[ 0.408942] Freeing SMP alternatives memory: 28K (ffffffff820b4000 - ffffffff820bb000)\r\r\n[ 0.606100] ftrace: allocating 31920 entries in 125 pages\r\r\n[ 0.633011] smpboot: Max logical packages: 1\r\r\n[ 0.636124] smpboot: APIC(0) Converting physical 0 to logical package 0\r\r\n[ 0.646854] x2apic enabled\r\r\n[ 0.648124] Switched APIC routing to physical x2apic.\r\r\n[ 0.669824] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1\r\r\n[ 0.672000] smpboot: CPU0: AMD QEMU Virtual CPU version 2.5+ (family: 0x6, model: 0x6, stepping: 0x3)\r\r\n[ 0.688214] Performance Events: AMD PMU driver.\r\r\n[ 0.696119] ... version: 0\r\r\n[ 0.700008] ... bit width: 48\r\r\n[ 0.704009] ... generic registers: 4\r\r\n[ 0.708008] ... value mask: 0000ffffffffffff\r\r\n[ 0.712008] ... max period: 00007fffffffffff\r\r\n[ 0.716008] ... fixed-purpose events: 0\r\r\n[ 0.720171] ... 
[Guest serial console capture (CirrOS VM), with the escaped \r\r\n line endings expanded. The kernel boot chatter is abbreviated here: Linux 4.4.0-28-generic brings up a single CPU, ACPI, the PCI host bridge and hotplug slots, xHCI/USB, i8042, virtio-pci and rtc_cmos (system clock set to 2018-04-29 08:24:01 UTC). The guest-side init then proceeds as follows:]
info: initramfs: up at 6.25
modprobe: module virtio_pci not found in modules.dep
modprobe: module virtio_blk not found in modules.dep
modprobe: module virtio_net not found in modules.dep
[ 6.334530] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 10
modprobe: module vfat not found in modules.dep
modprobe: module nls_cp437 not found in modules.dep
info: copying initramfs to /dev/vda1
info: initramfs loading root from /dev/vda1
info: /etc/init.d/rc.sysinit: up at 8.25
info: container: none
Starting logging: OK
modprobe: module virtio_pci not found in modules.dep
modprobe: module virtio_blk not found in modules.dep
modprobe: module virtio_net not found in modules.dep
modprobe: module vfat not found in modules.dep
modprobe: module nls_cp437 not found in modules.dep
WARN: /etc/rc3.d/S10-load-modules failed
Initializing random number generator... [ 8.517905] random: dd urandom read with 12 bits of entropy available
done.
Starting acpid: OK
mcb [info=/dev/vdb dev=/dev/vdb target=tmp unmount=true callback=mcu_drop_dev_arg]: mount '/dev/vdb' '-o,ro' '/tmp/nocloud.mp.lVB3Q3'
mcudda: fn=cp dev=/dev/vdb mp=/tmp/nocloud.mp.lVB3Q3 : -a /tmp/cirros-ds.i62sNB/nocloud/raw
Starting network...
udhcpc (v1.23.2) started
Sending discover...
Sending select for 10.129.0.61...
Lease of 10.129.0.61 obtained, lease time 86313600
route: SIOCADDRT: File exists
WARN: failed: route add -net "0.0.0.0/0" gw "10.129.0.1"
Top of dropbear init script
Starting dropbear sshd: OK
GROWROOT: NOCHANGE: partition 1 is size 71647.
it cannot be grown
/dev/root resized successfully [took 0.05s]
printed from cloud-init userdata
 [printed from cloud-init userdata]}]"
• [SLOW TEST:20.266 seconds]
VNC  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:35
  A new VM  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46
    with VNC connection  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:47
      should allow accessing the VNC device  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:48
------------------------------
• [SLOW TEST:40.161 seconds]
Console  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VM  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      with a cirros image  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
        should return that we are running cirros  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67
------------------------------
• [SLOW TEST:39.674 seconds]
Console  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VM  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      with a fedora image  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76
        should return that we are running fedora  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77
------------------------------
• Failure [177.377 seconds]
Console  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VM  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      should be able to reconnect to console multiple times [It]  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86

      Expected error:
          : 160000000000 expect: timer expired after 160 seconds
      not to have occurred

      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:104
------------------------------
STEP: Creating a new VM
level=info timestamp=2018-04-29T08:25:49.001567Z pos=utils.go:224 component=tests msg="Created virtual machine pod virt-launcher-testvmstml4-99n7h"
level=info timestamp=2018-04-29T08:26:03.468879Z pos=utils.go:224 component=tests msg="Pod owner ship transfered to the node virt-launcher-testvmstml4-99n7h"
level=info timestamp=2018-04-29T08:26:04.527294Z pos=utils.go:224 component=tests msg="VM defined."
level=info timestamp=2018-04-29T08:26:04.546622Z pos=utils.go:224 component=tests msg="VM started."
STEP: Expecting a VM console
STEP: Checking that the console output equals to expected one
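The reconnect spec above timed out after 160 seconds waiting on the serial console of virt-launcher-testvmstml4-99n7h. A minimal manual check of the same path is sketched below; it assumes virtctl is on the PATH and pointed at this cluster, and uses a placeholder VM name (testvm) in the kube-system namespace used by this run. Flags and resource names can differ between KubeVirt versions, so treat this as a sketch, not the suite's own method.

# Sketch: poke at the serial console the failing spec drives through an expect session.
# "testvm" is a placeholder; the launcher pod name is taken from the log above.
kubectl -n kube-system get virtualmachines                    # list VM objects (resource name depends on the KubeVirt API version)
kubectl -n kube-system get pods | grep virt-launcher          # find the launcher pod backing the VM
virtctl console testvm -n kube-system                         # attach to the guest serial console
kubectl -n kube-system logs virt-launcher-testvmstml4-99n7h   # launcher logs often explain a hung console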
S [SKIPPING] in Spec Setup (BeforeEach) [0.016 seconds]
Windows VM  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to start a vm [BeforeEach]  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:131
  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1100
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.009 seconds]
Windows VM  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to stop a running vm [BeforeEach]  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:137
  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1100
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.011 seconds]
Windows VM  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach]  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:148
    should have correct UUID  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:190
  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1100
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.012 seconds]
Windows VM  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach]  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:148
    should have pod IP  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:206
  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1100
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.010 seconds]
Windows VM  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach]  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:224
    should succeed to start a vm  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:240
  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1100
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.011 seconds]
Windows VM  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach]  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:224
    should succeed to stop a vm  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:248
  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1100
------------------------------
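The six Windows specs above are skipped because the suite cannot find a PersistentVolumeClaim named disk-windows. The sketch below shows the shape of such a claim; the namespace, size and access mode are assumptions, and the claim must ultimately bind to a volume that already carries a Windows disk image, so adapt it to the storage available in the environment.

# Sketch: a PVC named disk-windows. Namespace, size and accessModes are placeholders;
# the bound volume must already contain the Windows image the specs would boot from.
cat <<'EOF' | kubectl -n kube-system apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-windows
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
EOF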
•
------------------------------
• [SLOW TEST:19.833 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    should start it  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:65
------------------------------
• [SLOW TEST:18.614 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    should attach virt-launcher to it  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:73
------------------------------
••••
------------------------------
• [SLOW TEST:17.035 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    with user-data  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:149
      without k8s secret  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:150
        should retry starting the VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:151
------------------------------
• [SLOW TEST:40.892 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    with user-data  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:149
      without k8s secret  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:150
        should log warning and proceed once the secret is there  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:182
------------------------------
• [SLOW TEST:70.991 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    when virt-launcher crashes  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:230
      should be stopped and have Failed phase  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:231
------------------------------
• [SLOW TEST:42.668 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    when virt-handler crashes  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:260
      should recover and continue management  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:261
------------------------------
• [SLOW TEST:87.431 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    when virt-handler is responsive  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:297
      should indicate that a node is ready for vms  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:298
------------------------------
• [SLOW TEST:64.394 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    when virt-handler is not responsive  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:328
      the node controller should react  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:361
------------------------------
S [SKIPPING] [1.133 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    with non default namespace  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:401
      should log libvirt start and stop lifecycle events of the domain  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-default [It]  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
  Skip log query tests for JENKINS ci test environment
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:406
------------------------------
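The crash-recovery and node-readiness specs above can be approximated by hand on a running cluster. The sketch below assumes virt-handler is deployed as a DaemonSet whose pods carry the label kubevirt.io=virt-handler; that label and the namespace are assumptions, so substitute pod names directly if the deployment differs.

# Sketch: simulate a virt-handler crash and watch it come back.
# Label selector and namespace are assumptions; adjust to the actual deployment.
kubectl -n kube-system get pods -l kubevirt.io=virt-handler -o wide
kubectl -n kube-system delete pod -l kubevirt.io=virt-handler
kubectl -n kube-system get pods -l kubevirt.io=virt-handler -w   # the DaemonSet should reschedule a fresh handler on each node
kubectl get nodes                                                # nodes should remain Ready for VM scheduling throughout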
S [SKIPPING] [0.859 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    with non default namespace  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:401
      should log libvirt start and stop lifecycle events of the domain  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-alternative [It]  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
  Skip log query tests for JENKINS ci test environment
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:406
------------------------------
•
------------------------------
• [SLOW TEST:23.480 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Delete a VM's Pod  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:476
    should result in the VM moving to a finalized state  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:477
------------------------------
• [SLOW TEST:33.756 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Delete a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:509
    with an active pod.  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:510
      should result in pod being terminated  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:511
------------------------------
• [SLOW TEST:24.781 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Delete a VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:509
    with grace period greater than 0  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:536
      should run graceful shutdown  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:537
------------------------------
• [SLOW TEST:32.200 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Killed VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:588
    should be in Failed phase  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:589
------------------------------
• [SLOW TEST:29.571 seconds]
Vmlifecycle  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Killed VM  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:588
    should be left alone by virt-handler  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:617
------------------------------
• [SLOW TEST:38.370 seconds]
LeaderElection  /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43
  Start a VM  /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53
    when the controller pod is not running  /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54
      should success  /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55
------------------------------
•
------------------------------
• [SLOW TEST:56.424 seconds]
Storage  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:39
  Starting a VM  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:119
    with Alpine PVC  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:120
      should be successfully started  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:121
------------------------------
• [SLOW TEST:113.646 seconds]
Storage  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:39
  Starting a VM  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:119
    with Alpine PVC  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:120
      should be successfully started and stopped multiple times  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:141
------------------------------
• [SLOW TEST:38.923 seconds]
Storage  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:39
  Starting a VM  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:119
    With an emptyDisk defined  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:173
      should create a writeable emptyDisk with the right capacity  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:175
------------------------------
• [SLOW TEST:62.611 seconds]
Storage  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:39
  Starting a VM  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:119
    With ephemeral alpine PVC  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:223
      should be successfully started  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:225
------------------------------
• [SLOW TEST:109.129 seconds]
Storage  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:39
  Starting a VM  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:119
    With ephemeral alpine PVC  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:223
      should not persist data  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:245
------------------------------
• [SLOW TEST:176.823 seconds]
Storage  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:39
  Starting a VM  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:119
    With VM with two PVCs  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:305
      should start vm multiple times  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:317
------------------------------
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...

Summarizing 4 Failures:

[Fail] Configurations VM definition with 3 CPU cores [It] should report 3 cpu cores under guest OS
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:226
[Fail] CloudInit UserData A new VM with cloudInitNoCloud userData source [It] should process provided cloud-init data
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:226
[Fail] CloudInit UserData A new VM [It] should take user-data from k8s secret
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:73
[Fail] Console A new VM with a serial console [It] should be able to reconnect to console multiple times
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:104

Ran 82 of 90 Specs in 2792.305 seconds
FAIL! -- 78 Passed | 4 Failed | 0 Pending | 8 Skipped
--- FAIL: TestTests (2792.31s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
18bca7fc8c76
ce0e3cd6d74f
912e75f7ab3f
38582596df9a
1fbfe0e2030c
18bca7fc8c76
ce0e3cd6d74f
912e75f7ab3f
38582596df9a
1fbfe0e2030c
kubevirt-functional-tests-openshift-release-crio1-crio-node01
kubevirt-functional-tests-openshift-release-crio1-crio-node02