+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release
+ [[ openshift-release =~ openshift-.* ]]
+ export PROVIDER=os-3.9.0-alpha.4
+ PROVIDER=os-3.9.0-alpha.4
+ export VAGRANT_NUM_NODES=1
+ VAGRANT_NUM_NODES=1
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Unable to find image 'kubevirtci/os-3.9@sha256:c214267c1252e51f5ea845ac7868dbc219c63627e9f96ec30cc0a8e9e6e9fc0d' locally
Trying to pull repository docker.io/kubevirtci/os-3.9 ...
sha256:c214267c1252e51f5ea845ac7868dbc219c63627e9f96ec30cc0a8e9e6e9fc0d: Pulling from docker.io/kubevirtci/os-3.9
a8ee583972c2: Already exists
dd50e5a4fc23: Already exists
d867b8969b5b: Already exists
bc770f22e8ac: Already exists
d22f17305a59: Already exists
74c8d4bbaa28: Already exists
a1be06ea19b0: Already exists
95ebfdc88880: Pulling fs layer
95ebfdc88880: Verifying Checksum
95ebfdc88880: Download complete
95ebfdc88880: Pull complete
Digest: sha256:c214267c1252e51f5ea845ac7868dbc219c63627e9f96ec30cc0a8e9e6e9fc0d
Status: Downloaded newer image for docker.io/kubevirtci/os-3.9@sha256:c214267c1252e51f5ea845ac7868dbc219c63627e9f96ec30cc0a8e9e6e9fc0d
kubevirt-functional-tests-openshift-release1_registry
WARNING: You're not using the default seccomp profile
kubevirt-functional-tests-openshift-release1-node02
2018/04/08 05:57:04 Waiting for host: 192.168.66.102:22
2018/04/08 05:57:07 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/04/08 05:57:15 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/04/08 05:57:23 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/04/08 05:57:28 Connected to tcp://192.168.66.102:22
Removed symlink /etc/systemd/system/docker.service.wants/origin-master-api.service.
Removed symlink /etc/systemd/system/origin-node.service.wants/origin-master-api.service.
Removed symlink /etc/systemd/system/docker.service.wants/origin-master-controllers.service.
kubevirt-functional-tests-openshift-release1-node01
2018/04/08 05:57:35 Waiting for host: 192.168.66.101:22
2018/04/08 05:57:38 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/08 05:57:46 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/08 05:57:54 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/08 05:58:02 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/08 05:58:07 Connected to tcp://192.168.66.101:22
The connection to the server node01:8443 was refused - did you specify the right host or port?
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    2d        v1.9.1+a0ce1bc657
PING node02 (192.168.66.102) 56(84) bytes of data.
64 bytes from node02 (192.168.66.102): icmp_seq=1 ttl=64 time=0.925 ms

--- node02 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.925/0.925/0.925/0.000 ms
Found node02. Adding it to the inventory.
ping: node03: Name or service not known PLAY [Populate config host groups] ********************************************* TASK [Load group name mapping variables] *************************************** ok: [localhost] TASK [Evaluate groups - g_etcd_hosts or g_new_etcd_hosts required] ************* skipping: [localhost] TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] ********* skipping: [localhost] TASK [Evaluate groups - g_node_hosts or g_new_node_hosts required] ************* skipping: [localhost] TASK [Evaluate groups - g_lb_hosts required] *********************************** skipping: [localhost] TASK [Evaluate groups - g_nfs_hosts required] ********************************** skipping: [localhost] TASK [Evaluate groups - g_nfs_hosts is single host] **************************** skipping: [localhost] TASK [Evaluate groups - g_glusterfs_hosts required] **************************** skipping: [localhost] TASK [Evaluate groups - Fail if no etcd hosts group is defined] **************** skipping: [localhost] TASK [Evaluate oo_all_hosts] *************************************************** ok: [localhost] => (item=node01) ok: [localhost] => (item=node02) TASK [Evaluate oo_masters] ***************************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_first_master] ************************************************ ok: [localhost] TASK [Evaluate oo_new_etcd_to_config] ****************************************** TASK [Evaluate oo_masters_to_config] ******************************************* ok: [localhost] => (item=node01) TASK [Evaluate oo_etcd_to_config] ********************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_first_etcd] ************************************************** ok: [localhost] TASK [Evaluate oo_etcd_hosts_to_upgrade] *************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_etcd_hosts_to_backup] **************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_nodes_to_config] ********************************************* ok: [localhost] => (item=node02) TASK [Add master to oo_nodes_to_config] **************************************** skipping: [localhost] => (item=node01) TASK [Evaluate oo_lb_to_config] ************************************************ TASK [Evaluate oo_nfs_to_config] *********************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_glusterfs_to_config] ***************************************** TASK [Evaluate oo_etcd_to_migrate] ********************************************* ok: [localhost] => (item=node01) PLAY [Ensure there are new_nodes] ********************************************** TASK [fail] ******************************************************************** skipping: [localhost] TASK [fail] ******************************************************************** skipping: [localhost] PLAY [Initialization Checkpoint Start] ***************************************** TASK [Set install initialization 'In Progress'] ******************************** ok: [node01] PLAY [Populate config host groups] ********************************************* TASK [Load group name mapping variables] *************************************** ok: [localhost] TASK [Evaluate groups - g_etcd_hosts or g_new_etcd_hosts required] ************* skipping: [localhost] TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] ********* skipping: [localhost] TASK [Evaluate groups - g_node_hosts or 
g_new_node_hosts required] ************* skipping: [localhost] TASK [Evaluate groups - g_lb_hosts required] *********************************** skipping: [localhost] TASK [Evaluate groups - g_nfs_hosts required] ********************************** skipping: [localhost] TASK [Evaluate groups - g_nfs_hosts is single host] **************************** skipping: [localhost] TASK [Evaluate groups - g_glusterfs_hosts required] **************************** skipping: [localhost] TASK [Evaluate groups - Fail if no etcd hosts group is defined] **************** skipping: [localhost] TASK [Evaluate oo_all_hosts] *************************************************** ok: [localhost] => (item=node01) ok: [localhost] => (item=node02) TASK [Evaluate oo_masters] ***************************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_first_master] ************************************************ ok: [localhost] TASK [Evaluate oo_new_etcd_to_config] ****************************************** TASK [Evaluate oo_masters_to_config] ******************************************* ok: [localhost] => (item=node01) TASK [Evaluate oo_etcd_to_config] ********************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_first_etcd] ************************************************** ok: [localhost] TASK [Evaluate oo_etcd_hosts_to_upgrade] *************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_etcd_hosts_to_backup] **************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_nodes_to_config] ********************************************* ok: [localhost] => (item=node02) TASK [Add master to oo_nodes_to_config] **************************************** skipping: [localhost] => (item=node01) TASK [Evaluate oo_lb_to_config] ************************************************ TASK [Evaluate oo_nfs_to_config] *********************************************** ok: [localhost] => (item=node01) TASK [Evaluate oo_glusterfs_to_config] ***************************************** TASK [Evaluate oo_etcd_to_migrate] ********************************************* ok: [localhost] => (item=node01) [WARNING]: Could not match supplied host pattern, ignoring: oo_lb_to_config PLAY [Ensure that all non-node hosts are accessible] *************************** TASK [Gathering Facts] ********************************************************* ok: [node01] PLAY [Initialize basic host facts] ********************************************* TASK [Gathering Facts] ********************************************************* ok: [node01] ok: [node02] TASK [openshift_sanitize_inventory : include_tasks] **************************** included: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml for node01, node02 TASK [openshift_sanitize_inventory : Check for usage of deprecated variables] *** skipping: [node01] => (item=openshift_hosted_logging_deploy) skipping: [node01] => (item=openshift_hosted_logging_hostname) skipping: [node01] => (item=openshift_hosted_logging_ops_hostname) skipping: [node02] => (item=openshift_hosted_logging_deploy) skipping: [node01] => (item=openshift_hosted_logging_master_public_url) skipping: [node02] => (item=openshift_hosted_logging_hostname) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_cluster_size) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_ops_cluster_size) skipping: [node01] => (item=openshift_hosted_logging_image_pull_secret) skipping: [node02] => 
(item=openshift_hosted_logging_ops_hostname) skipping: [node01] => (item=openshift_hosted_logging_enable_ops_cluster) skipping: [node02] => (item=openshift_hosted_logging_master_public_url) skipping: [node01] => (item=openshift_hosted_logging_curator_nodeselector) skipping: [node01] => (item=openshift_hosted_logging_curator_ops_nodeselector) skipping: [node01] => (item=openshift_hosted_logging_kibana_nodeselector) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_cluster_size) skipping: [node01] => (item=openshift_hosted_logging_kibana_ops_nodeselector) skipping: [node01] => (item=openshift_hosted_logging_fluentd_nodeselector_label) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_ops_cluster_size) skipping: [node01] => (item=openshift_hosted_logging_journal_source) skipping: [node02] => (item=openshift_hosted_logging_image_pull_secret) skipping: [node01] => (item=openshift_hosted_logging_journal_read_from_head) skipping: [node02] => (item=openshift_hosted_logging_enable_ops_cluster) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_instance_ram) skipping: [node01] => (item=openshift_hosted_logging_storage_labels) skipping: [node02] => (item=openshift_hosted_logging_curator_nodeselector) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_pvc_dynamic) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_pvc_size) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_pvc_prefix) skipping: [node02] => (item=openshift_hosted_logging_curator_ops_nodeselector) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_storage_group) skipping: [node02] => (item=openshift_hosted_logging_kibana_nodeselector) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_nodeselector) skipping: [node02] => (item=openshift_hosted_logging_kibana_ops_nodeselector) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_ops_instance_ram) skipping: [node02] => (item=openshift_hosted_logging_fluentd_nodeselector_label) skipping: [node01] => (item=openshift_hosted_loggingops_storage_labels) skipping: [node02] => (item=openshift_hosted_logging_journal_source) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_ops_pvc_dynamic) skipping: [node02] => (item=openshift_hosted_logging_journal_read_from_head) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_ops_pvc_size) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_ops_pvc_prefix) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_instance_ram) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_storage_group) skipping: [node02] => (item=openshift_hosted_logging_storage_labels) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_ops_nodeselector) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_pvc_dynamic) skipping: [node01] => (item=openshift_hosted_logging_storage_access_modes) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_pvc_size) skipping: [node01] => (item=openshift_hosted_logging_storage_kind) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_pvc_prefix) skipping: [node01] => (item=openshift_hosted_loggingops_storage_kind) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_storage_group) skipping: [node01] => (item=openshift_hosted_logging_storage_host) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_nodeselector) skipping: [node01] => 
(item=openshift_hosted_loggingops_storage_host) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_ops_instance_ram) skipping: [node01] => (item=openshift_hosted_logging_storage_nfs_directory) skipping: [node02] => (item=openshift_hosted_loggingops_storage_labels) skipping: [node01] => (item=openshift_hosted_loggingops_storage_nfs_directory) skipping: [node01] => (item=openshift_hosted_logging_storage_volume_name) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_ops_pvc_dynamic) skipping: [node01] => (item=openshift_hosted_loggingops_storage_volume_name) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_ops_pvc_size) skipping: [node01] => (item=openshift_hosted_logging_storage_volume_size) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_ops_pvc_prefix) skipping: [node01] => (item=openshift_hosted_loggingops_storage_volume_size) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_storage_group) skipping: [node01] => (item=openshift_hosted_logging_enable_ops_cluster) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_ops_nodeselector) skipping: [node01] => (item=openshift_hosted_logging_image_pull_secret) skipping: [node01] => (item=openshift_hosted_logging_curator_nodeselector) skipping: [node02] => (item=openshift_hosted_logging_storage_access_modes) skipping: [node01] => (item=openshift_hosted_logging_curator_ops_nodeselector) skipping: [node02] => (item=openshift_hosted_logging_storage_kind) skipping: [node02] => (item=openshift_hosted_loggingops_storage_kind) skipping: [node01] => (item=openshift_hosted_logging_kibana_nodeselector) skipping: [node02] => (item=openshift_hosted_logging_storage_host) skipping: [node01] => (item=openshift_hosted_logging_kibana_ops_nodeselector) skipping: [node02] => (item=openshift_hosted_loggingops_storage_host) skipping: [node01] => (item=openshift_hosted_logging_ops_hostname) skipping: [node02] => (item=openshift_hosted_logging_storage_nfs_directory) skipping: [node01] => (item=openshift_hosted_logging_fluentd_nodeselector_label) skipping: [node02] => (item=openshift_hosted_loggingops_storage_nfs_directory) skipping: [node01] => (item=openshift_hosted_logging_journal_source) skipping: [node01] => (item=openshift_hosted_logging_journal_read_from_head) skipping: [node02] => (item=openshift_hosted_logging_storage_volume_name) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_instance_ram) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_nodeselector) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_ops_instance_ram) skipping: [node02] => (item=openshift_hosted_loggingops_storage_volume_name) skipping: [node01] => (item=openshift_hosted_logging_elasticsearch_ops_nodeselector) skipping: [node01] => (item=openshift_hosted_logging_storage_access_modes) skipping: [node01] => (item=openshift_hosted_logging_deployer_prefix) skipping: [node02] => (item=openshift_hosted_logging_storage_volume_size) skipping: [node01] => (item=openshift_hosted_logging_deployer_version) skipping: [node01] => (item=openshift_hosted_metrics_deploy) skipping: [node01] => (item=openshift_hosted_metrics_storage_kind) skipping: [node01] => (item=openshift_hosted_metrics_storage_access_modes) skipping: [node02] => (item=openshift_hosted_loggingops_storage_volume_size) skipping: [node01] => (item=openshift_hosted_metrics_storage_host) skipping: [node02] => (item=openshift_hosted_logging_enable_ops_cluster) skipping: [node01] => 
(item=openshift_hosted_metrics_storage_nfs_directory) skipping: [node02] => (item=openshift_hosted_logging_image_pull_secret) skipping: [node01] => (item=openshift_hosted_metrics_storage_volume_name) skipping: [node01] => (item=openshift_hosted_metrics_storage_volume_size) skipping: [node02] => (item=openshift_hosted_logging_curator_nodeselector) skipping: [node01] => (item=openshift_hosted_metrics_storage_labels) skipping: [node02] => (item=openshift_hosted_logging_curator_ops_nodeselector) skipping: [node01] => (item=openshift_hosted_metrics_deployer_prefix) skipping: [node02] => (item=openshift_hosted_logging_kibana_nodeselector) skipping: [node01] => (item=openshift_hosted_metrics_deployer_version) skipping: [node02] => (item=openshift_hosted_logging_kibana_ops_nodeselector) skipping: [node02] => (item=openshift_hosted_logging_ops_hostname) skipping: [node02] => (item=openshift_hosted_logging_fluentd_nodeselector_label) skipping: [node02] => (item=openshift_hosted_logging_journal_source) skipping: [node02] => (item=openshift_hosted_logging_journal_read_from_head) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_instance_ram) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_nodeselector) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_ops_instance_ram) skipping: [node02] => (item=openshift_hosted_logging_elasticsearch_ops_nodeselector) skipping: [node02] => (item=openshift_hosted_logging_storage_access_modes) skipping: [node02] => (item=openshift_hosted_logging_deployer_prefix) skipping: [node02] => (item=openshift_hosted_logging_deployer_version) skipping: [node02] => (item=openshift_hosted_metrics_deploy) skipping: [node02] => (item=openshift_hosted_metrics_storage_kind) skipping: [node02] => (item=openshift_hosted_metrics_storage_access_modes) skipping: [node02] => (item=openshift_hosted_metrics_storage_host) skipping: [node02] => (item=openshift_hosted_metrics_storage_nfs_directory) skipping: [node02] => (item=openshift_hosted_metrics_storage_volume_name) skipping: [node02] => (item=openshift_hosted_metrics_storage_volume_size) skipping: [node02] => (item=openshift_hosted_metrics_storage_labels) skipping: [node02] => (item=openshift_hosted_metrics_deployer_prefix) skipping: [node02] => (item=openshift_hosted_metrics_deployer_version) TASK [openshift_sanitize_inventory : debug] ************************************ skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : set_stats] ******************************** skipping: [node01] TASK [openshift_sanitize_inventory : Assign deprecated variables to correct counterparts] *** included: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_logging.yml for node01, node02 included: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_metrics.yml for node01, node02 TASK [openshift_sanitize_inventory : conditional_set_fact] ********************* ok: [node01] ok: [node02] TASK [openshift_sanitize_inventory : set_fact] ********************************* ok: [node01] ok: [node02] TASK [openshift_sanitize_inventory : conditional_set_fact] ********************* ok: [node01] ok: [node02] TASK [openshift_sanitize_inventory : Standardize on latest variable names] ***** ok: [node01] ok: [node02] TASK [openshift_sanitize_inventory : Normalize openshift_release] ************** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] *** skipping: [node01] skipping: 
[node02] TASK [openshift_sanitize_inventory : include_tasks] **************************** included: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml for node01, node02 TASK [openshift_sanitize_inventory : Ensure that openshift_use_dnsmasq is true] *** skipping: [node02] skipping: [node01] TASK [openshift_sanitize_inventory : Ensure that openshift_node_dnsmasq_install_network_manager_hook is true] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : set_fact] ********************************* skipping: [node01] => (item=openshift_hosted_etcd_storage_kind) skipping: [node02] => (item=openshift_hosted_etcd_storage_kind) TASK [openshift_sanitize_inventory : Ensure that dynamic provisioning is set if using dynamic storage] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure clusterid is set along with the cloudprovider] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure ansible_service_broker_remove and ansible_service_broker_install are mutually exclusive] *** skipping: [node01] skipping: [node02] TASK [openshift_sanitize_inventory : Ensure template_service_broker_remove and template_service_broker_install are mutually exclusive] *** skipping: [node02] skipping: [node01] TASK [openshift_sanitize_inventory : Ensure that all requires vsphere configuration variables are set] *** skipping: [node01] skipping: [node02] TASK [Detecting Operating System from ostree_booted] *************************** ok: [node01] ok: [node02] TASK [set openshift_deployment_type if unset] ********************************** skipping: [node01] skipping: [node02] TASK [initialize_facts set fact openshift_is_atomic and openshift_is_containerized] *** ok: [node01] ok: [node02] TASK [Determine Atomic Host Docker Version] ************************************ skipping: [node01] skipping: [node02] TASK [assert atomic host docker version is 1.12 or later] ********************** skipping: [node01] skipping: [node02] PLAY [Initialize special first-master variables] ******************************* TASK [Gathering Facts] ********************************************************* ok: [node01] TASK [set_fact] **************************************************************** ok: [node01] PLAY [Disable web console if required] ***************************************** TASK [set_fact] **************************************************************** skipping: [node01] PLAY [Install packages necessary for installer] ******************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [Ensure openshift-ansible installer package deps are installed] *********** ok: [node02] => (item=iproute) ok: [node02] => (item=dbus-python) ok: [node02] => (item=PyYAML) ok: [node02] => (item=python-ipaddress) ok: [node02] => (item=yum-utils) TASK [Ensure various deps for running system containers are installed] ********* skipping: [node02] => (item=atomic) skipping: [node02] => (item=ostree) skipping: [node02] => (item=runc) PLAY [Initialize cluster facts] ************************************************ TASK [Gathering Facts] 
********************************************************* ok: [node01] ok: [node02] TASK [Gather Cluster facts] **************************************************** ok: [node01] changed: [node02] TASK [Set fact of no_proxy_internal_hostnames] ********************************* skipping: [node01] skipping: [node02] TASK [Initialize openshift.node.sdn_mtu] *************************************** ok: [node02] ok: [node01] PLAY [Determine openshift_version to configure on first master] **************** TASK [Gathering Facts] ********************************************************* skipping: [node01] TASK [include_role] ************************************************************ skipping: [node01] TASK [debug] ******************************************************************* skipping: [node01] PLAY [Set openshift_version for etcd, node, and master hosts] ****************** TASK [Gathering Facts] ********************************************************* skipping: [node02] TASK [set_fact] **************************************************************** skipping: [node02] PLAY [Ensure the requested version packages are available.] ******************** TASK [Gathering Facts] ********************************************************* skipping: [node02] TASK [include_role] ************************************************************ skipping: [node02] PLAY [Verify Requirements] ***************************************************** TASK [Gathering Facts] ********************************************************* ok: [node01] TASK [Run variable sanity checks] ********************************************** ok: [node01] PLAY [Initialization Checkpoint End] ******************************************* TASK [Set install initialization 'Complete'] *********************************** ok: [node01] PLAY [Validate node hostnames] ************************************************* TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [Query DNS for IP address of node02] ************************************** ok: [node02] TASK [Validate openshift_hostname when defined] ******************************** skipping: [node02] TASK [Validate openshift_ip exists on node when defined] *********************** skipping: [node02] PLAY [Setup yum repositories for all hosts] ************************************ TASK [rhel_subscribe : fail] *************************************************** skipping: [node02] TASK [rhel_subscribe : Install Red Hat Subscription manager] ******************* skipping: [node02] TASK [rhel_subscribe : Is host already registered?] 
**************************** skipping: [node02] TASK [rhel_subscribe : Register host] ****************************************** skipping: [node02] TASK [rhel_subscribe : fail] *************************************************** skipping: [node02] TASK [rhel_subscribe : Determine if OpenShift Pool Already Attached] *********** skipping: [node02] TASK [rhel_subscribe : Attach to OpenShift Pool] ******************************* skipping: [node02] TASK [rhel_subscribe : include_tasks] ****************************************** skipping: [node02] TASK [openshift_repos : openshift_repos detect ostree] ************************* ok: [node02] TASK [openshift_repos : Ensure libselinux-python is installed] ***************** ok: [node02] TASK [openshift_repos : Remove openshift_additional.repo file] ***************** ok: [node02] TASK [openshift_repos : Create any additional repos that are defined] ********** TASK [openshift_repos : include_tasks] ***************************************** skipping: [node02] TASK [openshift_repos : include_tasks] ***************************************** included: /root/openshift-ansible/roles/openshift_repos/tasks/centos_repos.yml for node02 TASK [openshift_repos : Configure origin gpg keys] ***************************** ok: [node02] TASK [openshift_repos : Configure correct origin release repository] *********** ok: [node02] => (item=/root/openshift-ansible/roles/openshift_repos/templates/CentOS-OpenShift-Origin.repo.j2) TASK [openshift_repos : Ensure clean repo cache in the event repos have been changed manually] *** changed: [node02] => { "msg": "First run of openshift_repos" } TASK [openshift_repos : Record that openshift_repos already ran] *************** ok: [node02] RUNNING HANDLER [openshift_repos : refresh cache] ****************************** changed: [node02] PLAY [Configure os_firewall] *************************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [os_firewall : Detecting Atomic Host Operating System] ******************** ok: [node02] TASK [os_firewall : Set fact r_os_firewall_is_atomic] ************************** ok: [node02] TASK [os_firewall : include_tasks] ********************************************* skipping: [node02] TASK [os_firewall : include_tasks] ********************************************* included: /root/openshift-ansible/roles/os_firewall/tasks/iptables.yml for node02 TASK [os_firewall : Ensure firewalld service is not enabled] ******************* ok: [node02] TASK [os_firewall : Wait 10 seconds after disabling firewalld] ***************** skipping: [node02] TASK [os_firewall : Install iptables packages] ********************************* ok: [node02] => (item=iptables) ok: [node02] => (item=iptables-services) TASK [os_firewall : Start and enable iptables service] ************************* ok: [node02 -> node02] => (item=node02) TASK [os_firewall : need to pause here, otherwise the iptables service starting can sometimes cause ssh to fail] *** skipping: [node02] PLAY [create oo_hosts_containerized_managed_true host group] ******************* TASK [Gathering Facts] ********************************************************* ok: [node01] TASK [group_by] **************************************************************** ok: [node01] PLAY [oo_nodes_to_config] ****************************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [container_runtime : Setup the docker-storage for 
overlay] **************** skipping: [node02] PLAY [create oo_hosts_containerized_managed_true host group] ******************* TASK [Gathering Facts] ********************************************************* ok: [node01] TASK [group_by] **************************************************************** ok: [node01] PLAY [oo_nodes_to_config] ****************************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [openshift_excluder : Install excluders] ********************************** included: /root/openshift-ansible/roles/openshift_excluder/tasks/install.yml for node02 TASK [openshift_excluder : Install docker excluder - yum] ********************** skipping: [node02] TASK [openshift_excluder : Install docker excluder - dnf] ********************** skipping: [node02] TASK [openshift_excluder : Install openshift excluder - yum] ******************* skipping: [node02] TASK [openshift_excluder : Install openshift excluder - dnf] ******************* skipping: [node02] TASK [openshift_excluder : set_fact] ******************************************* ok: [node02] TASK [openshift_excluder : Enable excluders] *********************************** included: /root/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml for node02 TASK [openshift_excluder : Check for docker-excluder] ************************** ok: [node02] TASK [openshift_excluder : Enable docker excluder] ***************************** skipping: [node02] TASK [openshift_excluder : Check for openshift excluder] *********************** ok: [node02] TASK [openshift_excluder : Enable openshift excluder] ************************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** included: /root/openshift-ansible/roles/container_runtime/tasks/common/pre.yml for node02 TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Add enterprise registry, if necessary] *************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Get current installed Docker version] **************** ok: [node02] TASK [container_runtime : include_tasks] *************************************** included: /root/openshift-ansible/roles/container_runtime/tasks/docker_sanity.yml for node02 TASK [container_runtime : Error out if Docker pre-installed but too old] ******* skipping: [node02] TASK [container_runtime : Error out if requested Docker is too old] ************ skipping: [node02] TASK [container_runtime : Fail if Docker version requested but downgrade is required] *** skipping: [node02] TASK [container_runtime : Error out if attempting to upgrade Docker across the 1.10 boundary] *** skipping: [node02] TASK [container_runtime : Install Docker] ************************************** skipping: [node02] TASK [container_runtime : Ensure docker.service.d directory exists] ************ ok: [node02] TASK [container_runtime : Configure Docker service unit file] ****************** ok: [node02] TASK [container_runtime : stat] ************************************************ ok: [node02] TASK [container_runtime : Set registry params] ********************************* skipping: [node02] => (item={u'reg_conf_var': u'ADD_REGISTRY', u'reg_flag': u'--add-registry', u'reg_fact_val': []}) skipping: [node02] => (item={u'reg_conf_var': u'BLOCK_REGISTRY', u'reg_flag': u'--block-registry', 
u'reg_fact_val': []}) skipping: [node02] => (item={u'reg_conf_var': u'INSECURE_REGISTRY', u'reg_flag': u'--insecure-registry', u'reg_fact_val': []}) TASK [container_runtime : Place additional/blocked/insecure registries in /etc/containers/registries.conf] *** skipping: [node02] TASK [container_runtime : Set Proxy Settings] ********************************** skipping: [node02] => (item={u'reg_conf_var': u'HTTP_PROXY', u'reg_fact_val': u''}) skipping: [node02] => (item={u'reg_conf_var': u'HTTPS_PROXY', u'reg_fact_val': u''}) skipping: [node02] => (item={u'reg_conf_var': u'NO_PROXY', u'reg_fact_val': u''}) TASK [container_runtime : Set various Docker options] ************************** ok: [node02] TASK [container_runtime : stat] ************************************************ ok: [node02] TASK [container_runtime : Configure Docker Network OPTIONS] ******************** ok: [node02] TASK [container_runtime : Detect if docker is already started] ***************** ok: [node02] TASK [container_runtime : Start the Docker service] **************************** ok: [node02] TASK [container_runtime : set_fact] ******************************************** ok: [node02] TASK [container_runtime : include_tasks] *************************************** included: /root/openshift-ansible/roles/container_runtime/tasks/common/post.yml for node02 TASK [container_runtime : Ensure /var/lib/containers exists] ******************* ok: [node02] TASK [container_runtime : Fix SELinux Permissions on /var/lib/containers] ****** ok: [node02] TASK [container_runtime : include_tasks] *************************************** included: /root/openshift-ansible/roles/container_runtime/tasks/registry_auth.yml for node02 TASK [container_runtime : Check for credentials file for registry auth] ******** skipping: [node02] TASK [container_runtime : Create credentials for docker cli registry auth] ***** skipping: [node02] TASK [container_runtime : Create credentials for docker cli registry auth (alternative)] *** skipping: [node02] TASK [container_runtime : stat the docker data dir] **************************** ok: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Fail quickly if openshift_docker_options are set] **** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Install Docker so we can use the client] ************* skipping: [node02] TASK [container_runtime : Disable Docker] ************************************** skipping: [node02] TASK [container_runtime : Ensure proxies are in the atomic.conf] *************** skipping: [node02] TASK [container_runtime : debug] *********************************************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Pre-pull Container Engine System Container image] **** skipping: [node02] TASK [container_runtime : Ensure container-engine.service.d directory exists] *** skipping: [node02] TASK [container_runtime : Ensure /etc/docker directory exists] ***************** skipping: [node02] TASK [container_runtime : Install Container Engine System Container] *********** skipping: [node02] TASK [container_runtime : Configure Container Engine Service File] ************* skipping: [node02] TASK [container_runtime : Configure 
Container Engine] ************************** skipping: [node02] TASK [container_runtime : Start the Container Engine service] ****************** skipping: [node02] TASK [container_runtime : set_fact] ******************************************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Check we are not using node as a Docker container with CRI-O] *** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] TASK [container_runtime : Check that overlay is in the kernel] ***************** skipping: [node02] TASK [container_runtime : Add overlay to modprobe.d] *************************** skipping: [node02] TASK [container_runtime : Manually modprobe overlay into the kernel] *********** skipping: [node02] TASK [container_runtime : Enable and start systemd-modules-load] *************** skipping: [node02] TASK [container_runtime : Ensure proxies are in the atomic.conf] *************** skipping: [node02] TASK [container_runtime : debug] *********************************************** skipping: [node02] TASK [container_runtime : Pre-pull CRI-O System Container image] *************** skipping: [node02] TASK [container_runtime : Install CRI-O System Container] ********************** skipping: [node02] TASK [container_runtime : Remove CRI-O default configuration files] ************ skipping: [node02] => (item=/etc/cni/net.d/200-loopback.conf) skipping: [node02] => (item=/etc/cni/net.d/100-crio-bridge.conf) TASK [container_runtime : Create the CRI-O configuration] ********************** skipping: [node02] TASK [container_runtime : Ensure CNI configuration directory exists] *********** skipping: [node02] TASK [container_runtime : Add iptables allow rules] **************************** skipping: [node02] => (item={u'port': u'10010/tcp', u'service': u'crio'}) TASK [container_runtime : Remove iptables rules] ******************************* TASK [container_runtime : Add firewalld allow rules] *************************** skipping: [node02] => (item={u'port': u'10010/tcp', u'service': u'crio'}) TASK [container_runtime : Remove firewalld allow rules] ************************ TASK [container_runtime : Configure the CNI network] *************************** skipping: [node02] TASK [container_runtime : Create /etc/sysconfig/crio-storage] ****************** skipping: [node02] TASK [container_runtime : Create /etc/sysconfig/crio-network] ****************** skipping: [node02] TASK [container_runtime : Start the CRI-O service] ***************************** skipping: [node02] TASK [container_runtime : include_tasks] *************************************** skipping: [node02] PLAY [Determine openshift_version to configure on first master] **************** TASK [Gathering Facts] ********************************************************* ok: [node01] TASK [include_role] ************************************************************ TASK [openshift_version : Use openshift.common.version fact as version to configure if already installed] *** ok: [node01] TASK [openshift_version : include_tasks] *************************************** included: /root/openshift-ansible/roles/openshift_version/tasks/first_master_containerized_version.yml for node01 TASK [openshift_version : Set containerized version to configure if openshift_image_tag specified] *** skipping: [node01] 
TASK [openshift_version : Set containerized version to configure if openshift_release specified] *** skipping: [node01] TASK [openshift_version : Lookup latest containerized version if no version specified] *** skipping: [node01] TASK [openshift_version : set_fact] ******************************************** skipping: [node01] TASK [openshift_version : set_fact] ******************************************** skipping: [node01] TASK [openshift_version : Set precise containerized version to configure if openshift_release specified] *** skipping: [node01] TASK [openshift_version : set_fact] ******************************************** skipping: [node01] TASK [openshift_version : set_fact] ******************************************** ok: [node01] TASK [openshift_version : debug] *********************************************** ok: [node01] => { "msg": "openshift_pkg_version was not defined. Falling back to -3.9.0" } TASK [openshift_version : set_fact] ******************************************** ok: [node01] TASK [openshift_version : debug] *********************************************** skipping: [node01] TASK [openshift_version : set_fact] ******************************************** skipping: [node01] TASK [debug] ******************************************************************* ok: [node01] => { "msg": "openshift_pkg_version set to -3.9.0" } PLAY [Set openshift_version for etcd, node, and master hosts] ****************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [set_fact] **************************************************************** ok: [node02] PLAY [Ensure the requested version packages are available.] ******************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [include_role] ************************************************************ TASK [openshift_version : Check openshift_version for rpm installation] ******** included: /root/openshift-ansible/roles/openshift_version/tasks/check_available_rpms.yml for node02 TASK [openshift_version : Get available origin version] ************************ ok: [node02] TASK [openshift_version : fail] ************************************************ skipping: [node02] TASK [openshift_version : Fail if rpm version and docker image version are different] *** skipping: [node02] TASK [openshift_version : For an RPM install, abort when the release requested does not match the available version.] *** skipping: [node02] TASK [openshift_version : debug] *********************************************** ok: [node02] => { "openshift_release": "VARIABLE IS NOT DEFINED!" 
} TASK [openshift_version : debug] *********************************************** ok: [node02] => { "openshift_image_tag": "v3.9.0-alpha.4" } TASK [openshift_version : debug] *********************************************** ok: [node02] => { "openshift_pkg_version": "-3.9.0" } PLAY [Node Install Checkpoint Start] ******************************************* TASK [Set Node install 'In Progress'] ****************************************** ok: [node01] PLAY [Create OpenShift certificates for node hosts] **************************** TASK [openshift_node_certificates : Ensure CA certificate exists on openshift_ca_host] *** ok: [node02 -> node01] TASK [openshift_node_certificates : fail] ************************************** skipping: [node02] TASK [openshift_node_certificates : Check status of node certificates] ********* ok: [node02] => (item=system:node:node02.crt) ok: [node02] => (item=system:node:node02.key) ok: [node02] => (item=system:node:node02.kubeconfig) ok: [node02] => (item=ca.crt) ok: [node02] => (item=server.key) ok: [node02] => (item=server.crt) TASK [openshift_node_certificates : set_fact] ********************************** ok: [node02] TASK [openshift_node_certificates : Create openshift_generated_configs_dir if it does not exist] *** ok: [node02 -> node01] TASK [openshift_node_certificates : find] ************************************** ok: [node02 -> node01] TASK [openshift_node_certificates : Generate the node client config] *********** changed: [node02 -> node01] => (item=node02) TASK [openshift_node_certificates : Generate the node server certificate] ****** changed: [node02 -> node01] => (item=node02) TASK [openshift_node_certificates : Create a tarball of the node config directories] *** changed: [node02 -> node01] TASK [openshift_node_certificates : Retrieve the node config tarballs from the master] *** changed: [node02 -> node01] TASK [openshift_node_certificates : Ensure certificate directory exists] ******* ok: [node02] TASK [openshift_node_certificates : Unarchive the tarball on the node] ********* changed: [node02] TASK [openshift_node_certificates : Delete local temp directory] *************** ok: [node02 -> localhost] TASK [openshift_node_certificates : Copy OpenShift CA to system CA trust] ****** ok: [node02] => (item={u'cert': u'/etc/origin/node/ca.crt', u'id': u'openshift'}) PLAY [Disable excluders] ******************************************************* TASK [openshift_excluder : Detecting Atomic Host Operating System] ************* ok: [node02] TASK [openshift_excluder : Debug r_openshift_excluder_enable_docker_excluder] *** ok: [node02] => { "r_openshift_excluder_enable_docker_excluder": "false" } TASK [openshift_excluder : Debug r_openshift_excluder_enable_openshift_excluder] *** ok: [node02] => { "r_openshift_excluder_enable_openshift_excluder": "false" } TASK [openshift_excluder : Fail if invalid openshift_excluder_action provided] *** skipping: [node02] TASK [openshift_excluder : Fail if r_openshift_excluder_upgrade_target is not defined] *** skipping: [node02] TASK [openshift_excluder : Include main action task file] ********************** included: /root/openshift-ansible/roles/openshift_excluder/tasks/disable.yml for node02 TASK [openshift_excluder : Include verify_upgrade.yml when upgrading] ********** skipping: [node02] TASK [openshift_excluder : Disable excluders before the upgrade to remove older excluding expressions] *** included: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml for node02 TASK [openshift_excluder : Check for 
docker-excluder] ************************** ok: [node02] TASK [openshift_excluder : disable docker excluder] **************************** skipping: [node02] TASK [openshift_excluder : Check for openshift excluder] *********************** ok: [node02] TASK [openshift_excluder : disable openshift excluder] ************************* skipping: [node02] TASK [openshift_excluder : Include install.yml] ******************************** included: /root/openshift-ansible/roles/openshift_excluder/tasks/install.yml for node02 TASK [openshift_excluder : Install docker excluder - yum] ********************** skipping: [node02] TASK [openshift_excluder : Install docker excluder - dnf] ********************** skipping: [node02] TASK [openshift_excluder : Install openshift excluder - yum] ******************* skipping: [node02] TASK [openshift_excluder : Install openshift excluder - dnf] ******************* skipping: [node02] TASK [openshift_excluder : set_fact] ******************************************* skipping: [node02] TASK [openshift_excluder : Include exclude.yml] ******************************** included: /root/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml for node02 TASK [openshift_excluder : Check for docker-excluder] ************************** ok: [node02] TASK [openshift_excluder : Enable docker excluder] ***************************** skipping: [node02] TASK [openshift_excluder : Check for openshift excluder] *********************** ok: [node02] TASK [openshift_excluder : Enable openshift excluder] ************************** skipping: [node02] TASK [openshift_excluder : Include unexclude.yml] ****************************** included: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml for node02 TASK [openshift_excluder : Check for docker-excluder] ************************** ok: [node02] TASK [openshift_excluder : disable docker excluder] **************************** skipping: [node02] TASK [openshift_excluder : Check for openshift excluder] *********************** ok: [node02] TASK [openshift_excluder : disable openshift excluder] ************************* skipping: [node02] PLAY [Evaluate node groups] **************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Evaluate oo_containerized_master_nodes] ********************************** skipping: [localhost] => (item=node02) [WARNING]: Could not match supplied host pattern, ignoring: oo_containerized_master_nodes PLAY [Configure containerized nodes] ******************************************* skipping: no hosts matched PLAY [Configure nodes] ********************************************************* TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [openshift_clock : Determine if chrony is installed] ********************** [WARNING]: Consider using yum, dnf or zypper module rather than running rpm changed: [node02] TASK [openshift_clock : Install ntp package] *********************************** skipping: [node02] TASK [openshift_clock : Start and enable ntpd/chronyd] ************************* changed: [node02] TASK [openshift_cloud_provider : Set cloud provider facts] ********************* ok: [node02] TASK [openshift_cloud_provider : Create cloudprovider config dir] ************** skipping: [node02] TASK [openshift_cloud_provider : include_tasks] ******************************** skipping: [node02] TASK [openshift_cloud_provider : include_tasks] 
******************************** skipping: [node02] TASK [openshift_cloud_provider : include_tasks] ******************************** skipping: [node02] TASK [openshift_cloud_provider : include_tasks] ******************************** skipping: [node02] TASK [openshift_node : fail] *************************************************** skipping: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /root/openshift-ansible/roles/openshift_node/tasks/dnsmasq_install.yml for node02 TASK [openshift_node : Check for NetworkManager service] *********************** ok: [node02] TASK [openshift_node : Set fact using_network_manager] ************************* ok: [node02] TASK [openshift_node : Install dnsmasq] **************************************** ok: [node02] TASK [openshift_node : ensure origin/node directory exists] ******************** ok: [node02] => (item=/etc/origin) changed: [node02] => (item=/etc/origin/node) TASK [openshift_node : Install node-dnsmasq.conf] ****************************** ok: [node02] TASK [openshift_node : include_tasks] ****************************************** skipping: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /root/openshift-ansible/roles/openshift_node/tasks/dnsmasq.yml for node02 TASK [openshift_node : Install dnsmasq configuration] ************************** ok: [node02] TASK [openshift_node : Deploy additional dnsmasq.conf] ************************* skipping: [node02] TASK [openshift_node : Enable dnsmasq] ***************************************** ok: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /root/openshift-ansible/roles/openshift_node/tasks/dnsmasq/network-manager.yml for node02 TASK [openshift_node : Install network manager dispatch script] **************** ok: [node02] TASK [openshift_node : Add iptables allow rules] ******************************* ok: [node02] => (item={u'port': u'10250/tcp', u'service': u'Kubernetes kubelet'}) ok: [node02] => (item={u'port': u'80/tcp', u'service': u'http'}) ok: [node02] => (item={u'port': u'443/tcp', u'service': u'https'}) ok: [node02] => (item={u'cond': u'openshift_use_openshift_sdn | bool', u'port': u'4789/udp', u'service': u'OpenShift OVS sdn'}) skipping: [node02] => (item={u'cond': False, u'port': u'179/tcp', u'service': u'Calico BGP Port'}) skipping: [node02] => (item={u'cond': False, u'port': u'/tcp', u'service': u'Kubernetes service NodePort TCP'}) skipping: [node02] => (item={u'cond': False, u'port': u'/udp', u'service': u'Kubernetes service NodePort UDP'}) TASK [openshift_node : Remove iptables rules] ********************************** TASK [openshift_node : Add firewalld allow rules] ****************************** skipping: [node02] => (item={u'port': u'10250/tcp', u'service': u'Kubernetes kubelet'}) skipping: [node02] => (item={u'port': u'80/tcp', u'service': u'http'}) skipping: [node02] => (item={u'port': u'443/tcp', u'service': u'https'}) skipping: [node02] => (item={u'cond': u'openshift_use_openshift_sdn | bool', u'port': u'4789/udp', u'service': u'OpenShift OVS sdn'}) skipping: [node02] => (item={u'cond': False, u'port': u'179/tcp', u'service': u'Calico BGP Port'}) skipping: [node02] => (item={u'cond': False, u'port': u'/tcp', u'service': u'Kubernetes service NodePort TCP'}) skipping: [node02] => (item={u'cond': False, u'port': u'/udp', u'service': u'Kubernetes service NodePort UDP'}) TASK [openshift_node : Remove firewalld allow rules] 
*************************** TASK [openshift_node : Disable swap] ******************************************* ok: [node02] TASK [openshift_node : include node installer] ********************************* included: /root/openshift-ansible/roles/openshift_node/tasks/install.yml for node02 TASK [openshift_node : Install Node package, sdn-ovs, conntrack packages] ****** skipping: [node02] => (item={u'name': u'origin-node-3.9.0'}) skipping: [node02] => (item={u'name': u'origin-sdn-ovs-3.9.0', u'install': True}) skipping: [node02] => (item={u'name': u'conntrack-tools'}) TASK [openshift_node : Pre-pull node image when containerized] ***************** ok: [node02] TASK [openshift_node : Restart cri-o] ****************************************** skipping: [node02] TASK [openshift_node : restart NetworkManager to ensure resolv.conf is present] *** skipping: [node02] TASK [openshift_node : sysctl] ************************************************* ok: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /root/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml for node02 TASK [openshift_node : Check for credentials file for registry auth] *********** skipping: [node02] TASK [openshift_node : Create credentials for registry auth] ******************* skipping: [node02] TASK [openshift_node : Create credentials for registry auth (alternative)] ***** skipping: [node02] TASK [openshift_node : Setup ro mount of /root/.docker for containerized hosts] *** skipping: [node02] TASK [openshift_node : include standard node config] *************************** included: /root/openshift-ansible/roles/openshift_node/tasks/config.yml for node02 TASK [openshift_node : Install the systemd units] ****************************** included: /root/openshift-ansible/roles/openshift_node/tasks/systemd_units.yml for node02 TASK [openshift_node : Install Node service file] ****************************** ok: [node02] TASK [openshift_node : include node deps docker service file] ****************** included: /root/openshift-ansible/roles/openshift_node/tasks/config/install-node-deps-docker-service-file.yml for node02 TASK [openshift_node : Install Node dependencies docker service file] ********** ok: [node02] TASK [openshift_node : include ovs service environment file] ******************* included: /root/openshift-ansible/roles/openshift_node/tasks/config/install-ovs-service-env-file.yml for node02 TASK [openshift_node : Create the openvswitch service env file] **************** ok: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /root/openshift-ansible/roles/openshift_node/tasks/config/install-ovs-docker-service-file.yml for node02 TASK [openshift_node : Install OpenvSwitch docker service file] **************** ok: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /root/openshift-ansible/roles/openshift_node/tasks/config/configure-node-settings.yml for node02 TASK [openshift_node : Configure Node settings] ******************************** ok: [node02] => (item={u'regex': u'^OPTIONS=', u'line': u'OPTIONS=--loglevel=2'}) ok: [node02] => (item={u'regex': u'^CONFIG_FILE=', u'line': u'CONFIG_FILE=/etc/origin/node/node-config.yaml'}) ok: [node02] => (item={u'regex': u'^IMAGE_VERSION=', u'line': u'IMAGE_VERSION=v3.9.0-alpha.4'}) TASK [openshift_node : include_tasks] ****************************************** included: 
/root/openshift-ansible/roles/openshift_node/tasks/config/configure-proxy-settings.yml for node02 TASK [openshift_node : Configure Proxy Settings] ******************************* skipping: [node02] => (item={u'regex': u'^HTTP_PROXY=', u'line': u'HTTP_PROXY='}) skipping: [node02] => (item={u'regex': u'^HTTPS_PROXY=', u'line': u'HTTPS_PROXY='}) skipping: [node02] => (item={u'regex': u'^NO_PROXY=', u'line': u'NO_PROXY=[],172.30.0.0/16,10.128.0.0/14'}) TASK [openshift_node : Pull container images] ********************************** included: /root/openshift-ansible/roles/openshift_node/tasks/container_images.yml for node02 TASK [openshift_node : Install Node system container] ************************** skipping: [node02] TASK [openshift_node : Install OpenvSwitch system containers] ****************** skipping: [node02] TASK [openshift_node : Pre-pull openvswitch image] ***************************** ok: [node02] TASK [openshift_node : Start and enable openvswitch service] ******************* ok: [node02] TASK [openshift_node : set_fact] *********************************************** ok: [node02] TASK [openshift_node : file] *************************************************** skipping: [node02] TASK [openshift_node : Create the Node config] ********************************* changed: [node02] TASK [openshift_node : Configure Node Environment Variables] ******************* TASK [openshift_node : Configure AWS Cloud Provider Settings] ****************** skipping: [node02] => (item=None) skipping: [node02] => (item=None) TASK [openshift_node : Wait for master API to become available before proceeding] *** ok: [node02] TASK [openshift_node : Start and enable node dep] ****************************** changed: [node02] TASK [openshift_node : Start and enable node] ********************************** ok: [node02] TASK [openshift_node : Dump logs from node service if it failed] *************** skipping: [node02] TASK [openshift_node : Abort if node failed to start] ************************** skipping: [node02] TASK [openshift_node : set_fact] *********************************************** ok: [node02] TASK [openshift_node : NFS storage plugin configuration] *********************** included: /root/openshift-ansible/roles/openshift_node/tasks/storage_plugins/nfs.yml for node02 TASK [openshift_node : Install NFS storage plugin dependencies] **************** ok: [node02] TASK [openshift_node : Check for existence of nfs sebooleans] ****************** ok: [node02] => (item=virt_use_nfs) ok: [node02] => (item=virt_sandbox_use_nfs) TASK [openshift_node : Set seboolean to allow nfs storage plugin access from containers] *** ok: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-08 06:03:41.377265', '_ansible_no_log': False, u'stdout': u'virt_use_nfs --> on', u'cmd': [u'getsebool', u'virt_use_nfs'], u'rc': 0, 'item': u'virt_use_nfs', u'delta': u'0:00:00.007792', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_use_nfs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_nfs --> on'], 'failed_when_result': False, u'start': u'2018-04-08 06:03:41.369473', '_ansible_ignore_errors': None, 'failed': False}) skipping: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-08 06:03:42.292662', '_ansible_no_log': False, u'stdout': u'virt_use_nfs --> 
on', u'cmd': [u'getsebool', u'virt_sandbox_use_nfs'], u'rc': 0, 'item': u'virt_sandbox_use_nfs', u'delta': u'0:00:00.013665', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_sandbox_use_nfs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_nfs --> on'], 'failed_when_result': False, u'start': u'2018-04-08 06:03:42.278997', '_ansible_ignore_errors': None, 'failed': False}) TASK [openshift_node : Set seboolean to allow nfs storage plugin access from containers (python 3)] *** skipping: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-08 06:03:41.377265', '_ansible_no_log': False, u'stdout': u'virt_use_nfs --> on', u'cmd': [u'getsebool', u'virt_use_nfs'], u'rc': 0, 'item': u'virt_use_nfs', u'delta': u'0:00:00.007792', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_use_nfs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_nfs --> on'], 'failed_when_result': False, u'start': u'2018-04-08 06:03:41.369473', '_ansible_ignore_errors': None, 'failed': False}) skipping: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-08 06:03:42.292662', '_ansible_no_log': False, u'stdout': u'virt_use_nfs --> on', u'cmd': [u'getsebool', u'virt_sandbox_use_nfs'], u'rc': 0, 'item': u'virt_sandbox_use_nfs', u'delta': u'0:00:00.013665', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_sandbox_use_nfs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_nfs --> on'], 'failed_when_result': False, u'start': u'2018-04-08 06:03:42.278997', '_ansible_ignore_errors': None, 'failed': False}) TASK [openshift_node : GlusterFS storage plugin configuration] ***************** included: /root/openshift-ansible/roles/openshift_node/tasks/storage_plugins/glusterfs.yml for node02 TASK [openshift_node : Install GlusterFS storage plugin dependencies] ********** ok: [node02] TASK [openshift_node : Check for existence of fusefs sebooleans] *************** ok: [node02] => (item=virt_use_fusefs) ok: [node02] => (item=virt_sandbox_use_fusefs) TASK [openshift_node : Set seboolean to allow gluster storage plugin access from containers] *** ok: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-08 06:03:47.527659', '_ansible_no_log': False, u'stdout': u'virt_use_fusefs --> on', u'cmd': [u'getsebool', u'virt_use_fusefs'], u'rc': 0, 'item': u'virt_use_fusefs', u'delta': u'0:00:00.010632', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_use_fusefs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_fusefs --> on'], 'failed_when_result': False, u'start': u'2018-04-08 06:03:47.517027', '_ansible_ignore_errors': None, 'failed': False}) ok: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-08 06:03:49.018463', '_ansible_no_log': False, u'stdout': u'virt_sandbox_use_fusefs --> 
on', u'cmd': [u'getsebool', u'virt_sandbox_use_fusefs'], u'rc': 0, 'item': u'virt_sandbox_use_fusefs', u'delta': u'0:00:00.011776', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_sandbox_use_fusefs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_sandbox_use_fusefs --> on'], 'failed_when_result': False, u'start': u'2018-04-08 06:03:49.006687', '_ansible_ignore_errors': None, 'failed': False}) TASK [openshift_node : Set seboolean to allow gluster storage plugin access from containers (python 3)] *** skipping: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-08 06:03:47.527659', '_ansible_no_log': False, u'stdout': u'virt_use_fusefs --> on', u'cmd': [u'getsebool', u'virt_use_fusefs'], u'rc': 0, 'item': u'virt_use_fusefs', u'delta': u'0:00:00.010632', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_use_fusefs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_use_fusefs --> on'], 'failed_when_result': False, u'start': u'2018-04-08 06:03:47.517027', '_ansible_ignore_errors': None, 'failed': False}) skipping: [node02] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-04-08 06:03:49.018463', '_ansible_no_log': False, u'stdout': u'virt_sandbox_use_fusefs --> on', u'cmd': [u'getsebool', u'virt_sandbox_use_fusefs'], u'rc': 0, 'item': u'virt_sandbox_use_fusefs', u'delta': u'0:00:00.011776', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'getsebool virt_sandbox_use_fusefs', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'virt_sandbox_use_fusefs --> on'], 'failed_when_result': False, u'start': u'2018-04-08 06:03:49.006687', '_ansible_ignore_errors': None, 'failed': False}) TASK [openshift_node : Ceph storage plugin configuration] ********************** included: /root/openshift-ansible/roles/openshift_node/tasks/storage_plugins/ceph.yml for node02 TASK [openshift_node : Install Ceph storage plugin dependencies] *************** ok: [node02] TASK [openshift_node : iSCSI storage plugin configuration] ********************* included: /root/openshift-ansible/roles/openshift_node/tasks/storage_plugins/iscsi.yml for node02 TASK [openshift_node : Install iSCSI storage plugin dependencies] ************** ok: [node02] => (item=iscsi-initiator-utils) ok: [node02] => (item=device-mapper-multipath) TASK [openshift_node : restart services] *************************************** ok: [node02] => (item=multipathd) ok: [node02] => (item=rpcbind) TASK [openshift_node : Template multipath configuration] *********************** changed: [node02] TASK [openshift_node : Enable multipath] *************************************** changed: [node02] TASK [openshift_node : include_tasks] ****************************************** included: /root/openshift-ansible/roles/openshift_node/tasks/config/workaround-bz1331590-ovs-oom-fix.yml for node02 TASK [openshift_node : Create OpenvSwitch service.d directory] ***************** ok: [node02] TASK [openshift_node : Install OpenvSwitch service OOM fix] ******************** ok: [node02] TASK [tuned : Check for tuned package] 
***************************************** ok: [node02] TASK [tuned : Set tuned OpenShift variables] *********************************** ok: [node02] TASK [tuned : Ensure directory structure exists] ******************************* ok: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'state': 'directory', 'ctime': 1522931328.2514014, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift-control-plane', 'size': 24, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) ok: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'state': 'directory', 'ctime': 1522931328.2514014, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift-node', 'size': 24, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) ok: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'state': 'directory', 'ctime': 1522931328.2514014, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift', 'size': 24, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) skipping: [node02] => (item={'src': u'/root/openshift-ansible/roles/tuned/templates/recommend.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'serole': 'object_r', 'ctime': 1522931328.2514014, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'recommend.conf', 'size': 268, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) skipping: [node02] => (item={'src': u'/root/openshift-ansible/roles/tuned/templates/openshift-control-plane/tuned.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'serole': 'object_r', 'ctime': 1522931328.2514014, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift-control-plane/tuned.conf', 'size': 744, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) skipping: [node02] => (item={'src': u'/root/openshift-ansible/roles/tuned/templates/openshift-node/tuned.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'serole': 'object_r', 'ctime': 1522931328.2514014, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift-node/tuned.conf', 'size': 135, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) skipping: [node02] => (item={'src': u'/root/openshift-ansible/roles/tuned/templates/openshift/tuned.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'serole': 'object_r', 'ctime': 1522931328.2514014, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift/tuned.conf', 'size': 593, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) TASK [tuned : Ensure files are populated from templates] *********************** skipping: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'state': 'directory', 'ctime': 1522931328.2514014, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift-control-plane', 'size': 24, 'root': 
u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) skipping: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'state': 'directory', 'ctime': 1522931328.2514014, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift-node', 'size': 24, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) skipping: [node02] => (item={'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'state': 'directory', 'ctime': 1522931328.2514014, 'serole': 'object_r', 'gid': 0, 'mode': '0755', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift', 'size': 24, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) ok: [node02] => (item={'src': u'/root/openshift-ansible/roles/tuned/templates/recommend.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'serole': 'object_r', 'ctime': 1522931328.2514014, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'recommend.conf', 'size': 268, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) ok: [node02] => (item={'src': u'/root/openshift-ansible/roles/tuned/templates/openshift-control-plane/tuned.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'serole': 'object_r', 'ctime': 1522931328.2514014, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift-control-plane/tuned.conf', 'size': 744, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) ok: [node02] => (item={'src': u'/root/openshift-ansible/roles/tuned/templates/openshift-node/tuned.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'serole': 'object_r', 'ctime': 1522931328.2514014, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift-node/tuned.conf', 'size': 135, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) ok: [node02] => (item={'src': u'/root/openshift-ansible/roles/tuned/templates/openshift/tuned.conf', 'group': u'root', 'uid': 0, 'selevel': 's0', 'seuser': 'unconfined_u', 'serole': 'object_r', 'ctime': 1522931328.2514014, 'state': 'file', 'gid': 0, 'mode': '0644', 'mtime': 1522931328.2514014, 'owner': 'root', 'path': u'openshift/tuned.conf', 'size': 593, 'root': u'/root/openshift-ansible/roles/tuned/templates', 'setype': 'admin_home_t'}) TASK [tuned : Make tuned use the recommended tuned profile on restart] ********* changed: [node02] => (item=/etc/tuned/active_profile) ok: [node02] => (item=/etc/tuned/profile_mode) TASK [tuned : Restart tuned service] ******************************************* changed: [node02] TASK [nickhammond.logrotate : nickhammond.logrotate | Install logrotate] ******* ok: [node02] TASK [nickhammond.logrotate : nickhammond.logrotate | Setup logrotate.d scripts] *** RUNNING HANDLER [openshift_node : restart node] ******************************** changed: [node02] PLAY [create additional node network plugin groups] **************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [group_by] **************************************************************** ok: [node02] TASK [group_by] **************************************************************** ok: [node02] TASK [group_by] 
**************************************************************** ok: [node02] TASK [group_by] **************************************************************** ok: [node02] TASK [group_by] **************************************************************** ok: [node02] [WARNING]: Could not match supplied host pattern, ignoring: oo_nodes_use_flannel [WARNING]: Could not match supplied host pattern, ignoring: oo_nodes_use_calico [WARNING]: Could not match supplied host pattern, ignoring: oo_nodes_use_contiv [WARNING]: Could not match supplied host pattern, ignoring: oo_nodes_use_kuryr PLAY [etcd_client node config] ************************************************* skipping: no hosts matched PLAY [Additional node config] ************************************************** skipping: no hosts matched PLAY [Additional node config] ************************************************** skipping: no hosts matched [WARNING]: Could not match supplied host pattern, ignoring: oo_nodes_use_nuage PLAY [Additional node config] ************************************************** skipping: no hosts matched PLAY [Configure Contiv masters] ************************************************ TASK [Gathering Facts] ********************************************************* ok: [node01] TASK [contiv_facts : Determine if CoreOS] ************************************** skipping: [node01] TASK [contiv_facts : Init the contiv_is_coreos fact] *************************** skipping: [node01] TASK [contiv_facts : Set the contiv_is_coreos fact] **************************** skipping: [node01] TASK [contiv_facts : Set the bin directory path for CoreOS] ******************** skipping: [node01] TASK [contiv_facts : Create the directory used to store binaries] ************** skipping: [node01] TASK [contiv_facts : Create Ansible temp directory] **************************** skipping: [node01] TASK [contiv_facts : Determine if has rpm] ************************************* skipping: [node01] TASK [contiv_facts : Init the contiv_has_rpm fact] ***************************** skipping: [node01] TASK [contiv_facts : Set the contiv_has_rpm fact] ****************************** skipping: [node01] TASK [contiv_facts : Init the contiv_has_firewalld fact] *********************** skipping: [node01] TASK [contiv_facts : Init the contiv_has_iptables fact] ************************ skipping: [node01] TASK [contiv_facts : include_tasks] ******************************************** skipping: [node01] TASK [contiv_facts : include_tasks] ******************************************** skipping: [node01] TASK [contiv : include_tasks] ************************************************** skipping: [node01] TASK [contiv : Ensure contiv_bin_dir exists] *********************************** skipping: [node01] TASK [contiv : include_tasks] ************************************************** skipping: [node01] TASK [contiv : include_tasks] ************************************************** skipping: [node01] TASK [contiv : include_tasks] ************************************************** skipping: [node01] PLAY [Configure rest of Contiv nodes] ****************************************** TASK [Gathering Facts] ********************************************************* ok: [node01] ok: [node02] TASK [contiv_facts : Determine if CoreOS] ************************************** skipping: [node02] skipping: [node01] TASK [contiv_facts : Init the contiv_is_coreos fact] *************************** skipping: [node01] skipping: [node02] TASK [contiv_facts : Set the contiv_is_coreos fact] 
**************************** skipping: [node01] skipping: [node02] TASK [contiv_facts : Set the bin directory path for CoreOS] ******************** skipping: [node01] skipping: [node02] TASK [contiv_facts : Create the directory used to store binaries] ************** skipping: [node01] skipping: [node02] TASK [contiv_facts : Create Ansible temp directory] **************************** skipping: [node01] skipping: [node02] TASK [contiv_facts : Determine if has rpm] ************************************* skipping: [node01] skipping: [node02] TASK [contiv_facts : Init the contiv_has_rpm fact] ***************************** skipping: [node01] skipping: [node02] TASK [contiv_facts : Set the contiv_has_rpm fact] ****************************** skipping: [node01] skipping: [node02] TASK [contiv_facts : Init the contiv_has_firewalld fact] *********************** skipping: [node01] skipping: [node02] TASK [contiv_facts : Init the contiv_has_iptables fact] ************************ skipping: [node01] skipping: [node02] TASK [contiv_facts : include_tasks] ******************************************** skipping: [node02] skipping: [node01] TASK [contiv_facts : include_tasks] ******************************************** skipping: [node02] skipping: [node01] TASK [contiv : include_tasks] ************************************************** skipping: [node01] skipping: [node02] TASK [contiv : Ensure contiv_bin_dir exists] *********************************** skipping: [node01] skipping: [node02] TASK [contiv : include_tasks] ************************************************** skipping: [node01] skipping: [node02] TASK [contiv : include_tasks] ************************************************** skipping: [node01] skipping: [node02] TASK [contiv : include_tasks] ************************************************** skipping: [node01] skipping: [node02] PLAY [Configure Kuryr node] **************************************************** skipping: no hosts matched PLAY [Additional node config] ************************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [openshift_manage_node : Wait for master API to become available before proceeding] *** ok: [node02 -> node01] TASK [openshift_manage_node : Wait for Node Registration] ********************** ok: [node02 -> node01] TASK [openshift_manage_node : include_tasks] *********************************** included: /root/openshift-ansible/roles/openshift_manage_node/tasks/config.yml for node02 TASK [openshift_manage_node : Set node schedulability] ************************* ok: [node02 -> node01] TASK [openshift_manage_node : Label nodes] ************************************* ok: [node02 -> node01] TASK [Create group for deployment type] **************************************** ok: [node02] PLAY [Re-enable excluder if it was previously enabled] ************************* TASK [openshift_excluder : Detecting Atomic Host Operating System] ************* ok: [node02] TASK [openshift_excluder : Debug r_openshift_excluder_enable_docker_excluder] *** ok: [node02] => { "r_openshift_excluder_enable_docker_excluder": "false" } TASK [openshift_excluder : Debug r_openshift_excluder_enable_openshift_excluder] *** ok: [node02] => { "r_openshift_excluder_enable_openshift_excluder": "false" } TASK [openshift_excluder : Fail if invalid openshift_excluder_action provided] *** skipping: [node02] TASK [openshift_excluder : Fail if r_openshift_excluder_upgrade_target is not defined] *** skipping: [node02] TASK 
[openshift_excluder : Include main action task file] ********************** included: /root/openshift-ansible/roles/openshift_excluder/tasks/enable.yml for node02 TASK [openshift_excluder : Install excluders] ********************************** included: /root/openshift-ansible/roles/openshift_excluder/tasks/install.yml for node02 TASK [openshift_excluder : Install docker excluder - yum] ********************** skipping: [node02] TASK [openshift_excluder : Install docker excluder - dnf] ********************** skipping: [node02] TASK [openshift_excluder : Install openshift excluder - yum] ******************* skipping: [node02] TASK [openshift_excluder : Install openshift excluder - dnf] ******************* skipping: [node02] TASK [openshift_excluder : set_fact] ******************************************* skipping: [node02] TASK [openshift_excluder : Enable excluders] *********************************** included: /root/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml for node02 TASK [openshift_excluder : Check for docker-excluder] ************************** ok: [node02] TASK [openshift_excluder : Enable docker excluder] ***************************** skipping: [node02] TASK [openshift_excluder : Check for openshift excluder] *********************** ok: [node02] TASK [openshift_excluder : Enable openshift excluder] ************************** skipping: [node02] PLAY [Node Install Checkpoint End] ********************************************* TASK [Set Node install 'Complete'] ********************************************* ok: [node01] PLAY RECAP ********************************************************************* localhost : ok=25 changed=0 unreachable=0 failed=0 node01 : ok=36 changed=0 unreachable=0 failed=0 node02 : ok=183 changed=18 unreachable=0 failed=0 INSTALLER STATUS *************************************************************** Initialization : Complete (0:01:00) Node Install : Complete (0:03:32) PLAY [new_nodes] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [node02] TASK [Restart openvswitch service] ********************************************* changed: [node02] PLAY RECAP ********************************************************************* node02 : ok=2 changed=1 unreachable=0 failed=0 2018/04/08 06:05:44 Waiting for host: 192.168.66.101:22 2018/04/08 06:05:44 Connected to tcp://192.168.66.101:22 2018/04/08 06:05:47 Waiting for host: 192.168.66.101:22 2018/04/08 06:05:47 Connected to tcp://192.168.66.101:22 Warning: Permanently added '[127.0.0.1]:32845' (ECDSA) to the list of known hosts. Warning: Permanently added '[127.0.0.1]:32845' (ECDSA) to the list of known hosts. Cluster "node01:8443" set. Cluster "node01:8443" set. ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep -v Ready + '[' -n '' ']' + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 2d v1.9.1+a0ce1bc657 node02 Ready 1m v1.9.1+a0ce1bc657 + make cluster-sync ./cluster/build.sh Building ... sha256:c90fc4dd370dfa6a26541d1993dc42ab8f083c22510abae856ba4a3f7052b736 go version go1.9.2 linux/amd64 rsync: read error: Connection reset by peer (104) rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9] Waiting for rsyncd to be ready skipping directory . 
go version go1.9.2 linux/amd64 54fb7542a36c080d4b29212c2c518043014f4b00f4b5d80ea01808891ebc1e71 54fb7542a36c080d4b29212c2c518043014f4b00f4b5d80ea01808891ebc1e71 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && ./hack/build-go.sh install " sha256:c90fc4dd370dfa6a26541d1993dc42ab8f083c22510abae856ba4a3f7052b736 go version go1.9.2 linux/amd64 skipping directory . go version go1.9.2 linux/amd64 Compiling tests... compiled tests.test 5e8077a6d9ed770e8519c219b62c8613a231d75ff8cfc341fe66fe7fc9e4167e 5e8077a6d9ed770e8519c219b62c8613a231d75ff8cfc341fe66fe7fc9e4167e hack/build-docker.sh build sending incremental file list ./ Dockerfile kubernetes.repo sent 854 bytes received 53 bytes 1814.00 bytes/sec total size is 1167 speedup is 1.29 Sending build context to Docker daemon 35.7 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 71c3d482487e Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 31e2c695509f Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> 24d4616566ef Step 5/8 : USER 1001 ---> Using cache ---> 93387777f457 Step 6/8 : COPY virt-controller /virt-controller ---> Using cache ---> 078e5eda9f78 Step 7/8 : ENTRYPOINT /virt-controller ---> Using cache ---> dc83eb9e6e5d Step 8/8 : LABEL "kubevirt-functional-tests-openshift-release1" '' "virt-controller" '' ---> Running in 932d6dac7e24 ---> c4b922660889 Removing intermediate container 932d6dac7e24 Successfully built c4b922660889 sending incremental file list ./ Dockerfile entrypoint.sh kubevirt-sudo libvirtd.sh sh.sh sock-connector sent 3286 bytes received 129 bytes 6830.00 bytes/sec total size is 5469 speedup is 1.60 Sending build context to Docker daemon 37.44 MB Step 1/14 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/14 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 4a94b3474ba7 Step 3/14 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 1060bf73289c Step 4/14 : COPY sock-connector /sock-connector ---> Using cache ---> 0e21145d5129 Step 5/14 : COPY sh.sh /sh.sh ---> Using cache ---> 6f40988349d4 Step 6/14 : COPY virt-launcher /virt-launcher ---> Using cache ---> 2040a6398736 Step 7/14 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> Using cache ---> 777aac10ff52 Step 8/14 : RUN chmod 0640 /etc/sudoers.d/kubevirt ---> Using cache ---> 862ff6ab3218 Step 9/14 : RUN rm -f /libvirtd.sh ---> Using cache ---> 3bd72dde701a Step 10/14 : COPY libvirtd.sh /libvirtd.sh ---> Using cache ---> 41df4be053ff Step 11/14 : RUN chmod a+x /libvirtd.sh ---> Using cache ---> b8bff8db3a4b Step 12/14 : COPY entrypoint.sh /entrypoint.sh ---> Using cache ---> 8cfe2dbb8291 Step 13/14 : ENTRYPOINT /entrypoint.sh ---> Using cache ---> 56e34dcee2d3 Step 14/14 : LABEL "kubevirt-functional-tests-openshift-release1" '' "virt-launcher" '' ---> Running in b1863651e497 ---> 3023b08d5163 Removing intermediate container b1863651e497 Successfully built 3023b08d5163 sending incremental file list ./ Dockerfile sent 585 bytes received 34 bytes 1238.00 bytes/sec total size is 775 speedup is 1.25 Sending build context to Docker daemon 36.37 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 71c3d482487e Step 3/5 : COPY virt-handler 
/virt-handler ---> Using cache ---> 3e6bcfc8982a Step 4/5 : ENTRYPOINT /virt-handler ---> Using cache ---> cd7a5f8cffdf Step 5/5 : LABEL "kubevirt-functional-tests-openshift-release1" '' "virt-handler" '' ---> Running in 6389368ab9ac ---> 80c35d2d25c2 Removing intermediate container 6389368ab9ac Successfully built 80c35d2d25c2 sending incremental file list ./ Dockerfile sent 864 bytes received 34 bytes 1796.00 bytes/sec total size is 1377 speedup is 1.53 Sending build context to Docker daemon 36.12 MB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 71c3d482487e Step 3/9 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 52824ea59532 Step 4/9 : WORKDIR /home/virt-api ---> Using cache ---> aad9b1e251be Step 5/9 : USER 1001 ---> Using cache ---> b47160c51405 Step 6/9 : RUN curl -OL https://github.com/swagger-api/swagger-ui/tarball/38f74164a7062edb5dc80ef2fdddda24f3f6eb85/swagger-ui.tar.gz && mkdir swagger-ui && tar xf swagger-ui.tar.gz -C swagger-ui --strip-components 1 && mkdir third_party && mv swagger-ui/dist third_party/swagger-ui && rm -rf swagger-ui && sed -e 's@"http://petstore.swagger.io/v2/swagger.json"@"/swaggerapi/"@' -i third_party/swagger-ui/index.html && rm swagger-ui.tar.gz && rm -rf swagger-ui ---> Using cache ---> 4dd3f32334f9 Step 7/9 : COPY virt-api /virt-api ---> Using cache ---> 6d42d96636dc Step 8/9 : ENTRYPOINT /virt-api ---> Using cache ---> c57afac8c334 Step 9/9 : LABEL "kubevirt-functional-tests-openshift-release1" '' "virt-api" '' ---> Running in c63ba3dfa985 ---> 88d4a0eed932 Removing intermediate container c63ba3dfa985 Successfully built 88d4a0eed932 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/cmd/iscsi-demo-target-tgtd ./ Dockerfile run-tgt.sh sent 2185 bytes received 53 bytes 4476.00 bytes/sec total size is 3992 speedup is 1.78 Sending build context to Docker daemon 6.656 kB Step 1/10 : FROM fedora:27 ---> 9110ae7f579f Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 71c3d482487e Step 3/10 : ENV container docker ---> Using cache ---> 453fb17b7f2a Step 4/10 : RUN dnf -y install scsi-target-utils bzip2 e2fsprogs ---> Using cache ---> 257f70388ed1 Step 5/10 : RUN mkdir -p /images ---> Using cache ---> 8c74dfc48702 Step 6/10 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/1-alpine.img ---> Using cache ---> 8b9a52ef2456 Step 7/10 : ADD run-tgt.sh / ---> Using cache ---> cfa30ab2f553 Step 8/10 : EXPOSE 3260 ---> Using cache ---> eb3a3602eb3c Step 9/10 : CMD /run-tgt.sh ---> Using cache ---> 56b1e43742ed Step 10/10 : LABEL "iscsi-demo-target-tgtd" '' "kubevirt-functional-tests-openshift-release1" '' ---> Running in 5879cc91a189 ---> db7a8bd675ac Removing intermediate container 5879cc91a189 Successfully built db7a8bd675ac sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/cmd/vm-killer ./ Dockerfile sent 602 bytes received 34 bytes 1272.00 bytes/sec total size is 787 speedup is 1.24 Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 71c3d482487e Step 3/5 : ENV container docker ---> Using cache ---> 453fb17b7f2a Step 4/5 : RUN dnf -y install procps-ng && dnf -y clean all ---> Using cache ---> 
41a521e2b7e1 Step 5/5 : LABEL "kubevirt-functional-tests-openshift-release1" '' "vm-killer" '' ---> Running in c501a45f6d92 ---> 3089fbda12c2 Removing intermediate container c501a45f6d92 Successfully built 3089fbda12c2 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/cmd/registry-disk-v1alpha ./ Dockerfile entry-point.sh sent 1529 bytes received 53 bytes 3164.00 bytes/sec total size is 2482 speedup is 1.57 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> bcec0ae8107e Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 1ca40cafe086 Step 3/7 : ENV container docker ---> Using cache ---> 453271b8b5a3 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 4ab88b363377 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> eaa7729dd9dd Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 23b1b7a48aee Step 7/7 : LABEL "kubevirt-functional-tests-openshift-release1" '' "registry-disk-v1alpha" '' ---> Running in 7a39658e102a ---> 322355f27356 Removing intermediate container 7a39658e102a Successfully built 322355f27356 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/cmd/cirros-registry-disk-demo ./ Dockerfile sent 630 bytes received 34 bytes 1328.00 bytes/sec total size is 825 speedup is 1.24 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32844/kubevirt/registry-disk-v1alpha:devel ---> 322355f27356 Step 2/4 : MAINTAINER "David Vossel" \ ---> Running in 751a9fb92715 ---> cfb84a5491f1 Removing intermediate container 751a9fb92715 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Running in 06489b9b43c0 [curl progress meter omitted: cirros-0.4.0-x86_64-disk.img, 12.1M, download complete] ---> cf4736aa49e7 Removing intermediate container 06489b9b43c0 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-openshift-release1" '' ---> Running in fb7965e94da8 ---> 0cbeb5cb7430 Removing intermediate container fb7965e94da8 Successfully built 0cbeb5cb7430 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/cmd/fedora-cloud-registry-disk-demo ./ Dockerfile sent 677 bytes received 34 bytes 1422.00 bytes/sec total size is 926 speedup is 1.30 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32844/kubevirt/registry-disk-v1alpha:devel ---> 322355f27356 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Running in 630368aa08b6 ---> 41a9e95edfd9 Removing intermediate container 630368aa08b6 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Running in 3c7759ad28b3
[curl progress meter omitted: Fedora-Cloud-Base-27-1.6.x86_64.qcow2, 221M, download complete] ---> c098e3909737 Removing intermediate container 3c7759ad28b3 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-openshift-release1" '' ---> Running in 65bd8451f2e7 ---> 94eb175b96aa Removing intermediate container 65bd8451f2e7 Successfully built 94eb175b96aa sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/cmd/alpine-registry-disk-demo ./ Dockerfile sent 639 bytes received 34 bytes 1346.00 bytes/sec total size is 866 speedup is 1.29 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32844/kubevirt/registry-disk-v1alpha:devel ---> 322355f27356 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 41a9e95edfd9 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Running in 5dff43401686
[curl progress meter omitted: alpine-virt-3.7.0-x86_64.iso, 37.0M, download complete] ---> 5a544539e2b4 Removing intermediate container 5dff43401686 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-openshift-release1" '' ---> Running in 7730ba22d3df ---> 28f796de3b46 Removing intermediate container 7730ba22d3df Successfully built 28f796de3b46 sending incremental file list ./ Dockerfile sent 660 bytes received 34 bytes 1388.00 bytes/sec total size is 918 speedup is 1.32 Sending build context to Docker daemon 33.59 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 71c3d482487e Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 934d38e9f0b5 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 14635c1cb3c5 Step 5/8 : USER 1001 ---> Using cache ---> 868158465567 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> Using cache ---> d0da322f9299 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Using cache ---> e6e87d93880c Step 8/8 : LABEL "kubevirt-functional-tests-openshift-release1" '' "subresource-access-test" '' ---> Running in 33bf31d0fb1f ---> 45dddb8a2c5d Removing intermediate container 33bf31d0fb1f Successfully built 45dddb8a2c5d hack/build-docker.sh push The push refers to a repository [localhost:32844/kubevirt/virt-controller] c1a9e2d741ef: Preparing ebdaa8997db7: Preparing 39bae602f753: Preparing ebdaa8997db7: Pushed c1a9e2d741ef: Pushed 39bae602f753: Pushed devel: digest: sha256:d9c35a5325e572923730952e32c3d1828dd3db072ab0317751b270c85cf2093a size: 948 The push refers to a repository [localhost:32844/kubevirt/virt-launcher] 9025582f266c: Preparing ca8a0129689f: Preparing ca8a0129689f: Preparing 6b21904d8bab: Preparing 72930471c247: Preparing 9211f00c7369: Preparing 361eb5ff3004: Preparing 22b0b3033053: Preparing 5a3c34a960cd: Preparing 746161d2a6d5: Preparing 530cc55618cd: Preparing 22b0b3033053: Waiting 34fa414dfdf6: Preparing a1359dc556dd: Preparing 530cc55618cd: Waiting 5a3c34a960cd: Waiting 746161d2a6d5: Waiting 490c7c373332: Preparing 4b440db36f72: Preparing 34fa414dfdf6: Waiting 4b440db36f72: Waiting 39bae602f753: Preparing 361eb5ff3004: Waiting 39bae602f753: Waiting 9025582f266c: Pushed 9211f00c7369: Pushed 6b21904d8bab: Pushed ca8a0129689f: Pushed 72930471c247: Pushed 22b0b3033053: Pushed 5a3c34a960cd: Pushed 530cc55618cd: Pushed 34fa414dfdf6: Pushed 490c7c373332:
Pushed a1359dc556dd: Pushed 39bae602f753: Mounted from kubevirt/virt-controller 746161d2a6d5: Pushed 361eb5ff3004: Pushed 4b440db36f72: Pushed devel: digest: sha256:79ed8f8c1745a3df5303563cd59dd5d6cd6005e695a49fada2df759427f721b2 size: 3652 The push refers to a repository [localhost:32844/kubevirt/virt-handler] fc0eacdb5d17: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-launcher fc0eacdb5d17: Pushed devel: digest: sha256:e251c05322044337875714a7d25ea405fe0e3a28b621dd033d607c29aae9e662 size: 740 The push refers to a repository [localhost:32844/kubevirt/virt-api] d31a2e27dbce: Preparing d57b3fcdcebf: Preparing 325865597484: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-handler 325865597484: Pushed d57b3fcdcebf: Pushed d31a2e27dbce: Pushed devel: digest: sha256:5e28c20f07f7bbfff7cf017727aaf0a862944bfc3d473d65ef9cc84e94d83523 size: 1159 The push refers to a repository [localhost:32844/kubevirt/iscsi-demo-target-tgtd] d7a94c2260cf: Preparing 51f335cf165a: Preparing 34892f148b26: Preparing 27e33247129d: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-api d7a94c2260cf: Pushed 34892f148b26: Pushed 51f335cf165a: Pushed 27e33247129d: Pushed devel: digest: sha256:0a10ce1f4522627e76e82f50013da4c2b5fb0c82ef874fe58bb9b0a97f1593ed size: 1368 The push refers to a repository [localhost:32844/kubevirt/vm-killer] 7074bc3e2ecc: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/iscsi-demo-target-tgtd 7074bc3e2ecc: Pushed devel: digest: sha256:0bca6e56709fdfe03535b2f3b52d21b12465560547094b674ab1da32e84545d4 size: 740 The push refers to a repository [localhost:32844/kubevirt/registry-disk-v1alpha] ba2358758a96: Preparing dd28ef9bfc97: Preparing 6709b2da72b8: Preparing ba2358758a96: Pushed dd28ef9bfc97: Pushed 6709b2da72b8: Pushed devel: digest: sha256:123b54dd6e02a01cf1abbc7669623f8eb307bc8248035b21564c8d93c212168e size: 948 The push refers to a repository [localhost:32844/kubevirt/cirros-registry-disk-demo] 63ef36afb922: Preparing ba2358758a96: Preparing dd28ef9bfc97: Preparing 6709b2da72b8: Preparing dd28ef9bfc97: Mounted from kubevirt/registry-disk-v1alpha 6709b2da72b8: Mounted from kubevirt/registry-disk-v1alpha ba2358758a96: Mounted from kubevirt/registry-disk-v1alpha 63ef36afb922: Pushed devel: digest: sha256:799f34529cde593243bd687a34ef0bd8d07c60ac696f3f25494053ac42565e84 size: 1160 The push refers to a repository [localhost:32844/kubevirt/fedora-cloud-registry-disk-demo] 8fa781dd7030: Preparing ba2358758a96: Preparing dd28ef9bfc97: Preparing 6709b2da72b8: Preparing ba2358758a96: Mounted from kubevirt/cirros-registry-disk-demo dd28ef9bfc97: Mounted from kubevirt/cirros-registry-disk-demo 6709b2da72b8: Mounted from kubevirt/cirros-registry-disk-demo 8fa781dd7030: Pushed devel: digest: sha256:839bbb391a2c83014dba4a4352beb4fc76a92c50c0016d8d3335e241ec93ed16 size: 1161 The push refers to a repository [localhost:32844/kubevirt/alpine-registry-disk-demo] 1f3de9753edf: Preparing ba2358758a96: Preparing dd28ef9bfc97: Preparing 6709b2da72b8: Preparing dd28ef9bfc97: Mounted from kubevirt/fedora-cloud-registry-disk-demo ba2358758a96: Mounted from kubevirt/fedora-cloud-registry-disk-demo 6709b2da72b8: Mounted from kubevirt/fedora-cloud-registry-disk-demo 1f3de9753edf: Pushed devel: digest: sha256:eef1203945478523fa95866dbdf3522cb789f072a2b67c59a2ead4f45bbed122 size: 1160 The push refers to a repository [localhost:32844/kubevirt/subresource-access-test] ffc7b3dc986f: Preparing 601899d28c74: Preparing 39bae602f753: 
Preparing 39bae602f753: Mounted from kubevirt/vm-killer 601899d28c74: Pushed ffc7b3dc986f: Pushed devel: digest: sha256:2d8447df4f6c476616e7004934edd5c7536f1081781f039acc1bf97dcb47b009 size: 948 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt' 2018/04/08 06:27:07 Waiting for host: 192.168.66.101:22 2018/04/08 06:27:07 Connected to tcp://192.168.66.101:22 Trying to pull repository registry:5000/kubevirt/virt-controller ... devel: Pulling from registry:5000/kubevirt/virt-controller 2176639d844b: Pulling fs layer 0f9c31467808: Pulling fs layer a4cba2692d06: Pulling fs layer 0f9c31467808: Verifying Checksum 0f9c31467808: Download complete a4cba2692d06: Verifying Checksum a4cba2692d06: Download complete 2176639d844b: Verifying Checksum 2176639d844b: Download complete 2176639d844b: Pull complete 0f9c31467808: Pull complete a4cba2692d06: Pull complete Digest: sha256:d9c35a5325e572923730952e32c3d1828dd3db072ab0317751b270c85cf2093a Trying to pull repository registry:5000/kubevirt/virt-launcher ... devel: Pulling from registry:5000/kubevirt/virt-launcher 2176639d844b: Already exists d7240bccd145: Pulling fs layer f2ef945504a7: Pulling fs layer a4b9e9eb807b: Pulling fs layer a1e80189bea5: Pulling fs layer 6cc174edcebf: Pulling fs layer 622fd4d469ec: Pulling fs layer 424529d59559: Pulling fs layer 88a7cd7d71a5: Pulling fs layer 5c7d752e7e52: Pulling fs layer ade0b1746bac: Pulling fs layer 3f4087eea4ff: Pulling fs layer 2ff8673bbc01: Pulling fs layer 2cd0bdf70106: Pulling fs layer 47d1c609f225: Pulling fs layer a1e80189bea5: Waiting 6cc174edcebf: Waiting 622fd4d469ec: Waiting 424529d59559: Waiting 88a7cd7d71a5: Waiting 5c7d752e7e52: Waiting ade0b1746bac: Waiting 3f4087eea4ff: Waiting 2ff8673bbc01: Waiting 2cd0bdf70106: Waiting 47d1c609f225: Waiting a4b9e9eb807b: Verifying Checksum a4b9e9eb807b: Download complete f2ef945504a7: Verifying Checksum f2ef945504a7: Download complete a1e80189bea5: Verifying Checksum a1e80189bea5: Download complete 6cc174edcebf: Verifying Checksum 6cc174edcebf: Download complete 424529d59559: Verifying Checksum 424529d59559: Download complete 88a7cd7d71a5: Verifying Checksum 88a7cd7d71a5: Download complete 622fd4d469ec: Verifying Checksum 622fd4d469ec: Download complete 5c7d752e7e52: Verifying Checksum 5c7d752e7e52: Download complete ade0b1746bac: Verifying Checksum ade0b1746bac: Download complete 3f4087eea4ff: Verifying Checksum 3f4087eea4ff: Download complete 2ff8673bbc01: Verifying Checksum 2ff8673bbc01: Download complete 2cd0bdf70106: Verifying Checksum 2cd0bdf70106: Download complete 47d1c609f225: Verifying Checksum 47d1c609f225: Download complete d7240bccd145: Verifying Checksum d7240bccd145: Download complete d7240bccd145: Pull complete f2ef945504a7: Pull complete a4b9e9eb807b: Pull complete a1e80189bea5: Pull complete 6cc174edcebf: Pull complete 622fd4d469ec: Pull complete 424529d59559: Pull complete 88a7cd7d71a5: Pull complete 5c7d752e7e52: Pull complete ade0b1746bac: Pull complete 3f4087eea4ff: Pull complete 2ff8673bbc01: Pull complete 2cd0bdf70106: Pull complete 47d1c609f225: Pull complete Digest: sha256:79ed8f8c1745a3df5303563cd59dd5d6cd6005e695a49fada2df759427f721b2 Trying to pull repository registry:5000/kubevirt/virt-handler ... 
devel: Pulling from registry:5000/kubevirt/virt-handler 2176639d844b: Already exists d4c13a6c82ac: Pulling fs layer d4c13a6c82ac: Verifying Checksum d4c13a6c82ac: Download complete d4c13a6c82ac: Pull complete Digest: sha256:e251c05322044337875714a7d25ea405fe0e3a28b621dd033d607c29aae9e662 Trying to pull repository registry:5000/kubevirt/virt-api ... devel: Pulling from registry:5000/kubevirt/virt-api 2176639d844b: Already exists 37e23c3b1f9e: Pulling fs layer ff02d6919641: Pulling fs layer 462ebf464f4c: Pulling fs layer 37e23c3b1f9e: Verifying Checksum 37e23c3b1f9e: Download complete ff02d6919641: Download complete 462ebf464f4c: Download complete 37e23c3b1f9e: Pull complete ff02d6919641: Pull complete 462ebf464f4c: Pull complete Digest: sha256:5e28c20f07f7bbfff7cf017727aaf0a862944bfc3d473d65ef9cc84e94d83523 Trying to pull repository registry:5000/kubevirt/iscsi-demo-target-tgtd ... devel: Pulling from registry:5000/kubevirt/iscsi-demo-target-tgtd 2176639d844b: Already exists e929cf197109: Pulling fs layer 26e3b3165052: Pulling fs layer aa37eee00b09: Pulling fs layer 6c4271069635: Pulling fs layer 6c4271069635: Waiting 26e3b3165052: Verifying Checksum 26e3b3165052: Download complete 6c4271069635: Verifying Checksum 6c4271069635: Download complete aa37eee00b09: Verifying Checksum aa37eee00b09: Download complete e929cf197109: Verifying Checksum e929cf197109: Download complete e929cf197109: Pull complete 26e3b3165052: Pull complete aa37eee00b09: Pull complete 6c4271069635: Pull complete Digest: sha256:0a10ce1f4522627e76e82f50013da4c2b5fb0c82ef874fe58bb9b0a97f1593ed Trying to pull repository registry:5000/kubevirt/vm-killer ... devel: Pulling from registry:5000/kubevirt/vm-killer 2176639d844b: Already exists 3726544f0c71: Pulling fs layer 3726544f0c71: Verifying Checksum 3726544f0c71: Download complete 3726544f0c71: Pull complete Digest: sha256:0bca6e56709fdfe03535b2f3b52d21b12465560547094b674ab1da32e84545d4 Trying to pull repository registry:5000/kubevirt/registry-disk-v1alpha ... devel: Pulling from registry:5000/kubevirt/registry-disk-v1alpha 2115d46e7396: Pulling fs layer 07ea0109ea1b: Pulling fs layer c4272834948a: Pulling fs layer c4272834948a: Verifying Checksum c4272834948a: Download complete 07ea0109ea1b: Verifying Checksum 07ea0109ea1b: Download complete 2115d46e7396: Verifying Checksum 2115d46e7396: Download complete 2115d46e7396: Pull complete 07ea0109ea1b: Pull complete c4272834948a: Pull complete Digest: sha256:123b54dd6e02a01cf1abbc7669623f8eb307bc8248035b21564c8d93c212168e Trying to pull repository registry:5000/kubevirt/cirros-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/cirros-registry-disk-demo 2115d46e7396: Already exists 07ea0109ea1b: Already exists c4272834948a: Already exists 85981f8173b8: Pulling fs layer 85981f8173b8: Download complete 85981f8173b8: Pull complete Digest: sha256:799f34529cde593243bd687a34ef0bd8d07c60ac696f3f25494053ac42565e84 Trying to pull repository registry:5000/kubevirt/fedora-cloud-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/fedora-cloud-registry-disk-demo 2115d46e7396: Already exists 07ea0109ea1b: Already exists c4272834948a: Already exists e6067c9cb2c7: Pulling fs layer e6067c9cb2c7: Verifying Checksum e6067c9cb2c7: Download complete e6067c9cb2c7: Pull complete Digest: sha256:839bbb391a2c83014dba4a4352beb4fc76a92c50c0016d8d3335e241ec93ed16 Trying to pull repository registry:5000/kubevirt/alpine-registry-disk-demo ... 
devel: Pulling from registry:5000/kubevirt/alpine-registry-disk-demo 2115d46e7396: Already exists 07ea0109ea1b: Already exists c4272834948a: Already exists 58041b262260: Pulling fs layer 58041b262260: Verifying Checksum 58041b262260: Download complete 58041b262260: Pull complete Digest: sha256:eef1203945478523fa95866dbdf3522cb789f072a2b67c59a2ead4f45bbed122 Trying to pull repository registry:5000/kubevirt/subresource-access-test ... devel: Pulling from registry:5000/kubevirt/subresource-access-test 2176639d844b: Already exists 555998d3afb2: Pulling fs layer 4b7bb5238488: Pulling fs layer 555998d3afb2: Verifying Checksum 555998d3afb2: Download complete 4b7bb5238488: Verifying Checksum 555998d3afb2: Pull complete 4b7bb5238488: Pull complete Digest: sha256:2d8447df4f6c476616e7004934edd5c7536f1081781f039acc1bf97dcb47b009 2018/04/08 06:30:59 Waiting for host: 192.168.66.101:22 2018/04/08 06:30:59 Connected to tcp://192.168.66.101:22 2018/04/08 06:31:02 Waiting for host: 192.168.66.102:22 2018/04/08 06:31:02 Connected to tcp://192.168.66.102:22 Trying to pull repository registry:5000/kubevirt/virt-controller ... devel: Pulling from registry:5000/kubevirt/virt-controller 2176639d844b: Pulling fs layer 0f9c31467808: Pulling fs layer a4cba2692d06: Pulling fs layer 0f9c31467808: Verifying Checksum 0f9c31467808: Download complete a4cba2692d06: Verifying Checksum a4cba2692d06: Download complete 2176639d844b: Verifying Checksum 2176639d844b: Download complete 2176639d844b: Pull complete 0f9c31467808: Pull complete a4cba2692d06: Pull complete Digest: sha256:d9c35a5325e572923730952e32c3d1828dd3db072ab0317751b270c85cf2093a Trying to pull repository registry:5000/kubevirt/virt-launcher ... devel: Pulling from registry:5000/kubevirt/virt-launcher 2176639d844b: Already exists d7240bccd145: Pulling fs layer f2ef945504a7: Pulling fs layer a4b9e9eb807b: Pulling fs layer a1e80189bea5: Pulling fs layer 6cc174edcebf: Pulling fs layer 622fd4d469ec: Pulling fs layer 424529d59559: Pulling fs layer 88a7cd7d71a5: Pulling fs layer 5c7d752e7e52: Pulling fs layer ade0b1746bac: Pulling fs layer 3f4087eea4ff: Pulling fs layer 2ff8673bbc01: Pulling fs layer 2cd0bdf70106: Pulling fs layer 47d1c609f225: Pulling fs layer 88a7cd7d71a5: Waiting 5c7d752e7e52: Waiting ade0b1746bac: Waiting 3f4087eea4ff: Waiting 2ff8673bbc01: Waiting 2cd0bdf70106: Waiting 47d1c609f225: Waiting 424529d59559: Waiting a1e80189bea5: Waiting 6cc174edcebf: Waiting 622fd4d469ec: Waiting f2ef945504a7: Verifying Checksum f2ef945504a7: Download complete a4b9e9eb807b: Download complete a1e80189bea5: Verifying Checksum a1e80189bea5: Download complete 6cc174edcebf: Verifying Checksum 6cc174edcebf: Download complete 424529d59559: Verifying Checksum 424529d59559: Download complete 88a7cd7d71a5: Verifying Checksum 88a7cd7d71a5: Download complete 622fd4d469ec: Verifying Checksum 622fd4d469ec: Download complete ade0b1746bac: Verifying Checksum ade0b1746bac: Download complete 3f4087eea4ff: Verifying Checksum 3f4087eea4ff: Download complete 5c7d752e7e52: Verifying Checksum 5c7d752e7e52: Download complete 2ff8673bbc01: Verifying Checksum 2ff8673bbc01: Download complete 2cd0bdf70106: Verifying Checksum 2cd0bdf70106: Download complete 47d1c609f225: Verifying Checksum 47d1c609f225: Download complete d7240bccd145: Verifying Checksum d7240bccd145: Download complete d7240bccd145: Pull complete f2ef945504a7: Pull complete a4b9e9eb807b: Pull complete a1e80189bea5: Pull complete 6cc174edcebf: Pull complete 622fd4d469ec: Pull complete 424529d59559: Pull complete 88a7cd7d71a5: 
Pull complete 5c7d752e7e52: Pull complete ade0b1746bac: Pull complete 3f4087eea4ff: Pull complete 2ff8673bbc01: Pull complete 2cd0bdf70106: Pull complete 47d1c609f225: Pull complete Digest: sha256:79ed8f8c1745a3df5303563cd59dd5d6cd6005e695a49fada2df759427f721b2 Trying to pull repository registry:5000/kubevirt/virt-handler ... devel: Pulling from registry:5000/kubevirt/virt-handler 2176639d844b: Already exists d4c13a6c82ac: Pulling fs layer d4c13a6c82ac: Verifying Checksum d4c13a6c82ac: Download complete d4c13a6c82ac: Pull complete Digest: sha256:e251c05322044337875714a7d25ea405fe0e3a28b621dd033d607c29aae9e662 Trying to pull repository registry:5000/kubevirt/virt-api ... devel: Pulling from registry:5000/kubevirt/virt-api 2176639d844b: Already exists 37e23c3b1f9e: Pulling fs layer ff02d6919641: Pulling fs layer 462ebf464f4c: Pulling fs layer ff02d6919641: Verifying Checksum ff02d6919641: Download complete 37e23c3b1f9e: Verifying Checksum 37e23c3b1f9e: Download complete 462ebf464f4c: Verifying Checksum 462ebf464f4c: Download complete 37e23c3b1f9e: Pull complete ff02d6919641: Pull complete 462ebf464f4c: Pull complete Digest: sha256:5e28c20f07f7bbfff7cf017727aaf0a862944bfc3d473d65ef9cc84e94d83523 Trying to pull repository registry:5000/kubevirt/iscsi-demo-target-tgtd ... devel: Pulling from registry:5000/kubevirt/iscsi-demo-target-tgtd 2176639d844b: Already exists e929cf197109: Pulling fs layer 26e3b3165052: Pulling fs layer aa37eee00b09: Pulling fs layer 6c4271069635: Pulling fs layer 6c4271069635: Waiting 26e3b3165052: Verifying Checksum 26e3b3165052: Download complete 6c4271069635: Verifying Checksum 6c4271069635: Download complete aa37eee00b09: Verifying Checksum aa37eee00b09: Download complete e929cf197109: Verifying Checksum e929cf197109: Download complete e929cf197109: Pull complete 26e3b3165052: Pull complete aa37eee00b09: Pull complete 6c4271069635: Pull complete Digest: sha256:0a10ce1f4522627e76e82f50013da4c2b5fb0c82ef874fe58bb9b0a97f1593ed Trying to pull repository registry:5000/kubevirt/vm-killer ... devel: Pulling from registry:5000/kubevirt/vm-killer 2176639d844b: Already exists 3726544f0c71: Pulling fs layer 3726544f0c71: Verifying Checksum 3726544f0c71: Download complete 3726544f0c71: Pull complete Digest: sha256:0bca6e56709fdfe03535b2f3b52d21b12465560547094b674ab1da32e84545d4 Trying to pull repository registry:5000/kubevirt/registry-disk-v1alpha ... devel: Pulling from registry:5000/kubevirt/registry-disk-v1alpha 2115d46e7396: Pulling fs layer 07ea0109ea1b: Pulling fs layer c4272834948a: Pulling fs layer c4272834948a: Verifying Checksum c4272834948a: Download complete 07ea0109ea1b: Verifying Checksum 07ea0109ea1b: Download complete 2115d46e7396: Verifying Checksum 2115d46e7396: Download complete 2115d46e7396: Pull complete 07ea0109ea1b: Pull complete c4272834948a: Pull complete Digest: sha256:123b54dd6e02a01cf1abbc7669623f8eb307bc8248035b21564c8d93c212168e Trying to pull repository registry:5000/kubevirt/cirros-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/cirros-registry-disk-demo 2115d46e7396: Already exists 07ea0109ea1b: Already exists c4272834948a: Already exists 85981f8173b8: Pulling fs layer 85981f8173b8: Download complete 85981f8173b8: Pull complete Digest: sha256:799f34529cde593243bd687a34ef0bd8d07c60ac696f3f25494053ac42565e84 Trying to pull repository registry:5000/kubevirt/fedora-cloud-registry-disk-demo ... 
devel: Pulling from registry:5000/kubevirt/fedora-cloud-registry-disk-demo 2115d46e7396: Already exists 07ea0109ea1b: Already exists c4272834948a: Already exists e6067c9cb2c7: Pulling fs layer e6067c9cb2c7: Verifying Checksum e6067c9cb2c7: Download complete e6067c9cb2c7: Pull complete Digest: sha256:839bbb391a2c83014dba4a4352beb4fc76a92c50c0016d8d3335e241ec93ed16 Trying to pull repository registry:5000/kubevirt/alpine-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/alpine-registry-disk-demo 2115d46e7396: Already exists 07ea0109ea1b: Already exists c4272834948a: Already exists 58041b262260: Pulling fs layer 58041b262260: Download complete 58041b262260: Pull complete Digest: sha256:eef1203945478523fa95866dbdf3522cb789f072a2b67c59a2ead4f45bbed122 Trying to pull repository registry:5000/kubevirt/subresource-access-test ... devel: Pulling from registry:5000/kubevirt/subresource-access-test 2176639d844b: Already exists 555998d3afb2: Pulling fs layer 4b7bb5238488: Pulling fs layer 555998d3afb2: Verifying Checksum 555998d3afb2: Download complete 4b7bb5238488: Verifying Checksum 4b7bb5238488: Download complete 555998d3afb2: Pull complete 4b7bb5238488: Pull complete Digest: sha256:2d8447df4f6c476616e7004934edd5c7536f1081781f039acc1bf97dcb47b009 2018/04/08 06:33:46 Waiting for host: 192.168.66.102:22 2018/04/08 06:33:46 Connected to tcp://192.168.66.102:22 Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ PROVIDER=os-3.9.0-alpha.4 ++ provider_prefix=kubevirt-functional-tests-openshift-release1 ++ job_prefix=kubevirt-functional-tests-openshift-release1 + source cluster/os-3.9.0-alpha.4/provider.sh ++ set -e ++ image=os-3.9@sha256:c214267c1252e51f5ea845ac7868dbc219c63627e9f96ec30cc0a8e9e6e9fc0d ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/cli@sha256:b0023d1863338ef04fa0b8a8ee5956ae08616200d89ffd2e230668ea3deeaff4' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ PROVIDER=os-3.9.0-alpha.4 ++ source hack/config-default.sh source hack/config-os-3.9.0-alpha.4.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd 
images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-os-3.9.0-alpha.4.sh ++ source hack/config-provider-os-3.9.0-alpha.4.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/cluster/os-3.9.0-alpha.4/.kubeconfig +++ docker_prefix=localhost:32844/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n 
default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig ++ KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig ++ cluster/os-3.9.0-alpha.4/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + 
KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig ++ wc -l ++ KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig ++ cluster/os-3.9.0-alpha.4/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done 
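The clean-up that just finished is driven entirely by the kubevirt.io label: for each target namespace, every KubeVirt-owned resource kind is deleted with a label selector, and the script then checks that the offlinevirtualmachines CRD is gone. A minimal sketch of that loop, with the resource-kind list abbreviated and plain kubectl standing in for the provider's _kubectl wrapper:

# Sketch only -- the real clean.sh wraps kubectl per provider and also force-deletes the libvirt and virt-handler pods first.
namespaces=(default kube-system)
kinds=(apiservices deployment rs services secrets pv pvc ds customresourcedefinitions pods clusterrolebinding rolebinding roles clusterroles serviceaccounts)
for ns in "${namespaces[@]}"; do
  for kind in "${kinds[@]}"; do
    kubectl -n "$ns" delete "$kind" -l kubevirt.io
  done
done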
./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ PROVIDER=os-3.9.0-alpha.4 ++ provider_prefix=kubevirt-functional-tests-openshift-release1 ++ job_prefix=kubevirt-functional-tests-openshift-release1 + source cluster/os-3.9.0-alpha.4/provider.sh ++ set -e ++ image=os-3.9@sha256:c214267c1252e51f5ea845ac7868dbc219c63627e9f96ec30cc0a8e9e6e9fc0d ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/cli@sha256:b0023d1863338ef04fa0b8a8ee5956ae08616200d89ffd2e230668ea3deeaff4' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ PROVIDER=os-3.9.0-alpha.4 ++ source hack/config-default.sh source hack/config-os-3.9.0-alpha.4.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-os-3.9.0-alpha.4.sh ++ source hack/config-provider-os-3.9.0-alpha.4.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/cluster/os-3.9.0-alpha.4/.kubeconfig +++ docker_prefix=localhost:32844/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... 
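As the trace above shows, deploy.sh loads its settings in layers: hack/config-default.sh provides the defaults (docker_prefix=kubevirt, docker_tag=latest), hack/config-provider-os-3.9.0-alpha.4.sh overrides them for the ephemeral OpenShift provider (docker_tag=devel, docker_prefix=localhost:32844/kubevirt, kubeconfig under cluster/os-3.9.0-alpha.4/), and an optional hack/config-local.sh may override both. A rough sketch of that layering, not the literal script:

# Rough sketch of the config layering visible in the trace; values in the comments come from the log above.
source hack/config-default.sh                                        # docker_prefix=kubevirt, docker_tag=latest, master_ip=192.168.200.2, ...
test -f "hack/config-provider-${PROVIDER}.sh" && source "hack/config-provider-${PROVIDER}.sh"   # docker_tag=devel, docker_prefix=localhost:32844/kubevirt, master_ip=127.0.0.1
test -f hack/config-local.sh && source hack/config-local.sh          # optional local overrides
export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace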
+ [[ -z openshift-release ]] + [[ openshift-release =~ .*-dev ]] + [[ openshift-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml serviceaccount "kubevirt-apiserver" created clusterrolebinding "kubevirt-apiserver" created clusterrolebinding "kubevirt-apiserver-auth-delegator" created rolebinding "kubevirt-apiserver" created role "kubevirt-apiserver" created clusterrole "kubevirt-apiserver" created clusterrole "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding "kubevirt-controller" created clusterrolebinding "kubevirt-controller-cluster-admin" created clusterrolebinding "kubevirt-privileged-cluster-admin" created customresourcedefinition "virtualmachines.kubevirt.io" created customresourcedefinition "virtualmachinereplicasets.kubevirt.io" created service "virt-api" created deployment "virt-api" created deployment "virt-controller" created daemonset "virt-handler" created customresourcedefinition "virtualmachinepresets.kubevirt.io" created customresourcedefinition "offlinevirtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-openshift-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "iscsi-disk-alpine" created daemonset "iscsi-demo-target-tgtd" created serviceaccount "kubevirt-testing" created clusterrolebinding "kubevirt-testing-cluster-admin" created + '[' os-3.9.0-alpha.4 = vagrant-openshift ']' + '[' os-3.9.0-alpha.4 = os-3.9.0-alpha.4 ']' + _kubectl adm policy add-scc-to-user privileged -z kubevirt-controller -n kube-system + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl adm policy add-scc-to-user privileged -z kubevirt-controller -n kube-system scc "privileged" added to: ["system:serviceaccount:kube-system:kubevirt-controller"] + _kubectl adm policy add-scc-to-user privileged -z kubevirt-testing -n kube-system + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl adm policy add-scc-to-user privileged -z kubevirt-testing -n kube-system scc "privileged" added to: ["system:serviceaccount:kube-system:kubevirt-testing"] + _kubectl adm policy add-scc-to-user privileged -z kubevirt-privileged -n 
kube-system + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl adm policy add-scc-to-user privileged -z kubevirt-privileged -n kube-system scc "privileged" added to: ["system:serviceaccount:kube-system:kubevirt-privileged"] + _kubectl adm policy add-scc-to-user privileged -z kubevirt-apiserver -n kube-system + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl adm policy add-scc-to-user privileged -z kubevirt-apiserver -n kube-system scc "privileged" added to: ["system:serviceaccount:kube-system:kubevirt-apiserver"] + _kubectl adm policy add-scc-to-user privileged admin + export KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + KUBECONFIG=cluster/os-3.9.0-alpha.4/.kubeconfig + cluster/os-3.9.0-alpha.4/.kubectl adm policy add-scc-to-user privileged admin scc "privileged" added to: ["admin"] + echo Done Done ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'iscsi-demo-target-tgtd-mcbz2 0/1 ContainerCreating 0 4s iscsi-demo-target-tgtd-p86p5 0/1 ContainerCreating 0 4s virt-api-fd96f94b5-mtsg7 0/1 ContainerCreating 0 7s virt-controller-5f7c946cc4-b8sjz 0/1 ContainerCreating 0 7s virt-controller-5f7c946cc4-vtjj9 0/1 ContainerCreating 0 7s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running iscsi-demo-target-tgtd-p86p5 0/1 ContainerCreating 0 6s virt-api-fd96f94b5-mtsg7 0/1 ContainerCreating 0 9s virt-controller-5f7c946cc4-b8sjz 0/1 ContainerCreating 0 9s virt-controller-5f7c946cc4-vtjj9 0/1 ContainerCreating 0 9s virt-handler-6qjpd 0/1 ContainerCreating 0 3s virt-handler-w7jdl 0/1 ContainerCreating 0 3s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n '' ']' ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-mcbz2 false iscsi-demo-target-tgtd-p86p5' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... + kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers false iscsi-demo-target-tgtd-mcbz2 false iscsi-demo-target-tgtd-p86p5 + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-mcbz2 false iscsi-demo-target-tgtd-p86p5' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
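The polling visible here and below follows a simple pattern: list the kube-system pods with a custom column of per-container readiness, filter out virt-controller (presumably because only the elected leader reports ready, as the final pod listing confirms), and sleep until no container is still false. A condensed sketch of that loop:

# Condensed sketch of the readiness poll; the real script also does a first pass on pod phase (grep -v Running).
while [ -n "$(kubectl get pods -n kube-system \
    -ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name \
    --no-headers | awk '!/virt-controller/ && /false/')" ]; do
  echo 'Waiting for KubeVirt containers to become ready ...'
  sleep 10
done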
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers false iscsi-demo-target-tgtd-mcbz2 false iscsi-demo-target-tgtd-p86p5 + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-mcbz2 false iscsi-demo-target-tgtd-p86p5' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... + kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers false iscsi-demo-target-tgtd-mcbz2 false iscsi-demo-target-tgtd-p86p5 + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-mcbz2 false iscsi-demo-target-tgtd-p86p5' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers false iscsi-demo-target-tgtd-p86p5 + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n '' ']' ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '/virt-controller/ && /true/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ wc -l + '[' 1 -lt 1 ']' + kubectl get pods -n kube-system + cluster/kubectl.sh get pods -n kube-system NAME READY STATUS RESTARTS AGE iscsi-demo-target-tgtd-mcbz2 1/1 Running 1 1m iscsi-demo-target-tgtd-p86p5 1/1 Running 1 1m virt-api-fd96f94b5-mtsg7 1/1 Running 0 1m virt-api-fd96f94b5-ztr4z 1/1 Running 0 1m virt-controller-5f7c946cc4-b8sjz 0/1 Running 0 1m virt-controller-5f7c946cc4-vtjj9 1/1 Running 0 1m virt-handler-6qjpd 1/1 Running 0 1m virt-handler-w7jdl 1/1 Running 0 1m + kubectl version + cluster/kubectl.sh version oc v3.9.0-alpha.4+9ab7a71 kubernetes v1.9.1+a0ce1bc657 features: Basic-Auth GSSAPI Kerberos SPNEGO Server https://127.0.0.1:32841 openshift v3.9.0-alpha.4+9ab7a71 kubernetes v1.9.1+a0ce1bc657 + ginko_params=--ginkgo.noColor + [[ -d /home/nfs/images/windows2016 ]] + FUNC_TEST_ARGS=--ginkgo.noColor + make functest hack/dockerized "hack/build-func-tests.sh" sha256:c90fc4dd370dfa6a26541d1993dc42ab8f083c22510abae856ba4a3f7052b736 go version go1.9.2 linux/amd64 skipping directory . go version go1.9.2 linux/amd64 Compiling tests... 
compiled tests.test 28a86cbadf02865771b68b81fcaa490712d7c32052a4352d278218bc95e55361 28a86cbadf02865771b68b81fcaa490712d7c32052a4352d278218bc95e55361 hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1523169440 Will run 67 of 67 specs • ------------------------------ • [SLOW TEST:51.970 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:121 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122 should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:123 ------------------------------ • [SLOW TEST:142.557 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:121 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:144 ------------------------------ • [SLOW TEST:44.178 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:121 With an emptyDisk defined /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:177 should create a writeable emptyDisk with the right capacity /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:179 ------------------------------ • [SLOW TEST:38.825 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:121 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:229 should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:231 ------------------------------ • [SLOW TEST:104.919 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VM /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:121 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:229 should not persist data /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:252 ------------------------------ volumedisk0 compute • [SLOW TEST:48.349 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:39 VM definition /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:50 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:51 should report 3 cpu cores under guest OS /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:57 ------------------------------ • [SLOW TEST:46.002 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:39 New VM with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:109 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:132 ------------------------------ • [SLOW TEST:71.848 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:42 Starting and stopping the same VM /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92 should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:93 ------------------------------ • [SLOW TEST:21.161 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:42 Starting a 
VM /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113 should not modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:114 ------------------------------ • [SLOW TEST:43.692 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:42 Starting multiple VMs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131 should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:132 ------------------------------ • ------------------------------ • [SLOW TEST:5.709 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:41 should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 to four, to six and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:28.343 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:41 should update readyReplicas once VMs are up /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:102 ------------------------------ • [SLOW TEST:5.530 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:41 should remove VMs once it is marked for deletion /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:114 ------------------------------ • [SLOW TEST:8.214 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:41 should remove owner references on the VM if it is orphan deleted /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:130 ------------------------------ • [SLOW TEST:9.082 seconds] VirtualMachineReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:41 should not scale when paused and scale when resume /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:168 ------------------------------ • [SLOW TEST:40.714 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:79 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80 should have cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:81 ------------------------------ • [SLOW TEST:91.486 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:79 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80 with injected ssh-key /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:92 should have ssh-key under authorized keys /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:93 ------------------------------ • [SLOW TEST:45.978 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:79 with cloudInitNoCloud userData source /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:116 should process provided cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:117 ------------------------------ • [SLOW TEST:40.570 seconds] 
CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46 A new VM /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:79 should take user-data from k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:160 ------------------------------ • [SLOW TEST:9.038 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 with correct permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:49 should be allowed to access subresource endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:50 ------------------------------ • [SLOW TEST:7.919 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 Without permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:54 should not be able to access subresource endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:55 ------------------------------ • [SLOW TEST:54.523 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:46 VirtualMachine with nodeNetwork definition given /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:108 should be able to reach /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 the Inbound VM /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • ------------------------------ • [SLOW TEST:8.267 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:46 VirtualMachine with nodeNetwork definition given /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:108 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on the same node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:6.224 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:46 VirtualMachine with nodeNetwork definition given /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:108 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on a different node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • [SLOW TEST:26.458 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:44 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:55 should update OfflineVirtualMachine once VMs are up /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:144 ------------------------------ ••• ------------------------------ • [SLOW TEST:24.924 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:44 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:55 should stop VM if running set to false /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:210 ------------------------------ STEP: Doing run: 0 STEP: Starting the VM STEP: OVM has the running condition STEP: Stopping the VM STEP: OVM 
has not the running condition STEP: Doing run: 1 STEP: Starting the VM STEP: OVM has the running condition STEP: Stopping the VM STEP: OVM has not the running condition STEP: Doing run: 2 STEP: Starting the VM STEP: OVM has the running condition • Failure [395.794 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:44 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:55 should start and stop VM multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:218 Timed out after 300.000s. Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:106 ------------------------------ • [SLOW TEST:89.470 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:44 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:55 should not update the VM spec if Running /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:231 ------------------------------ STEP: Creating new OVM, not running STEP: Starting the VM STEP: OVM has the running condition STEP: Getting the running VM STEP: Obtaining the serial console STEP: Guest shutdown STEP: Testing the VM is not running • Failure [284.567 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:44 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:55 should survive guest shutdown, multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:272 Timed out after 240.000s. Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:304 ------------------------------ • [SLOW TEST:23.580 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:44 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:55 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:310 should start a VM once /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:311 ------------------------------ • [SLOW TEST:29.613 seconds] OfflineVirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:44 A valid OfflineVirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:55 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:310 should stop a VM once /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:342 ------------------------------ • [SLOW TEST:48.483 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:37 A VM with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:56 should be shut down when the watchdog expires /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:57 ------------------------------ • ------------------------------ • [SLOW TEST:21.708 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Creating a VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55 should start it /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:61 ------------------------------ • [SLOW TEST:20.883 seconds] Vmlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41 Creating a VM /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55 should attach virt-launcher to it /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:69 ------------------------------ • ------------------------------ • [SLOW TEST:18.973 seconds] Vmlifecycle 
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55
    with user-data
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:99
      without k8s secret
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:100
        should retry starting the VM
        /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:101
------------------------------
• [SLOW TEST:20.035 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55
    with user-data
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:99
      without k8s secret
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:100
        should log warning and proceed once the secret is there
        /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:132
------------------------------
• [SLOW TEST:57.244 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55
    when virt-launcher crashes
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:180
      should be stopped and have Failed phase
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:181
------------------------------
• [SLOW TEST:41.922 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55
    when virt-handler crashes
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:210
      should recover and continue management
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:211
------------------------------
S [SKIPPING] [1.137 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:247
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-default [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        Skip log query tests for JENKINS ci test environment

        /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:252
------------------------------
S [SKIPPING] [1.256 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:55
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:247
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-alternative [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        Skip log query tests for JENKINS ci test environment

        /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:252
------------------------------
•
------------------------------
• [SLOW TEST:25.664 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41
  Delete a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:321
    with grace period greater than 0
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:322
      should run graceful shutdown
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:323
------------------------------
• [SLOW TEST:39.952 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41
  Killed VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:376
    should be in Failed phase
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:377
------------------------------
• [SLOW TEST:33.528 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:41
  Killed VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:376
    should be left alone by virt-handler
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:405
------------------------------
• [SLOW TEST:39.666 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      with a cirros image
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
        should return that we are running cirros
        /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67
------------------------------
• [SLOW TEST:41.919 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      with a fedora image
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76
        should return that we are running fedora
        /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77
------------------------------
• [SLOW TEST:37.846 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      should be able to reconnect to console multiple times
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86
------------------------------
• [SLOW TEST:20.390 seconds]
VNC
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:35
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46
    with VNC connection
    /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:47
      should allow accessing the VNC device
      /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:48
------------------------------
••••••••
------------------------------
• [SLOW TEST:51.402 seconds]
LeaderElection
/root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43
  Start a VM
  /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53
    when the controller pod is not running
    /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54
      should success
      /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55
------------------------------
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...

Summarizing 2 Failures:

[Fail] OfflineVirtualMachine A valid OfflineVirtualMachine given [It] should start and stop VM multiple times
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:106

[Fail] OfflineVirtualMachine A valid OfflineVirtualMachine given [It] should survive guest shutdown, multiple times
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:304

Ran 65 of 67 Specs in 2440.820 seconds
FAIL! -- 63 Passed | 2 Failed | 0 Pending | 2 Skipped
--- FAIL: TestTests (2440.82s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
d882db141acb
f5660d0b4fa8
4339dda3b7d4
f63d522a50c8
d882db141acb
f5660d0b4fa8
4339dda3b7d4
f63d522a50c8
kubevirt-functional-tests-openshift-release1-node01
kubevirt-functional-tests-openshift-release1-node02
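
Editor's note on the two failures above: both report "Timed out after ...s. Expected : false to be true" from ovm_test.go, which is the signature of a Gomega Eventually(...) poll on a boolean condition that never became true before its timeout (300s and 240s respectively). The sketch below is a minimal, hypothetical illustration of that assertion pattern only; it is not the actual ovm_test.go code, and the vmIsReady poll is a placeholder.

package tests_test

import (
	"testing"
	"time"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestSketch(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Timeout Sketch Suite")
}

var _ = Describe("OfflineVirtualMachine (sketch)", func() {
	It("should start and stop VM multiple times", func() {
		// Placeholder poll: a real test would query the cluster for the
		// OVM/VM status here instead of returning a constant.
		vmIsReady := func() bool {
			return false
		}
		// Poll every second for up to 300s; if the condition never turns
		// true, Gomega fails the spec with a "Timed out after ..." message
		// reporting the last observed value (false) and the expectation (true),
		// which is the failure shape seen in this log.
		Eventually(vmIsReady, 300*time.Second, 1*time.Second).Should(BeTrue())
	})
})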