compute-hyperv documentation

Welcome to the compute_hyperv documentation.

Starting with Folsom, Hyper-V can be used as a compute node within OpenStack deployments.

This documentation contains information on how to set up and configure Hyper-V hosts as OpenStack compute nodes, more specifically:

  • Supported OS versions
  • Requirements and host configurations
  • How to install the necessary OpenStack services
  • nova-compute configuration options
  • Troubleshooting and debugging tips & tricks

For release notes, please check out the following page.


Contributing

If you would like to contribute to the development of OpenStack, you must follow the steps on this page:

Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at:

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

Installation guide

The compute-hyperv project offers two Nova Hyper-V drivers, providing additional features and bug fixes compared to the in-tree Nova Hyper-V driver:

  • compute_hyperv.driver.HyperVDriver
  • compute_hyperv.cluster.driver.HyperVClusterDriver

These drivers receive at least the same degree of testing as the upstream driver (if not more), being covered by a range of official OpenStack Continuous Integration (CI) systems.

Most production Hyper-V based OpenStack deployments use the compute-hyperv drivers.

The HyperVClusterDriver can be used on Hyper-V Cluster compute nodes and will create and manage highly available clustered virtual machines.

This chapter assumes a working setup of OpenStack following the OpenStack Installation Tutorial.

Prerequisites

Starting with Folsom, Hyper-V can be used as a compute node within OpenStack deployments.

The Hyper-V versions that are currently supported are:

  • (deprecated) Windows / Hyper-V Server 2012
  • Windows / Hyper-V Server 2012 R2
  • Windows / Hyper-V Server 2016

Newer Hyper-V versions come with an extended list of features, and can offer better overall performance. Thus, Windows / Hyper-V Server 2016 is recommended for the best experience.

Hardware requirements

Although this document does not provide a complete list of Hyper-V compatible hardware, the following items are necessary:

  • 64-bit processor with Second Level Address Translation (SLAT).
  • CPU support for VM Monitor Mode Extension (VT-c on Intel CPUs).
  • Minimum of 4 GB memory. As virtual machines share memory with the Hyper-V host, you will need to provide enough memory to handle the expected virtual workload.
  • Minimum 16-20 GB of disk space for the OS itself and updates.
  • At least one NIC, but optimally two NICs: one connected to the management network, and one connected to the guest data network. If a single NIC is used, make sure the -AllowManagementOS option is set to True when creating the Hyper-V vSwitch (see the example below); otherwise, you will lose connectivity to the host.
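
For example, an external vSwitch that shares a single NIC with the management OS might be created as follows (the switch and adapter names are placeholders):

New-VMSwitch -Name external -NetAdapterName "Ethernet0" -AllowManagementOS $true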

The following items will need to be enabled in the system BIOS:

  • Virtualization Technology - may have a different label depending on motherboard manufacturer.
  • Hardware Enforced Data Execution Prevention.

To check a host’s Hyper-V compatibility, open up cmd or PowerShell and run:

systeminfo

The output will include the Hyper-V requirements and whether the host meets each of them. If all the requirements are met, the host is Hyper-V capable.
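
On a Hyper-V capable host, the relevant part of the output should look similar to the following (exact values may differ):

Hyper-V Requirements:      VM Monitor Mode Extensions: Yes
                           Virtualization Enabled In Firmware: Yes
                           Second Level Address Translation: Yes
                           Data Execution Prevention Available: Yes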

Storage considerations

Instance files

Nova will use a pre-configured directory for storing instance files such as:

  • instance boot images and ephemeral disk images
  • instance config files (config drive image and Hyper-V files)
  • instance console log
  • cached Glance images
  • snapshot files

The following options are available for the instance directory:

  • Local disk.
  • SMB shares. Make sure that they are mounted persistently (see the example after this list).
  • Cluster Shared Volumes (CSV)
    • Storage Spaces
    • Storage Spaces Direct (S2D)
    • SAN LUNs as underlying CSV storage
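
As an illustrative sketch, a persistent SMB mapping can be created through PowerShell (the server and share names below are placeholders):

New-SmbMapping -LocalPath X: -RemotePath \\SMB_SERVER\share_name -Persistent $true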

Note

Ample storage may be required when using Nova “local” storage for the instance virtual disk images (as opposed to booting from Cinder volumes).

Multiple compute nodes can be configured to use the same storage location. Doing so results in faster cold / live migration operations between compute nodes using that storage, but there’s a risk of disk overcommitment: Nova is not aware that compute nodes share the same storage, and because of this, the Nova scheduler might pick a host it normally wouldn’t.

For example, hosts A and B are configured to use a 100 GB SMB share. Both compute nodes will report as having 100 GB storage available. Nova has to spawn 2 instances requiring 80 GB storage each. Normally, only one instance would fit, but one may be spawned on each host, overcommitting the disk by 60 GB.

Cinder volumes

The Nova Hyper-V driver can attach Cinder volumes exposed through the following protocols:

  • iSCSI
  • Fibre Channel
  • SMB - the volumes are stored as virtual disk images (e.g. VHD / VHDX)

Note

The Nova Hyper-V Cluster driver only supports SMB backed volumes. The reason is that the volumes need to be available on the destination host during an unexpected instance failover.

Before configuring Nova, you should ensure that the Hyper-V compute nodes can properly access the storage backend used by Cinder.

The MSI installer can enable the Microsoft Software iSCSI initiator for you. When using hardware iSCSI initiators or Fibre Channel, make sure that the HBAs are properly configured and the drivers are up to date.
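
If you need to enable and start the Microsoft Software iSCSI initiator service manually, a minimal sketch:

Set-Service msiscsi -StartupType Automatic
Start-Service msiscsi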

Please consult your storage vendor documentation to see if there are any other special requirements (e.g. additional software to be installed, such as iSCSI DSMs - Device Specific Modules).

Some Cinder backends require pre-configured information (specified via volume types or Cinder Volume config file) about the hosts that are going to consume the volumes (e.g. the operating system type), based on which the LUNs will be created/exposed. The reason is that the supported SCSI command set may differ based on the operating system. An incorrect LUN type may prevent Windows nodes from accessing the volumes (although generic LUN types should be fine in most cases).

Multipath IO

You may set up multiple paths between your Windows hosts and the storage backends in order to provide increased throughput and fault tolerance.

When using iSCSI or Fibre Channel, make sure to enable and configure the MPIO service. MPIO is a service that manages available disk paths, performing failover and load balancing based on pre-configured policies. It’s extendable, in the sense that Device Specific Modules may be imported.

The MPIO service will ensure that LUNs accessible through multiple paths are exposed by the OS as a single disk drive.

Warning

If multiple disk paths are available and the MPIO service is not configured properly, the same LUN can be exposed as multiple disk drives (one per available path). This must be addressed urgently as it can potentially lead to data corruption.

Run the following to enable the MPIO service:

Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

# Ensure that the "mpio" service is running
Get-Service mpio

Once you have enabled MPIO, make sure to configure it to automatically claim volumes exposed by the desired storage backend. If needed, import vendor provided DSMs.
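
For example, the Microsoft DSM can be set to automatically claim iSCSI devices (a sketch; Fibre Channel devices may require vendor DSMs or the mpclaim utility instead):

Enable-MSDSMAutomaticClaim -BusType iSCSI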

For more details about Windows MPIO, check the following page.

SMB 3.0 and later also supports using multiple paths to a share (the UNC path can be the same), leveraging SMB Direct and SMB Multichannel.

By default, all available paths will be used when accessing SMB shares. You can configure constraints in order to choose which adapters should be used when connecting to SMB shares (for example, to avoid using a management network for SMB traffic).
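
As a hedged example, the following constraint restricts SMB connections towards a given server to a specific network adapter (the server and interface names are placeholders):

New-SmbMultichannelConstraint -ServerName SMB_SERVER -InterfaceAlias "Storage NIC"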

Note

SMB does not require or interact in any way with the MPIO service.

For best performance, SMB Direct (RDMA) should also be used, if your network cards support it.

For more details about SMB Multichannel, check the following blog post.

NTP configuration

Network time services must be configured to ensure proper operation of the OpenStack nodes. To set network time on your Windows host you must run the following commands:

net stop w32time
w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL
net start w32time

Keep in mind that the node will have to be time synchronized with the other nodes of your OpenStack environment, so it is important to use the same NTP server. Note that in case of an Active Directory environment, you may do this only for the AD Domain Controller.
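
To verify that the host is synchronizing properly, you can run:

w32tm /query /status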

Live migration configuration

In order for the live migration feature to work on the Hyper-V compute nodes, the following items are required:

  • A Windows domain controller with the Hyper-V compute nodes as domain members.
  • The nova-compute service must run with domain credentials. You can set the service credentials with:
sc.exe config openstack-compute obj="DOMAIN\username" password="password"

This guide contains information on how to setup and configure live migration on your Hyper-V compute nodes (authentication options, constrained delegation, migration performance options, etc), and a few troubleshooting tips.

Hyper-V Cluster configuration

compute-hyperv also offers a driver for Hyper-V Cluster nodes, which will be able to create and manage highly available virtual machines. For the Hyper-V Cluster Driver to be usable, the Hyper-V Cluster nodes will have to be joined to an Active Directory and a Microsoft Failover Cluster. The nodes in a Hyper-V Cluster must be identical.

Guarded Host configuration (Shielded VMs)

Shielded VMs are a feature introduced in Windows / Hyper-V Server 2016. They provide highly secure virtual machines that cannot be read from, tampered with, or inspected by malware, or even malicious administrators.

In order for a Hyper-V compute node to be able to spawn such VMs, it must be configured as a Guarded Host.

For more information on how to configure your Active Directory, Host Guardian Service, and compute node as a Guarded Host, you can read this article.

NUMA spanning configuration

Non-Uniform Memory Access (NUMA) is a computer system architecture that groups processors and memory in NUMA nodes. Processor threads accessing data in the same NUMA cell have lower memory access latencies and better overall performance. Some applications are NUMA-aware, taking advantage of NUMA performance optimizations.

Windows / Hyper-V Server 2012 introduced support for Virtual NUMA (vNUMA), which can be exposed to the VMs, allowing them to benefit from the NUMA performance optimizations.

By default, when Hyper-V starts a VM, it will try to fit all of its memory in a single NUMA node; if it doesn’t fit in one, the memory will be spanned across multiple NUMA nodes. This is called NUMA spanning, and it is enabled by default. It allows Hyper-V to more easily utilize the host’s memory for VMs.

NUMA spanning can be disabled and VMs can be configured to span a specific number of NUMA nodes (including 1), and have that NUMA topology exposed to the guest. Keep in mind that if a VM’s vNUMA topology doesn’t fit in the host’s available NUMA topology, it won’t be able to start, and as a side effect, less memory can be utilized for VMs.

If a compute node only has 1 NUMA node, disabling NUMA spanning will have no effect. To check how many NUMA nodes a host has, run the following PowerShell command:

Get-VMHostNumaNode

The output will contain a list of NUMA nodes, their processors, total memory, and used memory.

To disable NUMA spanning, run the following PowerShell commands:

Set-VMHost -NumaSpanningEnabled $false
Restart-Service vmms

In order for the changes to take effect, the Hyper-V Virtual Machine Management service (vmms) and the Hyper-V VMs have to be restarted.

For more details on vNUMA, you can read the following documentation.

PCI passthrough host configuration

Starting with Windows / Hyper-V Server 2016, PCI devices can be directly assigned to Hyper-V VMs.

In order to benefit from this feature, the host must support SR-IOV and have assignable PCI devices. This can easily be checked by running the following in PowerShell:

Start-BitsTransfer https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/hyperv-samples/benarm-powershell/DDA/survey-dda.ps1
.\survey-dda.ps1

The script will report whether the host supports SR-IOV and will output a detailed list of PCI devices, indicating whether each of them is assignable.

If all the conditions are met, the desired devices will have to be prepared to be assigned to VMs. The following article contains a step-by-step guide on how to prepare them and how to restore the configurations if needed.
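
As a rough sketch of the preparation steps described in that article (both identifiers below are placeholders that must be replaced with the actual device’s values):

# Values taken from Get-PnpDevice / the survey script output.
$instanceId   = "PCI\VEN_XXXX&DEV_XXXX&SUBSYS_XXXXXXXX&REV_XX\X&XXXXXXXX&X&XX"
$locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"

# Disable the device on the host, then dismount it so it becomes assignable.
Disable-PnpDevice -InstanceId $instanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force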

Install

This section describes how to install a Hyper-V nova compute node into an OpenStack deployment. For details about configuration, refer to Configuration.

This section assumes that you already have a working OpenStack environment.

The easiest way to install and configure the nova-compute service is to use an MSI, which can be freely downloaded from: https://cloudbase.it/openstack-hyperv-driver/

The MSI can optionally include the installation and / or configuration of:

  • Neutron L2 agents: Neutron Hyper-V Agent, Neutron OVS Agent (if OVS is installed on the compute node).
  • Ceilometer Polling Agent.
  • Windows Services for the mentioned agents.
  • Live migration feature (if the compute node is joined to an AD).
  • OVS vSwitch extension, OVS bridge, OVS tunnel IP (if OVS is installed, and Neutron OVS Agent is used).
  • FreeRDP
  • iSCSI Initiator

The MSI can be installed normally through its GUI, or in unattended mode (useful for automation). For the latter, the following command has to be executed:

msiexec /i \path\to\the\HyperVNovaCompute.msi /qn /l*v log.txt

The command above installs the given MSI in quiet, no-UI mode and writes verbose logs to the given log.txt file. Additional key-value arguments can be passed to the MSI for configuration. Some of these properties are:

  • ADDLOCAL: Comma separated list of features to install. Acceptable values: HyperVNovaCompute,NeutronHyperVAgent,iSCSISWInitiator,FreeRDP
  • INSTALLDIR: The location where the OpenStack services and their configuration files are installed. By default, they are installed in: %ProgramFiles%\Cloudbase Solutions\OpenStack\Nova
  • SKIPNOVACONF: Installs the MSI without doing any of the other actions: creating configuration files, services, vSwitches, OVS bridges, etc.

Example:

msiexec /i HyperVNovaCompute.msi /qn /l*v log.txt `
    ADDLOCAL="HyperVNovaCompute,NeutronHyperVAgent,iSCSISWInitiator,FreeRDP"

After installing the OpenStack services on the Hyper-V compute node, check that they are up and running:

Get-Service nova-compute
Get-Service neutron-*
Get-Service ceilometer-*  # if the Ceilometer Polling Agent has been installed.

All the listed services must have the Running status. If not, refer to the Troubleshooting guide.

Next steps

Your OpenStack environment now includes the nova-compute service installed and configured with the compute_hyperv driver.

If the OpenStack services are Running on the Hyper-V compute node, make sure that they’re reporting to the OpenStack controller and that they’re alive by running the following:

neutron agent-list
nova service-list

The output should contain the Hyper-V host’s nova-compute service and Neutron L2 agent (either a Neutron Hyper-V Agent, or a Neutron OVS Agent) as alive / running.

Starting with Ocata, Nova cells became mandatory. Make sure that the newly added Hyper-V compute node is mapped into a Nova cell, otherwise Nova will not build any instances on it. In small deployments, two cells are enough: cell0 and cell1. cell0 is a special cell: instances that are never scheduled are relegated to the cell0 database, which is effectively a graveyard of instances that failed to start. All successful/running instances are stored in cell1.

You can check your Nova cells by running this on the Nova Controller:

nova-manage cell_v2 list_cells

You should have at least 2 cells listed (cell0 and cell1). If they are not listed, or if only cell0 exists, you can simply run:

nova-manage cell_v2 simple_cell_setup

If both cells exist, map the newly created compute nodes to cell1 by running:

nova-manage cell_v2 discover_hosts
nova-manage cell_v2 list_hosts

The list_hosts command should output a table with your compute nodes mapped to the Nova cell. For more details on Nova cells, their benefits and how to properly use them, check the Nova cells documentation.

If Neutron Hyper-V Agent has been chosen as an L2 agent, make sure that the Neutron Server meets the following requirements:

  • networking-hyperv installed. To check if networking-hyperv is installed, run the following:

    pip freeze | grep networking-hyperv

    If there is no output, it can be installed by running:

    pip install networking-hyperv==VERSION

    The VERSION depends on your OpenStack release. For example, for Queens, the VERSION is 6.0.0. For other release names and versions, see: https://github.com/openstack/networking-hyperv/releases
  • The Neutron Server has been configured to use the hyperv mechanism driver. The configuration option can be found in /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
mechanism_drivers = openvswitch,hyperv

If the configuration file has been modified, or networking-hyperv has been installed, the Neutron Server service will have to be restarted.
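
For example, on a Linux controller using systemd, the restart typically looks like this (the unit name may differ across distributions):

systemctl restart neutron-server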

Additionally, keep in mind that the Neutron Hyper-V Agent only supports the following network types: local, flat, VLAN. Ports with any other network type will result in a PortBindingFailure exception. If tunneling is desired, the Neutron OVS Agent should be used instead.

Verify operation

Verify that instances can be created on the Hyper-V compute node through nova. If spawning fails, check the nova compute log file on the Hyper-V compute node for relevant information (by default, it can be found in C:\OpenStack\Log\). Additionally, setting the debug configuration option in nova.conf will help troubleshoot the issue.

If there is no relevant information in the compute node’s logs, check the Nova controller’s logs.

Troubleshooting guide

This section contains a few tips and tricks which can help you troubleshoot and solve your Hyper-V compute node’s potential issues.

OpenStack Services not running

You can check if the OpenStack services are up by running:

Get-Service nova-compute
Get-Service neutron-*

All the listed services must have the Running status. If not, check their logs, which can typically be found in C:\OpenStack\Log\. If there are no logs, try to run the services manually. To see how to run nova-compute manually, run the following command:

sc.exe qc nova-compute

The output will contain BINARY_PATH_NAME, the service’s command, which includes the path to the nova-compute.exe executable and the path to its configuration file. Edit the configuration file and add the following:

[DEFAULT]
debug = True
use_stderr = True

This will help troubleshoot the service’s issues. Next, run nova-compute in PowerShell manually:

&"C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python27\Scripts\nova-compute.exe" `
    --config-file "C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\nova.conf"

The reason why the service could not be started should be visible in the output.

Live migration

This guide offers a few tips for troubleshooting live migration issues.

If live migration fails because the nodes have incompatible hardware, refer to Configuration.

How to restart a service on Hyper-V

Restarting an OpenStack service can easily be done through PowerShell:

Restart-Service service-name

or through cmd:

net stop service_name && net start service_name

For example, the following command will restart the iSCSI initiator service:

Restart-Service msiscsi

Configuration

In addition to the Nova config options, compute-hyperv has a few extra configuration options. For a sample configuration file, refer to Configuration sample.

Driver configuration

In order to use the compute-hyperv Nova driver, the following configuration option will have to be set in the nova.conf file:

[DEFAULT]
compute_driver = compute_hyperv.driver.HyperVDriver

And for Hyper-V Clusters, the following:

[DEFAULT]
compute_driver = compute_hyperv.cluster.driver.HyperVClusterDriver
instances_path = path\to\cluster\wide\storage\location
sync_power_state_interval = -1

[workarounds]
handle_virt_lifecycle_events = False

By default, the OpenStack Hyper-V installer will configure the nova-compute service to use the compute_hyperv.driver.HyperVDriver driver.

Storage configuration

When spawning instances, nova-compute will create the VM related files (VM configuration file, ephemerals, configdrive, console.log, etc.) in the location specified by the instances_path configuration option, even if the instance is volume-backed.

It is not recommended for Nova and Cinder to use the same storage location, as that can create scheduling and disk overcommitment issues.

Nova instance files location

By default, the OpenStack Hyper-V installer will configure nova-compute to use the following path as the instances_path:

[DEFAULT]
instances_path = C:\OpenStack\Instances

instances_path can be set to an SMB share, mounted or unmounted:

[DEFAULT]
# in this case, X is a persistently mounted SMB share.
instances_path = X:\OpenStack\Instances

# or
instances_path = \\SMB_SERVER\share_name\OpenStack\Instances

Alternatively, CSVs can be used:

[DEFAULT]
instances_path = C:\ClusterStorage\Volume1\OpenStack\Instances

Block Storage (Cinder) configuration

This section describes Nova configuration options that handle the way in which Cinder volumes are consumed.

When multiple paths connect the host to the storage backend, make sure to enable the following config option:

[hyperv]
use_multipath_io = True

This will ensure that the available paths are actually leveraged. Also, before attempting any volume connection, it will ensure that the MPIO service is enabled and that passthrough block devices (iSCSI / FC) are claimed by MPIO. SMB backed volumes are not affected by this option.

In some cases, Nova may fail to attach volumes due to transient connectivity issues. The following options specify how many and how often retries should be performed.

[hyperv]
# Those are the default values.
volume_attach_retry_count = 10
volume_attach_retry_interval = 5

# The following options only apply to disk scan retries.
mounted_disk_query_retry_count = 10
mounted_disk_query_retry_interval = 5

When one or more hardware iSCSI initiators are available, you may use the following config option to explicitly tell Nova which iSCSI initiators to use:

[hyperv]
iscsi_initiator_list = PCI\VEN_1077&DEV_2031&SUBSYS_17E8103C&REV_02\4&257301f0&0&0010_0, PCI\VEN_1077&DEV_2031&SUBSYS_17E8103C&REV_02\4&257301f0&0&0010_1

The list of available initiators may be retrieved using:

Get-InitiatorPort

If no iSCSI initiator is specified, the MS iSCSI Initiator service will pick only one of the available initiators when establishing iSCSI sessions.

Live migration configuration

For live migrating virtual machines to hosts with different CPU features, the following configuration option must be set in the compute node’s nova.conf file:

[hyperv]
limit_cpu_features = True

Keep in mind that changing this configuration option will not affect instances that are already spawned: instances spawned with this flag set to False will not be able to live migrate to hosts with different CPU features; they will have to be shut down and rebuilt, or have the setting changed manually.

Whitelisting PCI devices

After the assignable PCI devices have been prepared for Hyper-V (PCI passthrough host configuration), the next step is to whitelist them in the compute node’s nova.conf.

[pci]
# this is a list of dictionaries, more dictionaries can be added.
passthrough_whitelist = [{"vendor_id": "<dev_vendor_id>", "product_id": "<dev_product_id>"}]

The vendor_id and product_id values required by passthrough_whitelist can be obtained from the assignable PCI device’s InstanceId:

Get-VMHostAssignableDevice

The InstanceId should have the following format:

PCIP\VEN_<vendor_id>&DEV_<product_id>

The <vendor_id> and <product_id> can be extracted and used in the nova.conf file. After the configuration file has been changed, the nova-compute service will have to be restarted.
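
As an illustrative sketch, the IDs can be extracted in PowerShell with a regular expression (assuming the InstanceId format shown above):

Get-VMHostAssignableDevice | ForEach-Object {
    # $Matches is populated by the -match operator.
    if ($_.InstanceId -match 'VEN_([0-9A-F]{4})&DEV_([0-9A-F]{4})') {
        [PSCustomObject]@{ VendorId = $Matches[1]; ProductId = $Matches[2] }
    }
}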

Afterwards, the nova-api and nova-scheduler services will have to be configured. For this, check the nova PCI passthrough configuration guide.

Configuration options

Configuration options reference

The following is an overview of all available configuration options in Nova and compute-hyperv. For a sample configuration file, refer to Configuration sample.

DEFAULT
rpc_conn_pool_size
Type:integer
Default:30

Size of RPC connection pool.

Deprecated Variations: group DEFAULT, name rpc_conn_pool_size
conn_pool_min_size
Type:integer
Default:2

The pool size limit for the connection expiration policy.

conn_pool_ttl
Type:integer
Default:1200

The time-to-live, in seconds, of idle connections in the pool.

executor_thread_pool_size
Type:integer
Default:64

Size of executor thread pool when the executor is threading or eventlet.

Deprecated Variations: group DEFAULT, name rpc_thread_pool_size
rpc_response_timeout
Type:integer
Default:60

Seconds to wait for a response from a call.

transport_url
Type:string
Default:rabbit://

The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is:

driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query

Example: rabbit://rabbitmq:password@127.0.0.1:5672//

For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html

control_exchange
Type:string
Default:openstack

The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.

debug
Type:boolean
Default:false
Mutable:This option can be changed without restarting.

If set to true, the logging level will be set to DEBUG instead of the default INFO level.

log_config_append
Type:string
Default:<None>
Mutable:This option can be changed without restarting.

The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, logging_context_format_string).

Deprecated Variations: group DEFAULT, names log-config and log_config
log_date_format
Type:string
Default:%Y-%m-%d %H:%M:%S

Defines the format string for %(asctime)s in log records. Default: the value above. This option is ignored if log_config_append is set.

log_file
Type:string
Default:<None>

(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set.

Deprecated Variations: group DEFAULT, name logfile
log_dir
Type:string
Default:<None>

(Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set.

Deprecated Variations: group DEFAULT, name logdir
watch_log_file
Type:boolean
Default:false

Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set.

use_syslog
Type:boolean
Default:false

Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.

use_journal
Type:boolean
Default:false

Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set.

syslog_log_facility
Type:string
Default:LOG_USER

Syslog facility to receive log lines. This option is ignored if log_config_append is set.

use_json
Type:boolean
Default:false

Use JSON formatting for logging. This option is ignored if log_config_append is set.

use_stderr
Type:boolean
Default:false

Log output to standard error. This option is ignored if log_config_append is set.

use_eventlog
Type:boolean
Default:false

Log output to Windows Event Log.

logging_context_format_string
Type:string
Default:%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

Format string to use for log messages with context.

logging_default_format_string
Type:string
Default:%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

Format string to use for log messages when context is undefined.

logging_debug_format_suffix
Type:string
Default:%(funcName)s %(pathname)s:%(lineno)d

Additional data to append to log message when logging level for the message is DEBUG.

logging_exception_prefix
Type:string
Default:%(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

Prefix each line of exception output with this format.

logging_user_identity_format
Type:string
Default:%(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

Defines the format string for %(user_identity)s that is used in logging_context_format_string.

default_log_levels
Type:list
Default:amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set.

publish_errors
Type:boolean
Default:false

Enables or disables publication of error events.

instance_format
Type:string
Default:"[instance: %(uuid)s] "

The format for an instance that is passed with the log message.

instance_uuid_format
Type:string
Default:"[instance: %(uuid)s] "

The format for an instance UUID that is passed with the log message.

rate_limit_interval
Type:integer
Default:0

Interval, number of seconds, of log rate limiting.

rate_limit_burst
Type:integer
Default:0

Maximum number of logged messages per rate_limit_interval.

rate_limit_except_level
Type:string
Default:CRITICAL

Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered.

fatal_deprecations
Type:boolean
Default:false

Enables or disables fatal status of deprecations.

internal_service_availability_zone
Type:string
Default:internal

Availability zone for internal services.

This option determines the availability zone for the various internal nova services, such as ‘nova-scheduler’, ‘nova-conductor’, etc.

Possible values:

  • Any string representing an existing availability zone name.
default_availability_zone
Type:string
Default:nova

Default availability zone for compute services.

This option determines the default availability zone for ‘nova-compute’ services, which will be used if the service(s) do not belong to aggregates with availability zone metadata.

Possible values:

  • Any string representing an existing availability zone name.
default_schedule_zone
Type:string
Default:<None>

Default availability zone for instances.

This option determines the default availability zone for instances, which will be used when a user does not specify one when creating an instance. The instance(s) will be bound to this availability zone for their lifetime.

Possible values:

  • Any string representing an existing availability zone name.
  • None, which means that the instance can move from one availability zone to another during its lifetime if it is moved from one compute node to another.
password_length
Type:integer
Default:12
Minimum Value:0

Length of generated instance admin passwords.

instance_usage_audit_period
Type:string
Default:month

Time period to generate instance usages for. It is possible to define optional offset to given period by appending @ character followed by a number defining offset.

Possible values:

  • period, example: hour, day, month or year
  • period with offset, example: month@15 will result in monthly audits starting on 15th day of month.
use_rootwrap_daemon
Type:boolean
Default:false

Start and use a daemon that can run the commands that need to be run with root privileges. This option is usually enabled on nodes that run nova compute processes.

rootwrap_config
Type:string
Default:/etc/nova/rootwrap.conf

Path to the rootwrap configuration file.

Goal of the root wrapper is to allow a service-specific unprivileged user to run a number of actions as the root user in the safest manner possible. The configuration file used here must match the one defined in the sudoers entry.

tempdir
Type:string
Default:<None>

Explicitly specify the temporary working directory.

compute_driver
Type:string
Default:<None>

Defines which driver to use for controlling virtualization.

Possible values:

  • libvirt.LibvirtDriver
  • xenapi.XenAPIDriver
  • fake.FakeDriver
  • ironic.IronicDriver
  • vmwareapi.VMwareVCDriver
  • hyperv.HyperVDriver
  • powervm.PowerVMDriver
  • zvm.ZVMDriver
allow_resize_to_same_host
Type:boolean
Default:false

Allow destination machine to match source for resize. Useful when testing in single-host environments. By default it is not allowed to resize to the same host. Setting this option to true will add the same host to the destination options. Also set to true if you allow the ServerGroupAffinityFilter and need to resize.

non_inheritable_image_properties
Type:list
Default:cache_in_nova,bittorrent,img_signature_hash_method,img_signature,img_signature_key_type,img_signature_certificate_uuid

Image properties that should not be inherited from the instance when taking a snapshot.

This option gives an opportunity to select which image-properties should not be inherited by newly created snapshots.

Possible values:

  • A comma-separated list whose item is an image property. Usually only the image properties that are only needed by base images can be included here, since the snapshots that are created from the base images don’t need them.
  • Default list: cache_in_nova, bittorrent, img_signature_hash_method, img_signature, img_signature_key_type, img_signature_certificate_uuid
max_local_block_devices
Type:integer
Default:3

Maximum number of devices that will result in a local image being created on the hypervisor node.

A negative number means unlimited. Setting max_local_block_devices to 0 means that any request that attempts to create a local disk will fail. This option is meant to limit the number of local discs (so root local disc that is the result of --image being used, and any other ephemeral and swap disks). 0 does not mean that images will be automatically converted to volumes and boot instances from volumes - it just means that all requests that attempt to create a local disk will fail.

Possible values:

  • 0: Creating a local disk is not allowed.
  • Negative number: Allows unlimited number of local discs.
  • Positive number: Allows only that many local discs (the default value is 3).
compute_monitors
Type:list
Default:''

A comma-separated list of monitors that can be used for getting compute metrics. You can use the alias/name from the setuptools entry points for nova.compute.monitors.* namespaces. If no namespace is supplied, the “cpu.” namespace is assumed for backwards-compatibility.

NOTE: Only one monitor per namespace (For example: cpu) can be loaded at a time.

Possible values:

  • An empty list will disable the feature (Default).

  • An example value that would enable both the CPU and NUMA memory bandwidth monitors that use the virt driver variant:

    compute_monitors = cpu.virt_driver, numa_mem_bw.virt_driver

default_ephemeral_format
Type:string
Default:<None>

The default format an ephemeral_volume will be formatted with on creation.

Possible values:

  • ext2
  • ext3
  • ext4
  • xfs
  • ntfs (only for Windows guests)
vif_plugging_is_fatal
Type:boolean
Default:true

Determine if instance should boot or fail on VIF plugging timeout.

Nova sends a port update to Neutron after an instance has been scheduled, providing Neutron with the necessary information to finish setup of the port. Once completed, Neutron notifies Nova that it has finished setting up the port, at which point Nova resumes the boot of the instance since network connectivity is now supposed to be present. A timeout will occur if the reply is not received after a given interval.

This option determines what Nova does when the VIF plugging timeout event happens. When enabled, the instance will error out. When disabled, the instance will continue to boot on the assumption that the port is ready.

Possible values:

  • True: Instances should fail after VIF plugging timeout
  • False: Instances should continue booting after VIF plugging timeout
vif_plugging_timeout
Type:integer
Default:300
Minimum Value:0

Timeout for Neutron VIF plugging event message arrival.

Number of seconds to wait for Neutron vif plugging events to arrive before continuing or failing (see ‘vif_plugging_is_fatal’).

If you are hitting timeout failures at scale, consider running rootwrap in “daemon mode” in the neutron agent via the [agent]/root_helper_daemon neutron configuration option.

Related options:

  • vif_plugging_is_fatal - If vif_plugging_timeout is set to zero and vif_plugging_is_fatal is False, events should not be expected to arrive at all.
injected_network_template
Type:string
Default:$pybasedir/nova/virt/interfaces.template

Path to ‘/etc/network/interfaces’ template.

The path to a template file for the ‘/etc/network/interfaces’-style file, which will be populated by nova and subsequently used by cloudinit. This provides a method to configure network connectivity in environments without a DHCP server.

The template will be rendered using Jinja2 template engine, and receive a top-level key called interfaces. This key will contain a list of dictionaries, one for each interface.

Refer to the cloudinit documentation for more information:

Possible values:

  • A path to a Jinja2-formatted template for a Debian ‘/etc/network/interfaces’ file. This applies even if using a non Debian-derived guest.

Related options:

  • flat_inject: This must be set to True to ensure nova embeds network configuration information in the metadata provided through the config drive.
preallocate_images
Type:string
Default:none
Valid Values:none, space

The image preallocation mode to use.

Image preallocation allows storage for instance images to be allocated up front when the instance is initially provisioned. This ensures immediate feedback is given if enough space isn’t available. In addition, it should significantly improve performance on writes to new blocks and may even improve I/O performance to prewritten blocks due to reduced fragmentation.

Possible values:

  • none: No storage provisioning is done up front.
  • space: Storage is fully allocated at instance start.
use_cow_images
Type:boolean
Default:true

Enable use of copy-on-write (cow) images.

QEMU/KVM allow the use of qcow2 as backing files. By disabling this, backing files will not be used.

force_raw_images
Type:boolean
Default:true

Force conversion of backing images to raw format.

Possible values:

  • True: Backing image files will be converted to raw image format
  • False: Backing image files will not be converted

Related options:

  • compute_driver: Only the libvirt driver uses this option.
virt_mkfs
Type:multi-valued
Default:''

Name of the mkfs commands for ephemeral device.

The format is <os_type>=<mkfs command>

resize_fs_using_block_device
Type:boolean
Default:false

Enable resizing of filesystems via a block device.

If enabled, attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw).

timeout_nbd
Type:integer
Default:10
Minimum Value:0

Amount of time, in seconds, to wait for NBD device start up.

image_cache_subdirectory_name
Type:string
Default:_base

Location of cached images.

This is NOT the full path - just a folder name relative to ‘$instances_path’. For per-compute-host cached images, set to ‘_base_$my_ip’

remove_unused_base_images
Type:boolean
Default:true

Should unused base images be removed?

remove_unused_original_minimum_age_seconds
Type:integer
Default:86400

Unused unresized base images younger than this will not be removed.

pointer_model
Type:string
Default:usbtablet
Valid Values:ps2mouse, usbtablet, <None>

Generic property to specify the pointer type.

Input devices allow interaction with a graphical framebuffer. For example to provide a graphic tablet for absolute cursor movement.

If set, the ‘hw_pointer_model’ image property takes precedence over this configuration option.

Related options:

  • usbtablet must be configured with VNC enabled or SPICE enabled and SPICE agent disabled. When used with libvirt the instance mode should be configured as HVM.

Possible values:

  • ps2mouse: Uses relative movement. Mouse connected by PS2.
  • usbtablet: Uses absolute movement. Tablet connected by USB.
  • <None>: Uses default behavior provided by drivers (mouse on PS2 for libvirt x86).
vcpu_pin_set
Type:string
Default:<None>

Defines which physical CPUs (pCPUs) can be used by instance virtual CPUs (vCPUs).

Possible values:

  • A comma-separated list of physical CPU numbers that virtual CPUs can be allocated to by default. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:

    vcpu_pin_set = "4-12,^8,15"
    
reserved_huge_pages
Type:unknown type
Default:<None>

Number of huge/large memory pages to reserve per NUMA host cell.

Possible values:

  • A list of valid key=value which reflect NUMA node ID, page size (Default unit is KiB) and number of pages to be reserved. For example:

    reserved_huge_pages = node:0,size:2048,count:64
    reserved_huge_pages = node:1,size:1GB,count:1
    

In this example, we reserve 64 pages of 2 MiB on NUMA node 0 and one page of 1 GiB on NUMA node 1.

reserved_host_disk_mb
Type:integer
Default:0
Minimum Value:0

Amount of disk resources in MB to always keep available for the host. The disk usage gets reported back to the scheduler from nova-compute running on the compute nodes. To prevent the disk resources from being considered as available, this option can be used to reserve disk space for that host.

Possible values:

  • Any positive integer representing amount of disk in MB to reserve for the host.
reserved_host_memory_mb
Type:integer
Default:512
Minimum Value:0

Amount of memory in MB to reserve for the host so that it is always available to host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host memory from being considered as available, this option is used to reserve memory for the host.

Possible values:

  • Any positive integer representing amount of memory in MB to reserve for the host.
reserved_host_cpus
Type:integer
Default:0
Minimum Value:0

Number of physical CPUs to reserve for the host. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host CPU from being considered as available, this option is used to reserve random pCPU(s) for the host.

Possible values:

  • Any positive integer representing number of physical CPUs to reserve for the host.
cpu_allocation_ratio
Type:floating point
Default:0.0
Minimum Value:0.0

This option helps you specify virtual CPU to physical CPU allocation ratio.

From Ocata (15.0.0) this is used to influence the hosts selected by the Placement API. Note that when Placement is used, the CoreFilter is redundant, because the Placement API will have already filtered out hosts that would have failed the CoreFilter.

This configuration specifies ratio for CoreFilter which can be set per compute node. For AggregateCoreFilter, it will fall back to this configuration value if no per-aggregate setting is found.

NOTE: This can be set per-compute, or if set to 0.0, the value set on the scheduler node(s) or compute node(s) will be used and defaulted to 16.0. Once set to a non-default value, it is not possible to “unset” the config to get back to the default behavior. If you want to reset back to the default, explicitly specify 16.0.

NOTE: As of the 16.0.0 Pike release, this configuration option is ignored for the ironic.IronicDriver compute driver and is hardcoded to 1.0.

Possible values:

  • Any valid positive integer or float value
ram_allocation_ratio
Type:floating point
Default:0.0
Minimum Value:0.0

This option helps you specify virtual RAM to physical RAM allocation ratio.

From Ocata (15.0.0) this is used to influence the hosts selected by the Placement API. Note that when Placement is used, the RamFilter is redundant, because the Placement API will have already filtered out hosts that would have failed the RamFilter.

This configuration specifies the ratio for RamFilter, which can be set per compute node. For AggregateRamFilter, it will fall back to this configuration value if no per-aggregate setting is found.

NOTE: This can be set per-compute, or if set to 0.0, the value set on the scheduler node(s) or compute node(s) will be used and defaulted to 1.5. Once set to a non-default value, it is not possible to “unset” the config to get back to the default behavior. If you want to reset back to the default, explicitly specify 1.5.

NOTE: As of the 16.0.0 Pike release, this configuration option is ignored for the ironic.IronicDriver compute driver and is hardcoded to 1.0.

Possible values:

  • Any valid positive integer or float value
disk_allocation_ratio
Type:floating point
Default:0.0
Minimum Value:0.0

This option helps you specify virtual disk to physical disk allocation ratio.

From Ocata (15.0.0) this is used to influence the hosts selected by the Placement API. Note that when Placement is used, the DiskFilter is redundant, because the Placement API will have already filtered out hosts that would have failed the DiskFilter.

A ratio greater than 1.0 will result in over-subscription of the available physical disk, which can be useful for more efficiently packing instances created with images that do not use the entire virtual disk, such as sparse or compressed images. It can be set to a value between 0.0 and 1.0 in order to preserve a percentage of the disk for uses other than instances.

NOTE: This can be set per-compute, or if set to 0.0, the value set on the scheduler node(s) or compute node(s) will be used and defaulted to 1.0. Once set to a non-default value, it is not possible to “unset” the config to get back to the default behavior. If you want to reset back to the default, explicitly specify 1.0.

NOTE: As of the 16.0.0 Pike release, this configuration option is ignored for the ironic.IronicDriver compute driver and is hardcoded to 1.0.

Possible values:

  • Any valid positive integer or float value
console_host
Type:string
Default:<current_hostname>

This option has a sample default set, which means that its actual default value may vary from the one documented above.

Console proxy host to be used to connect to instances on this host. It is the publicly visible name for the console host.

Possible values:

  • Current hostname (default) or any string representing hostname.
default_access_ip_network_name
Type:string
Default:<None>

Name of the network to be used to set access IPs for instances. If there are multiple IPs to choose from, an arbitrary one will be chosen.

Possible values:

  • None (default)
  • Any string representing network name.
defer_iptables_apply
Type:boolean
Default:false

Whether to batch up the application of IPTables rules during a host restart and apply all at the end of the init phase.

instances_path
Type:string
Default:$state_path/instances

This option has a sample default set, which means that its actual default value may vary from the one documented above.

Specifies where instances are stored on the hypervisor’s disk. It can point to locally attached storage or a directory on NFS.

Possible values:

  • $state_path/instances where state_path is a config option that specifies the top-level directory for maintaining nova’s state. (default) or Any string representing directory path.
instance_usage_audit
Type:boolean
Default:false

This option enables periodic compute.instance.exists notifications. Each compute node must be configured to generate system usage data. These notifications are consumed by OpenStack Telemetry service.

live_migration_retry_count
Type:integer
Default:30
Minimum Value:0

Maximum number of 1-second retries in live_migration. It specifies the number of retries against iptables when it complains. This happens when a user continuously sends live-migration requests to the same host, leading to concurrent requests to iptables.

Possible values:

  • Any positive integer representing retry count.
resume_guests_state_on_host_boot
Type:boolean
Default:false

This option specifies whether to start guests that were running before the host rebooted. It ensures that all of the instances on a Nova compute node resume their state each time the compute node boots or restarts.

network_allocate_retries
Type:integer
Default:0
Minimum Value:0

Number of times to retry network allocation. It is required to attempt network allocation retries if the virtual interface plug fails.

Possible values:

  • Any positive integer representing retry count.
max_concurrent_builds
Type:integer
Default:10
Minimum Value:0

Limits the maximum number of instance builds to run concurrently by nova-compute. The compute service can attempt to build an infinite number of instances if asked to do so. This limit is enforced to avoid building unlimited instances concurrently on a compute node. This value can be set per compute node.

Possible Values:

  • 0 : treated as unlimited.
  • Any positive integer representing maximum concurrent builds.
max_concurrent_live_migrations
Type:integer
Default:1

Maximum number of live migrations to run concurrently. This limit is enforced to avoid outbound live migrations overwhelming the host/network and causing failures. It is not recommended that you change this unless you are very sure that doing so is safe and stable in your environment.

Possible values:

  • 0 : treated as unlimited.
  • Negative value defaults to 0.
  • Any positive integer representing maximum number of live migrations to run concurrently.
block_device_allocate_retries
Type:integer
Default:60

Number of times to retry block device allocation on failures. Starting with Liberty, Cinder can use image volume cache. This may help with block device allocation performance. Look at the cinder image_volume_cache_enabled configuration option.

Possible values:

  • 60 (default)
  • If value is 0, then one attempt is made.
  • Any negative value is treated as 0.
  • For any value > 0, total attempts are (value + 1)
sync_power_state_pool_size
Type:integer
Default:1000

Number of greenthreads available for use to sync power states.

This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons, for example, with Ironic.

Possible values:

  • Any positive integer representing greenthreads count.
image_cache_manager_interval
Type:integer
Default:2400
Minimum Value:-1

Number of seconds to wait between runs of the image cache manager.

Possible values:

  • 0: Run at the default rate.
  • -1: Disable.
  • Any other value: the interval, in seconds, between runs.

bandwidth_poll_interval
Type:integer
Default:600

Interval to pull network bandwidth usage info.

Not supported on all hypervisors. If a hypervisor doesn’t support bandwidth usage, it will not get the info in the usage events.

Possible values:

  • 0: Will run at the default periodic interval.
  • Any value < 0: Disables the option.
  • Any positive integer in seconds.
sync_power_state_interval
Type:integer
Default:600

Interval to sync power states between the database and the hypervisor.

The interval that Nova checks the actual virtual machine power state and the power state that Nova has in its database. If a user powers down their VM, Nova updates the API to report the VM has been powered down. Should something turn on the VM unexpectedly, Nova will turn the VM back off to keep the system in the expected state.

Possible values:

  • 0: Will run at the default periodic interval.
  • Any value < 0: Disables the option.
  • Any positive integer in seconds.

Related options:

  • If handle_virt_lifecycle_events in workarounds_group is false and this option is negative, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.
heal_instance_info_cache_interval
Type:integer
Default:60

Interval between instance network information cache updates.

Number of seconds after which each compute node runs the task of querying Neutron for all of its instances networking information, then updates the Nova db with that information. Nova will never update its cache if this option is set to 0. If the cache is not updated, the metadata service and nova-api endpoints will be proxying incorrect network data about the instance. So, it is not recommended to set this option to 0.

Possible values:

  • Any positive integer in seconds.
  • Any value <=0 will disable the sync. This is not recommended.
reclaim_instance_interval
Type:integer
Default:0

Interval for reclaiming deleted instances.

A value greater than 0 will enable SOFT_DELETE of instances. This option decides whether the server to be deleted will be put into the SOFT_DELETED state. If this value is greater than 0, the deleted server will not be deleted immediately, instead it will be put into a queue until it’s too old (deleted time greater than the value of reclaim_instance_interval). The server can be recovered from the delete queue by using the restore action. If the deleted server remains longer than the value of reclaim_instance_interval, it will be deleted by a periodic task in the compute service automatically.

Note that this option is read from both the API and compute nodes, and must be set globally otherwise servers could be put into a soft deleted state in the API and never actually reclaimed (deleted) on the compute node.

Possible values:

  • Any positive integer(in seconds) greater than 0 will enable this option.
  • Any value <=0 will disable the option.
volume_usage_poll_interval
Type:integer
Default:0

Interval for gathering volume usages.

This option updates the volume usage cache for every volume_usage_poll_interval number of seconds.

Possible values:

  • Any positive integer(in seconds) greater than 0 will enable this option.
  • Any value <=0 will disable the option.
shelved_poll_interval
Type:integer
Default:3600

Interval for polling shelved instances to offload.

The periodic task runs for every shelved_poll_interval number of seconds and checks if there are any shelved instances. If it finds a shelved instance, based on the ‘shelved_offload_time’ config value it offloads the shelved instances. Check ‘shelved_offload_time’ config option description for details.

Possible values:

  • Any value <= 0: Disables the option.
  • Any positive integer in seconds.

Related options:

  • shelved_offload_time
shelved_offload_time
Type:integer
Default:0

Time before a shelved instance is eligible for removal from a host.

By default this option is set to 0 and the shelved instance will be removed from the hypervisor immediately after the shelve operation. Otherwise, the instance will be kept for the value of shelved_offload_time (in seconds), so that during that time period the unshelve action will be faster; the periodic task will then remove the instance from the hypervisor after shelved_offload_time passes.

Possible values:

  • 0: Instance will be immediately offloaded after being shelved.
  • Any value < 0: An instance will never be offloaded.
  • Any positive integer in seconds: The instance will exist for the specified number of seconds before being offloaded.
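
As an illustrative nova.conf sketch (values are examples only), the two shelve options above can be combined to keep shelved instances around for fast unshelving:

[DEFAULT]
# Keep shelved instances on the hypervisor for 1 hour before offloading.
shelved_offload_time = 3600
# Check for offload-eligible shelved instances every 10 minutes.
shelved_poll_interval = 600
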
instance_delete_interval
Type:integer
Default:300

Interval for retrying failed instance file deletes.

This option depends on ‘maximum_instance_delete_attempts’. This option specifies how often to retry deletes whereas ‘maximum_instance_delete_attempts’ specifies the maximum number of retry attempts that can be made.

Possible values:

  • 0: Will run at the default periodic interval.
  • Any value < 0: Disables the option.
  • Any positive integer in seconds.

Related options:

  • maximum_instance_delete_attempts from instance_cleaning_opts group.
block_device_allocate_retries_interval
Type:integer
Default:3
Minimum Value:0

Interval (in seconds) between block device allocation retries on failures.

This option allows the user to specify the time interval between consecutive retries. The ‘block_device_allocate_retries’ option specifies the maximum number of retries.

Possible values:

  • 0: Disables the option.
  • Any positive integer in seconds enables the option.

Related options:

  • block_device_allocate_retries in compute_manager_opts group.
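
A hedged nova.conf sketch (values are illustrative) combining the retry count and interval, for example when volume creation is slow on the backing storage:

[DEFAULT]
# Retry block device allocation up to 60 times, 3 seconds apart.
block_device_allocate_retries = 60
block_device_allocate_retries_interval = 3
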
scheduler_instance_sync_interval
Type:integer
Default:120

Interval between sending the scheduler a list of current instance UUIDs to verify that its view of instances is in sync with nova.

If the CONF option ‘scheduler_tracks_instance_changes’ is False, the sync calls will not be made. So, changing this option will have no effect.

If the out of sync situations are not very common, this interval can be increased to lower the number of RPC messages being sent. Likewise, if sync issues turn out to be a problem, the interval can be lowered to check more frequently.

Possible values:

  • 0: Will run at the default periodic interval.
  • Any value < 0: Disables the option.
  • Any positive integer in seconds.

Related options:

  • This option has no impact if scheduler_tracks_instance_changes is set to False.
update_resources_interval
Type:integer
Default:0

Interval for updating compute resources.

This option specifies how often the update_available_resources periodic task should run. A number less than 0 means to disable the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds.

Possible values:

  • 0: Will run at the default periodic interval.
  • Any value < 0: Disables the option.
  • Any positive integer in seconds.
reboot_timeout
Type:integer
Default:0
Minimum Value:0

Time interval after which an instance is hard rebooted automatically.

When doing a soft reboot, it is possible that a guest kernel is completely hung in a way that causes the soft reboot task to not ever finish. Setting this option to a time period in seconds will automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds.

Possible values:

  • 0: Disables the option (default).
  • Any positive integer in seconds: Enables the option.
instance_build_timeout
Type:integer
Default:0
Minimum Value:0

Maximum time in seconds that an instance can take to build.

If this timer expires, the instance status will be changed to ERROR. Enabling this option ensures an instance will not be stuck in the BUILD state indefinitely.

Possible values:

  • 0: Disables the option (default)
  • Any positive integer in seconds: Enables the option.
rescue_timeout
Type:integer
Default:0
Minimum Value:0

Interval to wait before un-rescuing an instance stuck in RESCUE.

Possible values:

  • 0: Disables the option (default)
  • Any positive integer in seconds: Enables the option.
resize_confirm_window
Type:integer
Default:0
Minimum Value:0

Automatically confirm resizes after N seconds.

Resize functionality will save the existing server before resizing. After the resize completes, the user is asked to confirm the resize. The user has the opportunity to either confirm or revert all changes. Confirming the resize removes the original server and changes the server status from resized to active. Setting this option to a time period (in seconds) will automatically confirm the resize if the server has been in the resized state longer than that time.

Possible values:

  • 0: Disables the option (default)
  • Any positive integer in seconds: Enables the option.
shutdown_timeout
Type:integer
Default:60
Minimum Value:0

Total time to wait in seconds for an instance to perform a clean shutdown.

It determines the overall period (in seconds) a VM is allowed to perform a clean shutdown. While performing stop, rescue, shelve, and rebuild operations, configuring this option gives the VM a chance to perform a controlled shutdown before the instance is powered off. The default timeout is 60 seconds. A value of 0 (zero) means the guest will be powered off immediately with no opportunity for guest OS clean-up.

The timeout value can be overridden on a per image basis by means of os_shutdown_timeout, an image metadata setting that allows different types of operating systems to specify how much time they need to shut down cleanly.

Possible values:

  • A positive integer or 0 (default value is 60).
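
For instance (values are illustrative), the global timeout can be raised in nova.conf:

[DEFAULT]
# Give guests up to 2 minutes to shut down cleanly.
shutdown_timeout = 120

and then overridden for a specific image via the os_shutdown_timeout image property (assuming the openstack client is available; <image-id> is a placeholder):

openstack image set --property os_shutdown_timeout=300 <image-id>
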
running_deleted_instance_action
Type:string
Default:reap
Valid Values:reap, log, shutdown, noop

The compute service periodically checks for instances that have been deleted in the database but remain running on the compute node. This option determines the action to be taken when such instances are identified.

Related options:

  • running_deleted_instance_poll_interval
  • running_deleted_instance_timeout

Possible values

reap
Powers down the instances and deletes them
log
Logs warning message about deletion of the resource
shutdown
Powers down instances and marks them as non-bootable which can be later used for debugging/analysis
noop
Takes no action
running_deleted_instance_poll_interval
Type:integer
Default:1800

Time interval in seconds to wait between runs of the cleanup action. If set to 0, the above check will be disabled. If “running_deleted_instance_action” is set to “log” or “reap”, a value greater than 0 must be set.

Possible values:

  • Any positive integer in seconds enables the option.
  • 0: Disables the option.
  • 1800: Default value.

Related options:

  • running_deleted_instance_action
running_deleted_instance_timeout
Type:integer
Default:0

Time interval in seconds to wait for the instances that have been marked as deleted in database to be eligible for cleanup.

Possible values:

  • Any positive integer in seconds (default is 0).

Related options:

  • running_deleted_instance_action
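
Putting the three options above together, a hedged nova.conf sketch (values are illustrative):

[DEFAULT]
# Reap instances that are still running 5 minutes after being deleted
# in the database, checking every 30 minutes.
running_deleted_instance_action = reap
running_deleted_instance_poll_interval = 1800
running_deleted_instance_timeout = 300
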
maximum_instance_delete_attempts
Type:integer
Default:5

The number of times to attempt to reap an instance’s files.

This option specifies the maximum number of retry attempts that can be made.

Possible values:

  • Any positive integer defines how many attempts are made.
  • Any value <=0 means no delete attempts occur, but you should use instance_delete_interval to disable the delete attempts.

Related options:

  • [DEFAULT] instance_delete_interval can be used to disable this option.
osapi_compute_unique_server_name_scope
Type:string
Default:''
Valid Values:‘’, project, global

Sets the scope of the check for unique instance names.

The default doesn’t check for unique names. If a scope for the name check is set, a launch of a new instance or an update of an existing instance with a duplicate name will result in an ‘InstanceExists’ error. The uniqueness is case-insensitive. Setting this option can increase the usability for end users as they don’t have to distinguish among instances with the same name by their IDs.

Possible values

‘’
An empty value means that no uniqueness check is done and duplicate names are possible
project
The instance name check is done only for instances within the same project
global
The instance name check is done for all instances regardless of the project
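
For example, a minimal nova.conf sketch enforcing per-project name uniqueness:

[DEFAULT]
# Duplicate instance names within a project raise InstanceExists.
osapi_compute_unique_server_name_scope = project
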
enable_new_services
Type:boolean
Default:true

Enable new nova-compute services on this host automatically.

When a new nova-compute service starts up, it gets registered in the database as an enabled service. Sometimes it can be useful to register new compute services in a disabled state and then enable them at a later point in time. This option only sets this behavior for nova-compute services; it does not auto-disable other services like nova-conductor, nova-scheduler, nova-consoleauth, or nova-osapi_compute.

Possible values:

  • True: Each new compute service is enabled as soon as it registers itself.
  • False: Compute services must be enabled via an os-services REST API call or with the CLI with nova service-enable <hostname> <binary>, otherwise they are not ready to use.
instance_name_template
Type:string
Default:instance-%08x

Template string to be used to generate instance names.

This template controls the creation of the database name of an instance. This is not the display name you enter when creating an instance (via Horizon or CLI). For a new deployment it is advisable to change the default value (which uses the database autoincrement) to another value which makes use of the attributes of an instance, like instance-%(uuid)s. If you already have instances in your deployment when you change this, your deployment will break.

Possible values:

  • A string which either uses the instance database ID (like the default)
  • A string with a list of named database columns, for example %(id)d or %(uuid)s or %(hostname)s.
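
For a new deployment, a minimal nova.conf sketch using the UUID-based template mentioned above:

[DEFAULT]
# Use the instance UUID instead of the database autoincrement ID.
instance_name_template = instance-%(uuid)s
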
migrate_max_retries
Type:integer
Default:-1
Minimum Value:-1

Number of times to retry live-migration before failing.

Possible values:

  • If == -1, try until out of hosts (default)
  • If == 0, only try once, no retries
  • Integer greater than 0
config_drive_format
Type:string
Default:iso9660
Valid Values:iso9660, vfat

Configuration drive format

Configuration drive format that will contain metadata attached to the instance when it boots.

Related options:

  • This option is meaningful when one of the following alternatives occurs:
    1. the force_config_drive option is set to true
    2. the REST API call to create the instance contains an enable flag for the config drive option
    3. the image used to create the instance requires a config drive; this is defined by the img_config_drive property for that image.
  • A compute node running Hyper-V hypervisor can be configured to attach configuration drive as a CD drive. To attach the configuration drive as a CD drive, set the [hyperv] config_drive_cdrom option to true.

Possible values

iso9660
A file system image standard that is widely supported across operating systems.
vfat
Provided for legacy reasons and to enable live migration with the libvirt driver and non-shared storage
force_config_drive
Type:boolean
Default:false

Force injection to take place on a config drive

When this option is set to true, configuration drive functionality will be forcibly enabled by default; otherwise, users can still enable configuration drives via the REST API or image metadata properties.

Possible values:

  • True: Force the use of a configuration drive regardless of the user’s input in the REST API call.
  • False: Do not force the use of a configuration drive. Config drives can still be enabled via the REST API or image metadata properties.

Related options:

  • Use the ‘mkisofs_cmd’ flag to set the path where you install the genisoimage program. If genisoimage is in same path as the nova-compute service, you do not need to set this flag.
  • To use configuration drive with Hyper-V, you must set the ‘mkisofs_cmd’ value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to a qemu-img command installation.
mkisofs_cmd
Type:string
Default:genisoimage

Name or path of the tool used for ISO image creation

Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is on the system path, you do not need to change the default value.

To use configuration drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to a qemu-img command installation.

Possible values:

  • Name of the ISO image creator program, in case it is in the same directory as the nova-compute service
  • Path to ISO image creator program

Related options:

  • This option is meaningful when config drives are enabled.
  • To use configuration drive with Hyper-V, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to a qemu-img command installation, as shown in the sketch below.
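
Putting the config drive options together for a Hyper-V compute node, a hedged nova.conf sketch (the Windows paths are hypothetical examples, not shipped defaults):

[DEFAULT]
config_drive_format = iso9660
force_config_drive = true
# Full path to an mkisofs.exe installation (hypothetical path).
mkisofs_cmd = C:\Program Files\mkisofs\mkisofs.exe

[hyperv]
# Attach the config drive as a CD drive.
config_drive_cdrom = true
# Full path to a qemu-img installation (hypothetical path).
qemu_img_cmd = C:\Program Files\qemu-img\qemu-img.exe
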
default_flavor
Type:string
Default:m1.small

Default flavor to use for the EC2 API only. The Nova API does not support a default flavor.

Warning

This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.

Reason:The EC2 API is deprecated.
my_ip
Type:string
Default:<host_ipv4>

This option has a sample default set, which means that its actual default value may vary from the one documented above.

The IP address which the host is using to connect to the management network.

Possible values:

  • String with valid IP address. Default is IPv4 address of this host.

Related options:

  • metadata_host
  • my_block_storage_ip
  • routing_source_ip
  • vpn_ip
my_block_storage_ip
Type:string
Default:$my_ip

The IP address which is used to connect to the block storage network.

Possible values:

  • String with valid IP address. Default is IP address of this host.

Related options:

  • my_ip - if my_block_storage_ip is not set, then my_ip value is used.
host
Type:string
Default:<current_hostname>

This option has a sample default set, which means that its actual default value may vary from the one documented above.

Hostname, FQDN or IP address of this host.

Used as:

  • the oslo.messaging queue name for nova-compute worker
  • the binding_host value sent to Neutron; this means that if you use a Neutron agent, it should have the same value for host.
  • cinder host attachment information

Must be valid within AMQP key.

Possible values:

  • String with hostname, FQDN or IP address. Default is hostname of this host.
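
An illustrative nova.conf sketch (addresses and hostname are examples only):

[DEFAULT]
# Management network address of this node.
my_ip = 10.0.0.10
# Dedicated block storage network address; defaults to $my_ip if unset.
my_block_storage_ip = 10.0.1.10
# Should match the host value used by the Neutron agent on this node.
host = compute-01
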
dhcpbridge_flagfile
Type:multi-valued
Default:/etc/nova/nova-dhcpbridge.conf

This option is a list of full paths to one or more configuration files for dhcpbridge. In most cases the default path of ‘/etc/nova/nova-dhcpbridge.conf’ should be sufficient, but if you have special needs for configuring dhcpbridge, you can change or add to this list.

Possible values

  • A list of strings, where each string is the full path to a dhcpbridge configuration file.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
networks_path
Type:string
Default:$state_path/networks

The location where the network configuration files will be kept. The default is the ‘networks’ directory under the location where nova’s Python module is installed.

Possible values

  • A string containing the full path to the desired configuration directory

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
public_interface
Type:string
Default:eth0

This is the name of the network interface for public IP addresses. The default is ‘eth0’.

Possible values:

  • Any string representing a network interface name

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
dhcpbridge
Type:string
Default:$bindir/nova-dhcpbridge

The location of the binary nova-dhcpbridge. By default it is the binary named ‘nova-dhcpbridge’ that is installed with all the other nova binaries.

Possible values:

  • Any string representing the full path to the binary for dhcpbridge

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
routing_source_ip
Type:string
Default:$my_ip

The public IP address of the network host.

This is used when creating an SNAT rule.

Possible values:

  • Any valid IP address

Related options:

  • force_snat_range

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
dhcp_lease_time
Type:integer
Default:86400
Minimum Value:1

The lifetime of a DHCP lease, in seconds. The default is 86400 (one day).

Possible values:

  • Any positive integer value.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
dns_server
Type:multi-valued
Default:''

Despite the singular form of the name of this option, it is actually a list of zero or more server addresses that dnsmasq will use for DNS nameservers. If this is not empty, dnsmasq will not read /etc/resolv.conf, but will only use the servers specified in this option. If the option use_network_dns_servers is True, the dns1 and dns2 servers from the network will be appended to this list, and will be used as DNS servers, too.

Possible values:

  • A list of strings, where each string is either an IP address or a FQDN.

Related options:

  • use_network_dns_servers

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
use_network_dns_servers
Type:boolean
Default:false

When this option is set to True, the dns1 and dns2 servers for the network specified by the user on boot will be used for DNS, as well as any specified in the dns_server option.

Related options:

  • dns_server

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
dmz_cidr
Type:list
Default:''

This option is a list of zero or more IP address ranges in your network’s DMZ that should be accepted.

Possible values:

  • A list of strings, each of which should be a valid CIDR.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
force_snat_range
Type:multi-valued
Default:''

This is a list of zero or more IP ranges that traffic from the routing_source_ip will be SNATted to. If the list is empty, then no SNAT rules are created.

Possible values:

  • A list of strings, each of which should be a valid CIDR.

Related options:

  • routing_source_ip

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
dnsmasq_config_file
Type:string
Default:''

The path to the custom dnsmasq configuration file, if any.

Possible values:

  • The full path to the configuration file, or an empty string if there is no custom dnsmasq configuration file.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
linuxnet_interface_driver
Type:string
Default:nova.network.linux_net.LinuxBridgeInterfaceDriver

This is the class used as the ethernet device driver for linuxnet bridge operations. The default value should be all you need for most cases, but if you wish to use a customized class, set this option to the full dot-separated import path for that class.

Possible values:

  • Any string representing a dot-separated class path that Nova can import.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
linuxnet_ovs_integration_bridge
Type:string
Default:br-int

The name of the Open vSwitch bridge that is used with linuxnet when connecting with Open vSwitch.

Possible values:

  • Any string representing a valid bridge name.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
send_arp_for_ha
Type:boolean
Default:false

When True, when a device starts up, and upon binding floating IP addresses, arp messages will be sent to ensure that the arp caches on the compute hosts are up-to-date.

Related options:

  • send_arp_for_ha_count
send_arp_for_ha_count
Type:integer
Default:3

When arp messages are configured to be sent, they will be sent with the count set to the value of this option. Of course, if this is set to zero, no arp messages will be sent.

Possible values:

  • Any integer greater than or equal to 0

Related options:

  • send_arp_for_ha
use_single_default_gateway
Type:boolean
Default:false

When set to True, only the first NIC of a VM will get its default gateway from the DHCP server.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
forward_bridge_interface
Type:multi-valued
Default:all

One or more interfaces that bridges can forward traffic to. If any of the items in this list is the special keyword ‘all’, then all traffic will be forwarded.

Possible values:

  • A list of zero or more interface names, or the word ‘all’.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
metadata_host
Type:string
Default:$my_ip

This option determines the IP address for the network metadata API server.

This is really the client side of the metadata host equation, allowing nova-network to find the metadata server when using default multi-host networking.

Possible values:

  • Any valid IP address. The default is the address of the Nova API server.

Related options:

  • metadata_port
metadata_port
Type:port number
Default:8775
Minimum Value:0
Maximum Value:65535

This option determines the port used for the metadata API server.

Related options:

  • metadata_host

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
iptables_top_regex
Type:string
Default:''

This expression, if defined, will select any matching iptables rules and place them at the top when applying metadata changes to the rules.

Possible values:

  • Any string representing a valid regular expression, or an empty string

Related options:

  • iptables_bottom_regex

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
iptables_bottom_regex
Type:string
Default:''

This expression, if defined, will select any matching iptables rules and place them at the bottom when applying metadata changes to the rules.

Possible values:

  • Any string representing a valid regular expression, or an empty string

Related options:

  • iptables_top_regex

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
iptables_drop_action
Type:string
Default:DROP

By default, packets that do not pass the firewall are DROPped. In many cases, though, an operator may find it more useful to change this from DROP to REJECT, so that the user issuing those packets may have a better idea as to what’s going on, or LOGDROP in order to record the blocked traffic before DROPping.

Possible values:

  • A string representing an iptables chain. The default is DROP.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ovs_vsctl_timeout
Type:integer
Default:120
Minimum Value:0

This option represents the period of time, in seconds, that the ovs_vsctl calls will wait for a response from the database before timing out. A setting of 0 means that the utility should wait forever for a response.

Possible values:

  • Any positive integer if a limited timeout is desired, or zero if the calls should wait forever for a response.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
fake_network
Type:boolean
Default:false

This option is used mainly in testing to avoid calls to the underlying network utilities.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ebtables_exec_attempts
Type:integer
Default:3
Minimum Value:1

This option determines the number of times to retry ebtables commands before giving up. The minimum number of retries is 1.

Possible values:

  • Any positive integer

Related options:

  • ebtables_retry_interval

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ebtables_retry_interval
Type:floating point
Default:1.0

This option determines the time, in seconds, that the system will sleep in between ebtables retries. Note that each successive retry waits a multiple of this value, so for example, if this is set to the default of 1.0 seconds, and ebtables_exec_attempts is 4, after the first failure, the system will sleep for 1 * 1.0 seconds, after the second failure it will sleep 2 * 1.0 seconds, and after the third failure it will sleep 3 * 1.0 seconds.

Possible values:

  • Any non-negative float or integer. Setting this to zero will result in no waiting between attempts.

Related options:

  • ebtables_exec_attempts

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
use_neutron
Type:boolean
Default:true

Enable neutron as the backend for networking.

Determine whether to use Neutron or Nova Network as the back end. Set to true to use neutron.

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
flat_injected
Type:boolean
Default:false

This option determines whether the network setup information is injected into the VM before it is booted. While it was originally designed to be used only by nova-network, it is also used by the vmware and xenapi virt drivers to control whether network information is injected into a VM. The libvirt virt driver also uses it, when config_drive is used to configure the network, to control whether network information is injected into a VM.

flat_network_bridge
Type:string
Default:<None>

This option determines the bridge used for simple network interfaces when no bridge is specified in the VM creation request.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any string representing a valid network bridge, such as ‘br100’

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
flat_network_dns
Type:string
Default:8.8.4.4

This is the address of the DNS server for a simple network. If this option is not specified, the default of ‘8.8.4.4’ is used.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any valid IP address.

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
flat_interface
Type:string
Default:<None>

This option is the name of the virtual interface of the VM on which the bridge will be built. While it was originally designed to be used only by nova-network, it is also used by libvirt for the bridge interface name.

Possible values:

  • Any valid virtual interface name, such as ‘eth0’

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
vlan_start
Type:integer
Default:100
Minimum Value:1
Maximum Value:4094

This is the VLAN number used for private networks. Note that when creating the networks, if the specified number has already been assigned, nova-network will increment this number until it finds an available VLAN.

Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’.

Possible values:

  • Any integer between 1 and 4094. Values outside of that range will raise a ValueError exception.

Related options:

  • network_manager
  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
vlan_interface
Type:string
Default:<None>

This option is the name of the virtual interface of the VM on which the VLAN bridge will be built. While it was originally designed to be used only by nova-network, it is also used by libvirt and xenapi for the bridge interface name.

Please note that this setting will be ignored in nova-network if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’.

Possible values:

  • Any valid virtual interface name, such as ‘eth0’

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options. While this option has an effect when using neutron, it incorrectly overrides the value provided by neutron and should therefore not be used.
num_networks
Type:integer
Default:1
Minimum Value:1

This option represents the number of networks to create if not explicitly specified when the network is created. The only time this is used is if a CIDR is specified, but an explicit network_size is not. In that case, the subnets are created by dividing the IP address space of the CIDR by num_networks. The resulting subnet sizes cannot be larger than the configuration option network_size; in that event, they are reduced to network_size, and a warning is logged.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any positive integer is technically valid, although there are practical limits based upon available IP address space and virtual interfaces.

Related options:

  • use_neutron
  • network_size

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
vpn_ip
Type:string
Default:$my_ip

This option is no longer used since the /os-cloudpipe API was removed in the 16.0.0 Pike release. This is the public IP address for the cloudpipe VPN servers. It defaults to the IP address of the host.

Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’.

Possible values:

  • Any valid IP address. The default is $my_ip, the IP address of the host.

Related options:

  • network_manager
  • use_neutron
  • vpn_start

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
vpn_start
Type:port number
Default:1000
Minimum Value:0
Maximum Value:65535

This is the port number to use as the first VPN port for private networks.

Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’, or if you specify a value for the ‘vpn_start’ parameter when creating a network.

Possible values:

  • Any integer representing a valid port number. The default is 1000.

Related options:

  • use_neutron
  • vpn_ip
  • network_manager

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
network_size
Type:integer
Default:256
Minimum Value:1

This option determines the number of addresses in each private subnet.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any positive integer that is less than or equal to the available network size. Note that if you are creating multiple networks, they must all fit in the available IP address space. The default is 256.

Related options:

  • use_neutron
  • num_networks

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
fixed_range_v6
Type:string
Default:fd00::/48

This option determines the fixed IPv6 address block when creating a network.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any valid IPv6 CIDR

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
gateway
Type:string
Default:<None>

This is the default IPv4 gateway. It is used only in the testing suite.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any valid IP address.

Related options:

  • use_neutron
  • gateway_v6

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
gateway_v6
Type:string
Default:<None>

This is the default IPv6 gateway. It is used only in the testing suite.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any valid IP address.

Related options:

  • use_neutron
  • gateway

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
cnt_vpn_clients
Type:integer
Default:0
Minimum Value:0

This option represents the number of IP addresses to reserve at the top of the address range for VPN clients. It also will be ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’.

Possible values:

  • Any integer, 0 or greater.

Related options:

  • use_neutron
  • network_manager

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
fixed_ip_disassociate_timeout
Type:integer
Default:600
Minimum Value:0

This is the number of seconds to wait before disassociating a deallocated fixed IP address. This is only used with the nova-network service, and has no effect when using neutron for networking.

Possible values:

  • Any integer, zero or greater.

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
create_unique_mac_address_attempts
Type:integer
Default:5
Minimum Value:1

This option determines how many times nova-network will attempt to create a unique MAC address before giving up and raising a VirtualInterfaceMacAddressException error.

Possible values:

  • Any positive integer. The default is 5.

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
teardown_unused_network_gateway
Type:boolean
Default:false

Determines whether unused gateway devices, both VLAN and bridge, are deleted if the network is in nova-network VLAN mode and is multi-hosted.

Related options:

  • use_neutron
  • vpn_ip
  • fake_network

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
force_dhcp_release
Type:boolean
Default:true

When this option is True, a call is made to release the DHCP lease for the instance when that instance is terminated.

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
update_dns_entries
Type:boolean
Default:false

When this option is True, whenever a DNS entry must be updated, a fanout cast message is sent to all network hosts to update their DNS entries in multi-host mode.

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
dns_update_periodic_interval
Type:integer
Default:-1
Minimum Value:-1

This option determines the time, in seconds, to wait between refreshing DNS entries for the network.

Possible values:

  • A positive integer
  • -1 to disable updates

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
dhcp_domain
Type:string
Default:novalocal

This option allows you to specify the domain for the DHCP server.

Possible values:

  • Any string that is a valid domain name.

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
l3_lib
Type:string
Default:nova.network.l3.LinuxNetL3

This option allows you to specify the L3 management library to be used.

Possible values:

  • Any dot-separated string that represents the import path to an L3 networking library.

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
share_dhcp_address
Type:boolean
Default:false

THIS VALUE SHOULD BE SET WHEN CREATING THE NETWORK.

If True in multi_host mode, all compute hosts share the same dhcp address. The same IP address used for DHCP will be added on each nova-network node, and will only be visible to the VMs on the same host.

The use of this configuration has been deprecated and may be removed in any release after Mitaka. It is recommended that instead of relying on this option, an explicit value should be passed to ‘create_networks()’ as a keyword argument with the name ‘share_address’.

Warning

This option is deprecated for removal since 2014.2. Its value may be silently ignored in the future.

ldap_dns_url
Type:URI
Default:ldap://ldap.example.com:389

URL for LDAP server which will store DNS entries

Possible values:

  • A valid LDAP URL representing the server

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ldap_dns_user
Type:string
Default:uid=admin,ou=people,dc=example,dc=org

Bind user for LDAP server

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ldap_dns_password
Type:string
Default:password

Bind user’s password for LDAP server

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ldap_dns_soa_hostmaster
Type:string
Default:hostmaster@example.org

Hostmaster for LDAP DNS driver Start of Authority

Possible values:

  • Any valid string representing LDAP DNS hostmaster.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ldap_dns_servers
Type:multi-valued
Default:dns.example.org

DNS Servers for LDAP DNS driver

Possible values:

  • A valid URL representing a DNS server

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ldap_dns_base_dn
Type:string
Default:ou=hosts,dc=example,dc=org

Base distinguished name for the LDAP search query

This option helps to decide where to look up the host in LDAP.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ldap_dns_soa_refresh
Type:integer
Default:1800

Refresh interval (in seconds) for LDAP DNS driver Start of Authority

Time interval that a secondary/slave DNS server waits before requesting the primary DNS server’s current SOA record. If the records differ, the secondary DNS server will request a zone transfer from the primary.

NOTE: Lower values will cause more traffic.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ldap_dns_soa_retry
Type:integer
Default:3600

Retry interval (in seconds) for LDAP DNS driver Start of Authority

Time interval that a secondary/slave DNS server should wait if an attempt to transfer the zone failed during the previous refresh interval.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ldap_dns_soa_expiry
Type:integer
Default:86400

Expiry interval (in seconds) for LDAP DNS driver Start of Authority

Time interval for which a secondary/slave DNS server holds the information before it is no longer considered authoritative.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ldap_dns_soa_minimum
Type:integer
Default:7200

Minimum interval (in seconds) for LDAP DNS driver Start of Authority

This is the minimum time-to-live that applies to all resource records in the zone file. This value tells other servers how long they should keep the data in cache.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
multi_host
Type:boolean
Default:false

Default value for multi_host in networks.

The nova-network service can operate in a multi-host or single-host mode. In multi-host mode each compute node runs a copy of nova-network and the instances on that compute node use the compute node as a gateway to the Internet. Whereas in single-host mode, a central server runs the nova-network service. All compute nodes forward traffic from the instances to the cloud controller which then forwards traffic to the Internet.

If this option is set to true, some RPC network calls will be sent directly to the host.

Note that this option is only used when using nova-network instead of Neutron in your deployment.

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
network_driver
Type:string
Default:nova.network.linux_net

Driver to use for network creation.

Network driver initializes (creates bridges and so on) only when the first VM lands on a host node. All network managers configure the network using network drivers. The driver is not tied to any particular network manager.

The default Linux driver implements vlans, bridges, and iptables rules using linux utilities.

Note that this option is only used when using nova-network instead of Neutron in your deployment.

Related options:

  • use_neutron

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
firewall_driver
Type:string
Default:nova.virt.firewall.NoopFirewallDriver

Firewall driver to use with nova-network service.

This option only applies when using the nova-network service. When using another networking service, such as Neutron, this should be set to the nova.virt.firewall.NoopFirewallDriver.

Possible values:

  • nova.virt.firewall.IptablesFirewallDriver
  • nova.virt.firewall.NoopFirewallDriver
  • nova.virt.libvirt.firewall.IptablesFirewallDriver
  • […]

Related options:

  • use_neutron: This must be set to False to enable nova-network networking

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
allow_same_net_traffic
Type:boolean
Default:true

Determine whether to allow network traffic from same network.

When set to true, hosts on the same subnet are not filtered and are allowed to pass all types of traffic between them. On a flat network, this allows all instances from all projects unfiltered communication. With VLAN networking, this allows access between instances within the same project.

This option only applies when using the nova-network service. When using other networking services, such as Neutron, security groups or other approaches should be used.

Possible values:

  • True: Network traffic should be allowed to pass between all instances on the same network, regardless of their tenant and security policies
  • False: Network traffic should not be allowed to pass between instances unless it is unblocked in a security group

Related options:

  • use_neutron: This must be set to False to enable nova-network networking
  • firewall_driver: This must be set to nova.virt.libvirt.firewall.IptablesFirewallDriver to ensure the libvirt firewall driver is enabled.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
default_floating_pool
Type:string
Default:nova

Default pool for floating IPs.

This option specifies the default floating IP pool for allocating floating IPs.

While allocating a floating IP, users can optionally pass in the name of the pool they want to allocate from; otherwise, it will be pulled from the default pool.

If this option is not set, then ‘nova’ is used as default floating pool.

Possible values:

  • Any string representing a floating IP pool name

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:This option was used for two purposes: to set the floating IP pool name for nova-network and to do the same for neutron. nova-network is deprecated, as are any related configuration options. Users of neutron, meanwhile, should use the ‘default_floating_pool’ option in the ‘[neutron]’ group.
auto_assign_floating_ip
Type:boolean
Default:false

Autoassigning floating IP to VM

When set to True, a floating IP is automatically allocated and associated with the VM upon creation.

Related options:

  • use_neutron: this option only works with nova-network.

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
floating_ip_dns_manager
Type:string
Default:nova.network.noop_dns_driver.NoopDNSDriver

Full class name for the DNS Manager for floating IPs.

This option specifies the class of the driver that provides functionality to manage DNS entries associated with floating IPs.

When a user adds a DNS entry for a specified domain to a floating IP, nova will add a DNS entry using the specified floating DNS driver. When a floating IP is deallocated, its DNS entry will automatically be deleted.

Possible values:

  • Full Python path to the class to be used

Related options:

  • use_neutron: this option only works with nova-network.

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
instance_dns_manager
Type:string
Default:nova.network.noop_dns_driver.NoopDNSDriver

Full class name for the DNS Manager for instance IPs.

This option specifies the class of the driver that provides functionality to manage DNS entries for instances.

On instance creation, nova will add DNS entries for the instance name and id, using the specified instance DNS driver and domain. On instance deletion, nova will remove the DNS entries.

Possible values:

  • Full Python path to the class to be used

Related options:

  • use_neutron: this option only works with nova-network.

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
instance_dns_domain
Type:string
Default:''

If specified, Nova checks if the availability_zone of every instance matches what the database says the availability_zone should be for the specified dns_domain.

Related options:

  • use_neutron: this option only works with nova-network.

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
use_ipv6
Type:boolean
Default:false

Assign IPv6 and IPv4 addresses when creating instances.

Related options:

  • use_neutron: this only works with nova-network.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
ipv6_backend
Type:string
Default:rfc2462
Valid Values:rfc2462, account_identifier

Abstracts out IPv6 address generation to pluggable backends.

nova-network can be put into dual-stack mode, so that it uses both IPv4 and IPv6 addresses. In dual-stack mode, by default, instances acquire IPv6 global unicast addresses with the help of stateless address auto-configuration mechanism.

Related options:

  • use_neutron: this option only works with nova-network.
  • use_ipv6: this option only works if ipv6 is enabled for nova-network.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
enable_network_quota
Type:boolean
Default:false

This option is used to enable or disable quota checking for tenant networks.

Related options:

  • quota_networks

Warning

This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.

Reason:CRUD operations on tenant networks are only available when using nova-network and nova-network is itself deprecated.
quota_networks
Type:integer
Default:3
Minimum Value:0

This option controls the number of private networks that can be created per project (or per tenant).

Related options:

  • enable_network_quota

Warning

This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.

Reason:CRUD operations on tenant networks are only available when using nova-network and nova-network is itself deprecated.
network_manager
Type:string
Default:nova.network.manager.VlanManager
Valid Values:nova.network.manager.FlatManager, nova.network.manager.FlatDHCPManager, nova.network.manager.VlanManager

Full class name for the Manager for network

Warning

This option is deprecated for removal since 18.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
record
Type:string
Default:<None>

Filename that will be used for storing websocket frames received and sent by a proxy service (like VNC, spice, serial) running on this host. If this is not set, no recording will be done.

daemon
Type:boolean
Default:false

Run as a background process.

ssl_only
Type:boolean
Default:false

Disallow non-encrypted connections.

source_is_ipv6
Type:boolean
Default:false

Set to True if source host is addressed with IPv6.

cert
Type:string
Default:self.pem

Path to SSL certificate file.

key
Type:string
Default:<None>

SSL key file (if separate from cert).

web
Type:string
Default:/usr/share/spice-html5

Path to directory with content which will be served by a web server.

pybasedir
Type:string
Default:<Path>

This option has a sample default set, which means that its actual default value may vary from the one documented above.

The directory where the Nova python modules are installed.

This directory is used to store template files for networking and remote console access. It is also the default path for other config options which need to persist Nova internal data. It is very unlikely that you need to change this option from its default value.

Possible values:

  • The full path to a directory.

Related options:

  • state_path
bindir
Type:string
Default:<Path>

This option has a sample default set, which means that its actual default value may vary from the one documented above.

The directory where the Nova binaries are installed.

This option is only relevant if the networking capabilities from Nova are used (see services below). Nova’s networking capabilities are targeted to be fully replaced by Neutron in the future. It is very unlikely that you need to change this option from its default value.

Possible values:

  • The full path to a directory.
state_path
Type:string
Default:$pybasedir

The top-level directory for maintaining Nova’s state.

This directory is used to store Nova’s internal state. It is used by a variety of other config options which derive from this. In some scenarios (for example migrations) it makes sense to use a storage location which is shared between multiple compute hosts (for example via NFS). Unless the option instances_path gets overwritten, this directory can grow very large.

Possible values:

  • The full path to a directory. Defaults to value provided in pybasedir.
long_rpc_timeout
Type:integer
Default:1800

This option allows setting an alternate timeout value for RPC calls that have the potential to take a long time. If set, RPC calls to other services will use this value for the timeout (in seconds) instead of the global rpc_response_timeout value.

Operations with RPC calls that utilize this value:

  • live migration
  • scheduling

Related options:

  • rpc_response_timeout
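
As a sketch, the two timeouts can be paired like this (the values shown mirror the documented defaults and are illustrative only; tune them for your environment):

[DEFAULT]
rpc_response_timeout = 60
long_rpc_timeout = 1800
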
report_interval
Type:integer
Default:10

Number of seconds indicating how frequently the state of services on a given hypervisor is reported. Nova needs to know this to determine the overall health of the deployment.

Related Options:

  • service_down_time: report_interval should be less than service_down_time. If service_down_time is less than report_interval, services will routinely be considered down, because they report in too rarely.
service_down_time
Type:integer
Default:60

Maximum time in seconds since last check-in for up service

Each compute node periodically updates its status in the database based on the specified report interval. If a compute node hasn’t updated its status for more than service_down_time, it is considered down.

Related Options:

  • report_interval (service_down_time should not be less than report_interval)
  • scheduler.periodic_task_interval
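
As a consistency sketch, the two intervals are repeated here (these are the documented defaults) only to show the report_interval < service_down_time relationship:

[DEFAULT]
report_interval = 10
service_down_time = 60
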
periodic_enable
Type:boolean
Default:true

Enable periodic tasks.

If set to true, this option allows services to periodically run tasks on the manager.

When running multiple schedulers or conductors, you may want to run periodic tasks on only one host; in that case, disable this option for all hosts but one.

periodic_fuzzy_delay
Type:integer
Default:60
Minimum Value:0

Number of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding.

When compute workers are restarted in unison across a cluster, they all end up running the periodic tasks at the same time, causing problems for the external services. To mitigate this behavior, the periodic_fuzzy_delay option allows you to introduce a random initial delay when starting the periodic task scheduler.

Possible Values:

  • Any positive integer (in seconds)
  • 0 : disable the random delay
enabled_apis
Type:list
Default:osapi_compute,metadata

List of APIs to be enabled by default.

enabled_ssl_apis
Type:list
Default:''

List of APIs with enabled SSL.

Nova provides SSL support for the API servers. The enabled_ssl_apis option selects which of the enabled APIs are served over SSL.
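
A hypothetical sketch that serves the compute API over SSL while leaving the metadata API unencrypted (SSL certificates must be configured separately for this to actually work):

[DEFAULT]
enabled_apis = osapi_compute,metadata
enabled_ssl_apis = osapi_compute
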

osapi_compute_listen
Type:string
Default:0.0.0.0

IP address on which the OpenStack API will listen.

The OpenStack API service listens on this IP address for incoming requests.

osapi_compute_listen_port
Type:port number
Default:8774
Minimum Value:0
Maximum Value:65535

Port on which the OpenStack API will listen.

The OpenStack API service listens on this port number for incoming requests.

osapi_compute_workers
Type:integer
Default:<None>
Minimum Value:1

Number of workers for OpenStack API service. The default will be the number of CPUs available.

OpenStack API services can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. OpenStack API service will run in the specified number of processes.

Possible Values:

  • Any positive integer
  • None (default value)
metadata_listen
Type:string
Default:0.0.0.0

IP address on which the metadata API will listen.

The metadata API service listens on this IP address for incoming requests.

metadata_listen_port
Type:port number
Default:8775
Minimum Value:0
Maximum Value:65535

Port on which the metadata API will listen.

The metadata API service listens on this port number for incoming requests.

metadata_workers
Type:integer
Default:<None>
Minimum Value:1

Number of workers for metadata service. If not specified the number of available CPUs will be used.

The metadata service can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. The metadata service will run in the specified number of processes.

Possible Values:

  • Any positive integer
  • None (default value)
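
A sketch combining the listen and worker options above (the addresses and ports are the defaults; the worker counts of 4 are arbitrary illustrative values, since both default to the host CPU count):

[DEFAULT]
osapi_compute_listen = 0.0.0.0
osapi_compute_listen_port = 8774
osapi_compute_workers = 4
metadata_listen = 0.0.0.0
metadata_listen_port = 8775
metadata_workers = 4
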
servicegroup_driver
Type:string
Default:db
Valid Values:db, mc

This option specifies the driver to be used for the servicegroup service.

ServiceGroup API in nova enables checking status of a compute node. When a compute worker running the nova-compute daemon starts, it calls the join API to join the compute group. Services like nova scheduler can query the ServiceGroup API to check if a node is alive. Internally, the ServiceGroup client driver automatically updates the compute worker status. There are multiple backend implementations for this service: Database ServiceGroup driver and Memcache ServiceGroup driver.

Related Options:

  • service_down_time (maximum time since last check-in for up service)

Possible values

db
Database ServiceGroup driver
mc
Memcache ServiceGroup driver
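
A sketch switching to the Memcache ServiceGroup driver (the server address is a placeholder; the mc driver assumes a reachable memcached instance is configured):

[DEFAULT]
servicegroup_driver = mc
memcached_servers = 192.0.2.10:11211
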
api

Options under this group are used to define Nova API.

auth_strategy
Type:string
Default:keystone
Valid Values:keystone, noauth2

Determine the strategy to use for authentication.

Possible values

keystone
Use keystone for authentication.
noauth2
Designed for testing only, as it does no actual credential checking. ‘noauth2’ provides administrative credentials only if ‘admin’ is specified as the username.
Deprecated Variations: [DEFAULT] auth_strategy
use_forwarded_for
Type:boolean
Default:false

When True, the ‘X-Forwarded-For’ header is treated as the canonical remote address. When False (the default), the ‘remote_address’ header is used.

You should only enable this if you have an HTML sanitizing proxy.

Deprecated Variations: [DEFAULT] use_forwarded_for
config_drive_skip_versions
Type:string
Default:1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01

When gathering the existing metadata for a config drive, the EC2-style metadata is returned for all versions that don’t appear in this option. As of the Liberty release, the available versions are:

  • 1.0
  • 2007-01-19
  • 2007-03-01
  • 2007-08-29
  • 2007-10-10
  • 2007-12-15
  • 2008-02-01
  • 2008-09-01
  • 2009-04-04

The option is in the format of a single string, with each version separated by a space.

Possible values:

  • Any string that represents zero or more versions, separated by spaces.
Deprecated Variations: [DEFAULT] config_drive_skip_versions
vendordata_providers
Type:list
Default:StaticJSON

A list of vendordata providers.

Vendordata providers are how deployers can provide metadata, via config drive and the metadata service, that is specific to their deployment.

For more information on the requirements for implementing a vendordata dynamic endpoint, please see the vendordata.rst file in the nova developer reference.

Related options:

  • vendordata_dynamic_targets
  • vendordata_dynamic_ssl_certfile
  • vendordata_dynamic_connect_timeout
  • vendordata_dynamic_read_timeout
  • vendordata_dynamic_failure_fatal
Deprecated Variations: [DEFAULT] vendordata_providers
vendordata_dynamic_targets
Type:list
Default:''

A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url>.

The dynamic vendordata provider collects metadata by contacting external REST services and querying them for information about the instance. This behaviour is documented in the vendordata.rst file in the nova developer reference.

Deprecated Variations: [DEFAULT] vendordata_dynamic_targets
vendordata_dynamic_ssl_certfile
Type:string
Default:''

Path to an optional certificate file or CA bundle to verify the SSL certificates of dynamic vendordata REST services against.

Possible values:

  • An empty string, or a path to a valid certificate file

Related options:

  • vendordata_providers
  • vendordata_dynamic_targets
  • vendordata_dynamic_connect_timeout
  • vendordata_dynamic_read_timeout
  • vendordata_dynamic_failure_fatal
Deprecated Variations: [DEFAULT] vendordata_dynamic_ssl_certfile
vendordata_dynamic_connect_timeout
Type:integer
Default:5
Minimum Value:3

Maximum wait time for an external REST service to connect.

Possible values:

  • Any integer with a value greater than three (the TCP packet retransmission timeout). Note that instance start may be blocked during this wait time, so this value should be kept small.

Related options:

  • vendordata_providers
  • vendordata_dynamic_targets
  • vendordata_dynamic_ssl_certfile
  • vendordata_dynamic_read_timeout
  • vendordata_dynamic_failure_fatal
Deprecated Variations: [DEFAULT] vendordata_dynamic_connect_timeout
vendordata_dynamic_read_timeout
Type:integer
Default:5
Minimum Value:0

Maximum wait time for an external REST service to return data once connected.

Possible values:

  • Any integer. Note that instance start is blocked during this wait time, so this value should be kept small.

Related options:

  • vendordata_providers
  • vendordata_dynamic_targets
  • vendordata_dynamic_ssl_certfile
  • vendordata_dynamic_connect_timeout
  • vendordata_dynamic_failure_fatal
Deprecated Variations: [DEFAULT] vendordata_dynamic_read_timeout
vendordata_dynamic_failure_fatal
Type:boolean
Default:false

Should failures to fetch dynamic vendordata be fatal to instance boot?

Related options:

  • vendordata_providers
  • vendordata_dynamic_targets
  • vendordata_dynamic_ssl_certfile
  • vendordata_dynamic_connect_timeout
  • vendordata_dynamic_read_timeout
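
Tying the vendordata options together, a hypothetical sketch (the "testing" target name and the 127.0.0.1:9312 endpoint are invented placeholders, not defaults):

[api]
vendordata_providers = StaticJSON,DynamicJSON
vendordata_dynamic_targets = testing@http://127.0.0.1:9312/vendordata
vendordata_dynamic_connect_timeout = 5
vendordata_dynamic_read_timeout = 5
vendordata_dynamic_failure_fatal = false
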
metadata_cache_expiration
Type:integer
Default:15
Minimum Value:0

This option is the time (in seconds) to cache metadata. When set to 0, metadata caching is disabled entirely; this is generally not recommended for performance reasons. Increasing this setting should improve response times of the metadata API when under heavy load. Higher values may increase memory usage, and result in longer times for host metadata changes to take effect.

Deprecated Variations: [DEFAULT] metadata_cache_expiration
vendordata_jsonfile_path
Type:string
Default:<None>

Cloud providers may store custom data in a vendor data file that will then be available to the instances via the metadata service and the rendering of config drive. The default class for this, JsonFileVendorData, loads this information from a JSON file, whose path is configured by this option. If no path is set by this option, the class returns an empty dictionary.

Possible values:

  • Any string representing the path to the data file, or an empty string
    (default).
Deprecated Variations: [DEFAULT] vendordata_jsonfile_path
max_limit
Type:integer
Default:1000
Minimum Value:0

As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option.

Deprecated Variations: [DEFAULT] osapi_max_limit
compute_link_prefix
Type:string
Default:<None>

This string is prepended to the normal URL that is returned in links to the OpenStack Compute API. If it is empty (the default), the URLs are returned unchanged.

Possible values:

  • Any string, including an empty string (the default).
Deprecated Variations: [DEFAULT] osapi_compute_link_prefix
glance_link_prefix
Type:string
Default:<None>

This string is prepended to the normal URL that is returned in links to Glance resources. If it is empty (the default), the URLs are returned unchanged.

Possible values:

  • Any string, including an empty string (the default).
Deprecated Variations: [DEFAULT] osapi_glance_link_prefix
instance_list_per_project_cells
Type:boolean
Default:false

When enabled, this will cause the API to only query cell databases in which the tenant has mapped instances. This requires an additional (fast) query in the API database before each list, but also (potentially) limits the number of cell databases that must be queried to provide the result. If you have a small number of cells, or tenants are likely to have instances in all cells, then this should be False. If you have many cells, especially if you confine tenants to a small subset of those cells, this should be True.

instance_list_cells_batch_strategy
Type:string
Default:distributed
Valid Values:distributed, fixed

This controls the method by which the API queries cell databases in smaller batches during large instance list operations. If batching is performed, a large instance list operation will request some fraction of the overall API limit from each cell database initially, and will re-request that same batch size as records are consumed (returned) from each cell as necessary. Larger batches mean less chattiness between the API and the database, but potentially more wasted effort processing the results from the database which will not be returned to the user. Any strategy will yield a batch size of at least 100 records, to avoid a user causing many tiny database queries in their request.

Related options:

  • instance_list_cells_batch_fixed_size
  • max_limit

Possible values

distributed
Divide the limit requested by the user by the number of cells in the system. This requires counting the cells in the system initially, which will not be refreshed until service restart or SIGHUP. The actual batch size will be increased by 10% over the result of ($limit / $num_cells).
fixed
Request fixed-size batches from each cell, as defined by instance_list_cells_batch_fixed_size. If the limit is smaller than the batch size, the limit will be used instead. If you do not wish batching to be used at all, setting the fixed size equal to the max_limit value will cause only one request per cell database to be issued.
instance_list_cells_batch_fixed_size
Type:integer
Default:100
Minimum Value:100

This controls the batch size of instances requested from each cell database if instance_list_cells_batch_strategy is set to fixed. This integer value will define the limit issued to each cell every time a batch of instances is requested, regardless of the number of cells in the system or any other factors. Per the general logic called out in the documentation for instance_list_cells_batch_strategy, the minimum value for this is 100 records per batch.

Related options:

  • instance_list_cells_batch_strategy
  • max_limit
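
A sketch of fixed-size batching (the 200-record batch size is an arbitrary illustrative value):

[api]
instance_list_cells_batch_strategy = fixed
instance_list_cells_batch_fixed_size = 200
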
list_records_by_skipping_down_cells
Type:boolean
Default:true

When set to False, this will cause the API to return a 500 error if there is an infrastructure failure such as non-responsive cells. If you want the API to skip the down cells and return the results from the up cells, set this option to True.

use_neutron_default_nets
Type:boolean
Default:false

When True, the TenantNetworkController will query the Neutron API to get the default networks to use.

Related options:

  • neutron_default_tenant_id
Deprecated Variations: [DEFAULT] use_neutron_default_nets
neutron_default_tenant_id
Type:string
Default:default

Tenant ID (also referred to in some places as the ‘project ID’) to use when getting the default network from the Neutron API.

Related options:

  • use_neutron_default_nets
Deprecated Variations: [DEFAULT] neutron_default_tenant_id
enable_instance_password
Type:boolean
Default:true

Enables returning of the instance password by the relevant server API calls such as create, rebuild, evacuate, or rescue. If the hypervisor does not support password injection, the returned password will not be correct; in that case, set this option to False.

Deprecated Variations: [DEFAULT] enable_instance_password
api_database

The Nova API Database is a separate database which is used for information that is shared across cells. This database is mandatory since the Mitaka release (13.0.0).

connection
Type:string
Default:<None>

The SQLAlchemy connection string to use to connect to the database.

connection_parameters
Type:string
Default:''

Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&…
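
A minimal sketch of the API database connection, using the usual install-guide placeholders (NOVA_DBPASS and the controller host name are assumptions for illustration):

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
connection_parameters = charset=utf8
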

sqlite_synchronous
Type:boolean
Default:true

If True, SQLite uses synchronous mode.

slave_connection
Type:string
Default:<None>

The SQLAlchemy connection string to use to connect to the slave database.

mysql_sql_mode
Type:string
Default:TRADITIONAL

The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=

connection_recycle_time
Type:integer
Default:3600

Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool.

Deprecated Variations: [api_database] idle_timeout
max_pool_size
Type:integer
Default:<None>

Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit.

max_retries
Type:integer
Default:10

Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.

retry_interval
Type:integer
Default:10

Interval between retries of opening a SQL connection.

max_overflow
Type:integer
Default:<None>

If set, use this value for max_overflow with SQLAlchemy.

connection_debug
Type:integer
Default:0

Verbosity of SQL debugging information: 0=None, 100=Everything.

connection_trace
Type:boolean
Default:false

Add Python stack traces to SQL as comment strings.

pool_timeout
Type:integer
Default:<None>

If set, use this value for pool_timeout with SQLAlchemy.

barbican
barbican_endpoint
Type:string
Default:<None>

Use this endpoint to connect to Barbican, for example: “http://localhost:9311/”

barbican_api_version
Type:string
Default:<None>

Version of the Barbican API, for example: “v1”

auth_endpoint
Type:string
Default:http://localhost/identity/v3

Use this endpoint to connect to Keystone

Deprecated Variations: [key_manager] auth_url
retry_delay
Type:integer
Default:1

Number of seconds to wait before retrying poll for key creation completion

number_of_retries
Type:integer
Default:60

Number of times to retry poll for key creation completion

verify_ssl
Type:boolean
Default:true

Specifies whether to verify TLS (https) requests. If False, the server’s certificate will not be validated.

barbican_endpoint_type
Type:string
Default:public
Valid Values:public, internal, admin

Specifies the type of endpoint. Allowed values are: public, internal, and admin.
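
A sketch of a [barbican] section using the values documented above (the endpoint URL mirrors the example given for barbican_endpoint):

[barbican]
barbican_endpoint = http://localhost:9311/
barbican_api_version = v1
barbican_endpoint_type = public
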

cache
config_prefix
Type:string
Default:cache.oslo

Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name.

expiration_time
Type:integer
Default:600

Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn’t have an explicit cache expiration time defined for it.

backend
Type:string
Default:dogpile.cache.null
Valid Values:oslo_cache.memcache_pool, oslo_cache.dict, oslo_cache.mongo, oslo_cache.etcd3gw, dogpile.cache.memcached, dogpile.cache.pylibmc, dogpile.cache.bmemcached, dogpile.cache.dbm, dogpile.cache.redis, dogpile.cache.memory, dogpile.cache.memory_pickle, dogpile.cache.null

Cache backend module. For eventlet-based environments or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For environments with fewer than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend.
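
A sketch enabling the pooled memcache backend (the server addresses are placeholders; the documented default is localhost:11211):

[cache]
enabled = true
backend = oslo_cache.memcache_pool
memcache_servers = 192.0.2.10:11211,192.0.2.11:11211
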

backend_argument
Type:multi-valued
Default:''

Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: “<argname>:<value>”.

proxies
Type:list
Default:''

Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior.

enabled
Type:boolean
Default:false

Global toggle for caching.

debug_cache_backend
Type:boolean
Default:false

Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false.

memcache_servers
Type:list
Default:localhost:11211

Memcache servers in the format of “host:port”. (dogpile.cache.memcached and oslo_cache.memcache_pool backends only).

memcache_dead_retry
Type:integer
Default:300

Number of seconds a memcached server is considered dead before it is tried again. (dogpile.cache.memcached and oslo_cache.memcache_pool backends only).

memcache_socket_timeout
Type:floating point
Default:3.0

Timeout in seconds for every call to a server. (dogpile.cache.memcached and oslo_cache.memcache_pool backends only).

memcache_pool_maxsize
Type:integer
Default:10

Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only).

memcache_pool_unused_timeout
Type:integer
Default:60

Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only).

memcache_pool_connection_get_timeout
Type:integer
Default:10

Number of seconds that an operation will wait to get a memcache client connection.

cells

DEPRECATED: Cells options allow you to use cells v1 functionality in an OpenStack deployment.

Note that the options in this group are only for cells v1 functionality, which is considered experimental and not recommended for new deployments. Cells v1 is being replaced with cells v2, which is required starting in the 15.0.0 Ocata release; all Nova deployments will be at least a cells v2 deployment of one cell.

enable
Type:boolean
Default:false

Enable cell v1 functionality.

Note that cells v1 is considered experimental and not recommended for new Nova deployments. Cells v1 is being replaced by cells v2; starting in the 15.0.0 Ocata release, all Nova deployments are at least a cells v2 deployment of one cell. Setting this option, or any other options in the [cells] group, is not required for cells v2.

When this functionality is enabled, it lets you scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker.

Related options:

  • name: A unique cell name must be given when this functionality is enabled.
  • cell_type: Cell type should be defined for all cells.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
name
Type:string
Default:nova

Name of the current cell.

This value must be unique for each cell. The name of a cell is used as its ID; leaving this option unset or setting the same name for two or more cells may cause unexpected behaviour.

Related options:

  • enable: This option is meaningful only when the cells service is enabled

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
capabilities
Type:list
Default:hypervisor=xenserver;kvm,os=linux;windows

Cell capabilities.

List of arbitrary key=value pairs defining capabilities of the current cell to be sent to the parent cells. These capabilities are intended to be used in cells scheduler filters/weighers.

Possible values:

  • A list of key=value pairs, for example: hypervisor=xenserver;kvm,os=linux;windows

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
call_timeout
Type:integer
Default:60
Minimum Value:0

Call timeout.

The cell messaging module waits for response(s) to be put into the eventlet queue. This option defines how many seconds to wait for a response from a call to a cell.

Possible values:

  • An integer, corresponding to the interval time in seconds.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
reserve_percent
Type:floating point
Default:10.0

Reserve percentage

Percentage of cell capacity to hold in reserve, so the minimum amount of free resource is considered to be:

min_free = total * (reserve_percent / 100.0)

This option affects both memory and disk utilization.

The primary purpose of this reserve is to ensure some space is available for users who want to resize their instance to be larger. Note that, currently, once capacity expands into this reserve space, this option is ignored.

Possible values:

  • An integer or float, corresponding to the percentage of cell capacity to be held in reserve.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
cell_type
Type:string
Default:compute
Valid Values:api, compute

Type of cell.

When cells feature is enabled the hosts in the OpenStack Compute cloud are partitioned into groups. Cells are configured as a tree. The top-level cell’s cell_type must be set to api. All other cells are defined as a compute cell by default.

Related option:

  • quota_driver: Disable quota checking for the child cells. (nova.quota.NoopQuotaDriver)

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
mute_child_interval
Type:integer
Default:300

Mute child interval.

Number of seconds without a capability or capacity update after which a child cell is treated as a mute cell. A mute child cell is then weighed so that it is highly recommended to be skipped.

Possible values:

  • An integer, corresponding to the interval time in seconds.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
bandwidth_update_interval
Type:integer
Default:600

Bandwidth update interval.

Seconds between bandwidth usage cache updates for cells.

Possible values:

  • An integer, corresponding to the interval time in seconds.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
instance_update_sync_database_limit
Type:integer
Default:100

Instance update sync database limit.

Number of instances to pull from the database at one time for a sync. If there are more instances to update, the results will be paged through.

Possible values:

  • An integer, corresponding to a number of instances.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
mute_weight_multiplier
Type:floating point
Default:-10000.0

Mute weight multiplier.

Multiplier used to weigh mute children. Mute children cells are recommended to be skipped so their weight is multiplied by this negative value.

Possible values:

  • Negative numeric number

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
ram_weight_multiplier
Type:floating point
Default:10.0

Ram weight multiplier.

Multiplier used for weighing ram. Negative numbers indicate that Compute should stack VMs on one host instead of spreading out new VMs to more hosts in the cell.

Possible values:

  • Numeric multiplier

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
offset_weight_multiplier
Type:floating point
Default:1.0

Offset weight multiplier

Multiplier used for the offset weigher. Cells with higher weight_offsets in the DB will be preferred. The weight_offset is a property of a cell stored in the database. It can be used by a deployer to have scheduling decisions favor or disfavor cells based on the setting.

Possible values:

  • Numeric multiplier

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
instance_updated_at_threshold
Type:integer
Default:3600

Instance updated at threshold

Number of seconds after an instance was updated or deleted to continue to update cells. This option lets the cells manager only attempt to sync instances that have been updated recently, i.e., a threshold of 3600 means only instances modified in the last hour will be updated.

Possible values:

  • Threshold in seconds

Related options:

  • This value is used with the instance_update_num_instances value in a periodic task run.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
instance_update_num_instances
Type:integer
Default:1

Instance update num instances

On every run of the periodic task, the nova cells manager will attempt to sync instance_update_num_instances instances. When the manager gets the list of instances, it shuffles them so that multiple nova-cells services do not attempt to sync the same instances in lockstep.

Possible values:

  • Positive integer number

Related options:

  • This value is used with the instance_updated_at_threshold value in a periodic task run.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
max_hop_count
Type:integer
Default:10

Maximum hop count

When processing a targeted message, if the local cell is not the target, a route is defined between neighbouring cells, and the message is processed across the whole routing path. This option defines the maximum number of hops allowed to reach the target.

Possible values:

  • Positive integer value

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
scheduler
Type:string
Default:nova.cells.scheduler.CellsScheduler

Cells scheduler.

The class of the driver used by the cells scheduler. This should be the full Python path to the class to be used. If nothing is specified in this option, the CellsScheduler is used.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
rpc_driver_queue_base
Type:string
Default:cells.intercell

RPC driver queue base.

When sending a message to another cell by JSON-ifying the message and making an RPC cast to ‘process_message’, a base queue is used. This option defines the base queue name to be used when communicating between cells. Various topics by message type will be appended to this.

Possible values:

  • The base queue name to be used when communicating between cells.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
scheduler_filter_classes
Type:list
Default:nova.cells.filters.all_filters

Scheduler filter classes.

Filter classes the cells scheduler should use. An entry of “nova.cells.filters.all_filters” maps to all cells filters included with nova. As of the Mitaka release the following filter classes are available:

Different cell filter: A scheduler hint of ‘different_cell’ with a value of a full cell name may be specified to route a build away from a particular cell.

Image properties filter: Image metadata named ‘hypervisor_version_requires’ with a version specification may be specified to ensure the build goes to a cell which has hypervisors of the required version. If either the version requirement on the image or the hypervisor capability of the cell is not present, this filter returns without filtering out the cells.

Target cell filter: A scheduler hint of ‘target_cell’ with a value of a full cell name may be specified to route a build to a particular cell. No error handling is done, as there’s no way to know whether the full path is valid.

As an admin user, you can also add a filter that directs builds to a particular cell.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
scheduler_weight_classes
Type:list
Default:nova.cells.weights.all_weighers

Scheduler weight classes.

Weigher classes the cells scheduler should use. An entry of “nova.cells.weights.all_weighers” maps to all cell weighers included with nova. As of the Mitaka release the following weight classes are available:

mute_child: Downgrades the likelihood of choosing child cells that haven’t sent capacity or capability updates in a while. Options include mute_weight_multiplier (multiplier for mute children; value should be negative).

ram_by_instance_type: Select cells with the most RAM capacity for the instance type being requested. Because higher weights win, Compute returns the number of available units for the instance type requested. The ram_weight_multiplier option defaults to 10.0, which scales the weight by a factor of 10. Use a negative number to stack VMs on one host instead of spreading out new VMs to more hosts in the cell.

weight_offset: Allows modifying the database to weight a particular cell. The highest weight will be the first cell to be scheduled for launching an instance. When the weight_offset of a cell is set to 0, it is unlikely to be picked, but it could be picked if other cells have a lower weight, for example if they are full. And when the weight_offset is set to a very high value (for example, ‘999999999999999’), it is likely to be picked if other cells do not have a higher weight.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
scheduler_retries
Type:integer
Default:10

Scheduler retries.

Specifies how many times the scheduler tries to launch a new instance when no cells are available.

Possible values:

  • Positive integer value

Related options:

  • This value is used with the scheduler_retry_delay value while retrying to find a suitable cell.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
scheduler_retry_delay
Type:integer
Default:2

Scheduler retry delay.

Specifies the delay (in seconds) between scheduling retries when no cell can be found to place the new instance on. If the instance cannot be scheduled to a cell after scheduler_retries attempts, with scheduler_retry_delay between them, scheduling of the instance fails.

Possible values:

  • Time in seconds.

Related options:

  • This value is used with the scheduler_retries value while retrying to find a suitable cell.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
db_check_interval
Type:integer
Default:60

DB check interval.

The cell state manager updates cell status for all cells from the DB only after this interval has passed; otherwise, cached statuses are used. If this value is 0 or negative, all cell statuses are updated from the DB whenever a state is needed.

Possible values:

  • Interval time, in seconds.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
cells_config
Type:string
Default:<None>

Optional cells configuration.

Configuration file from which to read cells configuration. If given, overrides reading cells from the database.

Cells store all inter-cell communication data, including user names and passwords, in the database. Because the cells data is not updated very frequently, use this option to specify a JSON file to store cells data. With this configuration, the database is no longer consulted when reloading the cells data. The file must have columns present in the Cell model (excluding common database fields and the id column). You must specify the queue connection information through a transport_url field, instead of username, password, and so on.

The transport_url has the following form: rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST

Possible values:

The scheme can be either qpid or rabbit; the following sample shows this optional configuration:

{
    "parent": {
        "name": "parent",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": true
    },
    "cell1": {
        "name": "cell1",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit1.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": false
    },
    "cell2": {
        "name": "cell2",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit2.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": false
    }
}

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:Cells v1 is being replaced with Cells v2.
cinder
catalog_info
Type:string
Default:volumev3:cinderv3:publicURL

Info to match when looking for cinder in the service catalog.

Possible values:

  • Format is separated values of the form: <service_type>:<service_name>:<endpoint_type>

Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release.

Related options:

  • endpoint_template - Setting this option will override catalog_info
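
A sketch of a [cinder] section using the documented default catalog lookup (the region name is an illustrative placeholder):

[cinder]
catalog_info = volumev3:cinderv3:publicURL
os_region_name = RegionOne
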
endpoint_template
Type:string
Default:<None>

If this option is set, it will override the service catalog lookup with this template for the cinder endpoint.

Possible values:

Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release.

Related options:

  • catalog_info - If endpoint_template is not set, catalog_info will be used.
os_region_name
Type:string
Default:<None>

Region name of this node. This is used when picking the URL in the service catalog.

Possible values:

  • Any string representing region name
http_retries
Type:integer
Default:3
Minimum Value:0

Number of times cinderclient should retry on any failed http call. 0 means connection is attempted only once. Setting it to any positive integer means that on failure connection is retried that many times e.g. setting it to 3 means total attempts to connect will be 4.

Possible values:

  • Any integer value. 0 means connection is attempted only once
cross_az_attach
Type:boolean
Default:true

Allow attach between instance and volume in different availability zones.

If False, volumes attached to an instance must be in the same availability zone in Cinder as the instance availability zone in Nova. This also means care should be taken when booting an instance from a volume where source is not “volume” because Nova will attempt to create a volume using the same availability zone as what is assigned to the instance. If that AZ is not in Cinder (or allow_availability_zone_fallback=False in cinder.conf), the volume create request will fail and the instance will fail the build request. By default there is no availability zone restriction on volume attach.

cafile
Type:string
Default:<None>

PEM encoded Certificate Authority to use when verifying HTTPS connections.

certfile
Type:string
Default:<None>

PEM encoded client certificate cert file

keyfile
Type:string
Default:<None>

PEM encoded client certificate key file

insecure
Type:boolean
Default:false

If True, SSL certificate verification for HTTPS connections is disabled.

timeout
Type:integer
Default:<None>

Timeout value for http requests

collect_timing
Type:boolean
Default:false

Collect per-API call timing information.

split_loggers
Type:boolean
Default:false

Log requests to multiple loggers.

auth_type
Type:unknown type
Default:<None>

Authentication type to load

Deprecated Variations: [cinder] auth_plugin
auth_section
Type:unknown type
Default:<None>

Config Section from which to load plugin specific options

auth_url
Type:unknown type
Default:<None>

Authentication URL

system_scope
Type:unknown type
Default:<None>

Scope for system operations

domain_id
Type:unknown type
Default:<None>

Domain ID to scope to

domain_name
Type:unknown type
Default:<None>

Domain name to scope to

project_id
Type:unknown type
Default:<None>

Project ID to scope to

project_name
Type:unknown type
Default:<None>

Project name to scope to

project_domain_id
Type:unknown type
Default:<None>

Domain ID containing project

project_domain_name
Type:unknown type
Default:<None>

Domain name containing project

trust_id
Type:unknown type
Default:<None>

Trust ID

default_domain_id
Type:unknown type
Default:<None>

Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

default_domain_name
Type:unknown type
Default:<None>

Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

user_id
Type:unknown type
Default:<None>

User ID

username
Type:unknown type
Default:<None>

Username

Deprecated Variations: [cinder] user-name, [cinder] user_name
user_domain_id
Type:unknown type
Default:<None>

User’s domain id

user_domain_name
Type:unknown type
Default:<None>

User’s domain name

password
Type:unknown type
Default:<None>

User’s password

tenant_id
Type:unknown type
Default:<None>

Tenant ID

tenant_name
Type:unknown type
Default:<None>

Tenant Name

compute
consecutive_build_service_disable_threshold
Type:integer
Default:10

Enables reporting of build failures to the scheduler.

Any nonzero value will enable sending build failure statistics to the scheduler for use by the BuildFailureWeigher.

Possible values:

  • Any positive integer enables reporting build failures.
  • Zero to disable reporting build failures.

Related options:

  • [filter_scheduler]/build_failure_weight_multiplier
shutdown_retry_interval
Type:integer
Default:10
Minimum Value:1

Time to wait in seconds before resending an ACPI shutdown signal to instances.

The overall time to wait is set by shutdown_timeout.

Possible values:

  • Any integer greater than 0 in seconds

Related options:

  • shutdown_timeout
resource_provider_association_refresh
Type:integer
Default:300
Minimum Value:0
Mutable:This option can be changed without restarting.

Interval for updating nova-compute-side cache of the compute node resource provider’s aggregates and traits info.

This option specifies the number of seconds between attempts to update a provider’s aggregates and traits information in the local cache of the compute node.

A value of zero disables cache refresh completely.

Possible values:

  • Any positive integer in seconds, or zero to disable refresh.
cpu_shared_set
Type:string
Default:<None>

Defines which physical CPUs (pCPUs) will be used for best-effort guest vCPU resources.

Currently only used by the libvirt driver to place guest emulator threads when the flavor extra spec hw:emulator_threads_policy is set to share.

For example:

cpu_shared_set = "4-12,^8,15"
live_migration_wait_for_vif_plug
Type:boolean
Default:false

Determine if the source compute host should wait for a network-vif-plugged event from the (neutron) networking service before starting the actual transfer of the guest to the destination compute host.

Note that this option is read on the destination host of a live migration. If you set this option the same on all of your compute hosts, which you should do if you use the same networking backend universally, you do not have to worry about this.

Before starting the transfer of the guest, some setup occurs on the destination compute host, including plugging virtual interfaces. Depending on the networking backend on the destination host, a network-vif-plugged event may be triggered and then received on the source compute host and the source compute can wait for that event to ensure networking is set up on the destination host before starting the guest transfer in the hypervisor.

By default, this is False for two reasons:

  1. Backward compatibility: deployments should test this out and ensure it works for them before enabling it.
  2. The compute service cannot reliably determine which types of virtual interfaces (port.binding:vif_type) will send network-vif-plugged events without an accompanying port binding:host_id change. Open vSwitch and linuxbridge should be OK, but OpenDaylight is at least one known backend that will not currently work in this case, see bug https://launchpad.net/bugs/1755890 for more details.

Possible values:

  • True: wait for network-vif-plugged events before starting guest transfer
  • False: do not wait for network-vif-plugged events before starting guest transfer (this is how things have always worked before this option was introduced)

Related options:

  • [DEFAULT]/vif_plugging_is_fatal: if live_migration_wait_for_vif_plug is True and vif_plugging_timeout is greater than 0, and a timeout is reached, the live migration process will fail with an error but the guest transfer will not have started to the destination host
  • [DEFAULT]/vif_plugging_timeout: if live_migration_wait_for_vif_plug is True, this controls the amount of time to wait before timing out and either failing if vif_plugging_is_fatal is True, or simply continuing with the live migration
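
A hedged sketch combining this option with its related [DEFAULT] options (the timeout of 300 seconds is illustrative; adjust for your networking backend):

[compute]
live_migration_wait_for_vif_plug = true

[DEFAULT]
vif_plugging_is_fatal = true
vif_plugging_timeout = 300
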
conductor

Options under this group are used to define Conductor’s communication, which manager should act as a proxy between computes and the database, and finally, how many worker processes should be used.

workers
Type:integer
Default:<None>

Number of workers for OpenStack Conductor service. The default will be the number of CPUs available.

console

Options under this group allow tuning the configuration of the console proxy service.

Note: the configuration of every compute service includes a console_host option, which selects the console proxy service to connect to.

allowed_origins
Type:list
Default:''

Adds a list of allowed origins to the console websocket proxy, to allow connections from other origin hostnames. The websocket proxy matches the host header with the origin header to prevent cross-site requests. This list specifies which values, other than host, are allowed in the origin header.

Possible values:

  • A list where each element is an allowed origin hostname, else an empty list
Deprecated Variations: [DEFAULT] console_allowed_origins
consoleauth
token_ttl
Type:integer
Default:600
Minimum Value:0

The lifetime of a console auth token (in seconds).

A console auth token is used in authorizing console access for a user. Once the auth token time to live count has elapsed, the token is considered expired. Expired tokens are then deleted.

Related options:

  • [workarounds]/enable_consoleauth
Deprecated Variations: [DEFAULT] console_token_ttl
devices
enabled_vgpu_types
Type:list
Default:''

The vGPU types enabled in the compute node.

Some pGPUs (e.g. NVIDIA GRID K1) support different vGPU types. Users can use this option to specify a list of enabled vGPU types that may be assigned to a guest instance. Note, however, that Nova only supports a single type in the Queens release: if more than one vGPU type is specified (as a comma-separated list), only the first one will be used. For example:

[devices]
enabled_vgpu_types = GRID K100,Intel GVT-g,MxGPU.2,nvidia-11
ephemeral_storage_encryption
enabled
Type:boolean
Default:false

Enables/disables LVM ephemeral storage encryption.

cipher
Type:string
Default:aes-xts-plain64

Cipher-mode string to be used.

The cipher and mode to be used to encrypt ephemeral storage. The set of cipher-mode combinations available depends on kernel support. According to the dm-crypt documentation, the cipher is expected to be in the format: “<cipher>-<chainmode>-<ivmode>”.

Possible values:

  • Any crypto option listed in /proc/crypto.
key_size
Type:integer
Default:512
Minimum Value:1

Encryption key length in bits.

The bit length of the encryption key to be used to encrypt ephemeral storage. In XTS mode, only half of the bits are used for the encryption key.
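
A sketch enabling LVM ephemeral storage encryption with the documented defaults (in aes-xts-plain64 mode, the 512-bit key yields 256-bit AES encryption):

[ephemeral_storage_encryption]
enabled = true
cipher = aes-xts-plain64
key_size = 512
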

filter_scheduler
host_subset_size
Type:integer
Default:1
Minimum Value:1

Size of subset of best hosts selected by scheduler.

New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option.

Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • An integer, where the integer corresponds to the size of a host subset. Any integer is valid, although any value less than 1 will be treated as 1
Deprecated Variations: [DEFAULT] scheduler_host_subset_size
max_io_ops_per_host
Type:integer
Default:8

The number of instances that can be actively performing IO on a host.

Instances performing IO includes those in the following states: build, resize, snapshot, migrate, rescue, unshelve.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘io_ops_filter’ filter is enabled.

Possible values:

  • An integer, where the integer corresponds to the max number of instances that can be actively performing IO on any given host.
Deprecated Variations: [DEFAULT] max_io_ops_per_host
max_instances_per_host
Type:integer
Default:50
Minimum Value:1

Maximum number of instances that can be active on a host.

If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The NumInstancesFilter and AggregateNumInstancesFilter will reject any host that has at least as many instances as this option’s value.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘NumInstancesFilter’ or ‘AggregateNumInstancesFilter’ filter is enabled.

Possible values:

  • An integer, where the integer corresponds to the max instances that can be scheduled on a host.
Deprecated Variations: [DEFAULT] max_instances_per_host
track_instance_changes
Type:boolean
Default:true

Enable querying of individual hosts for instance information.

The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host.

If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

NOTE: In a multi-cell (v2) setup where the cell MQ is separated from the top-level, computes cannot directly communicate with the scheduler. Thus, this option cannot be enabled in that scenario. See also the [workarounds]/disable_group_policy_check_upcall option.

Deprecated Variations: [DEFAULT] scheduler_tracks_instance_changes
available_filters
Type:multi-valued
Default:nova.scheduler.filters.all_filters

Filters that the scheduler can use.

An unordered list of the filter classes the nova scheduler may apply. Only the filters specified in the ‘enabled_filters’ option will be used, but any filter appearing in that option must also be included in this list.

By default, this is set to all filters that are included with nova.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • A list of zero or more strings, where each string corresponds to the name of a filter that may be used for selecting a host

Related options:

  • enabled_filters
Deprecated Variations: [DEFAULT] scheduler_available_filters
enabled_filters
Type:list
Default:RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

Filters that the scheduler will use.

An ordered list of filter class names that will be used for filtering hosts. These filters will be applied in the order they are listed so place your most restrictive filters first to make the filtering process more efficient.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • A list of zero or more strings, where each string corresponds to the name of a filter to be used for selecting a host

Related options:

  • All of the filters in this option must be present in the ‘available_filters’ option, or a SchedulerHostFilterNotFound exception will be raised.
Deprecated Variations: [DEFAULT] scheduler_default_filters
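
A sketch trimming the filter list (every filter named in enabled_filters must also be covered by available_filters, which defaults to all in-tree filters):

[filter_scheduler]
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
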
weight_classes
Type:list
Default:nova.scheduler.weights.all_weighers

Weighers that the scheduler will use.

Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the previous weigher. Weights may become negative. An instance will be scheduled to one of the N most-weighted hosts, where N is ‘scheduler_host_subset_size’.

By default, this is set to all weighers that are included with Nova.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • A list of zero or more strings, where each string corresponds to the name of a weigher that will be used for selecting a host
Deprecated Variations: [DEFAULT] scheduler_weight_classes
ram_weight_multiplier
Type:floating point
Default:1.0

RAM weight multiplier ratio.

This option determines how hosts with more or less available RAM are weighed. A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘ram’ weigher is enabled.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Deprecated Variations: [DEFAULT] ram_weight_multiplier
cpu_weight_multiplier
Type:floating point
Default:1.0

CPU weight multiplier ratio.

Multiplier used for weighting free vCPUs. Negative numbers indicate stacking rather than spreading.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘cpu’ weigher is enabled.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.

Related options:

  • filter_scheduler.weight_classes: This weigher must be added to the list of enabled weight classes if the weight_classes setting is set to a non-default value.
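
A sketch of stacking (packing) behavior via negative multipliers, purely for illustration; positive values (the defaults) spread instances instead:

[filter_scheduler]
ram_weight_multiplier = -1.0
cpu_weight_multiplier = -1.0
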
disk_weight_multiplier
Type:floating point
Default:1.0

Disk weight multiplier ratio.

Multiplier used for weighing free disk space. Negative numbers indicate stacking rather than spreading.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘disk’ weigher is enabled.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Deprecated Variations:
  • Group: DEFAULT, Name: disk_weight_multiplier
io_ops_weight_multiplier
Type:floating point
Default:-1.0

IO operations weight multiplier ratio.

This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads whereas positive values will prefer hosts with heavier workloads. Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘io_ops’ weigher is enabled.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Deprecated Variations:
  • Group: DEFAULT, Name: io_ops_weight_multiplier
pci_weight_multiplier
Type:floating point
Default:1.0
Minimum Value:0.0

PCI device affinity weight multiplier.

The PCI device affinity weigher computes a weighting based on the number of PCI devices on the host and the number of PCI devices requested by the instance. The NUMATopologyFilter filter must be enabled for this to have any significance. For more information, refer to the NUMATopologyFilter documentation.

Possible values:

  • A positive integer or float value, where the value corresponds to the multiplier ratio for this weigher.
soft_affinity_weight_multiplier
Type:floating point
Default:1.0

Multiplier used for weighing hosts for group soft-affinity.

Possible values:

  • An integer or float value, where the value corresponds to the weight multiplier for hosts with group soft affinity. Only positive values are meaningful, as negative values would make this behave as a soft anti-affinity weigher.
Deprecated Variations:
  • Group: DEFAULT, Name: soft_affinity_weight_multiplier
soft_anti_affinity_weight_multiplier
Type:floating point
Default:1.0

Multiplier used for weighing hosts for group soft-anti-affinity.

Possible values:

  • An integer or float value, where the value corresponds to the weight multiplier for hosts with group soft anti-affinity. Only positive values are meaningful, as negative values would make this behave as a soft affinity weigher.
Deprecated Variations:
  • Group: DEFAULT, Name: soft_anti_affinity_weight_multiplier
build_failure_weight_multiplier
Type:floating point
Default:1000000.0

Multiplier used for weighing hosts that have had recent build failures.

This option determines how much weight is placed on a compute node with recent build failures. Build failures may indicate a failing, misconfigured, or otherwise ailing compute node, and avoiding it during scheduling may be beneficial. The weight is inversely proportional to the number of recent build failures the compute node has experienced. This value should be set to some high value to offset weight given by other enabled weighers due to available resources. To disable weighing compute hosts by the number of recent failures, set this to zero.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.

Related options:

  • [compute]/consecutive_build_service_disable_threshold - Must be nonzero for a compute to report data considered by this weigher.
shuffle_best_same_weighed_hosts
Type:boolean
Default:false

Enable spreading the instances between hosts with the same best weight.

Enabling it is beneficial for cases when host_subset_size is 1 (default), but there is a large number of hosts with the same maximal weight. This scenario is common in Ironic deployments, where there are typically many baremetal nodes with identical weights returned to the scheduler. In such a case, enabling this option will reduce contention and the chances of rescheduling events. At the same time, it will make the instance packing (even in the unweighed case) less dense.
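
A minimal sketch of that Ironic-style scenario (assuming the [filter_scheduler] group; host_subset_size is shown only for context):

[filter_scheduler]
host_subset_size = 1
shuffle_best_same_weighed_hosts = true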

image_properties_default_architecture
Type:string
Default:<None>
Valid Values:alpha, armv6, armv7l, armv7b, aarch64, cris, i686, ia64, lm32, m68k, microblaze, microblazeel, mips, mipsel, mips64, mips64el, openrisc, parisc, parisc64, ppc, ppcle, ppc64, ppc64le, ppcemb, s390, s390x, sh4, sh4eb, sparc, sparc64, unicore32, x86_64, xtensa, xtensaeb

The default architecture to be used when using the image properties filter.

When using the ImagePropertiesFilter, it is possible that you want to define a default architecture to make the user experience easier and avoid having something like x86_64 images landing on aarch64 compute nodes because the user did not specify the ‘hw_architecture’ property in Glance.

Possible values:

  • CPU Architectures such as x86_64, aarch64, s390x.
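
For example, to treat images that lack an architecture property as x86_64 (a sketch, assuming the [filter_scheduler] group):

[filter_scheduler]
image_properties_default_architecture = x86_64
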
isolated_images
Type:list
Default:''

List of UUIDs for images that can only be run on certain hosts.

If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled.

Possible values:

  • A list of UUID strings, where each string corresponds to the UUID of an image

Related options:

  • scheduler/isolated_hosts
  • scheduler/restrict_isolated_hosts_to_isolated_images
Deprecated Variations:
  • Group: DEFAULT, Name: isolated_images
isolated_hosts
Type:list
Default:''

List of hosts that can only run certain images.

If there is a need to restrict some images to only run on certain designated hosts, list those host names here.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled.

Possible values:

  • A list of strings, where each string corresponds to the name of a host

Related options:

  • scheduler/isolated_images
  • scheduler/restrict_isolated_hosts_to_isolated_images
Deprecated Variations:
  • Group: DEFAULT, Name: isolated_hosts
restrict_isolated_hosts_to_isolated_images
Type:boolean
Default:true

Prevent non-isolated images from being built on isolated hosts.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled. Even then, this option doesn’t affect the behavior of requests for isolated images, which will always be restricted to isolated hosts.

Related options:

  • scheduler/isolated_images
  • scheduler/isolated_hosts
Deprecated Variations:
  • Group: DEFAULT, Name: restrict_isolated_hosts_to_isolated_images
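
A combined sketch of the three isolation options above, with a hypothetical image UUID and host names:

[filter_scheduler]
isolated_images = 11111111-2222-3333-4444-555555555555
isolated_hosts = isolated-host-1,isolated-host-2
restrict_isolated_hosts_to_isolated_images = true
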
aggregate_image_properties_isolation_namespace
Type:string
Default:<None>

Image property namespace for use in the host aggregate.

Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘aggregate_image_properties_isolation’ filter is enabled.

Possible values:

  • A string, where the string corresponds to an image property namespace

Related options:

  • aggregate_image_properties_isolation_separator
Deprecated Variations:
  • Group: DEFAULT, Name: aggregate_image_properties_isolation_namespace
aggregate_image_properties_isolation_separator
Type:string
Default:.

Separator character(s) for image property namespace and name.

When using the aggregate_image_properties_isolation filter, the relevant metadata keys are prefixed with the namespace defined in the aggregate_image_properties_isolation_namespace configuration option plus a separator. This option defines the separator to be used.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘aggregate_image_properties_isolation’ filter is enabled.

Possible values:

  • A string, where the string corresponds to an image property namespace separator character

Related options:

  • aggregate_image_properties_isolation_namespace
Deprecated Variations:
  • Group: DEFAULT, Name: aggregate_image_properties_isolation_separator
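
As an illustration of how the namespace and separator combine (the values here are hypothetical): with the settings below, only aggregate metadata keys beginning with "isolate." would be considered by the filter.

[filter_scheduler]
aggregate_image_properties_isolation_namespace = isolate
aggregate_image_properties_isolation_separator = .
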
glance

Configuration options for the Image service

api_servers
Type:list
Default:<None>

List of glance api servers endpoints available to nova.

HTTPS is used for SSL-based glance API servers.

NOTE: The preferred mechanism for endpoint discovery is via keystoneauth1 loading options. Only use api_servers if you need multiple endpoints and are unable to use a load balancer for some reason.

Possible values:

  • A list of any fully qualified url of the form “scheme://hostname:port[/path]” (e.g. “http://10.0.1.0:9292” or “https://my.glance.server/image”).
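
A minimal sketch using the example endpoints above:

[glance]
api_servers = http://10.0.1.0:9292,https://my.glance.server/image
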
num_retries
Type:integer
Default:0
Minimum Value:0

Enable glance operation retries.

Specifies the number of retries when uploading / downloading an image to / from glance. 0 means no retries.

allowed_direct_url_schemes
Type:list
Default:''

List of url schemes that can be directly accessed.

This option specifies a list of url schemes that can be downloaded directly via the direct_url. The direct_url can be fetched from image metadata and used by nova to retrieve the image more efficiently; for example, nova-compute can perform a local copy when it has access to the same file system as glance.

Possible values:

  • [file], Empty list (default)

Warning

This option is deprecated for removal since 17.0.0. Its value may be silently ignored in the future.

Reason:This was originally added for the ‘nova.image.download.file’ FileTransfer extension which was removed in the 16.0.0 Pike release. The ‘nova.image.download.modules’ extension point is not maintained and there is no indication of its use in production clouds.
verify_glance_signatures
Type:boolean
Default:false

Enable image signature verification.

nova uses the image signature metadata from glance and verifies the signature of a signed image while downloading that image. If the image signature cannot be verified or if the image signature metadata is either incomplete or unavailable, then nova will not boot the image and instead will place the instance into an error state. This provides end users with stronger assurances of the integrity of the image data they are using to create servers.

Related options:

  • The options in the key_manager group, as the key_manager is used for the signature validation.
  • Both enable_certificate_validation and default_trusted_certificate_ids below depend on this option being enabled.
enable_certificate_validation
Type:boolean
Default:false

Enable certificate validation for image signature verification.

During image signature verification nova will first verify the validity of the image’s signing certificate using the set of trusted certificates associated with the instance. If certificate validation fails, signature verification will not be performed and the instance will be placed into an error state. This provides end users with stronger assurances that the image data is unmodified and trustworthy. If left disabled, image signature verification can still occur but the end user will not have any assurance that the signing certificate used to generate the image signature is still trustworthy.

Related options:

  • This option only takes effect if verify_glance_signatures is enabled.
  • The value of default_trusted_certificate_ids may be used when this option is enabled.

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:This option is intended to ease the transition for deployments leveraging image signature verification. The intended state long-term is for signature verification and certificate validation to always happen together.
default_trusted_certificate_ids
Type:list
Default:''

List of certificate IDs for certificates that should be trusted.

May be used as a default list of trusted certificate IDs for certificate validation. The value of this option will be ignored if the user provides a list of trusted certificate IDs with an instance API request. The value of this option will be persisted with the instance data if signature verification and certificate validation are enabled and if the user did not provide an alternative list. If left empty when certificate validation is enabled, the user must provide a list of trusted certificate IDs, otherwise certificate validation will fail.

Related options:

  • The value of this option may be used if both verify_glance_signatures and enable_certificate_validation are enabled.
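
A sketch enabling the full verification chain described above, with a hypothetical certificate ID:

[glance]
verify_glance_signatures = true
enable_certificate_validation = true
default_trusted_certificate_ids = 11111111-2222-3333-4444-555555555555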
debug
Type:boolean
Default:false

Enable or disable debug logging with glanceclient.

cafile
Type:string
Default:<None>

PEM encoded Certificate Authority to use when verifying HTTPS connections.

certfile
Type:string
Default:<None>

PEM encoded client certificate cert file

keyfile
Type:string
Default:<None>

PEM encoded client certificate key file

insecure
Type:boolean
Default:false

If true, skip verification of server certificates for HTTPS connections.

timeout
Type:integer
Default:<None>

Timeout value for http requests

collect_timing
Type:boolean
Default:false

Collect per-API call timing information.

split_loggers
Type:boolean
Default:false

Log requests to multiple loggers.

service_type
Type:string
Default:image

The default service_type for endpoint URL discovery.

service_name
Type:string
Default:<None>

The default service_name for endpoint URL discovery.

valid_interfaces
Type:list
Default:internal,public

List of interfaces, in order of preference, for endpoint URL.

region_name
Type:string
Default:<None>

The default region_name for endpoint URL discovery.

endpoint_override
Type:string
Default:<None>

Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.

guestfs

libguestfs is a set of tools for accessing and modifying virtual machine (VM) disk images. You can use this for viewing and editing files inside guests, scripting changes to VMs, monitoring disk used/free statistics, creating guests, P2V, V2V, performing backups, cloning VMs, building VMs, formatting disks and resizing disks.

debug
Type:boolean
Default:false

Enable/disables guestfs logging.

This configures guestfs to emit debug messages and push them to the OpenStack logging system. When set to True, it traces libguestfs API calls and enables verbose debug messages. In order to use this feature, the “libguestfs” package must be installed.

Related options:

Since libguestfs accesses and modifies VMs managed by libvirt, the options below should be set to grant access to those VMs.

  • libvirt.inject_key
  • libvirt.inject_partition
  • libvirt.inject_password
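
A minimal sketch enabling the debug logging described above:

[guestfs]
debug = true
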
hyperv
evacuate_task_state_timeout
Type:integer
Default:600

Number of seconds to wait for an instance to be evacuated during host maintenance.

cluster_event_check_interval
Type:integer
Default:2

Warning

This option is deprecated for removal since 5.0.1. Its value may be silently ignored in the future.

instance_automatic_shutdown
Type:boolean
Default:false

Automatically shut down instances when the host is shut down. By default, instances will be saved, which adds disk overhead. Changing this option will not affect existing instances.

instance_live_migration_timeout
Type:integer
Default:300
Minimum Value:0

Number of seconds to wait for an instance to be live migrated (Only applies to clustered instances for the moment).

max_failover_count
Type:integer
Default:1
Minimum Value:1

The maximum number of failovers that can occur in the failover_period timeframe per VM. Once a VM’s failover count reaches this number, the VM will simply end up in a Failed state.

failover_period
Type:integer
Default:6
Minimum Value:1

The number of hours in which the max_failover_count number of failovers can occur.

auto_failback
Type:boolean
Default:true

Allow the VM to fail back to its original host once it is available.

force_destroy_instances
Type:boolean
Default:false

If this option is enabled, instance destroy requests are executed immediately, regardless of instance pending tasks. In some situations, the destroy operation will fail (e.g. due to file locks), requiring subsequent retries.

dynamic_memory_ratio
Type:floating point
Default:1.0

Dynamic memory ratio

Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount. For example, a ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of RAM allocated at startup.

Possible values:

  • 1.0: Disables dynamic memory allocation (Default).
  • Float values greater than 1.0: Enables allocation of total implied RAM divided by this value for startup.
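
As a worked example of the ratio described above: with a ratio of 2.0, an instance with 1024MB of RAM starts with 1024 / 2.0 = 512MB allocated. A sketch:

[hyperv]
dynamic_memory_ratio = 2.0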
enable_instance_metrics_collection
Type:boolean
Default:false

Enable instance metrics collection

Enables metrics collection for an instance by using Hyper-V’s metric APIs. Collected data can be retrieved by other apps and services, e.g.: Ceilometer.

instances_path_share
Type:string
Default:''

Instances path share

The name of a Windows share mapped to the “instances_path” dir and used by the resize feature to copy files to the target host. If left blank, an administrative share (hidden network share) will be used, looking for the same “instances_path” used locally.

Possible values:

  • “”: An administrative share will be used (Default).
  • Name of a Windows share.

Related options:

  • “instances_path”: The directory which will be used if this option here is left blank.
limit_cpu_features
Type:boolean
Default:false

Limit CPU features

This flag is needed to support live migration to hosts with different CPU features. It is checked during instance creation in order to limit the CPU features used by the instance.

mounted_disk_query_retry_count
Type:integer
Default:10
Minimum Value:0

Mounted disk query retry count

The number of times to retry checking for a mounted disk. The query runs until the device can be found or the retry count is reached.

Possible values:

  • Positive integer values. Values greater than 1 are recommended (Default: 10).

Related options:

  • Time interval between disk mount retries is declared with “mounted_disk_query_retry_interval” option.
mounted_disk_query_retry_interval
Type:integer
Default:5
Minimum Value:0

Mounted disk query retry interval

Interval between checks for a mounted disk, in seconds.

Possible values:

  • Time in seconds (Default: 5).

Related options:

  • This option is meaningful when the mounted_disk_query_retry_count is greater than 1.
  • The retry loop runs with mounted_disk_query_retry_count and mounted_disk_query_retry_interval configuration options.
power_state_check_timeframe
Type:integer
Default:60
Minimum Value:0

Power state check timeframe

The timeframe to be checked for instance power state changes. This option is used to fetch the state of the instance from Hyper-V through the WMI interface, within the specified timeframe.

Possible values:

  • Timeframe in seconds (Default: 60).
power_state_event_polling_interval
Type:integer
Default:2
Minimum Value:0

Power state event polling interval

Instance power state change event polling frequency. Sets the listener interval for power state events to the given value. This option enhances the internal lifecycle notifications of instances that reboot themselves. It is unlikely that an operator has to change this value.

Possible values:

  • Time in seconds (Default: 2).
qemu_img_cmd
Type:string
Default:qemu-img.exe

qemu-img command

qemu-img is required for some of the image related operations like converting between different image types. You can get it from http://qemu.weilnetz.de/ or you can install the Cloudbase OpenStack Hyper-V Compute Driver (https://cloudbase.it/openstack-hyperv-driver/), which automatically sets the proper path for this config option. You can either give the full path of qemu-img.exe or set its path in the PATH environment variable and leave this option at the default value.

Possible values:

  • Name of the qemu-img executable, in case it is in the same directory as the nova-compute service or its path is in the PATH environment variable (Default).
  • Path of qemu-img command (DRIVELETTER:PATHTOQEMU-IMGCOMMAND).

Related options:

  • If the config_drive_cdrom option is False, qemu-img will be used to convert the ISO to a VHD, otherwise the configuration drive will remain an ISO. To use configuration drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation.
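
A sketch with a hypothetical installation path (by default, qemu-img.exe is resolved via the PATH environment variable):

[hyperv]
qemu_img_cmd = C:\qemu\qemu-img.exe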
vswitch_name
Type:string
Default:<None>

External virtual switch name

The Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is available with the installation of the Hyper-V server role. The switch includes programmatically managed and extensible capabilities to connect virtual machines to both virtual networks and the physical network. In addition, Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels. The vSwitch represented by this config option must be an external one (not internal or private).

Possible values:

  • If not provided, the first of a list of available vswitches is used. This list is queried using WQL.
  • Virtual switch name.
wait_soft_reboot_seconds
Type:integer
Default:60
Minimum Value:0

Wait soft reboot seconds

Number of seconds to wait for an instance to shut down after a soft reboot request is made. We fall back to hard reboot if the instance does not shut down within this window.

Possible values:

  • Time in seconds (Default: 60).
config_drive_cdrom
Type:boolean
Default:false

Configuration drive cdrom

OpenStack can be configured to write instance metadata to a configuration drive, which is then attached to the instance before it boots. The configuration drive can be attached as a disk drive (default) or as a CD drive.

Possible values:

  • True: Attach the configuration drive image as a CD drive.
  • False: Attach the configuration drive image as a disk drive (Default).

Related options:

  • This option is meaningful when the force_config_drive option is set to ‘True’ or when the REST API call to create an instance includes the ‘--config-drive=True’ flag.
  • config_drive_format option must be set to ‘iso9660’ in order to use CD drive as the configuration drive image.
  • To use configuration drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value to the full path to a qemu-img installation.
  • You can configure the Compute service to always create a configuration drive by setting the force_config_drive option to ‘True’.
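
A combined sketch of the related options above, attaching the configuration drive as a CD drive (option placement is per the descriptions above; treat the exact groups as assumptions):

[DEFAULT]
force_config_drive = True
config_drive_format = iso9660

[hyperv]
config_drive_cdrom = true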
config_drive_inject_password
Type:boolean
Default:false

Configuration drive inject password

Enables setting the admin password in the configuration drive image.

Related options:

  • This option is meaningful when used with other options that enable configuration drive usage with Hyper-V, such as force_config_drive.
  • Currently, the only accepted config_drive_format is ‘iso9660’.
volume_attach_retry_count
Type:integer
Default:10
Minimum Value:0

Volume attach retry count

The number of times to retry attaching a volume. Volume attachment is retried until success or the given retry count is reached.

Possible values:

  • Positive integer values (Default: 10).

Related options:

  • Time interval between attachment attempts is declared with volume_attach_retry_interval option.
volume_attach_retry_interval
Type:integer
Default:5
Minimum Value:0

Volume attach retry interval

Interval between volume attachment attempts, in seconds.

Possible values:

  • Time in seconds (Default: 5).

Related options:

  • This option is meaningful when volume_attach_retry_count is greater than 1.
  • The retry loop runs with volume_attach_retry_count and volume_attach_retry_interval configuration options.
enable_remotefx
Type:boolean
Default:false

Enable RemoteFX feature

This requires at least one DirectX 11 capable graphics adapter for Windows / Hyper-V Server 2012 R2 or newer, and the RDS-Virtualization feature has to be enabled.

Instances with RemoteFX can be requested with the following flavor extra specs:

  • os:resolution. Guest VM screen resolution size. Acceptable values: 1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160. The 3840x2160 resolution is only available on Windows / Hyper-V Server 2016.
  • os:monitors. Guest VM number of monitors. Acceptable values: [1, 4] for Windows / Hyper-V Server 2012 R2, [1, 8] for Windows / Hyper-V Server 2016.
  • os:vram. Guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016. Acceptable values: 64, 128, 256, 512, 1024.
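
A sketch enabling the feature on the compute node; the os:resolution, os:monitors and os:vram extra specs listed above are then set per flavor (for example via openstack flavor set --property):

[hyperv]
enable_remotefx = true
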
use_multipath_io
Type:boolean
Default:false

Use multipath connections when attaching iSCSI or FC disks.

This requires the Multipath IO Windows feature to be enabled. MPIO must be configured to claim such devices.

iscsi_initiator_list
Type:list
Default:''

List of iSCSI initiators that will be used for establishing iSCSI sessions.

If none are specified, the Microsoft iSCSI initiator service will choose the initiator.

ironic

Configuration options for the Ironic driver (Bare Metal). If using the Ironic driver, the following options must be set:

  • auth_type
  • auth_url
  • project_name
  • username
  • password
  • project_domain_id or project_domain_name
  • user_domain_id or user_domain_name

api_endpoint
Type:URI
Default:http://ironic.example.org:6385/

This option has a sample default set, which means that its actual default value may vary from the one documented above.

URL override for the Ironic API endpoint.

Warning

This option is deprecated for removal. Its value may be silently ignored in the future.

Reason:Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, api_endpoint will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the endpoint_override option instead.
api_max_retries
Type:integer
Default:60
Minimum Value:0

The number of times to retry when a request conflicts. If set to 0, only try once, no retries.

Related options:

  • api_retry_interval
api_retry_interval
Type:integer
Default:2
Minimum Value:0

The number of seconds to wait before retrying the request.

Related options:

  • api_max_retries
serial_console_state_timeout
Type:integer
Default:10
Minimum Value:0

Timeout (seconds) to wait for the node serial console state to change. Set to 0 to disable the timeout.

cafile
Type:string
Default:<None>

PEM encoded Certificate Authority to use when verifying HTTPS connections.

certfile
Type:string
Default:<None>

PEM encoded client certificate cert file

keyfile
Type:string
Default:<None>

PEM encoded client certificate key file

insecure
Type:boolean
Default:false

If true, skip verification of server certificates for HTTPS connections.

timeout
Type:integer
Default:<None>

Timeout value for http requests

collect_timing
Type:boolean
Default:false

Collect per-API call timing information.

split_loggers
Type:boolean
Default:false

Log requests to multiple loggers.

auth_type
Type:unknown type
Default:<None>

Authentication type to load

Deprecated Variations:
  • Group: ironic, Name: auth_plugin
auth_section
Type:unknown type
Default:<None>

Config Section from which to load plugin specific options

auth_url
Type:unknown type
Default:<None>

Authentication URL

system_scope
Type:unknown type
Default:<None>

Scope for system operations

domain_id
Type:unknown type
Default:<None>

Domain ID to scope to

domain_name
Type:unknown type
Default:<None>

Domain name to scope to

project_id
Type:unknown type
Default:<None>

Project ID to scope to

project_name
Type:unknown type
Default:<None>

Project name to scope to

project_domain_id
Type:unknown type
Default:<None>

Domain ID containing project

project_domain_name
Type:unknown type
Default:<None>

Domain name containing project

trust_id
Type:unknown type
Default:<None>

Trust ID

user_id
Type:unknown type
Default:<None>

User ID

username
Type:unknown type
Default:<None>

Username

Deprecated Variations:
  • Group: ironic, Name: user-name
  • Group: ironic, Name: user_name
user_domain_id
Type:unknown type
Default:<None>

User’s domain id

user_domain_name
Type:unknown type
Default:<None>

User’s domain name

password
Type:unknown type
Default:<None>

User’s password

service_type
Type:string
Default:baremetal

The default service_type for endpoint URL discovery.

service_name
Type:string
Default:<None>

The default service_name for endpoint URL discovery.

valid_interfaces
Type:list
Default:internal,public

List of interfaces, in order of preference, for endpoint URL.

region_name
Type:string
Default:<None>

The default region_name for endpoint URL discovery.

endpoint_override
Type:string
Default:<None>

Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.

Deprecated Variations:
  • Group: ironic, Name: api_endpoint
key_manager
fixed_key
Type:string
Default:<None>

Fixed key returned by key manager, specified in hex.

Possible values:

  • Empty string or a key in hex value
Deprecated Variations:
  • Group: keymgr, Name: fixed_key
backend
Type:string
Default:barbican

Specify the key manager implementation. Options are “barbican” and “vault”. Default is “barbican”. Values previously set using [key_manager]/api_class will be supported for some time.

Deprecated Variations:
  • Group: key_manager, Name: api_class
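
A minimal sketch making the default backend explicit:

[key_manager]
backend = barbican
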
auth_type
Type:string
Default:<None>

The type of authentication credential to create. Possible values are ‘token’, ‘password’, ‘keystone_token’, and ‘keystone_password’. Required if no context is passed to the credential factory.

token
Type:string
Default:<None>

Token for authentication. Required for ‘token’ and ‘keystone_token’ auth_type if no context is passed to the credential factory.

username
Type:string
Default:<None>

Username for authentication. Required for ‘password’ auth_type. Optional for the ‘keystone_password’ auth_type.

password
Type:string
Default:<None>

Password for authentication. Required for ‘password’ and ‘keystone_password’ auth_type.

auth_url
Type:string
Default:<None>

Use this endpoint to connect to Keystone.

user_id
Type:string
Default:<None>

User ID for authentication. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.

user_domain_id
Type:string
Default:<None>

User’s domain ID for authentication. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.

user_domain_name
Type:string
Default:<None>

User’s domain name for authentication. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.

trust_id
Type:string
Default:<None>

Trust ID for trust scoping. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.

domain_id
Type:string
Default:<None>

Domain ID for domain scoping. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.

domain_name
Type:string
Default:<None>

Domain name for domain scoping. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.

project_id
Type:string
Default:<None>

Project ID for project scoping. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.

project_name
Type:string
Default:<None>

Project name for project scoping. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.

project_domain_id
Type:string
Default:<None>

Project’s domain ID for project. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.

project_domain_name
Type:string
Default:<None>

Project’s domain name for project. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.

reauthenticate
Type:boolean
Default:true

Allow fetching a new token if the current one is going to expire. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.

keystone

Configuration options for the identity service

cafile
Type:string
Default:<None>

PEM encoded Certificate Authority to use when verifying HTTPS connections.

certfile
Type:string
Default:<None>

PEM encoded client certificate cert file

keyfile
Type:string
Default:<None>

PEM encoded client certificate key file

insecure
Type:boolean
Default:false

If true, skip verification of server certificates for HTTPS connections.

timeout
Type:integer
Default:<None>

Timeout value for http requests

collect_timing
Type:boolean
Default:false

Collect per-API call timing information.

split_loggers
Type:boolean
Default:false

Log requests to multiple loggers.

service_type
Type:string
Default:identity

The default service_type for endpoint URL discovery.

service_name
Type:string
Default:<None>

The default service_name for endpoint URL discovery.

valid_interfaces
Type:list
Default:internal,public

List of interfaces, in order of preference, for endpoint URL.

region_name
Type:string
Default:<None>

The default region_name for endpoint URL discovery.

endpoint_override
Type:string
Default:<None>

Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.

libvirt

Libvirt options allow the cloud administrator to configure the libvirt hypervisor driver to be used within an OpenStack deployment.

Almost all of the libvirt config options are influenced by the virt_type option, which describes the virtualization type (or so-called domain type) libvirt should use, for specific features such as live migration and snapshots.

rescue_image_id
Type:string
Default:<None>

The ID of the image to boot from to rescue data from a corrupted instance.

If the rescue REST API operation doesn’t provide an ID of an image to use, the image which is referenced by this ID is used. If this option is not set, the image from the instance is used.

Possible values:

  • An ID of an image or nothing. If it points to an Amazon Machine Image (AMI), consider setting the config options rescue_kernel_id and rescue_ramdisk_id too. If nothing is set, the image of the instance is used.

Related options:

  • rescue_kernel_id: If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon’s AMI/AKI/ARI image format is used for the rescue image.
  • rescue_ramdisk_id: If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon’s AMI/AKI/ARI image format is used for the rescue image.
rescue_kernel_id
Type:string
Default:<None>

The ID of the kernel (AKI) image to use with the rescue image.

If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon’s AMI/AKI/ARI image format is used for the rescue image.

Possible values:

  • An ID of a kernel image or nothing. If nothing is specified, the kernel disk from the instance is used if it was launched with one.

Related options:

  • rescue_image_id: If that option points to an image in Amazon’s AMI/AKI/ARI image format, it’s useful to use rescue_kernel_id too.
rescue_ramdisk_id
Type:string
Default:<None>

The ID of the RAM disk (ARI) image to use with the rescue image.

If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon’s AMI/AKI/ARI image format is used for the rescue image.

Possible values:

  • An ID of a RAM disk image or nothing. If nothing is specified, the RAM disk from the instance is used if it was launched with one.

Related options:

  • rescue_image_id: If that option points to an image in Amazon’s AMI/AKI/ARI image format, it’s useful to use rescue_ramdisk_id too.
virt_type
Type:string
Default:kvm
Valid Values:kvm, lxc, qemu, uml, xen, parallels

Describes the virtualization type (or so called domain type) libvirt should use.

The choice of this type must match the underlying virtualization strategy you have chosen for this host.

Related options:

  • connection_uri: depends on this
  • disk_prefix: depends on this
  • cpu_mode: depends on this
  • cpu_model: depends on this
connection_uri
Type:string
Default:''

Overrides the default libvirt URI of the chosen virtualization type.

If set, Nova will use this URI to connect to libvirt.

Possible values:

  • A URI like qemu:///system or xen+ssh://oirase/ for example. This is only necessary if the URI differs from the commonly known URIs for the chosen virtualization type.

Related options:

  • virt_type: Influences what is used as default value here.
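
A sketch for a QEMU host, using the example URI above:

[libvirt]
virt_type = qemu
connection_uri = qemu:///system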
inject_password
Type:boolean
Default:false

Allow the injection of an admin password for the instance, only during the create and rebuild process.

There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the admin password, which is provided in the REST API call, will be injected as the password for the root user. If no root user is available, the instance won’t be launched and an error is thrown. Be aware that the injection is not possible when the instance gets launched from a volume.

Linux distribution guest only.

Possible values:

  • True: Allows the injection.
  • False: Disallows the injection. Any admin password provided via the REST API will be silently ignored.

Related options:

  • inject_partition: That option decides how the file system is discovered and used for injection. It can also disable injection entirely.
inject_key
Type:boolean
Default:false

Allow the injection of an SSH key at boot time.

There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the SSH key, which is provided in the REST API call, will be injected as the SSH key for the root user and appended to the authorized_keys of that user. The SELinux context will be set if necessary. Be aware that the injection is not possible when the instance gets launched from a volume.

This config option will enable directly modifying the instance disk and does not affect what cloud-init may do using data from config_drive option or the metadata service.

Linux distribution guest only.

Related options:

  • inject_partition: That option decides how the file system is discovered and used for injection. It can also disable injection entirely.
inject_partition
Type:integer
Default:-2
Minimum Value:-2

Determines how the file system is chosen for injecting data into it.

libguestfs is used as the first solution to inject data. If that is not available on the host, the image will be locally mounted on the host as a fallback solution. If libguestfs is not able to determine the root partition (because there are more or less than one root partition) or cannot mount the file system, it will result in an error and the instance won’t boot.

Possible values:

  • -2 => disable the injection of data.
  • -1 => find the root partition with the file system to mount with libguestfs
  • 0 => The image is not partitioned
  • >0 => The number of the partition to use for the injection

Linux distribution guest only.

Related options:

  • inject_key: SSH key injection only works if inject_partition is set to a value greater than or equal to -1.
  • inject_password: Admin password injection only works if inject_partition is set to a value greater than or equal to -1.
  • guestfs: You can enable the debug log level of libguestfs with this config option. A more verbose output will help in debugging issues.
  • virt_type: If you use lxc as virt_type, the image will be treated as a single-partition image.
use_usb_tablet
Type:boolean
Default:true

Enable a mouse cursor within graphical VNC or SPICE sessions.

This will only be taken into account if the VM is fully virtualized and VNC and/or SPICE is enabled. If the node doesn’t support a graphical framebuffer, then it is valid to set this to False.

Related options:

  • [vnc]enabled: If VNC is enabled, use_usb_tablet will have an effect.
  • [spice]enabled + [spice].agent_enabled: If SPICE is enabled and the spice agent is disabled, the config value of use_usb_tablet will have an effect.

Warning

This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.

Reason:This option is being replaced by the ‘pointer_model’ option.
live_migration_scheme
Type:string
Default:<None>

URI scheme used for live migration.

Override the default libvirt live migration scheme (which is dependent on virt_type). If this option is set to None, nova will automatically choose a sensible default based on the hypervisor. It is not recommended that you change this unless you are very sure that the hypervisor supports a particular scheme.

Related options:

  • virt_type: This option is meaningful only when virt_type is set to kvm or qemu.
  • live_migration_uri: If live_migration_uri value is not None, the scheme used for live migration is taken from live_migration_uri instead.
live_migration_inbound_addr
Type:string
Default:<None>

The IP address or hostname to be used as the target for live migration traffic.

If this option is set to None, the hostname of the migration target compute node will be used.

This option is useful in environments where the live-migration traffic can impact the network plane significantly. A separate network for live-migration traffic can then be configured via this option, avoiding the impact on the management network.

Possible values:

  • A valid IP address or hostname, else None.

Related options:

  • live_migration_tunnelled: The live_migration_inbound_addr value is ignored if tunneling is enabled.
live_migration_uri
Type:string
Default:<None>

Live migration target URI to use.

Override the default libvirt live migration target URI (which is dependent on virt_type). Any included “%s” is replaced with the migration target hostname.

If this option is set to None (which is the default), Nova will automatically generate the live_migration_uri value based on the four supported virt_type values in the following list:

  • ‘kvm’: ‘qemu+tcp://%s/system’
  • ‘qemu’: ‘qemu+tcp://%s/system’
  • ‘xen’: ‘xenmigr://%s/system’
  • ‘parallels’: ‘parallels+tcp://%s/system’

Related options:

  • live_migration_inbound_addr: If the live_migration_inbound_addr value is not None and live_migration_tunnelled is False, the IP address/hostname of the target compute node is used instead of live_migration_uri as the URI for live migration.
  • live_migration_scheme: If live_migration_uri is not set, the scheme used for live migration is taken from live_migration_scheme instead.

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:live_migration_uri is deprecated for removal in favor of two other options that allow to change live migration scheme and target URI: live_migration_scheme and live_migration_inbound_addr respectively.
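
A sketch of the recommended replacement pair, using a hypothetical address on a dedicated migration network:

[libvirt]
live_migration_scheme = tcp
live_migration_inbound_addr = 192.0.2.10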
live_migration_tunnelled
Type:boolean
Default:false

Enable tunnelled migration.

This option enables the tunnelled migration feature, where migration data is transported over the libvirtd connection. If enabled, we use the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure the network to allow direct hypervisor to hypervisor communication. If False, use the native transport. If not set, Nova will choose a sensible default based on, for example, the availability of native encryption support in the hypervisor. Enabling this option will significantly impact performance.

Note that this option is NOT compatible with use of block migration.

Related options:

  • live_migration_inbound_addr: The live_migration_inbound_addr value is ignored if tunneling is enabled.
live_migration_bandwidth
Type:integer
Default:0

Maximum bandwidth(in MiB/s) to be used during migration.

If set to 0, the hypervisor will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. Please refer to the libvirt documentation for further details.

live_migration_downtime
Type:integer
Default:500
Minimum Value:100

Maximum permitted downtime, in milliseconds, for live migration switchover.

Will be rounded up to a minimum of 100ms. You can increase this value if you want to allow live-migrations to complete faster, or avoid live-migration timeout errors by allowing the guest to be paused for longer during the live-migration switch over.

Related options:

  • live_migration_completion_timeout
live_migration_downtime_steps
Type:integer
Default:10
Minimum Value:3

Number of incremental steps to reach max downtime value.

Will be rounded up to a minimum of 3 steps.

live_migration_downtime_delay
Type:integer
Default:75
Minimum Value:3

Time to wait, in seconds, between each step increase of the migration downtime.

Minimum delay is 3 seconds. The value is per GiB of guest RAM + disk to be transferred, with a lower bound of 2 GiB per device.

live_migration_completion_timeout
Type:integer
Default:800
Mutable:This option can be changed without restarting.

Time to wait, in seconds, for migration to successfully complete transferring data before aborting the operation.

The value is per GiB of guest RAM + disk to be transferred, with a lower bound of 2 GiB. It should usually be larger than downtime delay * downtime steps. Set to 0 to disable timeouts.

Related options:

  • live_migration_downtime
  • live_migration_downtime_steps
  • live_migration_downtime_delay
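
As a worked example of the per-GiB scaling: for a guest with 6 GiB of RAM and 8 GiB of disk to transfer (14 GiB in total), the default of 800 yields an effective timeout of 14 * 800 = 11200 seconds.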
live_migration_progress_timeout
Type:integer
Default:0
Mutable:This option can be changed without restarting.

Time to wait, in seconds, for migration to make forward progress in transferring data before aborting the operation.

Set to 0 to disable timeouts.

This is deprecated, and now disabled by default because we have found serious bugs in this feature that caused false live-migration timeout failures. This feature will be removed or replaced in a future release.

Warning

This option is deprecated for removal. Its value may be silently ignored in the future.

Reason:Serious bugs found in this feature, see https://bugs.launchpad.net/nova/+bug/1644248 for details.
live_migration_permit_post_copy
Type:boolean
Default:false

This option allows nova to switch an on-going live migration to post-copy mode, i.e., switch the active VM to the one on the destination node before the migration is complete, therefore ensuring an upper bound on the memory that needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0.

When permitted, post-copy mode will be automatically activated if a live-migration memory copy iteration does not improve by at least 10% over the last iteration.

The live-migration force complete API also uses post-copy when permitted. If post-copy mode is not available, force complete falls back to pausing the VM to ensure the live-migration operation will complete.

When using post-copy mode, if the source and destination hosts lose network connectivity, the VM being live-migrated will need to be rebooted. For more details, please see the Administration guide.

Related options:

  • live_migration_permit_auto_converge
live_migration_permit_auto_converge
Type:boolean
Default:false

This option allows nova to start live migration with auto converge on.

Auto converge throttles down CPU if the progress of an ongoing live migration is slow. Auto converge will only be used if this flag is set to True and post-copy is not permitted, or post-copy is unavailable due to the version of libvirt and QEMU in use.

Related options:

  • live_migration_permit_post_copy
snapshot_image_format
Type:string
Default:<None>
Valid Values:raw, qcow2, vmdk, vdi

Determine the snapshot image format when sending to the image service.

If set, this decides what format is used when sending the snapshot to the image service. If not set, defaults to same type as source image.

Possible values

raw
RAW disk format
qcow2
KVM default disk format
vmdk
VMWare default disk format
vdi
VirtualBox default disk format
disk_prefix
Type:string
Default:<None>

Override the default disk prefix for the devices attached to an instance.

If set, this is used to identify a free disk device name for a bus.

Possible values:

  • Any prefix which will result in a valid disk device name like ‘sda’ or ‘hda’ for example. This is only necessary if the device names differ from the commonly known device name prefixes for a virtualization type such as: sd, xvd, uvd, vd.

Related options:

  • virt_type: Influences which device type is used, which determines the default disk prefix.
wait_soft_reboot_seconds
Type:integer
Default:120

Number of seconds to wait for an instance to shut down after a soft reboot request is made. We fall back to hard reboot if the instance does not shut down within this window.

cpu_mode
Type:string
Default:<None>
Valid Values:host-model, host-passthrough, custom, none

Is used to set the CPU mode an instance should have.

If virt_type="kvm|qemu", it will default to host-model, otherwise it will default to none.

Related options:

  • cpu_model: This should be set ONLY when cpu_mode is set to custom. Otherwise, it would result in an error and the instance launch will fail.

Possible values

host-model
Clone the host CPU feature flags
host-passthrough
Use the host CPU model exactly
custom
Use the CPU model in [libvirt]cpu_model
none
Don’t set a specific CPU model. For instances with [libvirt] virt_type as KVM/QEMU, the default CPU model from QEMU will be used, which provides a basic set of CPU features that are compatible with most hosts.
cpu_model
Type:string
Default:<None>

Set the name of the libvirt CPU model the instance should use.

Possible values:

  • The named CPU models listed in /usr/share/libvirt/cpu_map.xml

Related options:

  • cpu_mode: This should be set to custom ONLY when you want to configure (via cpu_model) a specific named CPU model. Otherwise, it would result in an error and the instance launch will fail.
  • virt_type: Only the virtualization types kvm and qemu use this.
cpu_model_extra_flags
Type:list
Default:''

This allows specifying granular CPU feature flags when configuring CPU models. For example, to explicitly specify the pcid flag (Process-Context ID, an Intel processor feature which is now required to address the guest performance degradation caused by applying the “Meltdown” CVE fixes to certain Intel CPU models) for the “IvyBridge” virtual CPU model:

[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_model_extra_flags = pcid

To specify multiple CPU flags (e.g. the Intel VMX to expose the virtualization extensions to the guest, or pdpe1gb to configure 1GB huge pages for CPU models that do not provide it):

[libvirt]
cpu_mode = custom
cpu_model = Haswell-noTSX-IBRS
cpu_model_extra_flags = PCID, VMX, pdpe1gb

As can be seen above, the cpu_model_extra_flags config attribute is case insensitive. Specifying extra flags is valid in combination with all three possible values for cpu_mode: custom (which also requires an explicit cpu_model to be specified), host-model, or host-passthrough. Allowing extra CPU flags is useful even in host-passthrough mode, because QEMU may disable certain CPU features. An example is Intel’s “invtsc” (Invariable Time Stamp Counter) CPU flag: if you need to expose that flag to a Nova instance, you need to explicitly ask for it.

The possible values for cpu_model_extra_flags depend on the CPU model in use. Refer to /usr/share/libvirt/cpu_map.xml for the possible CPU feature flags for a given CPU model.

Note that when using this config attribute to set the ‘PCID’ CPU flag with the custom CPU mode, not all virtual (i.e. libvirt / QEMU) CPU models need it:

  • The only virtual CPU models that include the ‘PCID’ capability are Intel “Haswell”, “Broadwell”, and “Skylake” variants.
  • The libvirt / QEMU CPU models “Nehalem”, “Westmere”, “SandyBridge”, and “IvyBridge” will _not_ expose the ‘PCID’ capability by default, even if the host CPUs by the same name include it. I.e. ‘PCID’ needs to be explicitly specified when using the said virtual CPU models.

The libvirt driver’s default CPU mode, host-model, will do the right thing with respect to handling the ‘PCID’ CPU flag for the guest, assuming you are running updated processor microcode, host and guest kernel, libvirt, and QEMU. The other mode, host-passthrough, checks if ‘PCID’ is available in the hardware, and if so directly passes it through to the Nova guests. Thus, in the context of ‘PCID’, with either of these CPU modes (host-model or host-passthrough) there is no need to use cpu_model_extra_flags.

Related options:

  • cpu_mode
  • cpu_model
snapshots_directory
Type:string
Default:$instances_path/snapshots

Location where libvirt driver will store snapshots before uploading them to image service

xen_hvmloader_path
Type:string
Default:/usr/lib/xen/boot/hvmloader

Location where the Xen hvmloader is kept

disk_cachemodes
Type:list
Default:''

Specific cache modes to use for different disk types.

For example: file=directsync,block=none,network=writeback

For local or direct-attached storage, it is recommended that you use writethrough (default) mode, as it ensures data integrity and has acceptable I/O performance for applications running in the guest, especially for read operations. However, caching mode none is recommended for remote NFS storage, because direct I/O operations (O_DIRECT) perform better than synchronous I/O operations (with O_SYNC). Caching mode none effectively turns all guest I/O operations into direct I/O operations on the host, which is the NFS client in this environment.

Possible cache modes:

  • default: Same as writethrough.
  • none: With caching mode set to none, the host page cache is disabled, but the disk write cache is enabled for the guest. In this mode, the write performance in the guest is optimal because write operations bypass the host page cache and go directly to the disk write cache. If the disk write cache is battery-backed, or if the applications or storage stack in the guest transfer data properly (either through fsync operations or file system barriers), then data integrity can be ensured. However, because the host page cache is disabled, the read performance in the guest would not be as good as in the modes where the host page cache is enabled, such as writethrough mode. Shareable disk devices, like for a multi-attachable block storage volume, will have their cache mode set to ‘none’ regardless of configuration.
  • writethrough: writethrough mode is the default caching mode. With caching set to writethrough mode, the host page cache is enabled, but the disk write cache is disabled for the guest. Consequently, this caching mode ensures data integrity even if the applications and storage stack in the guest do not transfer data to permanent storage properly (either through fsync operations or file system barriers). Because the host page cache is enabled in this mode, the read performance for applications running in the guest is generally better. However, the write performance might be reduced because the disk write cache is disabled.
  • writeback: With caching set to writeback mode, both the host page cache and the disk write cache are enabled for the guest. Because of this, the I/O performance for applications running in the guest is good, but the data is not protected in a power failure. As a result, this caching mode is recommended only for temporary data where potential data loss is not a concern. NOTE: Certain backend disk mechanisms may provide safe writeback cache semantics. Specifically those that bypass the host page cache, such as QEMU’s integrated RBD driver. Ceph documentation recommends setting this to writeback for maximum performance while maintaining data safety.
  • directsync: Like “writethrough”, but it bypasses the host page cache.
  • unsafe: Caching mode of unsafe ignores cache transfer operations completely. As its name implies, this caching mode should be used only for temporary data where data loss is not a concern. This mode can be useful for speeding up guest installations, but you should switch to another caching mode in production environments.
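
For example, following the guidance above – caching mode none for remote NFS-backed instance files and writeback for backends such as QEMU’s integrated RBD driver – a deployment might set (illustrative values; choose modes per your backends):

[libvirt]
disk_cachemodes = file=none,network=writeback
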
rng_dev_path
Type:string
Default:/dev/urandom

The path to an RNG (Random Number Generator) device that will be used as the source of entropy on the host. Since libvirt 1.3.4, any path (that returns random numbers when read) is accepted. The recommended source of entropy is /dev/urandom – it is non-blocking, therefore relatively fast; and avoids the limitations of /dev/random, which is a legacy interface. For more details (and comparison between different RNG sources), refer to the “Usage” section in the Linux kernel API documentation for [u]random: http://man7.org/linux/man-pages/man4/urandom.4.html and http://man7.org/linux/man-pages/man7/random.7.html.

hw_machine_type
Type:list
Default:<None>

For qemu or KVM guests, set this option to specify a default machine type per host architecture. You can find a list of supported machine types in your environment by checking the output of the “virsh capabilities” command. The format of the value for this config option is host-arch=machine-type. For example: x86_64=machinetype1,armv7l=machinetype2
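
As a minimal sketch (the machine type names here are illustrative and must match what “virsh capabilities” reports on your hosts):

[libvirt]
hw_machine_type = x86_64=q35,armv7l=virt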

sysinfo_serial
Type:string
Default:auto
Valid Values:none, os, hardware, auto

The data source used to populate the host “serial” UUID exposed to the guest in the virtual BIOS.

Possible values

none
A serial number entry is not added to the guest domain xml.
os
A UUID serial number is generated from the host /etc/machine-id file.
hardware
A UUID for the host hardware as reported by libvirt. This is typically from the host SMBIOS data, unless it has been overridden in libvirtd.conf.
auto
Uses the “os” source if possible, else “hardware”.
mem_stats_period_seconds
Type:integer
Default:10

Number of seconds defining the memory usage statistics refresh period. A zero or negative value disables memory usage statistics.

uid_maps
Type:list
Default:''

List of uid targets and ranges. Syntax is guest-uid:host-uid:count. Maximum of 5 allowed.

gid_maps
Type:list
Default:''

List of gid targets and ranges. Syntax is guest-gid:host-gid:count. Maximum of 5 allowed.
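
A minimal sketch, assuming you want to map guest root (id 0) and the following 999 ids onto an unprivileged host range starting at 10000 (the host range is an illustrative assumption):

[libvirt]
uid_maps = 0:10000:1000
gid_maps = 0:10000:1000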

realtime_scheduler_priority
Type:integer
Default:1

On a realtime host, guest vCPUs will run at this scheduling priority. The valid priority range depends on the host kernel (usually 1-99).

enabled_perf_events
Type:list
Default:''

This will allow you to specify a list of events to monitor the low-level performance of guests, and collect related statistics via the libvirt driver, which in turn uses the Linux kernel’s perf infrastructure. With this config attribute set, Nova will generate libvirt guest XML to monitor the specified events. For more information, refer to the “Performance monitoring events” section here: https://libvirt.org/formatdomain.html#elementsPerf. And here: https://libvirt.org/html/libvirt-libvirt-domain.html – look for VIR_PERF_PARAM_*

For example, to monitor the count of CPU cycles (total/elapsed) and the count of cache misses, enable them as follows:

[libvirt]
enabled_perf_events = cpu_clock, cache_misses

Possible values: A string list. The list of supported events can be found here: https://libvirt.org/formatdomain.html#elementsPerf.

Note that support for Intel CMT events (cmt, mbmt, mbml) is deprecated, and will be removed in the “Stein” release. That’s because the upstream Linux kernel (from 4.14 onwards) has removed support for Intel CMT, because it is broken by design.

num_pcie_ports
Type:integer
Default:0
Minimum Value:0
Maximum Value:28

The number of PCIe ports an instance will get.

Libvirt allows setting a custom number of PCIe ports (pcie-root-port controllers) that a target instance will get. Some will be used by default; the rest will be available for hotplug use.

By default there are just 1-2 free ports, which limits hotplug.

More info: https://github.com/qemu/qemu/blob/master/docs/pcie.txt

Due to QEMU limitations for aarch64/virt maximum value is set to ‘28’.

The default value ‘0’ leaves the calculation of the number of ports to libvirt.
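
For example, to reserve extra ports for hotplug (an illustrative value within the 0-28 range):

[libvirt]
num_pcie_ports = 16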

file_backed_memory
Type:integer
Default:0
Minimum Value:0

Available capacity in MiB for file-backed memory.

Set to 0 to disable file-backed memory.

When enabled, instances will create memory files in the directory specified in /etc/libvirt/qemu.conf’s memory_backing_dir option. The default location is /var/lib/libvirt/qemu/ram.

When enabled, the value defined for this option is reported as the node memory capacity. Compute node system memory will be used as a cache for file-backed memory, via the kernel’s pagecache mechanism.

Note

This feature is not compatible with hugepages.

Note

This feature is not compatible with memory overcommit.

Related options:

  • virt_type must be set to kvm or qemu.
  • ram_allocation_ratio must be set to 1.0.
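
A minimal sketch, assuming the host should advertise 64 GiB of file-backed memory (the size is illustrative; note that memory_backing_dir is set in /etc/libvirt/qemu.conf, not in nova.conf):

[DEFAULT]
ram_allocation_ratio = 1.0

[libvirt]
virt_type = kvm
file_backed_memory = 65536
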
images_type
Type:string
Default:default
Valid Values:raw, flat, qcow2, lvm, rbd, ploop, default

VM Images format.

If default is specified, then use_cow_images flag is used instead of this one.

Related options:

  • virt.use_cow_images
  • images_volume_group
images_volume_group
Type:string
Default:<None>

LVM Volume Group that is used for VM images, when you specify images_type=lvm

Related options:

  • images_type
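
For example, to back instance disks with LVM (the volume group name is an assumption for illustration):

[libvirt]
images_type = lvm
images_volume_group = nova-volumes
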
sparse_logical_volumes
Type:boolean
Default:false

Create sparse logical volumes (with virtualsize) if this flag is set to True.

Warning

This option is deprecated for removal since 18.0.0. Its value may be silently ignored in the future.

Reason:Sparse logical volumes are a feature that is not tested and hence not supported. LVM logical volumes are preallocated by default. If you want thin provisioning, use Cinder thin-provisioned volumes.
images_rbd_pool
Type:string
Default:rbd

The RADOS pool in which rbd volumes are stored

images_rbd_ceph_conf
Type:string
Default:''

Path to the ceph configuration file to use

hw_disk_discard
Type:string
Default:<None>
Valid Values:ignore, unmap

Discard option for nova managed disks.

Requires:

  • Libvirt >= 1.0.6
  • Qemu >= 1.5 (raw format)
  • Qemu >= 1.6 (qcow2 format)
image_info_filename_pattern
Type:string
Default:$instances_path/$image_cache_subdirectory_name/%(image)s.info

Allows image information files to be stored in non-standard locations

Warning

This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.

Reason:Image info files are no longer used by the image cache
remove_unused_resized_minimum_age_seconds
Type:integer
Default:3600

Unused resized base images younger than this will not be removed

checksum_base_images
Type:boolean
Default:false

Write a checksum for files in _base to disk

Warning

This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.

Reason:The image cache no longer periodically calculates checksums of stored images. Data integrity can be checked at the block or filesystem level.
checksum_interval_seconds
Type:integer
Default:3600

How frequently to checksum base images

Warning

This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.

Reason:The image cache no longer periodically calculates checksums of stored images. Data integrity can be checked at the block or filesystem level.
volume_clear
Type:string
Default:zero
Valid Values:zero, shred, none

Method used to wipe ephemeral disks when they are deleted. Only takes effect if LVM is set as backing storage.

Related options:

  • images_type - must be set to lvm
  • volume_clear_size

Possible values

zero
Overwrite volumes with zeroes
shred
Overwrite volumes repeatedly
none
Do not wipe deleted volumes
volume_clear_size
Type:integer
Default:0
Minimum Value:0

Size of area in MiB, counting from the beginning of the allocated volume, that will be cleared using method set in volume_clear option.

Possible values:

  • 0 - clear whole volume
  • >0 - clear specified amount of MiB

Related options:

  • images_type - must be set to lvm
  • volume_clear - must be set and the value must be different than none for this option to have any impact
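
For instance, to zero only the first GiB of each deleted LVM-backed ephemeral disk (the size is illustrative):

[libvirt]
images_type = lvm
volume_clear = zero
volume_clear_size = 1024
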
snapshot_compression
Type:boolean
Default:false

Enable snapshot compression for qcow2 images.

Note: you can set snapshot_image_format to qcow2 to force all snapshots to be in qcow2 format, independently from their original image type.

Related options:

  • snapshot_image_format
use_virtio_for_bridges
Type:boolean
Default:true

Use virtio for bridge interfaces with KVM/QEMU

volume_use_multipath
Type:boolean
Default:false

Use multipath connection of the iSCSI or FC volume

Volumes can be connected as multipath devices in libvirt. This provides high availability and fault tolerance.

Deprecated Variations
Group Name
libvirt iscsi_use_multipath
num_volume_scan_tries
Type:integer
Default:5

Number of times to scan the given storage protocol to find the volume.

Deprecated Variations
Group Name
libvirt num_iscsi_scan_tries
num_aoe_discover_tries
Type:integer
Default:3

Number of times to rediscover AoE target to find volume.

Nova provides support for block storage attaching to hosts via AOE (ATA over Ethernet). This option allows the user to specify the maximum number of retry attempts that can be made to discover the AoE device.

iscsi_iface
Type:string
Default:<None>

The iSCSI transport iface to use to connect to target in case offload support is desired.

The default format is of the form <transport_name>.<hwaddress>, where <transport_name> is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs) and <hwaddress> is the MAC address of the interface, which can be generated via the iscsiadm -m iface command. Do not confuse the iscsi_iface parameter provided here with the actual transport name.

Deprecated Variations
Group Name
libvirt iscsi_transport
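
A sketch assuming bnx2i offload on a NIC with the given MAC address (both the transport and the MAC are illustrative assumptions):

[libvirt]
iscsi_iface = bnx2i.00:05:b5:d2:a0:c2
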
num_iser_scan_tries
Type:integer
Default:5

Number of times to scan iSER target to find volume.

iSER is a storage networking protocol that extends the iSCSI protocol to use Remote Direct Memory Access (RDMA). This option allows the user to specify the maximum number of scan attempts that can be made to find an iSER volume.

iser_use_multipath
Type:boolean
Default:false

Use multipath connection of the iSER volume.

iSER volumes can be connected as multipath devices. This will provide high availability and fault tolerance.

rbd_user
Type:string
Default:<None>

The RADOS client name for accessing rbd (RADOS Block Devices) volumes.

Libvirt will refer to this user when connecting and authenticating with the Ceph RBD server.

rbd_secret_uuid
Type:string
Default:<None>

The libvirt UUID of the secret for the rbd_user volumes.
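
Tying the Ceph-related options together, a minimal RBD-backed configuration might look like the following sketch (the pool name, client name, and secret UUID are illustrative assumptions):

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337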

nfs_mount_point_base
Type:string
Default:$state_path/mnt

Directory where the NFS volume is mounted on the compute node. The default is the ‘mnt’ directory of the location where nova’s Python module is installed.

NFS provides shared storage for the OpenStack Block Storage service.

Possible values:

  • A string representing absolute path of mount point.
nfs_mount_options
Type:string
Default:<None>

Mount options passed to the NFS client. See the nfs(5) man page for details.

Mount options control the way the filesystem is mounted and how the NFS client behaves when accessing files on this mount point.

Possible values:

  • Any string representing mount options separated by commas.
  • Example string: vers=3,lookupcache=pos
quobyte_mount_point_base
Type:string
Default:$state_path/mnt

Directory where the Quobyte volume is mounted on the compute node.

Nova supports the Quobyte volume driver, which enables storing Block Storage service volumes on a Quobyte storage back end. This option specifies the path of the directory where the Quobyte volume is mounted.

Possible values:

  • A string representing absolute path of mount point.
quobyte_client_cfg
Type:string
Default:<None>

Path to a Quobyte Client configuration file.

smbfs_mount_point_base
Type:string
Default:$state_path/mnt

Directory where the SMBFS shares are mounted on the compute node.

smbfs_mount_options
Type:string
Default:''

Mount options passed to the SMBFS client.

Provide SMBFS options as a single string containing all parameters. See the mount.cifs man page for details. Note that the libvirt-qemu uid and gid must be specified.

remote_filesystem_transport
Type:string
Default:ssh
Valid Values:ssh, rsync

libvirt’s transport method for remote file operations.

Because libvirt cannot use RPC to copy files over the network to/from other compute nodes, another method must be used for:

  • creating directory on remote host
  • creating file on remote host
  • removing file from remote host
  • copying file to remote host
vzstorage_mount_point_base
Type:string
Default:$state_path/mnt

Directory where the Virtuozzo Storage clusters are mounted on the compute node.

This option defines a non-standard mountpoint for the Vzstorage cluster.

Related options:

  • vzstorage_mount_* group of parameters
vzstorage_mount_user
Type:string
Default:stack

Mount owner user name.

This option defines the owner user of Vzstorage cluster mountpoint.

Related options:

  • vzstorage_mount_* group of parameters
vzstorage_mount_group
Type:string
Default:qemu

Mount owner group name.

This option defines the owner group of Vzstorage cluster mountpoint.

Related options:

  • vzstorage_mount_* group of parameters
vzstorage_mount_perms
Type:string
Default:0770

Mount access mode.

This option defines the access bits of the Vzstorage cluster mountpoint, in a format similar to that of the chmod(1) utility, for example: 0770. It consists of one to four digits ranging from 0 to 7, with missing leading digits assumed to be 0’s.

Related options:

  • vzstorage_mount_* group of parameters
vzstorage_log_path
Type:string
Default:/var/log/vstorage/%(cluster_name)s/nova.log.gz

Path to vzstorage client log.

This option defines the log file for cluster operations; it should include the “%(cluster_name)s” template to separate logs from multiple shares.

Related options:

  • vzstorage_mount_opts may include more detailed logging options.
vzstorage_cache_path
Type:string
Default:<None>

Path to the SSD cache file.

You can attach an SSD drive to a client and configure the drive to store a local cache of frequently accessed data. By having a local cache on a client’s SSD drive, you can increase the overall cluster performance by up to 10 or more times. WARNING! Many SSD models are not server grade and may lose an arbitrary set of data changes on power loss. Such SSDs should not be used in Vstorage and are dangerous, as they may lead to data corruption and inconsistencies. Please consult the manual for which SSD models are known to be safe, or verify it using the vstorage-hwflush-check(1) utility.

This option defines the path, which should include the “%(cluster_name)s” template to separate caches from multiple shares.

Related options:

  • vzstorage_mount_opts may include more detailed cache options.
vzstorage_mount_opts
Type:list
Default:''

Extra mount options for pstorage-mount

For a full description of these options, see https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html. The format is a python string representation of an argument list, for example: “[‘-v’, ‘-R’, ‘500’]”. It should not include -c, -l, -C, -u, -g and -m, as those have explicit vzstorage_* options.

Related options:

  • All other vzstorage_* options
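
Putting the vzstorage_* options together, a sketch with illustrative values (the mount point, ownership, and extra options are assumptions):

[libvirt]
vzstorage_mount_point_base = /mnt/vstorage
vzstorage_mount_user = stack
vzstorage_mount_group = qemu
vzstorage_mount_perms = 0770
vzstorage_mount_opts = ['-v', '-R', '500']
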
rx_queue_size
Type:unknown type
Default:<None>
Valid Values:256, 512, 1024

Configure virtio rx queue size.

This option is only usable for virtio-net devices with the vhost and vhost-user backends. Available only with QEMU/KVM. Requires libvirt v2.3 and QEMU v2.7.

tx_queue_size
Type:unknown type
Default:<None>
Valid Values:256, 512, 1024

Configure virtio tx queue size.

This option is only usable for virtio-net devices with the vhost-user backend. Available only with QEMU/KVM. Requires libvirt v3.7 and QEMU v2.10.
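
For example, to enlarge both queues on hosts that meet the version requirements above (values must be one of 256, 512 or 1024):

[libvirt]
rx_queue_size = 1024
tx_queue_size = 1024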

num_nvme_discover_tries
Type:integer
Default:5

Number of times to rediscover NVMe target to find volume

Nova provides support for block storage attaching to hosts via NVMe (Non-Volatile Memory Express). This option allows the user to specify the maximum number of retry attempts that can be made to discover the NVMe device.

metrics

Configuration options for metrics

Options under this group allow you to adjust how values assigned to metrics are calculated.

weight_multiplier
Type:floating point
Default:1.0

When using metrics to weight the suitability of a host, you can use this option to change how the calculated weight influences the weight assigned to a host as follows:

  • >1.0: increases the effect of the metric on overall weight
  • 1.0: no change to the calculated weight
  • >0.0,<1.0: reduces the effect of the metric on overall weight
  • 0.0: the metric value is ignored, and the value of the ‘weight_of_unavailable’ option is returned instead
  • >-1.0,<0.0: the effect is reduced and reversed
  • -1.0: the effect is reversed
  • <-1.0: the effect is increased proportionally and reversed

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.

Related options:

  • weight_of_unavailable
weight_setting
Type:list
Default:''

This setting specifies the metrics to be weighed and the relative ratios for each metric. This should be a single string value, consisting of a series of one or more ‘name=ratio’ pairs, separated by commas, where ‘name’ is the name of the metric to be weighed, and ‘ratio’ is the relative weight for that metric.

Note that if the ratio is set to 0, the metric value is ignored, and instead the weight will be set to the value of the ‘weight_of_unavailable’ option.

As an example, let’s consider the case where this option is set to:

name1=1.0, name2=-1.3

The final weight will be:

(name1.value * 1.0) + (name2.value * -1.3)

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • A list of zero or more key/value pairs separated by commas, where the key is a string representing the name of a metric and the value is a numeric weight for that metric. If any value is set to 0, the value is ignored and the weight will be set to the value of the ‘weight_of_unavailable’ option.

Related options:

  • weight_of_unavailable
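
A sketch combining the metrics options; the metric names and ratios are illustrative assumptions and must match metrics actually reported by your compute nodes:

[metrics]
weight_multiplier = 1.0
weight_setting = cpu.percent=-1.0, ram.usage=0.5
required = false
weight_of_unavailable = -10000.0
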
required
Type:boolean
Default:true

This setting determines how any unavailable metrics are treated. If this option is set to True, any hosts for which a metric is unavailable will raise an exception, so it is recommended to also use the MetricFilter to filter out those hosts before weighing.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • True or False, where False ensures any metric being unavailable for a host will set the host weight to ‘weight_of_unavailable’.

Related options:

  • weight_of_unavailable
weight_of_unavailable
Type:floating point
Default:-10000.0

When any of the following conditions are met, this value will be used in place of any actual metric value:

  • One of the metrics named in ‘weight_setting’ is not available for a host, and the value of ‘required’ is False
  • The ratio specified for a metric in ‘weight_setting’ is 0
  • The ‘weight_multiplier’ option is set to 0

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.

Related options:

  • weight_setting
  • required
  • weight_multiplier
mks

The Nova compute node uses WebMKS, a desktop sharing protocol, to provide instance console access to VMs created by VMware hypervisors.

Related options: The following options must be set to provide console access:

  • mksproxy_base_url
  • enabled

mksproxy_base_url
Type:URI
Default:http://127.0.0.1:6090/

Location of MKS web console proxy

The URL in the response points to a WebMKS proxy which starts proxying between the client and the corresponding vCenter server where the instance runs. In order to use the web-based console access, the WebMKS proxy should be installed and configured.

Possible values:

  • Must be a valid URL of the form: http://host:port/ or https://host:port/
enabled
Type:boolean
Default:false

Enables graphical console access for virtual machines.
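
For example, to enable WebMKS console access through a proxy (the URL is an illustrative assumption):

[mks]
enabled = true
mksproxy_base_url = https://mks.example.com:6090/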

neutron

Configuration options for neutron (network connectivity as a service).

url
Type:URI
Default:http://127.0.0.1:9696

This option has a sample default set, which means that its actual default value may vary from the one documented above.

This option specifies the URL for connecting to Neutron.

Possible values:

  • Any valid URL that points to the Neutron API service is appropriate here. This typically matches the URL returned for the ‘network’ service type from the Keystone service catalog.

Warning

This option is deprecated for removal since 17.0.0. Its value may be silently ignored in the future.

Reason:Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, “url” will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the endpoint_override option instead.
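
In line with the deprecation note above, a keystoneauth-style [neutron] section might look like the following sketch (host names and credentials are assumptions for illustration):

[neutron]
auth_type = password
auth_url = http://controller:5000/v3
username = neutron
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default
region_name = RegionOne
valid_interfaces = internal
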
ovs_bridge
Type:string
Default:br-int

Default name for the Open vSwitch integration bridge.

Specifies the name of an integration bridge interface used by Open vSwitch. This option is only used if Neutron does not specify the OVS bridge name in port binding responses.

default_floating_pool
Type:string
Default:nova

Default name for the floating IP pool.

Specifies the name of the floating IP pool used for allocating floating IPs. This option is only used if Neutron does not specify the floating IP pool name in port binding responses.

extension_sync_interval
Type:integer
Default:600
Minimum Value:0

Integer value representing the number of seconds to wait before querying Neutron for extensions. After this number of seconds, the next time Nova needs to create a resource in Neutron, it will requery Neutron for the extensions that it has loaded. Setting the value to 0 will refresh the extensions with no wait.

physnets
Type:list
Default:''

List of physnets present on this host.

For each physnet listed, an additional section, [neutron_physnet_$PHYSNET], will be added to the configuration file. Each section must be configured with a single configuration option, numa_nodes, which should be a list of node IDs for all NUMA nodes this physnet is associated with. For example:

[neutron]
physnets = foo, bar

[neutron_physnet_foo]
numa_nodes = 0

[neutron_physnet_bar]
numa_nodes = 0,1

Any physnet that is not listed using this option will be treated as having no particular NUMA node affinity.

Tunnelled networks (VXLAN, GRE, …) cannot be accounted for in this way and are instead configured using the [neutron_tunnel] group. For example:

[neutron_tunnel]
numa_nodes = 1

Related options:

  • [neutron_tunnel] numa_nodes can be used to configure NUMA affinity for all tunneled networks
  • [neutron_physnet_$PHYSNET] numa_nodes must be configured for each value of $PHYSNET specified by this option
service_metadata_proxy
Type:boolean
Default:false

When set to True, this option indicates that Neutron will be used to proxy metadata requests and resolve instance ids. Otherwise, the instance ID must be passed to the metadata request in the ‘X-Instance-ID’ header.

Related options:

  • metadata_proxy_shared_secret
metadata_proxy_shared_secret
Type:string
Default:''

This option holds the shared secret string used to validate proxy requests to the Neutron metadata service. In order to be used, the ‘X-Metadata-Provider-Signature’ header must be supplied in the request.

Related options:

  • service_metadata_proxy
cafile
Type:string
Default:<None>

PEM encoded Certificate Authority to use when verifying HTTPS connections.

certfile
Type:string
Default:<None>

PEM encoded client certificate cert file

keyfile
Type:string
Default:<None>

PEM encoded client certificate key file

insecure
Type:boolean
Default:false

Disable verification of the server certificate for HTTPS connections.

timeout
Type:integer
Default:<None>

Timeout value for http requests

collect_timing
Type:boolean
Default:false

Collect per-API call timing information.

split_loggers
Type:boolean
Default:false

Log requests to multiple loggers.

auth_type
Type:unknown type
Default:<None>

Authentication type to load

Deprecated Variations
Group Name
neutron auth_plugin
auth_section
Type:unknown type
Default:<None>

Config Section from which to load plugin specific options

auth_url
Type:unknown type
Default:<None>

Authentication URL

system_scope
Type:unknown type
Default:<None>

Scope for system operations

domain_id
Type:unknown type
Default:<None>

Domain ID to scope to

domain_name
Type:unknown type
Default:<None>

Domain name to scope to

project_id
Type:unknown type
Default:<None>

Project ID to scope to

project_name
Type:unknown type
Default:<None>

Project name to scope to

project_domain_id
Type:unknown type
Default:<None>

Domain ID containing project

project_domain_name
Type:unknown type
Default:<None>

Domain name containing project

trust_id
Type:unknown type
Default:<None>

Trust ID

default_domain_id
Type:unknown type
Default:<None>

Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

default_domain_name
Type:unknown type
Default:<None>

Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

user_id
Type:unknown type
Default:<None>

User ID

username
Type:unknown type
Default:<None>

Username

Deprecated Variations
Group Name
neutron user-name
neutron user_name
user_domain_id
Type:unknown type
Default:<None>

User’s domain id

user_domain_name
Type:unknown type
Default:<None>

User’s domain name

password
Type:unknown type
Default:<None>

User’s password

tenant_id
Type:unknown type
Default:<None>

Tenant ID

tenant_name
Type:unknown type
Default:<None>

Tenant Name

service_type
Type:string
Default:network

The default service_type for endpoint URL discovery.

service_name
Type:string
Default:<None>

The default service_name for endpoint URL discovery.

valid_interfaces
Type:list
Default:internal,public

List of interfaces, in order of preference, for endpoint URL.

region_name
Type:string
Default:<None>

The default region_name for endpoint URL discovery.

endpoint_override
Type:string
Default:<None>

Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.

notifications

Most of the actions in Nova which manipulate the system state generate notifications, which are posted to the messaging component (e.g. RabbitMQ) and can be consumed by any service outside of OpenStack. More technical details are available at https://docs.openstack.org/nova/latest/reference/notifications.html

notify_on_state_change
Type:string
Default:<None>
Valid Values:<None>, vm_state, vm_and_task_state

If set, send compute.instance.update notifications on instance state changes.

Please refer to https://docs.openstack.org/nova/latest/reference/notifications.html for additional information on notifications.

Possible values

<None>
no notifications
vm_state
Notifications are sent with VM state transition information in the old_state and state fields. The old_task_state and new_task_state fields will be set to the current task_state of the instance.
vm_and_task_state
Notifications are sent with VM and task state transition information
Deprecated Variations
Group Name
DEFAULT notify_on_state_change
default_level
Type:string
Default:INFO
Valid Values:DEBUG, INFO, WARN, ERROR, CRITICAL

Default notification level for outgoing notifications.

Deprecated Variations
Group Name
DEFAULT default_notification_level
notification_format
Type:string
Default:both
Valid Values:both, versioned, unversioned

Specifies which notification format shall be used by nova.

The default value is fine for most deployments and rarely needs to be changed. This value can be set to ‘versioned’ once the infrastructure moves closer to consuming the newer format of notifications. After this occurs, this option will be removed.

Note that notifications can be completely disabled by setting driver=noop in the [oslo_messaging_notifications] group.

The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html

Possible values

both
Both the legacy unversioned and the new versioned notifications are emitted
versioned
Only the new versioned notifications are emitted
unversioned
Only the legacy unversioned notifications are emitted
Deprecated Variations
Group Name
DEFAULT notification_format
versioned_notifications_topics
Type:list
Default:versioned_notifications

Specifies the topics for the versioned notifications issued by nova.

The default value is fine for most deployments and rarely needs to be changed. However, if you have a third-party service that consumes versioned notifications, it might be worth getting a topic for that service. Nova will send a message containing a versioned notification payload to each topic queue in this list.

The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html

bdms_in_notifications
Type:boolean
Default:false

If enabled, include block device information in the versioned notification payload. Sending block device information is disabled by default as providing that information can incur some overhead on the system since the information may need to be loaded from the database.
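
As a sketch, a deployment that consumes only versioned notifications and wants state-change and block device details might set:

[notifications]
notify_on_state_change = vm_and_task_state
notification_format = versioned
versioned_notifications_topics = versioned_notifications
bdms_in_notifications = true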

os_win
hbaapi_lib_path
Type:string
Default:hbaapi.dll

Fibre Channel hbaapi library path. If no custom hbaapi library is requested, the default one will be used.

cache_temporary_wmi_objects
Type:boolean
Default:true

Caches temporary WMI objects in order to increase performance. This only affects networkutils, where almost all operations require a reference to a switch port. The cached objects are no longer valid if the VM they are associated with is destroyed.

wmi_job_terminate_timeout
Type:integer
Default:120

The default amount of seconds to wait when stopping pending WMI jobs. Setting this value to 0 will disable the timeout.

osapi_v21
project_id_regex
Type:string
Default:<None>

This option is a string representing a regular expression (regex) that matches the project_id as contained in URLs. If not set, it will match normal UUIDs created by keystone.

Possible values:

  • A string representing any legal regular expression

Warning

This option is deprecated for removal since 13.0.0. Its value may be silently ignored in the future.

Reason:Recent versions of nova constrain project IDs to hexadecimal characters and dashes. If your installation uses IDs outside of this range, you should use this option to provide your own regex and give you time to migrate offending projects to valid IDs before the next release.
oslo_concurrency
disable_process_locking
Type:boolean
Default:false

Enables or disables inter-process locks.

Deprecated Variations
Group Name
DEFAULT disable_process_locking
lock_path
Type:string
Default:<None>

Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set.

Deprecated Variations
Group Name
DEFAULT lock_path
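
For example (the directory is an assumption; any directory writable only by the processes that need locking works):

[oslo_concurrency]
lock_path = /var/lib/nova/tmp
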
oslo_messaging_amqp
container_name
Type:string
Default:<None>

Name for the AMQP container. Must be globally unique. Defaults to a generated UUID.

Deprecated Variations
Group Name
amqp1 container_name
idle_timeout
Type:integer
Default:0

Timeout for inactive connections (in seconds)

Deprecated Variations
Group Name
amqp1 idle_timeout
trace
Type:boolean
Default:false

Debug: dump AMQP frames to stdout

Deprecated Variations
Group Name
amqp1 trace
ssl
Type:boolean
Default:false

Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system’s CA-bundle to verify the server’s certificate.

ssl_ca_file
Type:string
Default:''

CA certificate PEM file used to verify the server’s certificate

Deprecated Variations
Group Name
amqp1 ssl_ca_file
ssl_cert_file
Type:string
Default:''

Self-identifying certificate PEM file for client authentication

Deprecated Variations
Group Name
amqp1 ssl_cert_file
ssl_key_file
Type:string
Default:''

Private key PEM file used to sign ssl_cert_file certificate (optional)

Deprecated Variations
Group Name
amqp1 ssl_key_file
ssl_key_password
Type:string
Default:<None>

Password for decrypting ssl_key_file (if encrypted)

Deprecated Variations
Group Name
amqp1 ssl_key_password
ssl_verify_vhost
Type:boolean
Default:false

By default SSL checks that the name in the server’s certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server’s SSL certificate uses the virtual host name instead of the DNS name.

sasl_mechanisms
Type:string
Default:''

Space separated list of acceptable SASL mechanisms

Deprecated Variations
Group Name
amqp1 sasl_mechanisms
sasl_config_dir
Type:string
Default:''

Path to directory that contains the SASL configuration

Deprecated Variations
Group Name
amqp1 sasl_config_dir
sasl_config_name
Type:string
Default:''

Name of configuration file (without .conf suffix)

Deprecated Variations
Group Name
amqp1 sasl_config_name
sasl_default_realm
Type:string
Default:''

SASL realm to use if no realm present in username

connection_retry_interval
Type:integer
Default:1
Minimum Value:1

Seconds to pause before attempting to re-connect.

connection_retry_backoff
Type:integer
Default:2
Minimum Value:0

Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt.

connection_retry_interval_max
Type:integer
Default:30
Minimum Value:1

Maximum limit for connection_retry_interval + connection_retry_backoff

link_retry_delay
Type:integer
Default:10
Minimum Value:1

Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error.

default_reply_retry
Type:integer
Default:0
Minimum Value:-1

The maximum number of attempts to re-send a reply message which failed due to a recoverable error.

default_reply_timeout
Type:integer
Default:30
Minimum Value:5

The deadline for an rpc reply message delivery.

default_send_timeout
Type:integer
Default:30
Minimum Value:5

The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry.

default_notify_timeout
Type:integer
Default:30
Minimum Value:5

The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry.

default_sender_link_timeout
Type:integer
Default:600
Minimum Value:1

The duration to schedule a purge of idle sender links. Detach link after expiry.

addressing_mode
Type:string
Default:dynamic

Indicates the addressing mode used by the driver. Permitted values:

  • ‘legacy’ - use legacy non-routable addressing
  • ‘routable’ - use routable addresses
  • ‘dynamic’ - use legacy addresses if the message bus does not support routing, otherwise use routable addressing

pseudo_vhost
Type:boolean
Default:true

Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private ‘subnet’ per virtual host. Set to False if the message bus supports virtual hosting using the ‘hostname’ field in the AMQP 1.0 Open performative as the name of the virtual host.

server_request_prefix
Type:string
Default:exclusive

address prefix used when sending to a specific server

Deprecated Variations
Group Name
amqp1 server_request_prefix
broadcast_prefix
Type:string
Default:broadcast

address prefix used when broadcasting to all servers

Deprecated Variations
Group Name
amqp1 broadcast_prefix
group_request_prefix
Type:string
Default:unicast

address prefix when sending to any server in group

Deprecated Variations
Group Name
amqp1 group_request_prefix
rpc_address_prefix
Type:string
Default:openstack.org/om/rpc

Address prefix for all generated RPC addresses

notify_address_prefix
Type:string
Default:openstack.org/om/notify

Address prefix for all generated Notification addresses

multicast_address
Type:string
Default:multicast

Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages.

unicast_address
Type:string
Default:unicast

Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination.

anycast_address
Type:string
Default:anycast

Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers.

default_notification_exchange
Type:string
Default:<None>

Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else ‘notify’

default_rpc_exchange
Type:string
Default:<None>

Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else ‘rpc’

reply_link_credit
Type:integer
Default:200
Minimum Value:1

Window size for incoming RPC Reply messages.

rpc_server_credit
Type:integer
Default:100
Minimum Value:1

Window size for incoming RPC Request messages

notify_server_credit
Type:integer
Default:100
Minimum Value:1

Window size for incoming Notification messages

pre_settled
Type:multi-valued
Default:rpc-cast
Default:rpc-reply

Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values:

  • ‘rpc-call’ - send RPC Calls pre-settled
  • ‘rpc-reply’ - send RPC Replies pre-settled
  • ‘rpc-cast’ - send RPC Casts pre-settled
  • ‘notify’ - send Notifications pre-settled

oslo_messaging_kafka
kafka_max_fetch_bytes
Type:integer
Default:1048576

Max fetch bytes of Kafka consumer

kafka_consumer_timeout
Type:floating point
Default:1.0

Default timeout(s) for Kafka consumers

pool_size
Type:integer
Default:10

Pool Size for Kafka Consumers

Warning

This option is deprecated for removal. Its value may be silently ignored in the future.

Reason:Driver no longer uses connection pool.
conn_pool_min_size
Type:integer
Default:2

The pool size limit for the connection expiration policy

Warning

This option is deprecated for removal. Its value may be silently ignored in the future.

Reason:Driver no longer uses connection pool.
conn_pool_ttl
Type:integer
Default:1200

The time-to-live in sec of idle connections in the pool

Warning

This option is deprecated for removal. Its value may be silently ignored in the future.

Reason:Driver no longer uses connection pool.
consumer_group
Type:string
Default:oslo_messaging_consumer

Group id for Kafka consumer. Consumers in one group will coordinate message consumption

producer_batch_timeout
Type:floating point
Default:0.0

Upper bound on the delay for KafkaProducer batching in seconds

producer_batch_size
Type:integer
Default:16384

Size of batch for the producer async send

enable_auto_commit
Type:boolean
Default:false

Enable asynchronous consumer commits

max_poll_records
Type:integer
Default:500

The maximum number of records returned in a poll call

security_protocol
Type:string
Default:PLAINTEXT
Valid Values:PLAINTEXT, SASL_PLAINTEXT, SSL, SASL_SSL

Protocol used to communicate with brokers

sasl_mechanism
Type:string
Default:PLAIN

Mechanism when security protocol is SASL

ssl_cafile
Type:string
Default:''

CA certificate PEM file used to verify the server certificate

oslo_messaging_notifications
driver
Type:multi-valued
Default:''

The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop

Deprecated Variations
Group Name
DEFAULT notification_driver
transport_url
Type:string
Default:<None>

A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC.

Deprecated Variations
Group Name
DEFAULT notification_transport_url
topics
Type:list
Default:notifications

AMQP topic used for OpenStack notifications.

Deprecated Variations
Group Name
rpc_notifier2 topics
DEFAULT notification_topics
retry
Type:integer
Default:-1

The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite

oslo_messaging_rabbit
amqp_durable_queues
Type:boolean
Default:false

Use durable queues in AMQP.

amqp_auto_delete
Type:boolean
Default:false

Auto-delete queues in AMQP.

Deprecated Variations
Group Name
DEFAULT amqp_auto_delete
ssl
Type:boolean
Default:false

Connect over SSL.

Deprecated Variations
Group Name
oslo_messaging_rabbit rabbit_use_ssl
ssl_version
Type:string
Default:''

SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.

Deprecated Variations
Group Name
oslo_messaging_rabbit kombu_ssl_version
ssl_key_file
Type:string
Default:''

SSL key file (valid only if SSL enabled).

Deprecated Variations
Group Name
oslo_messaging_rabbit kombu_ssl_keyfile
ssl_cert_file
Type:string
Default:''

SSL cert file (valid only if SSL enabled).

Deprecated Variations
Group Name
oslo_messaging_rabbit kombu_ssl_certfile
ssl_ca_file
Type:string
Default:''

SSL certification authority file (valid only if SSL enabled).

Deprecated Variations
Group Name
oslo_messaging_rabbit kombu_ssl_ca_certs
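
Putting the SSL options together, a minimal sketch with illustrative file paths (all paths are assumptions for your deployment):

[oslo_messaging_rabbit]
ssl = true
ssl_ca_file = /etc/ssl/certs/rabbitmq-ca.pem
ssl_cert_file = /etc/ssl/certs/nova-client.pem
ssl_key_file = /etc/ssl/private/nova-client.key
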
kombu_reconnect_delay
Type:floating point
Default:1.0

How long to wait before reconnecting in response to an AMQP consumer cancel notification.

Deprecated Variations
Group Name
DEFAULT kombu_reconnect_delay
kombu_compression
Type:string
Default:<None>

EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions.

kombu_missing_consumer_retry_timeout
Type:integer
Default:60

How long to wait for a missing client before abandoning sending it its replies. This value should not be longer than rpc_response_timeout.

Deprecated Variations
Group Name
oslo_messaging_rabbit kombu_reconnect_timeout
kombu_failover_strategy
Type:string
Default:round-robin
Valid Values:round-robin, shuffle

Determines how the next RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config.

rabbit_login_method
Type:string
Default:AMQPLAIN
Valid Values:PLAIN, AMQPLAIN, RABBIT-CR-DEMO

The RabbitMQ login method.

Deprecated Variations
Group Name
DEFAULT rabbit_login_method
rabbit_retry_interval
Type:integer
Default:1

How frequently to retry connecting with RabbitMQ.

rabbit_retry_backoff
Type:integer
Default:2

How long to backoff for between retries when connecting to RabbitMQ.

Deprecated Variations
Group Name
DEFAULT rabbit_retry_backoff
rabbit_interval_max
Type:integer
Default:30

Maximum interval of RabbitMQ connection retries. Default is 30 seconds.

rabbit_ha_queues
Type:boolean
Default:false

Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'

Deprecated Variations
Group Name
DEFAULT rabbit_ha_queues
rabbit_transient_queues_ttl
Type:integer
Default:1800
Minimum Value:1

Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues.

rabbit_qos_prefetch_count
Type:integer
Default:0

Specifies the number of messages to prefetch. Setting to zero allows unlimited messages.

heartbeat_timeout_threshold
Type:integer
Default:60

Number of seconds after which the Rabbit broker is considered down if the heartbeat’s keep-alive fails (0 disables the heartbeat). EXPERIMENTAL

heartbeat_rate
Type:integer
Default:2

How many times during the heartbeat_timeout_threshold to check the heartbeat.

pci
alias
Type:multi-valued
Default:''

An alias for a PCI passthrough device requirement.

This allows users to specify the alias in the extra specs for a flavor, without needing to repeat all the PCI property requirements.

Possible Values:

  • A dictionary of JSON values which describe the aliases. For example:

    alias = {
      "name": "QuickAssist",
      "product_id": "0443",
      "vendor_id": "8086",
      "device_type": "type-PCI",
      "numa_policy": "required"
    }
    

    This defines an alias for the Intel QuickAssist card (multi valued). Valid key values are:

    name

    Name of the PCI alias.

    product_id

    Product ID of the device in hexadecimal.

    vendor_id

    Vendor ID of the device in hexadecimal.

    device_type

    Type of PCI device. Valid values are: type-PCI, type-PF and type-VF.

    numa_policy

    Required NUMA affinity of device. Valid values are: legacy, preferred and required.

  • Supports multiple aliases by repeating the option (not by specifying a list value):

    alias = {
      "name": "QuickAssist-1",
      "product_id": "0443",
      "vendor_id": "8086",
      "device_type": "type-PCI",
      "numa_policy": "required"
    }
    alias = {
      "name": "QuickAssist-2",
      "product_id": "0444",
      "vendor_id": "8086",
      "device_type": "type-PCI",
      "numa_policy": "required"
    }
    
Deprecated Variations
Group Name
DEFAULT pci_alias
passthrough_whitelist
Type:multi-valued
Default:''

White list of PCI devices available to VMs.

Possible values:

  • A JSON dictionary which describes a whitelisted PCI device. It should take the following format:

    ["vendor_id": "<id>",] ["product_id": "<id>",]
    ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" |
     "devname": "<name>",]
    {"<tag>": "<tag_value>",}
    

    Where [ indicates zero or one occurrences, { indicates zero or multiple occurrences, and | indicates mutually exclusive options. Note that any missing fields are automatically wildcarded.

    Valid key values are:

    vendor_id

    Vendor ID of the device in hexadecimal.

    product_id

    Product ID of the device in hexadecimal.

    address

    PCI address of the device. Both traditional glob style and regular expression syntax are supported.

    devname

    Device name of the device (e.g. an interface name). Not all PCI devices have a name.

    <tag>

    Additional <tag> and <tag_value> used for matching PCI devices. Supported <tag> values are :

    • physical_network
    • trusted

    Valid examples are:

    passthrough_whitelist = {"devname":"eth0",
                             "physical_network":"physnet"}
    passthrough_whitelist = {"address":"*:0a:00.*"}
    passthrough_whitelist = {"address":":0a:00.",
                             "physical_network":"physnet1"}
    passthrough_whitelist = {"vendor_id":"1137",
                             "product_id":"0071"}
    passthrough_whitelist = {"vendor_id":"1137",
                             "product_id":"0071",
                             "address": "0000:0a:00.1",
                             "physical_network":"physnet1"}
    passthrough_whitelist = {"address":{"domain": ".*",
                                        "bus": "02", "slot": "01",
                                        "function": "[2-7]"},
                             "physical_network":"physnet1"}
    passthrough_whitelist = {"address":{"domain": ".*",
                                        "bus": "02", "slot": "0[1-2]",
                                        "function": ".*"},
                             "physical_network":"physnet1"}
    passthrough_whitelist = {"devname": "eth0", "physical_network":"physnet1",
                             "trusted": "true"}
    

    The following are invalid, as they specify mutually exclusive options:

    passthrough_whitelist = {"devname":"eth0",
                             "physical_network":"physnet",
                             "address":"*:0a:00.*"}
    
  • A JSON list of JSON dictionaries corresponding to the above format. For example:

    passthrough_whitelist = [{"product_id":"0001", "vendor_id":"8086"},
                             {"product_id":"0002", "vendor_id":"8086"}]
    
Deprecated Variations
Group Name
DEFAULT pci_passthrough_whitelist
placement
randomize_allocation_candidates
Type:boolean
Default:false

If True, when limiting allocation candidate results, the results will be a random sampling of the full result set. If False, allocation candidates are returned in a deterministic but undefined order. That is, all things being equal, two requests for allocation candidates will return the same results in the same order; but no guarantees are made as to how that order is determined.

policy_file
Type:string
Default:placement-policy.yaml

The file that defines placement policies. This can be an absolute path or relative to the configuration file.

incomplete_consumer_project_id
Type:string
Default:00000000-0000-0000-0000-000000000000

Early API microversions (<1.8) allowed creating allocations and not specifying a project or user identifier for the consumer. In cleaning up the data modeling, we no longer allow missing project and user information. If an older client makes an allocation, we’ll use this in place of the information it doesn’t provide.

incomplete_consumer_user_id
Type:string
Default:00000000-0000-0000-0000-000000000000

Early API microversions (<1.8) allowed creating allocations and not specifying a project or user identifier for the consumer. In cleaning up the data modeling, we no longer allow missing project and user information. If an older client makes an allocation, we’ll use this in place of the information it doesn’t provide.

cafile
Type:string
Default:<None>

PEM encoded Certificate Authority to use when verifying HTTPS connections.

certfile
Type:string
Default:<None>

PEM encoded client certificate cert file

keyfile
Type:string
Default:<None>

PEM encoded client certificate key file

insecure
Type:boolean
Default:false

Disable verification of the server certificate for HTTPS connections.

timeout
Type:integer
Default:<None>

Timeout value for http requests

collect_timing
Type:boolean
Default:false

Collect per-API call timing information.

split_loggers
Type:boolean
Default:false

Log requests to multiple loggers.

auth_type
Type:unknown type
Default:<None>

Authentication type to load

Deprecated Variations
Group Name
placement auth_plugin
auth_section
Type:unknown type
Default:<None>

Config Section from which to load plugin specific options

auth_url
Type:unknown type
Default:<None>

Authentication URL

system_scope
Type:unknown type
Default:<None>

Scope for system operations

domain_id
Type:unknown type
Default:<None>

Domain ID to scope to

domain_name
Type:unknown type
Default:<None>

Domain name to scope to

project_id
Type:unknown type
Default:<None>

Project ID to scope to

project_name
Type:unknown type
Default:<None>

Project name to scope to

project_domain_id
Type:unknown type
Default:<None>

Domain ID containing project

project_domain_name
Type:unknown type
Default:<None>

Domain name containing project

trust_id
Type:unknown type
Default:<None>

Trust ID

default_domain_id
Type:unknown type
Default:<None>

Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

default_domain_name
Type:unknown type
Default:<None>

Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

user_id
Type:unknown type
Default:<None>

User ID

username
Type:unknown type
Default:<None>

Username

Deprecated Variations
Group Name
placement user-name
placement user_name
user_domain_id
Type:unknown type
Default:<None>

User’s domain id

user_domain_name
Type:unknown type
Default:<None>

User’s domain name

password
Type:unknown type
Default:<None>

User’s password

tenant_id
Type:unknown type
Default:<None>

Tenant ID

tenant_name
Type:unknown type
Default:<None>

Tenant Name

service_type
Type:string
Default:placement

The default service_type for endpoint URL discovery.

service_name
Type:string
Default:<None>

The default service_name for endpoint URL discovery.

valid_interfaces
Type:list
Default:internal,public

List of interfaces, in order of preference, for endpoint URL.

region_name
Type:string
Default:<None>

The default region_name for endpoint URL discovery.

endpoint_override
Type:string
Default:<None>

Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.

placement_database

The Placement API Database is a separate database which can be used with the placement service. This database is optional: if the connection option is not set, the nova api database will be used instead.

connection
Type:string
Default:<None>

The SQLAlchemy connection string to use to connect to the database.

connection_parameters
Type:string
Default:''

Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&…
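
For example (the connection string and parameters are illustrative assumptions):

[placement_database]
connection = mysql+pymysql://placement:secret@controller/placement
connection_parameters = charset=utf8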

sqlite_synchronous
Type:boolean
Default:true

If True, SQLite uses synchronous mode.

slave_connection
Type:string
Default:<None>

The SQLAlchemy connection string to use to connect to the slave database.

mysql_sql_mode
Type:string
Default:TRADITIONAL

The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=

connection_recycle_time
Type:integer
Default:3600

Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool.

max_pool_size
Type:integer
Default:<None>

Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit.

max_retries
Type:integer
Default:10

Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.

retry_interval
Type:integer
Default:10

Interval between retries of opening a SQL connection.

max_overflow
Type:integer
Default:<None>

If set, use this value for max_overflow with SQLAlchemy.

connection_debug
Type:integer
Default:0

Verbosity of SQL debugging information: 0=None, 100=Everything.

connection_trace
Type:boolean
Default:false

Add Python stack traces to SQL as comment strings.

pool_timeout
Type:integer
Default:<None>

If set, use this value for pool_timeout with SQLAlchemy.

powervm

PowerVM options allow cloud administrators to configure how OpenStack will work with the PowerVM hypervisor.

proc_units_factor
Type:floating point
Default:0.1
Minimum Value:0.05
Maximum Value:1

Factor used to calculate the amount of physical processor compute power given to each vCPU. E.g. A value of 1.0 means a whole physical processor, whereas 0.05 means 1/20th of a physical processor.

disk_driver
Type:string
Default:localdisk
Valid Values:localdisk, ssp

The disk driver to use for PowerVM disks. PowerVM provides support for localdisk and PowerVM Shared Storage Pool disk drivers.

Related options:

  • volume_group_name - required when using localdisk
volume_group_name
Type:string
Default:''

Volume Group to use for block device operations. If disk_driver is localdisk, then this attribute must be specified. It is strongly recommended NOT to use rootvg, since that is used by the management partition and filling it will cause failures.
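For example, a sketch of a [powervm] section using the localdisk driver with a hypothetical volume group name:

[powervm]
disk_driver = localdisk
volume_group_name = openstack_vg
proc_units_factor = 0.1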

quota

Quota options allow administrators to manage quotas in an OpenStack deployment.

instances
Type:integer
Default:10
Minimum Value:-1

The number of instances allowed per project.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_instances
cores
Type:integer
Default:20
Minimum Value:-1

The number of instance cores or vCPUs allowed per project.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_cores
ram
Type:integer
Default:51200
Minimum Value:-1

The number of megabytes of instance RAM allowed per project.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_ram
floating_ips
Type:integer
Default:10
Minimum Value:-1

The number of floating IPs allowed per project.

Floating IPs are not allocated to instances by default. Users need to select them from the pool configured by the OpenStack administrator to attach to their instances.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_floating_ips

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
fixed_ips
Type:integer
Default:-1
Minimum Value:-1

The number of fixed IPs allowed per project.

Unlike floating IPs, fixed IPs are allocated dynamically by the network component when instances boot up. This quota value should be at least the number of instances allowed.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_fixed_ips

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
metadata_items
Type:integer
Default:128
Minimum Value:-1

The number of metadata items allowed per instance.

Users can associate metadata with an instance during instance creation. This metadata takes the form of key-value pairs.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_metadata_items
injected_files
Type:integer
Default:5
Minimum Value:-1

The number of injected files allowed.

File injection allows users to customize the personality of an instance by injecting data into it upon boot. Only text file injection is permitted: binary or ZIP files are not accepted. During file injection, any existing files that match specified files are renamed to include a .bak extension appended with a timestamp.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_injected_files
injected_file_content_bytes
Type:integer
Default:10240
Minimum Value:-1

The number of bytes allowed per injected file.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_injected_file_content_bytes
injected_file_path_length
Type:integer
Default:255
Minimum Value:-1

The maximum allowed injected file path length.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_injected_file_path_length
security_groups
Type:integer
Default:10
Minimum Value:-1

The number of security groups per project.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_security_groups

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
security_group_rules
Type:integer
Default:20
Minimum Value:-1

The number of security rules per security group.

The associated rules in each security group control the traffic to instances in the group.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_security_group_rules

Warning

This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.

Reason:nova-network is deprecated, as are any related configuration options.
key_pairs
Type:integer
Default:100
Minimum Value:-1

The maximum number of key pairs allowed per user.

Users can create at least one key pair for each project and use the key pair for multiple instances that belong to that project.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_key_pairs
server_groups
Type:integer
Default:10
Minimum Value:-1

The maximum number of server groups per project.

Server groups are used to control the affinity and anti-affinity scheduling policy for a group of servers or instances. Reducing the quota will not affect any existing group, but new servers will not be allowed into groups that have become over quota.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_server_groups
server_group_members
Type:integer
Default:10
Minimum Value:-1

The maximum number of servers per server group.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
Deprecated Variations:
  Group: DEFAULT, Name: quota_server_group_members
reservation_expire
Type:integer
Default:86400

The number of seconds until a reservation expires.

This quota represents the time period for invalidating quota reservations.

Deprecated Variations:
  Group: DEFAULT, Name: reservation_expire
until_refresh
Type:integer
Default:0
Minimum Value:0

The count of reservations until usage is refreshed.

This defaults to 0 (off) to avoid additional load but it is useful to turn on to help keep quota usage up-to-date and reduce the impact of out of sync usage issues.

Deprecated Variations:
  Group: DEFAULT, Name: until_refresh
max_age
Type:integer
Default:0
Minimum Value:0

The number of seconds between subsequent usage refreshes.

This defaults to 0 (off) to avoid additional load but it is useful to turn on to help keep quota usage up-to-date and reduce the impact of out of sync usage issues. Note that quotas are not updated on a periodic task, they will update on a new reservation if max_age has passed since the last reservation.

Deprecated Variations:
  Group: DEFAULT, Name: max_age
driver
Type:string
Default:nova.quota.DbQuotaDriver
Valid Values:nova.quota.DbQuotaDriver, nova.quota.NoopQuotaDriver

Provides abstraction for quota checks. Users can configure a specific driver to use for quota checks.

Possible values

nova.quota.DbQuotaDriver
Stores quota limit information in the database and relies on the quota_* configuration options for default quota limit values. Counts quota usage on-demand.
nova.quota.NoopQuotaDriver
Ignores quota and treats all resources as unlimited.
recheck_quota
Type:boolean
Default:true

Recheck quota after resource creation to prevent allowing quota to be exceeded.

This defaults to True (recheck quota after resource creation) but can be set to False to avoid additional load if allowing quota to be exceeded because of racing requests is considered acceptable. For example, when set to False, if a user makes highly parallel REST API requests to create servers, it will be possible for them to create more servers than their allowed quota during the race. If their quota is 10 servers, they might be able to create 50 during the burst. After the burst, they will not be able to create any more servers but they will be able to keep their 50 servers until they delete them.

The initial quota check is done before resources are created, so if multiple parallel requests arrive at the same time, all could pass the quota check and create resources, potentially exceeding quota. When recheck_quota is True, quota will be checked a second time after resources have been created and if the resource is over quota, it will be deleted and OverQuota will be raised, usually resulting in a 403 response to the REST API user. This makes it impossible for a user to exceed their quota with the caveat that it will, however, be possible for a REST API user to be rejected with a 403 response in the event of a collision close to reaching their quota limit, even if the user has enough quota available when they made the request.
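As an illustration, a hypothetical [quota] section that doubles the default instance, core and RAM quotas and disables the key pair quota:

[quota]
instances = 20
cores = 40
ram = 102400
key_pairs = -1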

rdp

Options under this group enable and configure Remote Desktop Protocol (RDP) related features.

This group is only relevant to Hyper-V users.

enabled
Type:boolean
Default:false

Enable Remote Desktop Protocol (RDP) related features.

Hyper-V, unlike the majority of the hypervisors employed on Nova compute nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to provide instance console access. This option enables RDP for graphical console access for virtual machines created by Hyper-V.

Note: RDP should only be enabled on compute nodes that support the Hyper-V virtualization platform.

Related options:

  • compute_driver: Must be hyperv.
html5_proxy_base_url
Type:URI
Default:http://127.0.0.1:6083/

The URL an end user would use to connect to the RDP HTML5 console proxy. The console proxy service is called with this token-embedded URL and establishes the connection to the proper instance.

An RDP HTML5 console proxy service will need to be configured to listen on the address configured here. Typically the console proxy service would be run on a controller node. The localhost address used as default would only work in a single node environment i.e. devstack.

An RDP HTML5 proxy allows a user to access the text or graphical console of any Windows server or workstation over the web using RDP. RDP HTML5 console proxy services include FreeRDP-WebConnect and wsgate. See https://github.com/FreeRDP/FreeRDP-WebConnect

Possible values:

  • <scheme>://<ip-address>:<port-number>/

    The scheme must be identical to the scheme configured for the RDP HTML5 console proxy service. It is http or https.

    The IP address must be identical to the address on which the RDP HTML5 console proxy service is listening.

    The port must be identical to the port on which the RDP HTML5 console proxy service is listening.

Related options:

  • rdp.enabled: Must be set to True for html5_proxy_base_url to be effective.
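Putting the two options together, a sketch of enabling RDP console access on a Hyper-V compute node, assuming a hypothetical controller address for the HTML5 proxy service:

[rdp]
enabled = True
html5_proxy_base_url = http://controller.example.com:6083/
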
remote_debug
host
Type:host address
Default:<None>

Debug host (IP or name) to connect to. This command line parameter is used when you want to connect to a nova service via a debugger running on a different host.

Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk.

Possible values:

  • IP address of a remote host as a command line parameter to a nova service. For example:
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address where the debugger is running>
port
Type:port number
Default:<None>
Minimum Value:0
Maximum Value:65535

Debug port to connect to. This command line parameter allows you to specify the port you want to use to connect to a nova service via a debugger running on different host.

Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk.

Possible values:

  • Port number you want to use as a command line parameter to a nova service. For example:
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address where the debugger is running> --remote_debug-port <port the debugger is listening on>
scheduler
driver
Type:string
Default:filter_scheduler

The class of the driver used by the scheduler. This should be chosen from one of the entrypoints under the namespace ‘nova.scheduler.driver’ of file ‘setup.cfg’. If nothing is specified in this option, the ‘filter_scheduler’ is used.

Other options are:

  • ‘fake_scheduler’ which is used for testing.

Possible values:

  • Any of the drivers included in Nova:
    • filter_scheduler
    • fake_scheduler
  • You may also set this to the entry point name of a custom scheduler driver, but you will be responsible for creating and maintaining it in your setup.cfg file.

Related options:

  • workers
Deprecated Variations:
  Group: DEFAULT, Name: scheduler_driver
periodic_task_interval
Type:integer
Default:60

Periodic task interval.

This value controls how often (in seconds) to run periodic tasks in the scheduler. The specific tasks that are run for each period are determined by the particular scheduler being used. Currently there are no in-tree scheduler drivers that use this option.

If this is larger than the nova-service ‘service_down_time’ setting, the ComputeFilter (if enabled) may think the compute service is down. As each scheduler can work a little differently than the others, be sure to test this with your selected scheduler.

Possible values:

  • An integer, where the integer corresponds to periodic task interval in seconds. 0 uses the default interval (60 seconds). A negative value disables periodic tasks.

Related options:

  • nova-service service_down_time
max_attempts
Type:integer
Default:3
Minimum Value:1

This is the maximum number of attempts that will be made for a given instance build/move operation. It limits the number of alternate hosts returned by the scheduler. When that list of hosts is exhausted, a MaxRetriesExceeded exception is raised and the instance is set to an error state.

Possible values:

  • A positive integer, where the integer corresponds to the max number of attempts that can be made when building or moving an instance.
Deprecated Variations:
  Group: DEFAULT, Name: scheduler_max_attempts
discover_hosts_in_cells_interval
Type:integer
Default:-1
Minimum Value:-1

Periodic task interval.

This value controls how often (in seconds) the scheduler should attempt to discover new hosts that have been added to cells. If negative (the default), no automatic discovery will occur.

Deployments where compute nodes come and go frequently may want this enabled, whereas others may prefer to manually discover hosts when one is added, to avoid any overhead from constantly checking. If enabled, any unmapped hosts are selected out of each cell database every time this runs.

max_placement_results
Type:integer
Default:1000
Minimum Value:1

This setting determines the maximum limit on results received from the placement service during a scheduling operation. It effectively limits the number of hosts that may be considered for scheduling requests that match a large number of candidates.

A value of 1 (the minimum) will effectively defer scheduling to the placement service strictly on “will it fit” grounds. A higher value will put an upper cap on the number of results the scheduler will consider during the filtering and weighing process. Large deployments may need to set this lower than the total number of hosts available to limit memory consumption, network traffic, etc. of the scheduler.

This option is only used by the FilterScheduler; if you use a different scheduler, this option has no effect.

workers
Type:integer
Default:<None>
Minimum Value:0

Number of workers for the nova-scheduler service. The default will be the number of CPUs available if using the “filter_scheduler” scheduler driver, otherwise the default will be 1.

limit_tenants_to_placement_aggregate
Type:boolean
Default:false

This setting causes the scheduler to look up a host aggregate with the metadata key of filter_tenant_id set to the project of an incoming request, and request results from placement be limited to that aggregate. Multiple tenants may be added to a single aggregate by appending a serial number to the key, such as filter_tenant_id:123.

The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the tenant id is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts for the request.

See also the placement_aggregate_required_for_tenants option.

placement_aggregate_required_for_tenants
Type:boolean
Default:false

This setting, when limit_tenants_to_placement_aggregate=True, will control whether or not a tenant with no aggregate affinity will be allowed to schedule to any available node. If aggregates are used to limit some tenants but not all, then this should be False. If all tenants should be confined via aggregate, then this should be True to prevent them from receiving unrestricted scheduling to any available node.

See also the limit_tenants_to_placement_aggregate option.

query_placement_for_availability_zone
Type:boolean
Default:false

This setting causes the scheduler to look up a host aggregate with the metadata key of availability_zone set to the value provided by an incoming request, and request results from placement be limited to that aggregate.

The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the availability_zone key is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts.

Note that if you enable this flag, you can disable the (less efficient) AvailabilityZoneFilter in the scheduler.
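As a combined illustration, a hypothetical [scheduler] section that enables periodic host discovery and tenant aggregate isolation:

[scheduler]
discover_hosts_in_cells_interval = 300
limit_tenants_to_placement_aggregate = True
placement_aggregate_required_for_tenants = True
query_placement_for_availability_zone = True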

serial_console

The serial console feature allows you to connect to a guest in case a graphical console like VNC, RDP or SPICE is not available. This is currently only supported for the libvirt, Ironic and Hyper-V drivers.

enabled
Type:boolean
Default:false

Enable the serial console feature.

In order to use this feature, the service nova-serialproxy needs to run. This service is typically executed on the controller node.

port_range
Type:string
Default:10000:20000

A range of TCP ports a guest can use for its backend.

Each instance which gets created will use one port out of this range. If the range is not big enough to provide another port for a new instance, that instance won't be launched.

Possible values:

  • Each string which passes the regex \d+:\d+, for example 10000:20000. Be sure that the first port number is lower than the second port number and that both are in range from 0 to 65535.
base_url
Type:URI
Default:ws://127.0.0.1:6083/

The URL an end user would use to connect to the nova-serialproxy service.

The nova-serialproxy service is called with this token-enriched URL and establishes the connection to the proper instance.

Related options:

  • The IP address must be identical to the address to which the nova-serialproxy service is listening (see option serialproxy_host in this section).
  • The port must be the same as in the option serialproxy_port of this section.
  • If you choose to use a secured websocket connection, then start this option with wss:// instead of the unsecured ws://. The options cert and key in the [DEFAULT] section have to be set for that.
proxyclient_address
Type:string
Default:127.0.0.1

The IP address to which proxy clients (like nova-serialproxy) should connect to get the serial console of an instance.

This is typically the IP address of the host of a nova-compute service.

serialproxy_host
Type:string
Default:0.0.0.0

The IP address which is used by the nova-serialproxy service to listen for incoming requests.

The nova-serialproxy service listens on this IP address for incoming connection requests to instances which expose serial console.

Related options:

  • Ensure that this is the same IP address which is defined in the option base_url of this section or use 0.0.0.0 to listen on all addresses.
serialproxy_port
Type:port number
Default:6083
Minimum Value:0
Maximum Value:65535

The port number which is used by the nova-serialproxy service to listen for incoming requests.

The nova-serialproxy service listens on this port number for incoming connection requests to instances which expose serial console.

Related options:

  • Ensure that this is the same port number which is defined in the option base_url of this section.
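Tying these options together, a sketch of a compute node's [serial_console] section, assuming hypothetical controller and compute addresses (the controller would run nova-serialproxy with the matching serialproxy_host and serialproxy_port):

[serial_console]
enabled = True
port_range = 10000:20000
base_url = ws://controller.example.com:6083/
proxyclient_address = 192.0.2.21
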
service_user

Configuration options for service to service authentication using a service token. These options allow sending a service token along with the user’s token when contacting external REST APIs.

send_service_user_token
Type:boolean
Default:false

When True, if sending a user token to a REST API, also send a service token.

Nova often reuses the user token provided to the nova-api to talk to other REST APIs, such as Cinder, Glance and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear it is Nova calling the service on the user’s behalf, we include a service token along with the user token. Should the user’s token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware.
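A sketch of enabling the service token, assuming a hypothetical Keystone endpoint and hypothetical service user credentials:

[service_user]
send_service_user_token = True
auth_type = password
auth_url = http://controller.example.com:5000/v3
username = nova
password = secretpass
user_domain_name = Default
project_name = service
project_domain_name = Default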

cafile
Type:string
Default:<None>

PEM encoded Certificate Authority to use when verifying HTTPs connections.

certfile
Type:string
Default:<None>

PEM encoded client certificate cert file

keyfile
Type:string
Default:<None>

PEM encoded client certificate key file

insecure
Type:boolean
Default:false

Verify HTTPS connections.

timeout
Type:integer
Default:<None>

Timeout value for http requests

collect_timing
Type:boolean
Default:false

Collect per-API call timing information.

split_loggers
Type:boolean
Default:false

Log requests to multiple loggers.

auth_type
Type:unknown type
Default:<None>

Authentication type to load

Deprecated Variations:
  Group: service_user, Name: auth_plugin
auth_section
Type:unknown type
Default:<None>

Config Section from which to load plugin specific options

auth_url
Type:unknown type
Default:<None>

Authentication URL

system_scope
Type:unknown type
Default:<None>

Scope for system operations

domain_id
Type:unknown type
Default:<None>

Domain ID to scope to

domain_name
Type:unknown type
Default:<None>

Domain name to scope to

project_id
Type:unknown type
Default:<None>

Project ID to scope to

project_name
Type:unknown type
Default:<None>

Project name to scope to

project_domain_id
Type:unknown type
Default:<None>

Domain ID containing project

project_domain_name
Type:unknown type
Default:<None>

Domain name containing project

trust_id
Type:unknown type
Default:<None>

Trust ID

default_domain_id
Type:unknown type
Default:<None>

Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

default_domain_name
Type:unknown type
Default:<None>

Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

user_id
Type:unknown type
Default:<None>

User ID

username
Type:unknown type
Default:<None>

Username

Deprecated Variations:
  Group: service_user, Name: user-name
  Group: service_user, Name: user_name
user_domain_id
Type:unknown type
Default:<None>

User’s domain id

user_domain_name
Type:unknown type
Default:<None>

User’s domain name

password
Type:unknown type
Default:<None>

User’s password

tenant_id
Type:unknown type
Default:<None>

Tenant ID

tenant_name
Type:unknown type
Default:<None>

Tenant Name

spice

The SPICE console feature allows you to connect to a guest virtual machine. SPICE is a replacement for the fairly limited VNC protocol.

The following requirements must be met in order to use SPICE:

  • Virtualization driver must be libvirt
  • spice.enabled set to True
  • vnc.enabled set to False
  • update html5proxy_base_url
  • update server_proxyclient_address
enabled
Type:boolean
Default:false

Enable SPICE related features.

Related options:

  • VNC must be explicitly disabled to get access to the SPICE console. Set the enabled option to False in the [vnc] section to disable the VNC console.
agent_enabled
Type:boolean
Default:true

Enable the SPICE guest agent support on the instances.

The Spice agent works with the Spice protocol to offer a better guest console experience. However, the Spice console can still be used without the Spice Agent. With the Spice agent installed the following features are enabled:

  • Copy & Paste of text and images between the guest and client machine
  • Automatic adjustment of resolution when the client screen changes - e.g. if you make the Spice console full screen the guest resolution will adjust to match it rather than letterboxing.
  • Better mouse integration - The mouse can be captured and released without needing to click inside the console or press keys to release it. The performance of mouse movement is also improved.
html5proxy_base_url
Type:URI
Default:http://127.0.0.1:6082/spice_auto.html

Location of the SPICE HTML5 console proxy.

End users use this URL to connect to the nova-spicehtml5proxy service, which forwards the request to the console of an instance.

In order to use SPICE console, the service nova-spicehtml5proxy should be running. This service is typically launched on the controller node.

Possible values:

  • Must be a valid URL of the form: http://host:port/spice_auto.html where host is the node running nova-spicehtml5proxy and the port is typically 6082. Consider not using the default value, as it is not well defined for any real deployment.

Related options:

  • This option depends on html5proxy_host and html5proxy_port options. The access URL returned by the compute node must have the host and port where the nova-spicehtml5proxy service is listening.
server_listen
Type:string
Default:127.0.0.1

The address where the SPICE server running on the instances should listen.

Typically, the nova-spicehtml5proxy proxy client runs on the controller node and connects over the private network to this address on the compute node(s).

Possible values:

  • IP address to listen on.
server_proxyclient_address
Type:string
Default:127.0.0.1

The address used by nova-spicehtml5proxy client to connect to instance console.

Typically, the nova-spicehtml5proxy proxy client runs on the controller node and connects over the private network to this address on the compute node(s).

Possible values:

  • Any valid IP address on the compute node.

Related options:

  • This option depends on the server_listen option. The proxy client must be able to access the address specified in server_listen using the value of this option.
keymap
Type:string
Default:<None>

A keyboard layout which is supported by the underlying hypervisor on this node.

Possible values:

  • This is usually an ‘IETF language tag’ (default is ‘en-us’). If you use QEMU as hypervisor, you should find the list of supported keyboard layouts at /usr/share/qemu/keymaps.

Warning

This option is deprecated for removal since 18.0.0. Its value may be silently ignored in the future.

Reason:Configuring this option forces QEMU to do keymap conversions. These conversions are lossy and can result in significant issues for users of non en-US keyboards. Refer to bug #1682020 for more information.
html5proxy_host
Type:host address
Default:0.0.0.0

IP address or a hostname on which the nova-spicehtml5proxy service listens for incoming requests.

Related options:

  • This option depends on the html5proxy_base_url option. The nova-spicehtml5proxy service must be listening on a host that is accessible from the HTML5 client.
html5proxy_port
Type:port number
Default:6082
Minimum Value:0
Maximum Value:65535

Port on which the nova-spicehtml5proxy service listens for incoming requests.

Related options:

  • This option depends on the html5proxy_base_url option. The nova-spicehtml5proxy service must be listening on a port that is accessible from the HTML5 client.
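Combining the options above, a hypothetical compute node configuration that switches console access from VNC to SPICE, using hypothetical controller and compute addresses:

[vnc]
enabled = False

[spice]
enabled = True
agent_enabled = True
html5proxy_base_url = http://controller.example.com:6082/spice_auto.html
server_listen = 192.0.2.21
server_proxyclient_address = 192.0.2.21
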
upgrade_levels

upgrade_levels options are used to set version cap for RPC messages sent between different nova services.

By default all services send messages using the latest version they know about.

The compute upgrade level is an important part of rolling upgrades where old and new nova-compute services run side by side.

The other options can largely be ignored, and are only kept to help with a possible future backport issue.

compute
Type:string
Default:<None>

Compute RPC API version cap.

By default, we always send messages using the most recent version the client knows about.

Where you have old and new compute services running, you should set this to the lowest deployed version. This is to guarantee that all services never send messages that one of the compute nodes can’t understand. Note that we only support upgrading from release N to release N+1.

Set this option to “auto” if you want to let the compute RPC module automatically determine what version to use based on the service versions in the deployment.

Possible values:

  • By default send the latest version the client knows about
  • ‘auto’: Automatically determines what version to use based on the service versions in the deployment.
  • A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
  • An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.
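For example, letting the compute RPC module pick the version automatically during a rolling upgrade:

[upgrade_levels]
compute = auto
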
cells
Type:string
Default:<None>

Cells RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
  • An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.
intercell
Type:string
Default:<None>

Intercell RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
  • An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.
cert
Type:string
Default:<None>

Cert RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
  • An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.

Warning

This option is deprecated for removal since 18.0.0. Its value may be silently ignored in the future.

Reason:The nova-cert service was removed in 16.0.0 (Pike) so this option is no longer used.
scheduler
Type:string
Default:<None>

Scheduler RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
  • An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.
conductor
Type:string
Default:<None>

Conductor RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
  • An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.
console
Type:string
Default:<None>

Console RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
  • An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.
consoleauth
Type:string
Default:<None>

Consoleauth RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
  • An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.

Warning

This option is deprecated for removal since 18.0.0. Its value may be silently ignored in the future.

Reason:The nova-consoleauth service was deprecated in 18.0.0 (Rocky) and will be removed in an upcoming release.
network
Type:string
Default:<None>

Network RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
  • An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.

Warning

This option is deprecated for removal since 18.0.0. Its value may be silently ignored in the future.

Reason:The nova-network service was deprecated in 14.0.0 (Newton) and will be removed in an upcoming release.
baseapi
Type:string
Default:<None>

Base API RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
  • An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.
vault
root_token_id
Type:string
Default:<None>

Root token for Vault.

vault_url
Type:string
Default:http://127.0.0.1:8200

Use this endpoint to connect to Vault, for example: “http://127.0.0.1:8200”

ssl_ca_crt_file
Type:string
Default:<None>

Absolute path to ca cert file

use_ssl
Type:boolean
Default:false

SSL Enabled/Disabled
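A sketch of a [vault] section, assuming a hypothetical Vault server reachable over TLS and a hypothetical token value:

[vault]
vault_url = https://vault.example.com:8200
root_token_id = hypothetical-root-token
use_ssl = True
ssl_ca_crt_file = /etc/nova/vault-ca.pem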

vendordata_dynamic_auth

Options within this group control the authentication of the vendordata subsystem of the metadata API server (and config drive) with external systems.

cafile
Type:string
Default:<None>

PEM encoded Certificate Authority to use when verifying HTTPs connections.

certfile
Type:string
Default:<None>

PEM encoded client certificate cert file

keyfile
Type:string
Default:<None>

PEM encoded client certificate key file

insecure
Type:boolean
Default:false

Verify HTTPS connections.

timeout
Type:integer
Default:<None>

Timeout value for http requests

collect_timing
Type:boolean
Default:false

Collect per-API call timing information.

split_loggers
Type:boolean
Default:false

Log requests to multiple loggers.

auth_type
Type:unknown type
Default:<None>

Authentication type to load

Deprecated Variations:
  Group: vendordata_dynamic_auth, Name: auth_plugin
auth_section
Type:unknown type
Default:<None>

Config Section from which to load plugin specific options

auth_url
Type:unknown type
Default:<None>

Authentication URL

system_scope
Type:unknown type
Default:<None>

Scope for system operations

domain_id
Type:unknown type
Default:<None>

Domain ID to scope to

domain_name
Type:unknown type
Default:<None>

Domain name to scope to

project_id
Type:unknown type
Default:<None>

Project ID to scope to

project_name
Type:unknown type
Default:<None>

Project name to scope to

project_domain_id
Type:unknown type
Default:<None>

Domain ID containing project

project_domain_name
Type:unknown type
Default:<None>

Domain name containing project

trust_id
Type:unknown type
Default:<None>

Trust ID

default_domain_id
Type:unknown type
Default:<None>

Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

default_domain_name
Type:unknown type
Default:<None>

Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

user_id
Type:unknown type
Default:<None>

User ID

username
Type:unknown type
Default:<None>

Username

Deprecated Variations:
  Group: vendordata_dynamic_auth, Name: user-name
  Group: vendordata_dynamic_auth, Name: user_name
user_domain_id
Type:unknown type
Default:<None>

User’s domain id

user_domain_name
Type:unknown type
Default:<None>

User’s domain name

password
Type:unknown type
Default:<None>

User’s password

tenant_id
Type:unknown type
Default:<None>

Tenant ID

tenant_name
Type:unknown type
Default:<None>

Tenant Name

vmware

Related options: The following options must be set in order to launch VMware-based virtual machines (see the sketch after this list):

  • compute_driver: Must use vmwareapi.VMwareVCDriver.
  • vmware.host_username
  • vmware.host_password
  • vmware.cluster_name
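A minimal sketch, assuming a hypothetical vCenter address, cluster name and credentials:

[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = vcenter.example.com
host_username = administrator@vsphere.local
host_password = secretpass
cluster_name = cluster1
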
vlan_interface
Type:string
Default:vmnic0

This option specifies the physical ethernet adapter name for VLAN networking.

Set the vlan_interface configuration option to match the ESX host interface that handles VLAN-tagged VM traffic.

Possible values:

  • Any valid string representing VLAN interface name
integration_bridge
Type:string
Default:<None>

This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set.

Possible values:

  • Any valid string representing the name of the integration bridge
console_delay_seconds
Type:integer
Default:<None>
Minimum Value:0

Set this value if affected by an increased network latency causing repeated characters when typing in a remote console.

serial_port_service_uri
Type:string
Default:<None>

Identifies the remote system where the serial port traffic will be sent.

This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be a virtual serial port concentrator that will collect console logs. If this is not set, no serial ports will be added to the created VMs.

Possible values:

  • Any valid URI
serial_port_proxy_uri
Type:URI
Default:<None>

Identifies a proxy service that provides network access to the serial_port_service_uri.

Possible values:

  • Any valid URI (The scheme is ‘telnet’ or ‘telnets’.)

Related options:

  • serial_port_service_uri: This option is ignored if serial_port_service_uri is not specified.

serial_log_dir
Type:string
Default:/opt/vmware/vspc

Specifies the directory where the Virtual Serial Port Concentrator is storing console log files. It should match the ‘serial_log_dir’ config value of VSPC.

host_ip
Type:host address
Default:<None>

Hostname or IP address for connection to VMware vCenter host.

host_port
Type:port number
Default:443
Minimum Value:0
Maximum Value:65535

Port for connection to VMware vCenter host.

host_username
Type:string
Default:<None>

Username for connection to VMware vCenter host.

host_password
Type:string
Default:<None>

Password for connection to VMware vCenter host.

ca_file
Type:string
Default:<None>

Specifies the CA bundle file to be used in verifying the vCenter server certificate.

insecure
Type:boolean
Default:false

If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification.

Related options:

  • ca_file: This option is ignored if “ca_file” is set.

cluster_name
Type:string
Default:<None>

Name of a VMware Cluster ComputeResource.

datastore_regex
Type:string
Default:<None>

Regular expression pattern to match the name of datastore.

The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the datastores that have a name starting with "nas".

NOTE: If no regex is given, it just picks the datastore with the most free space.

Possible values:

  • A regular expression matching the names of the datastores to be used
task_poll_interval
Type:floating point
Default:0.5

Time interval in seconds to poll remote tasks invoked on VMware VC server.

api_retry_count
Type:integer
Default:10
Minimum Value:0

Number of times VMware vCenter server API must be retried on connection failures, e.g. socket error, etc.

vnc_port
Type:port number
Default:5900
Minimum Value:0
Maximum Value:65535

This option specifies VNC starting port.

Every VM created by an ESX host can enable a VNC client for remote connections. The ‘vnc_port’ option sets the default starting port for these VNC clients.

Possible values:

  • Any valid port number within the range 5900 to (5900 + vnc_port_total)

Related options: The below options should be set to enable the VNC client.

  • vnc.enabled = True
  • vnc_port_total

vnc_port_total
Type:integer
Default:10000
Minimum Value:0

Total number of VNC ports.

vnc_keymap
Type:string
Default:en-us

Keymap for VNC.

The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default.

Possible values:

  • A keyboard layout which is supported by the underlying hypervisor on this node. This is usually an ‘IETF language tag’ (for example ‘en-us’).
use_linked_clone
Type:boolean
Default:true

This option enables/disables the use of linked clone.

The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don’t have to copy the file again from the OpenStack Image service.

If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided, as it creates a copy of the virtual machine that shares virtual disks with its parent VM.

connection_pool_size
Type:integer
Default:10
Minimum Value:10

This option sets the HTTP connection pool size.

The connection pool size is the maximum number of connections from nova to vSphere. It should only be increased if there are warnings indicating that the connection pool is full, otherwise, the default should suffice.

pbm_enabled
Type:boolean
Default:false

This option enables or disables storage policy based placement of instances.

Related options:

  • pbm_default_policy
pbm_wsdl_location
Type:string
Default:<None>

This option specifies the PBM service WSDL file location URL.

Setting this will disable storage policy based placement of instances.

Possible values:

  • Any valid URL pointing to the PBM service WSDL file

pbm_default_policy
Type:string
Default:<None>

This option specifies the default policy to be used.

If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used.

Possible values:

  • Any valid storage policy such as VSAN default storage policy

Related options:

  • pbm_enabled
maximum_objects
Type:integer
Default:100
Minimum Value:0

This option specifies the limit on the maximum number of objects to return in a single result.

A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests.

cache_prefix
Type:string
Default:<None>

This option adds a prefix to the folder where cached images are stored.

This is not the full path - just a folder prefix. This should only be used when a datastore cache is shared between compute nodes.

Note: This should only be used when the compute nodes are running on the same host or they have a shared file system.

Possible values:

  • Any string representing the cache prefix to the folder
vnc

Virtual Network Computer (VNC) can be used to provide remote desktop console access to instances for tenants and/or administrators.

enabled
Type:boolean
Default:true

Enable VNC related features.

Guests will get created with graphical devices to support this. Clients (for example Horizon) can then establish a VNC connection to the guest.

Deprecated Variations:
  Group: DEFAULT, Name: vnc_enabled
keymap
Type:string
Default:<None>

Keymap for VNC.

The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default.

Possible values:

  • A keyboard layout which is supported by the underlying hypervisor on this node. This is usually an ‘IETF language tag’ (for example ‘en-us’). If you use QEMU as hypervisor, you should find the list of supported keyboard layouts at /usr/share/qemu/keymaps.
Deprecated Variations:
  Group: DEFAULT, Name: vnc_keymap

Warning

This option is deprecated for removal since 18.0.0. Its value may be silently ignored in the future.

Reason:Configuring this option forces QEMU to do keymap conversions. These conversions are lossy and can result in significant issues for users of non en-US keyboards. You should instead use a VNC client that supports Extended Key Event messages, such as noVNC 1.0.0. Refer to bug #1682020 for more information.
server_listen
Type:host address
Default:127.0.0.1

The IP address or hostname on which an instance should listen for incoming VNC connection requests on this node.

Deprecated Variations:
  Group: DEFAULT, Name: vncserver_listen
  Group: vnc, Name: vncserver_listen
server_proxyclient_address
Type:host address
Default:127.0.0.1

Private, internal IP address or hostname of VNC console proxy.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients.

This option sets the private address to which proxy clients, such as nova-xvpvncproxy, should connect.

Deprecated Variations:
  Group: DEFAULT, Name: vncserver_proxyclient_address
  Group: vnc, Name: vncserver_proxyclient_address
novncproxy_base_url
Type:URI
Default:http://127.0.0.1:6080/vnc_auto.html

Public address of noVNC VNC console proxy.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.

This option sets the public base URL to which client systems will connect. noVNC clients can use this address to connect to the noVNC instance and, by extension, the VNC sessions.

If using noVNC >= 1.0.0, you should use vnc_lite.html instead of vnc_auto.html.

Related options:

  • novncproxy_host
  • novncproxy_port
Deprecated Variations:
  Group: DEFAULT, Name: novncproxy_base_url
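As an illustration, a compute node [vnc] section using hypothetical management network addresses, with the noVNC proxy running on the controller:

[vnc]
enabled = True
server_listen = 192.0.2.21
server_proxyclient_address = 192.0.2.21
novncproxy_base_url = http://controller.example.com:6080/vnc_auto.html
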
xvpvncproxy_host
Type:host address
Default:0.0.0.0

IP address or hostname that the XVP VNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.

This option sets the private address to which the XVP VNC console proxy service should bind.

Related options:

  • xvpvncproxy_port
  • xvpvncproxy_base_url
Deprecated Variations:
  Group: DEFAULT, Name: xvpvncproxy_host
xvpvncproxy_port
Type:port number
Default:6081
Minimum Value:0
Maximum Value:65535

Port that the XVP VNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.

This option sets the private port to which the XVP VNC console proxy service should bind.

Related options:

  • xvpvncproxy_host
  • xvpvncproxy_base_url
Deprecated Variations:
  Group: DEFAULT, Name: xvpvncproxy_port
xvpvncproxy_base_url
Type:URI
Default:http://127.0.0.1:6081/console

Public URL address of XVP VNC console proxy.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.

This option sets the public base URL to which client systems will connect. XVP clients can use this address to connect to the XVP instance and, by extension, the VNC sessions.

Related options:

  • xvpvncproxy_host
  • xvpvncproxy_port
Deprecated Variations:
  Group: DEFAULT, Name: xvpvncproxy_base_url
novncproxy_host
Type:string
Default:0.0.0.0

IP address that the noVNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.

This option sets the private address to which the noVNC console proxy service should bind.

Related options:

  • novncproxy_port
  • novncproxy_base_url
Deprecated Variations:
  Group: DEFAULT, Name: novncproxy_host
novncproxy_port
Type:port number
Default:6080
Minimum Value:0
Maximum Value:65535

Port that the noVNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.

This option sets the private port to which the noVNC console proxy service should bind.

Related options:

  • novncproxy_host
  • novncproxy_base_url
Deprecated Variations:
  Group: DEFAULT, Name: novncproxy_port
auth_schemes
Type:list
Default:none

The authentication schemes to use with the compute node.

Control what RFB authentication schemes are permitted for connections between the proxy and the compute host. If multiple schemes are enabled, the first matching scheme will be used, thus the strongest schemes should be listed first.

Related options:

  • [vnc]vencrypt_client_key, [vnc]vencrypt_client_cert: must also be set
vencrypt_client_key
Type:string
Default:<None>

The path to the client key PEM file (for x509)

The fully qualified path to a PEM file containing the private key which the VNC proxy server presents to the compute node during VNC authentication.

Related options:

  • vnc.auth_schemes: must include vencrypt
  • vnc.vencrypt_client_cert: must also be set
vencrypt_client_cert
Type:string
Default:<None>

The path to the client certificate PEM file (for x509)

The fully qualified path to a PEM file containing the x509 certificate which the VNC proxy server presents to the compute node during VNC authentication.

Related options:

  • vnc.auth_schemes: must include vencrypt
  • vnc.vencrypt_client_key: must also be set
vencrypt_ca_certs
Type:string
Default:<None>

The path to the CA certificate PEM file

The fully qualified path to a PEM file containing one or more x509 certificates for the certificate authorities used by the compute node VNC server.

Related options:

  • vnc.auth_schemes: must include vencrypt
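Putting the VeNCrypt options together, a sketch assuming hypothetical certificate and key paths:

[vnc]
auth_schemes = vencrypt,none
vencrypt_client_key = /etc/nova/vnc/client-key.pem
vencrypt_client_cert = /etc/nova/vnc/client-cert.pem
vencrypt_ca_certs = /etc/nova/vnc/ca-cert.pem
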
workarounds

A collection of workarounds used to mitigate bugs or issues found in system tools (e.g. Libvirt or QEMU) or Nova itself under certain conditions. These should only be enabled in exceptional circumstances. All options are linked against bug IDs, where more information on the issue can be found.

disable_rootwrap
Type:boolean
Default:false

Use sudo instead of rootwrap.

Allow fallback to sudo for performance reasons.

For more information, refer to the associated bug report.

Possible values:

  • True: Use sudo instead of rootwrap
  • False: Use rootwrap as usual

Interdependencies with other options:

  • Any options that affect ‘rootwrap’ will be ignored.
disable_libvirt_livesnapshot
Type:boolean
Default:false

Disable live snapshots when using the libvirt driver.

Live snapshots allow the snapshot of the disk to happen without an interruption to the guest, using coordination with a guest agent to quiesce the filesystem.

When using libvirt 1.2.2, live snapshots fail intermittently under load (likely related to concurrent libvirt/qemu operations). This config option provides a mechanism to disable live snapshot, in favor of cold snapshot, while this is resolved. Cold snapshot causes an instance outage while the guest is going through the snapshotting process.

For more information, refer to the associated bug report.

Possible values:

  • True: Live snapshot is disabled when using libvirt
  • False: Live snapshots are always used when snapshotting (as long as there is a new enough libvirt and the backend storage supports it)
handle_virt_lifecycle_events
Type:boolean
Default:true

Enable handling of events emitted from compute drivers.

Many compute drivers emit lifecycle events, which are events that occur when, for example, an instance is starting or stopping. If the instance is going through task state changes due to an API operation, like resize, the events are ignored.

This is an advanced feature which allows the hypervisor to signal to the compute service that an unexpected state change has occurred in an instance and that the instance can be shutdown automatically. Unfortunately, this can race in some conditions, for example during reboot operations or when the compute service or the host is rebooted (planned or due to an outage). If such races are common, then it is advisable to disable this feature.

Care should be taken when this feature is disabled and ‘sync_power_state_interval’ is set to a negative value. In this case, any instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.

For more information, refer to the associated bug report.

Interdependencies with other options:

  • If sync_power_state_interval is negative and this feature is disabled, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.
disable_group_policy_check_upcall
Type:boolean
Default:false

Disable the server group policy check upcall in compute.

In order to detect races with server group affinity policy, the compute service attempts to validate that the policy was not violated by the scheduler. It does this by making an upcall to the API database to list the instances in the server group for the instance that it is booting, which violates our api/cell isolation goals. Eventually this will be solved by proper affinity guarantees in the scheduler and placement service, but until then, this late check is needed to ensure proper affinity policy.

Operators that desire api/cell isolation over this check should enable this flag, which will avoid making that upcall from compute.

Related options:

  • [filter_scheduler]/track_instance_changes also relies on upcalls from the compute service to the scheduler service.
enable_consoleauth
Type:boolean
Default:false

Enable the consoleauth service to avoid resetting unexpired consoles.

Console token authorizations have moved from the nova-consoleauth service to the database, so all new consoles will be supported by the database backend. With this, consoles that existed before database backend support will be reset. For most operators, this should be a minimal disruption as the default TTL of a console token is 10 minutes.

Operators that have a much longer token TTL configured or otherwise wish to avoid immediately resetting all existing consoles can enable this flag to continue using the nova-consoleauth service in addition to the database backend. Once all of the old nova-consoleauth supported console tokens have expired, this flag should be disabled. For example, if a deployment has configured a token TTL of one hour, the operator may disable the flag one hour after deploying the new code during an upgrade.

Note

Cells v1 was not converted to use the database backend for console token authorizations. Cells v1 console token authorizations will continue to be supported by the nova-consoleauth service and use of the [workarounds]/enable_consoleauth option does not apply to Cells v1 users.

Related options:

  • [consoleauth]/token_ttl

Warning

This option is deprecated for removal since 18.0.0. Its value may be silently ignored in the future.

Reason:This option has been added as deprecated originally because it is used for avoiding an upgrade issue and it will not be used in the future. See the help text for more details.
wsgi

Options under this group are used to configure WSGI (Web Server Gateway Interface). WSGI is used to serve API requests.

api_paste_config
Type:string
Default:api-paste.ini

This option represents a file name for the paste.deploy config for nova-api.

Possible values:

  • A string representing file name for the paste.deploy config.
Deprecated Variations:
  Group: DEFAULT, Name: api_paste_config
wsgi_log_format
Type:string
Default:%(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f

This option represents a Python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.

This option is used for building custom request loglines when running nova-api under eventlet. If used under uwsgi or apache, this option has no effect.

Possible values:

  • ‘%(client_ip)s “%(request_line)s” status: %(status_code)s’ ‘len: %(body_length)s time: %(wall_seconds).7f’ (default)
  • Any formatted string formed by specific values.
Deprecated Variations
Group:DEFAULT
Name:wsgi_log_format

Warning

This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.

Reason:This option only works when running nova-api under eventlet, and encodes very eventlet specific pieces of information. Starting in Pike the preferred model for running nova-api is under uwsgi or apache mod_wsgi.
secure_proxy_ssl_header
Type:string
Default:<None>

This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by an SSL-terminating proxy.

Possible values:

  • None (default) - the request scheme is not influenced by any HTTP headers
  • Valid HTTP header, like HTTP_X_FORWARDED_PROTO

WARNING: Do not set this unless you know what you are doing.

Make sure ALL of the following are true before setting this (assuming the values from the example above):

  • Your API is behind a proxy.
  • Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it.
  • Your proxy sets the X-Forwarded-Proto header and sends it to the API, but only for requests that originally come in via HTTPS.

If any of those are not true, you should keep this setting set to None.

Deprecated Variations
Group:DEFAULT
Name:secure_proxy_ssl_header
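
If all of the conditions above hold, a typical (illustrative) setting is:

[wsgi]
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO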
ssl_ca_file
Type:string
Default:<None>

This option allows setting path to the CA certificate file that should be used to verify connecting clients.

Possible values:

  • String representing path to the CA certificate file.

Related options:

  • enabled_ssl_apis
Deprecated Variations
Group:DEFAULT
Name:ssl_ca_file
ssl_cert_file
Type:string
Default:<None>

This option allows setting path to the SSL certificate of API server.

Possible values:

  • String representing path to the SSL certificate.

Related options:

  • enabled_ssl_apis
Deprecated Variations
Group:DEFAULT
Name:ssl_cert_file
ssl_key_file
Type:string
Default:<None>

This option specifies the path to the file where SSL private key of API server is stored when SSL is in effect.

Possible values:

  • String representing path to the SSL private key.

Related options:

  • enabled_ssl_apis
Deprecated Variations
Group:DEFAULT
Name:ssl_key_file
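
The three TLS options are typically set together, alongside the [DEFAULT] enabled_ssl_apis option they all relate to. A sketch in which the file paths and API names are placeholders:

[DEFAULT]
enabled_ssl_apis = osapi_compute,metadata

[wsgi]
ssl_cert_file = /etc/nova/ssl/nova-api.crt
ssl_key_file = /etc/nova/ssl/nova-api.key
# Only needed if connecting clients must present certificates.
ssl_ca_file = /etc/nova/ssl/ca.crt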
tcp_keepidle
Type:integer
Default:600
Minimum Value:0

This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep the connection active. TCP generates a KEEPALIVE transmission for an application that requests to keep the connection active. Not supported on OS X.

Related options:

  • keep_alive
Deprecated Variations
Group:DEFAULT
Name:tcp_keepidle
default_pool_size
Type:integer
Default:1000
Minimum Value:0

This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option.

Deprecated Variations
Group:DEFAULT
Name:wsgi_default_pool_size
max_header_line
Type:integer
Default:16384
Minimum Value:0

This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).

Since TCP is a stream-based protocol, in order to reuse a connection, HTTP has to have a way to indicate the end of the previous response and the beginning of the next. Hence, in a keep_alive case, all messages must have a self-defined message length.

Deprecated Variations
Group:DEFAULT
Name:max_header_line
keep_alive
Type:boolean
Default:true

This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse.

Possible values:

  • True : reuse HTTP connection.
  • False : closes the client socket connection explicitly.

Related options:

  • tcp_keepidle
Deprecated Variations
Group:DEFAULT
Name:wsgi_keep_alive
client_socket_timeout
Type:integer
Default:900
Minimum Value:0

This option specifies the timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds, it will be closed. It indicates the timeout on individual read/writes on the socket connection. To wait forever, set to 0.

Deprecated Variations
Group:DEFAULT
Name:client_socket_timeout
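
These eventlet server knobs are usually tuned together. A sketch that simply makes the documented defaults explicit:

[wsgi]
# Reuse TCP connections for multiple HTTP request/response pairs.
keep_alive = true
# Send TCP keepalive probes after 10 minutes of idleness.
tcp_keepidle = 600
# Close client sockets that are idle for 15 minutes.
client_socket_timeout = 900
# Cap the number of greenthreads serving requests concurrently.
default_pool_size = 1000
# Raise this if large Keystone tokens overflow request headers.
max_header_line = 16384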
xenserver

XenServer options are used when the compute_driver is set to use XenServer (compute_driver=xenapi.XenAPIDriver).

Must specify connection_url, connection_password and ovs_integration_bridge to use compute_driver=xenapi.XenAPIDriver.
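
For instance, a minimal connection sketch (the URL, password, and bridge name are placeholders for your environment):

[DEFAULT]
compute_driver = xenapi.XenAPIDriver

[xenserver]
# Management network address of the XenServer host (placeholder).
connection_url = http://xenserver.example.org
connection_username = root
# Placeholder; use your real XenAPI password.
connection_password = CHANGE_ME
# Integration bridge name is environment-specific (placeholder).
ovs_integration_bridge = xapi1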

agent_timeout
Type:integer
Default:30
Minimum Value:0

Number of seconds to wait for agent’s reply to a request.

Nova configures/performs certain administrative actions on a server with the help of an agent that's installed on the server. The communication between Nova and the agent is achieved via sharing messages, called records, over xenstore, a shared storage across all the domains on a XenServer host. Operations performed by the agent on behalf of nova are: 'version', 'key_init', 'password', 'resetnetwork', 'inject_file', and 'agentupdate'.

To perform one of the above operations, the xapi ‘agent’ plugin writes the command and its associated parameters to a certain location known to the domain and awaits response. On being notified of the message, the agent performs appropriate actions on the server and writes the result back to xenstore. This result is then read by the xapi ‘agent’ plugin to determine the success/failure of the operation.

This config option determines how long the xapi ‘agent’ plugin shall wait to read the response off of xenstore for a given request/command. If the agent on the instance fails to write the result in this time period, the operation is considered to have timed out.

Related options:

  • agent_version_timeout
  • agent_resetnetwork_timeout
agent_version_timeout
Type:integer
Default:300
Minimum Value:0

Number of seconds to wait for agent’t reply to version request.

This indicates the amount of time xapi ‘agent’ plugin waits for the agent to respond to the ‘version’ request specifically. The generic timeout for agent communication agent_timeout is ignored in this case.

During the build process the 'version' request is used to determine if the agent is available/operational to perform other requests such as 'resetnetwork', 'password', 'key_init' and 'inject_file'. If the 'version' call fails, the remaining agent-based configuration steps are skipped. So, this configuration option can also be interpreted as the time within which the agent is expected to be fully operational.

agent_resetnetwork_timeout
Type:integer
Default:60
Minimum Value:0

Number of seconds to wait for agent’s reply to resetnetwork request.

This indicates the amount of time xapi ‘agent’ plugin waits for the agent to respond to the ‘resetnetwork’ request specifically. The generic timeout for agent communication agent_timeout is ignored in this case.
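
The three agent timeouts can be tuned together; the sketch below simply makes the documented defaults explicit:

[xenserver]
agent_timeout = 30
agent_version_timeout = 300
agent_resetnetwork_timeout = 60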

agent_path
Type:string
Default:usr/sbin/xe-update-networking

Path to locate guest agent on the server.

Specifies the path in which the XenAPI guest agent should be located. If the agent is present, network configuration is not injected into the image.

Related options:

For this option to have an effect:

  • flat_injected should be set to True
  • compute_driver should be set to xenapi.XenAPIDriver

disable_agent
Type:boolean
Default:false

Disables the use of XenAPI agent.

This configuration option suggests whether the use of the agent should be enabled or not regardless of what image properties are present. Image properties have an effect only when this is set to False. Read the description of the config option use_agent_default for more information.

Related options:

  • use_agent_default
use_agent_default
Type:boolean
Default:false

Whether or not to use the agent by default when its usage is enabled but not indicated by the image.

The use of XenAPI agent can be disabled altogether using the configuration option disable_agent. However, if it is not disabled, the use of an agent can still be controlled by the image in use through one of its properties, xenapi_use_agent. If this property is either not present or specified incorrectly on the image, the use of agent is determined by this configuration option.

Note that if this configuration is set to True when the agent is not present, the boot times will increase significantly.

Related options:

  • disable_agent
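
For example, to fall back to using the agent whenever an image does not set the xenapi_use_agent property (illustrative, not a recommendation):

[xenserver]
disable_agent = False
use_agent_default = True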
login_timeout
Type:integer
Default:10
Minimum Value:0

Timeout in seconds for XenAPI login.

connection_concurrent
Type:integer
Default:5
Minimum Value:1

Maximum number of concurrent XenAPI connections.

In nova, multiple XenAPI requests can happen at a time. Configuring this option will parallelize access to the XenAPI session, which allows you to make concurrent XenAPI connections.

cache_images
Type:string
Default:all
Valid Values:all, some, none

Cache glance images locally.

The value for this option must be chosen from the choices listed here. Configuring a value other than these will default to ‘all’.

Note: There is nothing that deletes these images.

Possible values

all
Will cache all images
some
Will only cache images that have the image_property cache_in_nova=True
none
Turns off caching entirely
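
A sketch that caches only explicitly marked images; the openstack client command shown is illustrative:

[xenserver]
cache_images = some

Images then opt in through the cache_in_nova image property, e.g.:

openstack image set --property cache_in_nova=True <image>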
image_compression_level
Type:integer
Default:<None>
Minimum Value:1
Maximum Value:9

Compression level for images.

By setting this option we can configure the gzip compression level. This option sets GZIP environment variable before spawning tar -cz to force the compression level. It defaults to none, which means the GZIP environment variable is not set and the default (usually -6) is used.

Possible values:

  • Range is 1-9, e.g., 9 for gzip -9, 9 being most compressed but most CPU intensive on dom0.
  • Any values out of this range will default to None.
default_os_type
Type:string
Default:linux

Default OS type used when uploading an image to glance

block_device_creation_timeout
Type:integer
Default:10
Minimum Value:1

Time in secs to wait for a block device to be created

max_kernel_ramdisk_size
Type:integer
Default:16777216

Maximum size in bytes of kernel or ramdisk images.

Specifying the maximum size of kernel or ramdisk will avoid copying large files to dom0 and filling up /boot/guest.

sr_matching_filter
Type:string
Default:default-sr:true

Filter for finding the SR to be used to install guest instances on.

Possible values:

  • To use the Local Storage in default XenServer/XCP installations set this flag to other-config:i18n-key=local-storage.
  • To select an SR with a different matching criteria, you could set it to other-config:my_favorite_sr=true.
  • To fall back on the Default SR, as displayed by XenCenter, set this flag to: default-sr:true.
sparse_copy
Type:boolean
Default:true

Whether to use sparse_copy for copying data on a resize down. (False will use standard dd). This speeds up resizes down considerably since large runs of zeros won’t have to be rsynced.

num_vbd_unplug_retries
Type:integer
Default:10
Minimum Value:0

Maximum number of retries to unplug VBD. If set to 0, the unplug is attempted once, with no retries.

ipxe_network_name
Type:string
Default:<None>

Name of network to use for booting iPXE ISOs.

An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.

By default this option is not set. Enable this option to boot an iPXE ISO.

Related Options:

  • ipxe_boot_menu_url
  • ipxe_mkisofs_cmd
ipxe_boot_menu_url
Type:string
Default:<None>

URL to the iPXE boot menu.

An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.

By default this option is not set. Enable this option to boot an iPXE ISO.

Related Options:

  • ipxe_network_name
  • ipxe_mkisofs_cmd
ipxe_mkisofs_cmd
Type:string
Default:mkisofs

Name and optionally path of the tool used for ISO image creation.

An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.

Note: By default mkisofs is not present in the Dom0, so the package must either be manually added to Dom0 or the mkisofs binary must be included in the image itself.

Related Options:

  • ipxe_network_name
  • ipxe_boot_menu_url
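
The three iPXE options are used together; a sketch with placeholder values:

[xenserver]
# Name of the network used for booting iPXE ISOs (placeholder).
ipxe_network_name = ipxe-boot-net
# Placeholder URL pointing at your iPXE boot menu.
ipxe_boot_menu_url = http://boot.example.org/menu.ipxe
# mkisofs must be available in Dom0 or shipped in the image.
ipxe_mkisofs_cmd = mkisofs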
connection_url
Type:string
Default:<None>

URL for connection to XenServer/Xen Cloud Platform. A special value of unix://local can be used to connect to the local unix socket.

Possible values:

  • Any string that represents a URL. The connection_url is generally the management network IP address of the XenServer.
  • This option must be set if you choose the XenServer driver.
connection_username
Type:string
Default:root

Username for connection to XenServer/Xen Cloud Platform

connection_password
Type:string
Default:<None>

Password for connection to XenServer/Xen Cloud Platform

vhd_coalesce_poll_interval
Type:floating point
Default:5.0
Minimum Value:0

The interval used for polling of coalescing vhds.

This is the interval after which the VHD coalesce task is performed, until it reaches the maximum attempts set by vhd_coalesce_max_attempts.

Related options:

  • vhd_coalesce_max_attempts
check_host
Type:boolean
Default:true

Ensure compute service is running on host XenAPI connects to. This option must be set to false if the ‘independent_compute’ option is set to true.

Possible values:

  • Setting this option to true will make sure that compute service is running on the same host that is specified by connection_url.
  • Setting this option to false skips the check.

Related options:

  • independent_compute
vhd_coalesce_max_attempts
Type:integer
Default:20
Minimum Value:0

Max number of times to poll for VHD to coalesce.

This option determines the maximum number of attempts that can be made for coalescing the VHD before giving up.

Related options:

  • vhd_coalesce_poll_interval
sr_base_path
Type:string
Default:/var/run/sr-mount

Base path to the storage repository on the XenServer host.

target_host
Type:host address
Default:<None>

The iSCSI Target Host.

This option represents the hostname or IP of the iSCSI Target. If the target host is not present in the connection information from the volume provider, the value from this option is taken.

Possible values:

  • Any string that represents hostname/ip of Target.
target_port
Type:port number
Default:3260
Minimum Value:0
Maximum Value:65535

The iSCSI Target Port.

This option represents the port of the iSCSI Target. If the target port is not present in the connection information from the volume provider then the value from this option is taken.
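
A sketch of a fallback iSCSI target, used only when the volume provider's connection information omits it (the address is a placeholder; the port is the default):

[xenserver]
target_host = 192.0.2.10
target_port = 3260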

independent_compute
Type:boolean
Default:false

Used to prevent attempts to attach VBDs locally, so Nova can be run in a VM on a different host.

Related options:

  • CONF.flat_injected (Must be False)
  • CONF.xenserver.check_host (Must be False)
  • CONF.default_ephemeral_format (Must be unset or ‘ext3’)
  • Joining host aggregates (will error if attempted)
  • Swap disks for Windows VMs (will error if attempted)
  • Nova-based auto_configure_disk (will error if attempted)
running_timeout
Type:integer
Default:60
Minimum Value:0

Wait time for instances to go to running state.

Provide an integer value representing time in seconds to set the wait time for an instance to go to running state.

When a request to create an instance is received by nova-api and communicated to nova-compute, the creation of the instance occurs through interaction with Xen via XenAPI in the compute node. Once the node on which the instance(s) are to be launched is decided by nova-scheduler and the launch is triggered, a certain amount of wait time is involved until the instance(s) become available and 'running'. This wait time is defined by running_timeout. If the instances do not go to the running state within this specified wait time, the launch expires and the instance(s) are set to the 'error' state.

image_upload_handler
Type:string
Default:''

Dom0 plugin driver used to handle image uploads.

Provide a string value representing a plugin driver required to handle the image uploading to GlanceStore.

Images and snapshots from XenServer need to be uploaded to the data store for use. image_upload_handler takes in a value for the Dom0 plugin driver. This driver is then called to upload images to the GlanceStore.

Warning

This option is deprecated for removal since 18.0.0. Its value may be silently ignored in the future.

Reason:Instead of setting the class path here, we will use short names to represent image handlers. The download and upload handlers must also be matching. So another new option “image_handler” will be used to set the short name for a specific image handler for both image download and upload.
image_handler
Type:string
Default:direct_vhd
Valid Values:direct_vhd, vdi_local_dev, vdi_remote_stream

The plugin used to handle image uploads and downloads.

Provide a short name representing an image driver required to handle the image between compute host and glance.

Possible values

direct_vhd
This plugin directly processes the VHD files in the XenServer SR (Storage Repository). It therefore only works when the host's SR type is file system based, e.g. ext or nfs.
vdi_local_dev
This plugin implements an image handler which attaches the instance’s VDI as a local disk to the VM where the OpenStack Compute service runs. It uploads the raw disk to glance when creating image; when booting an instance from a glance image, it downloads the image and streams it into the disk which is attached to the compute VM.
vdi_remote_stream
This plugin implements an image handler which works as a proxy between glance and XenServer. The VHD streams to XenServer via a remote import API supplied by XAPI for image download; and for image upload, the VHD streams from XenServer via a remote export API supplied by XAPI. This plugin works for all SR types supported by XenServer.
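
For example, on an LVM-based SR the file-system-based default cannot be used, so an operator might select the streaming handler instead (illustrative):

[xenserver]
image_handler = vdi_remote_stream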
introduce_vdi_retry_wait
Type:integer
Default:20
Minimum Value:0

Number of seconds to wait for SR to settle if the VDI does not exist when first introduced.

Some SRs, particularly iSCSI connections, are slow to see the VDIs right after they are introduced. Setting this option to a time interval makes the SR wait for that period before a 'VDI not found' exception is raised.

ovs_integration_bridge
Type:string
Default:<None>

The name of the integration bridge that is used with xenapi when connecting with Open vSwitch.

Note: The value of this config option is dependent on the environment, therefore this configuration value must be set accordingly if you are using XenAPI.

Possible values:

  • Any string that represents a bridge name.
use_join_force
Type:boolean
Default:true

When adding a new host to a pool, this will append a --force flag to the command, forcing hosts to join the pool even if they have different CPUs.

Since XenServer version 5.6 it is possible to create a pool of hosts that have different CPU capabilities. To accommodate CPU differences, XenServer limited the features it uses to determine CPU compatibility to only those exposed by the CPU, and added support for CPU masking. Despite this effort to level differences between CPUs, it is still possible that adding a new host will fail, so the option to force the join was introduced.

console_public_hostname
Type:string
Default:<current_hostname>

This option has a sample default set, which means that its actual default value may vary from the one documented above.

Publicly visible name for this console host.

Possible values:

  • Current hostname (default) or any string representing hostname.
Deprecated Variations
Group:DEFAULT
Name:console_public_hostname
xvp

Configuration options for XVP.

xvp (Xen VNC Proxy) is a proxy server providing password-protected VNC-based access to the consoles of virtual machines hosted on Citrix XenServer.

console_xvp_conf_template
Type:string
Default:$pybasedir/nova/console/xvp.conf.template

XVP conf template

Deprecated Variations
Group:DEFAULT
Name:console_xvp_conf_template
console_xvp_conf
Type:string
Default:/etc/xvp.conf

Generated XVP conf file

Deprecated Variations
Group:DEFAULT
Name:console_xvp_conf
console_xvp_pid
Type:string
Default:/var/run/xvp.pid

XVP master process pid file

Deprecated Variations
Group:DEFAULT
Name:console_xvp_pid
console_xvp_log
Type:string
Default:/var/log/xvp.log

XVP log file

Deprecated Variations
Group:DEFAULT
Name:console_xvp_log
console_xvp_multiplex_port
Type:port number
Default:5900
Minimum Value:0
Maximum Value:65535

Port for XVP to multiplex VNC connections on

Deprecated Variations
Group:DEFAULT
Name:console_xvp_multiplex_port
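
Putting these options together under the [xvp] group named by this section, a sketch that simply makes the documented defaults explicit:

[xvp]
console_xvp_conf = /etc/xvp.conf
console_xvp_pid = /var/run/xvp.pid
console_xvp_log = /var/log/xvp.log
console_xvp_multiplex_port = 5900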
zvm

The zvm options allow the cloud administrator to configure the z/VM hypervisor driver used within an OpenStack deployment.

zVM options are used when the compute_driver is set to use zVM (compute_driver=zvm.ZVMDriver).

cloud_connector_url
Type:URI
Default:http://zvm.example.org:8080/

This option has a sample default set, which means that its actual default value may vary from the one documented above.

URL to be used to communicate with z/VM Cloud Connector.

ca_file
Type:string
Default:<None>

CA certificate file to be verified in the httpd server with TLS enabled.

A string; it must be a path to the CA bundle to use.

image_tmp_path
Type:string
Default:$state_path/images

This option has a sample default set, which means that its actual default value may vary from the one documented above.

The path at which images will be stored (snapshot, deploy, etc).

Images used for deploy and images captured via snapshot need to be stored on the local disk of the compute host. This configuration identifies the directory location.

Possible values:

  • A file system path on the host running the compute service.
reachable_timeout
Type:integer
Default:300

Timeout (seconds) to wait for an instance to start.

The z/VM driver relies on communication between the instance and the cloud connector. After an instance is created, it must have enough time to wait for all the network info to be written into the user directory. The driver will keep rechecking the network status of the instance until the timeout expires. If setting up the network fails, it will notify the user that starting the instance failed and put the instance in ERROR state. The underlying z/VM guest will then be deleted.

Possible Values:

  • Any positive integer. Recommended to be at least 300 seconds (5 minutes), but it will vary depending on instance and system load. A value of 0 is used for debug. In this case the underlying z/VM guest will not be deleted when the instance is marked in ERROR state.
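
Putting the zvm options together, a minimal sketch (the URL mirrors the sample default above; the file paths are placeholders):

[DEFAULT]
compute_driver = zvm.ZVMDriver

[zvm]
cloud_connector_url = http://zvm.example.org:8080/
ca_file = /etc/nova/zvm-ca.pem
image_tmp_path = /var/lib/nova/images
reachable_timeout = 300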

Configuration sample

The following is a sample compute-hyperv configuration for adaptation and use.

The sample configuration can also be viewed in file form.

Config options that are specific to the Hyper-V Nova driver can be found in the [hyperv] config group section.

Important

The sample configuration file is auto-generated from compute-hyperv when this documentation is built. You must ensure your version of compute-hyperv matches the version of this documentation.

[DEFAULT]

#
# From nova.conf
#

#
# Availability zone for internal services.
#
# This option determines the availability zone for the various internal nova
# services, such as 'nova-scheduler', 'nova-conductor', etc.
#
# Possible values:
#
# * Any string representing an existing availability zone name.
#  (string value)
#internal_service_availability_zone = internal

#
# Default availability zone for compute services.
#
# This option determines the default availability zone for 'nova-compute'
# services, which will be used if the service(s) do not belong to aggregates
# with availability zone metadata.
#
# Possible values:
#
# * Any string representing an existing availability zone name.
#  (string value)
#default_availability_zone = nova

#
# Default availability zone for instances.
#
# This option determines the default availability zone for instances, which will
# be used when a user does not specify one when creating an instance. The
# instance(s) will be bound to this availability zone for their lifetime.
#
# Possible values:
#
# * Any string representing an existing availability zone name.
# * None, which means that the instance can move from one availability zone to
#   another during its lifetime if it is moved from one compute node to another.
#  (string value)
#default_schedule_zone = <None>

# Length of generated instance admin passwords. (integer value)
# Minimum value: 0
#password_length = 12

#
# Time period to generate instance usages for. It is possible to define optional
# offset to given period by appending @ character followed by a number defining
# offset.
#
# Possible values:
#
# *  period, example: ``hour``, ``day``, ``month`` or ``year``
# *  period with offset, example: ``month@15`` will result in monthly audits
#    starting on 15th day of month.
#  (string value)
#instance_usage_audit_period = month

#
# Start and use a daemon that can run the commands that need to be run with
# root privileges. This option is usually enabled on nodes that run nova compute
# processes.
#  (boolean value)
#use_rootwrap_daemon = false

#
# Path to the rootwrap configuration file.
#
# Goal of the root wrapper is to allow a service-specific unprivileged user to
# run a number of actions as the root user in the safest manner possible.
# The configuration file used here must match the one defined in the sudoers
# entry.
#  (string value)
#rootwrap_config = /etc/nova/rootwrap.conf

# Explicitly specify the temporary working directory. (string value)
#tempdir = <None>

#
# Defines which driver to use for controlling virtualization.
#
# Possible values:
#
# * ``libvirt.LibvirtDriver``
# * ``xenapi.XenAPIDriver``
# * ``fake.FakeDriver``
# * ``ironic.IronicDriver``
# * ``vmwareapi.VMwareVCDriver``
# * ``hyperv.HyperVDriver``
# * ``powervm.PowerVMDriver``
# * ``zvm.ZVMDriver``
#  (string value)
#compute_driver = <None>

#
# Allow destination machine to match source for resize. Useful when
# testing in single-host environments. By default it is not allowed
# to resize to the same host. Setting this option to true will add
# the same host to the destination options. Also set to true
# if you allow the ServerGroupAffinityFilter and need to resize.
#  (boolean value)
#allow_resize_to_same_host = false

#
# Image properties that should not be inherited from the instance
# when taking a snapshot.
#
# This option gives an opportunity to select which image-properties
# should not be inherited by newly created snapshots.
#
# Possible values:
#
# * A comma-separated list whose item is an image property. Usually only
#   the image properties that are only needed by base images can be included
#   here, since the snapshots that are created from the base images don't
#   need them.
# * Default list: cache_in_nova, bittorrent, img_signature_hash_method,
#                 img_signature, img_signature_key_type,
#                 img_signature_certificate_uuid
#
#  (list value)
#non_inheritable_image_properties = cache_in_nova,bittorrent,img_signature_hash_method,img_signature,img_signature_key_type,img_signature_certificate_uuid

#
# Maximum number of devices that will result in a local image being
# created on the hypervisor node.
#
# A negative number means unlimited. Setting max_local_block_devices
# to 0 means that any request that attempts to create a local disk
# will fail. This option is meant to limit the number of local discs
# (so root local disc that is the result of --image being used, and
# any other ephemeral and swap disks). 0 does not mean that images
# will be automatically converted to volumes and boot instances from
# volumes - it just means that all requests that attempt to create a
# local disk will fail.
#
# Possible values:
#
# * 0: Creating a local disk is not allowed.
# * Negative number: Allows unlimited number of local discs.
# * Positive number: Allows only these many number of local discs.
#                        (Default value is 3).
#  (integer value)
#max_local_block_devices = 3

#
# A comma-separated list of monitors that can be used for getting
# compute metrics. You can use the alias/name from the setuptools
# entry points for nova.compute.monitors.* namespaces. If no
# namespace is supplied, the "cpu." namespace is assumed for
# backwards-compatibility.
#
# NOTE: Only one monitor per namespace (For example: cpu) can be loaded at
# a time.
#
# Possible values:
#
# * An empty list will disable the feature (Default).
# * An example value that would enable both the CPU and NUMA memory
#   bandwidth monitors that use the virt driver variant:
#
#     compute_monitors = cpu.virt_driver, numa_mem_bw.virt_driver
#  (list value)
#compute_monitors =

#
# The default format an ephemeral_volume will be formatted with on creation.
#
# Possible values:
#
# * ``ext2``
# * ``ext3``
# * ``ext4``
# * ``xfs``
# * ``ntfs`` (only for Windows guests)
#  (string value)
#default_ephemeral_format = <None>

#
# Determine if instance should boot or fail on VIF plugging timeout.
#
# Nova sends a port update to Neutron after an instance has been scheduled,
# providing Neutron with the necessary information to finish setup of the port.
# Once completed, Neutron notifies Nova that it has finished setting up the
# port, at which point Nova resumes the boot of the instance since network
# connectivity is now supposed to be present. A timeout will occur if the reply
# is not received after a given interval.
#
# This option determines what Nova does when the VIF plugging timeout event
# happens. When enabled, the instance will error out. When disabled, the
# instance will continue to boot on the assumption that the port is ready.
#
# Possible values:
#
# * True: Instances should fail after VIF plugging timeout
# * False: Instances should continue booting after VIF plugging timeout
#  (boolean value)
#vif_plugging_is_fatal = true

#
# Timeout for Neutron VIF plugging event message arrival.
#
# Number of seconds to wait for Neutron vif plugging events to
# arrive before continuing or failing (see 'vif_plugging_is_fatal').
#
# If you are hitting timeout failures at scale, consider running rootwrap
# in "daemon mode" in the neutron agent via the ``[agent]/root_helper_daemon``
# neutron configuration option.
#
# Related options:
#
# * vif_plugging_is_fatal - If ``vif_plugging_timeout`` is set to zero and
#   ``vif_plugging_is_fatal`` is False, events should not be expected to
#   arrive at all.
#  (integer value)
# Minimum value: 0
#vif_plugging_timeout = 300

# Path to '/etc/network/interfaces' template.
#
# The path to a template file for the '/etc/network/interfaces'-style file, which
# will be populated by nova and subsequently used by cloudinit. This provides a
# method to configure network connectivity in environments without a DHCP
# server.
#
# The template will be rendered using Jinja2 template engine, and receive a
# top-level key called ``interfaces``. This key will contain a list of
# dictionaries, one for each interface.
#
# Refer to the cloudinit documentation for more information:
#
#   https://cloudinit.readthedocs.io/en/latest/topics/datasources.html
#
# Possible values:
#
# * A path to a Jinja2-formatted template for a Debian '/etc/network/interfaces'
#   file. This applies even if using a non Debian-derived guest.
#
# Related options:
#
# * ``flat_inject``: This must be set to ``True`` to ensure nova embeds network
#   configuration information in the metadata provided through the config drive.
#  (string value)
#injected_network_template = $pybasedir/nova/virt/interfaces.template

#
# The image preallocation mode to use.
#
# Image preallocation allows storage for instance images to be allocated up front
# when the instance is initially provisioned. This ensures immediate feedback is
# given if enough space isn't available. In addition, it should significantly
# improve performance on writes to new blocks and may even improve I/O
# performance to prewritten blocks due to reduced fragmentation.
#  (string value)
# Possible values:
# none - No storage provisioning is done up front
# space - Storage is fully allocated at instance start
#preallocate_images = none

#
# Enable use of copy-on-write (cow) images.
#
# QEMU/KVM allow the use of qcow2 as backing files. By disabling this,
# backing files will not be used.
#  (boolean value)
#use_cow_images = true

#
# Force conversion of backing images to raw format.
#
# Possible values:
#
# * True: Backing image files will be converted to raw image format
# * False: Backing image files will not be converted
#
# Related options:
#
# * ``compute_driver``: Only the libvirt driver uses this option.
#  (boolean value)
#force_raw_images = true

#
# Name of the mkfs commands for ephemeral device.
#
# The format is <os_type>=<mkfs command>
#  (multi valued)
#virt_mkfs =

#
# Enable resizing of filesystems via a block device.
#
# If enabled, attempt to resize the filesystem by accessing the image over a
# block device. This is done by the host and may not be necessary if the image
# contains a recent version of cloud-init. Possible mechanisms require the nbd
# driver (for qcow and raw), or loop (for raw).
#  (boolean value)
#resize_fs_using_block_device = false

# Amount of time, in seconds, to wait for NBD device start up. (integer value)
# Minimum value: 0
#timeout_nbd = 10

#
# Location of cached images.
#
# This is NOT the full path - just a folder name relative to '$instances_path'.
# For per-compute-host cached images, set to '_base_$my_ip'
#  (string value)
#image_cache_subdirectory_name = _base

# Should unused base images be removed? (boolean value)
#remove_unused_base_images = true

#
# Unused unresized base images younger than this will not be removed.
#  (integer value)
#remove_unused_original_minimum_age_seconds = 86400

#
# Generic property to specify the pointer type.
#
# Input devices allow interaction with a graphical framebuffer. For
# example to provide a graphic tablet for absolute cursor movement.
#
# If set, the 'hw_pointer_model' image property takes precedence over
# this configuration option.
#
# Related options:
#
# * usbtablet must be configured with VNC enabled or SPICE enabled and SPICE
#   agent disabled. When used with libvirt the instance mode should be
#   configured as HVM.
#   (string value)
# Possible values:
# ps2mouse - Uses relative movement. Mouse connected by PS2
# usbtablet - Uses absolute movement. Tablet connected by USB
# <None> - Uses default behavior provided by drivers (mouse on PS2 for libvirt
# x86)
#pointer_model = usbtablet

#
# Defines which physical CPUs (pCPUs) can be used by instance
# virtual CPUs (vCPUs).
#
# Possible values:
#
# * A comma-separated list of physical CPU numbers that virtual CPUs can be
#   allocated to by default. Each element should be either a single CPU number,
#   a range of CPU numbers, or a caret followed by a CPU number to be
#   excluded from a previous range. For example::
#
#     vcpu_pin_set = "4-12,^8,15"
#  (string value)
#vcpu_pin_set = <None>

#
# Number of huge/large memory pages to reserved per NUMA host cell.
#
# Possible values:
#
# * A list of valid key=value which reflect NUMA node ID, page size
#   (Default unit is KiB) and number of pages to be reserved. For example::
#
#     reserved_huge_pages = node:0,size:2048,count:64
#     reserved_huge_pages = node:1,size:1GB,count:1
#
#   In this example we are reserving on NUMA node 0 64 pages of 2MiB
#   and on NUMA node 1 1 page of 1GiB.
#  (dict value)
#reserved_huge_pages = <None>

#
# Amount of disk resources in MB to make them always available to host. The
# disk usage gets reported back to the scheduler from nova-compute running
# on the compute nodes. To prevent the disk resources from being considered
# as available, this option can be used to reserve disk space for that host.
#
# Possible values:
#
# * Any positive integer representing amount of disk in MB to reserve
#   for the host.
#  (integer value)
# Minimum value: 0
#reserved_host_disk_mb = 0

#
# Amount of memory in MB to reserve for the host so that it is always available
# to host processes. The host resources usage is reported back to the scheduler
# continuously from nova-compute running on the compute node. To prevent the host
# memory from being considered as available, this option is used to reserve
# memory for the host.
#
# Possible values:
#
# * Any positive integer representing amount of memory in MB to reserve
#   for the host.
#  (integer value)
# Minimum value: 0
#reserved_host_memory_mb = 512

#
# Number of physical CPUs to reserve for the host. The host resources usage is
# reported back to the scheduler continuously from nova-compute running on the
# compute node. To prevent the host CPU from being considered as available,
# this option is used to reserve random pCPU(s) for the host.
#
# Possible values:
#
# * Any positive integer representing number of physical CPUs to reserve
#   for the host.
#  (integer value)
# Minimum value: 0
#reserved_host_cpus = 0

#
# This option helps you specify virtual CPU to physical CPU allocation ratio.
#
# From Ocata (15.0.0) this is used to influence the hosts selected by
# the Placement API. Note that when Placement is used, the CoreFilter
# is redundant, because the Placement API will have already filtered
# out hosts that would have failed the CoreFilter.
#
# This configuration specifies ratio for CoreFilter which can be set
# per compute node. For AggregateCoreFilter, it will fall back to this
# configuration value if no per-aggregate setting is found.
#
# NOTE: This can be set per-compute, or if set to 0.0, the value
# set on the scheduler node(s) or compute node(s) will be used
# and defaulted to 16.0. Once set to a non-default value, it is not possible
# to "unset" the config to get back to the default behavior. If you want
# to reset back to the default, explicitly specify 16.0.
#
# NOTE: As of the 16.0.0 Pike release, this configuration option is ignored
# for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
#
# Possible values:
#
# * Any valid positive integer or float value
#  (floating point value)
# Minimum value: 0
#cpu_allocation_ratio = 0.0

#
# This option helps you specify virtual RAM to physical RAM
# allocation ratio.
#
# From Ocata (15.0.0) this is used to influence the hosts selected by
# the Placement API. Note that when Placement is used, the RamFilter
# is redundant, because the Placement API will have already filtered
# out hosts that would have failed the RamFilter.
#
# This configuration specifies ratio for RamFilter which can be set
# per compute node. For AggregateRamFilter, it will fall back to this
# configuration value if no per-aggregate setting found.
#
# NOTE: This can be set per-compute, or if set to 0.0, the value
# set on the scheduler node(s) or compute node(s) will be used and
# defaulted to 1.5. Once set to a non-default value, it is not possible
# to "unset" the config to get back to the default behavior. If you want
# to reset back to the default, explicitly specify 1.5.
#
# NOTE: As of the 16.0.0 Pike release, this configuration option is ignored
# for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
#
# Possible values:
#
# * Any valid positive integer or float value
#  (floating point value)
# Minimum value: 0
#ram_allocation_ratio = 0.0

#
# This option helps you specify virtual disk to physical disk
# allocation ratio.
#
# From Ocata (15.0.0) this is used to influence the hosts selected by
# the Placement API. Note that when Placement is used, the DiskFilter
# is redundant, because the Placement API will have already filtered
# out hosts that would have failed the DiskFilter.
#
# A ratio greater than 1.0 will result in over-subscription of the
# available physical disk, which can be useful for more
# efficiently packing instances created with images that do not
# use the entire virtual disk, such as sparse or compressed
# images. It can be set to a value between 0.0 and 1.0 in order
# to preserve a percentage of the disk for uses other than
# instances.
#
# NOTE: This can be set per-compute, or if set to 0.0, the value
# set on the scheduler node(s) or compute node(s) will be used and
# defaulted to 1.0. Once set to a non-default value, it is not possible
# to "unset" the config to get back to the default behavior. If you want
# to reset back to the default, explicitly specify 1.0.
#
# NOTE: As of the 16.0.0 Pike release, this configuration option is ignored
# for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
#
# Possible values:
#
# * Any valid positive integer or float value
#  (floating point value)
# Minimum value: 0
#disk_allocation_ratio = 0.0

#
# Console proxy host to be used to connect to instances on this host. It is the
# publicly visible name for the console host.
#
# Possible values:
#
# * Current hostname (default) or any string representing hostname.
#  (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#console_host = <current_hostname>

#
# Name of the network to be used to set access IPs for instances. If there are
# multiple IPs to choose from, an arbitrary one will be chosen.
#
# Possible values:
#
# * None (default)
# * Any string representing network name.
#  (string value)
#default_access_ip_network_name = <None>

#
# Whether to batch up the application of IPTables rules during a host restart
# and apply all at the end of the init phase.
#  (boolean value)
#defer_iptables_apply = false

#
# Specifies where instances are stored on the hypervisor's disk.
# It can point to locally attached storage or a directory on NFS.
#
# Possible values:
#
# * $state_path/instances where state_path is a config option that specifies
#   the top-level directory for maintaining nova's state. (default) or
#   Any string representing directory path.
#  (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#instances_path = $state_path/instances

#
# This option enables periodic compute.instance.exists notifications. Each
# compute node must be configured to generate system usage data. These
# notifications are consumed by OpenStack Telemetry service.
#  (boolean value)
#instance_usage_audit = false

#
# Maximum number of 1 second retries in live_migration. It specifies number
# of retries to iptables when it complains. It happens when a user continuously
# sends live-migration request to same host leading to concurrent request
# to iptables.
#
# Possible values:
#
# * Any positive integer representing retry count.
#  (integer value)
# Minimum value: 0
#live_migration_retry_count = 30

#
# This option specifies whether to start guests that were running before the
# host rebooted. It ensures that all of the instances on a Nova compute node
# resume their state each time the compute node boots or restarts.
#  (boolean value)
#resume_guests_state_on_host_boot = false

#
# Number of times to retry network allocation. It is required to attempt network
# allocation retries if the virtual interface plug fails.
#
# Possible values:
#
# * Any positive integer representing retry count.
#  (integer value)
# Minimum value: 0
#network_allocate_retries = 0

#
# Limits the maximum number of instance builds to run concurrently by
# nova-compute. Compute service can attempt to build an infinite number of
# instances, if asked to do so. This limit is enforced to avoid building
# unlimited instances concurrently on a compute node. This value can be set
# per compute node.
#
# Possible Values:
#
# * 0 : treated as unlimited.
# * Any positive integer representing maximum concurrent builds.
#  (integer value)
# Minimum value: 0
#max_concurrent_builds = 10

#
# Maximum number of live migrations to run concurrently. This limit is enforced
# to avoid outbound live migrations overwhelming the host/network and causing
# failures. It is not recommended that you change this unless you are very sure
# that doing so is safe and stable in your environment.
#
# Possible values:
#
# * 0 : treated as unlimited.
# * Negative value defaults to 0.
# * Any positive integer representing maximum number of live migrations
#   to run concurrently.
#  (integer value)
#max_concurrent_live_migrations = 1

#
# Number of times to retry block device allocation on failures. Starting with
# Liberty, Cinder can use image volume cache. This may help with block device
# allocation performance. Look at the cinder image_volume_cache_enabled
# configuration option.
#
# Possible values:
#
# * 60 (default)
# * If value is 0, then one attempt is made.
# * Any negative value is treated as 0.
# * For any value > 0, total attempts are (value + 1)
#  (integer value)
#block_device_allocate_retries = 60

#
# Number of greenthreads available for use to sync power states.
#
# This option can be used to reduce the number of concurrent requests
# made to the hypervisor or system with real instance power states
# for performance reasons, for example, with Ironic.
#
# Possible values:
#
# * Any positive integer representing greenthreads count.
#  (integer value)
#sync_power_state_pool_size = 1000

#
# Number of seconds to wait between runs of the image cache manager.
#
# Possible values:
# * 0: run at the default rate.
# * -1: disable
# * Any other value
#  (integer value)
# Minimum value: -1
#image_cache_manager_interval = 2400

#
# Interval to pull network bandwidth usage info.
#
# Not supported on all hypervisors. If a hypervisor doesn't support bandwidth
# usage, it will not get the info in the usage events.
#
# Possible values:
#
# * 0: Will run at the default periodic interval.
# * Any value < 0: Disables the option.
# * Any positive integer in seconds.
#  (integer value)
#bandwidth_poll_interval = 600

#
# Interval to sync power states between the database and the hypervisor.
#
# The interval that Nova checks the actual virtual machine power state
# and the power state that Nova has in its database. If a user powers
# down their VM, Nova updates the API to report the VM has been
# powered down. Should something turn on the VM unexpectedly,
# Nova will turn the VM back off to keep the system in the expected
# state.
#
# Possible values:
#
# * 0: Will run at the default periodic interval.
# * Any value < 0: Disables the option.
# * Any positive integer in seconds.
#
# Related options:
#
# * If ``handle_virt_lifecycle_events`` in workarounds_group is
#   false and this option is negative, then instances that get out
#   of sync between the hypervisor and the Nova database will have
#   to be synchronized manually.
#  (integer value)
#sync_power_state_interval = 600

#
# Interval between instance network information cache updates.
#
# Number of seconds after which each compute node runs the task of
# querying Neutron for all of its instances networking information,
# then updates the Nova db with that information. Nova will never
# update its cache if this option is set to 0. If we don't update the
# cache, the metadata service and nova-api endpoints will be proxying
# incorrect network data about the instance. So, it is not recommended
# to set this option to 0.
#
# Possible values:
#
# * Any positive integer in seconds.
# * Any value <=0 will disable the sync. This is not recommended.
#  (integer value)
#heal_instance_info_cache_interval = 60

#
# Interval for reclaiming deleted instances.
#
# A value greater than 0 will enable SOFT_DELETE of instances.
# This option decides whether the server to be deleted will be put into
# the SOFT_DELETED state. If this value is greater than 0, the deleted
# server will not be deleted immediately, instead it will be put into
# a queue until it's too old (deleted time greater than the value of
# reclaim_instance_interval). The server can be recovered from the
# delete queue by using the restore action. If the deleted server remains
# longer than the value of reclaim_instance_interval, it will be
# deleted by a periodic task in the compute service automatically.
#
# Note that this option is read from both the API and compute nodes, and
# must be set globally otherwise servers could be put into a soft deleted
# state in the API and never actually reclaimed (deleted) on the compute
# node.
#
# Possible values:
#
# * Any positive integer(in seconds) greater than 0 will enable
#   this option.
# * Any value <=0 will disable the option.
#  (integer value)
#reclaim_instance_interval = 0

#
# Interval for gathering volume usages.
#
# This option updates the volume usage cache for every
# volume_usage_poll_interval number of seconds.
#
# Possible values:
#
# * Any positive integer(in seconds) greater than 0 will enable
#   this option.
# * Any value <=0 will disable the option.
#  (integer value)
#volume_usage_poll_interval = 0

#
# Interval for polling shelved instances to offload.
#
# The periodic task runs for every shelved_poll_interval number
# of seconds and checks if there are any shelved instances. If it
# finds a shelved instance, based on the 'shelved_offload_time' config
# value it offloads the shelved instances. Check 'shelved_offload_time'
# config option description for details.
#
# Possible values:
#
# * Any value <= 0: Disables the option.
# * Any positive integer in seconds.
#
# Related options:
#
# * ``shelved_offload_time``
#  (integer value)
#shelved_poll_interval = 3600

#
# Time before a shelved instance is eligible for removal from a host.
#
# By default this option is set to 0 and the shelved instance will be
# removed from the hypervisor immediately after shelve operation.
# Otherwise, the instance will be kept for the value of
# shelved_offload_time(in seconds) so that during the time period the
# unshelve action will be faster, then the periodic task will remove
# the instance from hypervisor after shelved_offload_time passes.
#
# Possible values:
#
# * 0: Instance will be immediately offloaded after being
#      shelved.
# * Any value < 0: An instance will never offload.
# * Any positive integer in seconds: The instance will exist for
#   the specified number of seconds before being offloaded.
#  (integer value)
#shelved_offload_time = 0

#
# Interval for retrying failed instance file deletes.
#
# This option depends on 'maximum_instance_delete_attempts'.
# This option specifies how often to retry deletes whereas
# 'maximum_instance_delete_attempts' specifies the maximum number
# of retry attempts that can be made.
#
# Possible values:
#
# * 0: Will run at the default periodic interval.
# * Any value < 0: Disables the option.
# * Any positive integer in seconds.
#
# Related options:
#
# * ``maximum_instance_delete_attempts`` from instance_cleaning_opts
#   group.
#  (integer value)
#instance_delete_interval = 300

#
# Interval (in seconds) between block device allocation retries on failures.
#
# This option allows the user to specify the time interval between
# consecutive retries. 'block_device_allocate_retries' option specifies
# the maximum number of retries.
#
# Possible values:
#
# * 0: Disables the option.
# * Any positive integer in seconds enables the option.
#
# Related options:
#
# * ``block_device_allocate_retries`` in compute_manager_opts group.
#  (integer value)
# Minimum value: 0
#block_device_allocate_retries_interval = 3

#
# Interval between sending the scheduler a list of current instance UUIDs to
# verify that its view of instances is in sync with nova.
#
# If the CONF option 'scheduler_tracks_instance_changes' is
# False, the sync calls will not be made. So, changing this option will
# have no effect.
#
# If the out of sync situations are not very common, this interval
# can be increased to lower the number of RPC messages being sent.
# Likewise, if sync issues turn out to be a problem, the interval
# can be lowered to check more frequently.
#
# Possible values:
#
# * 0: Will run at the default periodic interval.
# * Any value < 0: Disables the option.
# * Any positive integer in seconds.
#
# Related options:
#
# * This option has no impact if ``scheduler_tracks_instance_changes``
#   is set to False.
#  (integer value)
#scheduler_instance_sync_interval = 120

#
# Interval for updating compute resources.
#
# This option specifies how often the update_available_resources
# periodic task should run. A number less than 0 means to disable the
# task completely. Leaving this at the default of 0 will cause this to
# run at the default periodic interval. Setting it to any positive
# value will cause it to run at approximately that number of seconds.
#
# Possible values:
#
# * 0: Will run at the default periodic interval.
# * Any value < 0: Disables the option.
# * Any positive integer in seconds.
#  (integer value)
#update_resources_interval = 0

#
# Time interval after which an instance is hard rebooted automatically.
#
# When doing a soft reboot, it is possible that a guest kernel is
# completely hung in a way that causes the soft reboot task
# to not ever finish. Setting this option to a time period in seconds
# will automatically hard reboot an instance if it has been stuck
# in a rebooting state longer than N seconds.
#
# Possible values:
#
# * 0: Disables the option (default).
# * Any positive integer in seconds: Enables the option.
#  (integer value)
# Minimum value: 0
#reboot_timeout = 0

#
# Maximum time in seconds that an instance can take to build.
#
# If this timer expires, instance status will be changed to ERROR.
# Enabling this option will make sure an instance will not be stuck
# in BUILD state for a longer period.
#
# Possible values:
#
# * 0: Disables the option (default)
# * Any positive integer in seconds: Enables the option.
#  (integer value)
# Minimum value: 0
#instance_build_timeout = 0

#
# Interval to wait before un-rescuing an instance stuck in RESCUE.
#
# Possible values:
#
# * 0: Disables the option (default)
# * Any positive integer in seconds: Enables the option.
#  (integer value)
# Minimum value: 0
#rescue_timeout = 0

#
# Automatically confirm resizes after N seconds.
#
# Resize functionality will save the existing server before resizing.
# After the resize completes, user is requested to confirm the resize.
# The user has the opportunity to either confirm or revert all
# changes. Confirm resize removes the original server and changes
# server status from resized to active. Setting this option to a time
# period (in seconds) will automatically confirm the resize if the
# server is in resized state longer than that time.
#
# Possible values:
#
# * 0: Disables the option (default)
# * Any positive integer in seconds: Enables the option.
#  (integer value)
# Minimum value: 0
#resize_confirm_window = 0

#
# Total time to wait in seconds for an instance to perform a clean
# shutdown.
#
# It determines the overall period (in seconds) a VM is allowed to
# perform a clean shutdown. While performing stop, rescue and shelve,
# rebuild operations, configuring this option gives the VM a chance
# to perform a controlled shutdown before the instance is powered off.
# The default timeout is 60 seconds. A value of 0 (zero) means the guest
# will be powered off immediately with no opportunity for guest OS clean-up.
#
# The timeout value can be overridden on a per image basis by means
# of os_shutdown_timeout that is an image metadata setting allowing
# different types of operating systems to specify how much time they
# need to shut down cleanly.
#
# Possible values:
#
# * A positive integer or 0 (default value is 60).
#  (integer value)
# Minimum value: 0
#shutdown_timeout = 60

#
# The compute service periodically checks for instances that have been
# deleted in the database but remain running on the compute node. The
# above option enables action to be taken when such instances are
# identified.
#
# Related options:
#
# * ``running_deleted_instance_poll_interval``
# * ``running_deleted_instance_timeout``
#  (string value)
# Possible values:
# reap - Powers down the instances and deletes them
# log - Logs warning message about deletion of the resource
# shutdown - Powers down instances and marks them as non-bootable which can be
# later used for debugging/analysis
# noop - Takes no action
#running_deleted_instance_action = reap

#
# Time interval in seconds to wait between runs of the cleanup action.
# If set to 0, the check above is disabled. If
# "running_deleted_instance_action" is set to "log" or "reap", a value
# greater than 0 must be set.
#
# Possible values:
#
# * Any positive integer in seconds enables the option.
# * 0: Disables the option.
# * 1800: Default value.
#
# Related options:
#
# * running_deleted_instance_action
#  (integer value)
#running_deleted_instance_poll_interval = 1800

#
# Time interval in seconds to wait for the instances that have
# been marked as deleted in database to be eligible for cleanup.
#
# Possible values:
#
# * Any positive integer in seconds (default is 0).
#
# Related options:
#
# * "running_deleted_instance_action"
#  (integer value)
#running_deleted_instance_timeout = 0
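
# Illustrative example (hypothetical values): reap instances that are still
# running ten minutes after their database records were deleted, checking
# every half hour:
#
#   running_deleted_instance_action = reap
#   running_deleted_instance_poll_interval = 1800
#   running_deleted_instance_timeout = 600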

#
# The number of times to attempt to reap an instance's files.
#
# This option specifies the maximum number of retry attempts
# that can be made.
#
# Possible values:
#
# * Any positive integer defines how many attempts are made.
# * Any value <=0 means no delete attempts occur, but you should use
#   ``instance_delete_interval`` to disable the delete attempts.
#
# Related options:
#
# * ``[DEFAULT] instance_delete_interval`` can be used to disable this option.
#  (integer value)
#maximum_instance_delete_attempts = 5

#
# Sets the scope of the check for unique instance names.
#
# The default doesn't check for unique names. If a scope for the name check is
# set, a launch of a new instance or an update of an existing instance with a
# duplicate name will result in an ``InstanceExists`` error. The uniqueness is
# case-insensitive. Setting this option can increase the usability for end
# users as they don't have to distinguish among instances with the same name
# by their IDs.
#  (string value)
# Possible values:
# '' - An empty value means that no uniqueness check is done and duplicate names
# are possible
# project - The instance name check is done only for instances within the same
# project
# global - The instance name check is done for all instances regardless of the
# project
#osapi_compute_unique_server_name_scope =

#
# Enable new nova-compute services on this host automatically.
#
# When a new nova-compute service starts up, it gets
# registered in the database as an enabled service. Sometimes it can be useful
# to register new compute services in a disabled state and then enable them at
# a later point in time. This option only sets this behavior for nova-compute
# services; it does not auto-disable other services like nova-conductor,
# nova-scheduler, nova-consoleauth, or nova-osapi_compute.
#
# Possible values:
#
# * ``True``: Each new compute service is enabled as soon as it registers
# itself.
# * ``False``: Compute services must be enabled via an os-services REST API call
#   or with the CLI with ``nova service-enable <hostname> <binary>``, otherwise
#   they are not ready to use.
#  (boolean value)
#enable_new_services = true
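
# Illustrative example: register new compute services in a disabled state,
# then enable a host once it has been verified (the hostname is
# hypothetical):
#
#   enable_new_services = false
#
# followed by, from a client machine:
#
#   nova service-enable hyperv-compute-01 nova-compute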

#
# Template string to be used to generate instance names.
#
# This template controls the creation of the database name of an instance. This
# is *not* the display name you enter when creating an instance (via Horizon
# or CLI). For a new deployment it is advisable to change the default value
# (which uses the database autoincrement) to another value which makes use
# of the attributes of an instance, like ``instance-%(uuid)s``. If you
# already have instances in your deployment when you change this, your
# deployment will break.
#
# Possible values:
#
# * A string which either uses the instance database ID (like the
#   default)
# * A string with a list of named database columns, for example ``%(id)d``
#   or ``%(uuid)s`` or ``%(hostname)s``.
#  (string value)
#instance_name_template = instance-%08x
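
# Illustrative example, following the advice above for new deployments: use
# the instance UUID rather than the database autoincrement:
#
#   instance_name_template = instance-%(uuid)s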

#
# Number of times to retry live-migration before failing.
#
# Possible values:
#
# * If == -1, try until out of hosts (default)
# * If == 0, only try once, no retries
# * Integer greater than 0
#  (integer value)
# Minimum value: -1
#migrate_max_retries = -1

#
# Configuration drive format
#
# Configuration drive format that will contain metadata attached to the
# instance when it boots.
#
# Related options:
#
# * This option is meaningful when one of the following alternatives occurs:
#
#   1. the ``force_config_drive`` option is set to ``true``
#   2. the REST API call to create the instance contains an enable flag for
#      the config drive option
#   3. the image used to create the instance requires a config drive, which
#      is defined by the ``img_config_drive`` property for that image.
#
# * A compute node running the Hyper-V hypervisor can be configured to attach
#   the configuration drive as a CD drive. To do so, set the
#   ``[hyperv] config_drive_cdrom`` option to true.
#  (string value)
# Possible values:
# iso9660 - A file system image standard that is widely supported across
# operating systems.
# vfat - Provided for legacy reasons and to enable live migration with the
# libvirt driver and non-shared storage
#config_drive_format = iso9660

#
# Force injection to take place on a config drive
#
# When this option is set to true, configuration drive functionality is
# forcibly enabled by default; otherwise users can still enable configuration
# drives via the REST API or image metadata properties.
#
# Possible values:
#
# * True: Force the use of a configuration drive regardless of the user's
#         input in the REST API call.
# * False: Do not force use of configuration drive. Config drives can still be
#          enabled via the REST API or image metadata properties.
#
# Related options:
#
# * Use the 'mkisofs_cmd' flag to set the path where you install the
#   genisoimage program. If genisoimage is on the same path as the
#   nova-compute service, you do not need to set this flag.
# * To use a configuration drive with Hyper-V, you must set the
#   'mkisofs_cmd' value to the full path of an mkisofs.exe installation.
#   Additionally, you must set the qemu_img_cmd value in the hyperv
#   configuration section to the full path of a qemu-img command
#   installation.
#  (boolean value)
#force_config_drive = false

#
# Name or path of the tool used for ISO image creation
#
# Use the mkisofs_cmd flag to set the path where you install the genisoimage
# program. If genisoimage is on the system path, you do not need to change
# the default value.
#
# To use a configuration drive with Hyper-V, you must set the mkisofs_cmd
# value to the full path of an mkisofs.exe installation. Additionally, you
# must set the qemu_img_cmd value in the hyperv configuration section to the
# full path of a qemu-img command installation.
#
# Possible values:
#
# * Name of the ISO image creator program, in case it is in the same directory
#   as the nova-compute service
# * Path to ISO image creator program
#
# Related options:
#
# * This option is meaningful when config drives are enabled.
# * To use a configuration drive with Hyper-V, you must set the qemu_img_cmd
#   value in the hyperv configuration section to the full path of a qemu-img
#   command installation.
#  (string value)
#mkisofs_cmd = genisoimage
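
# Illustrative Hyper-V example combining the config drive options described
# above (the installation paths are hypothetical):
#
#   [DEFAULT]
#   config_drive_format = iso9660
#   mkisofs_cmd = C:\Program Files\mkisofs\mkisofs.exe
#
#   [hyperv]
#   config_drive_cdrom = true
#   qemu_img_cmd = C:\Program Files\qemu-img\qemu-img.exe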

# DEPRECATED:
# Default flavor to use for the EC2 API only.
# The Nova API does not support a default flavor.
#  (string value)
# This option is deprecated for removal since 14.0.0.
# Its value may be silently ignored in the future.
# Reason: The EC2 API is deprecated.
#default_flavor = m1.small

#
# The IP address which the host is using to connect to the management network.
#
# Possible values:
#
# * String with valid IP address. Default is IPv4 address of this host.
#
# Related options:
#
# * metadata_host
# * my_block_storage_ip
# * routing_source_ip
# * vpn_ip
#  (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#my_ip = <host_ipv4>

#
# The IP address which is used to connect to the block storage network.
#
# Possible values:
#
# * String with valid IP address. Default is IP address of this host.
#
# Related options:
#
# * my_ip - if my_block_storage_ip is not set, then my_ip value is used.
#  (string value)
#my_block_storage_ip = $my_ip
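
# Illustrative example (hypothetical addresses): a compute host with
# separate management and storage networks:
#
#   my_ip = 10.0.0.11
#   my_block_storage_ip = 10.0.1.11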

#
# Hostname, FQDN or IP address of this host.
#
# Used as:
#
# * the oslo.messaging queue name for nova-compute worker
# * we use this value for the binding_host sent to neutron; this means that
#   if you use a neutron agent, it should have the same value for host
# * cinder host attachment information
#
# Must be valid within an AMQP key.
#
# Possible values:
#
# * String with hostname, FQDN or IP address. Default is hostname of this host.
#  (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#host = <current_hostname>

# DEPRECATED:
# This option is a list of full paths to one or more configuration files for
# dhcpbridge. In most cases the default path of '/etc/nova/nova-dhcpbridge.conf'
# should be sufficient, but if you have special needs for configuring
# dhcpbridge, you can change or add to this list.
#
# Possible values:
#
# * A list of strings, where each string is the full path to a dhcpbridge
#   configuration file.
#  (multi valued)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#dhcpbridge_flagfile = /etc/nova/nova-dhcpbridge.conf

# DEPRECATED:
# The location where the network configuration files will be kept. The
# default is the 'networks' directory off of the location where nova's
# Python module is installed.
#
# Possible values:
#
# * A string containing the full path to the desired configuration directory
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#networks_path = $state_path/networks

# DEPRECATED:
# This is the name of the network interface for public IP addresses. The default
# is 'eth0'.
#
# Possible values:
#
# * Any string representing a network interface name
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#public_interface = eth0

# DEPRECATED:
# The location of the binary nova-dhcpbridge. By default it is the binary named
# 'nova-dhcpbridge' that is installed with all the other nova binaries.
#
# Possible values:
#
# * Any string representing the full path to the binary for dhcpbridge
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#dhcpbridge = $bindir/nova-dhcpbridge

# DEPRECATED:
# The public IP address of the network host.
#
# This is used when creating an SNAT rule.
#
# Possible values:
#
# * Any valid IP address
#
# Related options:
#
# * ``force_snat_range``
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#routing_source_ip = $my_ip

# DEPRECATED:
# The lifetime of a DHCP lease, in seconds. The default is 86400 (one day).
#
# Possible values:
#
# * Any positive integer value.
#  (integer value)
# Minimum value: 1
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#dhcp_lease_time = 86400

# DEPRECATED:
# Despite the singular form of the name of this option, it is actually a list of
# zero or more server addresses that dnsmasq will use for DNS nameservers. If
# this is not empty, dnsmasq will not read /etc/resolv.conf, but will only use
# the servers specified in this option. If the option use_network_dns_servers is
# True, the dns1 and dns2 servers from the network will be appended to this
# list, and will be used as DNS servers, too.
#
# Possible values:
#
# * A list of strings, where each string is either an IP address or a FQDN.
#
# Related options:
#
# * ``use_network_dns_servers``
#  (multi valued)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#dns_server =

# DEPRECATED:
# When this option is set to True, the dns1 and dns2 servers for the network
# specified by the user on boot will be used for DNS, as well as any
# specified in the `dns_server` option.
#
# Related options:
#
# * ``dns_server``
#  (boolean value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#use_network_dns_servers = false

# DEPRECATED:
# This option is a list of zero or more IP address ranges in your network's DMZ
# that should be accepted.
#
# Possible values:
#
# * A list of strings, each of which should be a valid CIDR.
#  (list value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#dmz_cidr =

# DEPRECATED:
# This is a list of zero or more IP ranges that traffic from the
# `routing_source_ip` will be SNATted to. If the list is empty, then no SNAT
# rules are created.
#
# Possible values:
#
# * A list of strings, each of which should be a valid CIDR.
#
# Related options:
#
# * ``routing_source_ip``
#  (multi valued)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#force_snat_range =

# DEPRECATED:
# The path to the custom dnsmasq configuration file, if any.
#
# Possible values:
#
# * The full path to the configuration file, or an empty string if there is no
#   custom dnsmasq configuration file.
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#dnsmasq_config_file =

# DEPRECATED:
# This is the class used as the ethernet device driver for linuxnet bridge
# operations. The default value should be all you need for most cases, but
# if you wish to use a customized class, set this option to the full
# dot-separated import path for that class.
#
# Possible values:
#
# * Any string representing a dot-separated class path that Nova can import.
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver

# DEPRECATED:
# The name of the Open vSwitch bridge that is used with linuxnet when connecting
# with Open vSwitch.
#
# Possible values:
#
# * Any string representing a valid bridge name.
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#linuxnet_ovs_integration_bridge = br-int

#
# When True, when a device starts up, and upon binding floating IP addresses,
# ARP messages will be sent to ensure that the ARP caches on the compute
# hosts are up-to-date.
#
# Related options:
#
# * ``send_arp_for_ha_count``
#  (boolean value)
#send_arp_for_ha = false

#
# When ARP messages are configured to be sent, they will be sent with the
# count set to the value of this option. If this is set to zero, no ARP
# messages will be sent.
#
# Possible values:
#
# * Any integer greater than or equal to 0
#
# Related options:
#
# * ``send_arp_for_ha``
#  (integer value)
#send_arp_for_ha_count = 3

# DEPRECATED:
# When set to True, only the first NIC of a VM will get its default gateway
# from the DHCP server.
#  (boolean value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#use_single_default_gateway = false

# DEPRECATED:
# One or more interfaces that bridges can forward traffic to. If any of the
# items in this list is the special keyword 'all', then all traffic will be
# forwarded.
#
# Possible values:
#
# * A list of zero or more interface names, or the word 'all'.
#  (multi valued)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#forward_bridge_interface = all

#
# This option determines the IP address for the network metadata API server.
#
# This is really the client side of the metadata host equation: it allows
# nova-network to find the metadata server when using default multi-host
# networking.
#
# Possible values:
#
# * Any valid IP address. The default is the address of the Nova API server.
#
# Related options:
#
# * ``metadata_port``
#  (string value)
#metadata_host = $my_ip

# DEPRECATED:
# This option determines the port used for the metadata API server.
#
# Related options:
#
# * ``metadata_host``
#  (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#metadata_port = 8775

# DEPRECATED:
# This expression, if defined, will select any matching iptables rules and place
# them at the top when applying metadata changes to the rules.
#
# Possible values:
#
# * Any string representing a valid regular expression, or an empty string
#
# Related options:
#
# * ``iptables_bottom_regex``
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#iptables_top_regex =

# DEPRECATED:
# This expression, if defined, will select any matching iptables rules and place
# them at the bottom when applying metadata changes to the rules.
#
# Possible values:
#
# * Any string representing a valid regular expression, or an empty string
#
# Related options:
#
# * iptables_top_regex
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#iptables_bottom_regex =

# DEPRECATED:
# By default, packets that do not pass the firewall are DROPped. In many
# cases, though, an operator may find it more useful to change this from
# DROP to REJECT, so that the user issuing those packets has a better idea
# as to what's going on, or to LOGDROP in order to record the blocked
# traffic before DROPping.
#
# Possible values:
#
# * A string representing an iptables target. The default is DROP.
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#iptables_drop_action = DROP

# DEPRECATED:
# This option represents the period of time, in seconds, that ovs_vsctl
# calls will wait for a response from the database before timing out. A
# setting of 0 means that the utility should wait forever for a response.
#
# Possible values:
#
# * Any positive integer if a limited timeout is desired, or zero if the calls
#   should wait forever for a response.
#  (integer value)
# Minimum value: 0
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ovs_vsctl_timeout = 120

# DEPRECATED:
# This option is used mainly in testing to avoid calls to the underlying network
# utilities.
#  (boolean value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#fake_network = false

# DEPRECATED:
# This option determines the number of times to retry ebtables commands before
# giving up. The minimum number of retries is 1.
#
# Possible values:
#
# * Any positive integer
#
# Related options:
#
# * ``ebtables_retry_interval``
#  (integer value)
# Minimum value: 1
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ebtables_exec_attempts = 3

# DEPRECATED:
# This option determines the time, in seconds, that the system will sleep in
# between ebtables retries. Note that each successive retry waits a multiple of
# this value, so for example, if this is set to the default of 1.0 seconds, and
# ebtables_exec_attempts is 4, after the first failure the system will sleep
# for 1 * 1.0 seconds, after the second failure it will sleep 2 * 1.0
# seconds, and after the third failure it will sleep 3 * 1.0 seconds.
#
# Possible values:
#
# * Any non-negative float or integer. Setting this to zero will result in no
#   waiting between attempts.
#
# Related options:
#
# * ebtables_exec_attempts
#  (floating point value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ebtables_retry_interval = 1.0
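
# Worked example of the back-off described above: with the defaults
# ebtables_exec_attempts = 3 and ebtables_retry_interval = 1.0, a command
# that keeps failing sleeps 1 * 1.0 seconds after the first failure and
# 2 * 1.0 seconds after the second, i.e. 3.0 seconds in total before
# giving up.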

# DEPRECATED:
# Enable neutron as the backend for networking.
#
# Determine whether to use Neutron or Nova Network as the back end. Set to true
# to use neutron.
#  (boolean value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#use_neutron = true

#
# This option determines whether the network setup information is injected into
# the VM before it is booted. While it was originally designed to be used only
# by nova-network, it is also used by the vmware and xenapi virt drivers to
# control whether network information is injected into a VM. The libvirt virt
# driver also uses it, when a config drive is used to configure the network,
# to control whether network information is injected into the VM.
#  (boolean value)
#flat_injected = false

# DEPRECATED:
# This option determines the bridge used for simple network interfaces when no
# bridge is specified in the VM creation request.
#
# Please note that this option is only used when using nova-network instead of
# Neutron in your deployment.
#
# Possible values:
#
# * Any string representing a valid network bridge, such as 'br100'
#
# Related options:
#
# * ``use_neutron``
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#flat_network_bridge = <None>

# DEPRECATED:
# This is the address of the DNS server for a simple network. If this option is
# not specified, the default of '8.8.4.4' is used.
#
# Please note that this option is only used when using nova-network instead of
# Neutron in your deployment.
#
# Possible values:
#
# * Any valid IP address.
#
# Related options:
#
# * ``use_neutron``
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#flat_network_dns = 8.8.4.4

# DEPRECATED:
# This option is the name of the virtual interface of the VM on which the bridge
# will be built. While it was originally designed to be used only by
# nova-network, it is also used by libvirt for the bridge interface name.
#
# Possible values:
#
# * Any valid virtual interface name, such as 'eth0'
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#flat_interface = <None>

# DEPRECATED:
# This is the VLAN number used for private networks. Note that when creating
# the networks, if the specified number has already been assigned, nova-network
# will increment this number until it finds an available VLAN.
#
# Please note that this option is only used when using nova-network instead of
# Neutron in your deployment. It also will be ignored if the configuration
# option for `network_manager` is not set to the default of
# 'nova.network.manager.VlanManager'.
#
# Possible values:
#
# * Any integer between 1 and 4094. Values outside of that range will raise a
#   ValueError exception.
#
# Related options:
#
# * ``network_manager``
# * ``use_neutron``
#  (integer value)
# Minimum value: 1
# Maximum value: 4094
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#vlan_start = 100

# DEPRECATED:
# This option is the name of the virtual interface of the VM on which the VLAN
# bridge will be built. While it was originally designed to be used only by
# nova-network, it is also used by libvirt and xenapi for the bridge interface
# name.
#
# Please note that this setting will be ignored in nova-network if the
# configuration option for `network_manager` is not set to the default of
# 'nova.network.manager.VlanManager'.
#
# Possible values:
#
# * Any valid virtual interface name, such as 'eth0'
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options. While
# this option has an effect when using neutron, it incorrectly overrides the
# value provided by neutron and should therefore not be used.
#vlan_interface = <None>

# DEPRECATED:
# This option represents the number of networks to create if not explicitly
# specified when the network is created. The only time this is used is if a CIDR
# is specified, but an explicit network_size is not. In that case, the subnets
# are created by dividing the IP address space of the CIDR by num_networks. The
# resulting subnet sizes cannot be larger than the configuration option
# `network_size`; in that event, they are reduced to `network_size`, and a
# warning is logged.
#
# Please note that this option is only used when using nova-network instead of
# Neutron in your deployment.
#
# Possible values:
#
# * Any positive integer is technically valid, although there are practical
#   limits based upon available IP address space and virtual interfaces.
#
# Related options:
#
# * ``use_neutron``
# * ``network_size``
#  (integer value)
# Minimum value: 1
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#num_networks = 1

# DEPRECATED:
# This option is no longer used since the /os-cloudpipe API was removed in the
# 16.0.0 Pike release. This is the public IP address for the cloudpipe VPN
# servers. It defaults to the IP address of the host.
#
# Please note that this option is only used when using nova-network instead of
# Neutron in your deployment. It also will be ignored if the configuration
# option for `network_manager` is not set to the default of
# 'nova.network.manager.VlanManager'.
#
# Possible values:
#
# * Any valid IP address. The default is ``$my_ip``, the IP address of this
#   host.
#
# Related options:
#
# * ``network_manager``
# * ``use_neutron``
# * ``vpn_start``
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#vpn_ip = $my_ip

# DEPRECATED:
# This is the port number to use as the first VPN port for private networks.
#
# Please note that this option is only used when using nova-network instead of
# Neutron in your deployment. It also will be ignored if the configuration
# option for `network_manager` is not set to the default of
# 'nova.network.manager.VlanManager', or if you specify a value for the
# 'vpn_start' parameter when creating a network.
#
# Possible values:
#
# * Any integer representing a valid port number. The default is 1000.
#
# Related options:
#
# * ``use_neutron``
# * ``vpn_ip``
# * ``network_manager``
#  (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#vpn_start = 1000

# DEPRECATED:
# This option determines the number of addresses in each private subnet.
#
# Please note that this option is only used when using nova-network instead of
# Neutron in your deployment.
#
# Possible values:
#
# * Any positive integer that is less than or equal to the available network
#   size. Note that if you are creating multiple networks, they must all fit in
#   the available IP address space. The default is 256.
#
# Related options:
#
# * ``use_neutron``
# * ``num_networks``
#  (integer value)
# Minimum value: 1
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#network_size = 256

# DEPRECATED:
# This option determines the fixed IPv6 address block when creating a network.
#
# Please note that this option is only used when using nova-network instead of
# Neutron in your deployment.
#
# Possible values:
#
# * Any valid IPv6 CIDR
#
# Related options:
#
# * ``use_neutron``
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#fixed_range_v6 = fd00::/48

# DEPRECATED:
# This is the default IPv4 gateway. It is used only in the testing suite.
#
# Please note that this option is only used when using nova-network instead of
# Neutron in your deployment.
#
# Possible values:
#
# * Any valid IP address.
#
# Related options:
#
# * ``use_neutron``
# * ``gateway_v6``
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#gateway = <None>

# DEPRECATED:
# This is the default IPv6 gateway. It is used only in the testing suite.
#
# Please note that this option is only used when using nova-network instead of
# Neutron in your deployment.
#
# Possible values:
#
# * Any valid IP address.
#
# Related options:
#
# * ``use_neutron``
# * ``gateway``
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#gateway_v6 = <None>

# DEPRECATED:
# This option represents the number of IP addresses to reserve at the top of the
# address range for VPN clients. It also will be ignored if the configuration
# option for `network_manager` is not set to the default of
# 'nova.network.manager.VlanManager'.
#
# Possible values:
#
# * Any integer, 0 or greater.
#
# Related options:
#
# * ``use_neutron``
# * ``network_manager``
#  (integer value)
# Minimum value: 0
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#cnt_vpn_clients = 0

# DEPRECATED:
# This is the number of seconds to wait before disassociating a deallocated
# fixed IP address. This is only used with the nova-network service, and has
# no effect when using neutron for networking.
#
# Possible values:
#
# * Any integer, zero or greater.
#
# Related options:
#
# * ``use_neutron``
#  (integer value)
# Minimum value: 0
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#fixed_ip_disassociate_timeout = 600

# DEPRECATED:
# This option determines how many times nova-network will attempt to create a
# unique MAC address before giving up and raising a
# `VirtualInterfaceMacAddressException` error.
#
# Possible values:
#
# * Any positive integer. The default is 5.
#
# Related options:
#
# * ``use_neutron``
#  (integer value)
# Minimum value: 1
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#create_unique_mac_address_attempts = 5

# DEPRECATED:
# Determines whether unused gateway devices, both VLAN and bridge, are
# deleted if the network is in nova-network VLAN mode and is multi-hosted.
#
# Related options:
#
# * ``use_neutron``
# * ``vpn_ip``
# * ``fake_network``
#  (boolean value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#teardown_unused_network_gateway = false

# DEPRECATED:
# When this option is True, a call is made to release the DHCP lease for the
# instance when that instance is terminated.
#
# Related options:
#
# * ``use_neutron``
#  (boolean value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#force_dhcp_release = true

# DEPRECATED:
# When this option is True, whenever a DNS entry must be updated, a fanout cast
# message is sent to all network hosts to update their DNS entries in multi-host
# mode.
#
# Related options:
#
# * ``use_neutron``
#  (boolean value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#update_dns_entries = false

# DEPRECATED:
# This option determines the time, in seconds, to wait between refreshing DNS
# entries for the network.
#
# Possible values:
#
# * A positive integer
# * -1 to disable updates
#
# Related options:
#
# * ``use_neutron``
#  (integer value)
# Minimum value: -1
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#dns_update_periodic_interval = -1

# DEPRECATED:
# This option allows you to specify the domain for the DHCP server.
#
# Possible values:
#
# * Any string that is a valid domain name.
#
# Related options:
#
# * ``use_neutron``
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#dhcp_domain = novalocal

# DEPRECATED:
# This option allows you to specify the L3 management library to be used.
#
# Possible values:
#
# * Any dot-separated string that represents the import path to an L3 networking
#   library.
#
# Related options:
#
# * ``use_neutron``
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#l3_lib = nova.network.l3.LinuxNetL3

# DEPRECATED:
# THIS VALUE SHOULD BE SET WHEN CREATING THE NETWORK.
#
# If True in multi_host mode, all compute hosts share the same DHCP address.
# The same IP address used for DHCP will be added on each nova-network node,
# and will only be visible to the VMs on the same host.
#
# The use of this configuration has been deprecated and may be removed in any
# release after Mitaka. It is recommended that, instead of relying on this
# option, an explicit value should be passed to 'create_networks()' as a
# keyword argument with the name 'share_address'.
#  (boolean value)
# This option is deprecated for removal since 2014.2.
# Its value may be silently ignored in the future.
#share_dhcp_address = false

# DEPRECATED:
# URL for LDAP server which will store DNS entries
#
# Possible values:
#
# * A valid LDAP URL representing the server
#  (uri value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ldap_dns_url = ldap://ldap.example.com:389

# DEPRECATED: Bind user for LDAP server (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ldap_dns_user = uid=admin,ou=people,dc=example,dc=org

# DEPRECATED: Bind user's password for LDAP server (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ldap_dns_password = password

# DEPRECATED:
# Hostmaster for LDAP DNS driver Start of Authority
#
# Possible values:
#
# * Any valid string representing LDAP DNS hostmaster.
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ldap_dns_soa_hostmaster = hostmaster@example.org

# DEPRECATED:
# DNS Servers for LDAP DNS driver
#
# Possible values:
#
# * A valid URL representing a DNS server
#  (multi valued)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ldap_dns_servers = dns.example.org

# DEPRECATED:
# Base distinguished name for the LDAP search query
#
# This option helps to decide where to look up the host in LDAP.
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ldap_dns_base_dn = ou=hosts,dc=example,dc=org

# DEPRECATED:
# Refresh interval (in seconds) for LDAP DNS driver Start of Authority
#
# The time interval a secondary/slave DNS server waits before requesting the
# primary DNS server's current SOA record. If the records differ, the
# secondary DNS server will request a zone transfer from the primary.
#
# NOTE: Lower values would cause more traffic.
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ldap_dns_soa_refresh = 1800

# DEPRECATED:
# Retry interval (in seconds) for LDAP DNS driver Start of Authority
#
# The time interval a secondary/slave DNS server should wait if an
# attempt to transfer the zone failed during the previous refresh interval.
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ldap_dns_soa_retry = 3600

# DEPRECATED:
# Expiry interval (in seconds) for LDAP DNS driver Start of Authority
#
# The time interval a secondary/slave DNS server holds the information
# before it is no longer considered authoritative.
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ldap_dns_soa_expiry = 86400

# DEPRECATED:
# Minimum interval (in seconds) for LDAP DNS driver Start of Authority
#
# The minimum time-to-live that applies to all resource records in the
# zone file. This value tells other servers how long they should keep
# the data in cache.
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ldap_dns_soa_minimum = 7200

# DEPRECATED:
# Default value for multi_host in networks.
#
# The nova-network service can operate in a multi-host or single-host mode.
# In multi-host mode each compute node runs a copy of nova-network and the
# instances on that compute node use the compute node as a gateway to the
# Internet. Whereas in single-host mode, a central server runs the
# nova-network service. All compute nodes forward traffic from the instances
# to the cloud controller, which then forwards traffic to the Internet.
#
# If this option is set to true, some RPC network calls will be sent directly
# to the host.
#
# Note that this option is only used when using nova-network instead of
# Neutron in your deployment.
#
# Related options:
#
# * ``use_neutron``
#  (boolean value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#multi_host = false

# DEPRECATED:
# Driver to use for network creation.
#
# Network driver initializes (creates bridges and so on) only when the
# first VM lands on a host node. All network managers configure the
# network using network drivers. The driver is not tied to any particular
# network manager.
#
# The default Linux driver implements VLANs, bridges, and iptables rules
# using Linux utilities.
#
# Note that this option is only used when using nova-network instead
# of Neutron in your deployment.
#
# Related options:
#
# * ``use_neutron``
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#network_driver = nova.network.linux_net

# DEPRECATED:
# Firewall driver to use with ``nova-network`` service.
#
# This option only applies when using the ``nova-network`` service. When
# using another networking service, such as Neutron, this should be set to
# ``nova.virt.firewall.NoopFirewallDriver``.
#
# Possible values:
#
# * ``nova.virt.firewall.IptablesFirewallDriver``
# * ``nova.virt.firewall.NoopFirewallDriver``
# * ``nova.virt.libvirt.firewall.IptablesFirewallDriver``
# * [...]
#
# Related options:
#
# * ``use_neutron``: This must be set to ``False`` to enable ``nova-network``
#   networking
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#firewall_driver = nova.virt.firewall.NoopFirewallDriver
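
# Illustrative example: a deployment using Neutron keeps the no-op driver,
# as advised above, and lets Neutron handle traffic filtering:
#
#   use_neutron = true
#   firewall_driver = nova.virt.firewall.NoopFirewallDriver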

# DEPRECATED:
# Determine whether to allow network traffic from same network.
#
# When set to true, hosts on the same subnet are not filtered and are allowed
# to pass all types of traffic between them. On a flat network, this allows
# unfiltered communication between all instances from all projects. With
# VLAN networking, this allows access between instances within the same
# project.
#
# This option only applies when using the ``nova-network`` service. When
# using another networking service, such as Neutron, security groups or
# other approaches should be used.
#
# Possible values:
#
# * True: Network traffic should be allowed to pass between all instances on
#   the same network, regardless of their tenant and security policies
# * False: Network traffic should not be allowed to pass between instances
#   unless it is unblocked in a security group
#
# Related options:
#
# * ``use_neutron``: This must be set to ``False`` to enable ``nova-network``
#   networking
# * ``firewall_driver``: This must be set to
#   ``nova.virt.libvirt.firewall.IptablesFirewallDriver`` to ensure the
#   libvirt firewall driver is enabled.
#  (boolean value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#allow_same_net_traffic = true

# DEPRECATED:
# Default pool for floating IPs.
#
# This option specifies the default floating IP pool for allocating floating
# IPs.
#
# While allocating a floating IP, users can optionally pass in the name of the
# pool they want to allocate from, otherwise it will be pulled from the
# default pool.
#
# If this option is not set, then 'nova' is used as default floating pool.
#
# Possible values:
#
# * Any string representing a floating IP pool name
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# This option was used for two purposes: to set the floating IP pool name for
# nova-network and to do the same for neutron. nova-network is deprecated,
# as are any related configuration options. Users of neutron, meanwhile,
# should use the 'default_floating_pool' option in the '[neutron]' group.
#default_floating_pool = nova

# DEPRECATED:
# Autoassign floating IP to VM
#
# When set to True, a floating IP is automatically allocated and associated
# with the VM upon creation.
#
# Related options:
#
# * use_neutron: this option only works with nova-network.
#  (boolean value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#auto_assign_floating_ip = false

# DEPRECATED:
# Full class name for the DNS Manager for floating IPs.
#
# This option specifies the class of the driver that provides functionality
# to manage DNS entries associated with floating IPs.
#
# When a user adds a DNS entry for a specified domain to a floating IP,
# nova will add a DNS entry using the specified floating DNS driver.
# When a floating IP is deallocated, its DNS entry will automatically be
# deleted.
#
# Possible values:
#
# * Full Python path to the class to be used
#
# Related options:
#
# * use_neutron: this option only works with nova-network.
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#floating_ip_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver

# DEPRECATED:
# Full class name for the DNS Manager for instance IPs.
#
# This option specifies the class of the driver that provides functionality
# to manage DNS entries for instances.
#
# On instance creation, nova will add DNS entries for the instance name and
# id, using the specified instance DNS driver and domain. On instance deletion,
# nova will remove the DNS entries.
#
# Possible values:
#
# * Full Python path to the class to be used
#
# Related options:
#
# * use_neutron: this option only works with nova-network.
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#instance_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver

# DEPRECATED:
# If specified, Nova checks if the availability_zone of every instance matches
# what the database says the availability_zone should be for the specified
# dns_domain.
#
# Related options:
#
# * use_neutron: this option only works with nova-network.
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#instance_dns_domain =

# DEPRECATED:
# Assign IPv6 and IPv4 addresses when creating instances.
#
# Related options:
#
# * use_neutron: this only works with nova-network.
#  (boolean value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#use_ipv6 = false

# DEPRECATED:
# Abstracts out IPv6 address generation to pluggable backends.
#
# nova-network can be put into dual-stack mode, so that it uses
# both IPv4 and IPv6 addresses. In dual-stack mode, by default, instances
# acquire IPv6 global unicast addresses with the help of the stateless
# address auto-configuration mechanism.
#
# Related options:
#
# * use_neutron: this option only works with nova-network.
# * use_ipv6: this option only works if ipv6 is enabled for nova-network.
#  (string value)
# Possible values:
# rfc2462 - <No description provided>
# account_identifier - <No description provided>
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#ipv6_backend = rfc2462

# DEPRECATED:
# This option is used to enable or disable quota checking for tenant networks.
#
# Related options:
#
# * quota_networks
#  (boolean value)
# This option is deprecated for removal since 14.0.0.
# Its value may be silently ignored in the future.
# Reason:
# CRUD operations on tenant networks are only available when using nova-network
# and nova-network is itself deprecated.
#enable_network_quota = false

# DEPRECATED:
# This option controls the number of private networks that can be created per
# project (or per tenant).
#
# Related options:
#
# * enable_network_quota
#  (integer value)
# Minimum value: 0
# This option is deprecated for removal since 14.0.0.
# Its value may be silently ignored in the future.
# Reason:
# CRUD operations on tenant networks are only available when using nova-network
# and nova-network is itself deprecated.
#quota_networks = 3

# DEPRECATED: Full class name for the Manager for network (string value)
# Possible values:
# nova.network.manager.FlatManager - <No description provided>
# nova.network.manager.FlatDHCPManager - <No description provided>
# nova.network.manager.VlanManager - <No description provided>
# This option is deprecated for removal since 18.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#network_manager = nova.network.manager.VlanManager

#
# Filename that will be used for storing websocket frames received
# and sent by a proxy service (like VNC, spice, serial) running on this host.
# If this is not set, no recording will be done.
#  (string value)
#record = <None>

# Run as a background process. (boolean value)
#daemon = false

# Disallow non-encrypted connections. (boolean value)
#ssl_only = false

# Set to True if source host is addressed with IPv6. (boolean value)
#source_is_ipv6 = false

# Path to SSL certificate file. (string value)
#cert = self.pem

# SSL key file (if separate from cert). (string value)
#key = <None>

#
# Path to directory with content which will be served by a web server.
#  (string value)
#web = /usr/share/spice-html5

#
# The directory where the Nova python modules are installed.
#
# This directory is used to store template files for networking and remote
# console access. It is also the default path for other config options which
# need to persist Nova internal data. It is very unlikely that you need to
# change this option from its default value.
#
# Possible values:
#
# * The full path to a directory.
#
# Related options:
#
# * ``state_path``
#  (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#pybasedir = <Path>

#
# The directory where the Nova binaries are installed.
#
# This option is only relevant if the networking capabilities from Nova are
# used (see services below). Nova's networking capabilities are targeted to
# be fully replaced by Neutron in the future. It is very unlikely that you need
# to change this option from its default value.
#
# Possible values:
#
# * The full path to a directory.
#  (string value)
#bindir = /home/docs/checkouts/readthedocs.org/user_builds/compute-hyperv/envs/stable-rocky/local/bin

#
# The top-level directory for maintaining Nova's state.
#
# This directory is used to store Nova's internal state. It is used by a
# variety of other config options which derive from this. In some scenarios
# (for example migrations) it makes sense to use a storage location which is
# shared between multiple compute hosts (for example via NFS). Unless the
# option ``instances_path`` gets overwritten, this directory can grow very
# large.
#
# Possible values:
#
# * The full path to a directory. Defaults to value provided in ``pybasedir``.
#  (string value)
#state_path = $pybasedir

#
# This option allows setting an alternate timeout value for RPC calls
# that have the potential to take a long time. If set, RPC calls to
# other services will use this value for the timeout (in seconds)
# instead of the global rpc_response_timeout value.
#
# Operations with RPC calls that utilize this value:
#
# * live migration
# * scheduling
#
# Related options:
#
# * rpc_response_timeout
#  (integer value)
#long_rpc_timeout = 1800

#
# Number of seconds indicating how frequently the state of services on a
# given hypervisor is reported. Nova needs to know this to determine the
# overall health of the deployment.
#
# Related Options:
#
# * service_down_time
#   report_interval should be less than service_down_time. If service_down_time
#   is less than report_interval, services will routinely be considered down,
#   because they report in too rarely.
#  (integer value)
#report_interval = 10

#
# Maximum time in seconds since last check-in for up service
#
# Each compute node periodically updates its database status based on the
# specified report interval. If the compute node hasn't updated the status
# for more than service_down_time, then the compute node is considered down.
#
# Related Options:
#
# * report_interval (service_down_time should not be less than report_interval)
# * scheduler.periodic_task_interval
#  (integer value)
#service_down_time = 60
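
# Illustrative example of the constraint above: with the defaults, a
# service checks in every 10 seconds and may miss several check-ins before
# the 60-second window marks it down:
#
#   report_interval = 10
#   service_down_time = 60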

#
# Enable periodic tasks.
#
# If set to true, this option allows services to periodically run tasks
# on the manager.
#
# If you run multiple schedulers or conductors, you may want to run periodic
# tasks on only one host; in that case, disable this option for all hosts
# but one.
#  (boolean value)
#periodic_enable = true

#
# Number of seconds to randomly delay when starting the periodic task
# scheduler to reduce stampeding.
#
# When compute workers are restarted in unison across a cluster,
# they all end up running the periodic tasks at the same time
# causing problems for the external services. To mitigate this
# behavior, periodic_fuzzy_delay option allows you to introduce a
# random initial delay when starting the periodic task scheduler.
#
# Possible Values:
#
# * Any positive integer (in seconds)
# * 0 : disable the random delay
#  (integer value)
# Minimum value: 0
#periodic_fuzzy_delay = 60

# List of APIs to be enabled by default. (list value)
#enabled_apis = osapi_compute,metadata

#
# List of APIs with enabled SSL.
#
# Nova provides SSL support for the API servers. The enabled_ssl_apis option
# allows configuring the SSL support.
#  (list value)
#enabled_ssl_apis =

#
# IP address on which the OpenStack API will listen.
#
# The OpenStack API service listens on this IP address for incoming
# requests.
#  (string value)
#osapi_compute_listen = 0.0.0.0

#
# Port on which the OpenStack API will listen.
#
# The OpenStack API service listens on this port number for incoming
# requests.
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#osapi_compute_listen_port = 8774

#
# Number of workers for OpenStack API service. The default will be the number
# of CPUs available.
#
# OpenStack API services can be configured to run as multi-process (workers).
# This mitigates the drop in throughput when API request concurrency
# increases. The OpenStack API service will run in the specified number of
# processes.
#
# Possible Values:
#
# * Any positive integer
# * None (default value)
#  (integer value)
# Minimum value: 1
#osapi_compute_workers = <None>

#
# IP address on which the metadata API will listen.
#
# The metadata API service listens on this IP address for incoming
# requests.
#  (string value)
#metadata_listen = 0.0.0.0

#
# Port on which the metadata API will listen.
#
# The metadata API service listens on this port number for incoming
# requests.
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#metadata_listen_port = 8775

#
# Number of workers for the metadata service. If not specified, the number of
# available CPUs will be used.
#
# The metadata service can be configured to run as multi-process (workers).
# This mitigates the drop in throughput when API request concurrency
# increases. The metadata service will run in the specified number of
# processes.
#
# Possible Values:
#
# * Any positive integer
# * None (default value)
#  (integer value)
# Minimum value: 1
#metadata_workers = <None>

#
# This option specifies the driver to be used for the servicegroup service.
#
# ServiceGroup API in nova enables checking status of a compute node. When a
# compute worker running the nova-compute daemon starts, it calls the join API
# to join the compute group. Services like nova scheduler can query the
# ServiceGroup API to check if a node is alive. Internally, the ServiceGroup
# client driver automatically updates the compute worker status. There are
# multiple backend implementations for this service: the Database ServiceGroup
# driver and the Memcache ServiceGroup driver.
#
# Related Options:
#
# * ``service_down_time`` (maximum time since last check-in for up service)
#  (string value)
# Possible values:
# db - Database ServiceGroup driver
# mc - Memcache ServiceGroup driver
#servicegroup_driver = db
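#
# For example, to track service liveness through memcache instead of the
# database (illustrative; assumes a reachable memcached deployment is
# already configured)::
#
#     servicegroup_driver = mc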

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses a logging handler designed to watch the file system. When the log file
# is moved or removed, this handler will open a new log file at the specified
# path instantaneously. This only makes sense if the log_file option is
# specified and the platform is Linux. This option is ignored if
# log_config_append is set. (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false

# Enable journald for logging. If running in a systemd environment you may wish
# to enable journal support. Doing so will use the journal native protocol,
# which includes structured metadata in addition to log messages. This option
# is ignored if log_config_append is set. (boolean value)
#use_journal = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Use JSON formatting for logging. This option is ignored if log_config_append
# is set. (boolean value)
#use_json = false

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = false

# Log output to Windows Event Log. (boolean value)
#use_eventlog = false

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
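#
# For example, to raise the oslo.messaging level while silencing other
# libraries, the list can be overridden as follows (an illustrative,
# shortened list)::
#
#     default_log_levels = amqp=WARN,sqlalchemy=WARN,oslo.messaging=DEBUG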

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Interval, number of seconds, of log rate limiting. (integer value)
#rate_limit_interval = 0

# Maximum number of logged messages per rate_limit_interval. (integer value)
#rate_limit_burst = 0

# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or
# empty string. Logs with level greater or equal to rate_limit_except_level are
# not filtered. An empty string means that all levels are filtered. (string
# value)
#rate_limit_except_level = CRITICAL

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false

#
# From oslo.messaging
#

# Size of RPC connection pool. (integer value)
#rpc_conn_pool_size = 30

# The pool size limit for the connection expiration policy (integer value)
#conn_pool_min_size = 2

# The time-to-live in seconds of idle connections in the pool (integer value)
#conn_pool_ttl = 1200

# Size of executor thread pool when executor is threading or eventlet. (integer
# value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60

# The network address and optional user credentials for connecting to the
# messaging backend, in URL format. The expected format is:
#
# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
#
# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
#
# For full details on the fields in the URL see the documentation of
# oslo_messaging.TransportURL at
# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
# (string value)
#transport_url = rabbit://
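#
# A clustered RabbitMQ deployment can list several hosts in a single URL,
# following the format above (hostnames and credentials are illustrative)::
#
#     transport_url = rabbit://nova:secret@rabbit1:5672,nova:secret@rabbit2:5672/openstack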

# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack


[api]
#
# Options under this group are used to define Nova API.

#
# From nova.conf
#

#
# Determine the strategy to use for authentication.
#  (string value)
# Possible values:
# keystone - Use keystone for authentication.
# noauth2 - Designed for testing only, as it does no actual credential checking.
# 'noauth2' provides administrative credentials only if 'admin' is specified as
# the username.
#auth_strategy = keystone

#
# When True, the 'X-Forwarded-For' header is treated as the canonical remote
# address. When False (the default), the 'remote_address' header is used.
#
# You should only enable this if you have an HTML sanitizing proxy.
#  (boolean value)
#use_forwarded_for = false

#
# When gathering the existing metadata for a config drive, the EC2-style
# metadata is returned for all versions that don't appear in this option.
# As of the Liberty release, the available versions are:
#
# * 1.0
# * 2007-01-19
# * 2007-03-01
# * 2007-08-29
# * 2007-10-10
# * 2007-12-15
# * 2008-02-01
# * 2008-09-01
# * 2009-04-04
#
# The option is in the format of a single string, with each version separated
# by a space.
#
# Possible values:
#
# * Any string that represents zero or more versions, separated by spaces.
#  (string value)
#config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01

#
# A list of vendordata providers.
#
# Vendordata providers are how deployers can provide metadata, via config
# drive and the metadata service, that is specific to their deployment.
#
# For more information on the requirements for implementing a vendordata
# dynamic endpoint, please see the vendordata.rst file in the nova developer
# reference.
#
# Related options:
#
# * ``vendordata_dynamic_targets``
# * ``vendordata_dynamic_ssl_certfile``
# * ``vendordata_dynamic_connect_timeout``
# * ``vendordata_dynamic_read_timeout``
# * ``vendordata_dynamic_failure_fatal``
#  (list value)
#vendordata_providers = StaticJSON

#
# A list of targets for the dynamic vendordata provider. These targets are of
# the form <name>@<url>.
#
# The dynamic vendordata provider collects metadata by contacting external REST
# services and querying them for information about the instance. This behaviour
# is documented in the vendordata.rst file in the nova developer reference.
#  (list value)
#vendordata_dynamic_targets =
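#
# For example, a single dynamic target in the <name>@<url> form described
# above, served by the DynamicJSON provider (the endpoint is illustrative)::
#
#     vendordata_providers = StaticJSON,DynamicJSON
#     vendordata_dynamic_targets = testing@http://127.0.0.1:9312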

#
# Path to an optional certificate file or CA bundle to verify dynamic
# vendordata REST services ssl certificates against.
#
# Possible values:
#
# * An empty string, or a path to a valid certificate file
#
# Related options:
#
# * vendordata_providers
# * vendordata_dynamic_targets
# * vendordata_dynamic_connect_timeout
# * vendordata_dynamic_read_timeout
# * vendordata_dynamic_failure_fatal
#  (string value)
#vendordata_dynamic_ssl_certfile =

#
# Maximum wait time for an external REST service to connect.
#
# Possible values:
#
# * Any integer with a value greater than three (the TCP packet retransmission
#   timeout). Note that instance start may be blocked during this wait time,
#   so this value should be kept small.
#
# Related options:
#
# * vendordata_providers
# * vendordata_dynamic_targets
# * vendordata_dynamic_ssl_certfile
# * vendordata_dynamic_read_timeout
# * vendordata_dynamic_failure_fatal
#  (integer value)
# Minimum value: 3
#vendordata_dynamic_connect_timeout = 5

#
# Maximum wait time for an external REST service to return data once connected.
#
# Possible values:
#
# * Any integer. Note that instance start is blocked during this wait time,
#   so this value should be kept small.
#
# Related options:
#
# * vendordata_providers
# * vendordata_dynamic_targets
# * vendordata_dynamic_ssl_certfile
# * vendordata_dynamic_connect_timeout
# * vendordata_dynamic_failure_fatal
#  (integer value)
# Minimum value: 0
#vendordata_dynamic_read_timeout = 5

#
# Should failures to fetch dynamic vendordata be fatal to instance boot?
#
# Related options:
#
# * vendordata_providers
# * vendordata_dynamic_targets
# * vendordata_dynamic_ssl_certfile
# * vendordata_dynamic_connect_timeout
# * vendordata_dynamic_read_timeout
#  (boolean value)
#vendordata_dynamic_failure_fatal = false

#
# This option is the time (in seconds) to cache metadata. When set to 0,
# metadata caching is disabled entirely; this is generally not recommended for
# performance reasons. Increasing this setting should improve response times
# of the metadata API when under heavy load. Higher values may increase memory
# usage, and result in longer times for host metadata changes to take effect.
#  (integer value)
# Minimum value: 0
#metadata_cache_expiration = 15

#
# Cloud providers may store custom data in vendor data file that will then be
# available to the instances via the metadata service, and to the rendering of
# config-drive. The default class for this, JsonFileVendorData, loads this
# information from a JSON file, whose path is configured by this option. If
# there is no path set by this option, the class returns an empty dictionary.
#
# Possible values:
#
# * Any string representing the path to the data file, or an empty string
#     (default).
#  (string value)
#vendordata_jsonfile_path = <None>
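#
# For example, to expose a static vendor data file through the default
# StaticJSON provider (the path is illustrative)::
#
#     vendordata_providers = StaticJSON
#     vendordata_jsonfile_path = /etc/nova/vendor_data.json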

#
# As a query can potentially return many thousands of items, you can limit the
# maximum number of items in a single response by setting this option.
#  (integer value)
# Minimum value: 0
# Deprecated group/name - [DEFAULT]/osapi_max_limit
#max_limit = 1000

#
# This string is prepended to the normal URL that is returned in links to the
# OpenStack Compute API. If it is empty (the default), the URLs are returned
# unchanged.
#
# Possible values:
#
# * Any string, including an empty string (the default).
#  (string value)
# Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix
#compute_link_prefix = <None>

#
# This string is prepended to the normal URL that is returned in links to
# Glance resources. If it is empty (the default), the URLs are returned
# unchanged.
#
# Possible values:
#
# * Any string, including an empty string (the default).
#  (string value)
# Deprecated group/name - [DEFAULT]/osapi_glance_link_prefix
#glance_link_prefix = <None>

#
# When enabled, this will cause the API to only query cell databases
# in which the tenant has mapped instances. This requires an additional
# (fast) query in the API database before each list, but also
# (potentially) limits the number of cell databases that must be queried
# to provide the result. If you have a small number of cells, or tenants
# are likely to have instances in all cells, then this should be
# False. If you have many cells, especially if you confine tenants to a
# small subset of those cells, this should be True.
#  (boolean value)
#instance_list_per_project_cells = false

#
# This controls the method by which the API queries cell databases in
# smaller batches during large instance list operations. If batching is
# performed, a large instance list operation will request some fraction
# of the overall API limit from each cell database initially, and will
# re-request that same batch size as records are consumed (returned)
# from each cell as necessary. Larger batches mean less chattiness
# between the API and the database, but potentially more wasted effort
# processing the results from the database which will not be returned to
# the user. Any strategy will yield a batch size of at least 100 records,
# to avoid a user causing many tiny database queries in their request.
#
# Related options:
#
# * instance_list_cells_batch_fixed_size
# * max_limit
#  (string value)
# Possible values:
# distributed - Divide the limit requested by the user by the number of cells in
# the system. This requires counting the cells in the system initially, which
# will not be refreshed until service restart or SIGHUP. The actual batch size
# will be increased by 10% over the result of ($limit / $num_cells).
# fixed - Request fixed-size batches from each cell, as defined by
# ``instance_list_cells_batch_fixed_size``. If the limit is smaller than the
# batch size, the limit will be used instead. If you do not wish batching to be
# used at all, setting the fixed size equal to the ``max_limit`` value will
# cause only one request per cell database to be issued.
#instance_list_cells_batch_strategy = distributed

#
# This controls the batch size of instances requested from each cell
# database if ``instance_list_cells_batch_strategy`` is set to ``fixed``.
# This integral value will define the limit issued to each cell every time
# a batch of instances is requested, regardless of the number of cells in
# the system or any other factors. Per the general logic called out in
# the documentation for ``instance_list_cells_batch_strategy``, the
# minimum value for this is 100 records per batch.
#
# Related options:
#
# * instance_list_cells_batch_strategy
# * max_limit
#  (integer value)
# Minimum value: 100
#instance_list_cells_batch_fixed_size = 100
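#
# For example, to issue only one query per cell for typical listings, the
# fixed strategy can be pinned to the API page size, as described above
# (illustrative, assuming the default ``max_limit`` of 1000)::
#
#     instance_list_cells_batch_strategy = fixed
#     instance_list_cells_batch_fixed_size = 1000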

#
# When set to False, this will cause the API to return a 500 error if there is
# an infrastructure failure like non-responsive cells. If you want the API to
# skip the down cells and return the results from the up cells, set this
# option to True.
#  (boolean value)
#list_records_by_skipping_down_cells = true

#
# When True, the TenantNetworkController will query the Neutron API to get the
# default networks to use.
#
# Related options:
#
# * neutron_default_tenant_id
#  (boolean value)
#use_neutron_default_nets = false

#
# Tenant ID (also referred to in some places as the 'project ID') to use for
# getting the default network from the Neutron API.
#
# Related options:
#
# * use_neutron_default_nets
#  (string value)
#neutron_default_tenant_id = default

#
# Enables returning of the instance password by the relevant server API calls
# such as create, rebuild, evacuate, or rescue. If the hypervisor does not
# support password injection, the returned password will not be correct, so
# in that case set this option to False.
#  (boolean value)
#enable_instance_password = true


[api_database]
#
# The *Nova API Database* is a separate database which holds information that
# is used across *cells*. This database is mandatory since the Mitaka
# release (13.0.0).

#
# From nova.conf
#

# The SQLAlchemy connection string to use to connect to the database. (string
# value)
#connection = <None>
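#
# A typical MySQL connection string looks like the following (host,
# credentials and database name are illustrative)::
#
#     connection = mysql+pymysql://nova:secret@controller/nova_api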

# Optional URL parameters to append onto the connection URL at connect time;
# specify as param1=value1&param2=value2&... (string value)
#connection_parameters =

# If True, SQLite uses synchronous mode. (boolean value)
#sqlite_synchronous = true

# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>

# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
# the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL

# Connections which have been present in the connection pool longer than this
# number of seconds will be replaced with a new one the next time they are
# checked out from the pool. (integer value)
# Deprecated group/name - [api_database]/idle_timeout
#connection_recycle_time = 3600

# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
# indicates no limit. (integer value)
#max_pool_size = <None>

# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer value)
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
#max_overflow = <None>

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
#pool_timeout = <None>


[barbican]

#
# From nova.conf
#

# Use this endpoint to connect to Barbican, for example:
# "http://localhost:9311/" (string value)
#barbican_endpoint = <None>

# Version of the Barbican API, for example: "v1" (string value)
#barbican_api_version = <None>

# Use this endpoint to connect to Keystone (string value)
# Deprecated group/name - [key_manager]/auth_url
#auth_endpoint = http://localhost/identity/v3

# Number of seconds to wait before retrying poll for key creation completion
# (integer value)
#retry_delay = 1

# Number of times to retry poll for key creation completion (integer value)
#number_of_retries = 60

# Specifies whether to verify TLS (https) requests. If False, the server's
# certificate will not be validated (boolean value)
#verify_ssl = true

# Specifies the type of endpoint.  Allowed values are: public, private, and
# admin (string value)
# Possible values:
# public - <No description provided>
# internal - <No description provided>
# admin - <No description provided>
#barbican_endpoint_type = public
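#
# A minimal explicit Barbican endpoint configuration, reusing the example
# values given above (illustrative)::
#
#     barbican_endpoint = http://localhost:9311/
#     barbican_api_version = v1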


[cache]

#
# From nova.conf
#

# Prefix for building the configuration dictionary for the cache region. This
# should not need to be changed unless there is another dogpile.cache region
# with the same configuration name. (string value)
#config_prefix = cache.oslo

# Default TTL, in seconds, for any cached item in the dogpile.cache region. This
# applies to any cached method that doesn't have an explicit cache expiration
# time defined for it. (integer value)
#expiration_time = 600

# Cache backend module. For eventlet-based environments or environments with
# hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool)
# is recommended. For environments with fewer than 100 threaded servers,
# Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is
# recommended. Test environments with a single instance of the server can use
# the dogpile.cache.memory backend. (string value)
# Possible values:
# oslo_cache.memcache_pool - <No description provided>
# oslo_cache.dict - <No description provided>
# oslo_cache.mongo - <No description provided>
# oslo_cache.etcd3gw - <No description provided>
# dogpile.cache.memcached - <No description provided>
# dogpile.cache.pylibmc - <No description provided>
# dogpile.cache.bmemcached - <No description provided>
# dogpile.cache.dbm - <No description provided>
# dogpile.cache.redis - <No description provided>
# dogpile.cache.memory - <No description provided>
# dogpile.cache.memory_pickle - <No description provided>
# dogpile.cache.null - <No description provided>
#backend = dogpile.cache.null

# Arguments supplied to the backend module. Specify this option once per
# argument to be passed to the dogpile.cache backend. Example format:
# "<argname>:<value>". (multi valued)
#backend_argument =

# Proxy classes to import that will affect the way the dogpile.cache backend
# functions. See the dogpile.cache documentation on changing-backend-behavior.
# (list value)
#proxies =

# Global toggle for caching. (boolean value)
#enabled = false

# Extra debugging from the cache backend (cache keys, get/set/delete/etc calls).
# This is only really useful if you need to see the specific cache-backend
# get/set/delete calls with the keys/values.  Typically this should be left set
# to false. (boolean value)
#debug_cache_backend = false

# Memcache servers in the format of "host:port". (dogpile.cache.memcache and
# oslo_cache.memcache_pool backends only). (list value)
#memcache_servers = localhost:11211

# Number of seconds memcached server is considered dead before it is tried
# again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
# (integer value)
#memcache_dead_retry = 300

# Timeout in seconds for every call to a server. (dogpile.cache.memcache and
# oslo_cache.memcache_pool backends only). (floating point value)
#memcache_socket_timeout = 3.0

# Max total number of open connections to every memcached server.
# (oslo_cache.memcache_pool backend only). (integer value)
#memcache_pool_maxsize = 10

# Number of seconds a connection to memcached is held unused in the pool before
# it is closed. (oslo_cache.memcache_pool backend only). (integer value)
#memcache_pool_unused_timeout = 60

# Number of seconds that an operation will wait to get a memcache client
# connection. (integer value)
#memcache_pool_connection_get_timeout = 10
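#
# For example, a minimal configuration that enables caching against a pooled
# memcached backend (the server address is illustrative)::
#
#     [cache]
#     enabled = true
#     backend = oslo_cache.memcache_pool
#     memcache_servers = controller:11211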


[cells]
#
# DEPRECATED: Cells options allow you to use cells v1 functionality in an
# OpenStack deployment.
#
# Note that the options in this group are only for cells v1 functionality, which
# is considered experimental and not recommended for new deployments. Cells v1
# is being replaced with cells v2, which is required starting in the 15.0.0
# Ocata release; all Nova deployments will be at least a cells v2 cell of one.
#

#
# From nova.conf
#

# DEPRECATED:
# Enable cell v1 functionality.
#
# Note that cells v1 is considered experimental and not recommended for new
# Nova deployments. Cells v1 is being replaced by cells v2; starting in
# the 15.0.0 Ocata release, all Nova deployments are at least a cells v2 cell
# of one. Setting this option, or any other options in the [cells] group, is
# not required for cells v2.
#
# When this functionality is enabled, it lets you scale an OpenStack
# Compute cloud in a more distributed fashion without having to use
# complicated technologies like database and message queue clustering.
# Cells are configured as a tree. The top-level cell should have a host
# that runs a nova-api service, but no nova-compute services. Each
# child cell should run all of the typical nova-* services in a regular
# Compute cloud except for nova-api. You can think of cells as a normal
# Compute deployment in that each cell has its own database server and
# message queue broker.
#
# Related options:
#
# * name: A unique cell name must be given when this functionality
#   is enabled.
# * cell_type: Cell type should be defined for all cells.
#  (boolean value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#enable = false

# DEPRECATED:
# Name of the current cell.
#
# This value must be unique for each cell. The name of a cell is used as
# its id, so leaving this option unset or setting the same name for
# two or more cells may cause unexpected behaviour.
#
# Related options:
#
# * enabled: This option is meaningful only when cells service
#   is enabled
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#name = nova

# DEPRECATED:
# Cell capabilities.
#
# List of arbitrary key=value pairs defining capabilities of the
# current cell to be sent to the parent cells. These capabilities
# are intended to be used in cells scheduler filters/weighers.
#
# Possible values:
#
# * key=value pairs list, for example:
#   ``hypervisor=xenserver;kvm,os=linux;windows``
#  (list value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#capabilities = hypervisor=xenserver;kvm,os=linux;windows

# DEPRECATED:
# Call timeout.
#
# Cell messaging module waits for response(s) to be put into the
# eventlet queue. This option defines the seconds waited for
# response from a call to a cell.
#
# Possible values:
#
# * An integer, corresponding to the interval time in seconds.
#  (integer value)
# Minimum value: 0
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#call_timeout = 60

# DEPRECATED:
# Reserve percentage
#
# Percentage of cell capacity to hold in reserve, so the minimum
# amount of free resource is considered to be:
#
#     min_free = total * (reserve_percent / 100.0)
#
# This option affects both memory and disk utilization.
#
# The primary purpose of this reserve is to ensure some space is
# available for users who want to resize their instance to be larger.
# Note that currently once the capacity expands into this reserve
# space this option is ignored.
#
# Possible values:
#
# * An integer or float, corresponding to the percentage of cell capacity to
#   be held in reserve.
#  (floating point value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#reserve_percent = 10.0

# DEPRECATED:
# Type of cell.
#
# When cells feature is enabled the hosts in the OpenStack Compute
# cloud are partitioned into groups. Cells are configured as a tree.
# The top-level cell's cell_type must be set to ``api``. All other
# cells are defined as a ``compute cell`` by default.
#
# Related option:
#
# * quota_driver: Disable quota checking for the child cells.
#   (nova.quota.NoopQuotaDriver)
#  (string value)
# Possible values:
# api - <No description provided>
# compute - <No description provided>
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#cell_type = compute

# DEPRECATED:
# Mute child interval.
#
# Number of seconds without a capability and capacity update after
# which the child cell is to be treated as a mute cell. The mute
# child cell will then be weighed so as to highly recommend that it
# be skipped.
#
# Possible values:
#
# * An integer, corresponding to the interval time in seconds.
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#mute_child_interval = 300

# DEPRECATED:
# Bandwidth update interval.
#
# Seconds between bandwidth usage cache updates for cells.
#
# Possible values:
#
# * An integer, corresponding to the interval time in seconds.
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#bandwidth_update_interval = 600

# DEPRECATED:
# Instance update sync database limit.
#
# Number of instances to pull from the database at one time for
# a sync. If there are more instances to update the results will
# be paged through.
#
# Possible values:
#
# * An integer, corresponding to a number of instances.
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#instance_update_sync_database_limit = 100

# DEPRECATED:
# Mute weight multiplier.
#
# Multiplier used to weigh mute children. Mute children cells are
# recommended to be skipped so their weight is multiplied by this
# negative value.
#
# Possible values:
#
# * Negative numeric number
#  (floating point value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#mute_weight_multiplier = -10000.0

# DEPRECATED:
# Ram weight multiplier.
#
# Multiplier used for weighing ram. Negative numbers indicate that
# Compute should stack VMs on one host instead of spreading out new
# VMs to more hosts in the cell.
#
# Possible values:
#
# * Numeric multiplier
#  (floating point value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#ram_weight_multiplier = 10.0

# DEPRECATED:
# Offset weight multiplier
#
# Multiplier used to weigh offset weigher. Cells with higher
# weight_offsets in the DB will be preferred. The weight_offset
# is a property of a cell stored in the database. It can be used
# by a deployer to have scheduling decisions favor or disfavor
# cells based on the setting.
#
# Possible values:
#
# * Numeric multiplier
#  (floating point value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#offset_weight_multiplier = 1.0

# DEPRECATED:
# Instance updated at threshold
#
# Number of seconds after an instance was updated or deleted to
# continue to update cells. This option lets the cells manager attempt
# to sync only instances that have been updated recently, i.e., a
# threshold of 3600 means to only update instances that have been
# modified in the last hour.
#
# Possible values:
#
# * Threshold in seconds
#
# Related options:
#
# * This value is used with the ``instance_update_num_instances``
#   value in a periodic task run.
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#instance_updated_at_threshold = 3600

# DEPRECATED:
# Instance update num instances
#
# On every run of the periodic task, nova cells manager will attempt to
# sync instance_update_num_instances number of instances. When the
# manager gets the list of instances, it shuffles them so that multiple
# nova-cells services do not attempt to sync the same instances in
# lockstep.
#
# Possible values:
#
# * Positive integer number
#
# Related options:
#
# * This value is used with the ``instance_updated_at_threshold``
#   value in a periodic task run.
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#instance_update_num_instances = 1

# DEPRECATED:
# Maximum hop count
#
# When processing a targeted message, if the local cell is not the
# target, a route is defined between neighbouring cells, and the
# message is processed across the whole routing path. This option
# defines the maximum hop count until the target is reached.
#
# Possible values:
#
# * Positive integer value
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#max_hop_count = 10

# DEPRECATED:
# Cells scheduler.
#
# The class of the driver used by the cells scheduler. This should be
# the full Python path to the class to be used. If nothing is specified
# in this option, the CellsScheduler is used.
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#scheduler = nova.cells.scheduler.CellsScheduler

# DEPRECATED:
# RPC driver queue base.
#
# When sending a message to another cell by JSON-ifying the message
# and making an RPC cast to 'process_message', a base queue is used.
# This option defines the base queue name to be used when communicating
# between cells. Various topics by message type will be appended to this.
#
# Possible values:
#
# * The base queue name to be used when communicating between cells.
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#rpc_driver_queue_base = cells.intercell

# DEPRECATED:
# Scheduler filter classes.
#
# Filter classes the cells scheduler should use. An entry of
# "nova.cells.filters.all_filters" maps to all cells filters
# included with nova. As of the Mitaka release the following
# filter classes are available:
#
# Different cell filter: A scheduler hint of 'different_cell'
# with a value of a full cell name may be specified to route
# a build away from a particular cell.
#
# Image properties filter: Image metadata named
# 'hypervisor_version_requires' with a version specification
# may be specified to ensure the build goes to a cell which
# has hypervisors of the required version. If either the version
# requirement on the image or the hypervisor capability of the
# cell is not present, this filter returns without filtering out
# the cells.
#
# Target cell filter: A scheduler hint of 'target_cell' with a
# value of a full cell name may be specified to route a build to
# a particular cell. No error handling is done as there's no way
# to know whether the full path is valid.
#
# As an admin user, you can also add a filter that directs builds
# to a particular cell.
#
#  (list value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#scheduler_filter_classes = nova.cells.filters.all_filters

# DEPRECATED:
# Scheduler weight classes.
#
# Weigher classes the cells scheduler should use. An entry of
# "nova.cells.weights.all_weighers" maps to all cell weighers
# included with nova. As of the Mitaka release the following
# weight classes are available:
#
# mute_child: Downgrades the likelihood of child cells that haven't
# sent capacity or capability updates in a while being chosen for
# scheduling requests. Options include
# mute_weight_multiplier (multiplier for mute children; value
# should be negative).
#
# ram_by_instance_type: Select cells with the most RAM capacity
# for the instance type being requested. Because higher weights
# win, Compute returns the number of available units for the
# instance type requested. The ram_weight_multiplier option defaults
# to 10.0, which multiplies the weight by a factor of 10. Use a negative
# number to stack VMs on one host instead of spreading out new VMs
# to more hosts in the cell.
#
# weight_offset: Allows modifying the database to weight a particular
# cell. The highest weight will be the first cell to be scheduled for
# launching an instance. When the weight_offset of a cell is set to 0,
# it is unlikely to be picked, but it could be picked if other cells
# have a lower weight, for example if they're full. When the
# weight_offset is set to a very high value (for example,
# '999999999999999'), it is likely to be picked if no other cell has a
# higher weight.
#  (list value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#scheduler_weight_classes = nova.cells.weights.all_weighers

# DEPRECATED:
# Scheduler retries.
#
# How many retries when no cells are available. Specifies how many
# times the scheduler tries to launch a new instance when no cells
# are available.
#
# Possible values:
#
# * Positive integer value
#
# Related options:
#
# * This value is used with the ``scheduler_retry_delay`` value
#   while retrying to find a suitable cell.
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#scheduler_retries = 10

# DEPRECATED:
# Scheduler retry delay.
#
# Specifies the delay (in seconds) between scheduling retries when no
# cell can be found to place the new instance on. When the instance
# could not be scheduled to a cell after ``scheduler_retries`` in
# combination with ``scheduler_retry_delay``, then the scheduling
# of the instance failed.
#
# Possible values:
#
# * Time in seconds.
#
# Related options:
#
# * This value is used with the ``scheduler_retries`` value
#   while retrying to find a suitable cell.
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#scheduler_retry_delay = 2

# DEPRECATED:
# DB check interval.
#
# Cell state manager updates cell status for all cells from the DB
# only after this particular interval time is passed. Otherwise cached
# status are used. If this value is 0 or negative all cell status are
# updated from the DB whenever a state is needed.
#
# Possible values:
#
# * Interval time, in seconds.
#
#  (integer value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#db_check_interval = 60

# DEPRECATED:
# Optional cells configuration.
#
# Configuration file from which to read cells configuration. If given,
# overrides reading cells from the database.
#
# Cells store all inter-cell communication data, including user names
# and passwords, in the database. Because the cells data is not updated
# very frequently, use this option to specify a JSON file to store
# cells data. With this configuration, the database is no longer
# consulted when reloading the cells data. The file must have columns
# present in the Cell model (excluding common database fields and the
# id column). You must specify the queue connection information through
# a transport_url field, instead of username, password, and so on.
#
# The transport_url has the following form:
# rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
#
# Possible values:
#
# The scheme can be either qpid or rabbit, the following sample shows
# this optional configuration::
#
#     {
#         "parent": {
#             "name": "parent",
#             "api_url": "http://api.example.com:8774",
#             "transport_url": "rabbit://rabbit.example.com",
#             "weight_offset": 0.0,
#             "weight_scale": 1.0,
#             "is_parent": true
#         },
#         "cell1": {
#             "name": "cell1",
#             "api_url": "http://api.example.com:8774",
#             "transport_url": "rabbit://rabbit1.example.com",
#             "weight_offset": 0.0,
#             "weight_scale": 1.0,
#             "is_parent": false
#         },
#         "cell2": {
#             "name": "cell2",
#             "api_url": "http://api.example.com:8774",
#             "transport_url": "rabbit://rabbit2.example.com",
#             "weight_offset": 0.0,
#             "weight_scale": 1.0,
#             "is_parent": false
#         }
#     }
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason: Cells v1 is being replaced with Cells v2.
#cells_config = <None>


[cinder]

#
# From nova.conf
#

#
# Info to match when looking for cinder in the service catalog.
#
# Possible values:
#
# * Format is separated values of the form:
#   <service_type>:<service_name>:<endpoint_type>
#
# Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens
# release.
#
# Related options:
#
# * endpoint_template - Setting this option will override catalog_info
#  (string value)
#catalog_info = volumev3:cinderv3:publicURL

#
# If this option is set then it will override service catalog lookup with
# this template for cinder endpoint
#
# Possible values:
#
# * URL for cinder endpoint API
#   e.g. http://localhost:8776/v3/%(project_id)s
#
# Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens
# release.
#
# Related options:
#
# * catalog_info - If endpoint_template is not set, catalog_info will be used.
#  (string value)
#endpoint_template = <None>

#
# Region name of this node. This is used when picking the URL in the service
# catalog.
#
# Possible values:
#
# * Any string representing region name
#  (string value)
#os_region_name = <None>

#
# Number of times cinderclient should retry on any failed http call.
# 0 means connection is attempted only once. Setting it to any positive
# integer means that on failure the connection is retried that many times,
# e.g. setting it to 3 means the total attempts to connect will be 4.
#
# Possible values:
#
# * Any integer value. 0 means connection is attempted only once
#  (integer value)
# Minimum value: 0
#http_retries = 3

#
# Allow attach between instance and volume in different availability zones.
#
# If False, volumes attached to an instance must be in the same availability
# zone in Cinder as the instance availability zone in Nova.
# This also means care should be taken when booting an instance from a volume
# where the source is not "volume", because Nova will attempt to create a
# volume using the same availability zone as the one assigned to the instance.
# If that AZ is not in Cinder (or allow_availability_zone_fallback=False in
# cinder.conf), the volume create request will fail and the instance will fail
# the build request.
# By default there is no availability zone restriction on volume attach.
#  (boolean value)
#cross_az_attach = true

# PEM encoded Certificate Authority to use when verifying HTTPs connections.
# (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# Timeout value for http requests (integer value)
#timeout = <None>

# Collect per-API call timing information. (boolean value)
#collect_timing = false

# Log requests to multiple loggers. (boolean value)
#split_loggers = false

# Authentication type to load (string value)
# Deprecated group/name - [cinder]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>

# Authentication URL (string value)
#auth_url = <None>

# Scope for system operations (string value)
#system_scope = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Project ID to scope to (string value)
#project_id = <None>

# Project name to scope to (string value)
#project_name = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# Trust ID (string value)
#trust_id = <None>

# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>

# Optional domain name to use with v3 API and v2 parameters. It will be used for
# both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
#default_domain_name = <None>

# User ID (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [cinder]/user_name
#username = <None>

# User's domain id (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User's password (string value)
#password = <None>

# Tenant ID (string value)
#tenant_id = <None>

# Tenant Name (string value)
#tenant_name = <None>
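#
# A minimal Keystone password authentication block for cinder, using the
# options listed above (URL, names and password are illustrative)::
#
#     [cinder]
#     auth_type = password
#     auth_url = http://controller/identity/v3
#     username = nova
#     password = secret
#     project_name = service
#     user_domain_name = Default
#     project_domain_name = Default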


[compute]

#
# From nova.conf
#

#
# Enables reporting of build failures to the scheduler.
#
# Any nonzero value will enable sending build failure statistics to the
# scheduler for use by the BuildFailureWeigher.
#
# Possible values:
#
# * Any positive integer enables reporting build failures.
# * Zero to disable reporting build failures.
#
# Related options:
#
# * [filter_scheduler]/build_failure_weight_multiplier
#
#  (integer value)
#consecutive_build_service_disable_threshold = 10
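#
# For example, reporting of build failures can be disabled entirely
# (illustrative)::
#
#     consecutive_build_service_disable_threshold = 0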

#
# Time to wait in seconds before resending an ACPI shutdown signal to
# instances.
#
# The overall time to wait is set by ``shutdown_timeout``.
#
# Possible values:
#
# * Any integer greater than 0 in seconds
#
# Related options:
#
# * ``shutdown_timeout``
#  (integer value)
# Minimum value: 1
#shutdown_retry_interval = 10

#
# Interval for updating nova-compute-side cache of the compute node resource
# provider's aggregates and traits info.
#
# This option specifies the number of seconds between attempts to update a
# provider's aggregates and traits information in the local cache of the compute
# node.
#
# A value of zero disables cache refresh completely.
#
# Possible values:
#
# * Any positive integer in seconds, or zero to disable refresh.
#  (integer value)
# Minimum value: 0
# Note: This option can be changed without restarting.
#resource_provider_association_refresh = 300

#
# Defines which physical CPUs (pCPUs) will be used for best-effort guest vCPU
# resources.
#
# Currently only used by the libvirt driver to place guest emulator threads
# when ``hw:emulator_threads_policy=share`` is set.
#
# ::
#     cpu_shared_set = "4-12,^8,15"
#  (string value)
#cpu_shared_set = <None>

#
# Determine if the source compute host should wait for a ``network-vif-plugged``
# event from the (neutron) networking service before starting the actual
# transfer of the guest to the destination compute host.
#
# Note that this option is read on the destination host of a live migration.
# If you set this option the same on all of your compute hosts, which you should
# do if you use the same networking backend universally, you do not have to
# worry about this.
#
# Before starting the transfer of the guest, some setup occurs on the
# destination compute host, including plugging virtual interfaces. Depending
# on the networking backend **on the destination host**, a
# ``network-vif-plugged`` event may be triggered and then received on the
# source compute host, and the source compute can wait for that event to
# ensure networking is set up on the destination host before starting the
# guest transfer in the hypervisor.
#
# By default, this is False for two reasons:
#
# 1. Backward compatibility: deployments should test this out and ensure it
#    works for them before enabling it.
#
# 2. The compute service cannot reliably determine which types of virtual
#    interfaces (``port.binding:vif_type``) will send ``network-vif-plugged``
#    events without an accompanying port ``binding:host_id`` change.
#    Open vSwitch and linuxbridge should be OK, but OpenDaylight is at least
#    one known backend that will not currently work in this case, see bug
#    https://launchpad.net/bugs/1755890 for more details.
#
# Possible values:
#
# * True: wait for ``network-vif-plugged`` events before starting guest transfer
# * False: do not wait for ``network-vif-plugged`` events before starting guest
#   transfer (this is how things have always worked before this option
#   was introduced)
#
# Related options:
#
# * [DEFAULT]/vif_plugging_is_fatal: if ``live_migration_wait_for_vif_plug`` is
#   True and ``vif_plugging_timeout`` is greater than 0, and a timeout is
#   reached, the live migration process will fail with an error but the guest
#   transfer will not have started to the destination host
# * [DEFAULT]/vif_plugging_timeout: if ``live_migration_wait_for_vif_plug`` is
#   True, this controls the amount of time to wait before timing out and either
#   failing if ``vif_plugging_is_fatal`` is True, or simply continuing with the
#   live migration
#  (boolean value)
#live_migration_wait_for_vif_plug = false
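#
# For example, with a backend such as Open vSwitch that reliably sends
# ``network-vif-plugged`` events, the wait can be enabled together with the
# related [DEFAULT] options (illustrative values)::
#
#     [compute]
#     live_migration_wait_for_vif_plug = true
#
#     [DEFAULT]
#     vif_plugging_is_fatal = true
#     vif_plugging_timeout = 300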


[conductor]
#
# Options under this group are used to define Conductor's communication,
# which manager should act as a proxy between computes and the database,
# and finally, how many worker processes will be used.

#
# From nova.conf
#

#
# Number of workers for OpenStack Conductor service. The default will be the
# number of CPUs available.
#  (integer value)
#workers = <None>


[console]
#
# Options under this group allow tuning the configuration of the console
# proxy service.
#
# Note: the configuration of every compute contains a ``console_host``
# option, which selects the console proxy service to connect to.

#
# From nova.conf
#

#
# Adds a list of allowed origins to the console websocket proxy, to allow
# connections from other origin hostnames.
# The websocket proxy matches the host header with the origin header to
# prevent cross-site requests. This list specifies which values, other than
# the host, are allowed in the origin header.
#
# Possible values:
#
# * A list where each element is an allowed origin hostname, else an empty list
#  (list value)
# Deprecated group/name - [DEFAULT]/console_allowed_origins
#allowed_origins =


[consoleauth]

#
# From nova.conf
#

#
# The lifetime of a console auth token (in seconds).
#
# A console auth token is used in authorizing console access for a user.
# Once the auth token time to live count has elapsed, the token is
# considered expired.  Expired tokens are then deleted.
#
# Related options:
#
# * ``[workarounds]/enable_consoleauth``
#  (integer value)
# Minimum value: 0
# Deprecated group/name - [DEFAULT]/console_token_ttl
#token_ttl = 600


[devices]

#
# From nova.conf
#

#
# The vGPU types enabled in the compute node.
#
# Some pGPUs (e.g. NVIDIA GRID K1) support different vGPU types. Users can use
# this option to specify a list of enabled vGPU types that may be assigned to a
# guest instance. Note, however, that Nova only supports a single type in the
# Queens release. If more than one vGPU type is specified (as a comma-separated
# list), only the first one will be used. For example::
#
#     [devices]
#     enabled_vgpu_types = GRID K100,Intel GVT-g,MxGPU.2,nvidia-11
#  (list value)
#enabled_vgpu_types =


[ephemeral_storage_encryption]

#
# From nova.conf
#

#
# Enables/disables LVM ephemeral storage encryption.
#  (boolean value)
#enabled = false

#
# Cipher-mode string to be used.
#
# The cipher and mode to be used to encrypt ephemeral storage. The set of
# cipher-mode combinations available depends on kernel support. According
# to the dm-crypt documentation, the cipher is expected to be in the format:
# "<cipher>-<chainmode>-<ivmode>".
#
# Possible values:
#
# * Any crypto option listed in ``/proc/crypto``.
#  (string value)
#cipher = aes-xts-plain64

#
# Encryption key length in bits.
#
# The bit length of the encryption key to be used to encrypt ephemeral storage.
# In XTS mode only half of the bits are used for the encryption key.
#  (integer value)
# Minimum value: 1
#key_size = 512


[filter_scheduler]

#
# From nova.conf
#

#
# Size of subset of best hosts selected by scheduler.
#
# New instances will be scheduled on a host chosen randomly from a subset of the
# N best hosts, where N is the value set by this option.
#
# Setting this to a value greater than 1 will reduce the chance that multiple
# scheduler processes handling similar requests will select the same host,
# creating a potential race condition. By selecting a host randomly from the N
# hosts that best fit the request, the chance of a conflict is reduced. However,
# the higher you set this value, the less optimal the chosen host may be for a
# given request.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect.
#
# Possible values:
#
# * An integer, where the integer corresponds to the size of a host subset. Any
#   integer is valid, although any value less than 1 will be treated as 1
#  (integer value)
# Minimum value: 1
# Deprecated group/name - [DEFAULT]/scheduler_host_subset_size
#host_subset_size = 1

#
# The number of instances that can be actively performing IO on a host.
#
# Instances performing IO include those in the following states: build, resize,
# snapshot, migrate, rescue, unshelve.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect. Also note that this setting
# only affects scheduling if the 'io_ops_filter' filter is enabled.
#
# Possible values:
#
# * An integer, where the integer corresponds to the max number of instances
#   that can be actively performing IO on any given host.
#  (integer value)
#max_io_ops_per_host = 8

#
# Maximum number of instances that can be active on a host.
#
# If you need to limit the number of instances on any given host, set this
# option to the maximum number of instances you want to allow. The
# NumInstancesFilter and AggregateNumInstancesFilter will reject any host that
# has at least as many instances as this option's value.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect. Also note that this setting
# only affects scheduling if the 'NumInstancesFilter' or
# 'AggregateNumInstancesFilter' filter is enabled.
#
# Possible values:
#
# * An integer, where the integer corresponds to the max instances that can be
#   scheduled on a host.
#  (integer value)
# Minimum value: 1
#max_instances_per_host = 50

#
# Enable querying of individual hosts for instance information.
#
# The scheduler may need information about the instances on a host in order to
# evaluate its filters and weighers. The most common need for this information
# is for the (anti-)affinity filters, which need to choose a host based on the
# instances already running on a host.
#
# If the configured filters and weighers do not need this information, disabling
# this option will improve performance. It may also be disabled when the
# tracking overhead proves too heavy, although this will cause classes requiring
# host usage data to query the database on each request instead.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect.
#
# NOTE: In a multi-cell (v2) setup where the cell MQ is separated from the
# top-level, computes cannot directly communicate with the scheduler. Thus,
# this option cannot be enabled in that scenario. See also the
# [workarounds]/disable_group_policy_check_upcall option.
#  (boolean value)
# Deprecated group/name - [DEFAULT]/scheduler_tracks_instance_changes
#track_instance_changes = true

#
# Filters that the scheduler can use.
#
# An unordered list of the filter classes the nova scheduler may apply. Only
# the filters specified in the 'enabled_filters' option will be used, but
# any filter appearing in that option must also be included in this list.
#
# By default, this is set to all filters that are included with nova.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect.
#
# Possible values:
#
# * A list of zero or more strings, where each string corresponds to the name of
#   a filter that may be used for selecting a host
#
# Related options:
#
# * enabled_filters
#  (multi valued)
# Deprecated group/name - [DEFAULT]/scheduler_available_filters
#available_filters = nova.scheduler.filters.all_filters

#
# Filters that the scheduler will use.
#
# An ordered list of filter class names that will be used for filtering
# hosts. These filters will be applied in the order they are listed so
# place your most restrictive filters first to make the filtering process more
# efficient.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect.
#
# Possible values:
#
# * A list of zero or more strings, where each string corresponds to the name of
#   a filter to be used for selecting a host
#
# Related options:
#
# * All of the filters in this option *must* be present in the
#   'available_filters' option, or a SchedulerHostFilterNotFound
#   exception will be raised.
#  (list value)
# Deprecated group/name - [DEFAULT]/scheduler_default_filters
#enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
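
# A hypothetical minimal selection, ordered with the most restrictive
# filters first (any filter named here must also appear in the
# 'available_filters' option):
#
#     [filter_scheduler]
#     enabled_filters = AvailabilityZoneFilter,ComputeFilter,ImagePropertiesFilter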

#
# Weighers that the scheduler will use.
#
# Only hosts which pass the filters are weighed. The weight for any host starts
# at 0, and the weighers order these hosts by adding to or subtracting from the
# weight assigned by the previous weigher. Weights may become negative. An
# instance will be scheduled to one of the N most-weighted hosts, where N is
# 'scheduler_host_subset_size'.
#
# By default, this is set to all weighers that are included with Nova.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect.
#
# Possible values:
#
# * A list of zero or more strings, where each string corresponds to the name of
#   a weigher that will be used for selecting a host
#  (list value)
# Deprecated group/name - [DEFAULT]/scheduler_weight_classes
#weight_classes = nova.scheduler.weights.all_weighers

#
# RAM weight multiplier ratio.
#
# This option determines how hosts with more or less available RAM are weighed. A
# positive value will result in the scheduler preferring hosts with more
# available RAM, and a negative number will result in the scheduler preferring
# hosts with less available RAM. Another way to look at it is that positive
# values for this option will tend to spread instances across many hosts, while
# negative values will tend to fill up (stack) hosts as much as possible before
# scheduling to a less-used host. The absolute value, whether positive or
# negative, controls how strong the RAM weigher is relative to other weighers.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect. Also note that this setting
# only affects scheduling if the 'ram' weigher is enabled.
#
# Possible values:
#
# * An integer or float value, where the value corresponds to the multiplier
#   ratio for this weigher.
#  (floating point value)
#ram_weight_multiplier = 1.0
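
# For example (illustrative value), to stack instances onto the
# most-utilized hosts first instead of spreading them:
#
#     [filter_scheduler]
#     ram_weight_multiplier = -1.0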

#
# CPU weight multiplier ratio.
#
# Multiplier used for weighting free vCPUs. Negative numbers indicate stacking
# rather than spreading.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect. Also note that this setting
# only affects scheduling if the 'cpu' weigher is enabled.
#
# Possible values:
#
# * An integer or float value, where the value corresponds to the multiplier
#   ratio for this weigher.
#
# Related options:
#
# * ``filter_scheduler.weight_classes``: This weigher must be added to list of
#   enabled weight classes if the ``weight_classes`` setting is set to a
#   non-default value.
#  (floating point value)
#cpu_weight_multiplier = 1.0

#
# Disk weight multiplier ratio.
#
# Multiplier used for weighing free disk space. Negative numbers indicate
# stacking rather than spreading.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect. Also note that this setting
# only affects scheduling if the 'disk' weigher is enabled.
#
# Possible values:
#
# * An integer or float value, where the value corresponds to the multiplier
#   ratio for this weigher.
#  (floating point value)
#disk_weight_multiplier = 1.0

#
# IO operations weight multiplier ratio.
#
# This option determines how hosts with differing workloads are weighed. Negative
# values, such as the default, will result in the scheduler preferring hosts with
# lighter workloads whereas positive values will prefer hosts with heavier
# workloads. Another way to look at it is that positive values for this option
# will tend to schedule instances onto hosts that are already busy, while
# negative values will tend to distribute the workload across more hosts. The
# absolute value, whether positive or negative, controls how strong the io_ops
# weigher is relative to other weighers.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect. Also note that this setting
# only affects scheduling if the 'io_ops' weigher is enabled.
#
# Possible values:
#
# * An integer or float value, where the value corresponds to the multiplier
#   ratio for this weigher.
#  (floating point value)
#io_ops_weight_multiplier = -1.0

#
# PCI device affinity weight multiplier.
#
# The PCI device affinity weigher computes a weighting based on the number of
# PCI devices on the host and the number of PCI devices requested by the
# instance. The ``NUMATopologyFilter`` filter must be enabled for this to have
# any significance. For more information, refer to the filter documentation:
#
#     https://docs.openstack.org/nova/latest/user/filter-scheduler.html
#
# Possible values:
#
# * A positive integer or float value, where the value corresponds to the
#   multiplier ratio for this weigher.
#  (floating point value)
# Minimum value: 0
#pci_weight_multiplier = 1.0

#
# Multiplier used for weighing hosts for group soft-affinity.
#
# Possible values:
#
# * An integer or float value, where the value corresponds to weight multiplier
#   for hosts with group soft affinity. Only positive values are meaningful, as
#   negative values would make this behave as a soft anti-affinity weigher.
#  (floating point value)
#soft_affinity_weight_multiplier = 1.0

#
# Multiplier used for weighing hosts for group soft-anti-affinity.
#
# Possible values:
#
# * An integer or float value, where the value corresponds to weight multiplier
#   for hosts with group soft anti-affinity. Only positive values are
#   meaningful, as negative values would make this behave as a soft affinity
#   weigher.
#  (floating point value)
#soft_anti_affinity_weight_multiplier = 1.0

#
# Multiplier used for weighing hosts that have had recent build failures.
#
# This option determines how much weight is placed on a compute node with
# recent build failures. Build failures may indicate a failing, misconfigured,
# or otherwise ailing compute node, and avoiding it during scheduling may be
# beneficial. The weight is inversely proportional to the number of recent
# build failures the compute node has experienced. This value should be
# set to a high value to offset the weight given by other enabled weighers
# due to available resources. To disable weighing compute hosts by the
# number of recent failures, set this to zero.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect.
#
# Possible values:
#
# * An integer or float value, where the value corresponds to the multiplier
#   ratio for this weigher.
#
# Related options:
#
# * [compute]/consecutive_build_service_disable_threshold - Must be nonzero
#   for a compute to report data considered by this weigher.
#  (floating point value)
#build_failure_weight_multiplier = 1000000.0

#
# Enable spreading the instances between hosts with the same best weight.
#
# Enabling it is beneficial for cases when host_subset_size is 1
# (default), but there is a large number of hosts with the same maximal weight.
# This scenario is common in Ironic deployments, where many baremetal nodes
# typically return identical weights to the scheduler.
# In such cases, enabling this option will reduce contention and the chance of
# rescheduling events.
# At the same time, it will make the instance packing (even in the unweighted
# case) less dense.
#  (boolean value)
#shuffle_best_same_weighed_hosts = false

#
# The default architecture to be used when using the image properties filter.
#
# When using the ImagePropertiesFilter, it is possible that you want to define
# a default architecture to make the user experience easier and avoid having
# something like x86_64 images landing on aarch64 compute nodes because the
# user did not specify the 'hw_architecture' property in Glance.
#
# Possible values:
#
# * CPU Architectures such as x86_64, aarch64, s390x.
#  (string value)
# Possible values:
# alpha - <No description provided>
# armv6 - <No description provided>
# armv7l - <No description provided>
# armv7b - <No description provided>
# aarch64 - <No description provided>
# cris - <No description provided>
# i686 - <No description provided>
# ia64 - <No description provided>
# lm32 - <No description provided>
# m68k - <No description provided>
# microblaze - <No description provided>
# microblazeel - <No description provided>
# mips - <No description provided>
# mipsel - <No description provided>
# mips64 - <No description provided>
# mips64el - <No description provided>
# openrisc - <No description provided>
# parisc - <No description provided>
# parisc64 - <No description provided>
# ppc - <No description provided>
# ppcle - <No description provided>
# ppc64 - <No description provided>
# ppc64le - <No description provided>
# ppcemb - <No description provided>
# s390 - <No description provided>
# s390x - <No description provided>
# sh4 - <No description provided>
# sh4eb - <No description provided>
# sparc - <No description provided>
# sparc64 - <No description provided>
# unicore32 - <No description provided>
# x86_64 - <No description provided>
# xtensa - <No description provided>
# xtensaeb - <No description provided>
#image_properties_default_architecture = <None>
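
# For example (illustrative value), to assume x86_64 for images that do not
# set the 'hw_architecture' property:
#
#     [filter_scheduler]
#     image_properties_default_architecture = x86_64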

#
# List of UUIDs for images that can only be run on certain hosts.
#
# If there is a need to restrict some images to only run on certain designated
# hosts, list those image UUIDs here.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect. Also note that this setting
# only affects scheduling if the 'IsolatedHostsFilter' filter is enabled.
#
# Possible values:
#
# * A list of UUID strings, where each string corresponds to the UUID of an
#   image
#
# Related options:
#
# * scheduler/isolated_hosts
# * scheduler/restrict_isolated_hosts_to_isolated_images
#  (list value)
#isolated_images =

#
# List of hosts that can only run certain images.
#
# If there is a need to restrict some images to only run on certain designated
# hosts, list those host names here.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect. Also note that this setting
# only affects scheduling if the 'IsolatedHostsFilter' filter is enabled.
#
# Possible values:
#
# * A list of strings, where each string corresponds to the name of a host
#
# Related options:
#
# * scheduler/isolated_images
# * scheduler/restrict_isolated_hosts_to_isolated_images
#  (list value)
#isolated_hosts =

#
# Prevent non-isolated images from being built on isolated hosts.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect. Also note that this setting
# only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. Even
# then, this option doesn't affect the behavior of requests for isolated images,
# which will *always* be restricted to isolated hosts.
#
# Related options:
#
# * scheduler/isolated_images
# * scheduler/isolated_hosts
#  (boolean value)
#restrict_isolated_hosts_to_isolated_images = true
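
# A minimal sketch combining the three isolation options (the image UUID
# and host names are hypothetical):
#
#     [filter_scheduler]
#     isolated_images = 3f7a4b2e-0000-0000-0000-000000000000
#     isolated_hosts = isolated-host-1,isolated-host-2
#     restrict_isolated_hosts_to_isolated_images = true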

#
# Image property namespace for use in the host aggregate.
#
# Images and hosts can be configured so that certain images can only be scheduled
# to hosts in a particular aggregate. This is done with metadata values set on
# the host aggregate that are identified by beginning with the value of this
# option. If the host is part of an aggregate with such a metadata key, the image
# in the request spec must have the value of that metadata in its properties in
# order for the scheduler to consider the host as acceptable.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect. Also note that this setting
# only affects scheduling if the 'aggregate_image_properties_isolation' filter
# is enabled.
#
# Possible values:
#
# * A string, where the string corresponds to an image property namespace
#
# Related options:
#
# * aggregate_image_properties_isolation_separator
#  (string value)
#aggregate_image_properties_isolation_namespace = <None>

#
# Separator character(s) for image property namespace and name.
#
# When using the aggregate_image_properties_isolation filter, the relevant
# metadata keys are prefixed with the namespace defined in the
# aggregate_image_properties_isolation_namespace configuration option plus a
# separator. This option defines the separator to be used.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect. Also note that this setting
# only affects scheduling if the 'aggregate_image_properties_isolation' filter
# is enabled.
#
# Possible values:
#
# * A string, where the string corresponds to an image property namespace
#   separator character
#
# Related options:
#
# * aggregate_image_properties_isolation_namespace
#  (string value)
#aggregate_image_properties_isolation_separator = .
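
# As a sketch (hypothetical namespace): with the settings below, a host
# aggregate metadata key such as ``os_distro.os_type`` would be matched
# against the ``os_type`` property of the requested image:
#
#     [filter_scheduler]
#     aggregate_image_properties_isolation_namespace = os_distro
#     aggregate_image_properties_isolation_separator = .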


[glance]
# Configuration options for the Image service

#
# From nova.conf
#

#
# List of glance api servers endpoints available to nova.
#
# https is used for ssl-based glance api servers.
#
# NOTE: The preferred mechanism for endpoint discovery is via keystoneauth1
# loading options. Only use api_servers if you need multiple endpoints and are
# unable to use a load balancer for some reason.
#
# Possible values:
#
# * A list of fully qualified URLs of the form "scheme://hostname:port[/path]"
#   (e.g. "http://10.0.1.0:9292" or "https://my.glance.server/image").
#  (list value)
#api_servers = <None>

#
# Enable glance operation retries.
#
# Specifies the number of retries when uploading / downloading
# an image to / from glance. 0 means no retries.
#  (integer value)
# Minimum value: 0
#num_retries = 0

# DEPRECATED:
# List of url schemes that can be directly accessed.
#
# This option specifies a list of url schemes for which images can be
# downloaded directly via the direct_url. The direct_url can be fetched
# from image metadata and used by nova to get the image more efficiently.
# nova-compute could benefit from this by invoking a copy when it has
# access to the same file system as glance.
#
# Possible values:
#
# * [file], Empty list (default)
#  (list value)
# This option is deprecated for removal since 17.0.0.
# Its value may be silently ignored in the future.
# Reason:
# This was originally added for the 'nova.image.download.file' FileTransfer
# extension which was removed in the 16.0.0 Pike release. The
# 'nova.image.download.modules' extension point is not maintained
# and there is no indication of its use in production clouds.
#allowed_direct_url_schemes =

#
# Enable image signature verification.
#
# nova uses the image signature metadata from glance and verifies the signature
# of a signed image while downloading that image. If the image signature cannot
# be verified or if the image signature metadata is either incomplete or
# unavailable, then nova will not boot the image and instead will place the
# instance into an error state. This provides end users with stronger assurances
# of the integrity of the image data they are using to create servers.
#
# Related options:
#
# * The options in the `key_manager` group, as the key_manager is used
#   for the signature validation.
# * Both enable_certificate_validation and default_trusted_certificate_ids
#   below depend on this option being enabled.
#  (boolean value)
#verify_glance_signatures = false

# DEPRECATED:
# Enable certificate validation for image signature verification.
#
# During image signature verification nova will first verify the validity of the
# image's signing certificate using the set of trusted certificates associated
# with the instance. If certificate validation fails, signature verification
# will not be performed and the instance will be placed into an error state.
# This provides end users with stronger assurances that the image data is unmodified
# and trustworthy. If left disabled, image signature verification can still
# occur but the end user will not have any assurance that the signing
# certificate used to generate the image signature is still trustworthy.
#
# Related options:
#
# * This option only takes effect if verify_glance_signatures is enabled.
# * The value of default_trusted_certificate_ids may be used when this option
#   is enabled.
#  (boolean value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# This option is intended to ease the transition for deployments leveraging
# image signature verification. The intended state long-term is for signature
# verification and certificate validation to always happen together.
#enable_certificate_validation = false

#
# List of certificate IDs for certificates that should be trusted.
#
# May be used as a default list of trusted certificate IDs for certificate
# validation. The value of this option will be ignored if the user provides a
# list of trusted certificate IDs with an instance API request. The value of
# this option will be persisted with the instance data if signature verification
# and certificate validation are enabled and if the user did not provide an
# alternative list. If left empty when certificate validation is enabled, the
# user must provide a list of trusted certificate IDs, otherwise certificate
# validation will fail.
#
# Related options:
#
# * The value of this option may be used if both verify_glance_signatures and
#   enable_certificate_validation are enabled.
#  (list value)
#default_trusted_certificate_ids =
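
# A minimal sketch of enforcing signed images (note that signature
# validation relies on the options in the ``key_manager`` group, and that
# the certificate IDs below are hypothetical placeholders):
#
#     [glance]
#     verify_glance_signatures = true
#     enable_certificate_validation = true
#     default_trusted_certificate_ids = TRUSTED_CERT_ID_1,TRUSTED_CERT_ID_2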

# Enable or disable debug logging with glanceclient. (boolean value)
#debug = false

# PEM encoded Certificate Authority to use when verifying HTTPs connections.
# (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# Timeout value for http requests (integer value)
#timeout = <None>

# Collect per-API call timing information. (boolean value)
#collect_timing = false

# Log requests to multiple loggers. (boolean value)
#split_loggers = false

# The default service_type for endpoint URL discovery. (string value)
#service_type = image

# The default service_name for endpoint URL discovery. (string value)
#service_name = <None>

# List of interfaces, in order of preference, for endpoint URL. (list value)
#valid_interfaces = internal,public

# The default region_name for endpoint URL discovery. (string value)
#region_name = <None>

# Always use this endpoint URL for requests for this client. NOTE: The
# unversioned endpoint should be specified here; to request a particular API
# version, use the `version`, `min-version`, and/or `max-version` options.
# (string value)
#endpoint_override = <None>


[guestfs]
#
# libguestfs is a set of tools for accessing and modifying virtual
# machine (VM) disk images. You can use this for viewing and editing
# files inside guests, scripting changes to VMs, monitoring disk
# used/free statistics, creating guests, P2V, V2V, performing backups,
# cloning VMs, building VMs, formatting disks and resizing disks.

#
# From nova.conf
#

#
# Enables/disables guestfs logging.
#
# This configures guestfs to produce debug messages and push them to the
# OpenStack logging system. When set to True, it traces libguestfs API calls
# and enables verbose debug messages. To use this feature, the
# "libguestfs" package must be installed.
#
# Related options:
#
# Since libguestfs accesses and modifies VMs managed by libvirt, the options
# below should be set to grant access to those VMs.
#
# * ``libvirt.inject_key``
# * ``libvirt.inject_partition``
# * ``libvirt.inject_password``
#  (boolean value)
#debug = false


[hyperv]
#
# The hyperv feature allows you to configure the Hyper-V hypervisor
# driver to be used within an OpenStack deployment.

#
# From compute_hyperv
#

# Number of seconds to wait for an instance to be evacuated during host
# maintenance. (integer value)
#evacuate_task_state_timeout = 600

# Automatically shutdown instances when the host is shutdown. By default,
# instances will be saved, which adds a disk overhead. Changing this option will
# not affect existing instances. (boolean value)
#instance_automatic_shutdown = false

# Number of seconds to wait for an instance to be live migrated (Only applies to
# clustered instances for the moment). (integer value)
# Minimum value: 0
#instance_live_migration_timeout = 300

# The maximum number of failovers that can occur in the failover_period
# timeframe per VM. Once a VM's failover count reaches this number, the VM will
# simply be left in a Failed state. (integer value)
# Minimum value: 1
#max_failover_count = 1

# The number of hours in which the max_failover_count number of failovers can
# occur. (integer value)
# Minimum value: 1
#failover_period = 6

# Allow the VM to fail back to its original host once it is available. (boolean
# value)
#auto_failback = true

# If this option is enabled, instance destroy requests are executed immediately,
# regardless of instance pending tasks. In some situations, the destroy
# operation will fail (e.g. due to file locks), requiring subsequent retries.
# (boolean value)
#force_destroy_instances = false

#
# From nova.conf
#

#
# Dynamic memory ratio
#
# Enables dynamic memory allocation (ballooning) when set to a value
# greater than 1. The value expresses the ratio between the total RAM
# assigned to an instance and its startup RAM amount. For example, a
# ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of
# RAM allocated at startup.
#
# Possible values:
#
# * 1.0: Disables dynamic memory allocation (Default).
# * Float values greater than 1.0: Enables allocation of total implied
#   RAM divided by this value for startup.
#  (floating point value)
#dynamic_memory_ratio = 1.0
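
# For example (illustrative value), to start instances with half of their
# assigned RAM and let Hyper-V dynamically allocate the rest as needed:
#
#     [hyperv]
#     dynamic_memory_ratio = 2.0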

#
# Enable instance metrics collection
#
# Enables metrics collection for an instance by using Hyper-V's
# metric APIs. Collected data can be retrieved by other apps and
# services, e.g.: Ceilometer.
#  (boolean value)
#enable_instance_metrics_collection = false

#
# Instances path share
#
# The name of a Windows share mapped to the "instances_path" dir
# and used by the resize feature to copy files to the target host.
# If left blank, an administrative share (hidden network share) will
# be used, looking for the same "instances_path" used locally.
#
# Possible values:
#
# * "": An administrative share will be used (Default).
# * Name of a Windows share.
#
# Related options:
#
# * "instances_path": The directory which will be used if this option
#   here is left blank.
#  (string value)
#instances_path_share =

#
# Limit CPU features
#
# This flag is needed to support live migration to hosts with
# different CPU features. It is checked during instance creation
# in order to limit the CPU features used by the instance.
#  (boolean value)
#limit_cpu_features = false

#
# Mounted disk query retry count
#
# The number of times to retry checking for a mounted disk.
# The query runs until the device can be found or the retry
# count is reached.
#
# Possible values:
#
# * Positive integer values. Values greater than 1 are recommended
#   (Default: 10).
#
# Related options:
#
# * Time interval between disk mount retries is declared with
#   "mounted_disk_query_retry_interval" option.
#  (integer value)
# Minimum value: 0
#mounted_disk_query_retry_count = 10

#
# Mounted disk query retry interval
#
# Interval between checks for a mounted disk, in seconds.
#
# Possible values:
#
# * Time in seconds (Default: 5).
#
# Related options:
#
# * This option is meaningful when the mounted_disk_query_retry_count
#   is greater than 1.
# * The retry loop runs with mounted_disk_query_retry_count and
#   mounted_disk_query_retry_interval configuration options.
#  (integer value)
# Minimum value: 0
#mounted_disk_query_retry_interval = 5

#
# Power state check timeframe
#
# The timeframe to be checked for instance power state changes.
# This option is used to fetch the state of the instance from Hyper-V
# through the WMI interface, within the specified timeframe.
#
# Possible values:
#
# * Timeframe in seconds (Default: 60).
#  (integer value)
# Minimum value: 0
#power_state_check_timeframe = 60

#
# Power state event polling interval
#
# Instance power state change event polling frequency. Sets the
# listener interval for power state events to the given value.
# This option enhances the internal lifecycle notifications of
# instances that reboot themselves. It is unlikely that an operator
# has to change this value.
#
# Possible values:
#
# * Time in seconds (Default: 2).
#  (integer value)
# Minimum value: 0
#power_state_event_polling_interval = 2

#
# qemu-img command
#
# qemu-img is required for some of the image related operations
# like converting between different image types. You can get it
# from here: (http://qemu.weilnetz.de/) or you can install the
# Cloudbase OpenStack Hyper-V Compute Driver
# (https://cloudbase.it/openstack-hyperv-driver/) which automatically
# sets the proper path for this config option. You can either give the
# full path of qemu-img.exe or set its path in the PATH environment
# variable and leave this option to the default value.
#
# Possible values:
#
# * Name of the qemu-img executable, in case it is in the same
#   directory as the nova-compute service or its path is in the
#   PATH environment variable (Default).
# * Path of qemu-img command (DRIVELETTER:\PATH\TO\QEMU-IMG\COMMAND).
#
# Related options:
#
# * If the config_drive_cdrom option is False, qemu-img will be used to
#   convert the ISO to a VHD, otherwise the configuration drive will
#   remain an ISO. To use configuration drive with Hyper-V, you must
#   set the mkisofs_cmd value to the full path to an mkisofs.exe
#   installation.
#  (string value)
#qemu_img_cmd = qemu-img.exe

#
# External virtual switch name
#
# The Hyper-V Virtual Switch is a software-based layer-2 Ethernet
# network switch that is available with the installation of the
# Hyper-V server role. The switch includes programmatically managed
# and extensible capabilities to connect virtual machines to both
# virtual networks and the physical network. In addition, Hyper-V
# Virtual Switch provides policy enforcement for security, isolation,
# and service levels. The vSwitch represented by this config option
# must be an external one (not internal or private).
#
# Possible values:
#
# * If not provided, the first of a list of available vswitches
#   is used. This list is queried using WQL.
# * Virtual switch name.
#  (string value)
#vswitch_name = <None>

#
# Wait soft reboot seconds
#
# Number of seconds to wait for instance to shut down after soft
# reboot request is made. We fall back to hard reboot if instance
# does not shutdown within this window.
#
# Possible values:
#
# * Time in seconds (Default: 60).
#  (integer value)
# Minimum value: 0
#wait_soft_reboot_seconds = 60

#
# Configuration drive cdrom
#
# OpenStack can be configured to write instance metadata to
# a configuration drive, which is then attached to the
# instance before it boots. The configuration drive can be
# attached as a disk drive (default) or as a CD drive.
#
# Possible values:
#
# * True: Attach the configuration drive image as a CD drive.
# * False: Attach the configuration drive image as a disk drive (Default).
#
# Related options:
#
# * This option is meaningful when the force_config_drive option is set to
#   'True', or when the REST API call to create an instance includes the
#   '--config-drive=True' flag.
# * config_drive_format option must be set to 'iso9660' in order to use
#   CD drive as the configuration drive image.
# * To use configuration drive with Hyper-V, you must set the
#   mkisofs_cmd value to the full path to an mkisofs.exe installation.
#   Additionally, you must set the qemu_img_cmd value to the full path
#   to a qemu-img installation.
# * You can configure the Compute service to always create a configuration
#   drive by setting the force_config_drive option to 'True'.
#  (boolean value)
#config_drive_cdrom = false

#
# Configuration drive inject password
#
# Enables setting the admin password in the configuration drive image.
#
# Related options:
#
# * This option is meaningful when used with other options that enable
#   configuration drive usage with Hyper-V, such as force_config_drive.
# * Currently, the only accepted config_drive_format is 'iso9660'.
#  (boolean value)
#config_drive_inject_password = false
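
# A minimal sketch of attaching the configuration drive as a CD drive (the
# mkisofs path is hypothetical; mkisofs_cmd and config_drive_format are
# [DEFAULT] options):
#
#     [DEFAULT]
#     config_drive_format = iso9660
#     mkisofs_cmd = C:\mkisofs\mkisofs.exe
#
#     [hyperv]
#     config_drive_cdrom = true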

#
# Volume attach retry count
#
# The number of times to retry attaching a volume. Volume attachment
# is retried until success or the given retry count is reached.
#
# Possible values:
#
# * Positive integer values (Default: 10).
#
# Related options:
#
# * Time interval between attachment attempts is declared with
#   volume_attach_retry_interval option.
#  (integer value)
# Minimum value: 0
#volume_attach_retry_count = 10

#
# Volume attach retry interval
#
# Interval between volume attachment attempts, in seconds.
#
# Possible values:
#
# * Time in seconds (Default: 5).
#
# Related options:
#
# * This option is meaningful when volume_attach_retry_count
#   is greater than 1.
# * The retry loop runs with volume_attach_retry_count and
#   volume_attach_retry_interval configuration options.
#  (integer value)
# Minimum value: 0
#volume_attach_retry_interval = 5

#
# Enable RemoteFX feature
#
# This requires at least one DirectX 11 capable graphics adapter for
# Windows / Hyper-V Server 2012 R2 or newer, and the RDS-Virtualization
# feature has to be enabled.
#
# Instances with RemoteFX can be requested with the following flavor
# extra specs:
#
# **os:resolution**. Guest VM screen resolution size. Acceptable values::
#
#     1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160
#
# ``3840x2160`` is only available on Windows / Hyper-V Server 2016.
#
# **os:monitors**. Guest VM number of monitors. Acceptable values::
#
#     [1, 4] - Windows / Hyper-V Server 2012 R2
#     [1, 8] - Windows / Hyper-V Server 2016
#
# **os:vram**. Guest VM VRAM amount. Only available on
# Windows / Hyper-V Server 2016. Acceptable values::
#
#     64, 128, 256, 512, 1024
#  (boolean value)
#enable_remotefx = false
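
# A minimal sketch: enable the feature here, then request it through
# flavor extra specs (illustrative values):
#
#     [hyperv]
#     enable_remotefx = true
#
# with a flavor carrying, for example:
#
#     os:resolution=1920x1200
#     os:monitors=2
#     os:vram=1024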

#
# Use multipath connections when attaching iSCSI or FC disks.
#
# This requires the Multipath IO Windows feature to be enabled. MPIO must be
# configured to claim such devices.
#  (boolean value)
#use_multipath_io = false

#
# List of iSCSI initiators that will be used for establishing iSCSI sessions.
#
# If none are specified, the Microsoft iSCSI initiator service will choose the
# initiator.
#  (list value)
#iscsi_initiator_list =


[ironic]
#
# Configuration options for Ironic driver (Bare Metal).
# If using the Ironic driver, the following options must be set (a minimal
# sample with hypothetical values is sketched below):
# * auth_type
# * auth_url
# * project_name
# * username
# * password
# * project_domain_id or project_domain_name
# * user_domain_id or user_domain_name
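
# A minimal sketch with hypothetical values, using the 'password' auth_type:
#
#     [ironic]
#     auth_type = password
#     auth_url = http://controller:5000/v3
#     project_name = service
#     username = ironic
#     password = IRONIC_PASSWORD
#     project_domain_name = Default
#     user_domain_name = Default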

#
# From nova.conf
#

# DEPRECATED: URL override for the Ironic API endpoint. (uri value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Endpoint lookup uses the service catalog via common keystoneauth1
# Adapter configuration options. In the current release, api_endpoint will
# override this behavior, but will be ignored and/or removed in a future
# release. To achieve the same result, use the endpoint_override option instead.
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#api_endpoint = http://ironic.example.org:6385/

#
# The number of times to retry when a request conflicts.
# If set to 0, only try once, no retries.
#
# Related options:
#
# * api_retry_interval
#  (integer value)
# Minimum value: 0
#api_max_retries = 60

#
# The number of seconds to wait before retrying the request.
#
# Related options:
#
# * api_max_retries
#  (integer value)
# Minimum value: 0
#api_retry_interval = 2

# Timeout (seconds) to wait for node serial console state changed. Set to 0 to
# disable timeout. (integer value)
# Minimum value: 0
#serial_console_state_timeout = 10

# PEM encoded Certificate Authority to use when verifying HTTPs connections.
# (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# Timeout value for http requests (integer value)
#timeout = <None>

# Collect per-API call timing information. (boolean value)
#collect_timing = false

# Log requests to multiple loggers. (boolean value)
#split_loggers = false

# Authentication type to load (string value)
# Deprecated group/name - [ironic]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>

# Authentication URL (string value)
#auth_url = <None>

# Scope for system operations (string value)
#system_scope = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Project ID to scope to (string value)
#project_id = <None>

# Project name to scope to (string value)
#project_name = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# Trust ID (string value)
#trust_id = <None>

# User ID (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [ironic]/user_name
#username = <None>

# User's domain id (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User's password (string value)
#password = <None>

# The default service_type for endpoint URL discovery. (string value)
#service_type = baremetal

# The default service_name for endpoint URL discovery. (string value)
#service_name = <None>

# List of interfaces, in order of preference, for endpoint URL. (list value)
#valid_interfaces = internal,public

# The default region_name for endpoint URL discovery. (string value)
#region_name = <None>

# Always use this endpoint URL for requests for this client. NOTE: The
# unversioned endpoint should be specified here; to request a particular API
# version, use the `version`, `min-version`, and/or `max-version` options.
# (string value)
# Deprecated group/name - [ironic]/api_endpoint
#endpoint_override = <None>


[key_manager]

#
# From nova.conf
#

#
# Fixed key returned by key manager, specified in hex.
#
# Possible values:
#
# * Empty string or a key in hex value
#  (string value)
#fixed_key = <None>

# Specify the key manager implementation. Options are "barbican" and "vault".
# Default is "barbican". Values previously set using [key_manager]/api_class
# will be supported for some time. (string value)
# Deprecated group/name - [key_manager]/api_class
#backend = barbican

# The type of authentication credential to create. Possible values are 'token',
# 'password', 'keystone_token', and 'keystone_password'. Required if no context
# is passed to the credential factory. (string value)
#auth_type = <None>

# Token for authentication. Required for 'token' and 'keystone_token' auth_type
# if no context is passed to the credential factory. (string value)
#token = <None>

# Username for authentication. Required for 'password' auth_type. Optional for
# the 'keystone_password' auth_type. (string value)
#username = <None>

# Password for authentication. Required for 'password' and 'keystone_password'
# auth_type. (string value)
#password = <None>

# Use this endpoint to connect to Keystone. (string value)
#auth_url = <None>

# User ID for authentication. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#user_id = <None>

# User's domain ID for authentication. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#user_domain_id = <None>

# User's domain name for authentication. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#user_domain_name = <None>

# Trust ID for trust scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#trust_id = <None>

# Domain ID for domain scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#domain_id = <None>

# Domain name for domain scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#domain_name = <None>

# Project ID for project scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_id = <None>

# Project name for project scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_name = <None>

# Project's domain ID for project. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_domain_id = <None>

# Project's domain name for project. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_domain_name = <None>

# Allow fetching a new token if the current one is going to expire. Optional for
# 'keystone_token' and 'keystone_password' auth_type. (boolean value)
#reauthenticate = true


[keystone]
# Configuration options for the identity service

#
# From nova.conf
#

# PEM encoded Certificate Authority to use when verifying HTTPs connections.
# (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# Timeout value for http requests (integer value)
#timeout = <None>

# Collect per-API call timing information. (boolean value)
#collect_timing = false

# Log requests to multiple loggers. (boolean value)
#split_loggers = false

# The default service_type for endpoint URL discovery. (string value)
#service_type = identity

# The default service_name for endpoint URL discovery. (string value)
#service_name = <None>

# List of interfaces, in order of preference, for endpoint URL. (list value)
#valid_interfaces = internal,public

# The default region_name for endpoint URL discovery. (string value)
#region_name = <None>

# Always use this endpoint URL for requests for this client. NOTE: The
# unversioned endpoint should be specified here; to request a particular API
# version, use the `version`, `min-version`, and/or `max-version` options.
# (string value)
#endpoint_override = <None>


[libvirt]
#
# Libvirt options allow the cloud administrator to configure the
# libvirt hypervisor driver to be used within an OpenStack deployment.
#
# Almost all of the libvirt config options are influenced by the ``virt_type``
# config option, which describes the virtualization type (or so called domain
# type) libvirt should use for specific features such as live migration and
# snapshots.

#
# From nova.conf
#

#
# The ID of the image to boot from to rescue data from a corrupted instance.
#
# If the rescue REST API operation doesn't provide an ID of an image to
# use, the image which is referenced by this ID is used. If this
# option is not set, the image from the instance is used.
#
# Possible values:
#
# * An ID of an image or nothing. If it points to an *Amazon Machine
#   Image* (AMI), consider setting the config options ``rescue_kernel_id``
#   and ``rescue_ramdisk_id`` too. If nothing is set, the image of the instance
#   is used.
#
# Related options:
#
# * ``rescue_kernel_id``: If the chosen rescue image allows the separate
#   definition of its kernel disk, the value of this option is used,
#   if specified. This is the case when *Amazon*'s AMI/AKI/ARI image
#   format is used for the rescue image.
# * ``rescue_ramdisk_id``: If the chosen rescue image allows the separate
#   definition of its RAM disk, the value of this option is used, if
#   specified. This is the case when *Amazon*'s AMI/AKI/ARI image
#   format is used for the rescue image.
#  (string value)
#rescue_image_id = <None>

#
# The ID of the kernel (AKI) image to use with the rescue image.
#
# If the chosen rescue image allows the separate definition of its kernel
# disk, the value of this option is used, if specified. This is the case
# when *Amazon*'s AMI/AKI/ARI image format is used for the rescue image.
#
# Possible values:
#
# * An ID of a kernel image or nothing. If nothing is specified, the kernel
#   disk from the instance is used if it was launched with one.
#
# Related options:
#
# * ``rescue_image_id``: If that option points to an image in *Amazon*'s
#   AMI/AKI/ARI image format, it's useful to use ``rescue_kernel_id`` too.
#  (string value)
#rescue_kernel_id = <None>

#
# The ID of the RAM disk (ARI) image to use with the rescue image.
#
# If the chosen rescue image allows the separate definition of its RAM
# disk, the value of this option is used, if specified. This is the case
# when *Amazon*'s AMI/AKI/ARI image format is used for the rescue image.
#
# Possible values:
#
# * An ID of a RAM disk image or nothing. If nothing is specified, the RAM
#   disk from the instance is used if it was launched with one.
#
# Related options:
#
# * ``rescue_image_id``: If that option points to an image in *Amazon*'s
#   AMI/AKI/ARI image format, it's useful to use ``rescue_ramdisk_id`` too.
#  (string value)
#rescue_ramdisk_id = <None>

#
# Describes the virtualization type (or so called domain type) libvirt should
# use.
#
# The choice of this type must match the underlying virtualization strategy
# you have chosen for this host.
#
# Related options:
#
# * ``connection_uri``: depends on this
# * ``disk_prefix``: depends on this
# * ``cpu_mode``: depends on this
# * ``cpu_model``: depends on this
#  (string value)
# Possible values:
# kvm - <No description provided>
# lxc - <No description provided>
# qemu - <No description provided>
# uml - <No description provided>
# xen - <No description provided>
# parallels - <No description provided>
#virt_type = kvm

#
# Overrides the default libvirt URI of the chosen virtualization type.
#
# If set, Nova will use this URI to connect to libvirt.
#
# Possible values:
#
# * A URI like ``qemu:///system`` or ``xen+ssh://oirase/`` for example.
#   This is only necessary if the URI differs from the commonly known URIs
#   for the chosen virtualization type.
#
# Related options:
#
# * ``virt_type``: Influences what is used as default value here.
#  (string value)
#connection_uri =

#
# Allow the injection of an admin password for an instance, only during the
# ``create`` and ``rebuild`` processes.
#
# There is no agent needed within the image to do this. If *libguestfs* is
# available on the host, it will be used. Otherwise *nbd* is used. The file
# system of the image will be mounted and the admin password, which is provided
# in the REST API call, will be injected as the password for the root user. If no
# root user is available, the instance won't be launched and an error is thrown.
# Be aware that the injection is *not* possible when the instance gets launched
# from a volume.
#
# *Linux* distribution guest only.
#
# Possible values:
#
# * True: Allows the injection.
# * False: Disallows the injection. Any admin password provided via the REST
#   API will be silently ignored.
#
# Related options:
#
# * ``inject_partition``: That option decides how the file system is discovered
#   and used. It can also disable injection entirely.
#  (boolean value)
#inject_password = false

#
# Allow the injection of an SSH key at boot time.
#
# There is no agent needed within the image to do this. If *libguestfs* is
# available on the host, it will be used. Otherwise *nbd* is used. The file
# system of the image will be mounted and the SSH key, which is provided
# in the REST API call, will be injected as the SSH key for the root user and
# appended to the ``authorized_keys`` of that user. The SELinux context will
# be set if necessary. Be aware that the injection is *not* possible when the
# instance gets launched from a volume.
#
# This config option will enable directly modifying the instance disk and does
# not affect what cloud-init may do using data from config_drive option or the
# metadata service.
#
# *Linux* distribution guest only.
#
# Related options:
#
# * ``inject_partition``: That option decides how the file system is discovered
#   and used. It can also disable injection entirely.
#  (boolean value)
#inject_key = false

#
# Determines how the file system is chosen for injecting data into it.
#
# *libguestfs* will be used as the first solution to inject data. If that's not
# available on the host, the image will be locally mounted on the host as a
# fallback solution. If libguestfs is not able to determine the root partition
# (because there is more than one root partition, or none) or cannot mount the
# file system, it will result in an error and the instance won't boot.
#
# Possible values:
#
# * -2 => disable the injection of data.
# * -1 => find the root partition with the file system to mount with libguestfs
# *  0 => The image is not partitioned
# * >0 => The number of the partition to use for the injection
#
# *Linux* distribution guest only.
#
# Related options:
#
# * ``inject_key``: Injecting an SSH key only works if ``inject_partition``
#   is set to a value greater than or equal to -1.
# * ``inject_password``: Injecting an admin password only works if
#   ``inject_partition`` is set to a value greater than or equal to -1.
# * ``guestfs``: You can enable the debug log level of libguestfs with that
#   config option. A more verbose output will help in debugging issues.
# * ``virt_type``: If you use ``lxc`` as virt_type it will be treated as a
#   single partition image.
#  (integer value)
# Minimum value: -2
#inject_partition = -2

# DEPRECATED:
# Enable a mouse cursor within a graphical VNC or SPICE sessions.
#
# This will only be taken into account if the VM is fully virtualized and VNC
# and/or SPICE is enabled. If the node doesn't support a graphical framebuffer,
# then it is valid to set this to False.
#
# Related options:
#
# * ``[vnc]enabled``: If VNC is enabled, ``use_usb_tablet`` will have an effect.
# * ``[spice]enabled`` + ``[spice].agent_enabled``: If SPICE is enabled and the
#   spice agent is disabled, the config value of ``use_usb_tablet`` will have
#   an effect.
#  (boolean value)
# This option is deprecated for removal since 14.0.0.
# Its value may be silently ignored in the future.
# Reason: This option is being replaced by the 'pointer_model' option.
#use_usb_tablet = true

#
# URI scheme used for live migration.
#
# Override the default libvirt live migration scheme (which is dependent on
# virt_type). If this option is set to None, nova will automatically choose a
# sensible default based on the hypervisor. It is not recommended that you
# change this unless you are very sure that the hypervisor supports a
# particular scheme.
#
# Related options:
#
# * ``virt_type``: This option is meaningful only when ``virt_type`` is set to
#   `kvm` or `qemu`.
# * ``live_migration_uri``: If ``live_migration_uri`` value is not None, the
#   scheme used for live migration is taken from ``live_migration_uri`` instead.
#  (string value)
#live_migration_scheme = <None>

#
# The IP address or hostname to be used as the target for live migration
# traffic.
#
# If this option is set to None, the hostname of the migration target compute
# node will be used.
#
# This option is useful in environments where the live-migration traffic can
# impact the network plane significantly. A separate network for live-migration
# traffic can then use this config option, avoiding the impact on the
# management network.
#
# Possible values:
#
# * A valid IP address or hostname, else None.
#
# Related options:
#
# * ``live_migration_tunnelled``: The live_migration_inbound_addr value is
#   ignored if tunneling is enabled.
#  (string value)
#live_migration_inbound_addr = <None>

# DEPRECATED:
# Live migration target URI to use.
#
# Override the default libvirt live migration target URI (which is dependent
# on virt_type). Any included "%s" is replaced with the migration target
# hostname.
#
# If this option is set to None (which is the default), Nova will automatically
# generate the `live_migration_uri` value based on the 4 supported `virt_type`
# values in the following list:
#
# * 'kvm': 'qemu+tcp://%s/system'
# * 'qemu': 'qemu+tcp://%s/system'
# * 'xen': 'xenmigr://%s/system'
# * 'parallels': 'parallels+tcp://%s/system'
#
# Related options:
#
# * ``live_migration_inbound_addr``: If ``live_migration_inbound_addr`` value
#   is not None and ``live_migration_tunnelled`` is False, the ip/hostname
#   address of target compute node is used instead of ``live_migration_uri`` as
#   the uri for live migration.
# * ``live_migration_scheme``: If ``live_migration_uri`` is not set, the scheme
#   used for live migration is taken from ``live_migration_scheme`` instead.
#  (string value)
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# live_migration_uri is deprecated for removal in favor of two other options
# that allow changing the live migration scheme and target URI:
# ``live_migration_scheme`` and ``live_migration_inbound_addr`` respectively.
#live_migration_uri = <None>

#
# Enable tunnelled migration.
#
# This option enables the tunnelled migration feature, where migration data is
# transported over the libvirtd connection. If enabled, we use the
# VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure
# the network to allow direct hypervisor to hypervisor communication.
# If False, use the native transport. If not set, Nova will choose a
# sensible default based on, for example, the availability of native
# encryption support in the hypervisor. Enabling this option will significantly
# impact performance.
#
# Note that this option is NOT compatible with use of block migration.
#
# Related options:
#
# * ``live_migration_inbound_addr``: The live_migration_inbound_addr value is
#   ignored if tunneling is enabled.
#  (boolean value)
#live_migration_tunnelled = false

#
# Maximum bandwidth(in MiB/s) to be used during migration.
#
# If set to 0, the hypervisor will choose a suitable default. Some hypervisors
# do not support this feature and will return an error if bandwidth is not 0.
# Please refer to the libvirt documentation for further details.
#  (integer value)
#live_migration_bandwidth = 0

#
# Maximum permitted downtime, in milliseconds, for live migration
# switchover.
#
# Will be rounded up to a minimum of 100ms. You can increase this value
# if you want to allow live-migrations to complete faster, or avoid
# live-migration timeout errors by allowing the guest to be paused for
# longer during the live-migration switch over.
#
# Related options:
#
# * live_migration_completion_timeout
#  (integer value)
# Minimum value: 100
#live_migration_downtime = 500

#
# Number of incremental steps to reach max downtime value.
#
# Will be rounded up to a minimum of 3 steps.
#  (integer value)
# Minimum value: 3
#live_migration_downtime_steps = 10

#
# Time to wait, in seconds, between each step increase of the migration
# downtime.
#
# Minimum delay is 3 seconds. Value is per GiB of guest RAM + disk to be
# transferred, with a lower bound of 2 GiB per device.
#  (integer value)
# Minimum value: 3
#live_migration_downtime_delay = 75
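
# As a rough, non-authoritative illustration of how the three downtime
# options interact with the defaults above: for a guest with 8 GiB of RAM +
# disk to transfer, the permitted downtime grows toward
# live_migration_downtime (500 ms) over live_migration_downtime_steps (10)
# increments, waiting about 8 * 75 = 600 seconds between increments.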

#
# Time to wait, in seconds, for migration to successfully complete transferring
# data before aborting the operation.
#
# Value is per GiB of guest RAM + disk to be transferred, with a lower bound
# of 2 GiB. Should usually be larger than downtime delay * downtime
# steps. Set to 0 to disable timeouts.
#
# Related options:
#
# * live_migration_downtime
# * live_migration_downtime_steps
# * live_migration_downtime_delay
#  (integer value)
# Note: This option can be changed without restarting.
#live_migration_completion_timeout = 800

# DEPRECATED:
# Time to wait, in seconds, for migration to make forward progress in
# transferring data before aborting the operation.
#
# Set to 0 to disable timeouts.
#
# This is deprecated, and now disabled by default because we have found serious
# bugs in this feature that caused false live-migration timeout failures. This
# feature will be removed or replaced in a future release.
#  (integer value)
# Note: This option can be changed without restarting.
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# Serious bugs found in this feature, see
# https://bugs.launchpad.net/nova/+bug/1644248 for details.
#live_migration_progress_timeout = 0

#
# This option allows nova to switch an on-going live migration to post-copy
# mode, i.e., switch the active VM to the one on the destination node before the
# migration is complete, therefore ensuring an upper bound on the memory that
# needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0.
#
# When permitted, post-copy mode will be automatically activated if a
# live-migration memory copy iteration does not make at least a 10% increase
# in progress over the last iteration.
#
# The live-migration force complete API also uses post-copy when permitted. If
# post-copy mode is not available, force complete falls back to pausing the VM
# to ensure the live-migration operation will complete.
#
# When using post-copy mode, if the source and destination hosts lose network
# connectivity, the VM being live-migrated will need to be rebooted. For more
# details, please see the Administration guide.
#
# Related options:
#
#     * live_migration_permit_auto_converge
#  (boolean value)
#live_migration_permit_post_copy = false

#
# This option allows nova to start live migration with auto converge on.
#
# Auto converge throttles down the CPU if the progress of an on-going live
# migration is slow. Auto converge will only be used if this flag is set to
# True and post copy is not permitted or post copy is unavailable due to the
# version of libvirt and QEMU in use.
#
# Related options:
#
#     * live_migration_permit_post_copy
#  (boolean value)
#live_migration_permit_auto_converge = false

#
# Determine the snapshot image format when sending to the image service.
#
# If set, this decides what format is used when sending the snapshot to the
# image service. If not set, defaults to same type as source image.
#  (string value)
# Possible values:
# raw - RAW disk format
# qcow2 - KVM default disk format
# vmdk - VMware default disk format
# vdi - VirtualBox default disk format
#snapshot_image_format = <None>

#
# Override the default disk prefix for the devices attached to an instance.
#
# If set, this is used to identify a free disk device name for a bus.
#
# Possible values:
#
# * Any prefix which will result in a valid disk device name like 'sda' or 'hda'
#   for example. This is only necessary if the device names differ from the
#   commonly known device name prefixes for a virtualization type such as: sd,
#   xvd, uvd, vd.
#
# Related options:
#
# * ``virt_type``: Influences which device type is used, which determines
#   the default disk prefix.
#  (string value)
#disk_prefix = <None>

# Number of seconds to wait for instance to shut down after soft reboot request
# is made. We fall back to hard reboot if instance does not shutdown within this
# window. (integer value)
#wait_soft_reboot_seconds = 120

#
# Is used to set the CPU mode an instance should have.
#
# If ``virt_type="kvm|qemu"``, it will default to ``host-model``, otherwise it
# will default to ``none``.
#
# Related options:
#
# * ``cpu_model``: This should be set ONLY when ``cpu_mode`` is set to
#   ``custom``. Otherwise, it will result in an error and the instance launch
#   will fail.
#  (string value)
# Possible values:
# host-model - Clone the host CPU feature flags
# host-passthrough - Use the host CPU model exactly
# custom - Use the CPU model in ``[libvirt]cpu_model``
# none - Don't set a specific CPU model. For instances with ``[libvirt]
# virt_type`` as KVM/QEMU, the default CPU model from QEMU will be used, which
# provides a basic set of CPU features that are compatible with most hosts.
#cpu_mode = <None>

#
# Set the name of the libvirt CPU model the instance should use.
#
# Possible values:
#
# * The named CPU models listed in ``/usr/share/libvirt/cpu_map.xml``
#
# Related options:
#
# * ``cpu_mode``: This should be set to ``custom`` ONLY when you want to
#   configure (via ``cpu_model``) a specific named CPU model.  Otherwise, it
#   will result in an error and the instance launch will fail.
# * ``virt_type``: Only the virtualization types ``kvm`` and ``qemu`` use this.
#  (string value)
#cpu_model = <None>

#
# This allows specifying granular CPU feature flags when configuring CPU
# models.  For example, to explicitly specify the ``pcid``
# (Process-Context ID, an Intel processor feature -- which is now required
# to address the guest performance degradation as a result of applying the
# "Meltdown" CVE fixes to certain Intel CPU models) flag to the
# "IvyBridge" virtual CPU model::
#
#     [libvirt]
#     cpu_mode = custom
#     cpu_model = IvyBridge
#     cpu_model_extra_flags = pcid
#
# To specify multiple CPU flags (e.g. the Intel ``VMX`` to expose the
# virtualization extensions to the guest, or ``pdpe1gb`` to configure 1GB
# huge pages for CPU models that do not provide it)::
#
#     [libvirt]
#     cpu_mode = custom
#     cpu_model = Haswell-noTSX-IBRS
#     cpu_model_extra_flags = PCID, VMX, pdpe1gb
#
# As the examples above show, the ``cpu_model_extra_flags`` config
# attribute is case insensitive.  Specifying extra flags is valid in
# combination with all three possible values of ``cpu_mode``:
# ``custom`` (this also requires an explicit ``cpu_model`` to be
# specified), ``host-model``, or ``host-passthrough``.  Extra CPU flags
# can be needed even in ``host-passthrough`` mode, because QEMU may
# disable certain CPU features -- e.g. Intel's "invtsc" (Invariant Time
# Stamp Counter) CPU flag.  If you need to expose that CPU flag to the
# Nova instance, you need to ask for it explicitly.
#
# The possible values for ``cpu_model_extra_flags`` depend on the CPU
# model in use.  Refer to ``/usr/share/libvirt/cpu_map.xml`` for the
# possible CPU feature flags for a given CPU model.
#
# Note that when using this config attribute to set the 'PCID' CPU flag
# with the ``custom`` CPU mode, not all virtual (i.e. libvirt / QEMU) CPU
# models need it:
#
# * The only virtual CPU models that include the 'PCID' capability are
#   Intel "Haswell", "Broadwell", and "Skylake" variants.
#
# * The libvirt / QEMU CPU models "Nehalem", "Westmere", "SandyBridge",
#   and "IvyBridge" will _not_ expose the 'PCID' capability by default,
#   even if the host CPUs by the same name include it.  I.e.  'PCID' needs
#   to be explicitly specified when using these virtual CPU models.
#
# The libvirt driver's default CPU mode, ``host-model``, will do the right
# thing with respect to handling 'PCID' CPU flag for the guest --
# *assuming* you are running updated processor microcode, host and guest
# kernel, libvirt, and QEMU.  The other mode, ``host-passthrough``, checks
# if 'PCID' is available in the hardware, and if so directly passes it
# through to the Nova guests.  Thus, in context of 'PCID', with either of
# these CPU modes (``host-model`` or ``host-passthrough``), there is no
# need to use the ``cpu_model_extra_flags``.
#
# Related options:
#
# * cpu_mode
# * cpu_model
#  (list value)
#cpu_model_extra_flags =

# Location where libvirt driver will store snapshots before uploading them to
# image service (string value)
#snapshots_directory = $instances_path/snapshots

# Location where the Xen hvmloader is kept (string value)
#xen_hvmloader_path = /usr/lib/xen/boot/hvmloader

#
# Specific cache modes to use for different disk types.
#
# For example: file=directsync,block=none,network=writeback
#
# For local or direct-attached storage, it is recommended that you use
# writethrough (default) mode, as it ensures data integrity and has acceptable
# I/O performance for applications running in the guest, especially for read
# operations. However, caching mode none is recommended for remote NFS storage,
# because direct I/O operations (O_DIRECT) perform better than synchronous I/O
# operations (with O_SYNC). Caching mode none effectively turns all guest I/O
# operations into direct I/O operations on the host, which is the NFS client in
# this environment.
#
# Possible cache modes:
#
# * default: Same as writethrough.
# * none: With caching mode set to none, the host page cache is disabled, but
#   the disk write cache is enabled for the guest. In this mode, the write
#   performance in the guest is optimal because write operations bypass the host
#   page cache and go directly to the disk write cache. If the disk write cache
#   is battery-backed, or if the applications or storage stack in the guest
#   transfer data properly (either through fsync operations or file system
#   barriers), then data integrity can be ensured. However, because the host
#   page cache is disabled, the read performance in the guest would not be as
#   good as in the modes where the host page cache is enabled, such as
#   writethrough mode. Shareable disk devices, like for a multi-attachable block
#   storage volume, will have their cache mode set to 'none' regardless of
#   configuration.
# * writethrough: writethrough mode is the default caching mode. With
#   caching set to writethrough mode, the host page cache is enabled, but the
#   disk write cache is disabled for the guest. Consequently, this caching mode
#   ensures data integrity even if the applications and storage stack in the
#   guest do not transfer data to permanent storage properly (either through
#   fsync operations or file system barriers). Because the host page cache is
#   enabled in this mode, the read performance for applications running in the
#   guest is generally better. However, the write performance might be reduced
#   because the disk write cache is disabled.
# * writeback: With caching set to writeback mode, both the host page cache
#   and the disk write cache are enabled for the guest. Because of this, the
#   I/O performance for applications running in the guest is good, but the data
#   is not protected in a power failure. As a result, this caching mode is
#   recommended only for temporary data where potential data loss is not a
#   concern.
#   NOTE: Certain backend disk mechanisms may provide safe writeback cache
#   semantics. Specifically those that bypass the host page cache, such as
#   QEMU's integrated RBD driver. Ceph documentation recommends setting this
#   to writeback for maximum performance while maintaining data safety.
# * directsync: Like "writethrough", but it bypasses the host page cache.
# * unsafe: Caching mode of unsafe ignores cache transfer operations
#   completely. As its name implies, this caching mode should be used only for
#   temporary data where data loss is not a concern. This mode can be useful for
#   speeding up guest installations, but you should switch to another caching
#   mode in production environments.
#  (list value)
#disk_cachemodes =
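#
# For example, the cache mode string shown above can be applied as follows
# (a sketch only; adjust the modes to your storage layout)::
#
#     [libvirt]
#     disk_cachemodes = file=directsync,block=none,network=writeback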

#
# The path to an RNG (Random Number Generator) device that will be used as
# the source of entropy on the host.  Since libvirt 1.3.4, any path (that
# returns random numbers when read) is accepted.  The recommended source
# of entropy is ``/dev/urandom`` -- it is non-blocking, therefore
# relatively fast; and avoids the limitations of ``/dev/random``, which is
# a legacy interface.  For more details (and a comparison between different
# RNG sources), refer to the "Usage" section in the Linux kernel API
# documentation for ``[u]random``:
# http://man7.org/linux/man-pages/man4/urandom.4.html and
# http://man7.org/linux/man-pages/man7/random.7.html.
#  (string value)
#rng_dev_path = /dev/urandom

# For qemu or KVM guests, set this option to specify a default machine type per
# host architecture. You can find a list of supported machine types in your
# environment by checking the output of the "virsh capabilities" command. The
# format of the value for this config option is host-arch=machine-type. For
# example: x86_64=machinetype1,armv7l=machinetype2 (list value)
#hw_machine_type = <None>
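#
# For example, reusing the format described above (the machine type names are
# placeholders; pick real ones from the "virsh capabilities" output)::
#
#     [libvirt]
#     hw_machine_type = x86_64=machinetype1,armv7l=machinetype2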

# The data source used to populate the host "serial" UUID exposed to the guest
# in the virtual BIOS. (string value)
# Possible values:
# none - A serial number entry is not added to the guest domain xml.
# os - A UUID serial number is generated from the host ``/etc/machine-id`` file.
# hardware - A UUID for the host hardware as reported by libvirt. This is
# typically from the host SMBIOS data, unless it has been overridden in
# ``libvirtd.conf``.
# auto - Uses the "os" source if possible, else "hardware".
#sysinfo_serial = auto

# The period, in seconds, for collecting memory usage statistics. A zero or
# negative value disables memory usage statistics. (integer value)
#mem_stats_period_seconds = 10

# List of uid targets and ranges. Syntax is guest-uid:host-uid:count. A
# maximum of 5 entries is allowed. (list value)
#uid_maps =

# List of gid targets and ranges. Syntax is guest-gid:host-gid:count. A
# maximum of 5 entries is allowed. (list value)
#gid_maps =
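#
# For example, to map guest root to host uid/gid 1000 and the next 999 ids to
# a high host range (all values here are illustrative)::
#
#     [libvirt]
#     uid_maps = 0:1000:1,1:100000:999
#     gid_maps = 0:1000:1,1:100000:999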

# In a realtime host context, vCPUs for the guest will run at this scheduling
# priority. The priority range depends on the host kernel (usually 1-99).
# (integer value)
#realtime_scheduler_priority = 1

#
# This will allow you to specify a list of events to monitor low-level
# performance of guests, and collect related statistics via the libvirt
# driver, which in turn uses the Linux kernel's `perf` infrastructure.
# With this config attribute set, Nova will generate libvirt guest XML to
# monitor the specified events.  For more information, refer to the
# "Performance monitoring events" section here:
# https://libvirt.org/formatdomain.html#elementsPerf.  And here:
# https://libvirt.org/html/libvirt-libvirt-domain.html -- look for
# ``VIR_PERF_PARAM_*``
#
# For example, to monitor the count of CPU cycles (total/elapsed) and the
# count of cache misses, enable them as follows::
#
#     [libvirt]
#     enabled_perf_events = cpu_clock, cache_misses
#
# Possible values: A string list.  The list of supported events can be
# found here: https://libvirt.org/formatdomain.html#elementsPerf.
#
# Note that support for Intel CMT events (`cmt`, `mbmbt`, `mbml`) is
# deprecated, and will be removed in the "Stein" release.  That's because
# the upstream Linux kernel (from 4.14 onwards) has deleted support for
# Intel CMT, because it is broken by design.
#  (list value)
#enabled_perf_events =

#
# The number of PCIe ports an instance will get.
#
# Libvirt allows a custom number of PCIe ports (pcie-root-port controllers) a
# target instance will get. Some will be used by default; the rest will be
# available for hotplug use.
#
# By default there are just 1-2 free ports, which limits hotplug.
#
# More info: https://github.com/qemu/qemu/blob/master/docs/pcie.txt
#
# Due to QEMU limitations for aarch64/virt, the maximum value is set to '28'.
#
# The default value '0' leaves the calculation of the number of ports to
# libvirt.
#  (integer value)
# Minimum value: 0
# Maximum value: 28
#num_pcie_ports = 0

#
# Available capacity in MiB for file-backed memory.
#
# Set to 0 to disable file-backed memory.
#
# When enabled, instances will create memory files in the directory specified
# in ``/etc/libvirt/qemu.conf``'s ``memory_backing_dir`` option. The default
# location is ``/var/lib/libvirt/qemu/ram``.
#
# When enabled, the value defined for this option is reported as the node memory
# capacity. Compute node system memory will be used as a cache for file-backed
# memory, via the kernel's pagecache mechanism.
#
# .. note::
#    This feature is not compatible with hugepages.
#
# .. note::
#    This feature is not compatible with memory overcommit.
#
# Related options:
#
# * ``virt_type`` must be set to ``kvm`` or ``qemu``.
# * ``ram_allocation_ratio`` must be set to 1.0.
#  (integer value)
# Minimum value: 0
#file_backed_memory = 0
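#
# A minimal sketch combining the related options above (the capacity of
# 1048576 MiB, i.e. 1 TiB, is illustrative)::
#
#     [DEFAULT]
#     ram_allocation_ratio = 1.0
#
#     [libvirt]
#     virt_type = kvm
#     file_backed_memory = 1048576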

#
# VM Images format.
#
# If default is specified, then use_cow_images flag is used instead of this
# one.
#
# Related options:
#
# * virt.use_cow_images
# * images_volume_group
#  (string value)
# Possible values:
# raw - <No description provided>
# flat - <No description provided>
# qcow2 - <No description provided>
# lvm - <No description provided>
# rbd - <No description provided>
# ploop - <No description provided>
# default - <No description provided>
#images_type = default

#
# LVM Volume Group that is used for VM images, when you specify images_type=lvm
#
# Related options:
#
# * images_type
#  (string value)
#images_volume_group = <None>
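#
# For example, to store VM images as LVM volumes (the volume group name
# ``nova-images`` is hypothetical)::
#
#     [libvirt]
#     images_type = lvm
#     images_volume_group = nova-images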

# DEPRECATED:
# Create sparse logical volumes (with virtualsize) if this flag is set to True.
#  (boolean value)
# This option is deprecated for removal since 18.0.0.
# Its value may be silently ignored in the future.
# Reason:
# Sparse logical volumes are a feature that is not tested and hence not
# supported. LVM logical volumes are preallocated by default. If you want thin
# provisioning, use Cinder thin-provisioned volumes.
#sparse_logical_volumes = false

# The RADOS pool in which rbd volumes are stored (string value)
#images_rbd_pool = rbd

# Path to the ceph configuration file to use (string value)
#images_rbd_ceph_conf =

#
# Discard option for nova managed disks.
#
# Requires:
#
# * Libvirt >= 1.0.6
# * Qemu >= 1.5 (raw format)
# * Qemu >= 1.6 (qcow2 format)
#  (string value)
# Possible values:
# ignore - <No description provided>
# unmap - <No description provided>
#hw_disk_discard = <None>

# DEPRECATED: Allows image information files to be stored in non-standard
# locations (string value)
# This option is deprecated for removal since 14.0.0.
# Its value may be silently ignored in the future.
# Reason: Image info files are no longer used by the image cache
#image_info_filename_pattern = $instances_path/$image_cache_subdirectory_name/%(image)s.info

# Unused resized base images younger than this will not be removed (integer
# value)
#remove_unused_resized_minimum_age_seconds = 3600

# DEPRECATED: Write a checksum for files in _base to disk (boolean value)
# This option is deprecated for removal since 14.0.0.
# Its value may be silently ignored in the future.
# Reason: The image cache no longer periodically calculates checksums of stored
# images. Data integrity can be checked at the block or filesystem level.
#checksum_base_images = false

# DEPRECATED: How frequently to checksum base images (integer value)
# This option is deprecated for removal since 14.0.0.
# Its value may be silently ignored in the future.
# Reason: The image cache no longer periodically calculates checksums of stored
# images. Data integrity can be checked at the block or filesystem level.
#checksum_interval_seconds = 3600

#
# Method used to wipe ephemeral disks when they are deleted. Only takes effect
# if LVM is set as backing storage.
#
# Related options:
#
# * images_type - must be set to ``lvm``
# * volume_clear_size
#  (string value)
# Possible values:
# zero - Overwrite volumes with zeroes
# shred - Overwrite volumes repeatedly
# none - Do not wipe deleted volumes
#volume_clear = zero

#
# Size of area in MiB, counting from the beginning of the allocated volume,
# that will be cleared using method set in ``volume_clear`` option.
#
# Possible values:
#
# * 0 - clear whole volume
# * >0 - clear specified amount of MiB
#
# Related options:
#
# * images_type - must be set to ``lvm``
# * volume_clear - must be set and the value must be different than ``none``
#   for this option to have any impact
#  (integer value)
# Minimum value: 0
#volume_clear_size = 0
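#
# For example, to zero only the first 100 MiB of deleted LVM-backed disks
# (values are illustrative)::
#
#     [libvirt]
#     images_type = lvm
#     volume_clear = zero
#     volume_clear_size = 100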

#
# Enable snapshot compression for ``qcow2`` images.
#
# Note: you can set ``snapshot_image_format`` to ``qcow2`` to force all
# snapshots to be in ``qcow2`` format, independently from their original image
# type.
#
# Related options:
#
# * snapshot_image_format
#  (boolean value)
#snapshot_compression = false
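#
# For example, to force all snapshots into compressed qcow2, as the note above
# describes::
#
#     [libvirt]
#     snapshot_image_format = qcow2
#     snapshot_compression = true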

# Use virtio for bridge interfaces with KVM/QEMU (boolean value)
#use_virtio_for_bridges = true

#
# Use multipath connection of the iSCSI or FC volume
#
# Volumes can be connected as multipath devices in libvirt. This will
# provide high availability and fault tolerance.
#  (boolean value)
# Deprecated group/name - [libvirt]/iscsi_use_multipath
#volume_use_multipath = false

#
# Number of times to scan the given storage protocol to find the volume.
#  (integer value)
# Deprecated group/name - [libvirt]/num_iscsi_scan_tries
#num_volume_scan_tries = 5

#
# Number of times to rediscover AoE target to find volume.
#
# Nova provides support for attaching block storage to hosts via AoE (ATA over
# Ethernet). This option allows the user to specify the maximum number of retry
# attempts that can be made to discover the AoE device.
#  (integer value)
#num_aoe_discover_tries = 3

#
# The iSCSI transport iface to use to connect to the target in case offload
# support is desired.
#
# Default format is of the form <transport_name>.<hwaddress> where
# <transport_name> is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs) and
# <hwaddress> is the MAC address of the interface and can be generated via the
# iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be
# provided here with the actual transport name.
#  (string value)
# Deprecated group/name - [libvirt]/iscsi_transport
#iscsi_iface = <None>
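#
# For example, following the <transport_name>.<hwaddress> format above (the
# transport and MAC address shown are illustrative)::
#
#     [libvirt]
#     iscsi_iface = bnx2i.00:05:b5:d2:a0:c2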

#
# Number of times to scan iSER target to find volume.
#
# iSER is a server network protocol that extends the iSCSI protocol to use
# Remote Direct Memory Access (RDMA). This option allows the user to specify
# the maximum number of scan attempts that can be made to find an iSER volume.
#  (integer value)
#num_iser_scan_tries = 5

#
# Use multipath connection of the iSER volume.
#
# iSER volumes can be connected as multipath devices. This will provide high
# availability and fault tolerance.
#  (boolean value)
#iser_use_multipath = false

#
# The RADOS client name for accessing rbd (RADOS Block Device) volumes.
#
# Libvirt will refer to this user when connecting and authenticating with
# the Ceph RBD server.
#  (string value)
#rbd_user = <None>

#
# The libvirt UUID of the secret for the rbd_user volumes.
#  (string value)
#rbd_secret_uuid = <None>

#
# Directory where the NFS volume is mounted on the compute node.
# The default is the 'mnt' directory of the location where nova's Python
# module is installed.
#
# NFS provides shared storage for the OpenStack Block Storage service.
#
# Possible values:
#
# * A string representing absolute path of mount point.
#  (string value)
#nfs_mount_point_base = $state_path/mnt

#
# Mount options passed to the NFS client. See the nfs man page for details.
#
# Mount options control the way the filesystem is mounted and how the
# NFS client behaves when accessing files on this mount point.
#
# Possible values:
#
# * Any string representing mount options separated by commas.
# * Example string: vers=3,lookupcache=pos
#  (string value)
#nfs_mount_options = <None>
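#
# For example, using the example string above as configuration::
#
#     [libvirt]
#     nfs_mount_options = vers=3,lookupcache=pos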

#
# Directory where the Quobyte volume is mounted on the compute node.
#
# Nova supports the Quobyte volume driver that enables storing Block Storage
# service volumes on a Quobyte storage back end. This option specifies the
# path of the directory where the Quobyte volume is mounted.
#
# Possible values:
#
# * A string representing absolute path of mount point.
#  (string value)
#quobyte_mount_point_base = $state_path/mnt

# Path to a Quobyte Client configuration file. (string value)
#quobyte_client_cfg = <None>

#
# Directory where the SMBFS shares are mounted on the compute node.
#  (string value)
#smbfs_mount_point_base = $state_path/mnt

#
# Mount options passed to the SMBFS client.
#
# Provide SMBFS options as a single string containing all parameters.
# See the mount.cifs man page for details. Note that the libvirt-qemu ``uid``
# and ``gid`` must be specified.
#  (string value)
#smbfs_mount_options =
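#
# For example (the uid/gid values are placeholders for the libvirt-qemu user
# and group ids on your host)::
#
#     [libvirt]
#     smbfs_mount_options = username=guest,uid=107,gid=107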

#
# libvirt's transport method for remote file operations.
#
# Because libvirt cannot use RPC to copy files over the network to/from other
# compute nodes, another method must be used for:
#
# * creating directory on remote host
# * creating file on remote host
# * removing file from remote host
# * copying file to remote host
#  (string value)
# Possible values:
# ssh - <No description provided>
# rsync - <No description provided>
#remote_filesystem_transport = ssh

#
# Directory where the Virtuozzo Storage clusters are mounted on the compute
# node.
#
# This option defines a non-standard mountpoint for the Vzstorage cluster.
#
# Related options:
#
# * vzstorage_mount_* group of parameters
#  (string value)
#vzstorage_mount_point_base = $state_path/mnt

#
# Mount owner user name.
#
# This option defines the owner user of the Vzstorage cluster mountpoint.
#
# Related options:
#
# * vzstorage_mount_* group of parameters
#  (string value)
#vzstorage_mount_user = stack

#
# Mount owner group name.
#
# This option defines the owner group of the Vzstorage cluster mountpoint.
#
# Related options:
#
# * vzstorage_mount_* group of parameters
#  (string value)
#vzstorage_mount_group = qemu

#
# Mount access mode.
#
# This option defines the access bits of the Vzstorage cluster mountpoint,
# in a format similar to that of the chmod(1) utility, e.g. 0770.
# It consists of one to four digits ranging from 0 to 7, with missing
# leading digits assumed to be 0's.
#
# Related options:
#
# * vzstorage_mount_* group of parameters
#  (string value)
#vzstorage_mount_perms = 0770

#
# Path to vzstorage client log.
#
# This option defines the log file for cluster operations; it should include
# the "%(cluster_name)s" template to separate logs from multiple shares.
#
# Related options:
#
# * vzstorage_mount_opts may include more detailed logging options.
#  (string value)
#vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz

#
# Path to the SSD cache file.
#
# You can attach an SSD drive to a client and configure the drive to store
# a local cache of frequently accessed data. By having a local cache on a
# client's SSD drive, you can increase the overall cluster performance by
# up to 10 times or more.
# WARNING! There are many SSD models which are not server grade and may
# lose an arbitrary set of data changes on power loss.
# Such SSDs should not be used in Vstorage and are dangerous, as they may
# lead to data corruption and inconsistencies. Please consult the manual
# on which SSD models are known to be safe, or verify using the
# vstorage-hwflush-check(1) utility.
#
# This option defines the path, which should include the "%(cluster_name)s"
# template to separate caches from multiple shares.
#
# Related options:
#
# * vzstorage_mount_opts may include more detailed cache options.
#  (string value)
#vzstorage_cache_path = <None>

#
# Extra mount options for pstorage-mount
#
# For full description of them, see
# https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html
# The format is a python string representation of an arguments list, like:
# "['-v', '-R', '500']"
# It shouldn't include -c, -l, -C, -u, -g and -m, as those have
# explicit vzstorage_* options.
#
# Related options:
#
# * All other vzstorage_* options
#  (list value)
#vzstorage_mount_opts =
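#
# For example, using the list format above (the flags are illustrative; see
# the pstorage-mount man page for their meaning)::
#
#     [libvirt]
#     vzstorage_mount_opts = ['-v', '-R', '500']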

#
# Configure virtio rx queue size.
#
# This option is only usable for virtio-net devices with the vhost and
# vhost-user backends. Available only with QEMU/KVM. Requires libvirt
# v2.3 and QEMU v2.7. (integer value)
# Possible values:
# 256 - <No description provided>
# 512 - <No description provided>
# 1024 - <No description provided>
#rx_queue_size = <None>

#
# Configure virtio tx queue size.
#
# This option is only usable for virtio-net devices with the vhost-user
# backend. Available only with QEMU/KVM. Requires libvirt v3.7 and QEMU
# v2.10. (integer value)
# Possible values:
# 256 - <No description provided>
# 512 - <No description provided>
# 1024 - <No description provided>
#tx_queue_size = <None>
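#
# For example, to enlarge both virtio queues to their maximum listed size
# (assuming a vhost-user backend and sufficiently recent libvirt/QEMU)::
#
#     [libvirt]
#     rx_queue_size = 1024
#     tx_queue_size = 1024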

#
# Number of times to rediscover NVMe target to find volume.
#
# Nova provides support for attaching block storage to hosts via NVMe
# (Non-Volatile Memory Express). This option allows the user to specify the
# maximum number of retry attempts that can be made to discover the NVMe
# device.
#  (integer value)
#num_nvme_discover_tries = 5


[metrics]
#
# Configuration options for metrics
#
# Options under this group allow you to adjust how values assigned to metrics
# are calculated.

#
# From nova.conf
#

#
# When using metrics to weight the suitability of a host, you can use this
# option to change how the calculated weight influences the weight assigned
# to a host as follows:
#
# * >1.0: increases the effect of the metric on overall weight
# * 1.0: no change to the calculated weight
# * >0.0,<1.0: reduces the effect of the metric on overall weight
# * 0.0: the metric value is ignored, and the value of the
#   'weight_of_unavailable' option is returned instead
# * >-1.0,<0.0: the effect is reduced and reversed
# * -1.0: the effect is reversed
# * <-1.0: the effect is increased proportionally and reversed
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect.
#
# Possible values:
#
# * An integer or float value, where the value corresponds to the multiplier
#   ratio for this weigher.
#
# Related options:
#
# * weight_of_unavailable
#  (floating point value)
#weight_multiplier = 1.0

#
# This setting specifies the metrics to be weighed and the relative ratios for
# each metric. This should be a single string value, consisting of a series of
# one or more 'name=ratio' pairs, separated by commas, where 'name' is the name
# of the metric to be weighed, and 'ratio' is the relative weight for that
# metric.
#
# Note that if the ratio is set to 0, the metric value is ignored, and instead
# the weight will be set to the value of the 'weight_of_unavailable' option.
#
# As an example, let's consider the case where this option is set to:
#
#     ``name1=1.0, name2=-1.3``
#
# The final weight will be:
#
#     ``(name1.value * 1.0) + (name2.value * -1.3)``
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect.
#
# Possible values:
#
# * A list of zero or more key/value pairs separated by commas, where the key is
#   a string representing the name of a metric and the value is a numeric weight
#   for that metric. If any value is set to 0, the value is ignored and the
#   weight will be set to the value of the 'weight_of_unavailable' option.
#
# Related options:
#
# * weight_of_unavailable
#  (list value)
#weight_setting =

#
# This setting determines how any unavailable metrics are treated. If this
# option is set to True, any hosts for which a metric is unavailable will
# raise an exception, so it is recommended to also use the MetricFilter to
# filter out those hosts before weighing.
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect.
#
# Possible values:
#
# * True or False, where False ensures any metric being unavailable for a host
#   will set the host weight to 'weight_of_unavailable'.
#
# Related options:
#
# * weight_of_unavailable
#  (boolean value)
#required = true

#
# When any of the following conditions are met, this value will be used in place
# of any actual metric value:
#
# * One of the metrics named in 'weight_setting' is not available for a host,
#   and the value of 'required' is False
# * The ratio specified for a metric in 'weight_setting' is 0
# * The 'weight_multiplier' option is set to 0
#
# This option is only used by the FilterScheduler and its subclasses; if you use
# a different scheduler, this option has no effect.
#
# Possible values:
#
# * An integer or float value, where the value corresponds to the multiplier
#   ratio for this weigher.
#
# Related options:
#
# * weight_setting
# * required
# * weight_multiplier
#  (floating point value)
#weight_of_unavailable = -10000.0


[mks]
#
# Nova compute nodes use WebMKS, a desktop sharing protocol, to provide
# instance console access to VMs created by VMware hypervisors.
#
# Related options:
# The following options must be set to provide console access.
# * mksproxy_base_url
# * enabled

#
# From nova.conf
#

#
# Location of the MKS web console proxy
#
# The URL in the response points to a WebMKS proxy which
# starts proxying between the client and the corresponding vCenter
# server where the instance runs. In order to use web based
# console access, a WebMKS proxy should be installed and configured.
#
# Possible values:
#
# * Must be a valid URL of the form:``http://host:port/`` or
#   ``https://host:port/``
#  (uri value)
#mksproxy_base_url = http://127.0.0.1:6090/

#
# Enables graphical console access for virtual machines.
#  (boolean value)
#enabled = false


[neutron]
#
# Configuration options for neutron (network connectivity as a service).

#
# From nova.conf
#

# DEPRECATED:
# This option specifies the URL for connecting to Neutron.
#
# Possible values:
#
# * Any valid URL that points to the Neutron API service is appropriate here.
#   This typically matches the URL returned for the 'network' service type
#   from the Keystone service catalog.
#  (uri value)
# This option is deprecated for removal since 17.0.0.
# Its value may be silently ignored in the future.
# Reason: Endpoint lookup uses the service catalog via common keystoneauth1
# Adapter configuration options. In the current release, "url" will override
# this behavior, but will be ignored and/or removed in a future release. To
# achieve the same result, use the endpoint_override option instead.
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#url = http://127.0.0.1:9696

#
# Default name for the Open vSwitch integration bridge.
#
# Specifies the name of an integration bridge interface used by Open vSwitch.
# This option is only used if Neutron does not specify the OVS bridge name in
# port binding responses.
#  (string value)
#ovs_bridge = br-int

#
# Default name for the floating IP pool.
#
# Specifies the name of the floating IP pool used for allocating floating IPs.
# This option is only used if Neutron does not specify the floating IP pool
# name in port binding responses.
#  (string value)
#default_floating_pool = nova

#
# Integer value representing the number of seconds to wait before querying
# Neutron for extensions.  After this number of seconds, the next time Nova
# needs to create a resource in Neutron, it will requery Neutron for the
# extensions that it has loaded.  Setting the value to 0 will refresh the
# extensions with no wait.
#  (integer value)
# Minimum value: 0
#extension_sync_interval = 600

#
# List of physnets present on this host.
#
# For each *physnet* listed, an additional section,
# ``[neutron_physnet_$PHYSNET]``, will be added to the configuration file. Each
# section must be configured with a single configuration option, ``numa_nodes``,
# which should be a list of node IDs for all NUMA nodes this physnet is
# associated with. For example::
#
#     [neutron]
#     physnets = foo, bar
#
#     [neutron_physnet_foo]
#     numa_nodes = 0
#
#     [neutron_physnet_bar]
#     numa_nodes = 0,1
#
# Any *physnet* that is not listed using this option will be treated as
# having no particular NUMA node affinity.
#
# Tunnelled networks (VXLAN, GRE, ...) cannot be accounted for in this way and
# are instead configured using the ``[neutron_tunnel]`` group. For example::
#
#     [neutron_tunnel]
#     numa_nodes = 1
#
# Related options:
#
# * ``[neutron_tunnel] numa_nodes`` can be used to configure NUMA affinity for
#   all tunneled networks
# * ``[neutron_physnet_$PHYSNET] numa_nodes`` must be configured for each value
#   of ``$PHYSNET`` specified by this option
#  (list value)
#physnets =

#
# When set to True, this option indicates that Neutron will be used to proxy
# metadata requests and resolve instance ids. Otherwise, the instance ID must be
# passed to the metadata request in the 'X-Instance-ID' header.
#
# Related options:
#
# * metadata_proxy_shared_secret
#  (boolean value)
#service_metadata_proxy = false

#
# This option holds the shared secret string used to validate proxy requests
# to the Neutron metadata service. In order to be used, the
# 'X-Metadata-Provider-Signature' header must be supplied in the request.
#
# Related options:
#
# * service_metadata_proxy
#  (string value)
#metadata_proxy_shared_secret =
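#
# For example, to let Neutron proxy metadata requests using a shared secret
# (the secret value is a placeholder)::
#
#     [neutron]
#     service_metadata_proxy = true
#     metadata_proxy_shared_secret = METADATA_SECRET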

# PEM encoded Certificate Authority to use when verifying HTTPS connections.
# (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# Timeout value for http requests (integer value)
#timeout = <None>

# Collect per-API call timing information. (boolean value)
#collect_timing = false

# Log requests to multiple loggers. (boolean value)
#split_loggers = false

# Authentication type to load (string value)
# Deprecated group/name - [neutron]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>

# Authentication URL (string value)
#auth_url = <None>

# Scope for system operations (string value)
#system_scope = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Project ID to scope to (string value)
#project_id = <None>

# Project name to scope to (string value)
#project_name = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# Trust ID (string value)
#trust_id = <None>

# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>

# Optional domain name to use with v3 API and v2 parameters. It will be used for
# both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
#default_domain_name = <None>

# User ID (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [neutron]/user_name
#username = <None>

# User's domain id (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User's password (string value)
#password = <None>

# Tenant ID (string value)
#tenant_id = <None>

# Tenant Name (string value)
#tenant_name = <None>

# The default service_type for endpoint URL discovery. (string value)
#service_type = network

# The default service_name for endpoint URL discovery. (string value)
#service_name = <None>

# List of interfaces, in order of preference, for endpoint URL. (list value)
#valid_interfaces = internal,public

# The default region_name for endpoint URL discovery. (string value)
#region_name = <None>

# Always use this endpoint URL for requests for this client. NOTE: The
# unversioned endpoint should be specified here; to request a particular API
# version, use the `version`, `min-version`, and/or `max-version` options.
# (string value)
#endpoint_override = <None>


[notifications]
#
# Most of the actions in Nova which manipulate the system state generate
# notifications which are posted to the messaging component (e.g. RabbitMQ)
# and can be consumed by any service outside of OpenStack. More technical
# details are available at
# https://docs.openstack.org/nova/latest/reference/notifications.html

#
# From nova.conf
#

#
# If set, send compute.instance.update notifications on
# instance state changes.
#
# Please refer to
# https://docs.openstack.org/nova/latest/reference/notifications.html for
# additional information on notifications.
#  (string value)
# Possible values:
# <None> - no notifications
# vm_state - Notifications are sent with VM state transition information in the
# ``old_state`` and ``state`` fields. The ``old_task_state`` and
# ``new_task_state`` fields will be set to the current task_state of the
# instance
# vm_and_task_state - Notifications are sent with VM and task state transition
# information
#notify_on_state_change = <None>

# Default notification level for outgoing notifications. (string value)
# Possible values:
# DEBUG - <No description provided>
# INFO - <No description provided>
# WARN - <No description provided>
# ERROR - <No description provided>
# CRITICAL - <No description provided>
# Deprecated group/name - [DEFAULT]/default_notification_level
#default_level = INFO

#
# Specifies which notification format shall be used by nova.
#
# The default value is fine for most deployments and rarely needs to be changed.
# This value can be set to 'versioned' once the infrastructure moves closer to
# consuming the newer format of notifications. After this occurs, this option
# will be removed.
#
# Note that notifications can be completely disabled by setting ``driver=noop``
# in the ``[oslo_messaging_notifications]`` group.
#
# The list of versioned notifications is visible in
# https://docs.openstack.org/nova/latest/reference/notifications.html
#  (string value)
# Possible values:
# both - Both the legacy unversioned and the new versioned notifications are
# emitted
# versioned - Only the new versioned notifications are emitted
# unversioned - Only the legacy unversioned notifications are emitted
#notification_format = both

#
# Specifies the topics for the versioned notifications issued by nova.
#
# The default value is fine for most deployments and rarely needs to be changed.
# However, if you have a third-party service that consumes versioned
# notifications, it might be worth getting a topic for that service.
# Nova will send a message containing a versioned notification payload to each
# topic queue in this list.
#
# The list of versioned notifications is visible in
# https://docs.openstack.org/nova/latest/reference/notifications.html
#  (list value)
#versioned_notifications_topics = versioned_notifications

#
# If enabled, include block device information in the versioned notification
# payload. Sending block device information is disabled by default as providing
# that information can incur some overhead on the system since the information
# may need to be loaded from the database.
#  (boolean value)
#bdms_in_notifications = false


[os_win]

#
# From os_win
#

# Fibre Channel hbaapi library path. If no custom hbaapi library is requested,
# the default one will be used. (string value)
#hbaapi_lib_path = hbaapi.dll

# Caches temporary WMI objects in order to increase performance. This only
# affects networkutils, where almost all operations require a reference to a
# switch port. The cached objects are no longer valid if the VM they are
# associated with is destroyed. (boolean value)
#cache_temporary_wmi_objects = true

# The default amount of seconds to wait when stopping pending WMI jobs. Setting
# this value to 0 will disable the timeout. (integer value)
#wmi_job_terminate_timeout = 120


[osapi_v21]

#
# From nova.conf
#

# DEPRECATED:
# This option is a string representing a regular expression (regex) that matches
# the project_id as contained in URLs. If not set, it will match normal UUIDs
# created by keystone.
#
# Possible values:
#
# * A string representing any legal regular expression
#  (string value)
# This option is deprecated for removal since 13.0.0.
# Its value may be silently ignored in the future.
# Reason:
# Recent versions of nova constrain project IDs to hexadecimal characters and
# dashes. If your installation uses IDs outside of this range, you should use
# this option to provide your own regex, giving you time to migrate offending
# projects to valid IDs before the next release.
#project_id_regex = <None>


[oslo_concurrency]

#
# From oslo.concurrency
#

# Enables or disables inter-process locks. (boolean value)
#disable_process_locking = false

# Directory to use for lock files.  For security, the specified directory should
# only be writable by the user running the processes that need locking. Defaults
# to environment variable OSLO_LOCK_PATH. If external locks are used, a lock
# path must be set. (string value)
#lock_path = <None>


[oslo_messaging_amqp]

#
# From oslo.messaging
#

# Name for the AMQP container. Must be globally unique. Defaults to a
# generated UUID. (string value)
#container_name = <None>

# Timeout for inactive connections (in seconds) (integer value)
#idle_timeout = 0

# Debug: dump AMQP frames to stdout (boolean value)
#trace = false

# Attempt to connect via SSL. If no other ssl-related parameters are given, it
# will use the system's CA-bundle to verify the server's certificate. (boolean
# value)
#ssl = false

# CA certificate PEM file used to verify the server's certificate (string value)
#ssl_ca_file =

# Self-identifying certificate PEM file for client authentication (string value)
#ssl_cert_file =

# Private key PEM file used to sign ssl_cert_file certificate (optional) (string
# value)
#ssl_key_file =

# Password for decrypting ssl_key_file (if encrypted) (string value)
#ssl_key_password = <None>

# By default SSL checks that the name in the server's certificate matches the
# hostname in the transport_url. In some configurations it may be preferable to
# use the virtual hostname instead, for example if the server uses the Server
# Name Indication TLS extension (rfc6066) to provide a certificate per virtual
# host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
# virtual host name instead of the DNS name. (boolean value)
#ssl_verify_vhost = false

# Space separated list of acceptable SASL mechanisms (string value)
#sasl_mechanisms =

# Path to directory that contains the SASL configuration (string value)
#sasl_config_dir =

# Name of configuration file (without .conf suffix) (string value)
#sasl_config_name =

# SASL realm to use if no realm present in username (string value)
#sasl_default_realm =

# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1

# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2

# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30

# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10

# The maximum number of attempts to re-send a reply message which failed due to
# a recoverable error. (integer value)
# Minimum value: -1
#default_reply_retry = 0

# The deadline for an rpc reply message delivery. (integer value)
# Minimum value: 5
#default_reply_timeout = 30

# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30

# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30

# The duration to schedule a purge of idle sender links. Detach link after
# expiry. (integer value)
# Minimum value: 1
#default_sender_link_timeout = 600

# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy'   - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic'  - use legacy addresses if the message bus does not support
# routing, otherwise use routable addressing (string value)
#addressing_mode = dynamic

# Enable virtual host support for those message buses that do not natively
# support virtual hosting (such as qpidd). When set to true the virtual host
# name will be added to all message bus addresses, effectively creating a
# private 'subnet' per virtual host. Set to False if the message bus supports
# virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
# as the name of the virtual host. (boolean value)
#pseudo_vhost = true

# address prefix used when sending to a specific server (string value)
#server_request_prefix = exclusive

# address prefix used when broadcasting to all servers (string value)
#broadcast_prefix = broadcast

# address prefix when sending to any server in group (string value)
#group_request_prefix = unicast

# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc

# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify

# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast

# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast

# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-robin
# fashion across consumers. (string value)
#anycast_address = anycast

# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>

# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>

# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200

# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100

# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100

# Send messages of this type pre-settled.
# Pre-settled messages will not receive acknowledgement
# from the peer. Note well: pre-settled messages may be
# silently discarded if the delivery fails.
# Permitted values:
# 'rpc-call' - send RPC Calls pre-settled
# 'rpc-reply'- send RPC Replies pre-settled
# 'rpc-cast' - Send RPC Casts pre-settled
# 'notify'   - Send Notifications pre-settled
#  (multi valued)
#pre_settled = rpc-cast
#pre_settled = rpc-reply


[oslo_messaging_kafka]

#
# From oslo.messaging
#

# Max fetch bytes of Kafka consumer (integer value)
#kafka_max_fetch_bytes = 1048576

# Default timeout(s) for Kafka consumers (floating point value)
#kafka_consumer_timeout = 1.0

# DEPRECATED: Pool Size for Kafka Consumers (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Driver no longer uses connection pool.
#pool_size = 10

# DEPRECATED: The pool size limit for connections expiration policy (integer
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Driver no longer uses connection pool.
#conn_pool_min_size = 2

# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Driver no longer uses connection pool.
#conn_pool_ttl = 1200

# Group id for Kafka consumer. Consumers in one group will coordinate message
# consumption (string value)
#consumer_group = oslo_messaging_consumer

# Upper bound on the delay for KafkaProducer batching in seconds (floating point
# value)
#producer_batch_timeout = 0.0

# Size of batch for the producer async send (integer value)
#producer_batch_size = 16384

# Enable asynchronous consumer commits (boolean value)
#enable_auto_commit = false

# The maximum number of records returned in a poll call (integer value)
#max_poll_records = 500

# Protocol used to communicate with brokers (string value)
# Possible values:
# PLAINTEXT - <No description provided>
# SASL_PLAINTEXT - <No description provided>
# SSL - <No description provided>
# SASL_SSL - <No description provided>
#security_protocol = PLAINTEXT

# Mechanism when security protocol is SASL (string value)
#sasl_mechanism = PLAIN

# CA certificate PEM file used to verify the server certificate (string value)
#ssl_cafile =


[oslo_messaging_notifications]

#
# From oslo.messaging
#

# The driver(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =

# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications

# The maximum number of attempts to re-send a notification message which failed
# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
# (integer value)
#retry = -1


[oslo_messaging_rabbit]

#
# From oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
#amqp_durable_queues = false

# Auto-delete queues in AMQP. (boolean value)
#amqp_auto_delete = false

# Connect over SSL. (boolean value)
# Deprecated group/name - [oslo_messaging_rabbit]/rabbit_use_ssl
#ssl = false

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
#ssl_version =

# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
#ssl_key_file =

# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
#ssl_cert_file =

# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
#ssl_ca_file =

# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
#kombu_reconnect_delay = 1.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will
# not be used. This option may not be available in future versions. (string
# value)
#kombu_compression = <None>

# How long to wait for a missing client before abandoning sending its replies.
# This value should not be longer than rpc_response_timeout. (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than one
# RabbitMQ node is provided in config. (string value)
# Possible values:
# round-robin - <No description provided>
# shuffle - <No description provided>
#kombu_failover_strategy = round-robin

# The RabbitMQ login method. (string value)
# Possible values:
# PLAIN - <No description provided>
# AMQPLAIN - <No description provided>
# RABBIT-CR-DEMO - <No description provided>
#rabbit_login_method = AMQPLAIN

# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1

# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue. If
# you just want to make sure that all queues (except those with auto-generated
# names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
# '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
#rabbit_ha_queues = false

# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically deleted.
# The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800

# Specifies the number of messages to prefetch. Setting to zero allows unlimited
# messages. (integer value)
#rabbit_qos_prefetch_count = 0

# Number of seconds after which the Rabbit broker is considered down if
# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
# (integer value)
#heartbeat_timeout_threshold = 60

# How many times during the heartbeat_timeout_threshold we check the
# heartbeat. (integer value)
#heartbeat_rate = 2


[pci]

#
# From nova.conf
#

#
# An alias for a PCI passthrough device requirement.
#
# This allows users to specify the alias in the extra specs for a flavor,
# without needing to repeat all the PCI property requirements.
#
# Possible Values:
#
# * A dictionary of JSON values which describe the aliases. For example::
#
#     alias = {
#       "name": "QuickAssist",
#       "product_id": "0443",
#       "vendor_id": "8086",
#       "device_type": "type-PCI",
#       "numa_policy": "required"
#     }
#
#   This defines an alias for the Intel QuickAssist card (multi valued). Valid
#   key values are:
#
#   ``name``
#     Name of the PCI alias.
#
#   ``product_id``
#     Product ID of the device in hexadecimal.
#
#   ``vendor_id``
#     Vendor ID of the device in hexadecimal.
#
#   ``device_type``
#     Type of PCI device. Valid values are: ``type-PCI``, ``type-PF`` and
#     ``type-VF``.
#
#   ``numa_policy``
#     Required NUMA affinity of device. Valid values are: ``legacy``,
#     ``preferred`` and ``required``.
#
# * Supports multiple aliases by repeating the option (not by specifying
#   a list value)::
#
#     alias = {
#       "name": "QuickAssist-1",
#       "product_id": "0443",
#       "vendor_id": "8086",
#       "device_type": "type-PCI",
#       "numa_policy": "required"
#     }
#     alias = {
#       "name": "QuickAssist-2",
#       "product_id": "0444",
#       "vendor_id": "8086",
#       "device_type": "type-PCI",
#       "numa_policy": "required"
#     }
#  (multi valued)
# Deprecated group/name - [DEFAULT]/pci_alias
#alias =
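
# Once an alias is defined, a flavor can request matching devices through its
# extra specs. A minimal sketch (illustrative flavor name; requests one
# QuickAssist device per instance):
#
#   openstack flavor set m1.large --property "pci_passthrough:alias"="QuickAssist:1"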

#
# White list of PCI devices available to VMs.
#
# Possible values:
#
# * A JSON dictionary which describes a whitelisted PCI device. It should take
#   the following format::
#
#     ["vendor_id": "<id>",] ["product_id": "<id>",]
#     ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" |
#      "devname": "<name>",]
#     {"<tag>": "<tag_value>",}
#
#   Where ``[`` indicates zero or one occurrences, ``{`` indicates zero or
#   multiple occurrences, and ``|`` indicates mutually exclusive options. Note
#   that any missing fields are automatically wildcarded.
#
#   Valid key values are:
#
#   ``vendor_id``
#     Vendor ID of the device in hexadecimal.
#
#   ``product_id``
#     Product ID of the device in hexadecimal.
#
#   ``address``
#     PCI address of the device. Both traditional glob style and regular
#     expression syntax are supported.
#
#   ``devname``
#     Device name of the device (e.g. an interface name). Not all PCI devices
#     have a name.
#
#   ``<tag>``
#     Additional ``<tag>`` and ``<tag_value>`` used for matching PCI devices.
#     Supported ``<tag>`` values are:
#
#     - ``physical_network``
#     - ``trusted``
#
#   Valid examples are::
#
#     passthrough_whitelist = {"devname":"eth0",
#                              "physical_network":"physnet"}
#     passthrough_whitelist = {"address":"*:0a:00.*"}
#     passthrough_whitelist = {"address":":0a:00.",
#                              "physical_network":"physnet1"}
#     passthrough_whitelist = {"vendor_id":"1137",
#                              "product_id":"0071"}
#     passthrough_whitelist = {"vendor_id":"1137",
#                              "product_id":"0071",
#                              "address": "0000:0a:00.1",
#                              "physical_network":"physnet1"}
#     passthrough_whitelist = {"address":{"domain": ".*",
#                                         "bus": "02", "slot": "01",
#                                         "function": "[2-7]"},
#                              "physical_network":"physnet1"}
#     passthrough_whitelist = {"address":{"domain": ".*",
#                                         "bus": "02", "slot": "0[1-2]",
#                                         "function": ".*"},
#                              "physical_network":"physnet1"}
#     passthrough_whitelist = {"devname": "eth0", "physical_network":"physnet1",
#                              "trusted": "true"}
#
#   The following are invalid, as they specify mutually exclusive options::
#
#     passthrough_whitelist = {"devname":"eth0",
#                              "physical_network":"physnet",
#                              "address":"*:0a:00.*"}
#
# * A JSON list of JSON dictionaries corresponding to the above format. For
#   example::
#
#     passthrough_whitelist = [{"product_id":"0001", "vendor_id":"8086"},
#                              {"product_id":"0002", "vendor_id":"8086"}]
#  (multi valued)
# Deprecated group/name - [DEFAULT]/pci_passthrough_whitelist
#passthrough_whitelist =
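
# For a flavor to consume a whitelisted device through an alias, the alias and
# whitelist entries are typically matched on the same vendor_id and
# product_id. A minimal sketch with illustrative IDs:
#
#   passthrough_whitelist = {"vendor_id":"8086", "product_id":"0443"}
#   alias = {"name":"QuickAssist", "vendor_id":"8086", "product_id":"0443",
#            "device_type":"type-PCI"}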


[placement]

#
# From nova.conf
#

#
# If True, when limiting allocation candidate results, the results will be
# a random sampling of the full result set. If False, allocation candidates
# are returned in a deterministic but undefined order. That is, all things
# being equal, two requests for allocation candidates will return the same
# results in the same order; but no guarantees are made as to how that order
# is determined.
#  (boolean value)
#randomize_allocation_candidates = false

# The file that defines placement policies. This can be an absolute path or
# relative to the configuration file. (string value)
#policy_file = placement-policy.yaml

#
# Early API microversions (<1.8) allowed creating allocations and not specifying
# a project or user identifier for the consumer. In cleaning up the data
# modeling, we no longer allow missing project and user information. If an older
# client makes an allocation, we'll use this in place of the information it
# doesn't provide.
#  (string value)
#incomplete_consumer_project_id = 00000000-0000-0000-0000-000000000000

#
# Early API microversions (<1.8) allowed creating allocations and not specifying
# a project or user identifier for the consumer. In cleaning up the data
# modeling, we no longer allow missing project and user information. If an older
# client makes an allocation, we'll use this in place of the information it
# doesn't provide.
#  (string value)
#incomplete_consumer_user_id = 00000000-0000-0000-0000-000000000000

# PEM encoded Certificate Authority to use when verifying HTTPS connections.
# (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# Timeout value for http requests (integer value)
#timeout = <None>

# Collect per-API call timing information. (boolean value)
#collect_timing = false

# Log requests to multiple loggers. (boolean value)
#split_loggers = false

# Authentication type to load (string value)
# Deprecated group/name - [placement]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>

# Authentication URL (string value)
#auth_url = <None>

# Scope for system operations (string value)
#system_scope = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Project ID to scope to (string value)
#project_id = <None>

# Project name to scope to (string value)
#project_name = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# Trust ID (string value)
#trust_id = <None>

# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>

# Optional domain name to use with v3 API and v2 parameters. It will be used for
# both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
#default_domain_name = <None>

# User ID (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [placement]/user_name
#username = <None>

# User's domain id (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User's password (string value)
#password = <None>

# Tenant ID (string value)
#tenant_id = <None>

# Tenant Name (string value)
#tenant_name = <None>

# The default service_type for endpoint URL discovery. (string value)
#service_type = placement

# The default service_name for endpoint URL discovery. (string value)
#service_name = <None>

# List of interfaces, in order of preference, for endpoint URL. (list value)
#valid_interfaces = internal,public

# The default region_name for endpoint URL discovery. (string value)
#region_name = <None>

# Always use this endpoint URL for requests for this client. NOTE: The
# unversioned endpoint should be specified here; to request a particular API
# version, use the `version`, `min-version`, and/or `max-version` options.
# (string value)
#endpoint_override = <None>
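
# A minimal sketch of this section using Keystone password authentication,
# assuming an illustrative Keystone endpoint ("controller") and illustrative
# service credentials:
#
#   auth_type = password
#   auth_url = http://controller:5000/v3
#   project_name = service
#   project_domain_name = Default
#   username = placement
#   user_domain_name = Default
#   password = secret
#   region_name = RegionOne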


[placement_database]
#
# The *Placement API Database* is a separate database which can be used with
# the placement service. This database is optional: if the connection option
# is not set, the Nova API database will be used instead.

#
# From nova.conf
#

# The SQLAlchemy connection string to use to connect to the database. (string
# value)
#connection = <None>
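# For example, assuming an illustrative MySQL host and credentials:
# connection = mysql+pymysql://placement:secret@controller/placement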

# Optional URL parameters to append onto the connection URL at connect time;
# specify as param1=value1&param2=value2&... (string value)
#connection_parameters =

# If True, SQLite uses synchronous mode. (boolean value)
#sqlite_synchronous = true

# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>

# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
# the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL

# Connections which have been present in the connection pool longer than this
# number of seconds will be replaced with a new one the next time they are
# checked out from the pool. (integer value)
#connection_recycle_time = 3600

# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
# indicates no limit. (integer value)
#max_pool_size = <None>

# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer value)
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
#max_overflow = <None>

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
#pool_timeout = <None>


[powervm]
#
# PowerVM options allow cloud administrators to configure how OpenStack will
# work with the PowerVM hypervisor.

#
# From nova.conf
#

#
# Factor used to calculate the amount of physical processor compute power given
# to each vCPU. E.g. A value of 1.0 means a whole physical processor, whereas
# 0.05 means 1/20th of a physical processor.
#  (floating point value)
# Minimum value: 0
# Maximum value: 1
#proc_units_factor = 0.1

#
# The disk driver to use for PowerVM disks. PowerVM provides support for
# localdisk and PowerVM Shared Storage Pool disk drivers.
#
# Related options:
#
# * volume_group_name - required when using localdisk
#
#  (string value)
# Possible values:
# localdisk - <No description provided>
# ssp - <No description provided>
#disk_driver = localdisk

#
# Volume Group to use for block device operations. If disk_driver is localdisk,
# then this attribute must be specified. It is strongly recommended NOT to use
# rootvg, since that is used by the management partition and filling it will
# cause failures.
#  (string value)
#volume_group_name =
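
# A minimal sketch for the localdisk driver, assuming an illustrative volume
# group named "novavg" (do not use rootvg, as noted above):
#
#   disk_driver = localdisk
#   volume_group_name = novavg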


[quota]
#
# Quota options allow you to manage quotas in an OpenStack deployment.

#
# From nova.conf
#

#
# The number of instances allowed per project.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_instances
#instances = 10

#
# The number of instance cores or vCPUs allowed per project.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_cores
#cores = 20

#
# The number of megabytes of instance RAM allowed per project.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_ram
#ram = 51200
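
# For example, to double the default instance, vCPU and RAM quotas for every
# project (illustrative values):
#
#   instances = 20
#   cores = 40
#   ram = 102400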

# DEPRECATED:
# The number of floating IPs allowed per project.
#
# Floating IPs are not allocated to instances by default. Users need to select
# them from the pool configured by the OpenStack administrator to attach to
# their instances.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_floating_ips
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#floating_ips = 10

# DEPRECATED:
# The number of fixed IPs allowed per project.
#
# Unlike floating IPs, fixed IPs are allocated dynamically by the network
# component when instances boot up. This quota value should be at least the
# number of instances allowed.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_fixed_ips
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#fixed_ips = -1

#
# The number of metadata items allowed per instance.
#
# Users can associate metadata with an instance during instance creation. This
# metadata takes the form of key-value pairs.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_metadata_items
#metadata_items = 128

#
# The number of injected files allowed.
#
# File injection allows users to customize the personality of an instance by
# injecting data into it upon boot. Only text file injection is permitted:
# binary or ZIP files are not accepted. During file injection, any existing
# files that match specified files are renamed to include a ``.bak`` extension
# appended with a timestamp.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_injected_files
#injected_files = 5

#
# The number of bytes allowed per injected file.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_injected_file_content_bytes
#injected_file_content_bytes = 10240

#
# The maximum allowed injected file path length.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_injected_file_path_length
#injected_file_path_length = 255

# DEPRECATED:
# The number of security groups per project.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_security_groups
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#security_groups = 10

# DEPRECATED:
# The number of security rules per security group.
#
# The associated rules in each security group control the traffic to instances
# in the group.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_security_group_rules
# This option is deprecated for removal since 15.0.0.
# Its value may be silently ignored in the future.
# Reason:
# nova-network is deprecated, as are any related configuration options.
#security_group_rules = 20

#
# The maximum number of key pairs allowed per user.
#
# Users can create at least one key pair for each project and use the key pair
# for multiple instances that belong to that project.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_key_pairs
#key_pairs = 100

#
# The maximum number of server groups per project.
#
# Server groups are used to control the affinity and anti-affinity scheduling
# policy for a group of servers or instances. Reducing the quota will not affect
# any existing group, but new servers will not be allowed into groups that have
# become over quota.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_server_groups
#server_groups = 10

#
# The maximum number of servers per server group.
#
# Possible values:
#
# * A positive integer or 0.
# * -1 to disable the quota.
#  (integer value)
# Minimum value: -1
# Deprecated group/name - [DEFAULT]/quota_server_group_members
#server_group_members = 10

#
# The number of seconds until a reservation expires.
#
# This quota represents the time period for invalidating quota reservations.
#  (integer value)
#reservation_expire = 86400

#
# The count of reservations until usage is refreshed.
#
# This defaults to 0 (off) to avoid additional load but it is useful to turn on
# to help keep quota usage up-to-date and reduce the impact of out of sync usage
# issues.
#  (integer value)
# Minimum value: 0
#until_refresh = 0

#
# The number of seconds between subsequent usage refreshes.
#
# This defaults to 0 (off) to avoid additional load but it is useful to turn on
# to help keep quota usage up-to-date and reduce the impact of out of sync usage
# issues. Note that quotas are not updated on a periodic task, they will update
# on a new reservation if max_age has passed since the last reservation.
#  (integer value)
# Minimum value: 0
#max_age = 0

#
# Provides abstraction for quota checks. Users can configure a specific
# driver to use for quota checks.
#  (string value)
# Possible values:
# nova.quota.DbQuotaDriver - Stores quota limit information in the database and
# relies on the ``quota_*`` configuration options for default quota limit
# values. Counts quota usage on-demand.
# nova.quota.NoopQuotaDriver - Ignores quota and treats all resources as
# unlimited.
#driver = nova.quota.DbQuotaDriver

#
# Recheck quota after resource creation to prevent allowing quota to be
# exceeded.
#
# This defaults to True (recheck quota after resource creation) but can be set
# to False to avoid additional load if allowing quota to be exceeded because of
# racing requests is considered acceptable. For example, when set to False, if a
# user makes highly parallel REST API requests to create servers, it will be
# possible for them to create more servers than their allowed quota during the
# race. If their quota is 10 servers, they might be able to create 50 during the
# burst. After the burst, they will not be able to create any more servers but
# they will be able to keep their 50 servers until they delete them.
#
# The initial quota check is done before resources are created, so if multiple
# parallel requests arrive at the same time, all could pass the quota check and
# create resources, potentially exceeding quota. When recheck_quota is True,
# quota will be checked a second time after resources have been created and if
# the resource is over quota, it will be deleted and OverQuota will be raised,
# usually resulting in a 403 response to the REST API user. This makes it
# impossible for a user to exceed their quota with the caveat that it will,
# however, be possible for a REST API user to be rejected with a 403 response in
# the event of a collision close to reaching their quota limit, even if the user
# has enough quota available when they made the request.
#  (boolean value)
#recheck_quota = true


[rdp]
#
# Options under this group enable and configure Remote Desktop Protocol (RDP)
# related features.
#
# This group is only relevant to Hyper-V users.

#
# From nova.conf
#

#
# Enable Remote Desktop Protocol (RDP) related features.
#
# Hyper-V, unlike the majority of the hypervisors employed on Nova compute
# nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to
# provide instance console access. This option enables RDP for graphical
# console access for virtual machines created by Hyper-V.
#
# **Note:** RDP should only be enabled on compute nodes that support the Hyper-V
# virtualization platform.
#
# Related options:
#
# * ``compute_driver``: Must be hyperv.
#
#  (boolean value)
#enabled = false

#
# The URL an end user would use to connect to the RDP HTML5 console proxy.
# The console proxy service is called with this token-embedded URL and
# establishes the connection to the proper instance.
#
# An RDP HTML5 console proxy service will need to be configured to listen on the
# address configured here. Typically the console proxy service would be run on a
# controller node. The localhost address used as default would only work in a
# single node environment i.e. devstack.
#
# An RDP HTML5 proxy allows a user to access the text or graphical console of
# any Windows server or workstation via the web using RDP. RDP HTML5 console
# proxy services include FreeRDP, wsgate.
# See https://github.com/FreeRDP/FreeRDP-WebConnect
#
# Possible values:
#
# * <scheme>://<ip-address>:<port-number>/
#
#   The scheme must be identical to the scheme configured for the RDP HTML5
#   console proxy service. It is ``http`` or ``https``.
#
#   The IP address must be identical to the address on which the RDP HTML5
#   console proxy service is listening.
#
#   The port must be identical to the port on which the RDP HTML5 console proxy
#   service is listening.
#
# Related options:
#
# * ``rdp.enabled``: Must be set to ``True`` for ``html5_proxy_base_url`` to be
#   effective.
#  (uri value)
#html5_proxy_base_url = http://127.0.0.1:6083/
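
# A minimal sketch for enabling RDP console access on Hyper-V compute nodes,
# assuming an RDP HTML5 proxy such as FreeRDP-WebConnect listening at an
# illustrative controller address and port:
#
#   enabled = true
#   html5_proxy_base_url = http://controller:8000/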


[remote_debug]

#
# From nova.conf
#

#
# Debug host (IP or name) to connect to. This command line parameter is used
# when you want to connect to a nova service via a debugger running on a
# different host.
#
# Note that using the remote debug option changes how Nova uses the eventlet
# library to support async IO. This could result in failures that do not occur
# under normal operation. Use at your own risk.
#
# Possible Values:
#
#    * IP address of a remote host as a command line parameter
#      to a nova service. For Example:
#
#     /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
#     --remote_debug-host <IP address where the debugger is running>
#  (host address value)
#host = <None>

#
# Debug port to connect to. This command line parameter allows you to specify
# the port you want to use to connect to a nova service via a debugger running
# on a different host.
#
# Note that using the remote debug option changes how Nova uses the eventlet
# library to support async IO. This could result in failures that do not occur
# under normal operation. Use at your own risk.
#
# Possible Values:
#
#    * Port number you want to use as a command line parameter
#      to a nova service. For Example:
#
#     /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
#     --remote_debug-host <IP address where the debugger is running>
#     --remote_debug-port <port the debugger is listening on>
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#port = <None>


[scheduler]

#
# From nova.conf
#

#
# The class of the driver used by the scheduler. This should be chosen from one
# of the entrypoints under the namespace 'nova.scheduler.driver' of file
# 'setup.cfg'. If nothing is specified in this option, the 'filter_scheduler' is
# used.
#
# Other options are:
#
# * 'fake_scheduler' which is used for testing.
#
# Possible values:
#
# * Any of the drivers included in Nova:
#
#   * filter_scheduler
#   * fake_scheduler
#
# * You may also set this to the entry point name of a custom scheduler driver,
#   but you will be responsible for creating and maintaining it in your
#   setup.cfg file.
#
# Related options:
#
# * workers
#  (string value)
# Deprecated group/name - [DEFAULT]/scheduler_driver
#driver = filter_scheduler

#
# Periodic task interval.
#
# This value controls how often (in seconds) to run periodic tasks in the
# scheduler. The specific tasks that are run for each period are determined by
# the particular scheduler being used. Currently there are no in-tree scheduler
# drivers that use this option.
#
# If this is larger than the nova-service 'service_down_time' setting, the
# ComputeFilter (if enabled) may think the compute service is down. As each
# scheduler can work a little differently than the others, be sure to test this
# with your selected scheduler.
#
# Possible values:
#
# * An integer, where the integer corresponds to periodic task interval in
#   seconds. 0 uses the default interval (60 seconds). A negative value disables
#   periodic tasks.
#
# Related options:
#
# * ``nova-service service_down_time``
#  (integer value)
#periodic_task_interval = 60

#
# This is the maximum number of attempts that will be made for a given instance
# build/move operation. It limits the number of alternate hosts returned by the
# scheduler. When that list of hosts is exhausted, a MaxRetriesExceeded
# exception is raised and the instance is set to an error state.
#
# Possible values:
#
# * A positive integer, where the integer corresponds to the max number of
#   attempts that can be made when building or moving an instance.
#  (integer value)
# Minimum value: 1
# Deprecated group/name - [DEFAULT]/scheduler_max_attempts
#max_attempts = 3

#
# Periodic task interval.
#
# This value controls how often (in seconds) the scheduler should attempt
# to discover new hosts that have been added to cells. If negative (the
# default), no automatic discovery will occur.
#
# Deployments where compute nodes come and go frequently may want this
# enabled, where others may prefer to manually discover hosts when one
# is added to avoid any overhead from constantly checking. If enabled,
# every time this runs, we will select any unmapped hosts out of each
# cell database on every run.
#  (integer value)
# Minimum value: -1
#discover_hosts_in_cells_interval = -1
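# For example, to discover newly added compute hosts every five minutes
# instead of running "nova-manage cell_v2 discover_hosts" by hand
# (illustrative interval):
# discover_hosts_in_cells_interval = 300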

#
# This setting determines the maximum limit on results received from the
# placement service during a scheduling operation. It effectively limits
# the number of hosts that may be considered for scheduling requests that
# match a large number of candidates.
#
# A value of 1 (the minimum) will effectively defer scheduling to the placement
# service strictly on "will it fit" grounds. A higher value will put an upper
# cap on the number of results the scheduler will consider during the filtering
# and weighing process. Large deployments may need to set this lower than the
# total number of hosts available to limit memory consumption, network traffic,
# etc. of the scheduler.
#
# This option is only used by the FilterScheduler; if you use a different
# scheduler, this option has no effect.
#  (integer value)
# Minimum value: 1
#max_placement_results = 1000

#
# Number of workers for the nova-scheduler service. The default will be the
# number of CPUs available if using the "filter_scheduler" scheduler driver,
# otherwise the default will be 1.
#  (integer value)
# Minimum value: 0
#workers = <None>

#
# This setting causes the scheduler to look up a host aggregate with the
# metadata key of `filter_tenant_id` set to the project of an incoming
# request, and request results from placement be limited to that aggregate.
# Multiple tenants may be added to a single aggregate by appending a serial
# number to the key, such as `filter_tenant_id:123`.
#
# The matching aggregate UUID must be mirrored in placement for proper
# operation. If no host aggregate with the tenant id is found, or that
# aggregate does not match one in placement, the result will be the same
# as not finding any suitable hosts for the request.
#
# See also the placement_aggregate_required_for_tenants option.
#  (boolean value)
#limit_tenants_to_placement_aggregate = false

#
# This setting, when limit_tenants_to_placement_aggregate=True, will control
# whether or not a tenant with no aggregate affinity will be allowed to schedule
# to any available node. If aggregates are used to limit some tenants but
# not all, then this should be False. If all tenants should be confined via
# aggregate, then this should be True to prevent them from receiving
# unrestricted scheduling to any available node.
#
# See also the limit_tenants_to_placement_aggregate option.
#  (boolean value)
#placement_aggregate_required_for_tenants = false

#
# This setting causes the scheduler to look up a host aggregate with the
# metadata key of `availability_zone` set to the value provided by an
# incoming request, and request results from placement be limited to that
# aggregate.
#
# The matching aggregate UUID must be mirrored in placement for proper
# operation. If no host aggregate with the `availability_zone` key is
# found, or that aggregate does not match one in placement, the result will
# be the same as not finding any suitable hosts.
#
# Note that if you enable this flag, you can disable the (less efficient)
# AvailabilityZoneFilter in the scheduler.
#  (boolean value)
#query_placement_for_availability_zone = false


[serial_console]
#
# The serial console feature allows you to connect to a guest in case a
# graphical console like VNC, RDP or SPICE is not available. This is currently
# only supported for the libvirt, Ironic and Hyper-V drivers.

#
# From nova.conf
#

#
# Enable the serial console feature.
#
# In order to use this feature, the service ``nova-serialproxy`` needs to run.
# This service is typically executed on the controller node.
#  (boolean value)
#enabled = false

#
# A range of TCP ports a guest can use for its backend.
#
# Each instance which gets created will use one port out of this range. If the
# range is not big enough to provide another port for a new instance, this
# instance won't get launched.
#
# Possible values:
#
# * Each string which passes the regex ``\d+:\d+``. For example ``10000:20000``.
#   Be sure that the first port number is lower than the second port number
#   and that both are in range from 0 to 65535.
#  (string value)
#port_range = 10000:20000

#
# The URL an end user would use to connect to the ``nova-serialproxy`` service.
#
# The ``nova-serialproxy`` service is called with this token enriched URL
# and establishes the connection to the proper instance.
#
# Related options:
#
# * The IP address must be identical to the address to which the
#   ``nova-serialproxy`` service is listening (see option ``serialproxy_host``
#   in this section).
# * The port must be the same as in the option ``serialproxy_port`` of this
#   section.
# * If you choose to use a secured websocket connection, then start this option
#   with ``wss://`` instead of the unsecured ``ws://``. The options ``cert``
#   and ``key`` in the ``[DEFAULT]`` section have to be set for that.
#  (uri value)
#base_url = ws://127.0.0.1:6083/

#
# The IP address to which proxy clients (like ``nova-serialproxy``) should
# connect to get the serial console of an instance.
#
# This is typically the IP address of the host of a ``nova-compute`` service.
#  (string value)
#proxyclient_address = 127.0.0.1

#
# The IP address which is used by the ``nova-serialproxy`` service to listen
# for incoming requests.
#
# The ``nova-serialproxy`` service listens on this IP address for incoming
# connection requests to instances which expose serial console.
#
# Related options:
#
# * Ensure that this is the same IP address which is defined in the option
#   ``base_url`` of this section or use ``0.0.0.0`` to listen on all addresses.
#  (string value)
#serialproxy_host = 0.0.0.0

#
# The port number which is used by the ``nova-serialproxy`` service to listen
# for incoming requests.
#
# The ``nova-serialproxy`` service listens on this port number for incoming
# connection requests to instances which expose serial console.
#
# Related options:
#
# * Ensure that this is the same port number which is defined in the option
#   ``base_url`` of this section.
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#serialproxy_port = 6083
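
# A minimal sketch of a working serial console setup, assuming an illustrative
# controller address for the proxy and a compute host address of 10.0.0.11:
#
#   enabled = true
#   base_url = ws://controller:6083/
#   proxyclient_address = 10.0.0.11
#   serialproxy_host = 0.0.0.0
#   serialproxy_port = 6083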


[service_user]
#
# Configuration options for service to service authentication using a service
# token. These options allow sending a service token along with the user's token
# when contacting external REST APIs.

#
# From nova.conf
#

#
# When True, if sending a user token to a REST API, also send a service token.
#
# Nova often reuses the user token provided to the nova-api to talk to other
# REST APIs, such as Cinder, Glance and Neutron. It is possible that while the
# user token was valid when the request was made to Nova, the token may expire
# before it reaches the other service. To avoid any failures, and to make it
# clear it is Nova calling the service on the user's behalf, we include a
# service token along with the user token. Should the user's token have
# expired, a valid service token ensures the REST API request will still be
# accepted by the keystone middleware.
#  (boolean value)
#send_service_user_token = false

# PEM encoded Certificate Authority to use when verifying HTTPS connections.
# (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# Timeout value for http requests (integer value)
#timeout = <None>

# Collect per-API call timing information. (boolean value)
#collect_timing = false

# Log requests to multiple loggers. (boolean value)
#split_loggers = false

# Authentication type to load (string value)
# Deprecated group/name - [service_user]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>

# Authentication URL (string value)
#auth_url = <None>

# Scope for system operations (string value)
#system_scope = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Project ID to scope to (string value)
#project_id = <None>

# Project name to scope to (string value)
#project_name = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# Trust ID (string value)
#trust_id = <None>

# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>

# Optional domain name to use with v3 API and v2 parameters. It will be used for
# both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
#default_domain_name = <None>

# User ID (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [service_user]/user_name
#username = <None>

# User's domain id (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User's password (string value)
#password = <None>

# Tenant ID (string value)
#tenant_id = <None>

# Tenant Name (string value)
#tenant_name = <None>


[spice]
#
# The SPICE console feature allows you to connect to a guest virtual machine.
# SPICE is a replacement for the fairly limited VNC protocol.
#
# The following requirements must be met in order to use SPICE:
#
# * Virtualization driver must be libvirt
# * spice.enabled set to True
# * vnc.enabled set to False
# * update html5proxy_base_url
# * update server_proxyclient_address

#
# From nova.conf
#

#
# Enable SPICE related features.
#
# Related options:
#
# * VNC must be explicitly disabled to get access to the SPICE console. Set the
#   enabled option to False in the [vnc] section to disable the VNC console.
#  (boolean value)
#enabled = false

#
# Enable the SPICE guest agent support on the instances.
#
# The Spice agent works with the Spice protocol to offer a better guest console
# experience. However, the Spice console can still be used without the Spice
# agent. With the Spice agent installed the following features are enabled:
#
# * Copy & Paste of text and images between the guest and client machine
# * Automatic adjustment of resolution when the client screen changes - e.g.
#   if you make the Spice console full screen, the guest resolution will
#   adjust to match it rather than letterboxing.
# * Better mouse integration - The mouse can be captured and released without
#   needing to click inside the console or press keys to release it. The
#   performance of mouse movement is also improved.
#  (boolean value)
#agent_enabled = true

#
# Location of the SPICE HTML5 console proxy.
#
# End users would use this URL to connect to the ``nova-spicehtml5proxy``
# service. This service will forward requests to the console of an instance.
#
# In order to use SPICE console, the service ``nova-spicehtml5proxy`` should be
# running. This service is typically launched on the controller node.
#
# Possible values:
#
# * Must be a valid URL of the form:  ``http://host:port/spice_auto.html``
#   where host is the node running ``nova-spicehtml5proxy`` and the port is
#   typically 6082. Consider not using default value as it is not well defined
#   for any real deployment.
#
# Related options:
#
# * This option depends on ``html5proxy_host`` and ``html5proxy_port`` options.
#   The access URL returned by the compute node must have the host
#   and port where the ``nova-spicehtml5proxy`` service is listening.
#  (uri value)
#html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html

#
# The address where the SPICE server running on the instances should listen.
#
# Typically, the ``nova-spicehtml5proxy`` proxy client runs on the controller
# node and connects over the private network to this address on the compute
# node(s).
#
# Possible values:
#
# * IP address to listen on.
#  (string value)
#server_listen = 127.0.0.1

#
# The address used by ``nova-spicehtml5proxy`` client to connect to instance
# console.
#
# Typically, the ``nova-spicehtml5proxy`` proxy client runs on the
# controller node and connects over the private network to this address on the
# compute node(s).
#
# Possible values:
#
# * Any valid IP address on the compute node.
#
# Related options:
#
# * This option depends on the ``server_listen`` option.
#   The proxy client must be able to access the address specified in
#   ``server_listen`` using the value of this option.
#  (string value)
#server_proxyclient_address = 127.0.0.1

# DEPRECATED:
# A keyboard layout which is supported by the underlying hypervisor on this
# node.
#
# Possible values:
#
# * This is usually an 'IETF language tag' (default is 'en-us'). If you
#   use QEMU as hypervisor, you should find the list of supported keyboard
#   layouts at /usr/share/qemu/keymaps.
#  (string value)
# This option is deprecated for removal since 18.0.0.
# Its value may be silently ignored in the future.
# Reason:
# Configuring this option forces QEMU to do keymap conversions. These
# conversions are lossy and can result in significant issues for users of
# non en-US keyboards. Refer to bug #1682020 for more information.
#keymap = <None>

#
# IP address or a hostname on which the ``nova-spicehtml5proxy`` service
# listens for incoming requests.
#
# Related options:
#
# * This option depends on the ``html5proxy_base_url`` option.
#   The ``nova-spicehtml5proxy`` service must be listening on a host that is
#   accessible from the HTML5 client.
#  (host address value)
#html5proxy_host = 0.0.0.0

#
# Port on which the ``nova-spicehtml5proxy`` service listens for incoming
# requests.
#
# Related options:
#
# * This option depends on the ``html5proxy_base_url`` option.
#   The ``nova-spicehtml5proxy`` service must be listening on a port that is
#   accessible from the HTML5 client.
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#html5proxy_port = 6082
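
# A minimal sketch for enabling SPICE (libvirt only), assuming an illustrative
# controller address for the proxy and a compute host address of 10.0.0.11;
# remember to set enabled = false in the [vnc] section:
#
#   enabled = true
#   agent_enabled = true
#   html5proxy_base_url = http://controller:6082/spice_auto.html
#   server_listen = 0.0.0.0
#   server_proxyclient_address = 10.0.0.11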


[upgrade_levels]
#
# upgrade_levels options are used to set version cap for RPC
# messages sent between different nova services.
#
# By default all services send messages using the latest version
# they know about.
#
# The compute upgrade level is an important part of rolling upgrades
# where old and new nova-compute services run side by side.
#
# The other options can largely be ignored, and are only kept to
# help with a possible future backport issue.

#
# From nova.conf
#

#
# Compute RPC API version cap.
#
# By default, we always send messages using the most recent version
# the client knows about.
#
# Where you have old and new compute services running, you should set
# this to the lowest deployed version. This is to guarantee that all
# services never send messages that one of the compute nodes can't
# understand. Note that we only support upgrading from release N to
# release N+1.
#
# Set this option to "auto" if you want to let the compute RPC module
# automatically determine what version to use based on the service
# versions in the deployment.
#
# Possible values:
#
# * By default send the latest version the client knows about
# * 'auto': Automatically determines what version to use based on
#   the service versions in the deployment.
# * A string representing a version number in the format 'N.N';
#   for example, possible values might be '1.12' or '2.0'.
# * An OpenStack release name, in lower case, such as 'mitaka' or
#   'liberty'.
#  (string value)
#compute = <None>
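# For example, during a rolling upgrade from Queens (17.x) to Rocky (18.x) you
# might pin compute RPC to the older release until every node is upgraded:
# compute = queens
# or let Nova negotiate the version automatically:
# compute = auto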

#
# Cells RPC API version cap.
#
# Possible values:
#
# * By default send the latest version the client knows about
# * A string representing a version number in the format 'N.N';
#   for example, possible values might be '1.12' or '2.0'.
# * An OpenStack release name, in lower case, such as 'mitaka' or
#   'liberty'.
#  (string value)
#cells = <None>

#
# Intercell RPC API version cap.
#
# Possible values:
#
# * By default send the latest version the client knows about
# * A string representing a version number in the format 'N.N';
#   for example, possible values might be '1.12' or '2.0'.
# * An OpenStack release name, in lower case, such as 'mitaka' or
#   'liberty'.
#  (string value)
#intercell = <None>

# DEPRECATED:
# Cert RPC API version cap.
#
# Possible values:
#
# * By default send the latest version the client knows about
# * A string representing a version number in the format 'N.N';
#   for example, possible values might be '1.12' or '2.0'.
# * An OpenStack release name, in lower case, such as 'mitaka' or
#   'liberty'.
#  (string value)
# This option is deprecated for removal since 18.0.0.
# Its value may be silently ignored in the future.
# Reason:
# The nova-cert service was removed in 16.0.0 (Pike) so this option
# is no longer used.
#cert = <None>

#
# Scheduler RPC API version cap.
#
# Possible values:
#
# * By default send the latest version the client knows about
# * A string representing a version number in the format 'N.N';
#   for example, possible values might be '1.12' or '2.0'.
# * An OpenStack release name, in lower case, such as 'mitaka' or
#   'liberty'.
#  (string value)
#scheduler = <None>

#
# Conductor RPC API version cap.
#
# Possible values:
#
# * By default send the latest version the client knows about
# * A string representing a version number in the format 'N.N';
#   for example, possible values might be '1.12' or '2.0'.
# * An OpenStack release name, in lower case, such as 'mitaka' or
#   'liberty'.
#  (string value)
#conductor = <None>

#
# Console RPC API version cap.
#
# Possible values:
#
# * By default send the latest version the client knows about
# * A string representing a version number in the format 'N.N';
#   for example, possible values might be '1.12' or '2.0'.
# * An OpenStack release name, in lower case, such as 'mitaka' or
#   'liberty'.
#  (string value)
#console = <None>

# DEPRECATED:
# Consoleauth RPC API version cap.
#
# Possible values:
#
# * By default send the latest version the client knows about
# * A string representing a version number in the format 'N.N';
#   for example, possible values might be '1.12' or '2.0'.
# * An OpenStack release name, in lower case, such as 'mitaka' or
#   'liberty'.
#  (string value)
# This option is deprecated for removal since 18.0.0.
# Its value may be silently ignored in the future.
# Reason:
# The nova-consoleauth service was deprecated in 18.0.0 (Rocky) and will be
# removed in an upcoming release.
#consoleauth = <None>

# DEPRECATED:
# Network RPC API version cap.
#
# Possible values:
#
# * By default send the latest version the client knows about
# * A string representing a version number in the format 'N.N';
#   for example, possible values might be '1.12' or '2.0'.
# * An OpenStack release name, in lower case, such as 'mitaka' or
#   'liberty'.
#  (string value)
# This option is deprecated for removal since 18.0.0.
# Its value may be silently ignored in the future.
# Reason:
# The nova-network service was deprecated in 14.0.0 (Newton) and will be
# removed in an upcoming release.
#network = <None>

#
# Base API RPC API version cap.
#
# Possible values:
#
# * By default send the latest version the client knows about
# * A string representing a version number in the format 'N.N';
#   for example, possible values might be '1.12' or '2.0'.
# * An OpenStack release name, in lower case, such as 'mitaka' or
#   'liberty'.
#  (string value)
#baseapi = <None>


[vault]

#
# From nova.conf
#

# root token for vault (string value)
#root_token_id = <None>

# Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200"
# (string value)
#vault_url = http://127.0.0.1:8200

# Absolute path to ca cert file (string value)
#ssl_ca_crt_file = <None>

# SSL Enabled/Disabled (boolean value)
#use_ssl = false


[vendordata_dynamic_auth]
#
# Options within this group control the authentication of the vendordata
# subsystem of the metadata API server (and config drive) with external systems.

#
# From nova.conf
#

# PEM encoded Certificate Authority to use when verifying HTTPS connections.
# (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# Timeout value for http requests (integer value)
#timeout = <None>

# Collect per-API call timing information. (boolean value)
#collect_timing = false

# Log requests to multiple loggers. (boolean value)
#split_loggers = false

# Authentication type to load (string value)
# Deprecated group/name - [vendordata_dynamic_auth]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>

# Authentication URL (string value)
#auth_url = <None>

# Scope for system operations (string value)
#system_scope = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Project ID to scope to (string value)
#project_id = <None>

# Project name to scope to (string value)
#project_name = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# Trust ID (string value)
#trust_id = <None>

# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>

# Optional domain name to use with v3 API and v2 parameters. It will be used for
# both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
#default_domain_name = <None>

# User ID (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [vendordata_dynamic_auth]/user_name
#username = <None>

# User's domain id (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User's password (string value)
#password = <None>

# Tenant ID (string value)
#tenant_id = <None>

# Tenant Name (string value)
#tenant_name = <None>


[vmware]
#
# Related options:
# The following options must be set in order to launch VMware-based
# virtual machines.
#
# * compute_driver: Must use vmwareapi.VMwareVCDriver.
# * vmware.host_username
# * vmware.host_password
# * vmware.cluster_name

#
# From nova.conf
#

#
# This option specifies the physical ethernet adapter name for VLAN
# networking.
#
# Set the vlan_interface configuration option to match the ESX host
# interface that handles VLAN-tagged VM traffic.
#
# Possible values:
#
# * Any valid string representing VLAN interface name
#  (string value)
#vlan_interface = vmnic0

#
# This option should be configured only when using the NSX-MH Neutron
# plugin. This is the name of the integration bridge on the ESXi server
# or host. This should not be set for any other Neutron plugin. Hence
# the default value is not set.
#
# Possible values:
#
# * Any valid string representing the name of the integration bridge
#  (string value)
#integration_bridge = <None>

#
# Set this value if affected by an increased network latency causing
# repeated characters when typing in a remote console.
#  (integer value)
# Minimum value: 0
#console_delay_seconds = <None>

#
# Identifies the remote system where the serial port traffic will
# be sent.
#
# This option adds a virtual serial port which sends console output to
# a configurable service URI. At the service URI address there will be a
# virtual serial port concentrator that will collect console logs.
# If this is not set, no serial ports will be added to the created VMs.
#
# Possible values:
#
# * Any valid URI
#  (string value)
#serial_port_service_uri = <None>

#
# Identifies a proxy service that provides network access to the
# serial_port_service_uri.
#
# Possible values:
#
# * Any valid URI (The scheme is 'telnet' or 'telnets'.)
#
# Related options:
# * serial_port_service_uri
#   This option is ignored if serial_port_service_uri is not specified.
#  (uri value)
#serial_port_proxy_uri = <None>

#
# Specifies the directory where the Virtual Serial Port Concentrator is
# storing console log files. It should match the 'serial_log_dir' config
# value of VSPC.
#  (string value)
#serial_log_dir = /opt/vmware/vspc

#
# Hostname or IP address for connection to VMware vCenter host. (host address
# value)
#host_ip = <None>

# Port for connection to VMware vCenter host. (port value)
# Minimum value: 0
# Maximum value: 65535
#host_port = 443

# Username for connection to VMware vCenter host. (string value)
#host_username = <None>

# Password for connection to VMware vCenter host. (string value)
#host_password = <None>

#
# Specifies the CA bundle file to be used in verifying the vCenter
# server certificate.
#  (string value)
#ca_file = <None>

#
# If true, the vCenter server certificate is not verified. If false,
# then the default CA truststore is used for verification.
#
# Related options:
# * ca_file: This option is ignored if "ca_file" is set.
#  (boolean value)
#insecure = false

# Name of a VMware Cluster ComputeResource. (string value)
#cluster_name = <None>

#
# Regular expression pattern to match the name of datastore.
#
# The datastore_regex setting specifies the datastores to use with
# Compute. For example, datastore_regex="nas.*" selects all the data
# stores that have a name starting with "nas".
#
# NOTE: If no regex is given, the datastore with the most free space is
# picked.
#
# Possible values:
#
# * Any matching regular expression to a datastore must be given
#  (string value)
#datastore_regex = <None>

#
# Time interval in seconds to poll remote tasks invoked on
# VMware VC server.
#  (floating point value)
#task_poll_interval = 0.5

#
# Number of times VMware vCenter server API must be retried on connection
# failures, e.g. socket error, etc.
#  (integer value)
# Minimum value: 0
#api_retry_count = 10

#
# This option specifies the VNC starting port.
#
# Every VM created by an ESX host has an option of enabling a VNC client
# for remote connection. The 'vnc_port' option sets the default starting
# port for the VNC client.
#
# Possible values:
#
# * Any valid port number within 5900 -(5900 + vnc_port_total)
#
# Related options:
# Below options should be set to enable VNC client.
# * vnc.enabled = True
# * vnc_port_total
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#vnc_port = 5900

#
# Total number of VNC ports.
#  (integer value)
# Minimum value: 0
#vnc_port_total = 10000
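
# Worked example (using the defaults above): with vnc_port = 5900 and
# vnc_port_total = 10000, VNC consoles are assigned ports from the range
# 5900 - 15900.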

#
# Keymap for VNC.
#
# The keyboard mapping (keymap) determines which keyboard layout a VNC
# session should use by default.
#
# Possible values:
#
# * A keyboard layout which is supported by the underlying hypervisor on
#   this node. This is usually an 'IETF language tag' (for example
#   'en-us').
#  (string value)
#vnc_keymap = en-us

#
# This option enables/disables the use of linked clone.
#
# The ESX hypervisor requires a copy of the VMDK file in order to boot
# up a virtual machine. The compute driver must download the VMDK via
# HTTP from the OpenStack Image service to a datastore that is visible
# to the hypervisor and cache it. Subsequent virtual machines that need
# the VMDK use the cached version and don't have to copy the file again
# from the OpenStack Image service.
#
# If set to false, even with a cached VMDK, there is still a copy
# operation from the cache location to the hypervisor file directory
# in the shared datastore. If set to true, the above copy operation
# is avoided, as it creates a copy of the virtual machine that shares
# virtual disks with its parent VM.
#  (boolean value)
#use_linked_clone = true

#
# This option sets the HTTP connection pool size.
#
# The connection pool size is the maximum number of connections from nova to
# vSphere.  It should only be increased if there are warnings indicating that
# the connection pool is full, otherwise, the default should suffice.
#  (integer value)
# Minimum value: 10
#connection_pool_size = 10

#
# This option enables or disables storage policy based placement
# of instances.
#
# Related options:
#
# * pbm_default_policy
#  (boolean value)
#pbm_enabled = false

#
# This option specifies the PBM service WSDL file location URL.
#
# Setting this will disable storage policy based placement
# of instances.
#
# Possible values:
#
# * Any valid file path
#   e.g. file:///opt/SDK/spbm/wsdl/pbmService.wsdl
#  (string value)
#pbm_wsdl_location = <None>

#
# This option specifies the default policy to be used.
#
# If pbm_enabled is set and there is no defined storage policy for the
# specific request, then this policy will be used.
#
# Possible values:
#
# * Any valid storage policy such as VSAN default storage policy
#
# Related options:
#
# * pbm_enabled
#  (string value)
#pbm_default_policy = <None>

#
# This option specifies the limit on the maximum number of objects to
# return in a single result.
#
# A positive value will cause the operation to suspend the retrieval
# when the count of objects reaches the specified limit. The server may
# still limit the count to something less than the configured value.
# Any remaining objects may be retrieved with additional requests.
#  (integer value)
# Minimum value: 0
#maximum_objects = 100

#
# This option adds a prefix to the folder where cached images are stored.
#
# This is not the full path - just a folder prefix. This should only be
# used when a datastore cache is shared between compute nodes.
#
# Note: This should only be used when the compute nodes are running on the
# same host or they have a shared file system.
#
# Possible values:
#
# * Any string representing the cache prefix to the folder
#  (string value)
#cache_prefix = <None>


[vnc]
#
# Virtual Network Computer (VNC) can be used to provide remote desktop
# console access to instances for tenants and/or administrators.

#
# From nova.conf
#

#
# Enable VNC related features.
#
# Guests will get created with graphical devices to support this. Clients
# (for example Horizon) can then establish a VNC connection to the guest.
#  (boolean value)
# Deprecated group/name - [DEFAULT]/vnc_enabled
#enabled = true

# DEPRECATED:
# Keymap for VNC.
#
# The keyboard mapping (keymap) determines which keyboard layout a VNC
# session should use by default.
#
# Possible values:
#
# * A keyboard layout which is supported by the underlying hypervisor on
#   this node. This is usually an 'IETF language tag' (for example
#   'en-us').  If you use QEMU as hypervisor, you should find the  list
#   of supported keyboard layouts at ``/usr/share/qemu/keymaps``.
#  (string value)
# Deprecated group/name - [DEFAULT]/vnc_keymap
# This option is deprecated for removal since 18.0.0.
# Its value may be silently ignored in the future.
# Reason:
# Configuring this option forces QEMU to do keymap conversions. These
# conversions are lossy and can result in significant issues for users of
# non en-US keyboards. You should instead use a VNC client that supports
# Extended Key Event messages, such as noVNC 1.0.0. Refer to bug #1682020
# for more information.
#keymap = <None>

#
# The IP address or hostname on which an instance should listen for
# incoming VNC connection requests on this node.
#  (host address value)
# Deprecated group/name - [DEFAULT]/vncserver_listen
# Deprecated group/name - [vnc]/vncserver_listen
#server_listen = 127.0.0.1

#
# Private, internal IP address or hostname of VNC console proxy.
#
# The VNC proxy is an OpenStack component that enables compute service
# users to access their instances through VNC clients.
#
# This option sets the private address to which proxy clients, such as
# ``nova-xvpvncproxy``, should connect.
#  (host address value)
# Deprecated group/name - [DEFAULT]/vncserver_proxyclient_address
# Deprecated group/name - [vnc]/vncserver_proxyclient_address
#server_proxyclient_address = 127.0.0.1

#
# Public address of noVNC VNC console proxy.
#
# The VNC proxy is an OpenStack component that enables compute service
# users to access their instances through VNC clients. noVNC provides
# VNC support through a websocket-based client.
#
# This option sets the public base URL to which client systems will
# connect. noVNC clients can use this address to connect to the noVNC
# instance and, by extension, the VNC sessions.
#
# If using noVNC >= 1.0.0, you should use ``vnc_lite.html`` instead of
# ``vnc_auto.html``.
#
# Related options:
#
# * novncproxy_host
# * novncproxy_port
#  (uri value)
#novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html

#
# IP address or hostname that the XVP VNC console proxy should bind to.
#
# The VNC proxy is an OpenStack component that enables compute service
# users to access their instances through VNC clients. Xen provides
# the Xenserver VNC Proxy, or XVP, as an alternative to the
# websocket-based noVNC proxy used by Libvirt. In contrast to noVNC,
# XVP clients are Java-based.
#
# This option sets the private address to which the XVP VNC console proxy
# service should bind.
#
# Related options:
#
# * xvpvncproxy_port
# * xvpvncproxy_base_url
#  (host address value)
#xvpvncproxy_host = 0.0.0.0

#
# Port that the XVP VNC console proxy should bind to.
#
# The VNC proxy is an OpenStack component that enables compute service
# users to access their instances through VNC clients. Xen provides
# the Xenserver VNC Proxy, or XVP, as an alternative to the
# websocket-based noVNC proxy used by Libvirt. In contrast to noVNC,
# XVP clients are Java-based.
#
# This option sets the private port to which the XVP VNC console proxy
# service should bind.
#
# Related options:
#
# * xvpvncproxy_host
# * xvpvncproxy_base_url
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#xvpvncproxy_port = 6081

#
# Public URL address of XVP VNC console proxy.
#
# The VNC proxy is an OpenStack component that enables compute service
# users to access their instances through VNC clients. Xen provides
# the Xenserver VNC Proxy, or XVP, as an alternative to the
# websocket-based noVNC proxy used by Libvirt. In contrast to noVNC,
# XVP clients are Java-based.
#
# This option sets the public base URL to which client systems will
# connect. XVP clients can use this address to connect to the XVP
# instance and, by extension, the VNC sessions.
#
# Related options:
#
# * xvpvncproxy_host
# * xvpvncproxy_port
#  (uri value)
#xvpvncproxy_base_url = http://127.0.0.1:6081/console

#
# IP address that the noVNC console proxy should bind to.
#
# The VNC proxy is an OpenStack component that enables compute service
# users to access their instances through VNC clients. noVNC provides
# VNC support through a websocket-based client.
#
# This option sets the private address to which the noVNC console proxy
# service should bind.
#
# Related options:
#
# * novncproxy_port
# * novncproxy_base_url
#  (string value)
#novncproxy_host = 0.0.0.0

#
# Port that the noVNC console proxy should bind to.
#
# The VNC proxy is an OpenStack component that enables compute service
# users to access their instances through VNC clients. noVNC provides
# VNC support through a websocket-based client.
#
# This option sets the private port to which the noVNC console proxy
# service should bind.
#
# Related options:
#
# * novncproxy_host
# * novncproxy_base_url
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#novncproxy_port = 6080
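
#
# Example (illustrative addresses): a common single-proxy setup, where
# instances listen on all interfaces and clients reach the noVNC proxy
# through a public address:
#
# enabled = true
# server_listen = 0.0.0.0
# server_proxyclient_address = 10.0.0.5
# novncproxy_base_url = http://203.0.113.10:6080/vnc_auto.html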

#
# The authentication schemes to use with the compute node.
#
# Control what RFB authentication schemes are permitted for connections between
# the proxy and the compute host. If multiple schemes are enabled, the first
# matching scheme will be used, thus the strongest schemes should be listed
# first.
#
# Related options:
#
# * ``[vnc]vencrypt_client_key``, ``[vnc]vencrypt_client_cert``: must also
#   be set
#  (list value)
#auth_schemes = none

# The path to the client key PEM file (for x509)
#
# The fully qualified path to a PEM file containing the private key which the
# VNC proxy server presents to the compute node during VNC authentication.
#
# Related options:
#
# * ``vnc.auth_schemes``: must include ``vencrypt``
# * ``vnc.vencrypt_client_cert``: must also be set
#  (string value)
#vencrypt_client_key = <None>

# The path to the client certificate PEM file (for x509)
#
# The fully qualified path to a PEM file containing the x509 certificate which
# the VNC proxy server presents to the compute node during VNC authentication.
#
# Related options:
#
# * ``vnc.auth_schemes``: must include ``vencrypt``
# * ``vnc.vencrypt_client_key``: must also be set
#  (string value)
#vencrypt_client_cert = <None>

# The path to the CA certificate PEM file
#
# The fully qualified path to a PEM file containing one or more x509
# certificates for the certificate authorities used by the compute node
# VNC server.
#
# Related options:
#
# * ``vnc.auth_schemes``: must include ``vencrypt``
#  (string value)
#vencrypt_ca_certs = <None>
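
#
# Example (illustrative paths): restricting proxy-to-compute connections
# to VeNCrypt, listing the strongest scheme first:
#
# auth_schemes = vencrypt,none
# vencrypt_client_key = /etc/pki/nova-vnc/client-key.pem
# vencrypt_client_cert = /etc/pki/nova-vnc/client-cert.pem
# vencrypt_ca_certs = /etc/pki/nova-vnc/ca-cert.pem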


[workarounds]
#
# A collection of workarounds used to mitigate bugs or issues found in system
# tools (e.g. Libvirt or QEMU) or Nova itself under certain conditions. These
# should only be enabled in exceptional circumstances. All options are linked
# against bug IDs, where more information on the issue can be found.

#
# From nova.conf
#

#
# Use sudo instead of rootwrap.
#
# Allow fallback to sudo for performance reasons.
#
# For more information, refer to the bug report:
#
#   https://bugs.launchpad.net/nova/+bug/1415106
#
# Possible values:
#
# * True: Use sudo instead of rootwrap
# * False: Use rootwrap as usual
#
# Interdependencies to other options:
#
# * Any options that affect 'rootwrap' will be ignored.
#  (boolean value)
#disable_rootwrap = false

#
# Disable live snapshots when using the libvirt driver.
#
# Live snapshots allow the snapshot of the disk to happen without an
# interruption to the guest, using coordination with a guest agent to
# quiesce the filesystem.
#
# When using libvirt 1.2.2 live snapshots fail intermittently under load
# (likely related to concurrent libvirt/qemu operations). This config
# option provides a mechanism to disable live snapshot, in favor of cold
# snapshot, while this is resolved. Cold snapshot causes an instance
# outage while the guest is going through the snapshotting process.
#
# For more information, refer to the bug report:
#
#   https://bugs.launchpad.net/nova/+bug/1334398
#
# Possible values:
#
# * True: Live snapshot is disabled when using libvirt
# * False: Live snapshots are always used when snapshotting (as long as
#   there is a new enough libvirt and the backend storage supports it)
#  (boolean value)
#disable_libvirt_livesnapshot = false

#
# Enable handling of events emitted from compute drivers.
#
# Many compute drivers emit lifecycle events, which are events that occur when,
# for example, an instance is starting or stopping. If the instance is going
# through task state changes due to an API operation, like resize, the events
# are ignored.
#
# This is an advanced feature which allows the hypervisor to signal to the
# compute service that an unexpected state change has occurred in an instance
# and that the instance can be shut down automatically. Unfortunately, this can
# race in some conditions, for example in reboot operations or when the compute
# service or the host is rebooted (planned or due to an outage). If such races
# are common, then it is advisable to disable this feature.
#
# Care should be taken when this feature is disabled and
# 'sync_power_state_interval' is set to a negative value. In this case, any
# instances that get out of sync between the hypervisor and the Nova database
# will have to be synchronized manually.
#
# For more information, refer to the bug report:
#
#   https://bugs.launchpad.net/bugs/1444630
#
# Interdependencies to other options:
#
# * If ``sync_power_state_interval`` is negative and this feature is disabled,
#   then instances that get out of sync between the hypervisor and the Nova
#   database will have to be synchronized manually.
#  (boolean value)
#handle_virt_lifecycle_events = true

#
# Disable the server group policy check upcall in compute.
#
# In order to detect races with server group affinity policy, the compute
# service attempts to validate that the policy was not violated by the
# scheduler. It does this by making an upcall to the API database to list
# the instances in the server group for one that it is booting, which violates
# our api/cell isolation goals. Eventually this will be solved by proper
# affinity guarantees in the scheduler and placement service, but until then,
# this late check is needed to ensure proper affinity policy.
#
# Operators that desire api/cell isolation over this check should
# enable this flag, which will avoid making that upcall from compute.
#
# Related options:
#
# * [filter_scheduler]/track_instance_changes also relies on upcalls from the
#   compute service to the scheduler service.
#  (boolean value)
#disable_group_policy_check_upcall = false

# DEPRECATED:
# Enable the consoleauth service to avoid resetting unexpired consoles.
#
# Console token authorizations have moved from the ``nova-consoleauth``
# service to the database, so all new consoles will be supported by the
# database backend. With this, consoles that existed before database backend
# support will be reset. For most operators, this should be a minimal
# disruption, as the default TTL of a console token is 10 minutes.
#
# Operators that have a much longer token TTL configured or otherwise wish to
# avoid immediately resetting all existing consoles can enable this flag to
# continue using the ``nova-consoleauth`` service in addition to the database
# backend. Once all of the old ``nova-consoleauth`` supported console tokens
# have expired, this flag should be disabled. For example, if a deployment
# has configured a token TTL of one hour, the operator may disable the flag
# one hour after deploying the new code during an upgrade.
#
# .. note:: Cells v1 was not converted to use the database backend for
#   console token authorizations. Cells v1 console token authorizations will
#   continue to be supported by the ``nova-consoleauth`` service and use of
#   the ``[workarounds]/enable_consoleauth`` option does not apply to
#   Cells v1 users.
#
# Related options:
#
# * ``[consoleauth]/token_ttl``
#  (boolean value)
# This option is deprecated for removal since 18.0.0.
# Its value may be silently ignored in the future.
# Reason:
# This option was added as deprecated originally because it is used
# for avoiding an upgrade issue and it will not be used in the future.
# See the help text for more details.
#enable_consoleauth = false


[wsgi]
#
# Options under this group are used to configure WSGI (Web Server Gateway
# Interface). WSGI is used to serve API requests.

#
# From nova.conf
#

#
# This option represents a file name for the paste.deploy config for nova-api.
#
# Possible values:
#
# * A string representing file name for the paste.deploy config.
#  (string value)
#api_paste_config = api-paste.ini

# DEPRECATED:
# It represents a python format string that is used as the template to generate
# log lines. The following values can be formatted into it: client_ip,
# date_time, request_line, status_code, body_length, wall_seconds.
#
# This option is used for building custom request loglines when running
# nova-api under eventlet. If used under uwsgi or apache, this option
# has no effect.
#
# Possible values:
#
# * '%(client_ip)s "%(request_line)s" status: %(status_code)s'
#   'len: %(body_length)s time: %(wall_seconds).7f' (default)
# * Any formatted string formed by specific values.
#  (string value)
# This option is deprecated for removal since 16.0.0.
# Its value may be silently ignored in the future.
# Reason:
# This option only works when running nova-api under eventlet, and
# encodes very eventlet specific pieces of information. Starting in Pike
# the preferred model for running nova-api is under uwsgi or apache
# mod_wsgi.
#wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f

#
# This option specifies the HTTP header used to determine the protocol scheme
# for the original request, even if it was removed by an SSL terminating proxy.
#
# Possible values:
#
# * None (default) - the request scheme is not influenced by any HTTP headers
# * Valid HTTP header, like ``HTTP_X_FORWARDED_PROTO``
#
# WARNING: Do not set this unless you know what you are doing.
#
# Make sure ALL of the following are true before setting this (assuming the
# values from the example above):
#
# * Your API is behind a proxy.
# * Your proxy strips the X-Forwarded-Proto header from all incoming requests.
#   In other words, if end users include that header in their requests, the
#   proxy will discard it.
# * Your proxy sets the X-Forwarded-Proto header and sends it to API, but only
#   for requests that originally come in via HTTPS.
#
# If any of those are not true, you should keep this setting set to None.
#  (string value)
#secure_proxy_ssl_header = <None>

#
# This option allows setting the path to the CA certificate file that should
# be used to verify connecting clients.
#
# Possible values:
#
# * String representing path to the CA certificate file.
#
# Related options:
#
# * enabled_ssl_apis
#  (string value)
#ssl_ca_file = <None>

#
# This option allows setting the path to the SSL certificate of the API server.
#
# Possible values:
#
# * String representing path to the SSL certificate.
#
# Related options:
#
# * enabled_ssl_apis
#  (string value)
#ssl_cert_file = <None>

#
# This option specifies the path to the file where SSL private key of API
# server is stored when SSL is in effect.
#
# Possible values:
#
# * String representing path to the SSL private key.
#
# Related options:
#
# * enabled_ssl_apis
#  (string value)
#ssl_key_file = <None>

#
# This option sets the value of TCP_KEEPIDLE in seconds for each server socket.
# It specifies the duration of time to keep the connection active. TCP generates a
# KEEPALIVE transmission for an application that requests to keep connection
# active. Not supported on OS X.
#
# Related options:
#
# * keep_alive
#  (integer value)
# Minimum value: 0
#tcp_keepidle = 600

#
# This option specifies the size of the pool of greenthreads used by wsgi.
# It is possible to limit the number of concurrent connections using this
# option.
#  (integer value)
# Minimum value: 0
# Deprecated group/name - [DEFAULT]/wsgi_default_pool_size
#default_pool_size = 1000

#
# This option specifies the maximum line size of message headers to be accepted.
# max_header_line may need to be increased when using large tokens (typically
# those generated by the Keystone v3 API with big service catalogs).
#
# Since TCP is a stream based protocol, in order to reuse a connection, HTTP
# has to have a way to indicate the end of the previous response and beginning
# of the next. Hence, in a keep_alive case, all messages must have a
# self-defined message length.
#  (integer value)
# Minimum value: 0
#max_header_line = 16384

#
# This option allows using the same TCP connection to send and receive multiple
# HTTP requests/responses, as opposed to opening a new one for every single
# request/response pair. HTTP keep-alive indicates HTTP connection reuse.
#
# Possible values:
#
# * True : reuse HTTP connection.
# * False : closes the client socket connection explicitly.
#
# Related options:
#
# * tcp_keepidle
#  (boolean value)
# Deprecated group/name - [DEFAULT]/wsgi_keep_alive
#keep_alive = true

#
# This option specifies the timeout for client connections' socket operations.
# If an incoming connection is idle for this number of seconds it will be
# closed. It indicates timeout on individual read/writes on the socket
# connection. To wait forever set to 0.
#  (integer value)
# Minimum value: 0
#client_socket_timeout = 900


[xenserver]
#
# XenServer options are used when the compute_driver is set to use
# XenServer (compute_driver=xenapi.XenAPIDriver).
#
# Must specify connection_url, connection_password and ovs_integration_bridge to
# use compute_driver=xenapi.XenAPIDriver.

#
# From nova.conf
#

#
# Number of seconds to wait for agent's reply to a request.
#
# Nova configures/performs certain administrative actions on a server with the
# help of an agent that's installed on the server. The communication between
# Nova and the agent is achieved via sharing messages, called records, over
# xenstore, a shared storage across all the domains on a XenServer host.
# Operations performed by the agent on behalf of nova are: 'version',
# 'key_init', 'password', 'resetnetwork', 'inject_file', and 'agentupdate'.
#
# To perform one of the above operations, the xapi 'agent' plugin writes the
# command and its associated parameters to a certain location known to the
# domain and awaits a response. On being notified of the message, the agent
# performs the appropriate actions on the server and writes the result back to
# xenstore. This result is then read by the xapi 'agent' plugin to determine
# the success/failure of the operation.
#
# This config option determines how long the xapi 'agent' plugin shall wait to
# read the response off of xenstore for a given request/command. If the agent on
# the instance fails to write the result in this time period, the operation is
# considered to have timed out.
#
# Related options:
#
# * ``agent_version_timeout``
# * ``agent_resetnetwork_timeout``
#
#  (integer value)
# Minimum value: 0
#agent_timeout = 30

#
# Number of seconds to wait for agent's reply to version request.
#
# This indicates the amount of time xapi 'agent' plugin waits for the agent to
# respond to the 'version' request specifically. The generic timeout for agent
# communication ``agent_timeout`` is ignored in this case.
#
# During the build process the 'version' request is used to determine if the
# agent is available/operational to perform other requests such as
# 'resetnetwork', 'password', 'key_init' and 'inject_file'. If the 'version'
# call fails, the other configuration is skipped. So, this configuration
# option can also be interpreted as the time in which the agent is expected
# to be fully operational.
#  (integer value)
# Minimum value: 0
#agent_version_timeout = 300

#
# Number of seconds to wait for agent's reply to resetnetwork
# request.
#
# This indicates the amount of time xapi 'agent' plugin waits for the agent to
# respond to the 'resetnetwork' request specifically. The generic timeout for
# agent communication ``agent_timeout`` is ignored in this case.
#  (integer value)
# Minimum value: 0
#agent_resetnetwork_timeout = 60

#
# Path to locate guest agent on the server.
#
# Specifies the path in which the XenAPI guest agent should be located. If the
# agent is present, network configuration is not injected into the image.
#
# Related options:
#
# For this option to have an effect:
# * ``flat_injected`` should be set to ``True``
# * ``compute_driver`` should be set to ``xenapi.XenAPIDriver``
#
#  (string value)
#agent_path = usr/sbin/xe-update-networking

#
# Disables the use of XenAPI agent.
#
# This configuration option determines whether the use of the agent should be
# enabled or not, regardless of what image properties are present. Image
# properties have an effect only when this is set to ``True``. Read the
# description of the config option ``use_agent_default`` for more information.
#
# Related options:
#
# * ``use_agent_default``
#
#  (boolean value)
#disable_agent = false

#
# Whether or not to use the agent by default when its usage is enabled but not
# indicated by the image.
#
# The use of XenAPI agent can be disabled altogether using the configuration
# option ``disable_agent``. However, if it is not disabled, the use of an agent
# can still be controlled by the image in use through one of its properties,
# ``xenapi_use_agent``. If this property is either not present or specified
# incorrectly on the image, the use of agent is determined by this configuration
# option.
#
# Note that if this configuration is set to ``True`` when the agent is not
# present, the boot times will increase significantly.
#
# Related options:
#
# * ``disable_agent``
#
#  (boolean value)
#use_agent_default = false

# Timeout in seconds for XenAPI login. (integer value)
# Minimum value: 0
#login_timeout = 10

#
# Maximum number of concurrent XenAPI connections.
#
# In nova, multiple XenAPI requests can happen at a time.
# Configuring this option will parallelize access to the XenAPI
# session, which allows you to make concurrent XenAPI connections.
#  (integer value)
# Minimum value: 1
#connection_concurrent = 5

#
# Cache glance images locally.
#
# The value for this option must be chosen from the choices listed
# here. Configuring a value other than these will default to 'all'.
#
# Note: There is nothing that deletes these images.
#  (string value)
# Possible values:
# all - Will cache all images
# some - Will only cache images that have the image_property
# ``cache_in_nova=True``
# none - Turns off caching entirely
#cache_images = all

#
# Compression level for images.
#
# By setting this option we can configure the gzip compression level.
# This option sets the GZIP environment variable before spawning tar -cz
# to force the compression level. It defaults to none, which means the
# GZIP environment variable is not set and the default (usually -6)
# is used.
#
# Possible values:
#
# * Range is 1-9, e.g., 9 for gzip -9, 9 being most
#   compressed but most CPU intensive on dom0.
# * Any values out of this range will default to None.
#  (integer value)
# Minimum value: 1
# Maximum value: 9
#image_compression_level = <None>

# Default OS type used when uploading an image to glance (string value)
#default_os_type = linux

# Time in secs to wait for a block device to be created (integer value)
# Minimum value: 1
#block_device_creation_timeout = 10

#
# Maximum size in bytes of kernel or ramdisk images.
#
# Specifying the maximum size of kernel or ramdisk will avoid copying
# large files to dom0 and filling up /boot/guest.
#  (integer value)
#max_kernel_ramdisk_size = 16777216

#
# Filter for finding the SR to be used to install guest instances on.
#
# Possible values:
#
# * To use the Local Storage in default XenServer/XCP installations
#   set this flag to other-config:i18n-key=local-storage.
# * To select an SR with a different matching criteria, you could
#   set it to other-config:my_favorite_sr=true.
# * To fall back on the Default SR, as displayed by XenCenter,
#   set this flag to: default-sr:true.
#  (string value)
#sr_matching_filter = default-sr:true

#
# Whether to use sparse_copy for copying data on a resize down.
# (False will use standard dd). This speeds up resizes down
# considerably since large runs of zeros won't have to be rsynced.
#  (boolean value)
#sparse_copy = true

#
# Maximum number of retries to unplug VBD.
# If set to 0, the unplug is attempted only once, with no retries.
#  (integer value)
# Minimum value: 0
#num_vbd_unplug_retries = 10

#
# Name of network to use for booting iPXE ISOs.
#
# An iPXE ISO is a specially crafted ISO which supports iPXE booting.
# This feature gives a means to roll your own image.
#
# By default this option is not set. Enable this option to
# boot an iPXE ISO.
#
# Related Options:
#
# * `ipxe_boot_menu_url`
# * `ipxe_mkisofs_cmd`
#  (string value)
#ipxe_network_name = <None>

#
# URL to the iPXE boot menu.
#
# An iPXE ISO is a specially crafted ISO which supports iPXE booting.
# This feature gives a means to roll your own image.
#
# By default this option is not set. Enable this option to
# boot an iPXE ISO.
#
# Related Options:
#
# * `ipxe_network_name`
# * `ipxe_mkisofs_cmd`
#  (string value)
#ipxe_boot_menu_url = <None>

#
# Name and optionally path of the tool used for ISO image creation.
#
# An iPXE ISO is a specially crafted ISO which supports iPXE booting.
# This feature gives a means to roll your own image.
#
# Note: By default `mkisofs` is not present in the Dom0, so the
# package can either be manually added to Dom0 or include the
# `mkisofs` binary in the image itself.
#
# Related Options:
#
# * `ipxe_network_name`
# * `ipxe_boot_menu_url`
#  (string value)
#ipxe_mkisofs_cmd = mkisofs

#
# URL for connection to XenServer/Xen Cloud Platform. A special value
# of unix://local can be used to connect to the local unix socket.
#
# Possible values:
#
# * Any string that represents a URL. The connection_url is
#   generally the management network IP address of the XenServer.
# * This option must be set if you chose the XenServer driver.
#  (string value)
#connection_url = <None>

# Username for connection to XenServer/Xen Cloud Platform (string value)
#connection_username = root

# Password for connection to XenServer/Xen Cloud Platform (string value)
#connection_password = <None>
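
#
# Example (illustrative values): the minimal connection settings for the
# XenAPI driver, alongside compute_driver=xenapi.XenAPIDriver:
#
# connection_url = http://192.0.2.10
# connection_username = root
# connection_password = secret
# ovs_integration_bridge = xapi1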

#
# The interval used for polling of coalescing vhds.
#
# This is the interval after which the task of coalescing VHDs is
# performed, until it reaches the max attempts that is set by
# vhd_coalesce_max_attempts.
#
# Related options:
#
# * `vhd_coalesce_max_attempts`
#  (floating point value)
# Minimum value: 0
#vhd_coalesce_poll_interval = 5.0

#
# Ensure compute service is running on host XenAPI connects to.
# This option must be set to false if the 'independent_compute'
# option is set to true.
#
# Possible values:
#
# * Setting this option to true will make sure that compute service
#   is running on the same host that is specified by connection_url.
# * Setting this option to false skips the check.
#
# Related options:
#
# * `independent_compute`
#  (boolean value)
#check_host = true

#
# Max number of times to poll for VHD to coalesce.
#
# This option determines the maximum number of attempts that can be
# made for coalescing the VHD before giving up.
#
# Related options:
#
# * `vhd_coalesce_poll_interval`
#  (integer value)
# Minimum value: 0
#vhd_coalesce_max_attempts = 20

# Base path to the storage repository on the XenServer host. (string value)
#sr_base_path = /var/run/sr-mount

#
# The iSCSI Target Host.
#
# This option represents the hostname or IP of the iSCSI Target.
# If the target host is not present in the connection information from
# the volume provider then the value from this option is taken.
#
# Possible values:
#
# * Any string that represents the hostname/IP of the Target.
#  (host address value)
#target_host = <None>

#
# The iSCSI Target Port.
#
# This option represents the port of the iSCSI Target. If the
# target port is not present in the connection information from the
# volume provider then the value from this option is taken.
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#target_port = 3260

#
# Used to prevent attempts to attach VBDs locally, so Nova can
# be run in a VM on a different host.
#
# Related options:
#
# * ``CONF.flat_injected`` (Must be False)
# * ``CONF.xenserver.check_host`` (Must be False)
# * ``CONF.default_ephemeral_format`` (Must be unset or 'ext3')
# * Joining host aggregates (will error if attempted)
# * Swap disks for Windows VMs (will error if attempted)
# * Nova-based auto_configure_disk (will error if attempted)
#  (boolean value)
#independent_compute = false

#
# Wait time for instances to go to running state.
#
# Provide an integer value representing time in seconds to set the
# wait time for an instance to go to running state.
#
# When a request to create an instance is received by nova-api and
# communicated to nova-compute, the creation of the instance occurs
# through interaction with Xen via XenAPI in the compute node. Once
# the node on which the instance(s) are to be launched is decided by
# nova-scheduler and the launch is triggered, a certain amount of wait
# time is involved until the instance(s) can become available and
# 'running'. This wait time is defined by running_timeout. If the
# instances do not go to running state within this specified wait
# time, the launch expires and the instance(s) are set to 'error'
# state.
#  (integer value)
# Minimum value: 0
#running_timeout = 60

# DEPRECATED:
# Dom0 plugin driver used to handle image uploads.
#
# Provide a string value representing a plugin driver required to
# handle the image uploading to GlanceStore.
#
# Images, and snapshots from XenServer need to be uploaded to the data
# store for use. image_upload_handler takes in a value for the Dom0
# plugin driver. This driver is then called to upload images to the
# GlanceStore.
#  (string value)
# This option is deprecated for removal since 18.0.0.
# Its value may be silently ignored in the future.
# Reason:
# Instead of setting the class path here, we will use short names
# to represent image handlers. The download and upload handlers
# must also be matching. So another new option "image_handler"
# will be used to set the short name for a specific image handler
# for both image download and upload.
#image_upload_handler =

#
# The plugin used to handle image uploads and downloads.
#
# Provide a short name representing an image driver required to
# handle the image between compute host and glance.
#  (string value)
# Possible values:
# direct_vhd - This plugin directly processes the VHD files in XenServer
# SR (Storage Repository). So this plugin only works when the host's SR type is
# file system based e.g. ext, nfs.
# vdi_local_dev - This plugin implements an image handler which attaches the
# instance's VDI as a local disk to the VM where the OpenStack Compute service
# runs. It uploads the raw disk to glance when creating an image; when booting an
# instance from a glance image, it downloads the image and streams it into the
# disk which is attached to the compute VM.
# vdi_remote_stream - This plugin implements an image handler which works as a
# proxy between glance and XenServer. The VHD streams to XenServer via a remote
# import API supplied by XAPI for image download; and for image upload, the VHD
# streams from XenServer via a remote export API supplied by XAPI. This plugin
# works for all SR types supported by XenServer.
#image_handler = direct_vhd

#
# Number of seconds to wait for SR to settle if the VDI
# does not exist when first introduced.
#
# Some SRs, particularly iSCSI connections, are slow to see the VDIs
# right after they are introduced. Setting this option to a
# time interval will make the driver wait for that time period
# before raising a 'VDI not found' exception.
#  (integer value)
# Minimum value: 0
#introduce_vdi_retry_wait = 20

#
# The name of the integration Bridge that is used with xenapi
# when connecting with Open vSwitch.
#
# Note: The value of this config option is dependent on the
# environment, therefore this configuration value must be set
# accordingly if you are using XenAPI.
#
# Possible values:
#
# * Any string that represents a bridge name.
#  (string value)
#ovs_integration_bridge = <None>

#
# When adding a new host to a pool, this will append a --force flag to the
# command, forcing hosts to join the pool, even if they have different CPUs.
#
# Since XenServer version 5.6 it is possible to create a pool of hosts that
# have different CPU capabilities. To accommodate CPU differences, XenServer
# limited the features it uses to determine CPU compatibility to only the
# ones exposed by the CPU, and added support for CPU masking.
# Despite this effort to level differences between CPUs, it is still possible
# that adding a new host will fail, thus the option to force the join was
# introduced.
#  (boolean value)
#use_join_force = true

#
# Publicly visible name for this console host.
#
# Possible values:
#
# * Current hostname (default) or any string representing hostname.
#  (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#console_public_hostname = <current_hostname>


[xvp]
#
# Configuration options for XVP.
#
# xvp (Xen VNC Proxy) is a proxy server providing password-protected VNC-based
# access to the consoles of virtual machines hosted on Citrix XenServer.

#
# From nova.conf
#

# XVP conf template (string value)
#console_xvp_conf_template = $pybasedir/nova/console/xvp.conf.template

# Generated XVP conf file (string value)
#console_xvp_conf = /etc/xvp.conf

# XVP master process pid file (string value)
#console_xvp_pid = /var/run/xvp.pid

# XVP log file (string value)
#console_xvp_log = /var/log/xvp.log

# Port for XVP to multiplex VNC connections on (port value)
# Minimum value: 0
# Maximum value: 65535
#console_xvp_multiplex_port = 5900


[zvm]
#
# The zvm options allow the cloud administrator to configure the
# z/VM hypervisor driver to be used within an OpenStack deployment.
#
# zVM options are used when the compute_driver is set to use
# zVM (compute_driver=zvm.ZVMDriver)

#
# From nova.conf
#

#
# URL to be used to communicate with z/VM Cloud Connector.
#  (uri value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#cloud_connector_url = http://zvm.example.org:8080/

#
# CA certificate file used to verify the httpd server when TLS is enabled
#
# A string; it must be a path to a CA bundle to use.
#  (string value)
#ca_file = <None>

#
# The path at which images will be stored (snapshot, deploy, etc).
#
# Images used for deploy and images captured via snapshot
# need to be stored on the local disk of the compute host.
# This configuration identifies the directory location.
#
# Possible values:
#     A file system path on the host running the compute service.
#  (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#image_tmp_path = $state_path/images

#
# Timeout (seconds) to wait for an instance to start.
#
# The z/VM driver relies on communication between the instance and the cloud
# connector. After an instance is created, it must have enough time to wait
# for all the network info to be written into the user directory.
# The driver will keep rechecking the instance's network status until the
# timeout is reached. If setting up the network fails, it will notify the
# user that starting the instance failed and put the instance in the ERROR
# state. The underlying z/VM guest will then be deleted.
#
# Possible Values:
#     Any positive integer. Recommended to be at least 300 seconds (5 minutes),
#     but it will vary depending on instance and system load.
#     A value of 0 is used for debug. In this case the underlying z/VM guest
#     will not be deleted when the instance is marked in ERROR state.
#  (integer value)
#reachable_timeout = 300

Usage guide

This section contains information on how to create Glance images for Hyper-V compute nodes and how to use various Hyper-V features through image metadata properties and Nova flavor extra specs.

Prepare images for use with Hyper-V

Hyper-V currently supports only the VHD and VHDx file formats for virtual machines.

OpenStack Hyper-V images should have the following items installed:

  • cloud-init (Linux) or cloudbase-init (Windows)
  • Linux Integration Services (on Linux type OSes)

Images can be uploaded to glance using the openstack client:

openstack image create --name "VM_IMAGE_NAME" --property hypervisor_type=hyperv --public \
    --container-format bare --disk-format vhd --file /path/to/image

Note

VHD and VHDx file sizes can be bigger than their maximum internal size; as such, you need to boot instances using a flavor with a slightly bigger disk size than the internal size of the disk files.
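
For example (illustrative sizes), if an image's maximum internal disk size is 10 GB, a flavor with a slightly larger root disk can be used:

openstack flavor create --disk 12 --ram 2048 --vcpus 2 hyperv.vhd-image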

Generation 2 VM images

Windows / Hyper-V Server 2012 R2 introduced a feature called Generation 2 VMs, which adds support for Secure Boot, UEFI, reduced boot times, etc.

Starting with Kilo, the Hyper-V Driver supports Generation 2 VMs.

Check the original spec for more details on its features, how to prepare and create the glance images, and restrictions.

Regarding restrictions, the original spec mentions that RemoteFX is not supported with Generation 2 VMs, but starting with Windows / Hyper-V Server 2016, this is a supported use case.

Important

The images must be prepared for Generation 2 VMs before uploading to glance (they can be created and prepared in a Hyper-V Generation 2 VM). Generation 2 VM images cannot be used in Generation 1 VMs and vice-versa; mismatched instances will spawn and will be in the Running state, but they will not be usable.
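
Generation 2 instances are requested through the hw_machine_type image metadata property, as also shown in the Secure Boot example later in this guide:

glance image-update --property hw_machine_type=hyperv-gen2 <image-uuid>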

UEFI Secure Boot

Secure Boot is a mechanism that starts the bootloader only if the bootloader’s signature has maintained integrity, assuring that only approved components are allowed to run. This mechanism is dependent on UEFI.

As it requires UEFI, this feature is only available to Generation 2 VMs, and the guest OS must be supported by Hyper-V. Newer Hyper-V versions support more OS types and versions, for example:

  • Windows / Hyper-V Server 2012 R2 supports only Windows guests
  • Windows / Hyper-V Server 2016 supports Windows and Linux guests

Check the following for a detailed list of supported Linux distributions and versions.

The Hyper-V Driver supports this feature starting with OpenStack Liberty.

Important

The images must be prepared for Secure Boot before they're uploaded to glance. For example, the VM on which the image is prepared must be a Generation 2 VM with Secure Boot enabled. These images can be spawned with Secure Boot enabled or disabled, while other images can only be spawned with Secure Boot disabled. Otherwise, the instances will spawn and will be in the Running state, but they will not be usable.

UEFI Secure Boot instances are created by specifying the os_secure_boot image metadata property, or the nova flavor extra spec os:secure_boot (the flavor extra spec’s value takes precedence).

The acceptable values for the os_secure_boot image metadata property are: disabled, optional, required (disabled by default). The optional value means that the image is capable of Secure Boot, but the flavor extra spec os:secure_boot must be set to required in order to use this feature.

Additionally, the image metadata property os_type is mandatory when enabling Secure Boot. Acceptable values: windows, linux.

Finally, in deployments with compute nodes with different Hyper-V versions, the hypervisor_version_requires image metadata property should be set in order to ensure proper scheduling. The correct values are:

  • >=6.3 for images targeting Windows / Hyper-V Server 2012 R2 or newer
  • >=10.0 for images targeting Windows / Hyper-V Server 2016 or newer (Linux guests)

Examples of how to create the glance image:

glance image-create --property hypervisor_type=hyperv \
    --property hw_machine_type="hyperv-gen2" \
    --property hypervisor_version_requires=">=6.3" \
    --property os_secure_boot=required --property os_type=windows \
    --name win-secure --disk-format vhd --container-format bare \
    --file path/to/windows.vhdx

glance image-update --property os_secure_boot=optional <linux-image-uuid>
glance image-update --property hypervisor_version_requires=">=10.0" <linux-image-uuid>
glance image-update --property os_type=linux <linux-image-uuid>

nova flavor-key <flavor-name> set "os:secure_boot=required"

Shielded VMs

Introduced in Windows / Hyper-V Server 2016, shielded virtual machines are Generation 2 VMs, with virtual TPMs, and encrypted using BitLocker (memory, disks, VM state, video, etc.). These VMs can only run on healthy Guarded Hosts. Because of this, shielded VMs have better protection against malware and even compromised administrators, as neither can tamper with, inspect, or steal data from these virtual machines.

This feature has been introduced in OpenStack in Newton.

In order to use this feature in OpenStack, the Hyper-V compute nodes must be prepared and configured as a Guarded Host beforehand. Additionally, the Shielded VM images must be prepared for this feature before uploading them into Glance.

For information on how to create a Host Guardian Service and Guarded Host setup, and how to create a Shielded VM template for Glance, you can check this article.

Finally, after the Shielded VM template has been created, it will have to be uploaded to Glance. After that, Shielded VM instances can be spawned through Nova. You can read the followup article for details on how to do these steps.

Setting Boot Order

Support for setting the boot order of Hyper-V instances was introduced in Liberty, and it is only available for Generation 2 VMs. For Generation 1 VMs, the spawned VM's boot order is changed only if the given image is an ISO, in which case the VM boots from the ISO first.

The boot order can be specified when creating a new instance:

nova boot --flavor m1.tiny --nic net-name=private --block-device \
    source=image,id=<image_id>,dest=volume,size=2,shutdown=remove,bootindex=0 \
    my-new-vm

For more details on block devices, including how to set the boot order, you can check the block device mapping docs.

RemoteFX

RemoteFX allows you to virtualize your GPUs and share them with Hyper-V VMs by adding virtual graphics devices to them. This is especially useful for GPU-intensive applications (CUDA, OpenCL, etc.) and for a richer RDP experience.

Support for RemoteFX was added to OpenStack in Kilo.

Check this article for more details on RemoteFX’s prerequisites, how to configure the host and the nova-compute service, guest OS requirements, and how to spawn RemoteFX instances in OpenStack.

RemoteFX can be enabled during spawn, or it can be enabled / disabled through cold resize.
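
As a sketch, RemoteFX is requested through Nova flavor extra specs; the spec names below (os:resolution, os:monitors, os:vram) follow the referenced article and should be verified against your compute-hyperv version:

nova flavor-key <my_flavor> set os:resolution=1920x1200 os:monitors=1 os:vram=1024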

Hyper-V vNUMA instances

Hyper-V instances can have a vNUMA topology starting with Windows / Hyper-V Server 2012. This feature improves the performance for instances with large amounts of memory and for high-performance NUMA-aware applications.

Support for Hyper-V vNUMA instances has been added in Liberty.

Before spawning vNUMA instances, the Hyper-V host must be configured first. For this, refer to NUMA spanning configuration.

Hyper-V only supports symmetric NUMA topologies, and the Hyper-V Driver will raise an exception if an asymmetric one is given.

Additionally, a Hyper-V VM cannot be configured with a NUMA topology and Dynamic Memory at the same time. Because of this, the Hyper-V Driver will always disable Dynamic Memory on VMs that require NUMA topology, even if the configured dynamic_memory_ratio is higher than 1.0.

For more details on this feature and how to use it in OpenStack, check the original spec.
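
A symmetric instance NUMA topology can be requested through the standard Nova flavor extra specs, for example:

nova flavor-key <my_flavor> set hw:numa_nodes=2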

Note: Since Hyper-V is responsible for fitting the instances' vNUMA topologies into the host's NUMA topology, there's a slight risk that an instance cannot be started after it has been stopped for a while, because it no longer fits into the host's NUMA topology. For example, let's consider the following scenario:

Host A with 2 NUMA nodes (0, 1), 16 GB memory each. The host has the following instances:

  • instance A: 16 GB memory, spans 2 vNUMA nodes (8 each).
  • instances B, C: 6 GB memory each, each spanning 1 vNUMA node.
  • instances D, E: 2 GB memory each, each spanning 1 vNUMA node.

Topology-wise, they would fit as follows:

NUMA node 0: A(0), B, D
NUMA node 1: A(1), C, E

All instances are stopped, then the following instances are started in this order: B, D, E, C. The topology would look something like this:

NUMA node 0: B
NUMA node 1: D, E, C

Starting A will fail, as the NUMA node 1 will have 10 GB memory used, and A needs 8 GB on that node.

One way to mitigate this issue is to segregate instances spanning multiple NUMA nodes onto different compute nodes / availability zones than the regular instances.

Using Cinder Volumes

Identifying disks

When attaching multiple volumes to an instance, it’s important to have a way in which you can safely identify them on the guest side.

While Libvirt exposes the Cinder volume id as the disk serial id (visible in /dev/disk/by-id/), this is not possible in the case of Hyper-V.

The mountpoints exposed by Nova (e.g. /dev/sd*) are not a reliable source either (which holds for most other Nova drivers as well).

Starting with Queens, the Hyper-V driver includes disk address information in the instance metadata, accessible on the guest side through the metadata service. This also applies to untagged volume attachments.

Note

The config drive should not be relied upon when fetching disk metadata as it never gets updated after an instance is created.

Here’s an example:

nova volume-attach cirros 1517bb04-38ed-4b4a-bef3-21bec7d38792
vm_fip="192.168.42.74"

cmd="curl -s 169.254.169.254/openstack/latest/meta_data.json"
ssh_opts=( -o "StrictHostKeyChecking no" -o "UserKnownHostsFile /dev/null" )
metadata=`ssh "${ssh_opts[@]}" "cirros@$vm_fip" $cmd`
echo $metadata | python -m json.tool

# Sample output
#
# {
#     "availability_zone": "nova",
#     "devices": [
#         {
#             "address": "0:0:0:0",
#             "bus": "scsi",
#             "serial": "1517bb04-38ed-4b4a-bef3-21bec7d38792",
#             "tags": [],
#             "type": "disk"
#         }
#     ],
#     "hostname": "cirros.novalocal",
#     "launch_index": 0,
#     "name": "cirros",
#     "project_id": "3a8199184dfc4821ab01f9cbd72f905e",
#     "uuid": "f0a09969-d477-4d2f-9ad3-3e561226d49d"
# }

# Now that we have the disk SCSI address, we may fetch its path.
file `find /dev/disk/by-path  | grep "scsi-0:0:0:0"`

# Sample output
# /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0: symbolic link to ../../sdb

The volumes may be identified in a similar way in case of Windows guests as well.

Online volume extend

The Hyper-V driver supports online Cinder volume resize. Still, there are a few cases in which this feature is not available:

  • SMB backed volumes
  • Some iSCSI backends where the online resize operation impacts connected initiators. For example, when using the Cinder LVM driver and TGT, the iSCSI targets are actually recreated during the process. The MS iSCSI initiator will attempt to reconnect, but TGT will report that the target does not exist, so no further reconnect attempts will be made.
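
In the supported cases, an attached volume can be extended through the regular Cinder command (extending in-use volumes requires Cinder API microversion 3.42 or later):

cinder --os-volume-api-version 3.42 extend <volume-id> <new_size_gb>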

Disk QoS

In terms of QoS, Hyper-V allows IOPS limits to be set on virtual disk images, preventing instances from exhausting the storage resources.

Support for setting disk IOPS limits in Hyper-V has been added in OpenStack in Kilo.

The IOPS limits can be specified as a number of IOPS or a number of bytes per second (IOPS has precedence). Keep in mind that Hyper-V sets IOPS in normalized IOPS allocation units (8 KB increments); if the configured QoS policies are not multiples of 8 KB, the Hyper-V Driver will round down to the nearest multiple (minimum 1 IOPS).

QoS is set differently for Cinder volumes and Nova local disks.

Cinder Volumes

Cinder QoS specs can be either front-end (enforced on the consumer side), in this case Nova, or back-end (enforced on the Cinder side).

The Hyper-V driver only allows setting IOPS limits for volumes exposed by Cinder SMB backends. For other Cinder backends (e.g. SANs exposing volumes through iSCSI or FC), backend QoS specs must be used.

# alternatively, total_iops_sec can be specified instead.
cinder qos-create my-qos consumer=front-end total_bytes_sec=<number_of_bytes>
cinder qos-associate my-qos <volume_type>

cinder create <size> --volume-type <volume_type>

# The QoS specs are applied when the volume is attached to a Hyper-V instance
nova volume-attach <hyperv_instance_id> <volume_id>

Nova instance local disks

The QoS policy is applied to all of the instance's disks (including ephemeral disks), and can be enabled at spawn, or enabled / disabled through cold resize.

# alternatively, quota:disk_total_iops_sec can be used instead.
nova flavor-key <my_flavor> set quota:disk_total_bytes_sec=<number_of_bytes>

PCI devices

Windows / Hyper-V Server 2016 introduced Discrete Device Assignment, which allows users to attach PCI devices directly to Hyper-V VMs. The Hyper-V host must have SR-IOV support and have the PCI devices prepared before assignment.

The Hyper-V Driver added support for this feature in OpenStack in Ocata.

For preparing the PCI devices for assignment, refer to PCI passthrough host configuration.

The PCI devices must be whitelisted before they can be assigned. For this, refer to Whitelisting PCI devices.
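
As a minimal sketch (the vendor and product IDs below are illustrative placeholders; see the whitelisting guide for the full syntax), whitelisting a device and defining an alias for it is done through nova.conf:

[pci]
passthrough_whitelist = {"vendor_id": "8086", "product_id": "1520"}
alias = {"vendor_id": "8086", "product_id": "1520", "name": "my_alias"}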

PCI devices can be attached to Hyper-V instances at spawn, or attached / detached through cold resize through nova flavor extra specs:

nova flavor-key <my_flavor> set "pci_passthrough:alias"="alias:num_pci_devices"

Serial port configuration

Serial ports are used to interact with an instance's console and / or read its output. This feature was introduced for the Hyper-V Driver in Kilo.

For Hyper-V, the serial ports can be configured to be Read Only or Read / Write. This can be specified through the image metadata properties:

  • interactive_serial_port: configure the given port as Read / Write.
  • logging_serial_port: configure the given port as Read Only.

Valid values: 1,2

One port will always be configured as Read / Write, and by default, that port is 1.
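
For example (illustrative image reference), configuring an image to use its second serial port for logging:

glance image-update --property logging_serial_port=2 <image-uuid>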

Hyper-V VM vNIC attach / detach

When creating a new instance, users can specify how many NICs the instance will have, and to which neutron networks / ports they will be connected. Starting with Kilo, additional NICs can be added to Hyper-V VMs after they have been created. This can be done through the command:

# alternatively, --port_id <port_id> can be specified.
nova interface-attach --net-id <net_id> <instance>

However, there are a few restrictions that have to be taken into account in order for the operation to be successful. When attaching a new vNIC to an instance, the instance must be turned off, unless all the following conditions are met:

  • The compute node hosting the VM is a Windows / Hyper-V Server 2016 or newer.
  • The instance is a Generation 2 VM.

If the conditions are met, the vNIC can be hot-plugged and the instance does not have to be turned off.

The same restrictions apply when detaching a vNIC from a Hyper-V instance. Detaching interfaces can be done through the command:

nova interface-detach <instance> <port_id>

Nested virtualization

Nested virtualization was introduced in Windows / Hyper-V Server 2016 and support for it was added to OpenStack in Pike. This feature allows you to create Hyper-V instances which can create nested VMs of their own.

In order to use this feature, the compute nodes must have the latest updates installed.

At the moment, only Windows / Hyper-V Server 2016 or Windows 10 guests can benefit from this feature.

Dynamic Memory is not supported for instances with nested virtualization enabled, thus, the Hyper-V Driver will always spawn such instances with Dynamic Memory disabled, even if the configured dynamic_memory_ratio is higher than 1.0.

Disabling the security groups associated with instance’s neutron ports will enable MAC spoofing for instance’s NICs (Queens or newer, if neutron-hyperv-agent is used), which is necessary if the nested VMs needs access to the tenant or external network.

Instances with nested virtualization enabled can be spawned by adding vmx to the image metadata property hw_cpu_features or the nova flavor extra spec hw:cpu_features.
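
For example (illustrative identifiers):

glance image-update --property hw_cpu_features=vmx <image-uuid>
nova flavor-key <my_flavor> set hw:cpu_features=vmx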

Important

This feature will not work on clustered compute nodes.

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.