Networking

Todo

  • document hardware specific commands (maybe in admin guide?) (todd)
  • document a map between flags and managers/backends (todd)

The nova.network.manager Module

The nova.network.linux_net Driver

Implements vlans, bridges, and iptables rules using Linux utilities.

class IptablesManager(execute=None)

Bases: object

Wrapper for iptables.

See IptablesTable for some usage docs.

A number of chains are set up to begin with.

First, nova-filter-top. It’s added at the top of FORWARD and OUTPUT. Its name is not wrapped, so it’s shared between the various nova workers. It’s intended for rules that need to live at the top of the FORWARD and OUTPUT chains. It’s in both the ipv4 and ipv6 set of tables.

For ipv4 and ipv6, the built-in INPUT, OUTPUT, and FORWARD filter chains are wrapped, meaning that the “real” INPUT chain has a rule that jumps to the wrapped INPUT chain, etc. Additionally, there’s a wrapped chain named “local” which is jumped to from nova-filter-top.

For ipv4, the built-in PREROUTING, OUTPUT, and POSTROUTING nat chains are wrapped in the same way as the built-in filter chains. Additionally, there’s a snat chain that is applied after the POSTROUTING chain.

IptablesManager.apply()
IptablesManager.defer_apply_off()
IptablesManager.defer_apply_on()
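The defer_apply_on()/defer_apply_off() pair batches rule changes so that iptables is only invoked once for a whole series of updates. A minimal sketch of that semantics (illustrative only, not Nova's actual implementation; flush_count stands in for invoking iptables-restore):

```python
class DeferredApply:
    """Sketch: while deferred, apply() only records that a flush is
    needed; the rules are pushed once when deferral ends."""

    def __init__(self):
        self._deferred = False
        self._pending = False
        self.flush_count = 0  # stands in for running iptables-restore

    def apply(self):
        if self._deferred:
            self._pending = True
        else:
            self._flush()

    def defer_apply_on(self):
        self._deferred = True

    def defer_apply_off(self):
        self._deferred = False
        if self._pending:
            self._pending = False
            self._flush()

    def _flush(self):
        # The real manager rebuilds and applies the full rule set here.
        self.flush_count += 1
```

With this pattern, many add_rule/remove_rule calls between defer_apply_on() and defer_apply_off() cost a single iptables invocation instead of one per change.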
class IptablesRule(chain, rule, wrap=True, top=False)

Bases: object

An iptables rule.

You shouldn’t need to use this class directly, it’s only used by IptablesManager.

class IptablesTable

Bases: object

An iptables table.

IptablesTable.add_chain(name, wrap=True)

Adds a named chain to the table.

The chain name is wrapped to be unique for the component creating it, so different components of Nova can safely create identically named chains without interfering with one another.

At the moment, its wrapped name is <binary name>-<chain name>, so if nova-compute creates a chain named ‘OUTPUT’, it’ll actually end up named ‘nova-compute-OUTPUT’.

IptablesTable.add_rule(chain, rule, wrap=True, top=False)

Add a rule to the table.

This is the rule exactly as you would feed it to iptables, minus the ‘-A <chain name>’ at the start.

However, if you need to jump to one of your wrapped chains, prepend its name with a ‘$’ which will ensure the wrapping is applied correctly.
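The two conventions above (binary-name prefixing of chain names, and ‘$’ expansion inside rules) can be sketched as follows. This is an illustrative reconstruction, not Nova's actual code:

```python
def wrap_chain(binary_name, chain):
    """Prefix a chain name with the owning binary's name,
    e.g. ('nova-compute', 'OUTPUT') -> 'nova-compute-OUTPUT'."""
    return '%s-%s' % (binary_name, chain)


def expand_rule(binary_name, rule):
    """Expand '$chain' references in a rule to their wrapped names,
    so jumps land in the component's own wrapped chain."""
    tokens = []
    for token in rule.split():
        if token.startswith('$'):
            token = wrap_chain(binary_name, token[1:])
        tokens.append(token)
    return ' '.join(tokens)
```

For example, expand_rule('nova-compute', '-j $local') yields '-j nova-compute-local', matching the naming scheme described for add_chain.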

IptablesTable.empty_chain(chain, wrap=True)

Remove all rules from a chain.

IptablesTable.remove_chain(name, wrap=True)

Remove named chain.

This removal “cascades”. All rules in the chain are removed, as are all rules in other chains that jump to it.

If the chain is not found, this is merely logged.

IptablesTable.remove_rule(chain, rule, wrap=True, top=False)

Remove a rule from a chain.

Note: The rule must be exactly identical to the one that was added. You cannot switch arguments around like you can with the iptables CLI tool.
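Because removal is a string-for-string match rather than a semantic one, a rule with its arguments reordered will not be found. A minimal sketch of this behaviour (illustrative; Nova's IptablesTable stores more state per rule):

```python
class Table:
    """Sketch of exact-match rule removal."""

    def __init__(self):
        self.rules = []

    def add_rule(self, chain, rule):
        self.rules.append((chain, rule))

    def remove_rule(self, chain, rule):
        try:
            self.rules.remove((chain, rule))
        except ValueError:
            pass  # Nova logs a warning here rather than raising
```

Removing '-j ACCEPT -s 10.0.0.0/8' will not match a rule added as '-s 10.0.0.0/8 -j ACCEPT', even though iptables itself would treat them identically.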

class LinuxBridgeInterfaceDriver

Bases: nova.network.linux_net.LinuxNetInterfaceDriver

classmethod LinuxBridgeInterfaceDriver.ensure_bridge(*args, **kwargs)

Create a bridge unless it already exists.

Parameters:
  • interface – the interface to create the bridge on.
  • net_attrs – dictionary with attributes used to create bridge.
  • gateway – whether or not the bridge is a gateway.
  • filtering – whether or not to create filters on the bridge.

If net_attrs is set, it will add the net_attrs[‘gateway’] to the bridge using net_attrs[‘broadcast’] and net_attrs[‘cidr’]. It will also add the ip_v6 address specified in net_attrs[‘cidr_v6’] if use_ipv6 is set.

The code will attempt to move any ips that already exist on the interface onto the bridge and reset the default gateway if necessary.
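Roughly, ensure_bridge drives the standard Linux bridge utilities. The helper below returns the approximate command sequence for illustration only; the real method also checks for existing devices, moves IPs from the interface onto the bridge, and resets routes as described above:

```python
def ensure_bridge_commands(bridge, interface=None):
    """Approximate shell commands behind ensure_bridge (sketch only;
    not Nova's actual implementation)."""
    cmds = [
        ['brctl', 'addbr', bridge],          # create the bridge
        ['brctl', 'setfd', bridge, '0'],     # no forwarding delay
        ['brctl', 'stp', bridge, 'off'],     # disable spanning tree
        ['ip', 'link', 'set', bridge, 'up'],
    ]
    if interface:
        # enslave the physical (or vlan) interface to the bridge
        cmds.append(['brctl', 'addif', bridge, interface])
    return cmds
```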

classmethod LinuxBridgeInterfaceDriver.ensure_vlan(*args, **kwargs)

Create a vlan unless it already exists.

classmethod LinuxBridgeInterfaceDriver.ensure_vlan_bridge(_self, vlan_num, bridge, bridge_interface, net_attrs=None, mac_address=None)

Create a vlan and bridge unless they already exist.

LinuxBridgeInterfaceDriver.get_dev(network)
LinuxBridgeInterfaceDriver.plug(network, mac_address, gateway=True)
LinuxBridgeInterfaceDriver.unplug(network)
class LinuxNetInterfaceDriver

Bases: object

Abstract class that defines generic network host API

LinuxNetInterfaceDriver.get_dev(network)

Get device name

LinuxNetInterfaceDriver.plug(network, mac_address)

Create Linux device, return device name

LinuxNetInterfaceDriver.unplug(network)

Destroy Linux device, return device name

class LinuxOVSInterfaceDriver

Bases: nova.network.linux_net.LinuxNetInterfaceDriver

LinuxOVSInterfaceDriver.get_dev(network)
LinuxOVSInterfaceDriver.plug(network, mac_address, gateway=True)
LinuxOVSInterfaceDriver.unplug(network)
class QuantumLinuxBridgeInterfaceDriver

Bases: nova.network.linux_net.LinuxNetInterfaceDriver

classmethod QuantumLinuxBridgeInterfaceDriver.create_tap_dev(_self, dev, mac_address=None)
QuantumLinuxBridgeInterfaceDriver.get_bridge(network)
QuantumLinuxBridgeInterfaceDriver.get_dev(network)
QuantumLinuxBridgeInterfaceDriver.plug(network, mac_address, gateway=True)
QuantumLinuxBridgeInterfaceDriver.unplug(network)
add_snat_rule(ip_range)
bind_floating_ip(floating_ip, device)

Bind ip to public interface.

ensure_floating_forward(floating_ip, fixed_ip, device)

Ensure floating ip forwarding rule.

ensure_metadata_ip()

Sets up local metadata ip.

ensure_vpn_forward(public_ip, port, private_ip)

Sets up forwarding rules for vlan.

floating_forward_rules(floating_ip, fixed_ip, device)
get_binary_name()

Grab the name of the binary we’re running in.

get_dev(network)
get_dhcp_hosts(context, network_ref)

Get network’s hosts config in dhcp-host format.

get_dhcp_leases(context, network_ref)

Return a network’s hosts config in dnsmasq leasefile format.

get_dhcp_opts(context, network_ref)

Get network’s hosts config in dhcp-opts format.

init_host(ip_range=None)

Basic networking setup goes here.

initialize_gateway_device(dev, network_ref)
kill_dhcp(dev)
metadata_accept()

Create the filter accept rule for metadata.

metadata_forward()

Create forwarding rule for metadata.

plug(network, mac_address, gateway=True)
release_dhcp(dev, address, mac_address)
remove_floating_forward(floating_ip, fixed_ip, device)

Remove forwarding for floating ip.

restart_dhcp(*args, **kwargs)

(Re)starts a dnsmasq server for a given network.

If a dnsmasq instance is already running then send a HUP signal causing it to reload, otherwise spawn a new instance.

send_arp_for_ip(ip, device, count)
unbind_floating_ip(floating_ip, device)

Unbind a public ip from public interface.

unplug(network)
update_dhcp(context, dev, network_ref)
update_dhcp_hostfile_with_text(dev, hosts_text)
update_ra(*args, **kwargs)
write_to_file(file, data, mode='w')

Tests

The network_unittest Module

Legacy docs

The nova networking components manage private networks, public IP addressing, VPN connectivity, and firewall rules.

Components

There are several key components:

  • NetworkController (Manages address and vlan allocation)
  • RoutingNode (NATs public IPs to private IPs, and enforces firewall rules)
  • AddressingNode (runs DHCP services for private networks)
  • BridgingNode (a subclass of the basic nova ComputeNode)
  • TunnelingNode (provides VPN connectivity)

Component Diagram

Overview:

                               (PUBLIC INTERNET)
                                |              \
                               / \             / \
                 [RoutingNode] ... [RN]    [TunnelingNode] ... [TN]
                       |             \    /       |              |
                       |            < AMQP >      |              |
[AddressingNode]--  (VLAN) ...         |        (VLAN)...    (VLAN)      --- [AddressingNode]
                       \               |           \           /
                      / \             / \         / \         / \
                       [BridgingNode] ...          [BridgingNode]


                 [NetworkController]   ...    [NetworkController]
                                   \          /
                                     < AMQP >
                                        |
                                       / \
                      [CloudController]...[CloudController]

While this diagram may not make this entirely clear, nodes and controllers communicate exclusively across the message bus (AMQP, currently).

State Model

Network State consists of the following facts:

  • VLAN assignment (to a project)
  • Private Subnet assignment (to a security group) in a VLAN
  • Private IP assignments (to running instances)
  • Public IP allocations (to a project)
  • Public IP associations (to a private IP / running instance)

While copies of this state exist in many places (expressed in IPTables rule chains, DHCP hosts files, etc), the controllers rely only on the distributed “fact engine” for state, queried over RPC (currently AMQP). The NetworkController inserts most records into this datastore (allocating addresses, etc) - however, individual nodes update state e.g. when running instances crash.

The Public Traffic Path

Public Traffic:

               (PUBLIC INTERNET)
                      |
                    <NAT>  <-- [RoutingNode]
                      |
[AddressingNode] -->  |
                   ( VLAN )
                      |    <-- [BridgingNode]
                      |
               <RUNNING INSTANCE>

The RoutingNode is currently implemented using IPTables rules, which implement both NAT for public IP addresses and the appropriate firewall chains. We are also looking at using Netomata / Clusto to manage NAT within a switch or router, and/or to manage firewall rules within a hardware firewall appliance.

Similarly, the AddressingNode currently manages running dnsmasq instances for DHCP services. However, we could run an internal DHCP server (using Scapy, à la Clusto), or even switch to static addressing by inserting the private address into the disk image the same way we insert the SSH keys. (See compute for more details.)