Implements VLANs, bridges, and iptables rules using Linux utilities.
Bases: object
Wrapper for iptables.
See IptablesTable for some usage docs
A number of chains are set up to begin with.
First, nova-filter-top. It’s added at the top of FORWARD and OUTPUT. Its name is not wrapped, so it’s shared between the various nova workers. It’s intended for rules that need to live at the top of the FORWARD and OUTPUT chains. It’s in both the ipv4 and ipv6 set of tables.
For ipv4 and ipv6, the built-in INPUT, OUTPUT, and FORWARD filter chains are wrapped, meaning that the “real” INPUT chain has a rule that jumps to the wrapped INPUT chain, etc. Additionally, there’s a wrapped chain named “local” which is jumped to from nova-filter-top.
For ipv4, the built-in PREROUTING, OUTPUT, and POSTROUTING nat chains are wrapped in the same way as the built-in filter chains. Additionally, there’s a snat chain that is applied after the POSTROUTING chain.
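The bootstrap described above can be sketched as a function that emits the iptables commands needed to build this chain layout. This is an illustrative reconstruction, not Nova's actual implementation; the helper name `bootstrap_commands` and the default binary name are assumptions.

```python
def bootstrap_commands(binary_name="nova-compute"):
    """Return iptables commands that build the chain layout described above.

    The commands are returned as strings rather than executed, so the
    sketch stays side-effect free.
    """
    cmds = [
        # Shared, unwrapped top-level chain, inserted at the top of
        # FORWARD and OUTPUT.
        "iptables -N nova-filter-top",
        "iptables -I FORWARD 1 -j nova-filter-top",
        "iptables -I OUTPUT 1 -j nova-filter-top",
    ]
    # Wrapped built-in chains: each "real" chain jumps to a per-binary copy.
    for chain in ("INPUT", "OUTPUT", "FORWARD"):
        wrapped = "%s-%s" % (binary_name, chain)
        cmds.append("iptables -N %s" % wrapped)
        cmds.append("iptables -A %s -j %s" % (chain, wrapped))
    # Wrapped "local" chain, reached from nova-filter-top.
    cmds.append("iptables -N %s-local" % binary_name)
    cmds.append("iptables -A nova-filter-top -j %s-local" % binary_name)
    return cmds
```

Because nova-filter-top itself is not wrapped, every Nova worker on the host shares it, while the per-binary wrapped chains stay isolated.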
Bases: object
An iptables rule.
You shouldn’t need to use this class directly, it’s only used by IptablesManager.
Bases: object
An iptables table.
Adds a named chain to the table.
The chain name is wrapped to be unique for the component creating it, so different components of Nova can safely create identically named chains without interfering with one another.
At the moment, its wrapped name is <binary name>-<chain name>, so if nova-compute creates a chain named ‘OUTPUT’, it’ll actually end up named ‘nova-compute-OUTPUT’.
Add a rule to the table.
This is just like what you’d feed to iptables, without the ‘-A <chain name>’ bit at the start.
However, if you need to jump to one of your wrapped chains, prepend its name with a ‘$’ which will ensure the wrapping is applied correctly.
Remove all rules from a chain.
Remove named chain.
This removal “cascades”. All rules in the chain are removed, as are all rules in other chains that jump to it.
If the chain is not found, this is merely logged.
Remove a rule from a chain.
Note: The rule must be exactly identical to the one that was added. You cannot switch arguments around like you can with the iptables CLI tool.
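The exact-match requirement follows from rules being stored as plain strings. A minimal sketch, with hypothetical names, shows why reordered arguments fail to match:

```python
class Table:
    """Minimal sketch: rules are stored as exact (chain, rule) pairs."""

    def __init__(self):
        self.rules = []

    def add_rule(self, chain, rule):
        self.rules.append((chain, rule))

    def remove_rule(self, chain, rule):
        try:
            # Only an exact string match is removed; unlike the iptables
            # CLI, there is no parsing or normalization of arguments.
            self.rules.remove((chain, rule))
        except ValueError:
            pass  # Nova merely logs a missing rule; ignored in this sketch.
```

Removing `'-j DROP -s 1.2.3.4'` will not match a rule that was added as `'-s 1.2.3.4 -j DROP'`, even though iptables would treat the two as equivalent.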
Bases: nova.network.linux_net.LinuxNetInterfaceDriver
Create a bridge unless it already exists.
If net_attrs is set, it will add the net_attrs[‘gateway’] address to the bridge, using net_attrs[‘broadcast’] and net_attrs[‘cidr’]. It will also add the IPv6 address specified in net_attrs[‘cidr_v6’] if use_ipv6 is set.
The code will attempt to move any ips that already exist on the interface onto the bridge and reset the default gateway if necessary.
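Moving existing addresses onto the bridge amounts to deleting each address from the interface and re-adding it on the bridge. This sketch returns the `ip` commands as strings instead of executing them; the function name and signature are assumptions, not Nova's API.

```python
def migrate_ip_commands(interface, bridge, addresses):
    # Commands to move existing addresses from the physical interface
    # onto the bridge, as described above. Returned rather than run.
    cmds = []
    for addr in addresses:
        cmds.append("ip addr del %s dev %s" % (addr, interface))
        cmds.append("ip addr add %s dev %s" % (addr, bridge))
    return cmds
```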
Create a vlan unless it already exists.
Create a vlan and bridge unless they already exist.
Bases: object
Abstract class that defines generic network host API
Get device name
Create Linux device, return device name
Destroy Linux device, return device name
Bases: nova.network.linux_net.LinuxNetInterfaceDriver
Bases: nova.network.linux_net.LinuxNetInterfaceDriver
Bind ip to public interface.
Ensure floating ip forwarding rule.
Sets up local metadata ip.
Sets up forwarding rules for vlan.
Grab the name of the binary we’re running in.
Get network’s hosts config in dhcp-host format.
Return a network’s hosts config in dnsmasq leasefile format.
Get network’s hosts config in dhcp-opts format.
Basic networking setup goes here.
Create the filter accept rule for metadata.
Create forwarding rule for metadata.
Remove forwarding for floating ip.
(Re)starts a dnsmasq server for a given network.
If a dnsmasq instance is already running then send a HUP signal causing it to reload, otherwise spawn a new instance.
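The HUP-or-spawn logic can be sketched as follows. The `pid_file` path and the `spawn` callback are placeholders for however the caller locates and launches dnsmasq; neither name comes from Nova.

```python
import os
import signal

def restart_dnsmasq(pid_file, spawn):
    """Send HUP to a running dnsmasq so it reloads its config files;
    if no instance is found, invoke the spawn callback instead."""
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGHUP)
    except (OSError, IOError, ValueError):
        # No pid file, unreadable pid, or dead process: start fresh.
        spawn()
```

Using HUP keeps existing DHCP leases intact while picking up changes to the hosts file, which is why a reload is preferred over a full restart.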
Unbind a public ip from public interface.
The nova networking components manage private networks, public IP addressing, VPN connectivity, and firewall rules.
There are several key components:
Overview:
(PUBLIC INTERNET)
| \
/ \ / \
[RoutingNode] ... [RN] [TunnelingNode] ... [TN]
| \ / | |
| < AMQP > | |
[AddressingNode]-- (VLAN) ... | (VLAN)... (VLAN) --- [AddressingNode]
\ | \ /
/ \ / \ / \ / \
[BridgingNode] ... [BridgingNode]
[NetworkController] ... [NetworkController]
\ /
< AMQP >
|
/ \
[CloudController]...[CloudController]
While this diagram may not make this entirely clear, nodes and controllers communicate exclusively across the message bus (AMQP, currently).
Network State consists of the following facts:
While copies of this state exist in many places (expressed in iptables rule chains, DHCP hosts files, etc.), the controllers rely only on the distributed “fact engine” for state, queried over RPC (currently AMQP). The NetworkController inserts most records into this datastore (allocating addresses, etc.); however, individual nodes also update state, e.g. when a running instance crashes.
Public Traffic:
(PUBLIC INTERNET)
|
<NAT> <-- [RoutingNode]
|
[AddressingNode] --> |
( VLAN )
| <-- [BridgingNode]
|
<RUNNING INSTANCE>
The RoutingNode is currently implemented using iptables rules, which provide both NAT for public IP addresses and the appropriate firewall chains. We are also looking at using Netomata / Clusto to manage NAT within a switch or router, and/or to manage firewall rules within a hardware firewall appliance.
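The NAT half of this can be sketched as the rules a RoutingNode would install for one floating IP: DNAT inbound traffic to the instance's fixed address and SNAT the instance's outbound traffic back to the floating address. The rule text is illustrative, not Nova's exact rules, and the function name is an assumption.

```python
def floating_forward_rules(floating_ip, fixed_ip):
    # (chain, rule) pairs for one floating IP, in the nat table.
    return [
        # Inbound: rewrite the floating address to the fixed one.
        ("PREROUTING", "-d %s -j DNAT --to %s" % (floating_ip, fixed_ip)),
        # Locally generated traffic to the floating address.
        ("OUTPUT", "-d %s -j DNAT --to %s" % (floating_ip, fixed_ip)),
        # Outbound: source-NAT the instance's traffic to the floating IP
        # (the snat chain runs after POSTROUTING, as described earlier).
        ("snat", "-s %s -j SNAT --to %s" % (fixed_ip, floating_ip)),
    ]
```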
Similarly, the AddressingNode currently manages running dnsmasq instances for DHCP services. However, we could run an internal DHCP server (using Scapy, à la Clusto), or even switch to static addressing by inserting the private address into the disk image the same way we insert the SSH keys. (See compute for more details.)