NSX Distributed Firewall
The DFW provides L2-L4 stateful firewall services to any workload in the NSX environment. It runs in kernel space and as such protects network traffic at near line rate. Because every host enforces its own rules, DFW performance and throughput scale linearly as new ESXi hosts are added.
The distributed firewall is activated as soon as the host preparation process completes. If you want to exclude a VM from the DFW service, you can add it to the exclusion list.
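For illustration, the exclusion list can also be managed through the NSX Manager REST API. A minimal sketch in Python, assuming the NSX-v `/api/2.1/app/excludelist` endpoint and a hypothetical manager address, credentials and VM managed object ID; verify the paths against your version's API guide:

```python
import requests
from requests.auth import HTTPBasicAuth

# Hypothetical lab values: replace with your NSX Manager address,
# credentials, and the vCenter managed object ID of the VM.
NSX_MANAGER = "https://nsxmgr.lab.local"
AUTH = HTTPBasicAuth("admin", "password")
VM_MOID = "vm-1234"

# Add the VM to the DFW exclusion list (NSX-v endpoint).
resp = requests.put(f"{NSX_MANAGER}/api/2.1/app/excludelist/{VM_MOID}",
                    auth=AUTH, verify=False)  # verify=False: lab self-signed cert
resp.raise_for_status()

# Read back the current exclusion list members.
print(requests.get(f"{NSX_MANAGER}/api/2.1/app/excludelist",
                   auth=AUTH, verify=False).text)
```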
One DFW instance is created per VM vNIC, so if you create a new VM with 5 vNICs, 5 instances of the DFW will be allocated to the VM. When a DFW rule is created, a policy enforcement point (PEP) can be selected through the "Applied To" option, with scopes ranging from a single vNIC to a logical switch. By default "Applied To" is not set, and the rule is applied to all DFW instances.
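As a rough illustration of "Applied To" scoping, here is what a rule body restricted to one logical switch might look like. This is a hedged sketch: the element names follow the NSX-v firewall API as commonly documented, and `virtualwire-10` is a hypothetical logical switch object ID.

```python
# Hedged sketch of an NSX-v DFW rule scoped to a single logical switch.
# 'virtualwire-10' is a hypothetical object ID; check the element names
# against your version's API guide before use.
RULE_BODY = """
<rule disabled="false" logged="false">
  <name>allow-web</name>
  <action>allow</action>
  <appliedToList>
    <appliedTo>
      <name>web-ls</name>
      <value>virtualwire-10</value>
      <type>VirtualWire</type>
    </appliedTo>
  </appliedToList>
</rule>
"""
# Leaving "Applied To" unset corresponds to the default behaviour:
# the rule is pushed to every DFW instance, i.e. every vNIC.
```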
DFW policy rules can be written in two ways: as L2 rules (Ethernet) or as L3/L4 rules (General).
L2 rules map to layer 2 of the OSI model: only MAC addresses can be used in the source and destination fields, and only L2 protocols (ARP, for instance) in the service field.
L3/L4 rules map to layers 3 and 4 of the OSI model: policy rules are written using IP addresses and TCP/UDP ports. It is important to remember that L2 rules are always enforced before L3/L4 rules. As a concrete example, if the L2 default policy rule is changed to 'block', then all L3/L4 traffic is blocked as well by the DFW (and ping, for instance, stops working).
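That ordering is worth pinning down. A toy sketch in Python (simplified rule and packet fields, not the real rule schema) showing why an L2 default 'block' also kills L3/L4 traffic:

```python
# Toy model: the Ethernet (L2) table is always evaluated before the
# General (L3/L4) table. Fields are simplified illustrations.
L2_DEFAULT_ACTION = "block"              # the modified L2 default rule
L3_RULES = [
    {"proto": "tcp", "action": "allow"},
    {"proto": "any", "action": "allow"},  # L3/L4 default rule
]

def dfw_decision(packet):
    # Step 1: L2 table. If the default L2 rule blocks the frame,
    # the L3/L4 table is never consulted.
    if L2_DEFAULT_ACTION == "block":
        return "block"
    # Step 2: General (L3/L4) table, first match wins.
    for rule in L3_RULES:
        if rule["proto"] in ("any", packet["proto"]):
            return rule["action"]

print(dfw_decision({"proto": "icmp"}))   # -> block: ping stops working
```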
The DFW is an NSX component designed to protect workload-to-workload network traffic, whether virtual-to-virtual or virtual-to-physical. The main goal of the DFW is to protect east-west traffic, but since DFW policy enforcement is applied at the vNIC, it can also be used to prevent communication between VMs and the physical network. The Edge Services Gateway (ESG) is the first point of entry into the data centre, as it is primarily concerned with protecting north-south traffic flows.
The DFW operates at the vNIC level, meaning that a VM is always protected no matter how it is connected to the logical network: the VM can be attached to a VDS VLAN-backed port group or to a logical switch (VXLAN-backed port group), and all of these connectivity modes are fully supported. The ESG firewall can also be used to protect workloads sitting on physical servers and appliances, such as NAS devices.
Three entities make up the DFW architecture:
vCenter Server: This is the management plane of the DFW. Policy rules are created through the vSphere Web Client, and any vCenter container can be used in the source/destination field of a rule: cluster, VDS port group, logical switch, VM, vNIC, resource pool, and so on.
NSX Manager: This is the control plane of the DFW. It receives rules from vCenter, stores them in its central database, and then pushes the DFW rules down to all hosts. NSX Manager can also receive rules directly through its REST API (a hedged example follows this list).
ESXi Host: This is the data plane of the solution. DFW rules are received from NSX Manager and pushed into the kernel for real-time enforcement; VM network traffic is inspected and rules are enforced on each ESXi host.
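To make the NSX Manager role concrete, here is a hedged sketch of pulling the full DFW configuration over the REST API in Python. The `/api/4.0/firewall/globalroot-0/config` path is the documented NSX-v endpoint; the manager address and credentials are hypothetical.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "https://nsxmgr.lab.local"   # hypothetical address
AUTH = HTTPBasicAuth("admin", "password")

# Fetch the complete DFW configuration (all sections and rules).
# NSX-v returns the body as XML.
resp = requests.get(f"{NSX_MANAGER}/api/4.0/firewall/globalroot-0/config",
                    auth=AUTH, verify=False)  # verify=False: lab self-signed cert
resp.raise_for_status()
print(resp.text)

# The ETag returned here must be echoed back in an If-Match header when
# updating the configuration, which guards against concurrent edits.
print("ETag:", resp.headers.get("ETag"))
```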
VMware Tools needs to be installed on the VMs so that the DFW can learn each VM's IP address, which is required when policy rules reference vCenter objects rather than raw IP addresses.
While a host is being prepared and the DFW activated, a kernel VIB is loaded into the hypervisor: the VMware Internetworking Service Insertion Platform (VSIP).
VSIP is responsible for all data plane traffic protection and runs at near line speed. A DFW instance is created per vNIC and this instance is located between the VM and the virtual switch.
A daemon called vsfwd runs permanently on each ESXi host and performs the following tasks:
- Interact with the NSX Manager to retrieve DFW rules.
- Gather DFW statistics and send them to NSX Manager.
- Send audit logs to the NSX Manager.
The communication path between the vCenter Server and the ESXi host (via the vpxa process on the host) is used only for vSphere purposes, such as VM creation or storage modification, and to program the host with the IP address of the NSX Manager. This channel is not used at all for any DFW operation.
The VSIP kernel module also adds services such as SpoofGuard (which protects against IP spoofing) and traffic redirection to third-party services such as Palo Alto Networks.
How DFW rules are enforced
- DFW rules are enforced in top-to-bottom order.
- Each packet is checked against the top rule in the rule table before moving down to the subsequent rules.
- The first rule in the table that matches the packet's parameters is enforced.
When creating rules, it is recommended to put the most granular rules at the top of the table; otherwise a broader rule higher up would match first and the granular rules would never be hit.
The bottom rule in the table is a default catch-all that matches any traffic not handled by an earlier rule; out of the box it is set to allow.
An IP packet that matches rule number 2 is sent by the VM; it is the first packet (pkt1) of a new flow, Flow 3. The order of operations is the following:
1. A lookup is performed in the connection tracker table to check whether an entry for the flow already exists.
2. As Flow 3 is not present in the connection tracker table (a miss), a lookup is performed in the rule table to identify which rule applies to Flow 3. The first rule that matches the flow is enforced.
3. Rule 2 matches Flow 3, and its action is set to 'Allow'.
4. Because the action is 'Allow' for Flow 3, a new entry is created in the connection tracker table. The packet is then transmitted out of the DFW.
For subsequent packets of the flow, the lookup in the connection tracker table finds the existing entry, and the packet is transmitted out of the DFW without another rule table lookup.
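The whole walkthrough condenses into a few lines of Python. A minimal sketch (simplified flow keys and rule fields, not the real implementation) of the connection tracker fast path in front of the first-match rule table:

```python
# Toy model of the DFW packet walk described above.
RULE_TABLE = [
    {"id": 1, "dport": 22,    "action": "block"},
    {"id": 2, "dport": 80,    "action": "allow"},
    {"id": 3, "dport": "any", "action": "allow"},   # default catch-all rule
]
connection_tracker = {}   # flow key -> cached verdict

def process_packet(pkt):
    flow = (pkt["src"], pkt["dst"], pkt["dport"])
    # 1. Lookup in the connection tracker table.
    if flow in connection_tracker:
        return connection_tracker[flow]       # hit: no rule-table lookup needed
    # 2. Miss: walk the rule table top to bottom; first match is enforced.
    for rule in RULE_TABLE:
        if rule["dport"] in ("any", pkt["dport"]):
            # 3./4. On 'Allow', create a flow entry so later packets hit step 1.
            if rule["action"] == "allow":
                connection_tracker[flow] = "allow"
            return rule["action"]

pkt1 = {"src": "10.0.0.1", "dst": "10.0.0.2", "dport": 80}
print(process_packet(pkt1))   # miss -> matches rule 2 -> 'allow', flow cached
print(process_packet(pkt1))   # subsequent packet: connection tracker hit
```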


