Wednesday, 17 June 2015

Blueprint Part 5




These are the objectives for Section 2.2 of the blueprint.


Section 2 – Create and Manage VMware NSX Virtual Networks


Objective 2.2 – Configure VXLANs


Skills and Abilities


· Prepare cluster for VXLAN  -  Yellow-Bricks
· Configure VXLAN transport zone parameters - Wahl Network
· Configure the appropriate teaming policy for a given implementation - link





Just a few quick points on VXLAN
A more detailed explanation can be found in NSX Components and Features.


VXLAN is an L2-over-L3 encapsulation technology. The original Ethernet frame is encapsulated with outer VXLAN, UDP, IP and Ethernet headers to ensure it can be transported across the network infrastructure.

A VXLAN segment can span multiple L3 networks while providing full connectivity between VMs.
A VMkernel interface is used to communicate over VXLAN; all VXLAN traffic is tunneled directly between these VMkernel interfaces.

The VXLAN kernel module encapsulates the packet in a VXLAN header and sends it out the VMkernel interface, known as the VXLAN tunnel endpoint (VTEP), to the VTEP on the destination host, which de-encapsulates it and hands it to the VM. This process is completely transparent to the VM.
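To make the encapsulation order concrete, here is a minimal Python sketch that wraps an original Ethernet frame in outer Ethernet, IP, UDP and VXLAN headers. It is purely illustrative: the MACs, IPs and VNI in the usage example are made up, checksums are left at zero, and the UDP destination port is a parameter (the IANA-assigned VXLAN port is 4789, although as far as I recall NSX for vSphere defaults to 8472 at this release).

```python
import struct
import socket

def vxlan_encapsulate(inner_frame, vni, src_vtep_mac, dst_vtep_mac,
                      src_vtep_ip, dst_vtep_ip, udp_dst_port=4789):
    """Wrap an original Ethernet frame in outer Ethernet/IP/UDP/VXLAN headers."""
    # VXLAN header (8 bytes): flags (0x08 = VNI present), 24 reserved bits,
    # 24-bit VNI, 8 reserved bits.
    vxlan_hdr = struct.pack("!B3xI", 0x08, vni << 8)

    udp_payload = vxlan_hdr + inner_frame
    # Outer UDP header: source port (often a hash of the inner frame, for load
    # balancing in the fabric), destination port, length, checksum (left at 0 here).
    udp_hdr = struct.pack("!HHHH", 49152, udp_dst_port, 8 + len(udp_payload), 0)

    # Outer IPv4 header (20 bytes; header checksum left at 0 for brevity).
    total_len = 20 + len(udp_hdr) + len(udp_payload)
    ip_hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, total_len, 0, 0, 64, 17, 0,
                         src_vtep_ip, dst_vtep_ip)

    # Outer Ethernet header: destination VTEP MAC, source VTEP MAC, EtherType IPv4.
    eth_hdr = dst_vtep_mac + src_vtep_mac + struct.pack("!H", 0x0800)

    return eth_hdr + ip_hdr + udp_hdr + udp_payload

# Example: a dummy 60-byte inner frame picks up 50 bytes of encapsulation overhead.
packet = vxlan_encapsulate(b"\x00" * 60, vni=5001,
                           src_vtep_mac=b"\x00\x50\x56\x00\x00\x01",
                           dst_vtep_mac=b"\x00\x50\x56\x00\x00\x02",
                           src_vtep_ip=socket.inet_aton("192.168.10.11"),
                           dst_vtep_ip=socket.inet_aton("192.168.20.12"))
print(len(packet))  # 110 = 60 + 8 (VXLAN) + 8 (UDP) + 20 (IP) + 14 (Ethernet)
```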










Depending on the teaming type, hosts may have a single VTEP or multiple VTEPs.

The VXLAN Network Identifier (VNI) is a 24-bit identifier associated with each L2 segment that is created. It is carried inside the VXLAN header and is associated with an IP subnet, like a traditional VLAN. This VNI is the reason VXLAN can scale beyond the 4094-VLAN limitation.
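A quick sanity check on that scaling claim - this is just arithmetic on the field widths, not anything taken from the NSX documentation:

```python
vni_segments = 2 ** 24       # 24-bit VNI -> 16,777,216 possible segment IDs
usable_vlans = 2 ** 12 - 2   # 12-bit VLAN ID, minus reserved VLANs 0 and 4095
print(vni_segments, usable_vlans)    # 16777216 4094
print(vni_segments // usable_vlans)  # ~4098 times as many segments as VLANs
```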


VTEPs are identified by the source and destination IP addresses used in the outer IP header.


A minimum MTU of 1600 bytes is recommended for VXLAN.
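The 1600-byte figure comes from the overhead the encapsulation adds to a standard 1500-byte MTU; the quick sum below assumes an IPv4 outer header and no VLAN tag on the inner frame:

```python
inner_mtu      = 1500  # standard MTU presented to the guest VM
inner_ethernet = 14    # original Ethernet header carried inside the tunnel
vxlan_header   = 8
outer_udp      = 8
outer_ipv4     = 20

overhead = inner_ethernet + vxlan_header + outer_udp + outer_ipv4   # 50 bytes
print(inner_mtu + overhead)  # 1550 -> hence the "at least 1600 bytes" recommendation
```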


Completed 17/06


Prepare a cluster for VXLAN



Configure VXLAN transport zone parameters



Configure the appropriate distributed virtual port group





Tuesday, 16 June 2015

Blueprint Part 4



These are the objectives for Section 2.1 of the blueprint.


Section 2 – Create and Manage VMware NSX Virtual Networks


Objective 2.1 – Create and Administer Logical Switches


Skills and Abilities


· Create/Delete Logical Switches
· Connect a Logical Switch to an NSX Edge
· Deploy services on a Logical Switch
· Connect/Disconnect virtual machines to/from a Logical Switch
· Test Logical Switch connectivity



Completed 16/06


Create/Delete Logical Switches


A Logical Switch can extend across multiple VDS. A given Logical Switch can provide connectivity for VMs that are connected to the Compute Clusters or to the Edge Cluster. A Logical Switch is always created as part of a specific Transport Zone. This implies that normally the Transport Zone extends across all the ESXi clusters and defines the span of a Logical Switch.
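Logical switches can also be created and deleted through the NSX Manager REST API rather than the Web Client. The sketch below uses Python with the requests library; the endpoint paths, XML element names and the example scope ID are my recollection of the NSX 6.x API (logical switches appear as "virtualwires" under a transport zone "scope"), so treat them as assumptions and verify them against the API guide for your version.

```python
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical lab address
AUTH = ("admin", "password")                    # lab credentials, illustration only
SCOPE_ID = "vdnscope-1"                         # example transport zone (scope) ID

payload = """<virtualWireCreateSpec>
  <name>LS-Web-Tier</name>
  <description>Created via the REST API</description>
  <tenantId>lab</tenantId>
</virtualWireCreateSpec>"""

# Assumed endpoint for creating a logical switch in a given transport zone.
resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires",
    auth=AUTH,
    headers={"Content-Type": "application/xml"},
    data=payload,
    verify=False,   # lab only: NSX Manager typically presents a self-signed cert
)
print(resp.status_code, resp.text)  # on success the body holds the new virtualwire ID
```

Deleting should be a DELETE against /api/2.0/vdn/virtualwires/&lt;virtualwire-id&gt;, again from memory, so check the documentation before scripting it.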



Completed 17/06


Connect logical switch to edge gateway


Deploy services on a logical switch



As I have no third-party services defined, I am unable to complete this task. It seems to be straightforward: select the service and click OK.


Connect/Disconnect a VM from a logical switch in NSX 



Test logical switch connectivity in NSX


I had no issues testing connectivity between any of my hosts.




Blueprint Part 3



These are the objectives for Section 1.3 of the blueprint.


Section 1 - Install and Upgrade VMware NSX


Objective 1.3 – Configure and Manage Transport Zones 


Skills and Abilities


· Create Transport Zones - link
· Configure the control plane mode for a Transport Zone - link
· Add clusters to Transport Zones - link
· Remove clusters from Transport Zones - link



Completed 16/06


Create Transport Zones - In the simplest sense, a Transport Zone defines a collection of ESXi hosts that can communicate with each other across a physical network infrastructure.
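The same information can be read back over the NSX Manager REST API, which is a handy way to confirm the transport zone and its control plane mode after creating it in the Web Client. As in the logical switch example further up the page, the endpoint and element names are my recollection of the NSX 6.x API (transport zones appear as vdn "scopes"), so verify them before relying on this.

```python
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical lab address
AUTH = ("admin", "password")

# Assumed endpoint listing all transport zones (vdn scopes).
resp = requests.get(f"{NSX_MANAGER}/api/2.0/vdn/scopes", auth=AUTH, verify=False)
resp.raise_for_status()

for scope in ET.fromstring(resp.content).findall("vdnScope"):
    print(scope.findtext("objectId"),
          scope.findtext("name"),
          scope.findtext("controlPlaneMode"))   # e.g. UNICAST_MODE
```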




Blueprint Part 2



These are the objectives for Section 1.2 of the blueprint.


Section 1 - Install and Upgrade VMware NSX


Objective 1.2 – Upgrade VMware NSX Components


Skills and Abilities


· Upgrade vShield Manager 5.5 to NSX Manager 6.x
· Upgrade NSX Manager 6.0 to NSX Manager 6.0.x
· Upgrade Virtual Wires to Logical Switches
· Upgrade vShield App to NSX Firewall
· Upgrade vShield 5.5 to NSX Edge 6.x
· Upgrade vShield Endpoint 5.x to vShield Endpoint 6.x
· Upgrade to NSX Data Security


I am going to come back to this part of the lab later, as I currently don't have vShield Manager deployed and I think it would be better to continue with the blueprint for now.

Blueprint Part 1


These are the objectives for Section 1.1 of the blueprint.


Section 1 - Install and Upgrade VMware NSX


Objective 1.1 – Deploy VMware NSX Components


Skills and Abilities


· Deploy the NSX Manager virtual appliance - Link to kb
· Integrate the NSX Manager with vCenter Server - Link to blog
· Implement and Configure NSX Controllers - Link to blog by Sean Whitney
· Prepare Host Clusters for Network Virtualization - Link to kb
· Implement NSX Edge Services Gateway devices - Link to blog by Sean Whitney
· Implement Logical Routers - Link to blog by Sean Whitney
· Deploy vShield Endpoints - Link to VMware pdf
· Implement Data Security - Link to blog by Sean Whitney
· Create IP Pools - Link to VMware kb


My goal for this evening/tomorrow is to set up a lab and run through some of the above topics. I also have access to Pluralsight, so I will be going through the relevant videos of Jason Nash's series 
VMware NSX for vSphere: Network Services

I will update this post throughout the evening with my findings or any issues I come across. 
I will also create a separate post detailing the lab I will be using.


Completed 10/06


Overview of the blueprint section 1.1


Technical Deep Dive Part 1 from VMworld 2014


Pluralsight - What is VMware NSX


Completed 11/06


Pluralsight - The components of NSX


Chapter 8: vSphere Standard Switch - Networking for VMware Administrators - Chris Wahl/Steven Pantol


Chapter 9: vSphere Distributed Switch -  Networking for VMware Administrators - Chris Wahl/Steven Pantol


Completed 12/06



Overview and key components of VXLAN from VXLAN deployment guide



Example Deployment 1 from VXLAN deployment guide



Completed 15/06


Pluralsight - How NSX works with vSphere


Pluralsight - VMware NSX and the Physical World


Deploy the NSX Manager virtual appliance


*Ok, so it looks like I had an issue deploying the template the first time around. The deployment gets to 99% and then an error message flashes up stating "The task was cancelled by a user".
Looking at the error message in Alarms, I see the following: "OVF deployment failed: SHA1 digest of file VMware-NSX-Manager-6.1.4-2752137-disk1.vmdk does not match manifest". I will re-download the OVA and try again.*


Completed 16/06


Integrate NSX Manager with vCenter


Create IP Pools


Implement and configure NSX controllers


*Receiving the following error message while deploying the controller on my management cluster: "Operation failed on VC. For more details, refer to the rootCauseString or the VC logs".*




So it looks like the error was related to a shortage of storage. I shut down my VSA appliance and added another disk for the controllers.
I was still unable to deploy controllers after sorting out the disk space issue; I was getting an "unable to deploy OVF template" error. Looking at the deployment process in vSphere, I could see the deployment start, but it was deleted after a couple of seconds. I tailed the log to see what was happening, and it turned out the NSX Manager was having issues communicating with any of the hosts. After a bit of searching around, I noticed I had forgotten to set a local DNS server - both DNS servers were external. I set up a DNS server on my Win2k8 jump box and restarted the NSX Manager. The deployment completed successfully on the next attempt.
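Since the root cause turned out to be name resolution, a quick check like the one below, run from the jump box against your own host names, would have caught it much earlier. The host names here are made up for illustration.

```python
import socket

# Hypothetical lab host names - replace with your own vCenter/ESXi/NSX names.
hosts = ["vcenter.lab.local", "esxi-01.lab.local", "esxi-02.lab.local",
         "esxi-03.lab.local", "nsx-manager.lab.local"]

for name in hosts:
    try:
        print(f"{name} -> {socket.gethostbyname(name)}")
    except socket.gaierror as err:
        print(f"{name} -> DNS lookup failed: {err}")
```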

Prepare host clusters for network virtualization





Implement NSX Edge Services Gateway devices



Deploy vShield Endpoints

No real errors here; it failed the first time as I had selected the wrong datastore and forgot to change the IP assignment to my address pools.

Implement Data Security


That is Part 1.1 of the blueprint completed. I ran into a few issues, but overall it was straightforward enough. I think I will also run back through some of the Pluralsight videos, to see if Mr. Nash covers anything differently when deploying and configuring the components. I certainly still have some gaps that need filling in.

Michael Hogan

Friday, 12 June 2015

Logical Switching


In NSX we have the ability to create isolated logical L2 networks. Both physical and virtual endpoints can be connected to these logical segments and establish connectivity independently of where they are deployed.

The diagram below shows the logical and physical views when logical switching is deployed using VXLAN. This allows us to stretch an L2 domain across multiple server racks by utilizing a logical switch. This all happens independently of the underlying L2 or L3 infrastructure.






Replication Modes for Multi-Destination Traffic

If we have two VMs on different hosts that need to communicate with each other, unicast VXLAN traffic is exchanged between the two VTEPs. Traffic may also need to reach multiple VMs on the same logical switch; this multi-destination traffic comes in three forms:

  • Broadcast
  • Unknown Unicast
  • Multicast

So how does NSX replicate this traffic to multiple remote hosts? The available replication modes are:

  • Multicast
  • Unicast
  • Hybrid
The logical switch inherits its replication mode from the transport zone by default, although this can be changed on a switch-by-switch basis.

In the diagram below we have a look at VTEP segments. There are two VTEP segments in this scenario: hosts 1 and 2 form one VTEP segment, and hosts 3 and 4 make up the second segment.


                                 




Multicast


When Multicast replication mode is chosen for a given Logical Switch, NSX relies on the Layer 2 and Layer 3 Multicast capability of the data center physical network to ensure VXLAN encapsulated multi-destination traffic is sent to all the VTEPs.

Multicast mode is one way of handling Broadcast, Unknown Unicast and Multicast (BUM) traffic, but it does not fully decouple the logical network from the physical networking infrastructure.

In multicast mode a multicast IP address needs to be assigned to each logical switch. L2 multicast capability is used to replicate traffic to all VTEPs in the local segment, so IGMP snooping must be enabled on the physical devices. Also, to ensure multicast traffic is delivered to VTEPs on a different subnet, L3 multicast routing with Protocol Independent Multicast (PIM) must be enabled.


In the example below the VXLAN segment 5001 has been associated with the multicast group 239.1.1.1. As a consequence, as soon as the first VM gets connected to that logical switch, the ESXi hypervisor hosting the VM generates an IGMP join message to notify the physical infrastructure of its interest in receiving multicast traffic sent to that specific group.



                          



As a result of the IGMP joins sent by ESXi-1 to ESXi-3, multicast state is built in the physical network to ensure delivery of multicast frames sent to the 239.1.1.1 destination. Notice that ESXi-4 does not send an IGMP join, since it does not host any active receivers (VMs connected to the VXLAN 5001 segment). The sequence of events required to deliver a BUM frame in multicast mode is depicted below.



                        


  • VM1 generates a BUM frame
  • ESXi-1 encapsulates the frame with VXLAN. The destination IP address in the outer IP header is set to the multicast address 239.1.1.1.
  • The L2 switch receiving the multicast frame performs replication: assuming IGMP snooping is configured on the switch, it will replicate the frame only to the relevant interfaces connecting to ESXi-2 and the L3 router. If IGMP snooping is not enabled or not supported, the L2 switch treats the frame as an L2 broadcast packet and replicates it to all the interfaces in the same VLAN as the port where the packet was received.
  • The L3 router performs L3 multicast replication and sends the packet into Transport subnet B.
  • The L2 switch then replicates the frame.
  • ESXi-2 and ESXi-3 decapsulate the received VXLAN packets, exposing the original Ethernet frames, which are delivered to VM2 and VM3.





Unicast

In unicast mode, the logical and physical networks are fully decoupled. The hosts in the NSX domain are divided into separate VTEP segments based on their IP subnet, and a host in each subnet is selected to be the Unicast Tunnel End Point (UTEP). The UTEP is responsible for replicating multi-destination traffic. An NSX Controller is required for this mode, as it acts as a cache server for the ARP and MAC address tables.

Every UTEP will only replicate traffic to ESXi hosts on its local segment that have at least one VM actively connected to the logical network to which the multi-destination traffic is being sent.

                                    


  • VM1 generates a BUM frame to be sent to all VMs on the logical switch.
  • ESXi-1 looks at its local VTEP table and determines that it needs to replicate the packet only to the other VTEP belonging to the local segment (ESXi-2) and to the UTEP of each remote segment. The unicast copy sent to the UTEP has the "REPLICATE_LOCALLY" bit set in the VXLAN header.
  • The UTEP receives the frame, looks at its local VTEP table and replicates it to all the hosts that are part of its local VTEP segment with at least one VM connected.


Hybrid


Hybrid Mode offers operational simplicity similar to Unicast Mode (no IP Multicast Routing configuration required in the physical network) while leveraging the Layer 2 Multicast capability of physical switches.

The specific VTEP responsible for performing local replication to the other VTEPs in the same subnet is now named the “MTEP”. The reason is that in hybrid mode the [M]TEP uses L2 [M]ulticast to replicate BUM frames locally.



  • VM1 generates a BUM frame that needs to be replicated to all the other VMs that are part of VXLAN 5001. The multicast group 239.1.1.1 must be associated with the VXLAN segment, as multicast encapsulation is performed for local traffic replication.
  • ESXi-1 encapsulates the frame in a multicast packet addressed to the 239.1.1.1 group. Layer 2 multicast configuration in the physical network is leveraged to ensure that the VXLAN frame is delivered to all VTEPs in the local VTEP segment; in hybrid mode the ESXi hosts send an IGMP join when there are local VMs interested in receiving multi-destination traffic.
  • At the same time, ESXi-1 looks at its local VTEP table and determines that it needs to replicate the packet to the MTEP of each remote segment. The unicast copy is sent to the MTEP with the bit set in the VXLAN header, as an indication to the MTEP that this frame is coming from a remote VTEP segment and needs to be locally re-injected into the network.
  • The MTEP creates a multicast packet and sends it to the physical network, where it will be replicated by the local L2 switching infrastructure.



So to simplify:

Consider a two-rack environment with two VTEP networks and two hosts in each rack.

Unicast - The source host has to send a separate copy to each host in rack 2, which can cause a lot of overhead in larger environments.
Hybrid - Only one copy has to be sent to rack 2; the MTEP then replicates it to each host from there.
Multicast - Local and remote replication are both handled by multicast. This needs multicast group addresses, and the biggest challenge is configuring the physical network.
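To tie the three modes together, here is a small Python sketch of the decision a source VTEP makes for a BUM frame in the two-rack example above. It is a toy model of the behaviour described in this post, not NSX code; segment membership and the choice of UTEP/MTEP are simplified.

```python
def bum_destinations(mode, src_host, segments, proxy_per_segment):
    """Return (local_action, remote_copies) for a BUM frame sent by src_host.

    segments: dict of segment name -> list of hosts (VTEPs) in that subnet
    proxy_per_segment: dict of segment name -> host acting as UTEP/MTEP
    """
    local_seg = next(s for s, hosts in segments.items() if src_host in hosts)
    remote_segs = [s for s in segments if s != local_seg]

    if mode == "multicast":
        # One multicast send; the physical network replicates locally and remotely.
        return "send once to the segment's multicast group", []

    if mode == "unicast":
        # Unicast copy to every local VTEP, plus one copy to the UTEP of each
        # remote segment (with the 'replicate locally' bit set).
        local = [h for h in segments[local_seg] if h != src_host]
        remote = [proxy_per_segment[s] for s in remote_segs]
        return f"unicast copies to {local}", remote

    if mode == "hybrid":
        # L2 multicast handles the local segment; one unicast copy per remote MTEP.
        remote = [proxy_per_segment[s] for s in remote_segs]
        return "one L2 multicast send on the local segment", remote

    raise ValueError(f"unknown replication mode: {mode}")


segments = {"rack1": ["esxi-1", "esxi-2"], "rack2": ["esxi-3", "esxi-4"]}
proxies = {"rack1": "esxi-1", "rack2": "esxi-3"}
for mode in ("unicast", "hybrid", "multicast"):
    print(mode, bum_destinations(mode, "esxi-1", segments, proxies))
```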



You can change the VXLAN mode at any stage by going to Logical Network Preparation - Transport Zones - Edit settings.

You can also migrate existing logical switches to the new control plane mode by ticking the checkbox.


To create a logical switch, select the Logical Switches menu icon, hit the plus button, give the switch a name, and choose a transport zone and replication mode.




































Distributed Firewall

NSX Distributed Firewall

This post also exists in NSX Components and features - here 

The DFW provides L2-L4 stateful firewall services to any workload in the NSX environment. DFW runs in the kernel space and as such performs near line rate network traffic protection. DFW performance and throughput scale linearly by adding new ESXi hosts.
The distributed firewall is activated as soon as the host preparation process is completed. If you want to exclude a VM from DFW service, you can add it to the exclusion list.

One DFW instance is created per VM vNIC, so if you create a new VM with 5 vNICs, 5 instances of the DFW will be allocated to that VM. When a DFW rule is created, a Point of Enforcement (PEP) can be selected; the options range from a single vNIC to a logical switch. By default the "Applied To" option is not set and the DFW rule is applied to all instances.

DFW policy rules can be written in 2 ways: using L2 rules (Ethernet) or L3/L4 rules (General).

L2 rules map to Layer 2 of the OSI model: only MAC addresses can be used in the source and destination fields, and only L2 protocols can be used in the service field (like ARP, for instance).

L3/L4 rules map to Layers 3 and 4 of the OSI model: policy rules can be written using IP addresses and TCP/UDP ports. It is important to remember that L2 rules are always enforced before L3/L4 rules. As a concrete example, if the L2 default policy rule is modified to ‘block’, then all L3/L4 traffic will be blocked as well by the DFW (and ping would stop working, for instance). 


The DFW is an NSX component designed to protect workload-to-workload network traffic, whether virtual-to-virtual or virtual-to-physical. The main goal of the DFW is to protect east-west traffic, but since DFW policy enforcement is applied at the vNIC, it can also be used to prevent communication between VMs and the physical network. The Edge Services Gateway, as the first point of entry into the data centre, is primarily concerned with protecting north-south traffic flows.






The DFW operates at the vNIC level, meaning that a VM is always protected no matter how it is connected to the logical network. A VM can be connected to a VDS VLAN-backed port group or to a Logical Switch (VXLAN-backed port group); all of these connectivity modes are fully supported. The ESG firewall can also be used to protect workloads sitting on physical servers and appliances, such as NAS.

There are 3 entities that make up the DFW architecture:

vCenter Server: This is the management plane of the DFW. Policy rules are created through the vSphere Web Client. Any vCenter container can be used in the source/destination field of a policy rule: cluster, VDS port group, Logical Switch, VM, vNIC, resource pool, etc.

NSX Manager: This is the control plane of the DFW. It receives rules from vCenter and stores them in its central database, then pushes the DFW rules down to all hosts. NSX Manager can also receive rules directly via the REST API.

ESXi Host: This is the data plane of the solution. DFW rules are received from the NSX Manager and then translated into the kernel space for real-time execution. VM network traffic is inspected and enforced per ESXi host.

VMware Tools needs to be installed on the VMs so that the DFW can learn their IP addresses (for rules that use vCenter objects rather than raw IP addresses).

While a host is being prepared and the DFW is being activated, a kernel VIB is loaded into the hypervisor. This is called the VMware Service Insertion Platform (VSIP).

VSIP is responsible for all data plane traffic protection and runs at near line speed. A DFW instance is created per vNIC and this instance is located between the VM and the virtual switch.

A set of daemons called vsfwd runs permanently on the ESXi host and performs the following tasks:


  • Interact with the NSX Manager to retrieve DFW rules.
  • Gather DFW statistics and send them to NSX Manager.
  • Send audit logs to the NSX Manager. 

The communication path between the vCenter Server and the ESXi host (using the vpxa process on the ESXi host) is only used for vSphere-related purposes such as VM creation or storage modification, and to program the host with the IP address of the NSX Manager. This channel is not used at all for any DFW operation.

The VSIP kernel module also adds services such as SpoofGuard (which protects against IP spoofing) and traffic redirection to third-party services such as Palo Alto Networks.


How DFW rules are enforced

  • DFW rules are enforced in top-to-bottom order.
  • Each packet is checked against the top rule in the rule table before moving down to the subsequent rules in the table.
  • The first rule in the table that matches the traffic parameters is enforced.
When creating rules it is recommended to put the most granular rules at the top, so that they are matched before any broader rules.

By default the bottom rule in the table is a catch-all that applies to any traffic not matched by a previous rule. This default rule is set to allow.







An IP packet (first packet - pkt1) that matches Rule 2 is sent by the VM. The order of operations is as follows:

1. A lookup is performed in the connection tracker table to check if an entry for the flow already exists. 
2. As Flow 3 is not present in the connection tracker table (i.e. a miss), a lookup is performed in the rule table to identify which rule is applicable to Flow 3. The first rule that matches the flow will be enforced. 
3. Rule 2 matches Flow 3. The action is set to ‘Allow’. 
4. Because the action is set to ‘Allow’ for Flow 3, a new entry is created inside the connection tracker table. The packet is then transmitted out of the DFW.






For subsequent packets, a lookup is performed in the connection tracker table to check if an entry for the flow already exists; if it does, the packet is transmitted out of the DFW without another walk of the rule table.
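The lookup order described above - connection tracker first, then a top-to-bottom, first-match walk of the rule table - can be modelled in a few lines of Python. This is purely an illustration of the logic in this post; the real VSIP module obviously matches on full flow tuples and keeps far more state.

```python
class SimpleDfw:
    """Toy model of the DFW per-vNIC rule table plus connection tracker."""

    def __init__(self, rules, default_action="allow"):
        self.rules = rules                  # list of (match_fn, action), top to bottom
        self.default_action = default_action
        self.connection_tracker = set()     # flows already allowed

    def process(self, flow):
        # 1. Check the connection tracker first.
        if flow in self.connection_tracker:
            return "allow (existing flow)"
        # 2. Miss: walk the rule table top to bottom, first match wins.
        for match, action in self.rules:
            if match(flow):
                if action == "allow":
                    self.connection_tracker.add(flow)   # new entry on allow
                return action
        # 3. No rule matched: fall through to the default catch-all rule.
        return self.default_action


rules = [
    (lambda f: f == ("web-vm", "db-vm", 3306), "allow"),   # rule 1
    (lambda f: f[1] == "db-vm",                "block"),   # rule 2: anything else to db-vm
]
dfw = SimpleDfw(rules)
print(dfw.process(("web-vm", "db-vm", 3306)))   # allow, flow is cached
print(dfw.process(("web-vm", "db-vm", 3306)))   # allow (existing flow)
print(dfw.process(("app-vm", "db-vm", 22)))     # block
```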

Thursday, 11 June 2015

NSX Components and Features



 VMware NSX network virtualization programmatically creates, snapshots, deletes, and restores software-based virtual networks. With the ability to be deployed on any IP network, including both existing networking models and next generation fabric architectures from any vendor, NSX is a completely non-disruptive solution.

From the design guide we get a decent analogy comparing NSX to traditional compute virtualization.








With NSX, the complete set of L2-L7 networking services (switching, routing, firewalling and load balancing) is reproduced in software.

An NSX deployment is made up of data plane, control plane and management plane.









Control Plane

The control plane runs in the NSX Controller cluster, which is responsible for managing the switching and routing modules in the hypervisors. It consists of controller nodes that manage the logical switches. Using the controller cluster to manage VXLAN-based logical switches eliminates the need for multicast support from the physical network, and the control plane also reduces ARP broadcasts. The User World Agent, installed on each ESXi host, is part of the control plane: it tells the VXLAN and DLR kernel modules what to do and relays communication between the different components and kernel modules. The Logical Router Control VM handles routing table updates and route distribution. Three controllers are recommended for redundancy.



Data Plane

This is the area where packets are actually moved: frames are switched and packets are routed. The kernel modules installed here are VXLAN, the distributed logical router and the distributed firewall. The vSphere Distributed Switch is used here to switch the actual packets. The Edge Services Gateway appliance is a VM that provides edge services such as NAT, edge firewalling, VPN termination and routing between network segments.


Management Plane

The management plane is what you interact with; it talks to the control plane to initiate changes. The NSX Manager talks to the NSX Controllers, and the message bus is how commands are sent between components. The NSX Manager also supplies the REST API entry point to the environment.


NSX Services


Switching - Enables extension of an L2 segment anywhere in the environment, irrespective of the physical network design.


Routing - Routing between IP subnets can be done without traffic having to leave through a physical router. Routing is performed in the kernel, which provides an optimal data path for east-west traffic. The NSX Edge provides a centralized point of integration with the physical network for north-south traffic.


Distributed Firewall - Security enforcement takes place at the kernel and vNIC level. This makes firewall rule enforcement highly scalable.


Load Balancing - L4 - L7 Load balancing and SSL termination.


VPN - SSL VPN for remote access, plus L2 and L3 (IPsec) VPN services.


Connectivity to physical network - L2 and L3 gateway functions provide communication between logical and physical networks.


NSX Manager

The NSX Manager is the management plane virtual appliance; it is used to configure logical switches and connect VMs to them. It provides the management UI and is the entry point for the NSX REST API, which can be used to automate the deployment of logical networks.

For every vCenter server in an environment there is one NSX manager.

The following diagram illustrates the order in which NSX manager is configured.





It is responsible for deploying the controllers, preparing the ESXi hosts and installing the vSphere Installation Bundles (VIBs) on the hosts to enable VXLAN, distributed routing, the distributed firewall and the User World Agent, which is used for control plane communication.

The NSX Manager is also responsible for deploying the Edge Services Gateway and services such as load balancing, firewalling and NAT.

Controller Cluster

The controller cluster is the control plane component responsible for managing the switching and routing modules in the hypervisors. It contains controller nodes that manage specific logical switches, which eliminates the need for multicast support on the physical network.

It is advised to deploy the controller cluster with an odd number of nodes. A slicing method is used to ensure all nodes are being utilized.

If a node fails, the slices assigned to that node are reassigned to the remaining members of the cluster. To ensure this method works correctly, one of the cluster nodes is nominated as master for each role. The master is responsible for allocating slices to individual nodes and for detecting node failures, redistributing the failed node's slices to the remaining nodes. When an election takes place, a majority is needed to become master, which is why controllers must be deployed in odd numbers.
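The slicing idea can be sketched in a few lines: the workload is carved into slices, the slices are spread across the controller nodes, and when a node fails its slices are handed out to the survivors. The slice count and assignment logic below are arbitrary; I don't have details of the actual algorithm NSX uses, so this is only an illustration of the concept.

```python
def assign_slices(nodes, num_slices=12):
    """Spread slices round-robin across the available controller nodes."""
    return {i: nodes[i % len(nodes)] for i in range(num_slices)}

def handle_failure(assignment, failed_node, surviving_nodes):
    """Reassign the failed node's slices evenly across the survivors."""
    new_assignment = dict(assignment)
    orphaned = [s for s, owner in assignment.items() if owner == failed_node]
    for i, s in enumerate(orphaned):
        new_assignment[s] = surviving_nodes[i % len(surviving_nodes)]
    return new_assignment

controllers = ["controller-1", "controller-2", "controller-3"]
slices = assign_slices(controllers)
print(slices)
# controller-2 fails: its slices are redistributed across the remaining two nodes.
print(handle_failure(slices, "controller-2", ["controller-1", "controller-3"]))
```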


VXLAN Introduction

Definition

Virtual eXtensible Local Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks




For a VXLAN deep dive, have a look at Joe Onisick's post on Define the Cloud - VXLAN Deep Dive
Here is the VXLAN deep dive presentation from VMworld 2012 - link

VXLAN is an L2-over-L3 encapsulation technology. The original Ethernet frame is encapsulated with outer VXLAN, UDP, IP and Ethernet headers to ensure it can be transferred between VTEPs.


A VXLAN segment can span multiple L3 networks while providing full connectivity between VMs.
A VMkernel interface is used to communicate over VXLAN; all VXLAN traffic is tunneled directly between these VMkernel interfaces.

The VXLAN kernel module encapsulates the packet in a VXLAN header and sends it out the VMkernel interface, known as the VXLAN tunnel endpoint (VTEP), to the VTEP on the destination host, which de-encapsulates it and hands it to the VM. This process is completely transparent to the VM.

Depending on the teaming type, hosts may have a single VTEP or multiple VTEPs.

The VXLAN Network Identifier (VNI) is a 24-bit identifier associated with each L2 segment that is created. It is carried inside the VXLAN header and is associated with an IP subnet, like a traditional VLAN. This VNI is the reason VXLAN can scale beyond the 4094-VLAN limitation.


VTEPs are identified by the source and destination IP addresses used in the outer IP header.


Because the original Ethernet frame is encapsulated into a UDP packet, the size of the IP packet increases; it is therefore recommended that the MTU be set to a minimum of 1600 bytes.








Below we will look at L2 communication using VXLAN 


  • VM1 originates a frame destined for VM2, which is part of the same logical L2 segment.
  • The source ESXi host identifies the ESXi host (VTEP) where VM2 is connected and encapsulates the frame before sending it into the transport network.
  • The transport network is only required to enable IP communication between the source and destination VTEPs.
  • The destination ESXi host receives the VXLAN frame, decapsulates it and identifies the L2 segment it belongs to (leveraging the VNI value inserted in the VXLAN header by the source ESXi host).
  • The frame is delivered to VM2.








NSX Edge Services Gateway


Services provided by NSX edge services gateway - 

  • Routing and NAT: the NSX Edge provides centralized on-ramp/off-ramp routing between the logical networks deployed in the NSX domain and the external physical infrastructure. It supports routing protocols such as OSPF, iBGP and eBGP, and can also use static routing. Source and destination NAT are performed here.
  • Firewall: the edge has stateful firewall capabilities which complement the distributed firewall running in the kernel of the ESXi hosts. While the distributed firewall enforces security policies for communication between workloads connected to the logical networks (east-west), the firewall on the edge filters communication between the logical and physical networks (north-south).
  • Load balancing: the NSX Edge can perform load-balancing services for server farms of workloads deployed in the logical space.
  • L2 and L3 VPN: L2 VPN is usually used to extend L2 domains between different data centre sites. L3 VPNs can be deployed to allow IPsec site-to-site connectivity between two edges or other VPN terminators, and SSL VPN is available for remote user access.
  • DHCP, DNS and IP address management: DNS relay, DHCP server and default gateway features are also available. 

NSX Manager deploys a pair of NSX Edges on different hosts (anti-affinity). Heartbeat keepalives are exchanged every second between the active and standby edges to monitor each other's health. These keepalives are L2 probes sent over an internal port group; VXLAN can be used to transmit them, allowing this to happen over a routed network.

If the ESXi server hosting the active NSX Edge fails, the standby edge takes over when the "Declare Dead Time" timer expires. The default timer is 15 seconds, but it can be decreased to 6 seconds.
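For reference, the dead time is set in the edge's HA configuration in the Web Client, and I believe it can also be changed with a REST call along the lines of the sketch below. I have not verified this endpoint or the element names against the API guide, so treat the whole request as an assumption.

```python
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical lab address
EDGE_ID = "edge-1"                              # example edge ID
AUTH = ("admin", "password")

# Assumed NSX 6.x endpoint and payload for edge HA settings - verify against the API guide.
ha_config = """<highAvailability>
  <enabled>true</enabled>
  <declareDeadTime>6</declareDeadTime>
</highAvailability>"""

resp = requests.put(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/highavailability/config",
    auth=AUTH,
    headers={"Content-Type": "application/xml"},
    data=ha_config,
    verify=False,
)
print(resp.status_code)
```

A partial payload like this may well clear other HA settings, so if you try it, fetch the existing configuration with a GET first and modify that rather than pushing a bare snippet.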

The NSX Manager also monitors the state of health of deployed edges.



Transport Zone


A transport zone defines a collection of ESXi hosts that can communicate with each other across a physical network infrastructure. This communication happens by leveraging at least one VTEP on each host.

As a transport zone extends across one or more ESXi clusters, it defines the span of the logical switches.

A VDS can span across a number of ESXi hosts.






A Logical switch can extend across multiple VDS







NSX Distributed Firewall


The DFW provides L2-L4 stateful firewall services to any workload in the NSX environment. DFW runs in the kernel space and as such performs near line rate network traffic protection. DFW performance and throughput scale linearly by adding new ESXi hosts.
The distributed firewall is activated as soon as the host preparation process is completed. If you want to exclude a VM from DFW service, you can add it to the exclusion list.

One DFW instance is created per VM vNIC, so if you create a new VM with 5 vNICs, 5 instances of the DFW will be allocated to that VM. When a DFW rule is created, a Point of Enforcement (PEP) can be selected; the options range from a single vNIC to a logical switch. By default the "Applied To" option is not set and the DFW rule is applied to all instances.

DFW policy rules can be written in 2 ways: using L2 rules (Ethernet) or L3/L4 rules (General).

L2 rules map to Layer 2 of the OSI model: only MAC addresses can be used in the source and destination fields, and only L2 protocols can be used in the service field (like ARP, for instance).

L3/L4 rules map to Layers 3 and 4 of the OSI model: policy rules can be written using IP addresses and TCP/UDP ports. It is important to remember that L2 rules are always enforced before L3/L4 rules. As a concrete example, if the L2 default policy rule is modified to ‘block’, then all L3/L4 traffic will be blocked as well by the DFW (and ping would stop working, for instance). 


The DFW is an NSX component designed to protect workload-to-workload network traffic, whether virtual-to-virtual or virtual-to-physical. The main goal of the DFW is to protect east-west traffic, but since DFW policy enforcement is applied at the vNIC, it can also be used to prevent communication between VMs and the physical network. The Edge Services Gateway, as the first point of entry into the data centre, is primarily concerned with protecting north-south traffic flows.






The DFW operates at the vNIC level, meaning that a VM is always protected no matter how it is connected to the logical network. A VM can be connected to a VDS VLAN-backed port group or to a Logical Switch (VXLAN-backed port group); all of these connectivity modes are fully supported. The ESG firewall can also be used to protect workloads sitting on physical servers and appliances, such as NAS.

There are 3 entities that make up the DFW architecture:

vCenter Server: This is the management plane of the DFW. Policy rules are created through the vSphere Web Client. Any vCenter container can be used in the source/destination field of a policy rule: cluster, VDS port group, Logical Switch, VM, vNIC, resource pool, etc.

NSX Manager: This is the control plane of the DFW. It receives rules from vCenter and stores them in its central database, then pushes the DFW rules down to all hosts. NSX Manager can also receive rules directly via the REST API.

ESXi Host: This is the data plane of the solution. DFW rules are received from the NSX Manager and then translated into the kernel space for real-time execution. VM network traffic is inspected and enforced per ESXi host.

VMware Tools needs to be installed on the VMs so that the DFW can learn their IP addresses (for rules that use vCenter objects rather than raw IP addresses).

While a host is being prepared and the DFW is being activated, a kernel VIB is loaded into the hypervisor. This is called the VMware Service Insertion Platform (VSIP).

VSIP is responsible for all data plane traffic protection and runs at near line speed. A DFW instance is created per vNIC and this instance is located between the VM and the virtual switch.

A set of daemons called vsfwd runs permanently on the ESXi host and performs the following tasks:


  • Interact with the NSX Manager to retrieve DFW rules.
  • Gather DFW statistics and send them to NSX Manager.
  • Send audit logs to the NSX Manager. 

The communication path between the vCenter Server and the ESXi host (using the vpxa process on the ESXi host) is only used for vSphere-related purposes such as VM creation or storage modification, and to program the host with the IP address of the NSX Manager. This channel is not used at all for any DFW operation.

The VSIP kernel module also adds services such as SpoofGuard (which protects against IP spoofing) and traffic redirection to third-party services such as Palo Alto Networks.


How DFW rules are enforced

  • DFW rules are enforced in top-to-bottom order.
  • Each packet is checked against the top rule in the rule table before moving down to the subsequent rules in the table.
  • The first rule in the table that matches the traffic parameters is enforced.
When creating rules it is recommended to put the most granular rules at the top, so that they are matched before any broader rules.

By default the bottom rule in the table is a catch-all that applies to any traffic not matched by a previous rule. This default rule is set to allow.








An IP packet (first packet - pkt1) that matches Rule 2 is sent by the VM. The order of operations is as follows:

1. A lookup is performed in the connection tracker table to check if an entry for the flow already exists. 
2. As Flow 3 is not present in the connection tracker table (i.e. a miss), a lookup is performed in the rule table to identify which rule is applicable to Flow 3. The first rule that matches the flow will be enforced. 
3. Rule 2 matches Flow 3. The action is set to ‘Allow’. 
4. Because the action is set to ‘Allow’ for Flow 3, a new entry is created inside the connection tracker table. The packet is then transmitted out of the DFW.








For subsequent packets, a lookup is performed in the connection tracker table to check if an entry for the flow already exists; if it does, the packet is transmitted out of the DFW without another walk of the rule table.

Wednesday, 10 June 2015

Lab Specifications


At present it looks like I will be going with a hosted lab environment. I am hoping to have at least 3 ESXi hosts in the Management Cluster and maybe 1 or 2 more in a Production Cluster.

I will not have access to this until the middle of next week as things stand, but I will update this post as soon as I have any further details.

Michael

Tuesday, 9 June 2015

List of Resources


This is a list of resources I have come across and will be using for my studies:


VMware resources

VCIX-NV Blueprint
VMware VXLAN Deployment guide
VMware Network Virtualization Design guide
VMware NSX Documentation Center - link
VMware NSX: Install, Configure, Manage (6.0) - link
VMware vSphere Distributed Switch Best Practices - link


Blog resources

VCIX-NV Study Guide by Sean Whitney -Virtually Limitless
VCIX-NV Study Guide by Martijn Smit - Lost Domain
VCIX-NV Study Guide by Roie Ben Haim - Routed to Cloud
Working with NSX by Chris Wahl - Wahl Network
Yellow-Bricks by Duncan Epping - Yellow-Bricks
Virtualization is Life by Anthony Spiteri - Virtualization is Life
An exhaustive list of everything NSX related by Rene Van Den Bedem - vcdx13


Video Resources


Pluralsight - VMware NSX for vSphere Introduction and Installation by Jason Nash - link
Pluralsight - VMware NSX for vSphere: Network Services by Jason Nash - link
VMworld 2014 NSX Deep Dive Part 1 - link
VMworld 2014 NSX Deep Dive Part 2 - link


Book Resources


Networking for VMware Administrators - Chris Wahl and Steven Pantol - link

Monday, 8 June 2015

Starting Off



I passed my VCP-NV last Friday 08/06/2015, so the next logical step for me was to attempt to pass the VCIX-NV. Now, I'm not one hundred percent sure of the timescale I will be working to. I received authorization to take the exam this morning, but as of yet I have not booked a date.

Basically I'm just going to use this blog post to set my study plan, collect resources and post information that might be helpful to others.

So far I've come across a couple of resources. The first and most obvious (and essential) is the official exam blueprint. The second I have looked at is a very comprehensive study guide by Martijn Smit. I urge you to take a look at this guide as it is very impressive, and you will find nothing of the sort here!

I have also come across another excellent blog by Sean Whitney - Virtually Limitless
I will add all these to a resources post once I have found the material I am satisfied with.

Next up is watching the VMware NSX A Technical Deep Dive, which can be found here. This gives a good overview of the product before I start any labs.

So that's the first few days planned out for me. I am going to give the blueprint a good run-through, noting down the particular areas where I will need to put the most work in. After that I will have a brief look through the study guide listed above and watch the two deep dive videos from VMworld 2014.

I will probably structure this blog along the lines of the blueprint and post any extra useful information I find.


Edit: I have just booked my exam for 4th of September. Hopefully this will give me ample time to get to grips with all the material.

Michael Hogan