Thursday, 4 February 2016

Backing up Windows Servers in vCloud Air with CloudBerry Backup to Google Object Storage


I have had many people ask me for a file-level backup solution that can be deployed in vCloud Air.

So I spent the best part of this afternoon looking at different backup solutions in the vCloud Air Solution Exchange.

I came across CloudBerry Backup and decided to have a look at using it to back up Windows Servers. (They also offer a Linux solution, but I haven't tried it yet.) Below is a quick guide on how to set up CloudBerry on Server 2012 and connect it to Google Object Storage in vCloud Air.

Deploy a Windows server in vCA

Download and install CloudBerry Backup - link

Click on the Google Object Storage tile in vCloud Air

Create a new service account



Make sure to create SNAT and firewall rules to give the VM external access
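Once the rules are in place, a quick way to confirm the VM really does have outbound access is to hit the public Google Cloud Storage endpoint and check that any HTTP response comes back. A minimal Python sketch (nothing CloudBerry-specific, just the requests library against storage.googleapis.com):

    import requests

    # Any HTTP response at all (even a 4xx) proves DNS resolution works and
    # the SNAT/firewall rules are letting traffic out of the VM.
    try:
        resp = requests.get("https://storage.googleapis.com", timeout=10)
        print("Outbound access OK, HTTP status:", resp.status_code)
    except requests.exceptions.RequestException as err:
        print("No outbound access - check SNAT and firewall rules:", err)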



Open CloudBerry and click Add new Account



Select vCloud Air (Google Cloud Storage) and connect to your object storage account




Use the import token to import your service account certificate.

Once this is done, you can choose to run a one-off backup or create a scheduled backup. I was surprised to see a plethora of options in CloudBerry, from encryption to compression.
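If you want to sanity-check the service account outside of CloudBerry, the standard Google Cloud Storage Python client can list the bucket with the same credentials. This is only a sketch; the JSON key path and bucket name below are placeholders for whatever your object storage service gave you, and it assumes the key was exported in JSON format:

    from google.cloud import storage

    # Placeholder key file and bucket name - substitute your own.
    client = storage.Client.from_service_account_json("service-account-key.json")
    bucket = client.get_bucket("my-backup-bucket")

    # Listing a few objects proves the credentials and bucket are reachable.
    for blob in bucket.list_blobs(max_results=10):
        print(blob.name, blob.size)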

There are a couple of different backup/restore options.

We can take a System State or file system backup, or select individual drives. Along with this, we can do bare metal or image backups.

For restores, we can restore at the image level to VHD or VMDK, or just restore individual files or folders from the bucket we selected for our backup.




I want to have a further look at the more advanced options over the coming days, but if you have any questions or suggestions, please ask.

Monday, 19 October 2015

Exam experience and thoughts

So I sat my exam on the 16th of October at my local exam center. Here are a few bits and pieces of advice that you have probably come across in countless places already.

First off, if you are not in the US, try to get an exam time that is not during working hours in the US. My exam began at 1:30 pm GMT and performance steadily grew worse over the next few hours. This will not be news for people who have already sat a VCAP-level exam, but even after reading a lot of exam experience posts, I was not quite prepared for how bad it actually was!

I had constant browser crashes and Flash plugin crashes, which in the heat of the exam are very annoying.

In regard to the exam blueprint, almost everything on it will be touched on; another obvious point, but you really need to know all the topics very well. I think I got about 13 or 14 of the 18 questions done. I may have got part of some of the later questions done, but time is very tight. The 210 minutes pass so quickly, and I may have been a bit lax at the start of the exam.

The tips I would give are to use both the C# client and the Web Client for different tasks.
Take notes of some key points in the questions so you are not flipping back and forth between the two screens (on that note, when will we get dual screens for these exams?! It's ridiculous trying to do this exam on a small 4:3 monitor...).
Use SSH as much as you can; console performance is terrible, and the screen will redraw itself even for pings.

Overall I found the exam pretty tough, mainly due to time and performance, but in the end it was worth it. I have learned a huge amount preparing for this exam.

I got my result about 3 hours later and was delighted to have passed. Barely.

Please feel free to ask any questions on what I used to study and my overall experience.

Friday, 14 August 2015

Logical Routing

By using the NSX platform, we have the ability to interconnect endpoints, either virtual or physical, deployed in separate logical L2 networks. This is possible because of the decoupling of the virtual network from the physical.

In the diagram below we can see a routed topology connecting two logical switches.



By deploying a logical router we can interconnect endpoints, either virtual or physical, belonging to separate L2 domains, or connect endpoints belonging to logical L2 domains with devices deployed in the external L3 physical environment. The first type is east-west communication, while the latter is north-south.


Here, we can see both east-west and north-south routing in a multi-tier application.

Logical Routing Components 

Here we will discuss centralized and distributed routing.

Centralized routing represents the functionality that allows communication between the logical network and layer 3 physical infrastructure.







Here we can see that centralized routing can be used for both east-west and north-south routed communications. However, east-west routing is not optimized in a centralized routing deployment, since the traffic is always hair-pinned from the compute rack towards the edge. This occurs even when two VMs in separate logical networks reside on the same physical host.


The deployment of distributed routing prevents hair-pinning for VM-to-VM routed communication by providing hypervisor-level routing. Each hypervisor installs the specific routing information in its kernel, ensuring a direct communication path even when the endpoints belong to separate IP subnets.


The DLR control plane is provided by the DLR Control VM. This VM supports dynamic routing protocols, such as BGP and OSPF. It exchanges routing updates with the next L3 hop device and communicates with the NSX Manager and Controller Cluster. HA is provided through an Active/Standby pair.

At the data-plane level there are DLR kernel modules (VIBs) that are installed on the ESXi hosts. The modules hold a routing information base (RIB) that is pushed down through the controller cluster. Route lookups and ARP lookups are performed by these modules. The kernel modules are equipped with logical interfaces (LIFs) connecting to different logical switches or VLAN-backed portgroups. Each LIF is assigned an IP address representing the default gateway for that logical segment, and also a vMAC address.
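To make the LIF/RIB idea concrete, here is a toy Python model of what the kernel module is conceptually doing: a longest-prefix route lookup against the RIB pushed by the controllers, followed by an ARP lookup on the matching LIF. This is purely illustrative; the subnets and MAC addresses are made up.

    import ipaddress

    # RIB pushed down by the controller cluster: prefix -> LIF
    rib = {
        ipaddress.ip_network("172.16.10.0/24"): "LIF1",
        ipaddress.ip_network("172.16.20.0/24"): "LIF2",
    }

    # Per-LIF ARP tables: VM IP -> VM MAC
    arp_tables = {
        "LIF2": {"172.16.20.11": "00:50:56:aa:bb:02"},
    }

    def route_lookup(dst_ip):
        """Longest-prefix match against the RIB, then an ARP lookup on the LIF."""
        dst = ipaddress.ip_address(dst_ip)
        matches = [net for net in rib if dst in net]
        if not matches:
            return None
        lif = rib[max(matches, key=lambda net: net.prefixlen)]
        mac = arp_tables.get(lif, {}).get(dst_ip)
        return lif, mac

    print(route_lookup("172.16.20.11"))   # ('LIF2', '00:50:56:aa:bb:02')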





The above shows the integration between all logical routing components to enable distributed routing.
  1. A DLR instance is created from NSX Manager, either by the UI or an API call, and routing is enabled using the protocol of choice, either OSPF or BGP.
  2. The controller leverages the control plane with the ESXi hosts to push the new DLR configuration, including LIFs and their IP addresses and vMAC addresses.
  3. OSPF/BGP peering is established between the edge and the DLR control VM.
  4. The DLR control VM pushes the IP routes learned from the NSX Edge to the controller cluster.
  5. The controller cluster is responsible for distributing routes learned from the DLR control VM across the hypervisors. Each controller node in the cluster distributes information for a particular logical router instance. 
  6. The DLR Routing kernel modules on the hosts handle the data path traffic for communications to the external network via the NSX edge.



The above shows the required steps for routed communication between two virtual machines connected to separate logical segments.

  1. VM1 wants to send a packet to VM2, which is connected to a different VXLAN segment, so the packet is sent to its default gateway interface, located on the local DLR.
  2. A routing lookup is performed at the local DLR, which determines that the destination subnet is directly connected to DLR LIF2. A lookup is performed on the LIF2 ARP table to determine the MAC address associated with VM2.
  3. An L2 lookup is performed in the local MAC address table to determine how to reach VM2; the original packet is then VXLAN encapsulated and sent to the VTEP of ESXi2.
  4. ESXi2 de-encapsulates the packet and performs an L2 lookup in the local MAC table associated with the given VXLAN segment.
  5. The packet is delivered to VM2. 

Local routing will always take place on the DLR instance running in the kernel of the ESXi host running the workload that initiates the communication.




Above is an example of ingress traffic from external networks.

  1. A device on the external network wants to communicate with VM1.
  2. The packet is delivered from the external network to the ESXi host running the Edge. The Edge receives the packet and performs a routing lookup.
  3. Both routing lookups, NSX Edge level and DLR level, are performed locally on ESXi2.
  4. The destination is directly connected to the DLR; the packet is VXLAN encapsulated and routed from the transit network to the correct VXLAN segment.
  5. ESXi1 de-encapsulates the packet and delivers it to VM1.





Above is an example of egress traffic to the external network.

  1. VM1 wants to reply to the external destination. The packet is sent to the default gateway, located on the local DLR.
  2. A routing lookup is performed at the local DLR, which determines the next hop: the NSX Edge on the transit network. This information was pushed to the DLR kernel by the controller.
  3. An L2 lookup is performed to determine how to reach the NSX Edge interface on the transit network. The packet is VXLAN encapsulated and sent to the VTEP of ESXi2.
  4. The Edge performs a routing lookup and sends the packet into the physical network to the next L3 hop. The packet is then delivered by the physical network.




Wednesday, 17 June 2015

Blueprint Part 5




These are the objectives for Section 2.2 of the blueprint.


Section 2 – Create and Manage VMware NSX Virtual Networks


Objective 2.2 – Configure VXLANs


Skills and Abilities


· Prepare cluster for VXLAN - Yellow-Bricks
· Configure VXLAN transport zone parameters - Wahl Network
· Configure the appropriate teaming policy for a given implementation - link





Just a few quick points on VXLAN.
A more detailed explanation can be found in NSX Components and Features.


VXLAN is an L2-over-L3 encapsulation technology. The original Ethernet frame is encapsulated with outer VXLAN, UDP, IP and Ethernet headers to ensure it can be transported across the network infrastructure.

A VXLAN segment can span multiple L3 networks, with full connectivity between VMs.
A VMkernel interface is used to communicate over VXLAN; everything is tunneled directly via these VMkernel interfaces.

The VXLAN kernel module encapsulates the packet in a VXLAN header and sends it out the VMkernel interface, the VXLAN tunnel endpoint (VTEP), to the VTEP on the destination host, which de-encapsulates it and hands it to the VM. This process is completely transparent to the VM.










Depending on the teaming type, hosts may have a single VTEP or multiple VTEPs.

The VXLAN Network Identifier (VNI) is a 24-bit identifier associated with each L2 segment created. It is carried inside the VXLAN header and is associated with an IP subnet, like a traditional VLAN. This VNI is the reason VXLAN can scale beyond the 4094 VLAN limitation.
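The 24-bit VNI is easy to see if you build the 8-byte VXLAN header by hand. A short Python sketch, following the header layout from RFC 7348 (a flags byte with the I bit set, reserved bits, the 24-bit VNI, and a final reserved byte):

    import struct

    def vxlan_header(vni: int) -> bytes:
        """Pack the 8-byte VXLAN header: flags, reserved, 24-bit VNI, reserved."""
        assert 0 <= vni < 2**24, "VNI must fit in 24 bits"
        flags = 0x08 << 24          # 'I' flag set -> the VNI field is valid
        return struct.pack("!II", flags, vni << 8)

    print(len(vxlan_header(5001)))  # 8 bytes of VXLAN header
    print(2**24)                    # 16,777,216 possible segments vs 4094 VLANs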


VTEPs are identified by the source and destination IP addresses used in the external IP header.


A minimum MTU of 1600 bytes is the recommendation for VXLAN; the outer Ethernet, IP, UDP and VXLAN headers add roughly 50 bytes of overhead to the standard 1500-byte frame.


Completed 17/06


Prepare a cluster for VXLAN



Configure VXLAN transport zone parameters



Configure the appropriate distributed virtual port group





Tuesday, 16 June 2015

Blueprint Part 4



These are the objectives for Section 2.1 of the blueprint.


Section 2 – Create and Manage VMware NSX Virtual Networks


Objective 2.1 – Create and Administer Logical Switches


Skills and Abilities


· Create/Delete Logical Switches
· Connect a Logical Switch to an NSX Edge
· Deploy services on a Logical Switch
· Connect/Disconnect virtual machines to/from a Logical Switch
· Test Logical Switch connectivity



Completed 16/06


Create/Delete Logical Switches


A Logical Switch can extend across multiple VDS. A given Logical Switch can provide connectivity for VMs that are connected to the Compute Clusters or to the Edge Cluster. A Logical Switch is always created as part of a specific Transport Zone; this implies that normally the Transport Zone extends across all the ESXi clusters and defines the span of a Logical Switch.
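Alongside the Web Client workflow, a Logical Switch can also be created through the NSX REST API as a virtual wire under a Transport Zone scope. The sketch below is how I understand the NSX 6.x call to work; the endpoint and XML body should be checked against the API guide for your version, and the NSX Manager address, credentials and scope ID are placeholders:

    import requests

    NSX_MANAGER = "https://nsxmgr.lab.local"   # placeholder NSX Manager address
    SCOPE_ID = "vdnscope-1"                    # placeholder Transport Zone (scope) ID

    body = """<virtualWireCreateSpec>
      <name>LS-Web-Tier</name>
      <description>Logical switch for the web tier</description>
      <tenantId>lab</tenantId>
    </virtualWireCreateSpec>"""

    resp = requests.post(
        NSX_MANAGER + "/api/2.0/vdn/scopes/" + SCOPE_ID + "/virtualwires",
        data=body,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "password"),
        verify=False,                          # lab only - self-signed certificate
    )
    print(resp.status_code, resp.text)         # the response contains the new virtual wire ID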



Completed 17/06


Connect logical switch to edge gateway


Deploy services on a logical switch



As I have no 3rd-party services defined, I am unable to complete this task. It seems to be straightforward: select the service and click OK.


Connect/Disconnect a VM from a logical switch in NSX 



Test logical switch connectivity in NSX


I had no issues testing connectivity between any of my hosts.




Blueprint Part 3



These are the objectives for Section 1.3 of the blueprint.


Section 1 - Install and Upgrade VMware NSX


Objective 1.3 – Configure and Manage Transport Zones 


Skills and Abilities


· Create Transport Zones - link
· Configure the control plane mode for a Transport Zone - link
· Add clusters to Transport Zones - link
· Remove clusters from Transport Zones - link



Completed 16/06


Create Transport Zones - In the simplest sense, a Transport Zone defines a collection of ESXi hosts that can communicate with each other across a physical network infrastructure.
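As a quick check, the configured Transport Zones (scopes) can also be pulled back over the NSX REST API. Again this is only a sketch based on the NSX 6.x endpoint as I understand it, with a placeholder manager address and credentials:

    import requests

    NSX_MANAGER = "https://nsxmgr.lab.local"   # placeholder NSX Manager address

    # GET the list of Transport Zones (vdn scopes) as XML.
    resp = requests.get(
        NSX_MANAGER + "/api/2.0/vdn/scopes",
        auth=("admin", "password"),
        verify=False,                          # lab only - self-signed certificate
    )
    print(resp.status_code)
    print(resp.text)                           # XML listing each vdnScope and its clusters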




Blueprint Part 2



These are the objectives for Section 1.2 of the blueprint.


Section 1 - Install and Upgrade VMware NSX


Objective 1.2 – Upgrade VMware NSX Components


Skills and Abilities


· Upgrade vShield Manager 5.5 to NSX Manager 6.x
· Upgrade NSX Manager 6.0 to NSX Manager 6.0.x
· Upgrade Virtual Wires to Logical Switches
· Upgrade vShield App to NSX Firewall
· Upgrade vShield 5.5 to NSX Edge 6.x
· Upgrade vShield Endpoint 5.x to vShield Endpoint 6.x
· Upgrade to NSX Data Security


I am going to come back to this part of the lab later, as I currently don't have vShield Manager deployed, and I think it is better to continue with the blueprint for now.