One of the most frequent questions I see internally on the VMware mailing lists is how to publish a vCloud Director portal on the Internet. I’ve personally gone through the dilemma of searching for such information and had no luck finding anything documented clearly, with configuration examples.
In this post I will cover both the architecture considerations and the technical configuration, drawing on my experience from a real-world implementation. Keep in mind, though, that there is no one solution that fits all requirements; there are, however, some common guidelines, and that’s what I will try to cover here.
As you already know, a vCloud Director cell provides two services for end users to self-provision and access VMs in a cloud. We will refer to the first service here as “HTTP” and the second one as “VMRC”. The former is responsible for providing the web portal, and the latter for accessing the remote console of a VM running on an ESX host, even if that VM has no networking configured.
Architecting your solution
There are two approaches here for publishing the vCD portal on the Internet. The first is to connect your HTTP and Console Proxy interfaces to the DMZ; the second is to put a reverse proxy in front of the cells to handle the HTTPS requests back and forth (but not the VMRC). I intend to blog about the reverse proxy solution in a future post, so we will focus here only on the first approach.
First things first: you need at least three network interfaces on the vCloud Director cells:
- the first one for the HTTP service
- the second one for the Console Proxy service
- the third one for the back-end communications with the management network, e.g. vCenter Server, ESX hosts, the shared NFS mount and so forth
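To make this concrete, here is a sketch of the three interface configuration files on a RHEL-based cell. The device names, the 192.168.25.x DMZ addresses and the 192.168.26.x internal perimeter subnet are assumptions chosen to match the examples used later in this post:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- HTTP service (DMZ-facing)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.25.11
NETMASK=255.255.255.0
# the cell's single default gateway lives on the DMZ
GATEWAY=192.168.25.1
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- Console Proxy service (DMZ-facing)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.25.12
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth2 -- back-end/management (no default gateway)
DEVICE=eth2
BOOTPROTO=static
IPADDR=192.168.26.11
NETMASK=255.255.255.0
ONBOOT=yes
```

Note that only eth0 carries a default gateway; eth2 deliberately has none, since its traffic will be handled by a static route instead.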
This is a diagram showing in detail the complete architecture that we will talk about throughout this article.
As you can see, the first and second network adapters are connected to the DMZ network, which is typically a port group set on your Management Pod ESX hosts and either segmented with a VLAN or backed by dedicated network cards, depending of course on your network and security infrastructure.
As we mentioned above, the third network card on the vCD cell will be communicating with the management network. You have another two options here:
- The first option is to connect this interface to the same management network port group on your ESX host, the one that also serves the vCenter Server, database, NFS, etc.
- The second option is to connect this interface to a new port group/VLAN that is routed through a firewall to your management network.
The reasoning behind the second option is that if your vCD cell is compromised from the Internet, the intruder will still face another firewall before reaching your internal management network. In this article I will adopt the second option, since it is the more secure architecture.
Here is an example of how the networking would look on a Management Pod ESX host.
The Linux Routing
As illustrated in the diagram (and the vSS/vDS screenshots), we have three different networks for routing the traffic. The first one is the external perimeter network (typically the DMZ), the second network is the internal perimeter network, and the third one is the management network for the vSphere substrate. The common question or confusion here is around the routing. How would the Linux OS decide on the routing paths to the upstream DMZ and the downstream management traffic? And the answer to that is just some basic static routing. Let’s have a closer look.
Firstly, you need to set the IP addresses facing the DMZ with a default gateway. In our case here, the IPs for the two cells are 192.168.25.11/12/21/22, and their default gateway is 192.168.25.1. Secondly, for the management network, you need to set the IP address without a default gateway and then set a static route for that network. Note that we need a persistent entry for that route in order to retain the configuration should the cell be rebooted or shut down for any reason. To do that, you need to add the following entries to the /etc/sysconfig/network-scripts/route-eth2 file:
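For illustration, assuming the management network is 192.168.30.0/24 and the internal perimeter firewall's IP is 192.168.26.1 (both values are assumptions, not taken from the diagram), the route file would contain:

```
ADDRESS0=192.168.30.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.26.1
```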
Let’s quickly explain that. First of all, the file name mentioned above may be different in your case if you are using another order for the vNIC assignment (also note that we are using a RHEL distribution here). In my case, eth0 and eth1 are assigned to the HTTP and Console Proxy services respectively. The third vNIC, eth2, is set for the management network, which we are setting the static route for here. The entries themselves are self-explanatory: GATEWAY0 is the IP address of the firewall that will route our traffic from the internal perimeter network to the management network, while ADDRESS0 and NETMASK0 define the destination management network we are routing the traffic to.
After setting these entries, you will need to restart the networking service and then test the connectivity, e.g. by pinging the vCenter Server IP address from the cell’s shell (sounds like a biology lesson!). After that, I recommend rebooting the cell and checking that your configuration is still persistent, to avoid any issues in the future.
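On a RHEL cell, those steps might look like the following commands, to be run on a live cell (the vCenter IP 192.168.30.10 and the management subnet are assumptions for illustration):

```shell
# restart networking so the persistent route takes effect (RHEL init-script style)
service network restart

# confirm the static route to the management network is in the routing table
route -n

# test connectivity from the cell to the vCenter Server
ping -c 3 192.168.30.10
```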
Okay, so now that we have the network all set for our traffic flow, you will need to install the vCloud Director software. When you run the configuration script, you will be asked which NICs you would like to assign to which service (HTTP and VMRC). Just make sure you set that properly.
The SSL Certificates
Depending on your case, you might be using signed or self-signed certificates for your vCD portal. Each has a slightly different configuration approach that I intend to blog about in detail in the future. Meanwhile, I’d highly recommend that you check out this excellent blog post by Chris Colotti for the high-level considerations on the certificates, as well as some great tips on cell load balancing.
The Load balancer
First of all, you will need to have two public IPs assigned to you and set on your public DNS servers to resolve to the relevant host names. For example, I’m using these two entries in my diagram:
- vcloud.provider.com -> 126.96.36.199
- vmrc.provider.com -> 188.8.131.52
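If you run BIND for the public zone, for example, the two records would look something like this (the fragment is illustrative only, reusing the fictitious IPs above):

```
; fragment of the provider.com zone
vcloud  IN  A  126.96.36.199
vmrc    IN  A  188.8.131.52
```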
Of course, these are fictitious IP addresses, used just to show a real-world configuration example end to end. Now it’s time to configure your load balancer to distribute the load for each IP address across the cells, as illustrated in the diagram. A simple traffic flow for the HTTP service would be as follows:
- The end user fires up their browser and points it to vcloud.provider.com.
- The hostname gets resolved to the public IP 126.96.36.199 and the request hits the load balancer's external interface.
- The LB then distributes the traffic through its internal interface to the cells' HTTP IPs (192.168.25.11 and 192.168.25.21).
Note that in our example here we are using SSL pass-through on the LB.
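This post does not prescribe a specific load balancer product, but as a hedged sketch, an SSL pass-through setup expressed in HAProxy terms could look like this, using the cell IPs from the diagram (the .12/.22 backend addresses are assumed to be the Console Proxy interfaces):

```
# haproxy.cfg fragment -- TCP mode so the SSL session passes through untouched
frontend vcd_http
    bind 126.96.36.199:443
    mode tcp
    default_backend vcd_http_cells

backend vcd_http_cells
    mode tcp
    balance source              # keep a given client pinned to the same cell
    server cell1 192.168.25.11:443 check
    server cell2 192.168.25.21:443 check

frontend vcd_vmrc
    bind 188.8.131.52:443
    mode tcp
    default_backend vcd_vmrc_cells

backend vcd_vmrc_cells
    mode tcp
    balance source
    server cell1 192.168.25.12:443 check
    server cell2 192.168.25.22:443 check
```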
If you are wondering about the VMRC traffic flow and how it differs from the HTTP service, have a look at this great two-part blog post by Michael Hines.
Setting the public address fields in the vCD admin panel
Now to the important part that most of us forget to set. You will have to configure the relevant host names in your “Public Addresses” section in your vCloud Director Administration panel.
As shown in the above screenshot, there are three URLs:
- The vCD public URL: this will be reflected in the Organization URLs that your customers use to access your cloud.
- The vCD public console proxy address: this will be used when a customer clicks on the VM console to access their VM on the web. If this is not set, the cell will use the private IP address, which will obviously fail for a user accessing the portal over the Internet.
- The vCD public REST API base URL: this will be used for all the functions that depend on the APIs, one of which is end-user uploads of ISOs/templates to the cloud. This one gave me a bit of grief: I didn’t set the entry properly and had all my uploads failing (again, because the cell will use the private IP if this field is empty).
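Using the host names from our example, the three fields might be filled in roughly as follows (the exact field labels and URL paths vary slightly between vCD versions, so treat these values as illustrative):

```
vCD public URL:                   https://vcloud.provider.com/cloud
vCD public console proxy address: vmrc.provider.com
vCD public REST API base URL:     https://vcloud.provider.com
```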
The Network Ports
Last but not least, you have to understand the network ports that are used in your entire vCloud environment. I published a detailed KB diagram earlier, which you can grab from here: http://kb.vmware.com/kb/1030816/
You will need to work with your network/security team to open the ports between your different zones. Please note that (at the time of writing this post) there is a small mistake in the ports listed in the diagram: the vCD cells and ESX hosts do not communicate on ports 22 and 80. Also, vCD talks to vSM on port 443, and vSM talks to ESX on port 443. This will be fixed in the KB very soon.
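To give your security team something concrete to start from, here is a hedged iptables sketch for the firewall between the internal perimeter and the management network. The subnets, host IPs and port list are assumptions, and the list is deliberately partial; verify every rule against the KB diagram before deploying:

```shell
# cells (internal perimeter) -> vCenter Server over HTTPS
iptables -A FORWARD -s 192.168.26.0/24 -d 192.168.30.10 -p tcp --dport 443 -j ACCEPT

# console traffic from the cells -> ESX hosts
iptables -A FORWARD -s 192.168.26.0/24 -d 192.168.30.0/24 -p tcp --dport 902:903 -j ACCEPT

# cells -> the shared NFS mount
iptables -A FORWARD -s 192.168.26.0/24 -d 192.168.30.20 -p tcp --dport 2049 -j ACCEPT
```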
I hope you found this article useful in planning and publishing your portals. Happy vCloud’ing!