This is my attempt to “eat my own dog food” and use a handful of VMware products/technologies in my current role in PSO. It is also my way of making sense of the entire Cloud Application Platform. For quite some time now I’ve been focused on the infrastructure layer of the “cake”, but I thought it would be good to start exploring other areas in VMware’s very rich offerings.
First, let’s start with the holistic view.
This is an overall illustration showing the products mentioned in the title and how they relate to each other. In the next few sections I’ll talk briefly about my experience with each item and then wrap up with a conclusion on how I was able to benefit from this micro-project in the real world. Let’s get started!
Making the Wave
WaveMaker (WM) is one of the products that really impressed me at first sight. Coming from a web development background, I can tell you that a tool like this would have been instrumental for me when I used to build complex web apps in the old days. You literally drag and drop items here and there and voila! You have a fully functioning web application.
I used WM here to build the application interface and multi-tenancy (i.e. accounts and authorization) as a starting point. I didn’t write a single line of code in this part; everything is visually available for you to drag, drop and run. That’s it. As you can imagine, this was the easiest part of the whole project, and I wouldn’t be exaggerating if I told you that it was done in a matter of minutes.
A sip of Java
Now to the tricky part. More than 15 years ago I used to call myself a programmer. I used Basic, Pascal, C and C++, and then figured that it wasn’t really my thing. A few years later I found myself dragged by force into web development and hence started learning PHP with MySQL. Again, it wasn’t my area of interest, so I stopped and promised myself that it would be my last attempt to learn anything in the programming/scripting world. Of course, I was wrong.
I started learning Java about two weeks ago! It was a very, very fresh start since it had been years since I last practiced writing code. Despite that, I was able to find my way through building some Java methods and using them in WM. Let me explain in a bit more detail here.
WM comes with a quite rich set of “Services” that you can inject into your project, one of which is Java. All you need to do is insert a new Java service in your project, write your own methods and then call them within the app. Since I was quite fresh with Java and as good as a newbie to this world, I found the Java SDKs of vCloud Director to be all I needed! I also used the SpringSource Tool Suite (STS) here to test my Java code before porting it to WM. Of course I could have used any other IDE like Eclipse or NetBeans, but I just wanted to stick with the VMware tools. Note that switching between WM and STS is quite easy and straightforward. You just need to create a new Java project in STS with the WM sources, do your Java coding/testing, and then go back to WM and “refresh” your service to pick up anything new you have applied.
Pushing to the Cloud
A working app without a solid foundation to run on is useless. There are many options for where to run my app, but what could be better than the amazing Cloud Foundry? You literally need to type a “vmc push” command and your app will be in the cloud in a minute. Now, how cool is that?
Prior to pushing my app to CF, I wanted to have a taste of the MicroCloud as well. It’s a complete CF platform running in a VM! You just need to download it and run it in Workstation/Fusion and then test your application exactly as if you were pushing and running it on CF.com. After doing all my trials on the MicroCloud, I pushed my app to CF.com, created a couple of instances with another one-liner and that’s all. My app is running now in the cloud http://cloudwave.cloudfoundry.com
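For those who haven’t seen the vmc CLI before, the whole workflow described above boils down to a few commands. This is only a sketch; the Micro Cloud Foundry API URL and the app name are placeholders, not the real ones from my project:

```shell
# Point the CLI at the local Micro Cloud Foundry instance first
# (hostname is a placeholder), then log in.
vmc target http://api.mycloud.cloudfoundry.me
vmc login

# Push the app from its project directory; vmc prompts for
# the app name, URL and memory reservation.
vmc push cloudwave

# When ready for the public cloud, retarget and push again:
vmc target http://api.cloudfoundry.com
vmc push cloudwave

# The "one-liner" to scale out to a couple of instances:
vmc instances cloudwave 2
```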
Putting it all together
When I first started creating this app, it was really just for fun. I call some Java methods in the app to go and grab specific information from a vCloud Director environment and return it in the form of data grids. Now, these vCloud environments are actually real public clouds that I’ve built for customers, and they were kind enough to keep my access to them. The application is multi-tenant as I explained earlier, so I can log in either as an Admin to view all my clouds, or as a vCloud owner who can only see his/her own environment and pull information from it. At the time of writing these lines, I have three public clouds and one private/home cloud registered in the app, from which I can live-grab information and show it back in my UI. Here is a simple screenshot.
Now to my favorite part of the whole article. I was doing a vCloud engagement this week for one of my customers, and while we were in the middle of the discussions they challenged me on how easy or hard it is to leverage the vCloud APIs for integrating with their own portal. In fact, I always get this question in my vCD engagements, especially with Service Providers, and I normally talk at a high level since it’s not in the scope of the project. This time, I had a better story to tell my customer. In fact, I didn’t even talk; I just fired up my browser, opened my CloudWave application online, logged in, and then pulled live information from the customer’s cloud into the application. I then looked back at them and said: “I built this app with no programming experience, and just during this week in my spare time!”
Every single thing you’ve seen in the diagram or read in this article is a VMware product or technology. Even better, it’s FREE! You can go ahead and download WM or STS for free and play with them, and you can download the MicroCloud and run it for free on your PCs/Macs. You can register an account, again for free, on CloudFoundry.com and start pushing your apps to the cloud. You can download and use the vCloud SDKs (be it Java, PHP or .NET) and start coding your own apps, leveraging the examples included in the kits.
I took it up a notch this morning and decided to go crazy. I’ve built my own Cloud Foundry platform from scratch on one of the public vClouds that I’ve built for a customer. It’s up and running at the time of this writing, and I’ve just pushed my very first app to it. Stay tuned for more details soon. http://www.myvcap.com/
Those two diagrams have been sitting on my PC for ages and I thought it was time for them to see the light. As a perfectionist by nature, I think they are probably the worst diagrams I’ve designed to date! The reason is that I am probably missing way too many items and services. It’s quite a rich topic when it comes to the management and monitoring of the cloud (be it public or private), and to top that, VMware is coming out with something revolutionary and game-changing every day. Have a look at AppDirector on YouTube, for example, or Google what is coming in vCenter Operations Enterprise 5.0! Mind-blowing stuff!
So, that being said, those diagrams are far from being complete or perfect. Just accept them as they are and I will keep trying to adjust and complete the missing pieces.
A few notes on the diagrams:
- As you’ll notice, there are two diagrams here covering the same topic, but one focuses on the private cloud and the other on the public side.
- There are many items that can overlap between the two diagrams. You can mix and match whatever you see as relevant to your environment. Things are organized the way they are just to fit everything nicely into the limited A3 size.
- In the Public Cloud diagram I focused on the portal exposure to the Internet, since it’s a (somewhat) complex topic and requires proper illustration (I blogged about it in detail here).
- For the Private Cloud, I focused more on the management and monitoring aspect, but make no mistake, these are just as important for a Service Provider in a public cloud! Again, I’m just trying to fit so many things into so little space.
That’s it from me today.
Public Cloud Management Pod:
Private Cloud Management Pod:
I’ve received this question from one of my readers, who wanted an RDP-like mechanism to access a Linux GUI from a PC or a mobile device (e.g. iPad or iPhone). In his environment he plans to provide some GUI-based applications to his end-users, and these must be accessible remotely from Windows and iOS. The vCD portal, for some reason, is not a preferred method in his use case.
I’ve tried to simulate this in one of my public vCloud accounts, and I wanted to share these two methods in case they are of interest to others.
Remote access from Windows machines:
I used two free tools here to access the Linux GUI remotely. The first one is my favorite SSH client, PuTTY. This is to initiate the SSH connection to your VM in the cloud. The second tool is called Xming, and we will use it here to forward the X Window traffic. You need to download and install both the Xming and Xming-fonts utilities.
After the installation is done, all you need to do is fire up your PuTTY client and set up your SSH connection as follows:
In the Connection -> SSH -> X11 panel, click the “Enable X11 forwarding” checkbox and put “localhost:0” in the X display location. Now start the SSH connection, and once you are at the Linux shell you can either launch the GUI application directly (e.g. firefox, as shown in the screenshot below) or launch a complete Gnome/KDE session by issuing the commands gnome-session or startkde respectively.
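If you prefer the command line over PuTTY’s GUI, the equivalent with a plain OpenSSH client would be something like the following. This is only a sketch; the username and hostname are placeholders:

```shell
# -X enables X11 forwarding for this session (the Xming server on the
# Windows side receives the forwarded display traffic).
ssh -X user@myvm.mycloud.example.com

# Then, from the remote Linux shell, launch a single app in the background...
firefox &

# ...or bring up a full desktop session instead:
gnome-session
```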
Remote access from an iOS device (iPhone or iPad):
In this method I used a paid app called “iSSH” since I already had it installed on my iPhone/iPad. You simply need to download the app to your iOS device and set up your SSH connection. Once you are at the Linux shell, you also need to run your GUI app and then tap the “X” icon to see the display. Make sure to enable the X Window support in the app after you install it, since it is switched off by default.
The iPhone Screenshots:
The iPad Screenshots:
So what is the catch here?
As you may have already noticed, you can only use these two methods if your VMs are connected to an external network (direct or routed). If your VMs are isolated, there is no way to access them remotely. Your only option will be the native VMRC through the vCloud portal.
One of the very frequent questions I see internally on the VMware mailing lists is how to publish a vCloud Director portal on the Internet. I’ve personally gone through the dilemma of searching for such information and had no luck finding anything documented in a clear way with configuration examples.
In this post I will cover both the architecture considerations and the technical configuration, based on my experience in a real-world implementation. You have to keep in mind, though, that there is no one solution that fits all requirements; however, there are always some common guidelines, and that’s what I will try to cover here.
As you already know, a vCloud Director cell provides two services for end users to self-provision and access VMs in a cloud. We will refer to the first service here as “HTTP” and the second one as “VMRC”. The former is obviously responsible for providing the web portal, and the latter for accessing the remote console of a VM running on an ESX host, even if the VM has no networking configured at all.
Architecting your solution
There are two approaches here for publishing the vCD portal on the Internet. The first is to connect your HTTP and Console Proxy interfaces to the DMZ; the second is to put a reverse proxy in front of the cells to handle the HTTPS requests back and forth (but not the VMRC). I intend to blog about the reverse proxy solution in a future post, so we will focus here only on the first approach.
First things first, you need to have at least three network interfaces on the vCloud Director cells:
- the first one for the HTTP service;
- the second one for the Console Proxy service;
- the third one for the back-end communications with the management network, e.g. vCenter Server, ESX hosts, the NFS shared mount and so forth.
This is a diagram showing you in detail the complete architecture that we will talk about throughout this article.
As you see, we have the first and second network adapters connected to the DMZ network, which is typically a port group set on your Management Pod ESX hosts, either segmented with a VLAN or on dedicated network cards, depending of course on your network and security infrastructure.
As we mentioned above, the third network card on the vCD cell will be communicating with the management network. You have another two options here:
- The first option is to connect this to the same management network port-group in your ESX host, the one that is also serving the vCenter Server, database, NFS ..etc.
- The second option is to connect this interface to a new port-group/VLAN that is being routed through a Firewall to your management network.
The reason for the second option is that if your vCD cell is compromised from the Internet, the intruder will still be facing another firewall before reaching your internal management network. In my article here I will adopt the second option, since it is the more secure architecture.
Here is an example of how the networking would look on a Management Pod ESX host.
The Linux Routing
As illustrated in the diagram (and the vSS/vDS screenshots), we have three different networks for routing the traffic. The first one is the external perimeter network (typically the DMZ), the second network is the internal perimeter network, and the third one is the management network for the vSphere substrate. The common question or confusion here is around the routing. How would the Linux OS decide on the routing paths to the upstream DMZ and the downstream management traffic? And the answer to that is just some basic static routing. Let’s have a closer look.
Firstly, you need to set the IP addresses facing the DMZ with a default gateway. In our case here, the IPs for the two cells are 192.168.25.11/12/21/22, and their default gateway is 192.168.25.1. Secondly, for the management network, you need to set the IP address without a default gateway and then set a static route for that network. Note that we need a persistent entry for that route in order to retain the configuration should the cell be rebooted or shut down for any reason. To do that, you need to add the following entries to the /etc/sysconfig/network-scripts/route-eth2 file:
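As a hedged example, the file might look like this. All of the addresses below are assumptions for illustration only (an internal perimeter firewall at 192.168.24.1 and a management network of 192.168.26.0/24); substitute your own subnets:

```shell
# /etc/sysconfig/network-scripts/route-eth2
# Persistent static route from the cell's management-facing vNIC (eth2)
# to the management network behind the internal firewall.
ADDRESS0=192.168.26.0      # destination management network (assumed)
NETMASK0=255.255.255.0     # destination netmask (assumed)
GATEWAY0=192.168.24.1      # internal firewall IP on the perimeter network (assumed)
```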
Let’s quickly explain that. First of all, the file name mentioned above may be different in your case if you are using another order for the vNIC assignment (also note that we are using a RHEL distribution here). In my case, eth0 and eth1 are assigned to the HTTP and Console Proxy services respectively. The third vNIC, eth2, is set for the management network which we are setting the static route for here. The other entries are self-explanatory: GATEWAY0 is the firewall IP address that will route our traffic from the internal perimeter network to the management network, while ADDRESS0 and NETMASK0 define the destination management network we are routing the traffic to.
After setting these entries, you will need to restart the networking service and then test the connectivity, e.g. by pinging the vCenter Server IP address from the cell shell (sounds like a biology lesson!). After that, I recommend rebooting the cell to verify that your configuration is persistent, to avoid any issues in the future.
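On RHEL, that restart-and-verify sequence would look something like the following. The management subnet and the vCenter Server IP here are assumptions, not taken from the actual environment:

```shell
# Apply the new network configuration (RHEL 5/6 style).
service network restart

# Confirm the static route is present in the routing table.
ip route show

# Test reachability of the management network from the cell
# (192.168.26.10 is an assumed vCenter Server IP).
ping -c 3 192.168.26.10
```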
Okay, so now that we have the network all set for our traffic flow, you will need to install the vCloud Director software. When you run the configuration script, you will be asked which NICs you would like to assign to which service (HTTP and VMRC). Just make sure you set that properly.
The SSL Certificates
Depending on your case, you might be using a signed or a self-signed certificate for your vCD portal. Each has a slightly different configuration approach that I intend to blog about in detail in the future. Meanwhile, I’d highly recommend that you check out this excellent blog post by Chris Colotti for the high-level considerations on the certificates, as well as some great tips on the cell load balancing.
The Load balancer
First of all, you will need to have two public IPs assigned to you and set on your public DNS servers to resolve to the relevant host names. For example, I’m using these two entries in my diagram:
- vcloud.provider.com -> 184.108.40.206
- vmrc.provider.com -> 220.127.116.11
Of course these are fictitious IP addresses, just to show a real-world configuration example end to end. Now it’s time to set your load balancer to distribute the load for each IP address across the cells as illustrated in the diagram. If we look at a simple traffic flow for the HTTP service, it would be as follows:
- The end-user fires up their browser and points it to vcloud.provider.com
- The hostname gets resolved to the public IP 184.108.40.206 and hits the load balancer’s external interface.
- The LB then distributes the traffic through its internal interface to the cells’ IPs (192.168.25.11 and 192.168.25.21).
Note that in our example here we are using SSL pass-through traffic on the LB.
If you are wondering about the VMRC traffic flow and how it differs from the HTTP service, you can have a look at this great two-part blog post by Michael Hines.
Setting the public address fields in the vCD admin panel
Now to the important part that most of us forget to set. You have to configure the relevant host names in the “Public Addresses” section of your vCloud Director Administration panel.
As shown in the above screenshot, there are three URLs:
- The vCD public URL: this will be reflected in the Organization URLs set for your customers’ access to your cloud.
- vCD public console proxy address: this will be used when a customer clicks on the VM console to access their VM on the web. If this is not set, the cell will use the private IP address, which will obviously fail for a user accessing the portal from the Internet.
- vCD public REST API base URL: this will be used for all the functions depending on the APIs, one of which is the end-user upload of ISOs/templates to the cloud. This one gave me a bit of grief when I didn’t set the entry properly and had all my uploads failing (again, because the cell will use the private IP if this field is empty).
The Network Ports
Last but not least, you have to understand the network ports that are used in your entire vCloud environment. I have published a detailed KB diagram earlier that you can grab from here: http://kb.vmware.com/kb/1030816/
You will need to work with your network/security team to open the ports between your different zones. Please note that (at the time of writing this post) there is a small mistake in the ports listed in the diagram: the vCD cells and ESX hosts do not communicate on ports 22 and 80. Also, vCD talks to vSM on port 443, and vSM talks to ESX on port 443. This will be updated very soon in the KB.
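If your internal firewall happens to be Linux-based, the cell-to-management rules could be sketched in iptables form as below. This is only an illustrative sketch; the zone subnets are assumptions, and you should always derive the authoritative port list from the KB diagram above:

```shell
# Allow the cells' management-facing leg (assumed internal perimeter
# 192.168.24.0/24) through the inner firewall to the management network
# (assumed 192.168.26.0/24) on HTTPS, which vCD uses to reach vCenter,
# vSM and the ESX hosts.
iptables -A FORWARD -s 192.168.24.0/24 -d 192.168.26.0/24 \
         -p tcp --dport 443 -j ACCEPT

# Drop everything else from the perimeter towards management.
iptables -A FORWARD -s 192.168.24.0/24 -d 192.168.26.0/24 -j DROP
```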
I hope you found this article useful in planning and publishing your portals. Happy vCloud’ing!
Leveraging the vSphere 5.0 NetFlow support to monitor and report traffic data in a Service Provider vCloud environment
One of the cool networking features in vSphere 5.0 is the built-in support for NetFlow. This was first introduced in VI 3.5 as an experimental feature and then vanished, for some reason, in vSphere 4.x.
I’ve already blogged about NetFlow with VI 3.5 in this blog post, where I explained how you can configure it from the command line on an ESX host to push the NetFlow data to an external collector/analyzer.
The cool thing is that it is now fully supported in vSphere 5.0 and can be configured right from the GUI. Let’s have a quick look at this first.
Configuring NetFlow on the vNetwork Distributed Switch
1 – Go to the networking panel in vSphere 5.0, choose the vNetwork Distributed Switch (vDS) you want, then right-click and choose “Edit Settings”.
2 – Go to the “NetFlow” tab and fill in the required fields as shown in the screenshot below.
3 – The first field is the NetFlow collector/analyzer IP address and the port it will be listening on. The second field is the vDS IP, which I must say can cause a lot of confusion. This doesn’t have to be a real IP address; it’s more of an identifier, if you will. This IP address will *not* be attached to any ESX vNIC. Think of it as sending an email to someone with your name in the sender field so that the recipient knows where it’s coming from. It’s important in our case here because you will probably have many ESX hosts in the cluster, each sending data to the same collector. The unified IP address is meant to tell that collector that all this data is coming from the same source/router rather than from different ones.
4 – The rest of the settings are self-explanatory and aimed at tweaking the NetFlow exporting settings. Just keep in mind that if you set the “Sampling rate” to “0”, sampling will be disabled and you will be pushing all the traffic stats. This of course gives the most accurate results, but at the same time it may require more resources from the ESX hosts in a busy network environment.
5 – In our case here, we are typically selecting the “External Networks” vDS, which carries the external traffic of the customers/tenants in the SP (typically out to the Internet or their site-to-site VPN).
6 – The last step is to enable NetFlow monitoring on the designated ports, uplinks or port groups. In our case here, I enabled the monitoring on a port group which is in effect an external network for a pool of customers.
Use cases for a vCloud Service Provider
Below are some of the use cases for a Service Provider applying this in their vCloud environment:
- Traffic Reporting: some end-customers would like to have “live” traffic statistics for their cloud. With something like “NetFlow Analyzer” from ManageEngine (one of my all-time favorites), the SP can facilitate that for its customers. Scheduled reports can also be set up to push the traffic statistics to each tenant based on their Organization Network.
- Bandwidth Utilization: a customer may want a capped bandwidth for his/her cloud’s external networking, or at least a notification if they exceed a specific quota. With NetFlow (and again, NetFlow Analyzer) you can set specific thresholds so that the customer gets notified if they exceed a certain bandwidth per day/week/month, etc.
- Security: through NetFlow properties (protocol, source and destination ports), a Service Provider can generate some security-related reports, e.g. for suspicious virus traffic. If you want more accurate results, you can also leverage the Port Mirroring feature in vSphere 5.0 (I’ll talk about it in a future post).
- DoS attacks: with NetFlow, a Service Provider can easily identify internal DoS attacks that may be launched between one tenant and another across Organization Networks or shared External Networks.
What about the Enterprises and Private Clouds?
Similar use cases can also be considered in an enterprise or a private cloud. For example, a developer may want to analyze the internal or external traffic of his applications in the cloud. A networking/security team may want visibility into a cloud environment for troubleshooting, security, auditing (you name it), and with a tool like this, it’s quite easy and very effective to achieve that.
Leveraging the NetFlow support in vSphere 5.0 with third-party collectors/analyzers can be of great benefit to any Service Provider. I’ve personally managed various ISPs in the old days, and I know for a fact that with a simple protocol like NetFlow I was able not only to get the required visibility into my environment, but also to have a very effective tool to monitor and troubleshoot any problems. Of course, you can still leverage expensive high-end solutions like IPSs or traffic shapers, with a lot of administration and redesigning of your network infrastructure, to get inter-VM traffic visibility, but everything has its limits at the end of the day.