Adding value to SaaS

Software as a service is an entirely different animal from IaaS or PaaS. Implementing the latter two can be done (almost) with platforms available off the shelf and by engaging a few consultants: grab your favorite cloud automation platform (pick any: Eucalyptus, [Elastic|Open|Cloud]stack, Applogic, Abiquo, even HP SCA), throw in volume servers and storage, host it all in a reliable datacenter and you are good to go.

On the other hand, SaaS is something you have to:

  1. Conceive. IaaS and PaaS are self-explanatory (infrastructure and platform: virtual computing and database/application engine/analytics for rent); SaaS is… everything: from cloud storage to CRM for MDs.
  2. Implement: SaaS is not sold in shops. You have to develop code. This means finding talented and intelligent humans to write code, and keeping them with you throughout the project lifecycle.
  3. Market: Finding the right market for your SaaS is just as important as building it. SaaS is a service; services are tailored for customers and come in different sizes, colors and flavors. One SaaS to rule them all does not work.
  4. Sell: Will you go retail and address end customers directly? Advertising and social media are the road to go. Wholesale? Strike a good revenue-sharing deal with somebody who already has customers within your target group, say, a datacenter provider or a web hoster.
  5. Add some value to your SaaS. Cloudifying a desktop application brings little value to your SaaS product: it’s as good as running it on the desktop; the sole added value is ubiquitous access over the web. Want some real value? Eliminate the need to do backups. Integrate with conventional desktop software. Do auto-sync. Offer break-away capability (take your app and data and host them somewhere else).
Let’s take two hypothetical examples: Cloud storage and CRM for doctors.
Cloud storage is a good offering for customers seeking a secure repository, accessible from everywhere. Let’s consider two approaches. The first approach would be:
  • High-end branded storage array with FC and SSD disks
  • 5-minute snapshots, continuous data protection
  • FTP and HTTP interface
  • Disk encryption
  • Secure deletion
The second approach would be:
  • WebDAV interface
  • Data retention
  • Daily replication
  • Auto sync with customer endpoints
  • Integrated content search

What’s wrong with the first approach? It is typical of the IT mindset: Offer enterprise IT features, like OLTP/OLAP-capable storage to the cloud. Potential customers? Enterprises that need to utilize high-powered data storage. Well, if you are an enterprise, most likely you’d rather keep your OLTP/OLAP workloads in house, wouldn’t you? Why bother?

The second approach offers services that are not delivered by your typical enterprise IT machinery. They add value to a cloud storage service and, at the end of the day, they are the kind of features deemed too expensive or complicated to implement in house. Potential customers? Enterprises that have not implemented these services but would seriously consider renting them.

Let’s now consider a cloud CRM for doctors. What would be some value-added features for private MDs, apart from a database with customer names and appointment scheduling? I can think of a few:

  • Brief medical history of patient delivered to the doctor’s smartphone/pad. Can save lives.
  • List of prescribed medicines with direct links to medicare/manufacturer site. Patients can forget or mix up their prescribed drugs; computers never forget.
  • Videochat with patient.
  • Patient’s residence on Google Maps, with directions on how to get there.

A quick tour of Cloudstack

Cloud.com, now a part of Citrix, has developed a neat, compact, yet powerful platform for cloud management: enter cloudstack, a provisioning, management and automation platform for KVM, Xen and VMware, already trusted for private and public cloud management by companies like Zynga (got FarmVille?), Tata Communications (public IaaS) and KT (a major Korean service provider).

Recently I had the chance to give cloudstack a spin in a small lab installation with one NFS repository and two Xenservers. Interested in how it breathes and hums? Read on, then.

Cloudstack was installed in a little VM in our production vSphere environment. Although it does support vSphere 4.1, we decided to try it with Xen and keep it off the production ESX servers. Installation was completed in 5 minutes (including the provisioning of the Ubuntu 10.04 server from a ready VMware template) and cloudstack came to life, waiting for us to log in:

The entire interface is AJAX – no local client. In fact, cloudstack can be deployed in a really small scale (a standalone server) or in a full-blown fashion, with redundant application and database servers to fulfill scalability and availability policies.

Configuring cloudstack is a somewhat lengthier process and requires reading the admin guide. We decided to follow the basic networking paradigm, without VLANs, and to use NFS storage for simplicity. Then it was time to define zones, pods and clusters, and primary and secondary storage. In a nutshell:

  • A zone is a datacenter. A zone has a distinct secondary storage, used to store boot ISO images and preconfigured virtual machine templates.
  • A pod is a group of servers and storage inside a zone that share the same network segments.
  • A cluster is a group of servers with identical CPUs (to allow VM migration) inside a pod. Clusters share the same primary storage.
We created a single zone (test zone) with one pod and two clusters, each cluster consisting of a single PC (one CPU, 8 GB RAM) running Xenserver 5.6. Configuring two clusters was mandatory, since the two Xenservers were of different architectures (Core 2 and Xeon). After the configuration was finished, logging in to Cloudstack as administrator brings us to the dashboard.

In a neat window, the datacenter status is clearly shown, with events and status information in the same frame. From here an administrator has full power over the entire deployment. This is a host (processing node in Openstack terms) view:

You can see the zone hierarchy in the left pane and the virtual machines (instances) running on the host shown in the pane on the right.

What an administrator can do is more or less what Xencenter and vCenter offer: create networks and virtual machine templates, configure hosts and so on. Let’s see what the cloudstack templates look like:

Cloudstack comes with some sample templates and internal system virtual machine templates. The latter are used internally, but more on them later. The administrator is free to upload templates for all three hypervisor clans: qemu images for KVM, .ova files for VMware and VHD files for Xenserver. We created one Windows 2008 server template quite easily, by creating a new VM in Xencenter, installing Xentools and then uploading the VHD file to Cloudstack:

As soon as the VHD upload is finished, it is stored internally in the Zone secondary storage area and is ready to be used by users (or customers).

What does cloudstack look like from the user/customer side? We created a customer account (Innova) and delegated access to our test zone:

Customers (depending on their wallet…) have access to one or more pods and can create virtual machines freely, either from templates or from ISO boot images they have access to, without bringing cloudstack administrators into the loop. Creating a new virtual machine (instance) is done through a wizard. First, select your favorite template:

Then, select a service offering from preconfigured sizes (looks similar to EC2?)

Then, select a virtual disk. A template comes with its own disk (in our case the VHD we uploaded earlier), but you can add more disks to your instances. This can also be done after the instance is deployed.

…and after configuring the network (step 4), you are good to go:

The template will be cloned to your new instance, which then boots up, and from this point on you can log in through the web browser – no RDP or VNC client needed!

It looks like magic: surely an application server alone cannot do all this, right? Correct. Cloudstack silently and automagically deploys its own system VMs, which take care of template deployment to the computing nodes and storage. Three special kinds of VM are used:

  • Console proxies, which relay the VNC, KVM console or RDP sessions of instances to a web browser. One console proxy runs in every zone.
  • A secondary storage VM, which takes care of template provisioning.
  • A virtual router, one for every domain (that is, customer), which supplies instances with DNS services, DHCP addressing and firewalling.
Through the virtual router users can add custom firewall rules, like this:
All these system virtual machines are managed directly from cloudstack. Login is not permitted and they are restarted upon failure. This was demonstrated during an unexpected Xenserver crash, which brought down the zone secondary storage VM. After the Xenserver was booted up, the secondary storage VM was restarted automatically by cloudstack and relevant messages showed up in the dashboard. Cool, huh?

Customers have full power over their instances, for example, they can directly interact with virtual disks (volumes), including creating snapshots:

All in all, our little cloudstack deployment left us really impressed. The platform is very solid, all advertised features do work (VM provisioning, management, user creation and delegation, templates, ISO booting, VM consoles, networking) and the required resources are literally peanuts: it is open source and all you need are L2 switches (if you go with basic networking), servers and some NFS storage. Service providers investigating options for their production IaaS platform should definitely look into cloud.com’s offerings; the company has been a part of Citrix since July 2011.

A quick spin of Amazon EC2

This is my version of how-to-create-an-EC2-instance-with-pictures. Users of Amazon AWS will find this trivial; others are welcome to see how to create their own little virtual servers in Amazon’s (non-free) cloud infrastructure.

The first thing you need to do is, of course, sign up for Amazon AWS. Point your browser to aws.amazon.com and click on the “Sign in to the AWS Management Console” link (top right). Creating an account is trivial, except that you have to enter your credit card number and a valid telephone number. The credit card number is mandatory (you have to be billed somehow to use AWS); the phone number will be used by Amazon’s automated billing service to literally give you a call and ask you to enter the four-digit random challenge number that shows up in your browser. So enter the number of a nearby phone, wait for it to ring, type in the challenge and your account is created.

After that, you are a customer of Amazon Web Services. You will now be transferred to the AWS console, which looks like this:

The tabs at the top are all Amazon services available to customers. From here, you can create virtual machines, use elastic storage services, change networking rules, use platform tools and virtually run your own (virtual) datacenter from your browser. The limit is your wallet.

Before creating any resources, it is vital to do some geeky stuff, like downloading Amazon’s command line tools. Ubuntu people can do that like this:

# apt-get install ec2-api-tools

EC2 tools allow creation and management of AWS resources on the fly. They are front-end utilities to Amazon’s web services API, which is well documented and open, allowing Amazon customers to develop their own applications and frameworks that directly interact with the AWS cloud. To make the EC2 API tools work, you need to take a few extra steps. Accessing the AWS API is not done with a password, but with two authentication methods: a symmetric access key for the REST and query APIs, and your personal X.509 certificate and private key, signed by Amazon, which are used for the SOAP web services API. Download them (.pem files) and store them in at least two safe locations. Note: the private key is generated only once. Amazon will not keep a copy; if you lose it, it is impossible to use the web services API again and you have to generate a new one.
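The API tools expect to find the certificate and the private key through a pair of environment variables. Here is a minimal sketch of such a session; the file names and the region endpoint are illustrative, not taken from my actual setup:

# point the EC2 API tools to the credentials downloaded from AWS
export EC2_CERT=$HOME/.ec2/cert-XXXXXXXX.pem
export EC2_PRIVATE_KEY=$HOME/.ec2/pk-XXXXXXXX.pem
# optionally pick a regional endpoint (the default is us-east-1)
export EC2_URL=https://ec2.eu-west-1.amazonaws.com

# quick sanity check: list the regions visible to your account
ec2-describe-regions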

In addition to the above, you will also need a keypair to log in to your instance. And now that you have your certificate and keys, you can fire up the console and start creating your own virtual servers. The easiest way is to select any of the offered, preconfigured Amazon Machine Images (AMI) that are available:

Amazon offers two free (really, really small…) Linux images and many more, with all sorts of operating systems (including Microsoft Windows) and middleware preinstalled. The AMI marketplace is growing, with images submitted by all major software vendors.

The customization options of your virtual server will look familiar to Xen and vCenter users (selection of memory, disk, CPUs etc), with the extra option of network parameters, like configuring access ports:

Firewall configuration

By default, only SSH (port 22) is open for a Linux AMI. The next step is to start your virtual server. Select your instance and from the “Instance Actions” menu select “Start”. Wait a couple of minutes for EC2 and EBS to provision your virtual server, add another minute for booting and your machine comes to life:
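As a side note, these firewall rules (security groups, in EC2 parlance) can also be managed with the EC2 API tools instead of the console. A hedged sketch, assuming you are working with the default security group:

# allow HTTP (port 80) from anywhere, in addition to SSH
ec2-authorize default -P tcp -p 80 -s 0.0.0.0/0
# review what the group now permits
ec2-describe-group default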

Something that Xen and vCenter users would expect to be there and is not: the console. AWS does not provide (at this time) a console window where you can see your server booting up and running; rather, you have to wait until SSH (or RDP for Windows VMs) starts up. Then you can log in like this:

SSH into your AMI instance

Remember what we said about keys? There is no password to log in via SSH; you have to use the keypair you downloaded earlier. As soon as you log in, you can sudo to root (no password required) and configure your virtual server the way you like.
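For reference, a minimal SSH sketch; the key file name, user name and host name are placeholders (the default user depends on the AMI: for instance ec2-user on the Amazon Linux AMI, ubuntu on Ubuntu images):

# the private key must not be readable by others, or ssh will refuse to use it
chmod 400 my-keypair.pem
# log in to the running instance using its public DNS name
ssh -i my-keypair.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com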

Apart from starting and stopping your virtual server, the AWS console allows you to create and restore disk snapshots, like this:

…and retrieve detailed usage reports in CSV and XML format:

…and have a 10,000ft view of AWS status:

Many more features and services are available: S3 storage services, purpose-built AMIs, load balancers, CloudFront services, network latency and bandwidth options, all are available for a price, summarized in a single page:

That’s what true IaaS looks like. Signing up, creating a VM and bringing it up live on the Internet does not take more than 15 minutes. The underlying infrastructure is massive and has been in constant development for close to 5 years now, yet it is mature enough to be used by all kinds of customers, from freelancers up to large enterprises.

Building a cloud

Question: How many people do you need to build and run a cloud?

Answer: As many as you can fit in a meeting room.

A cloud offering IaaS and SaaS to customers is nothing more than a compact and complex technology stack. Starting from the bottom and moving to the top, you have servers, storage (NFS/iSCSI/FC), networking (LIR, upstream connections, VLANs, load balancers), data protection (snapshots, replication, backup/restore), virtualization (pick your flavor), cloud management (Applogic/Openstack/Cloudstack/OpenNebula/Abiquo/vCommander/you-name-it), metering & billing (e.g. WHMCS), helpdesk (like Kayako), user identity management, database platform (Hadoop), application servers, hosted applications and web services. All this stuff has to work. And work efficiently, if you want to attract, retain and expand your customer base, simply because your customers use all these resources simultaneously: from their browsers, customer actions ripple through firewalls, load balancers, switches, web and application servers, databases, hypervisors and disks, crossing the entire cloud stack up, down and sideways.

The only way to run this stack is… to use humans. With what skills? System engineering, storage management, networking, security, application architecture, coding, coding, coding, web marketing, technical management and more coding. And all of them must be able to sit around the same table, talk and understand each other, if you want your cloud stack to simply work. This calls for a small headcount of gifted (and well compensated – slide 8) people who can not only deliver on the technical side but also understand the cloud business and the Internet business.

The trick question: what kind of company can host this ecosystem? Service providers? Datacenter hosting? Web hosters? Software vendors? Well… this would depend on the company DNA. Take for example Amazon and Google. Neither was a datacenter/network provider nor a software vendor; Amazon is the largest online retailer, Google is the king of online advertising. Yet both of them fostered the right kind of people, who spun off what we have and use today.

Tech dive: Causal event correlation in NNMi 9.10

Another (deep) tech dive in NNMi event configuration, following up on this post. NNM consultants read on.

In my previous post I described a way to create an incident (and affect node status) whenever a certain condition is detected on the node, like an interface coming up. This was done by creating a custom poller and works fine, as long as you need to monitor only one status change on the managed node. But what happens if you need to detect two or more status changes occurring on different entities of the same node, like detecting that both the main and the backup line of a branch router are up?

In the NNMi world this is a “causal rule”. A causal rule is fulfilled whenever a certain number of “child incidents” are detected by NNMi. In our case, one child incident is “main line up” and the other “backup line up”. Whenever these incidents are detected for the same managed node, a causal correlation is fulfilled and a custom event is shown in the operator incident browser.

The procedure is (not so) simple. First, make sure you have created the custom pollers for the incidents you are interested in. Then, you need to create a causal correlation rule in NNMi:

  • Go to Configuration tab -> Custom correlation configuration. Click on the “Causal rules” tab on the right pane.
  • Create a new rule. Type a rule name and in the “Parent Incident” drop-down select “New”. This will be the event that is generated in NNMi when your causal correlation kicks in. Type a description and set its criticality.
  • Now we need to define the conditions that must occur to produce the event described above. Back in the “Causal rule” form, select “Root Cause” as the correlation nature and in the “Common Child Incident Attribute” field type:

 ${hostname}

This is the primary key in correlating the child incidents: if they occur on the same node, they are correlated.

  • In the “correlation window” box below, set the threshold to a meaningful time window. NNMi will create the correlation if the desired child incidents for the same node occur within this time window.
  • Now it’s time to define the child incident subrules that, when fulfilled, will generate our custom event. In the “Causal rule” form, create a new “child incident”. Set a proper name and in the “Child Incident” drop-down, select the right incident. In the example from my previous post, this would be a “CustomPollWarning” or “CustomPollMinor” event. It is important to designate this correctly and match your custom poller type.
  • Make sure you check the “Use Child Incident’s Source Object for Parent” and “Use Child Incident’s Source Node for Parent” check boxes. Leave the “Optional Child Incident” unchecked.
  • If you have defined policies in your custom pollers, move to the pane on the right in the “Child Incident Filter” tab. Create a filter like this:
${valueOfCia(cia.custompoller.policy)} = PolicyName

This will match your policy set in your custom poller.

  • Repeat the same steps for the second child incident. The difference is that you have to leave the “Use Child Incident’s Source Object for Parent” and “Use Child Incident’s Source Node for Parent” check boxes unchecked. Again, leave the “Optional Child Incident” unchecked.
  • Save everything and close the forms.
That’s it! If it doesn’t work the first time, experiment with the custom poller policies, correlation window and custom poller definitions. Drop me a line if you need help.

Of supermarkets and clouds

OK, no more cloud computing definitions for me. I’ve found the perfect metaphor to explain what cloud computing is: The supermarket.

You probably don’t remember how your parents (or grandparents) did their shopping in ye olde days, when supermarkets did not exist. Well, I can still remember my grandmother; she took her shopping bag and went to the butcher around the corner, the fish market downtown, the grocery store across the street and so on. It was fun; each shop had its own smell, arrangement, window and a different face behind the counter. The whole process took hours but it sure was a pleasant thing to do. And you had to do that over and over again, at least 2-3 times a week.

Now my grandmother has passed away and all these little shops are long gone. Behold the supermarket. Drive, park, grab a cart, cross all the aisles, fill the cart, push past the checkout lanes, pay, load the car, drive away, talk to nobody. You’re done in one hour tops. And you’ve got to do that only once a week (depending on the mouths you have to feed…).

What does this have to do with cloud computing? Think about it:

  • Cloud computing is about infrastructure uniformity. Like a supermarket, you have an abundance of a limited number of the latest choices: storage is massive, yet comes in two or three flavors (FC, NFS, iSCSI). Servers are Intel/AMD only, with the same CPU stepping. Software stacks are canned – and everything must be kept at the same current revision, otherwise things will start breaking. In contrast, a cluster of “legacy” HP Superdomes or Sun E-series boxes, complete with their own SAN, backup TAN and a team of humans to manage them, smells and feels like that old local shop around the corner: it has a little bit of everything. Complex, disparate, old software stacks. Dedicated storage. Cluster-specific network interconnects. Cryptic hardware. Exotic chips. Loyal admins. Human interaction. Everything.
  • Cloud computing is about making things easier. Service provisioning is a few clicks away. Hardware provisioning does not exist; everything is racked, cabled and powered once. System reconfiguration is almost automatic. In a legacy environment (well, in a non-cloud IT shop) trips to the computer room are frequent, CD/DVD swapping does happen, and system provisioning is still a ritual ceremony of installing firmware, operating systems, service packs, patches and applications. Just like paying a visit to the grocery shop, then the bakery and the butcher, carrying those heavy shopping bags. Now think how shopping is done in a supermarket and you get the picture.
  • Supermarkets are big, neighborhood shops are small. Big size means cheap prices and countless shelves of goods. The same applies to cloud computing: clouds are efficient in XXL sizes; that’s why cloud provider datacenters are massive. The downside? In a supermarket you can buy only what’s on the shelf and pay what the price tag says. Unless you buy tons of stuff, you cannot ask the management to bring in a new product at a better price. In a small shop, if the owner knows your grandmother, well, you can ask for extra candies.
  • There is a supermarket in every town, meaning you can find your preferred brand of coffee (as long as it’s on the shelf) all over the country. If your local supermarket is blown to bits by a giant spider/tsunami/alien, drive to the next town. Cloud metaphor: More or less, all cloud service providers have redundant datacenters and data replication across them, so whenever a network outage or a natural disaster strikes, it’s likely that your services will survive.

Use python to talk to NNMi

One more post for people who work with NNMi 9.x. NNMi, being a JBoss animal, has a pretty decent WS-I API to talk to the world. The interface is open and documented in the developer’s toolkit guide (available from the HP documentation site; an HP Passport account is needed to access it).

Accessing the API is peanuts to any Java developer, but for the rest of us, it’s not straightforward. Well, with a little help from Python, everything is possible:

#!/usr/bin/python

from suds.client import Client
from suds.transport.http import HttpAuthenticated

t = HttpAuthenticated(username='system', password='NNMPASSWORD')

url = 'http://nnmi/NodeBeanService/NodeBean?wsdl'

client = Client(url, transport=t)

# Retrieve full node list
allNodes = client.service.getNodes('*')

print "Nodes in topology:", len(allNodes.item)

for i in allNodes.item[:]:
  print i.name,i.deviceModel

This small script connects to the NNMi NodeBean web service, retrieves the full list of managed nodes into the ‘allNodes’ object and from there prints out the hostname and device type of each node as discovered by NNMi. All you need is the suds library (available here, or installable with a few clicks from the Ubuntu software center).
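
On Ubuntu the library can also be installed from the command line; a one-liner, assuming the package name used in the Ubuntu archives at the time (python-suds):

# apt-get install python-suds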