The Cloud – A disruptive game changer – just ask Nokia!

It’s often said that the Cloud will be disruptive and a game changer, and that statement is usually made about the future, but I believe we have already seen a huge example of it in the mobile telecommunications domain. In the last twelve months we have seen the beginning of a fundamental change in a user’s relationship with services because of the ability to deliver them in real time over mobile and fixed broadband. Apple single-handedly changed the perception of not only what dollar value a user would pay, but whether they would pay at all. In the first 60 days Apple had 100 million downloads from their App Store. Just think about this… 60 days, 100 million downloads! Phenomenal. Even more phenomenal is that they ripped up the script of the established model and wrote their own.

Other providers such as RIM and Google quickly adopted the same model, with Nokia lagging behind; news then filtered out that they would launch an App Store at the Mobile World Congress in Barcelona, and when they did, well, let’s just say it was not exactly a success. Microsoft, late to the party as always, also jumped on the bandwagon with the launch of their “My Phone” service. Samsung have also now launched their own Mobile Applications MarketPlace. This shift has hugely changed the whole model of the Telco market. Nokia, the 800-pound gorilla, is losing market share hand over fist as it struggles to get to grips with this new model. Motorola has lost $3.6 billion as it too struggles to get to grips with this new consumer model.

In one year Apple has become the eighth largest mobile phone vendor in the world (source: Strategy Analytics). This whilst only competing in the smartphone market and, at the time of the report, not selling into markets such as China. Overall, during the past March quarter mobile phone sales fell 13% worldwide, the fastest rate of annual decline since records began, but in contrast sales rose 10% in the US, largely because of Apple. The top 5 handset vendors saw their market share fall from 83.5% to 78%, a decline that is predicted to continue as Android comes of age and Apple continues its dominance with low-end entry points into the consumer market.

The whole notion of how to sell to an individual has changed: it now works from the edge back rather than the reverse, i.e. it has been proved that users are willing not only to pay for real-time services and just-in-time applications, but will actually choose their handset provider based on the perceived value and breadth of those services. How many times have you read in a competitive phone review, “In some ways it is a better handset than the iPhone but it just cannot match the App Store for breadth of applications”? Interestingly, not everyone agrees. MobileCrunch recently ran an article, “Not every Company needs an App Store“. I believe they miss the point. The rules have changed, and the humble phone has become a platform to deliver services aided by on-demand cloud applications and services.

I agree that ideally we would be able to write against one platform for the services delivered. Unfortunately the mobile phone OS market is very fragmented, with lots of players such as Symbian (Nokia), Microsoft, Google, and Apple’s iPhone. Having said that, there are some initiatives that try to provide some abstraction to allow code / services written for one platform to run on others, such as PhoneGap, which supports iPhone, Android and Blackberry. Ultimately the genie is out of the bottle and new mobile companies can see the carrot of the new revenue and business models that Apple has made a reality. Ultimately they will have no choice: with an open source OS model in Android squeezing them from one side, and Apple on the other, the landscape is being changed and the 800-pound gorilla is starting to look like an endangered species. Figures compiled by Gartner show that Apple’s market share more than doubled in 2008, whilst Nokia’s share of the global smartphone market fell from 47% in 2007 to 31% in 2008; based on projections in the Gartner analysis, this would make Apple the leading global smartphone provider by 2011.

Amazon EC2 News / Round Up

There is a good PDF whitepaper on using Oracle with Amazon Web Services which can be downloaded here.


A tutorial by Amazon on creating an Active Directory Domain on Amazon EC2 is a thorough article and well worth the read if you intend to implement this functionality on the cloud.


Simon Brunozzi from Amazon gives a good talk on “From zero to Cloud in 30 minutes” at the Next conference in Hamburg which can be viewed below.





Leventum talk about how they implemented the first ERP solution on the cloud using Compiere.


Jay Crossler looks at how to visualize different cloud computing algorithms using serious games technologies on the Amazon EC2 cloud below:


Practical Guide for Developing Enterprise Applications for the Cloud

This session was presented at Cloud Slam 09 by Nati Shalom CTO of GigaSpaces. It provides a practical guideline addressing the common challenges of developing and deploying an existing enterprise application on the cloud. Additionally, you will get the opportunity for hands-on experience running and deploying production ready applications in a matter of minutes on Amazon EC2.

McKinsey Cloud research kicks up a storm

A research paper on Cloud Computing by McKinsey & Company entitled ‘Clearing the Air on Cloud Computing’ has kicked up a right old storm, with various luminaries either for or against it. The premise of the article’s results is that large organisations adopting the cloud model would be making a mistake and would most likely lose money, as outsourcing from a more traditional data centre will likely double the cost ($150 per month per unit for a data centre vs $366 per month per unit for Amazon’s virtual cloud). The New York Times has an excellent summary of the study here.


Many of the complaints focus on McKinsey totally missing the “Private Cloud” and basing their assumptions on Public Clouds only. However, there seems to be a general consensus that Amazon is too expensive and will need to adjust to survive. I’m not convinced about this. It is not the first study to suggest that Amazon is more expensive to use than a traditional data centre, yet Amazon seems to have been doing just fine up to now and seems to be getting Enterprises to move across. Whether the cloud replaces a whole corporate data centre misses the point: I think this is unlikely, but for certain applications and services it makes perfect sense. Also, as more competition unfolds, economics suggests that prices will naturally adjust if they need to.


You can download a PDF of the McKinsey presentation on this paper here.

The Open Cloud Manifesto – the condensed version

“Introducing the Open Cloud Manifesto” was posted on the Cloud Computing Journal blog on 27th March 2009 and announces that the first version of the manifesto will be published Monday, March 30th and ratified by the cloud community.

This post came after Microsoft’s “Moving Toward an Open Process on Cloud Computing Interoperability” post of 26th March 2009.

Microsoft and Amazon responded immediately, saying they are not currently intending to sign the manifesto.

You can read version 1.0.9 of the Manifesto here.

How do you design and handle peak load on the Cloud ?

We see these questions time and time again: “How do I design for peak load?” and “How do I scale out on the cloud?”. First, let’s figure out how to give some definition to peak load:

We will make a stab at defining peak load as: “A percentage of activity in a day/week/month/year that comes within a window of a few hours, is deemed extreme, and occurs either because of seasonality or because of unpredictable spikes.”

The Thomas Consulting Group have a good stab (ppt) at a formula to try and predict and plan for peak load. Their formula and example are shown below:

H = peak hits per second
h = # hits received over a one month period
a = % of activity that comes during peak time
t = peak time in hours
then
H = h * a / (days * t * minutes * seconds)
H = h * a / (108,000 * t)    [where 108,000 = 30 days * 60 minutes * 60 seconds]

Determine the peak Virtual Users: Peak hits/second + page view times

U = peak virtual users
H = peak hits per second
p = average number of hits / page
v = average time a user views a page

U = (H / p) * v

Example:

h = 150,000,000 hits per month
a = 10% of traffic occurs during peak time
t = peak time is 2 hours
p = a page consists of 6 hits
v = the average view time is 30 seconds

H = (h * a) / (108,000 * t)
H = (150,000,000 * 0.1) / (108,000 * 2)
H ≈ 69

U = (H / p) * v
U = (69.4 / 6) * 30
U ≈ 347

Desired Metric – ≈69 Hits / Sec or ≈347 Virtual Users

(Note: the original Thomas Consulting slides quote H = 48 and U = 240, but the stated inputs give 15,000,000 / 216,000 ≈ 69.4 hits per second.)
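As a sanity check, the formulas above can be captured in a few lines of Python; plugging in the stated inputs gives roughly 69 hits per second (15,000,000 / 216,000) and roughly 347 virtual users:

```python
# Sketch of the Thomas Consulting peak-load formulas described above.

def peak_hits_per_second(h, a, t, days=30):
    """h: hits per month, a: fraction of traffic in the peak window,
    t: peak window length in hours. Denominator is days * t * 60 * 60."""
    return (h * a) / (days * t * 60 * 60)

def peak_virtual_users(H, p, v):
    """H: peak hits per second, p: hits per page, v: average page view
    time in seconds."""
    return (H / p) * v

H = peak_hits_per_second(150_000_000, 0.10, 2)
U = peak_virtual_users(H, 6, 30)
print(f"{H:.1f} hits/sec, {U:.0f} virtual users")
```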

In the example Thomas Consulting present above, peak load is 15,000,000 hits in two hours, whereas the normal average for two hours is roughly 411,000 [(((h*12)/365)/24)*2]. That is over 36 times the average, a huge difference, and this example is not even extreme. Online web consumer companies can do 70% of their yearly business in December alone.

Depending on what else occurs during the processing of those hits, this could be the difference between having 1 EC2 instance and having 10, or a cost difference of $6,912 versus $82,944 over the course of a year (based on a large Amazon EC2 instance). And of course building for what you think is peak can still lead to problems. A famous quote from Scott Gulbransen of Intuit is:
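As a rough back-of-envelope for how instance count drives that annual figure (the hourly rate and billing year here are assumptions inferred from the quoted numbers: $6,912 / 8,640 hours = $0.80/hour, with 8,640 hours being 360 days of 24 hours):

```python
def annual_ec2_cost(instances, hourly_rate=0.80, hours_per_year=8640):
    """Rough annual on-demand cost. The $0.80/hour rate and the
    8,640-hour year are assumptions inferred from the figures quoted
    above, not official AWS pricing."""
    return instances * hourly_rate * hours_per_year

print(annual_ec2_cost(1))   # 6912.0
```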

“Every year, we take the busiest minute of the busiest hour of the busiest day and build capacity on that. We built our systems to (handle that load) and we went above and beyond that.” Despite this, the systems still could not handle the load.

What we really want is to have our site built for our average load, excluding peak, and have scale-on-demand built into the architecture. As EC2 is the most mature cloud platform, we will look at tools that can achieve this on EC2:
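The “scale on demand” idea common to the tools below can be sketched as a simple threshold-based control loop; the monitor/launch/terminate hooks here are hypothetical stand-ins for a real cloud API, not any particular vendor’s interface:

```python
def autoscale_step(get_load, launch, terminate, instances,
                   high=0.75, low=0.25, min_n=1, max_n=10):
    """One pass of a threshold-based scaling decision.
    get_load() returns average utilisation (0.0-1.0) across instances;
    launch()/terminate() are stand-ins for real cloud API calls."""
    load = get_load()
    if load > high and len(instances) < max_n:
        instances.append(launch())        # scale out under heavy load
    elif load < low and len(instances) > min_n:
        terminate(instances.pop())        # scale in when mostly idle
    return instances
```

A real implementation would run this on a schedule, smooth the load metric over a window to avoid flapping, and respect instance boot times, but the decision logic the tools below automate is essentially this.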

GigaSpaces XAP: From version 6.6 of the GigaSpaces XAP platform, Cloud tooling is built in. GigaSpaces is a next-generation virtualised middleware platform that hosts logic, data, and messaging in-memory, and has fewer moving parts so that scaling out can be achieved linearly, unlike traditional middleware platforms. GigaSpaces is underpinned by a service grid which enables application-level Service Level Agreements to be set, which are monitored and acted on in real time. This means that if load increases, GigaSpaces can scale threads or the number of virtualised middleware instances to ensure the SLA is met, which in our example would be the ability to process the number of requests. GigaSpaces also partner with RightScale. GigaSpaces lets you try their Cloud offering for free before following the traditional utility compute pricing model.

Scalr: Scalr is a series of Amazon Machine Images (AMIs) for basic website needs, i.e. an app server, a load balancer, and a database server. The AMIs are pre-built with a management suite that monitors the load and operating status of the various servers on the cloud. Scalr purports to increase / decrease capacity as demand fluctuates, as well as detecting and rebuilding improperly functioning instances. Scalr has open source and commercial versions and is a relatively new infrastructure service / application. We liked the ‘Synchronize to All’ feature of Scalr. This auto-bundles an AMI and then re-deploys it on a new instance, without interrupting the core running of your site, saving time going through the EC2 image/AMI creation process. To find out more about Scalr you should check out the Scalr Google Groups forum.

RightScale: RightScale has an automated Cloud Management platform. RightScale services include auto-scaling of servers according to usage load, and pre-built installation templates for common software stacks. RightScale support Amazon EC2, Eucalyptus, FlexiScale, and GoGrid, and are quoted as saying that RackSpace support will also happen at some point. RightScale has a great case study overview on their blog about Animoto, which also explains how they have launched, configured and managed over 200,000 instances to date. RightScale are VC backed and in December 2008 did a $13 million series B funding round. RightScale have free and commercial offerings.

FreedomOSS: Freedom OSS has created custom templates, called jPaaS (JBoss Platform as a Service), for scaling resources such as JBoss Application Server, JBoss Messaging, JBoss Rules, jBPM, Hibernate and JBoss Seam. jPaaS monitors the instances for load and scales them as necessary, taking care of updating the vhosts file and other relevant configuration files to ensure that all instances of Apache respond to the hostname. The newly deployed app, running on either Tomcat or JBoss, becomes part of the new app server image.

Is it Grid or is it Cloud ?

A recent post by the Cloud vendor CohesiveFT talks about the potential changes in technical sales cycles when evaluating Grid-based products. I’m not sure I agree totally with the article, but the ethos behind it, i.e. making it easier to trial products, try out solutions and build apps / services more quickly in order to build internal business cases, is solid.

Cloud is a game changer, which is the intent of the article, but you cannot apply a broad brush to “Grid on the Cloud” as a unilateral game changer in respect of Cloud replacing Grid (which, to be fair, is not the intent of the article). For many companies, replacing internal Grids, or even planning for new Grids, cannot be done on the Cloud. There are challenges of integration, moving data, securing data (and this is where CohesiveFT’s VPN-Cubed product offering can help), physical location, legislation, SLAs and availability (see this article for a good synopsis as applied to EC2). Many of these will be resolved in time, and some of course can be resolved right now, but with the move by many vendors to enable existing IT infrastructure and data centres as private clouds, I think the point is likely to be moot in the future. Right now, an internal Grid is not elastic: it does not add more servers or resources to the service as required, but this will change as such internal fabric enablers become more normal. In fact one can imagine a future where such companies may sell excess capacity of their “Grid Clouds” to ensure a more economical running of their infrastructures.

Recession pushing companies toward SaaS and Cloud ?

An interesting article at EBizQ looks at some recent analyst reports and supports the prediction that the current global economic downturn has resulted in a cost-conscious, capex-constrained environment in which cloud and SaaS are more appealing than ever.

Firstly they look at email and come to the conclusion that running an in-house email system is only cost effective if there are more than 15,000 users on the system. Google Apps is apparently two-thirds cheaper per user, at $8.47, in the report.

They also look at Ray Wang’s report that recommends software buyers Shape Your Apps Strategy To Reflect New SaaS Licensing And Pricing Trends.

They pull some highlights from this report, reproduced below:

  • “Rich user experiences and intuitive Web 2.0 approaches reduce the overall cost of user training compared with fat-client user interfaces that reflect older user-experience practices.”
  • “True multitenant SaaS users experience frequent upgrades with minimal downtime and minimal reduced testing resources — leaving business users time to get value from the software. “
  • “Forrester’s Total Economic Impact (TEI) studies show that in most cases, SaaS delivers better TEI and lower cost.”
  • “Constant innovation with quarterly and even monthly product updates deliver product road map predictability.”

This is worth looking at in conjunction with Jim Liddle’s recent article on the “Economy of Cloud”, which looks at whether cloud is indeed cheaper and has some real pricing comparisons, as well as the O’Reilly article on the “Economics of Cloud Computing”.

The future may be Cloudy but that isn’t necessarily a bad thing.