GigaSpaces Takes e-Gaming to the Cloud with Yazino

New York & London, December 7, 2010 – GigaSpaces Technologies, a leading provider of a new generation of application platforms for Java and .Net environments, has provided Yazino, a massively multiplayer online casino, with the application infrastructure used to build the first cloud-based social casino platform. Yazino is using GigaSpaces’ eXtreme Application Platform (XAP) to scale on demand while reducing costs and speeding time to market.

“As a social gaming online casino, we knew from the beginning that scalability was critical to our success,” says Hussein Chahine, Yazino’s Founder and CEO. “With XAP, we have the flexibility to meet constantly changing business volumes, with linear, real-time dynamic scalability using a cloud-data center hybrid model.”

Yazino fuses social interaction and multiplayer functionality, building a bridge between traditional online gambling and social gaming sites. Yazino has already registered more than 500,000 players, with more than 10,000 new players joining daily.

XAP provides a unique enterprise-grade, end-to-end application scalability platform: it can handle extremely large volumes, scale out (or in) in real time, is governed by preset business SLAs, and is ultra-fast because the whole platform runs in memory. A strategic solution for enhancing IT efficiency and agility, it guarantees performance under peak demand while improving hardware utilization by up to 500%. It allows developers to build, deploy, and operate their applications in any environment without a single code change.

“Yazino benefits from our long experience with cloud development,” says Adi Paz, GigaSpaces Executive Vice President of Marketing and Business Development. “With XAP as the underlying infrastructure, our clients like Yazino can focus on the business logic and speed time to market without concern for where the application will actually run.”

By using XAP, Yazino now has a massively multiplayer, multi-game online casino platform optimized for extremely high throughput, supporting hundreds of thousands of concurrent, interactive players while giving each player excellent response time.

“GigaSpaces technology helped us build a hybrid infrastructure, where we can leverage the best of the cloud’s economies of scale while ensuring our data center can manage all of the regulatory-related processing,” Chahine continued. “This gives Yazino a valuable competitive edge, as we use more costly hosting only for what is required by regulation, while other services sit entirely on the cloud.”

About GigaSpaces

GigaSpaces Technologies is a leading provider of a new generation of virtualized application platforms. Our flagship product, eXtreme Application Platform (XAP), delivers end-to-end scalability across the entire stack, from the data all the way to the application. XAP is the only product that provides a complete in-memory solution on a single platform, enabling high-speed processing of extreme transactional loads, while scaling to meet any requirement – dynamically and linearly. XAP was designed from the ground up to support any cloud environment – private, public, or hybrid – and offers a pain-free, evolutionary path from today’s data center to the technologies of tomorrow.

More than 350 organizations worldwide are leveraging XAP to enhance IT efficiency and performance. Among our customers are Fortune Global 500 companies, including top financial services enterprises, telecom carriers, online gaming providers, and e-commerce companies, such as Dow Jones, NYSE, Société Générale, Virgin Mobile, and Sears.

About Yazino
Yazino, the world’s first social casino (www.yazino.com) was conceived by three friends who wanted to reinvent the social gaming and online casino worlds by connecting the two together. Yazino has built a bridge between traditional online gambling and social gaming sites, creating a whole new hybrid category. The entire brand and in-game experience is entertaining and social to its core. Yazino offers a uniquely fun and competitive environment to connect the world around casino games, such as Blackjack, Roulette, Texas Hold’em and Slots. Constantly refreshed multiplayer content, tournaments and the engaging challenge of levels and achievements allow Yazino to define the next generation of online gambling.

Yazino, a wholly owned subsidiary of Yazino Group AG (Switzerland), was founded by Hussein Chahine, Bijan Khezri and Gojko Adzic in 2008.

Supporting SLAs on the Cloud

What does it take to make a cloud computing infrastructure enterprise-ready? Well, as always, it probably depends on the use case, but support for real-time scaling and SLAs must rank highly.

Software that purports to scale applications on the cloud is not new; have a look at our prior blog post on this topic and you will see some of the usual suspects, such as RightScale and Scalr. A new entrant in this space is Tibco, with its Tibco Silver offering. Tibco Silver is trying to solve the problem not of whether cloud services can scale, but of whether the applications themselves can scale with them. Silver addresses this through “self-aware elasticity”. Hmmm… sounds good, but what exactly does that mean? It means the system can automatically provision new cloud capacity (be that storage or compute) depending on fluctuations in application usage.

According to Tibco, unlike services in a service-oriented architecture, cloud services are not aware of the SLAs to which they are required to adhere, and Tibco Silver is aimed at providing this missing functionality. Tibco claims that “self-aware elasticity” is something no other vendor has developed. I would dispute this. GigaSpaces XAP, with its ability to deploy to the cloud as well as on-premise using the same technology, has very fine-grained application-level SLA control; when an SLA is breached, the application can react accordingly, whether that means increasing the number of threads, provisioning new instances, or distributing workloads differently. The GigaSpaces Service Grid technology, which originated from Sun’s RIO project, is what enables this real-time elasticity. (Interestingly, it seems GigaSpaces is working on enabling its cloud tools to deploy to and manage VMware images on private clouds, as it does with AMIs on Amazon’s public cloud.)
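
To make “self-aware elasticity” concrete, here is a minimal sketch of the feedback loop both vendors describe: watch an application-level metric and provision or release capacity when it drifts past the SLA. This is an illustration only; the ApplicationMetrics and CloudProvisioner interfaces are hypothetical placeholders, not the Tibco or GigaSpaces APIs.

// Hypothetical monitoring and provisioning hooks; a real platform
// (Silver, XAP, RightScale) supplies its own equivalents.
interface ApplicationMetrics {
    double requestLatencyMillis();
    int instanceCount();
}

interface CloudProvisioner {
    void addInstance();
    void removeInstance();
}

public class SlaElasticityLoop {
    private static final double MAX_LATENCY_MILLIS = 200; // the application-level SLA
    private static final int MIN_INSTANCES = 2;

    public static void enforce(ApplicationMetrics metrics, CloudProvisioner cloud)
            throws InterruptedException {
        while (true) {
            double latency = metrics.requestLatencyMillis();
            if (latency > MAX_LATENCY_MILLIS) {
                cloud.addInstance();    // SLA breached: scale out
            } else if (latency < MAX_LATENCY_MILLIS / 2
                    && metrics.instanceCount() > MIN_INSTANCES) {
                cloud.removeInstance(); // comfortably under the SLA: scale back in
            }
            Thread.sleep(10000);        // re-evaluate every 10 seconds
        }
    }
}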

Without a doubt, the ability to react in real time to application-level SLAs, rather than just to breaches of an SLA at the infrastructure level, is something that will find a welcome home in both private and public clouds.

CloudCamp London

CloudCamp London was fun as usual in the plush Microsoft offices in London, and the event is now developing a real sense of community. Simon Wardley was a host extraordinaire as always, and his 100-slide, 5-minute presentations are now the stuff of legend.


There were interesting 5-minute lightning presentations from Dan Stone on Terracotta vs. GigaSpaces and from HP on cloud security and obfuscation, plus good talks from Zeus and CloudSoft.


I haven’t seen many of the presentations made available yet, but the SMEStorage talk on Unifying Storage Clouds can be viewed below:

Practical Guide for Developing Enterprise Applications for the Cloud

This session was presented at Cloud Slam 09 by Nati Shalom, CTO of GigaSpaces. It provides practical guidelines for addressing the common challenges of developing and deploying an existing enterprise application on the cloud. Additionally, you will get the opportunity for hands-on experience running and deploying production-ready applications in a matter of minutes on Amazon EC2.

London Amazon Web Services Startup Event Videos

For those of you who missed the Amazon Web Services startup event in London, you can find the customer presentations on Slideshare.net and view the videos from the links below:

Cedric Roll, Co-Founder, ORbyte Solutions http://www.vimeo.com/4409867

Felipe Padilla, Co-Founder, Skipso http://www.vimeo.com/4409569

Nigel Hamilton, CEO, Turbo10.com http://www.vimeo.com/4409682

Simone Brunozzi, Getting Started with AWS http://www.vimeo.com/4411474

Tal Saraf, Accelerating Your Website with CloudFront http://www.vimeo.com/4409756

Is average utilisation of servers in data centres really between 10 and 15%?

There has been an interesting discussion on the Cloud Computing forum hosted on Google Groups (if you are at all interested in cloud, I recommend you join it, as it really does have some excellent discussions). What has been interesting about it from my viewpoint is the general consensus that average CPU utilisation in organisational data centres runs between 10 and 15%. Some snippets of the discussion are below:


Initial statement in the group discussion

The Wall Street Journal article “Internet Industry is on a Cloud” does not do cloud computing any justice at all.

First: the value proposition of cloud computing is crystal clear. Averaged over 24 hours a day, 7 days a week, 52 weeks a year, most servers have a CPU utilization of 1% or less. The same is also true of network bandwidth. The capacity of hard disks that can be accessed only from a specific server is also underutilized. For example, the capacity of disks attached to a database server is used only when certain queries require intermediate results to be stored to disk; at all other times that capacity is not used at all.

First response to the statement above

Utilization of *** 1 % or less *** ???

Who fed them this? I have seen actual collected data from 1000s of customers showing server utilization, and it’s consistently 10-15%. (Except mainframes.) (But including big proprietary UNIX systems.)

2nd Response:

Mea culpa. My 1% figure is not authoritative. It is based on my experience with a specific set of servers:

J2EE application servers: only one application is allowed per cluster of servers, so if you had 15% utilization when you designed the application 8 years ago, on current servers it could be 5% or less. With applications that are used only a few hours per week, 1% is certainly possible. The other sets of servers for which utilization is really low are departmental web servers and mail servers.

3rd Response:

Actually, it was across a very large set of companies that hired IBM Global Services to manage their systems. Once a month, along with a bill, each company got a report on outages, costs, … and utilization.

A friend of mine heard of this and asked, “Are you, by any chance, archiving those utilization numbers anywhere?” When the answer came back “Yes” — you can guess the rest. He drew graphs of the number of servers at a given utilization level. He was astonished that for every category of server he had data on, the graphs all peaked between 10% and 15%. In fact, the mean, the median, and the mode of the distributions were all in that range. Which also indicates that it is a range: some were nearer zero, and some were out past 90%. That yours was 1% is no shock.

4th Response:

This is no surprise to me, as HPC packages like Sun Grid Engine working on batch jobs can push utilization close to 90%. We had data showing that without a workload manager of some sort, average utilization is 10% to 15%, confirming what you discovered.

This means that worldwide, 85% to 90% of the installed computing capacity is sitting idle. Grids improved this utilization rate dramatically, but grid adoption was limited.

If this is not an argument for virtualisation in private data centres/clouds, then I don’t know what is. It should also be a big kicker for those considering moving applications to public clouds, out of the data centre and away from racks of machines spinning their wheels. It is also a good example of companies planning for peak capacity (see our previous blog on this). What is really needed is scale on demand and hybrid cloud/grid technologies, such as GigaSpaces, which can react to peak loading in real time. Consider not only the wasted cost but also the “green computing” cost of running hordes of machines at 15% capacity…


How do you design for and handle peak load on the Cloud?

We see these questions time and time again: “How do I design for peak load?” and “How do I scale out on the cloud?”. First, let’s figure out how to define peak load.

We will make a stab at defining peak load as: “a percentage of activity in a day/week/month/year that comes within a window of a few hours, is deemed extreme, and occurs because of either seasonality or unpredictable spikes.”

The Thomas Consulting Group has a good stab (ppt) at a formula to try to predict and plan for peak load. Their formula and a worked example are shown below:

H = peak hits per second
h = number of hits received over a one-month period
a = % of activity that comes during the peak window
t = length of the daily peak window in hours

Then, taking a 30-day month (30 days * 60 minutes * 60 seconds = 108,000):

H = (h * a) / (days * t * minutes * seconds)
H = (h * a) / (108,000 * t)

Next, determine the peak virtual users from peak hits per second and page view times:

U = peak virtual users
H = peak hits per second
p = average number of hits / page
v = average time a user views a page, in seconds

U = (H / p) * v

Example:

h = 150,000,000 hits per month
a = 10% of traffic occurs during peak time
t = peak time is 2 hours
p = a page consists of 6 hits
v = the average view time is 30 seconds

H = (h * a) / (108,000 * t)
H = (150,000,000 * 0.1) / (108,000 * 2)
H ≈ 69.4

U = (H / p) * v
U = (69.4 / 6) * 30
U ≈ 347

Desired metric: roughly 69 hits per second, or about 347 virtual users.
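
The arithmetic above drops straight into code; here is a quick sketch to sanity-check the numbers, using the Thomas Consulting notation (nothing here beyond the formula itself):

public class PeakLoadCalculator {

    // H = (h * a) / (108,000 * t), where 108,000 = 30 days * 60 minutes * 60 seconds
    static double peakHitsPerSecond(double h, double a, double t) {
        return (h * a) / (108000 * t);
    }

    // U = (H / p) * v
    static double peakVirtualUsers(double H, double p, double v) {
        return (H / p) * v;
    }

    public static void main(String[] args) {
        double H = peakHitsPerSecond(150000000, 0.1, 2); // ~69.4 hits per second
        double U = peakVirtualUsers(H, 6, 30);           // ~347 virtual users
        System.out.printf("H = %.1f hits/sec, U = %.0f virtual users%n", H, U);
    }
}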

In the Thomas Consulting example above, the daily two-hour peak window receives about 500,000 hits [(h * a) / 30 days], whereas an average two-hour window receives about 411,000 hits [(((h * 12) / 365) / 24) * 2]. That is an increase of more than 20%, and this example is not even extreme: online consumer web companies can do 70% of their yearly business in December alone.

Depending on what else occurs within the transaction behind each hit, this could be the difference between needing 1 EC2 instance and needing 10, or a cost difference of between $6,912 and $82,944 over the course of a year (based on a large Amazon EC2 instance). And of course, building for what you think is peak can still lead to problems. A famous quote from Scott Gulbransen of Intuit:

“Every year, we take the busiest minute of the busiest hour of the busiest day and build capacity on that. We built our systems to (handle that load) and we went above and beyond that.” Despite this, the systems still could not handle the load.

What we really want is a site built for our average load, excluding peak, with scale on demand built into the architecture. As EC2 is the most mature cloud platform, we will look at tools that can achieve this on EC2:

GigaSpaces XAP: From version 6.6 of the GigaSpaces XAP platform, cloud tooling is built in. GigaSpaces is a next-generation virtualised middleware platform that hosts logic, data, and messaging in memory and has fewer moving parts, so scaling out can be achieved linearly, unlike with traditional middleware platforms. GigaSpaces is underpinned by a service grid that lets application-level Service Level Agreements (SLAs) be set, monitored, and acted on in real time. This means that if load increases, GigaSpaces can scale threads or the number of virtualised middleware instances to ensure the SLA is met, which in our example would be the ability to process the required number of requests. GigaSpaces also partners with RightScale, and lets you try its cloud offering for free before moving to the traditional utility-compute pricing model. A rough sketch of scaling through the XAP admin API follows below.
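
For a flavour of what this looks like in practice, below is a rough sketch of deploying and scaling a space through the XAP admin API, as we understand it from the GigaSpaces documentation; the group and space names are invented, and exact class and method names should be checked against your XAP version.

import org.openspaces.admin.Admin;
import org.openspaces.admin.AdminFactory;
import org.openspaces.admin.gsm.GridServiceManager;
import org.openspaces.admin.pu.ProcessingUnit;
import org.openspaces.admin.space.SpaceDeployment;

public class XapScalingSketch {
    public static void main(String[] args) {
        // Discover the running service grid and wait for a manager.
        Admin admin = new AdminFactory().addGroup("demo-grid").createAdmin();
        GridServiceManager gsm = admin.getGridServiceManagers().waitForAtLeastOne();

        // Deploy a partitioned space; the instance and backup counts are the
        // SLA the service grid enforces at runtime.
        ProcessingUnit pu = gsm.deploy(
                new SpaceDeployment("demo-space").numberOfInstances(4).numberOfBackups(1));

        // Reacting to load is then a runtime operation rather than a redeploy.
        pu.incrementInstance();
    }
}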

Scalr: Scalr is a series of Amazon Machine Images (AMIs) covering basic website needs: an app server, a load balancer, and a database server. The AMIs are pre-built with a management suite that monitors the load and operating status of the various servers on the cloud. Scalr purports to increase and decrease capacity as demand fluctuates, as well as to detect and rebuild improperly functioning instances. Scalr has open-source and commercial versions and is a relatively new infrastructure service/application. We liked Scalr’s ‘Synchronize to All’ feature, which auto-bundles an AMI and then re-deploys it on a new instance without interrupting the core running of your site, saving you the trip through the EC2 image/AMI creation process. To find out more about Scalr, check out the Scalr Google Groups forum.

RightScale: RightScale has an automated cloud management platform. RightScale’s services include auto-scaling of servers according to usage load and pre-built installation templates for common software stacks. RightScale supports Amazon EC2, Eucalyptus, FlexiScale, and GoGrid, and is quoted as saying that Rackspace support will also arrive at some point. RightScale has a great case study overview on its blog about Animoto, which also explains how it has launched, configured, and managed over 200,000 instances to date. RightScale is VC-backed and closed a $13 million Series B funding round in December 2008. RightScale has free and commercial offerings.

FreedomOSS: Freedom OSS has created custom templates, called jPaaS (JBoss Platform as a Service), for scaling resources such as JBoss Application Server, JBoss Messaging, JBoss Rules, jBPM, Hibernate, and JBoss Seam. jPaaS monitors the instances for load and scales them as necessary. It also takes care of updating the vhosts file and other relevant Apache configuration files so that all instances of Apache respond to the application’s hostname; the newly deployed app, running on either Tomcat or JBoss, becomes part of the new app server image. A sketch of that vhost rewrite follows below.
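
To make that vhost bookkeeping concrete, here is a hypothetical sketch of the kind of Apache config rewrite such a tool performs whenever an app server instance joins or leaves; the hostname, addresses, and file path are invented for illustration, and the directives assume mod_proxy_balancer is loaded.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class VhostRewriter {

    // Render a mod_proxy_balancer vhost listing every current app server
    // instance, so Apache fans requests out across all of them.
    static String renderVhost(String hostname, List<String> instances) {
        StringBuilder sb = new StringBuilder();
        sb.append("<VirtualHost *:80>\n");
        sb.append("  ServerName ").append(hostname).append("\n");
        sb.append("  <Proxy balancer://appcluster>\n");
        for (String addr : instances) {
            sb.append("    BalancerMember http://").append(addr).append("\n");
        }
        sb.append("  </Proxy>\n");
        sb.append("  ProxyPass / balancer://appcluster/\n");
        sb.append("</VirtualHost>\n");
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Rewrite the vhost file after a scaling event adds a second instance.
        String conf = renderVhost("app.example.com",
                Arrays.asList("10.0.0.11:8080", "10.0.0.12:8080"));
        Files.write(Paths.get("/etc/httpd/conf.d/app.conf"), conf.getBytes());
    }
}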