Cloud Failure – Files cannot be downloaded from Box.net

Again the ugly issue of what you do when the cloud goes wrong rears its head. Right now, if you log in to box.net and try to download a file, you simply cannot. Instead you get a screen like the one below. I'm sure Box are aware of this, but it again shows the total reliance you have on an outsourced cloud infrastructure: their problems become your problems.

[Screenshot: Box.net error page shown when attempting to download a file]

Spying in the Cloud to enable better customer service

As the debate about privacy settings on social networks rages on, one company is taking advantage of their lax defaults to monitor customer comments.

British Telecom has devised software that monitors social networks for negative comments, enabling their customer service teams to react instantly and try to turn a negative experience into a positive one.

BT is using software called DebateScape, developed in their own labs. The software scans social sites to find negative comments from consumers and employs sophisticated algorithms to filter out other chatter.

Because many social networks make user comments public by default (requiring users to opt out of sharing), those comments are visible for DebateScape to trawl and find.
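The algorithms BT uses are proprietary and not public, but the basic idea of trawling public comments and filtering them for negative brand mentions can be sketched very simply. The keyword lists, brand terms and sample comments below are purely illustrative assumptions, not anything taken from DebateScape itself:

```python
# Purely illustrative sketch of negative-comment filtering; DebateScape's real
# algorithms are proprietary and far more sophisticated. All terms and sample
# data below are assumptions for illustration only.

NEGATIVE_TERMS = {"broken", "useless", "terrible", "refund", "complaint", "outage"}
BRAND_TERMS = ("bt", "british telecom")

def looks_negative(comment: str) -> bool:
    """Flag a public comment that mentions the brand alongside negative language."""
    text = comment.lower()
    mentions_brand = any(term in text for term in BRAND_TERMS)
    is_negative = any(word in NEGATIVE_TERMS for word in text.split())
    return mentions_brand and is_negative

def find_candidates(public_comments):
    """Return the comments worth routing to a customer service agent."""
    return [c for c in public_comments if looks_negative(c)]

if __name__ == "__main__":
    sample = [
        "Loving the new broadband speed!",
        "My BT line has been broken for three days, I want a refund",
    ]
    print(find_candidates(sample))
```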

Research published last year claims that a single negative comment on the Internet can cost a company as many as 30 customers.

Personally I think it's a great initiative, and it's encouraging to see BT innovating in this way.

Amazon S3 showing elevated error rates

In a recent post, CenterNetworks noted that the Amazon S3 service is showing elevated error rates. They noticed that several images were not loading correctly, and heard from multiple CN readers seeing the same issue on their own sites.

They note the issues seem to be hitting only the U.S. Standard region; other S3 regions, including Northern California, Europe and Asia, are functioning correctly.

Abiquo ushers in the Virtualisation 2.0 era

Abiquo announced themselves on the world stage recently at the SysCon Cloud Computing Expo in New York, which they completely dominated. Their impressive booth was packed with people the whole time. If audience feedback and energy are anything to go by, they gave by far the most compelling presentations, an interesting contrast to Oracle, who seemed merely to restate old material, and Microsoft, who didn't excite the audience at all.

The Abiquo vision of "Virtualization 2.0" was very well received. They presented a clear path to resolving the shortcomings of Virtualization 1.0 while also providing a practical route to a fully brokered marketplace, something that Pete Malcolm, their CEO, termed the "Resource Cloud". Their demonstration, in which they converted a virtual machine from VMware to Hyper-V, had the audience cheering and gasping, and was one of those magic moments you wait for at an event like this.

It also seems that the folks from VMware who were in the audience took note, if this blog post is anything to go by. VMware recently, and accidentally, announced its private-cloud initiative, known as "Project Redwood", when details of it were mistakenly placed on its website. The presentation detailed vCloud Service Director, which purports to let enterprises create internal clouds, help internal and external clouds interoperate, and enable developers to create new applications within a cloud framework. Abiquo already does this today, and in an open way, supporting multiple virtualisation vendors. It seems VMware are quickly trying to make up lost ground and get with the programme!

Although Abiquo seem to have come from nowhere, they have been honing their product for the last few years. Their spectacular launch at the SysCon expo, with a product and vision that resonated so well, saw them recently named one of the 15 Cloud Computing Companies to Watch by Network World.

The keynote presented at the recent SysCon session in New York can be read below, and the Abiquo product can be downloaded for free from the Abiquo website, which also has some great videos and whitepapers to get you up to speed on what can be achieved with their platform.

Summary:

Abiquo's abiCloud product offers vendor neutrality, workload management, resource management of physical servers, storage management and the ability to scale applications. abiCloud allows customers to provision virtual machines without having access to the physical servers. This multi-tenancy delegation capability makes it easy for companies to create and manage both public and private clouds.

Abiquo offers two editions of its product: a free community edition without support, and an enterprise edition with three levels of support. The product is open source and released under the LGPL.

Amazon S3 adds RRS – Reduced Redundancy Storage

Amazon have introduced a new storage option for Amazon S3 called Reduced Redundancy Storage (RRS) that enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than the standard storage of Amazon S3.

It provides a cost-effective solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. The RRS option stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage does, and thus is even more cost effective.

Both storage options are designed to be highly available, and both are backed by Amazon S3’s Service Level Agreement.

Once customer data is stored using either Amazon S3’s standard or reduced redundancy storage options, Amazon S3 maintains durability by quickly detecting failed, corrupted, or unresponsive devices and restoring redundancy by re-replicating the data. Amazon S3 standard storage is designed to provide 99.999999999% durability and to sustain the concurrent loss of data in two facilities, while RRS is designed to provide 99.99% durability and to sustain the loss of data in a single facility.

Pricing for Amazon S3 Reduced Redundancy Storage starts at only $0.10 per gigabyte per month and decreases as you store more data.

From a programming viewpoint, to take advantage of RRS you set the storage class of an object to RRS when you upload it, by setting the x-amz-storage-class header to REDUCED_REDUNDANCY in the PUT request.
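For instance, here is a minimal sketch using the boto3 Python SDK, which sets the x-amz-storage-class header on the PUT for you; the bucket name, key and local file are placeholders:

```python
# Minimal sketch: upload a reproducible object (e.g. a thumbnail) using RRS.
# boto3 translates StorageClass into the x-amz-storage-class header on the PUT.
# Bucket name, key and local file are placeholders.
import boto3

s3 = boto3.client("s3")

with open("photo-small.jpg", "rb") as body:
    s3.put_object(
        Bucket="my-example-bucket",          # placeholder bucket
        Key="thumbnails/photo-small.jpg",    # placeholder key
        Body=body,
        StorageClass="REDUCED_REDUNDANCY",   # i.e. x-amz-storage-class: REDUCED_REDUNDANCY
    )
```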

Amazon announce new Asia Pacific region in Singapore for their cloud services

Starting today, Asia Pacific-based businesses and global businesses with customers based in Asia Pacific can run their applications and workloads in AWS’s Singapore Region to reduce latency to end-users in Asia and to avoid the undifferentiated heavy lifting associated with maintaining and operating their own infrastructure.

The new Singapore Region launches with multiple availability zones and currently supports Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon SimpleDB, Amazon Relational Database Service (Amazon RDS), Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), Amazon CloudWatch, and Amazon CloudFront. Singapore Region pricing is available on the detail page of each service, at aws.amazon.com/products.
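From a developer's point of view, using the new region is simply a matter of pointing the SDK or API endpoint at it. A minimal sketch with the boto3 Python SDK and the Singapore region identifier (ap-southeast-1) is shown below; the AMI ID and instance type are placeholders:

```python
# Minimal sketch: launch an EC2 instance in the new Singapore region.
# The region identifier for Singapore is ap-southeast-1; the AMI ID and
# instance type below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder: an AMI available in ap-southeast-1
    InstanceType="t2.micro",  # placeholder instance type
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```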

GigaSpaces release 7.1 of XAP cloud-enabled middleware – certified for use on Cisco UCS

The upcoming release of GigaSpaces XAP includes the 'Elastic Data Grid', which enables deploying a full clustered application with a single API call. Users simply specify their business requirements and XAP automatically performs sizing, hardware provisioning, configuration and deployment. The aim is to simplify things, reducing effort and cost for enterprise applications that require dynamic scalability.

Other features of the XAP 7.1 release include:

  • Certified for use with Cisco UCS, providing enhanced performance
  • Built-in multi-tenancy
  • Extended in-memory querying capabilities
  • Real-time distributed troubleshooting
  • Multi-core utilization

More detail can be found on the GigaSpaces website.

Sun’s Grid Engine now features cloud bursting and Apache Hadoop integration

Sun (or is that Oracle…) has released a new version of their Grid Engine which brings it into the cloud.

There are two main additions in this release. The first is integration with Apache Hadoop, whereby Hadoop jobs can now be submitted to Grid Engine as if they were any other computation job. Grid Engine also understands Hadoop's file system, which means it is able to send work to the correct part of the cluster (data affinity).
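The exact submission mechanics depend on how the Hadoop integration is configured in your installation, but conceptually a Hadoop job is queued like any other batch job. A rough sketch using the Python DRMAA bindings for Grid Engine follows; the wrapper script path and the "hadoop" parallel environment name are assumptions about a particular setup:

```python
# Rough sketch: submit a Hadoop job to Grid Engine like any other batch job,
# via the Python DRMAA bindings. The wrapper script path and the "hadoop"
# parallel environment name are assumptions about the local installation.
import drmaa

session = drmaa.Session()
session.initialize()

job = session.createJobTemplate()
job.remoteCommand = "/opt/jobs/run_wordcount.sh"  # placeholder Hadoop wrapper script
job.nativeSpecification = "-pe hadoop 8"          # request 8 slots from the Hadoop PE

job_id = session.runJob(job)
print("Submitted Hadoop work as Grid Engine job", job_id)

session.deleteJobTemplate(job)
session.exit()
```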

The second is dynamic resource reallocation, which includes the ability to use on-demand resources from Amazon EC2. Grid Engine is now able to manage resources across logical clusters, whether in the cloud or not, meaning it can be configured to "cloud burst" depending on load, which is a great feature. The integration is specifically set up with EC2 and enables scaling down as well as scaling up.

This release of Grid Engine also implements a usage accounting and billing feature called ARCo, making it truly SaaS ready as it is able to cost and bill jobs.

Impressive and useful stuff, and if you are interested in finding out more you can do so here.

IBM Developer Cloud gains new features

The free beta of the IBM Developer Cloud continues to move forward with more features being added. Recently added were:

– REST and Java APIs

– Instance-independent storage

– RHEL 5.4 base image

– IP address reservation.

Currently you have to boot a RHEL image, i.e. there is no notion of uploading and storing your own image, but IBM have confirmed they are working on this.

The beta is free, so anyone can sign up, launch some servers and get to grips with Big Blue's cloud offering.

GigaSpaces Version 7 and Intel Nehalem deliver impressive benchmark results

GigaSpaces, in conjunction with MPI Europe, Globant and Intel, recently conducted benchmarks on the in-memory data caching / data grid element of their version 7 XAP platform on Intel's Nehalem chipset. XAP version 7 reached 1 million data updates per second and 2.6 million data retrievals per second with four client threads on the Nehalem chip.

Previously, XAP version 6 had benchmarked at 276,000 updates per second and 453,000 retrievals per second on the best previous Intel processor.
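A quick back-of-the-envelope calculation using the figures quoted above shows the scale of the jump, and matches the ratios in the summary below:

```python
# Back-of-the-envelope speed-ups implied by the published figures above.
xap6_updates, xap6_reads = 276_000, 453_000        # XAP 6 on the previous best Intel processor
xap7_updates, xap7_reads = 1_000_000, 2_600_000    # XAP 7 on Nehalem, four client threads

print(f"Update/write speed-up: {xap7_updates / xap6_updates:.1f}x")  # ~3.6x
print(f"Read speed-up:         {xap7_reads / xap6_reads:.1f}x")      # ~5.7x
```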

The summary of the tests is:

– GigaSpaces write and take operations from the in-memory data cache are about 3-4 times faster (300-400%) with the Nehalem chipset (in absolute numbers).

– Read operations perform much better with Nehalem (3-6 times better with 1-4 threads), and the difference increases as more concurrent threads are used. According to Shay Hassidim, one of the reasons for this is the lock-free read capability added to XAP 7.0.

– Nehalem + XAP 7.0.1 shows better scalability than Dunnington + XAP 6.2.2: about 30% better with write and take operations, and the gap grows further with read operations (90% with 10 threads).

GigaSpaces continues to push the speed and performance envelope with its product, and I'm informed that version 7.0.2 has again been heavily performance tuned and is even faster than the 7.0.1 platform used for this benchmark.

It will be interesting to see if other vendors in this space publish results for their products on Nehalem, which looks set to deliver a huge performance jump.