Is UK G-Cloud fit for purpose? A UK service provider says ‘No’!

Kate Craig-Wood, founder and MD of Memset, a UK service provider involved with UK G-Cloud from the early days, both practically and in an advisory capacity, recently called out G-Cloud for still being a ‘who you know’ rather than a ‘what you provide’ marketplace, the very thing it was supposed to move away from in order to provide a level playing field for UK SMEs. Kate’s post goes on to lay out Memset’s own commercial experience with G-Cloud (or lack thereof).

In a counter post, Nicky Stewart, commercial director at Skyscape Cloud Services, argues that while “G-Cloud isn’t perfect, and never will be…  it would be a disservice to G-Cloud, its buyers and suppliers, to suggest that G-Cloud is a fundamentally broken model”. Stewart goes on to note that Skyscape grew with G-Cloud and has created over 130 jobs to date.

G-Cloud sales are growing month on month and now exceed £1bn; the real proof of success will be achieving the government’s 33% SME spend target.


Trouble in Cloud Paradise – OwnCloud shuts down and Egnyte Pivots (again!)

We have reported in the past on the growing dead pool of consumer file sharing services, and it now seems hosted enterprise file sharing services are having similar issues.

OwnCloud Inc. recently announced that, after 5 years, it was shutting down its operations. Given the press and announcements coming out of OwnCloud in recent months this seemed a strange turn of events, and one surmises that revenues and sales must have played a part at some level. OwnCloud had some stellar partnerships in the open source space, including Red Hat, which already seem to have been taken over by other incumbents capitalising on its demise. Storage Made Easy, a commercial rather than open source vendor, yesterday announced its own partnership with Red Hat at the storage level, with a primary focus on Ceph and OpenStack.

Whilst not entirely in the same vein, but perhaps with a similar ethos, another enterprise file sharing vendor has announced a pivot. Egnyte announced that it is now focusing on protecting documents rather than on Enterprise File Share and Sync, which it believes has become commoditised. Egnyte’s issue is that the hosted Enterprise File Share and Sync market is indeed saturated, unlike the self-hosted space, which seems to be in much more demand from the enterprise. Although Egnyte purports to offer a hybrid capability, it is really a service provider whose data goes back through its own back-end ecosystem which, since the company took Google venture money, is Google’s storage infrastructure.

The Egnyte announcement comes off the back of a previous pivot in which the company said it was focusing on analytics and the adaptive enterprise. Maybe one of these will eventually stick! Egnyte will not have this space entirely to itself, with other incumbents such as Accellion (who had, or have, their own issues given the recently reported Facebook breach), WatchDox (a BlackBerry company since 2015) and Storage Made Easy all providing audit and governance features across a wide range of storage endpoints, and at least some of those vendors do provide secure on-site, behind-the-firewall self-hosting.

Expect to see more companies falling by the wayside, even maybe some unicorns in this space, as it commoditises and VC backed vendors come under pressure to prove out revenue models.

Solving Time Sync Issues on Azure

We just came off an Azure project and thought it would be useful to publish our notes on keeping a hosted server’s time in sync.

1. We were configuring a Linux server hosted on Azure. The NTP protocol uses UDP on port 123, but you don’t have to explicitly allow it in ‘iptables’ on Linux; the NTP traffic just gets passed through.

2. On Azure you don’t have to define the port in the VM configuration, as you do for 22/80/443.

3. Older posts say Azure doesn’t support UDP, but it appears to now.

4. In theory, Azure provides a time service “” but it was around 80ms behind the standard servers at [0123]

Even after configuring NTP, the clock on the hosted Linux appliance starts drifting again very quickly.
The problem appears to be with time sync in Hyper-V on Windows Server 2008, which is what Azure is built on.

The solution is to look at the changes required to grub.conf and ntp.conf as described at:
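As a hedged sketch (the parameter values below are common recommendations for Hyper-V guests, not necessarily those from the linked article), the fix usually involves two things: a clocksource override on the kernel line in grub.conf, and loosening ntpd’s sanity checks in ntp.conf so it keeps disciplining a fast-drifting clock:

```shell
# Assumed fix, to be verified against your distro's documentation.
# 1) grub.conf: append a clocksource override to the kernel line so the guest
#    stops relying on the TSC, which drifts under Hyper-V, e.g.:
#      kernel /vmlinuz-2.6.32 ro root=/dev/sda1 clocksource=pit
# 2) ntp.conf: 'tinker panic 0' stops ntpd exiting when the clock is too far
#    out, so it keeps stepping the time back into line.
cat > ntp.conf.example <<'EOF'
tinker panic 0
driftfile /var/lib/ntp/drift
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
EOF
grep '^tinker' ntp.conf.example
```

After restarting ntpd, watch the drift over an hour or so before trusting the box with anything time-sensitive.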

Cold Storage not so cold – Google Nearline v Amazon Glacier

It may have taken a little time but Google has come up with an alternative to Amazon’s Glacier cold storage proposition.

Called Google Storage Nearline, it is now available in beta as part of Google’s object storage product, Google Storage.

Google Storage is targeted at businesses, and a Google Storage account is required to take advantage of Nearline, which is offered as a choice when creating a Google Storage bucket/container.


Once a bucket is designated as Google Nearline it can be used immediately.
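A hedged sketch of creating a Nearline bucket with the gsutil CLI (the bucket name is made up, and the storage-class flag was in flux during the beta, so verify against the current gsutil documentation):

```shell
# Illustrative only: hypothetical bucket name and US location.
gsutil mb -c nearline -l US gs://my-archive-bucket

# Objects copied into the bucket are then stored (and billed) as Nearline:
gsutil cp backup.tar.gz gs://my-archive-bucket/
```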

Google positions Nearline as providing “the convenience of online storage at the price of offline storage” and indeed it does, with access and retrieval times of around 3 seconds.

Nearline also offers regional control of the storage bucket (similar to what users expect from Amazon S3/Glacier), allowing users to control where data is stored. The regional options are U.S., Europe and Asia, although regional buckets are not expected to be fully available until Nearline emerges from beta.

Ultimately Nearline is offering companies a relatively simple, fast-response, low cost, tiered storage solution with not only quick data backup but on-demand retrieval and access.

For users who are already using or aware of Amazon Glacier the major differences are as follows:

Nearline                                   Glacier

1 cent per GB per month                    1 cent per GB per month
($10 per TB per month)                     ($10 per TB per month)

~3 second retrieval                        3 to 5 hour retrieval
(on-demand access)                         (a retrieval request is needed)

Data redundancy                            Data redundancy
(multiple locations)                       (multiple locations)

Regional support:                          Regional support:
3 locations                                7 locations

Existing Google Storage APIs               New, Glacier-specific APIs

Egress fees apply                          Egress fees apply

Retrieval cost: $0.12 per GB               Retrieval cost: $0.09 per GB

Retrieval speed: 4 MB/s per stored TB      Data delivered in 3 to 5 hours
(after first byte, scales linearly)

Availability: 99%                          Availability: 99.99%
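To make the pricing rows concrete, here is a toy comparison for a hypothetical 500 GB archive, using the per-GB figures from the table above (check the current price lists before relying on any of this):

```shell
# Hypothetical 500 GB archive; storage price is the same, retrieval differs.
awk 'BEGIN {
  gb = 500
  printf "monthly storage (either service): $%.2f\n", gb * 0.01
  printf "one full retrieval on Nearline:   $%.2f\n", gb * 0.12
  printf "one full retrieval on Glacier:    $%.2f\n", gb * 0.09
}'
```

That works out at $5.00 a month to store on either service, but $60.00 versus $45.00 for a single full retrieval, so retrieval frequency matters more than storage price when choosing.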

Pricing and features are of course subject to change, so always check the links below for the latest details:

Amazon Glacier Pricing can be found here.
Google Nearline Storage Pricing can be found here.

Amazon Glacier Whitepaper here.
Google Nearline Whitepaper here.

Amazon EBS Provisioned IOPS volumes can now store up to 16 TB

From today, users of Amazon Web Services can create Amazon EBS Provisioned IOPS volumes that store up to 16 TB and process up to 20,000 input/output operations per second (IOPS).

Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with Amazon EC2 (Elastic Compute) instances in the AWS Cloud.

Users can also create Amazon EBS General Purpose (SSD) volumes that can store up to 16 TB, and process up to 10,000 IOPS. These volumes are designed for five 9s of availability and up to 320 megabytes per second of throughput when attached to EBS optimized instances.

These performance improvements make it even easier to run applications requiring high performance or high amounts of storage, such as large transactional databases, big data analytics, and log processing systems. Users can now run large-scale, high performance workloads on a single volume, without needing to stripe together several smaller volumes.

Larger and faster volumes are available now in all commercial AWS regions and in AWS GovCloud (US). To learn more please check out the Amazon EBS details page.
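As a sketch, creating a maximum-size Provisioned IOPS volume from the AWS CLI looks like the following (the availability zone is an example, and the size/IOPS values are simply the limits quoted above; check the EBS documentation for current limits):

```shell
# 16 TB = 16384 GiB, with the new 20,000 IOPS ceiling (illustrative values).
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --volume-type io1 \
    --size 16384 \
    --iops 20000
```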

Storage Vendors go for broke with OpenStack Swift Storage

OpenStack, the open-source on-premise alternative to Amazon S3, is heading into 2015 with a vast amount of momentum. VCs are falling over themselves to invest in OpenStack-related companies and there seems to be genuine enterprise momentum.

The OpenStack story kicked off in 2010 as a combined project between Rackspace and NASA. Fast forward to 2015 and it is managed by the OpenStack Foundation, a non-profit corporate entity established in September 2012 to promote the OpenStack software.

Most people know OpenStack primarily for its infrastructure-as-a-service (IaaS) solution, but it also has an object storage solution called ‘Swift’ (not to be confused with Apple’s new programming language, also confusingly called ‘Swift’) which has garnered momentum of its own.

Object storage is a storage architecture that manages data as objects, unlike other storage systems which manage data either as a file hierarchy or as blocks within sectors and tracks (block storage).

The advantages of object storage architectures are that they offer near-unlimited scalability with a lower emphasis on processing, and that they offer access using Internet protocols (REST) rather than storage commands.

There is momentum around object storage companies, including commercial vendors such as Cleversafe, Cloudian, Amplidata and Scality.

Vendors who are offering an OpenStack Swift distro as part of their offering include:

HP (Helion Content Depot)
IBM (Cloud Manager with OpenStack)
SoftLayer (Now owned by IBM)
SUSE Cloud
Ubuntu OpenStack
RedHat OpenStack
VMWare OpenStack

As an example of the sums of money involved, Mirantis recently closed a $100 million round and SwiftStack a $16 million round, taking the two companies to total investments of $120 million and $23.6 million respectively. IBM also purchased SoftLayer for a reputed $2 billion. It’s clear that VCs and software vendors see something special in OpenStack.

Amazon Web Services may rule when it comes to public cloud, but a recent survey sponsored by GigaOM indicated that half of deployed private clouds were OpenStack based.

OpenStack, like Amazon Web Services, is primarily driven through REST APIs and toolkits that developers use to interact with the OpenStack infrastructure. As with AWS, this creates opportunities for vendors at the application level to provide apps and tools.
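As an illustration of how thin that API surface is, uploading an object to Swift is a single authenticated HTTP PUT (the endpoint, account and token below are all made up):

```shell
# Hypothetical endpoint and token; the URL follows Swift's
# /v1/<account>/<container>/<object> layout.
TOKEN="AUTH_tk_0123456789abcdef"
curl -i -X PUT -T report.pdf \
     -H "X-Auth-Token: $TOKEN" \
     https://swift.example.com/v1/AUTH_demo/backups/report.pdf
```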

Storage Made Easy is a company that has already made an impact on the OpenStack community with its Enterprise File Share and Sync product, which has been optimized for OpenStack Swift. The company, itself a startup, already has a growing number of service providers and customers using its enterprise application in conjunction with OpenStack Swift, and has partnered with a number of the key players listed above in a strategy focused on taking advantage of OpenStack’s growth.

Other companies are treading the same path and this itself creates an eco-system of enterprise ready Applications ready to take advantage of OpenStack’s foothold in the Enterprise to grow or to be acquired.

Of course, with OpenStack being an open-source initiative, it is not just commercial apps that have sprung up around it. There are open source applications such as Swift Explorer and Cyberduck but, strangely given OpenStack’s open source roots, there seem to be more commercial offerings than open source ones.

All in all, OpenStack is an initiative in the ascendancy. It used to be said that OpenStack was more hype than reality, but as we head into 2015 the money men have placed their bets, and they tend to bet on reality rather than hype.



Hardening RedHat (CentOS) Linux for use on Cloud

If you intend to deploy Linux in the cloud you should consider hardening the instance prior to deployment. Below are guidelines we have pulled together for hardening a Red Hat or CentOS instance.

Hardening Red Hat Linux guidelines

Enable SELinux

Ensure that /etc/selinux/config includes the following lines:
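The exact lines did not survive in the post; the standard enforcing setup (our assumption, not necessarily the author’s exact list) looks like this:

```shell
# Typical /etc/selinux/config contents for an enforcing, targeted policy.
cat > selinux-config.example <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
grep 'SELINUX' selinux-config.example
```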

Run the following on the command line to allow httpd to create outbound network connections (the -P flag makes the change persistent across reboots):
setsebool -P httpd_can_network_connect=1

Check the current mode using ‘getenforce’. To enable/disable enforcement at runtime:
echo 1 > /selinux/enforce

Disable unneeded services

chkconfig anacron off
chkconfig autofs off
chkconfig avahi-daemon off
chkconfig gpm off
chkconfig haldaemon off
chkconfig mcstrans off
chkconfig mdmonitor off
chkconfig messagebus off
chkconfig readahead_early off
chkconfig readahead_later off
chkconfig xfs off

Disable SUID and SGID Binaries

chmod -s /bin/ping6
chmod -s /usr/bin/chfn
chmod -s /usr/bin/chsh
chmod -s /usr/bin/chage
chmod -s /usr/bin/wall
chmod -s /usr/bin/rcp
chmod -s /usr/bin/rlogin
chmod -s /usr/bin/rsh
chmod -s /usr/bin/write

Set Kernel parameters

At boot, the system reads and applies a set of kernel parameters from /etc/sysctl.conf. Add the following lines to that file to prevent certain kinds of attacks:
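The post’s list of lines did not survive; the fragment below is a commonly recommended baseline (our assumption), covering SYN floods, source routing, ICMP redirects and spoofed packets. Review each value against the Red Hat hardening guides before applying:

```shell
# Example /etc/sysctl.conf hardening fragment (commonly recommended values).
cat > sysctl.conf.example <<'EOF'
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.log_martians = 1
EOF
# Apply at runtime with: sysctl -p /etc/sysctl.conf
wc -l < sysctl.conf.example
```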


Disable IPv6

Unless your policy or network configuration requires it, disable IPv6. To do so, prevent the kernel module from loading by adding the following line to /etc/modprobe.conf:
install ipv6 /bin/true
Next, add or change the following lines in /etc/sysconfig/network:
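The lines themselves are missing from the post; on RHEL/CentOS of this era the usual settings (our assumption) are:

```shell
# Example /etc/sysconfig/network settings to disable IPv6 at the init level.
cat > network.example <<'EOF'
NETWORKING_IPV6=no
IPV6INIT=no
EOF
grep 'IPV6' network.example
```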

Nessus PCI Scan

Upgrade openssh to the latest version

Upgrade bash to the latest version

Switch off identifying HTTP headers

In /etc/httpd/conf/httpd.conf set the following values
ServerTokens Prod
ServerSignature Off
TraceEnable off

In /etc/php.ini set
expose_php = Off

Change MySQL to listen on localhost only

Edit /etc/my.cnf and add the following to the [mysqld] section
bind-address =

Make sure only ports 80, 443 and 21 are open

vi /etc/sysconfig/iptables
and add rules of the form:
-A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 21 -j ACCEPT

Cloud Advertising: Google Adwords – how much is enough?

Normally this blog is pretty tech focused, but we thought we’d depart slightly from our normal modus operandi and provide a high-level overview of Google Adwords spend. We often get asked: how much should we spend? If we are only spending a small amount, should we even bother? Good questions, so here are our 5 cents:

– To figure out effectiveness, plan a test budget and a test campaign matrix, and run it for a month or so to see where you get the best bang for your buck.

– Remember it is not about the spend, it is about the ROI. If the ROI holds up, your spend should increase.

– You should focus on Earnings Per Click (EPC), not Cost Per Click (CPC). That is what really counts. (EPC = Customer Value × Conversion Rate.)

Focus on how to increase EPC during your trial. In particular:

– Set up Google Adwords conversion tracking; without it your campaign is worthless, because you need to be able to track conversions.

– Focus on refining the Ad to make it as compelling as possible. Monitor the conversions won (or lost) due to the change.

– You must create relevance between the Ad and the landing page, otherwise Google will score you down as your prospects quickly click away and/or when it checks the page for relevant keywords.

– Focus on the most cost-effective keywords. Don’t bother with those that are outside your value range, i.e. those that eat into your ROI or end up in a negative ROI.

– Use lots of negative keywords to prevent untargeted traffic.
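The EPC arithmetic above is worth making concrete. Here is a toy example with invented numbers, as a sanity check on whether a campaign can pay for itself:

```shell
# All figures hypothetical: $200 customer value, 2% conversion, $1.50 CPC.
awk 'BEGIN {
  value = 200        # average customer value, $
  conv  = 0.02       # conversion rate (2% of clicks become customers)
  cpc   = 1.50       # cost per click, $
  epc = value * conv # earnings per click
  printf "EPC = $%.2f, CPC = $%.2f, margin per click = $%.2f\n", epc, cpc, epc - cpc
}'
```

Here EPC is $4.00 against a $1.50 CPC, so each click earns $2.50. If the margin goes negative, fix the ad, the landing page or the keywords before raising spend.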

That’s it! There are a gazillion great ways of refining Adwords or making it work harder for you (long-tail keywords, different match types etc.) but these high-level tips should get you on the right road from the beginning.


Ed Snowden’s email service shuts down and advises not to trust physical data to US companies – what are the options?

It has been a while since our last post and a lot has happened in that time, including the explosion of the Edward Snowden PRISM snooping revelations. These have continued to gather momentum, culminating in the closure of Lavabit, the email service that Snowden used. The owner, Ladar Levison, said that he had to walk away to prevent becoming complicit in crimes against the American public. All very cryptic and chilling. He also advised that he “would strongly recommend against anyone trusting their private data to a company with physical ties to the United States.” So what to do if you have data stored on remote servers?

Well, firstly, you may not care; the data you are storing may not be sensitive in any way. And that is the key, i.e. you need a strategy for how you deal with sensitive data and the sharing of it. So what can you do?

1. You could consider encrypting the data that is stored on cloud servers. There are various ways to do this. There are client-side tools such as BoxCryptor that do a good job, and there are also more enterprise-type platform solutions such as CipherCloud and Storage Made Easy that enable private-key encryption of data stored remotely. Both types can be deployed on-premise behind the corporate firewall.

2. You could consider a different policy entirely for sharing sensitive data. On a personal basis you could use OwnCloud, or even set up a Raspberry Pi as your own personal DropBox; or you could use Storage Made Easy to create your own business cloud, keeping sensitive data behind the firewall and encrypting remote data stored outside it.
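The tools named above each have their own workflows, but the underlying idea of client-side encryption can be sketched with nothing more than openssl (a passphrase on the command line is for illustration only; use a key file in practice):

```shell
# Encrypt locally and upload only the .enc file; the provider never sees plaintext.
echo "quarterly figures" > secret.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:correct-horse \
        -in secret.txt -out secret.txt.enc
# Later, after downloading it back, decrypt with the same passphrase:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:correct-horse \
        -in secret.txt.enc -out roundtrip.txt
cmp secret.txt roundtrip.txt && echo "round trip OK"
```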

The bottom line: think about your data security, have a policy, and think about how you protect sensitive data.


Understanding DNS, propagation and migration

We recently had a customer migrating from one DNS provider to another due to large outages from their existing supplier, i.e. a failure to keep their DNS services working correctly. They went ahead and migrated by changing the A and MX records for their domain/sub-domains, and only contacted us when they started getting outages during propagation; they suspected they must have done something wrong but were not sure how to check.

The best way to check is the dig command. Dig is an acronym for Domain Information Groper. Passing a domain name to dig by default displays the A record of the queried site (the IP address).

We can use dig to check that the new nameservers are correctly returning the A and MX records. To do this:

dig @<nameserver URL or IP> <domain name to check>

If this is correct then the new nameservers hold the correct records, which means that once they are changed at the registrar we can assume they will be correct.
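For example (the nameserver and domain below are placeholders), using +short to trim the output to just the record values so before/after comparison is easy:

```shell
# Query the new provider's nameserver directly, before switching the registrar.
dig @ns1.newprovider.example example.com A +short
dig @ns1.newprovider.example example.com MX +short
```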

In the case of the company in question, the new DNS servers were correctly returning the NameServer and MX records for the domain, but their local recursor was still returning the old NameServer records, as propagation had not yet taken place.

Other recursors can be checked to identify whether propagation has taken place there, i.e.:

dig @ ns <domain> would check the Verizon recursor

Others of note are the OpenDNS and Google public recursors.

Others can be found on the OpenNic Wiki

So in the company’s case, caching of the prior NameServers and the TTL (time to live) was causing the problem, as the new NameServers had not finished propagating. Essentially there were two different sets of nameservers, each returning different values, and each being selected at random (due to cached NS records).

One of the things we were able to do to help smooth the transition was to ensure each NameServer returned identical values, by making both zones 100% identical, i.e. on the original service we changed the NameServer NS records to match the new NameServer NS records. Ideally this would have been done as soon as the migration occurred.