Cloud Conflation

Good for what ails ya! We are a few years into this whole Cloud thing now and I’m surprised by how people still talk about it as a Cure All, some sort of silver bullet, conflating Cloud as a Service Delivery model with all sorts of things like collaboration, increased productivity, analytics (analytics?!), and a new model for application development.  Wow, where can I get some of that?  How much would you pay for such a wonder drug?  You need only open an industry rag or scholarly journal, or turn on the TV, to get blasted with some of this hype.  At least I haven’t seen a “To the Cloud!” commercial in a while.

I think we need to be much more precise in how we talk about Cloud, because all of this squishiness is not only misleading but also distracts from how we should be designing and adopting solutions that use this service delivery model.  And let me once again beg for a new moniker for this service delivery model; I’m so over Cloud.


Cloud Heresy

I’m about to commit a bit of cloud heresy: as a technology guy writing about cloud, I’m claiming that it’s really not all about hypervisors, automation and orchestration. Sure, you need a measure of these components in order to deliver on the cloud vision and model efficiently, but does that really solve the problems that are driving the consumers of IT to skirt enterprise IT and give their dollars to the public cloud? The number of services being consumed that are called cloud but really aren’t, and the amount of cloud washing going on in the marketplace, clue us in that it’s not the technology per se that is driving the consumption of cloud. The key thing I am hearing from my customers, and more importantly their customers, is that what drives people to consume these services, some of which are actually inferior from a service management standpoint to what is already offered internally, is the ease of consumption. Consumers are voting with their dollars for quick provisioning, knowing what they’ll pay and the levers that affect that cost, and transparency around what they are getting and using.


Trusted Cloud

More and more I’m hearing that it is no longer a matter of ‘if’ clients will use cloud computing in some way but a matter of ‘how’ and ‘when’.  Security is often listed as the number one concern regarding cloud adoption in surveys of EMC and VMware customers, and an informal poll at VMworld reflected that as well.  Why the need for a Trusted Cloud?  Well, by now people outside the evangelist ranks have figured out the benefits of cloud computing and are looking to use it within their enterprises, authorized or not.  The “consumers” within the enterprise really want the provisioning, management and reporting promised by the cloud, and in some instances they are willing to go around IT to get it.  So if “consumers” are already using cloud, and more and more of them want to be, we need to figure out a way to inject security and compliance into those services.  VMware has been doing their part with the launch of the vShield security portfolio last week, but that is only part of the equation.  So what is the Trusted Cloud?  It’s a cloud that assures that the right people have access to the right services, applications and information via a secured infrastructure.

I’ll be hosting an EMC Live! webcast tomorrow on the topic and some best practices for beginning the implementation of the Trusted Cloud.  You’ve got to start with an analysis and rationalization of your application portfolio in order to understand how and where trust needs to be incorporated in your transformed environment.  The rationalized application portfolio feeds into your service portfolio analysis:  what are the appropriate application or service architectural models for your environment?  This is the basis for your cloud strategy and cloud sourcing model:  what are the services that I need to provide my customers and where can they be sourced from?  From here you define your services, policies and controls via ITIL or whatever framework you prefer, document them in your Service Catalog, and then publish them via a Service Portal.  The goal is to provide an end-to-end unified look and feel across the different delivery models with the trust attributes integrated into the environment.
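To make that last step concrete, here is a minimal sketch of what a Service Catalog entry with trust attributes attached might look like. The class, field names and the publication gate are all hypothetical, invented for illustration rather than drawn from any EMC or VMware product:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceCatalogEntry:
    """One published service with its trust attributes attached."""
    name: str                    # e.g. "Standard Linux VM"
    delivery_model: str          # "private", "public", or "hybrid"
    data_classification: str     # e.g. "public", "internal", "regulated"
    controls: list = field(default_factory=list)  # mapped policy controls
    sla_uptime_pct: float = 99.9

def publishable(entry: ServiceCatalogEntry) -> bool:
    """Gate portal publication on the trust attributes being present."""
    return bool(entry.data_classification) and bool(entry.controls)

# A service is only surfaced in the Service Portal once its trust
# attributes have been filled in from the policy framework.
vm_service = ServiceCatalogEntry(
    name="Standard Linux VM",
    delivery_model="private",
    data_classification="internal",
    controls=["encryption-at-rest", "role-based-access", "audit-logging"],
)
print(vm_service.name, "publishable:", publishable(vm_service))
```

The point is simply that the trust attributes travel with the service definition, so anything published via the Service Portal has already passed through your policies and controls.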

Building the Trusted Cloud

If you’re interested in learning more please join me on September 9th at 11:00am EST for the EMC Live! webcast:

The GRC-Enabled Cloud

As cloud computing becomes more pervasive, one of the most important business questions concerns governance, risk, and compliance (GRC).

How can you achieve business agility and lower costs, while still ensuring that security and compliance issues are resolved?

Attend this webcast and you will:

Understand how to incorporate GRC considerations into the IT services provided by private cloud

Learn best practices from recent private cloud customer deployments by EMC Consulting

See how you can take advantage of private cloud initiatives to meet future requirements for GRC

Find out how defining IT services can help you incorporate public cloud capabilities into your private cloud without compromising security and compliance


Accelerating the Journey to Private Cloud

I argue, frequently and with just about anyone who will engage, that Cloud Computing is the model and there are several different types of instantiations.  This certainly isn’t a new or controversial idea, and not a sea change in and of itself.  The same could be said for Web 2.0, SOA, N-Tier, Client-Server and back to the Platonic Ideal.  The blogosphere and twitterdom are filled with talk of IaaS, PaaS, SaaS &c. as various forms of Cloud Computing, and those are interesting forms but not necessarily new ideas or modes of computing.  EMC has laid out the vision for a Private Cloud, it’s rather well defined, and we have gathered together a number of partners to help us enable our customers in the creation and operation of private clouds.  I’m certainly a proponent of Private Cloud, believe in the model and think that it is innovative and a new mode of computing, but I come here not to praise private cloud, but to enable it.

I’ve spent the last few months talking with customers all over the world about Cloud Computing in general and what EMC means by Private Cloud in particular.  I’ve been fortunate enough to get a lot of feedback, from the CXO level down to the managers and administrators who will be tasked with running these clouds.  A few common themes have emerged in these conversations.  Rarely does the question “Why Cloud Computing?” come up; it’s almost as if Cloud is a foregone conclusion, hyped into the mainstream.  Instead I am almost invariably asked by people at every level, “So now what?”  EMC and our partners, and the market in general, have done a good job of laying out the groundwork and vision for Cloud Computing and its benefits, and a hardware and software portfolio to enable it.  As with most paradigm shifts, the question becomes how to actually execute against the vision with those products and make it a reality.

It seems to me that a lot of IT organizations are positioning themselves for Private Cloud, knowingly or unknowingly.  The virtualization of the data center, not just of servers but real enterprise virtualization, is a key milestone on the path to Private Cloud.  Not only does it provide the framework to build a Private Cloud on, it brings real benefits to the organization: reduced capital expenses, operating expenses, time to provision and mean time to repair, and improved satisfaction for internal and external customers.  These benefits are core to the allure of Private Cloud, and IT is keen to realize them as quickly as possible.

I’ve often seen, and industry analysts seem to report weekly, that virtualization efforts hit a wall when around 20-30% of the workloads in the data center have been virtualized.  There are many reasons for this, ranging from the limited applicability of earlier virtualization solutions to enterprise workloads, to insufficient application owner and line of business buy-in to the transformation, leading to a lack of approved downtime windows and applications never being approved for P2V.  We’ve helped a number of customers push through this wall and drive towards their goals of 80-90% of workloads virtualized by developing enterprise virtualization programs and acceleration services, documenting the activities and processes surrounding the virtualization of servers and applications, and building training and comprehensive communication and marketing plans to win the buy-in of stakeholders and application owners.

It’s not just driving enterprise virtualization that will help IT realize the benefits of Private Cloud, however.  A lot of outsourcing companies operated for years on the concept of “your mess for less”.  For this to be a real transformation it can’t just be the same old problems running on a shiny new architecture.  A key component of the journey to Private Cloud has to be the rationalization of the application portfolio.  We are constantly adding new applications, features and functionality to the environment, and for every “server hugger” out there I’d argue there’s an “application hugger”; we all have our babies and we’re certainly not going to let them be torn from our arms.

A systematic review of the existing application portfolio to identify opportunities for retirement, feature/functionality consolidation, and the replatforming and virtualization of workloads running on proprietary Unix systems provides the roadmap for how many of the promised savings can be realized.  If you want to embrace x86 as the chosen platform you have to figure out how to get as much of your application portfolio as possible onto it.  Coupling this portfolio rationalization with a comprehensive business case for Private Cloud provides the framework for driving line of business and application team compliance, and for a realistic timeline of how quickly you can actually realize Private Cloud.
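As a toy illustration of what such a review produces, here is a sketch that buckets applications into dispositions. The attributes and thresholds are entirely made up; a real rationalization weighs far more dimensions (licensing, interdependencies, compliance and so on):

```python
# Hypothetical rationalization pass: bucket each application into
# retire / consolidate / replatform / virtualize.
apps = [
    {"name": "legacy-reporting", "platform": "unix", "users": 3,   "overlaps": True},
    {"name": "order-entry",      "platform": "unix", "users": 450, "overlaps": False},
    {"name": "intranet",         "platform": "x86",  "users": 900, "overlaps": False},
]

def disposition(app):
    if app["users"] < 10:
        return "retire"        # little use left: reclaim the resources
    if app["overlaps"]:
        return "consolidate"   # duplicates functionality found elsewhere
    if app["platform"] != "x86":
        return "replatform"    # move it onto the chosen x86 platform
    return "virtualize"        # already on x86: a P2V candidate

for app in apps:
    print(f'{app["name"]}: {disposition(app)}')
```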

So that accounts for the infrastructure and the applications; now for the trifecta: governance!  A new model of computing requires a new model of governance and the associated tools and processes.  Thousands of virtual machines crammed into a small number of cabinets, dynamically allocating and deallocating resources, is a daunting environment if your key governance tool is Microsoft Excel.  The identification of appropriate services to provide, service levels to achieve, and a chargeback model to allocate costs is required, absolutely required, to have any chance of successfully building and operating a Private Cloud.  This requires transparency into what you have, what you’re using, where it is, who owns it, what it requires, how it is to be measured and monitored, backed up, replicated, encrypted, allowed to grow or shrink, &c.  Sounds scary, I’m sure.
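To show how simple the core of a chargeback model can be once those definitions exist, here is a minimal showback calculation. The rate card and resource dimensions are invented for illustration; real rates fall out of the cost-allocation work described above:

```python
# Illustrative rate card: dollars per unit of resource consumed.
RATE_CARD = {"vcpu_hour": 0.02, "gb_ram_hour": 0.01, "gb_disk_month": 0.10}

def monthly_charge(vcpus, gb_ram, gb_disk, hours=730):
    """Charge for one VM over a month (~730 hours)."""
    compute = vcpus * hours * RATE_CARD["vcpu_hour"]
    memory = gb_ram * hours * RATE_CARD["gb_ram_hour"]
    storage = gb_disk * RATE_CARD["gb_disk_month"]
    return compute + memory + storage

# A 4-vCPU, 16 GB VM with 200 GB of disk, running the whole month:
print(f"${monthly_charge(4, 16, 200):,.2f}")  # $195.20
```

The hard part is not the arithmetic; it is the transparency behind it, so that consumers can see exactly which levers move the number.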

The service catalog, an integrated management tool framework and automated processes allow you to monitor, maintain and provision such an environment and to recover its costs.  Your administrators, engineers and operations teams need to be trained on the technologies, service levels and communications plan, and to have their roles and responsibilities well documented, to empower them in this kind of model.  New tools and proactive methods for communicating with your clients have to be developed and integrated to ensure they understand what services you are providing them, how they are being charged for them and what service levels you guarantee.  I personally think that self-service plays a key role in the development of a Private Cloud, or most cloud models for that matter, and integration of Change, Release and Capacity Management into a self-service portal can make the difference in your clients’ adoption of this new paradigm.
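A sketch of what that integration might gate on, with hypothetical names and thresholds: a capacity check first, then treating catalog-standard requests as pre-approved changes so provisioning can proceed without a manual review:

```python
# Free capacity in the resource pool; numbers are illustrative.
CAPACITY = {"vcpus_free": 128, "gb_ram_free": 512}

def submit_request(service, vcpus, gb_ram):
    # Capacity Management: queue anything the pool cannot absorb.
    if vcpus > CAPACITY["vcpus_free"] or gb_ram > CAPACITY["gb_ram_free"]:
        return "queued: capacity review required"
    # Change Management: catalog-standard requests are pre-approved,
    # so we reserve the resources and provision immediately.
    CAPACITY["vcpus_free"] -= vcpus
    CAPACITY["gb_ram_free"] -= gb_ram
    return f"approved: provisioning {service} ({vcpus} vCPU / {gb_ram} GB)"

print(submit_request("Standard Linux VM", 4, 16))
```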

We’ve packaged these services up under the umbrella of Accelerating the Journey to Private Cloud and have integrated our Technology Implementation Services and several new EMC Proven Solutions into a holistic stack to enable our customers. It’s not a light switch or a silver bullet; it is still a journey. But we’ve worked hard to take the lessons learned from many years of data center consolidations and migrations, process automation, custom reporting and dashboards, innovative solutions and architectures, product training and transformative program management, and integrate them into an effective services and solutions stack that accelerates the journey to Private Cloud and realizes real benefits today.


Utilization

One of my biggest pet peeves over the years has been utilization and capacity reporting.  I firmly believe that in order to figure out how to transform an environment into a more efficient one, you first have to know what you’ve got.  Over the years I’ve walked into customer after customer, or dealt with admins or peers when I was on the other side of the table, who couldn’t tell me how much storage they had on the floor, how it was allocated, or what the utilization of their servers was.  Part of the problem is that calculating utilization is one of those problems where perspective is reality: a DBA will have a much different idea of storage utilization than a sysadmin or a storage administrator.  And depending on how these various stakeholders are incented to manage the environment, you will see a great disparity in the numbers you get back.  It may sound like the most “no duh” advice ever given, but defining utilization metrics for each part of the infrastructure is a necessary first step.  The second step is publishing those definitions to anyone and everyone and incorporating them into your resource management tools.

Stephen Foskett has a great breakdown of the problem in his post “Storage Utilization Remains at 2001 Levels: Low!“, but I’d like to extend his breakdown to include database utilization at the bottom of his storage waterfall.  I often use the “waterfall” to explain utilization to our customers.  In this case knowledge truly is power, and as Chris Evans mentions in his post on “Beating the Credit Crunch”, there is free money to be had in reclaiming storage in your environment.
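Here is what the waterfall looks like as numbers, with the database layer added at the bottom. The capacities are invented purely to illustrate how the ratio shrinks at each layer:

```python
# The storage utilization "waterfall", extended one level down to the
# database. Each stakeholder quotes the ratio at their own layer, which
# is why a DBA, a sysadmin and a storage admin report such different
# numbers for the "same" environment.
waterfall = [
    ("raw capacity on the floor", 1000.0),  # TB
    ("usable after RAID/sparing",  780.0),
    ("allocated to hosts",         560.0),
    ("written by filesystems",     310.0),
    ("actually used by databases", 180.0),  # the layer added here
]

top = waterfall[0][1]
for label, tb in waterfall:
    print(f"{label:32s} {tb:7.1f} TB  ({tb / top:5.1%} of raw)")
```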

It’s not just about knowing which stale snapshots are sitting out in the SAN; knowing how many copies of the data exist is imperative.  One customer had a multi-terabyte database that was replicated to a second site, with two full exports on disk and replicated, a BCV at each location, and backups to tape at each site.  That’s 8 copies of the data on their most expensive disk.  Now I’m all for safety, but that’s belt, suspenders and a flying buttress holding up those trousers.  A full analysis of utilization needs to take these sorts of outdated/outmoded management practices into account for a full understanding of what is really on the floor.

Old paradigms regarding the amount of overhead at each layer of the utilization cake also need to be updated.  A 15-20% overhead allowance for the environment is a fine concept until that environment gets to be multi-petabyte; at 2 PB, 15% is 300 TB of storage sucking up your power and cooling.  Of course storage virtualization is supposed to solve problems like this, but proper capacity planning and a transparent method of moving data between arrays and/or systems, with realistic service levels in place, can address it just as effectively.
