There’s been a lot of discussion lately about clouds and the future of IT across the blogosphere: Chuck is always good for a post or two; IBM spoke up the other day; and there are even reports that “Hey, this is real!”. I can’t help but wonder if Cloud Computing is really just the marriage of flexible architecture, ubiquitous networks and IT Service Management. As has been noted on this blog I am highly infrastructure biased, but I think it is apparent that fast, readily available networks are changing IT: your phone, laptop, Kindle, &c. are now viable end devices for application and content delivery almost anywhere on the planet. Exciting times indeed!
If you scratch beneath the surface a bit the magic and mystery of the Cloud becomes a little more apparent: you have a high-performance, omnipresent network; a flexible delivery engine that is highly scalable and efficient; and a management framework that provides the appropriate Service Levels, security, compliance and communications the customer is seeking. To truly deliver a cloud service you first have to identify and define a service that can be readily doled out to customers clamoring for it. I can think of tons of services internal to an enterprise that would qualify for this designation, so I think the concept of a private cloud is a cogent one. Take for example File Sharing, or Email, or Market Data, or Order Processing.
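To make “identify and define a service” a bit more concrete, here is a minimal sketch of what a catalog entry for one of those private-cloud services might capture. The field names, example values, and the idea of modeling it as a small data structure are my own illustrative assumptions, not something prescribed above.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a service catalog entry for a private-cloud offering.
# All field names and example values are illustrative assumptions.
@dataclass
class ServiceCatalogEntry:
    name: str                    # e.g. "Enterprise File Sharing"
    unit_of_consumption: str     # what the customer is charged on
    availability_target: float   # service-level objective, in percent
    recovery_time_hours: float   # how quickly the service must come back
    security_controls: list[str] = field(default_factory=list)

file_sharing = ServiceCatalogEntry(
    name="Enterprise File Sharing",
    unit_of_consumption="GB per month",
    availability_target=99.9,
    recovery_time_hours=4.0,
    security_controls=["role-based access", "encryption at rest"],
)
print(file_sharing)
```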
So why now? The emergence of good allocation and resource management tools certainly makes the management of the service a lot easier; add adaptive authentication, identity management and role-based access, couple that with the virtualization capabilities and infrastructure components geared to hypervisors, and you have the recipe for easy-to-deploy private and public clouds. The market adoption of frameworks like ITIL and ISO 20000 and their focus on Service Level Management provides the appropriate mindset for the IT organization looking to become service oriented. Now ride all of that on a ubiquitous, converged, highly available fabric and you can provide these services to pretty much any client, via any platform, anywhere.
Suddenly Clouds aren’t so amorphous but are really the next logical progression of virtualized infrastructure, Service-Oriented Architecture, and IT Service Management.
A while back I got a call on a Friday night that is familiar to many consultants: “Can you be in City X on Monday morning?” The program manager on the other end of the phone remembered hearing that I had a degree in Product Management and was eager to get me in front of his customer, who was looking to transform his organization into one that managed infrastructure according to a Product Management Lifecycle (PML). Now I admittedly view the world through PML-tinted glasses, but this concept had really piqued my interest. The idea was a pretty simple one: convert his organization to be product-oriented and merge the PML with the IT Infrastructure Library (ITIL) framework and the Software Development Lifecycle (SDLC) that the organization was already spottily using. As a Unified Field Theory devotee I was hooked!
The customer, like most, was approaching the development, testing and management of their infrastructure through a number of silos: people thinking about the long-term strategy; another group concerned with the implementation of systems; a group that tested the integrated infrastructure; a group responsible for the daily management of the environment; and an organization dedicated to interfacing with the customer to understand their requirements (and, on occasion, their satisfaction). Strategy, architecture, engineering and operations were divided across the organization, with several silos within each knowledge area. No one was incented to work together, no one had a vision of the entire infrastructure as a “system”, and finger-pointing was the order of the day during any outage. Walking around the several floors the IT department was spread over, there was an air of discontent: people bolted for the door at 5 pm at the latest, were largely disengaged, and took pride in the walls they put up around their particular part of the organization. Worst of all, the business, their customer, was unhappy and questioning why they were spending so much on that black box called IT.
One of my biggest pet peeves over the years has been utilization or capacity reporting. I firmly believe that in order to figure out how to transform an environment into a more efficient one you have to first know what you’ve got. Over the years I’ve walked into customer after customer, or dealt with admins or peers when I was on the other side of the table, who couldn’t tell me how much storage they had on the floor, or how it was allocated, or what the utilization of their servers was. Part of the problem is that calculating utilization is one of those problems where perspective is reality: a DBA will have a much different idea of storage utilization than a sysadmin or a storage administrator. And depending on how these various stakeholders are incented to manage the environment you will see a great disparity in the numbers you get back. It may sound like the most “no duh” advice ever given, but defining utilization metrics for each part of the infrastructure is a necessary first step. The second step is publishing those definitions to anyone and everyone and incorporating them into your resource management tools.
Stephen Foskett has a great breakdown of the problem in his post on “Storage Utilization Remains at 2001 Levels: Low!”, but I’d like to expand on his breakdown to include database utilization at the bottom of his storage waterfall. I often use the “waterfall” to explain utilization to our customers. In this case knowledge truly is power, and as Chris Evans mentions in his post on “Beating the Credit Crunch” there is free money to be had in reclaiming storage in your environment.
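To make the waterfall idea concrete, here is a rough sketch of how utilization might be reported at each layer. The layer names and loss factors are illustrative assumptions, not figures from Stephen’s post, but they show why the DBA, the sysadmin and the storage administrator all quote different numbers.

```python
# Rough sketch of a storage utilization "waterfall": each stakeholder measures
# against a different denominator, so each reports a different number.
# All capacities and loss factors below are illustrative assumptions.

raw_tb = 1000.0                   # raw capacity on the floor
usable_tb = raw_tb * 0.75         # after RAID, spares and system overhead
allocated_tb = usable_tb * 0.80   # LUNs carved out and presented to hosts
written_tb = allocated_tb * 0.60  # what the filesystems/databases have written
db_used_tb = written_tb * 0.50    # what the DBA considers live data

layers = [
    ("storage admin: usable vs raw", usable_tb, raw_tb),
    ("storage admin: allocated vs usable", allocated_tb, usable_tb),
    ("sysadmin: written vs allocated", written_tb, allocated_tb),
    ("DBA: live data vs written", db_used_tb, written_tb),
    ("end to end: live data vs raw", db_used_tb, raw_tb),
]

for label, used, total in layers:
    print(f"{label}: {used / total:.0%}")
```

Run end to end, the compounding is what hurts: each layer looks reasonable on its own, yet live data against raw capacity lands well under 20%.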
It’s not just about knowing that stale snapshots are sitting out in the SAN; knowing how many copies of the data exist is just as imperative. One customer had a multi-terabyte database that was replicated to a second site, with two full exports on disk and replicated, a BCV at each location and backups to tape at each site. That’s 8 copies of the data on their most expensive disk. Now I’m all for safety, but that’s belt, suspenders and a flying buttress holding up those trousers. A full analysis of utilization needs to take these sorts of outdated/outmoded management practices into account for a full understanding of what is really on the floor.
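A quick back-of-the-envelope tally shows how fast those copies add up. The 2 TB database size below is an illustrative assumption, since all we know is that it was “multi-terabyte”.

```python
# Tally of the disk copies described above. The database size is an
# illustrative assumption ("multi-terabyte" in the original anecdote).
db_size_tb = 2.0

disk_copies = [
    "production database",
    "replica at the second site",
    "full export #1",
    "full export #2",
    "replicated export #1",
    "replicated export #2",
    "BCV at the primary site",
    "BCV at the secondary site",
]

footprint_tb = db_size_tb * len(disk_copies)
print(f"{len(disk_copies)} copies x {db_size_tb:.0f} TB = {footprint_tb:.0f} TB of Tier 1 disk")
```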
Old paradigms regarding the amount of overhead at each layer of the utilization cake also need to be updated: reserving 15%–20% of the environment as headroom is a fine rule of thumb, until that environment gets to be multi-petabyte; then you’re talking about hundreds of terabytes of storage sucking up your power and cooling. Of course storage virtualization is supposed to solve problems like this, but proper capacity planning and a transparent method of moving data between arrays and/or systems, with realistic service levels in place, can address it just as effectively.
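The arithmetic behind that claim is simple; the capacities below are illustrative assumptions, but they show how a percentage that is harmless in a small shop turns into hundreds of idle terabytes at petabyte scale.

```python
# Why a fixed-percentage headroom rule hurts at scale.
# Capacities and overhead percentages are illustrative assumptions.
for capacity_tb in (100, 1_000, 2_000):       # 100 TB, 1 PB, 2 PB
    for overhead in (0.15, 0.20):
        idle_tb = capacity_tb * overhead
        print(f"{capacity_tb:>5} TB at {overhead:.0%} overhead = {idle_tb:,.0f} TB held in reserve")
```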
It seems about 50% of my clients these days have outsourced, are thinking about outsourcing or are insourcing. Some of my customers are themselves outsourcers. An interesting facet of the model these days is the introduction of new services to meet customer needs while providing opportunity for the outsourcer. I’ve had the opportunity to meet with several outsourcers over the past few years and advise them on their service catalog, usually for storage. A common complaint has been that “we’re losing money on this deal”, which always manages to surprise me. If you’re losing money on so many deals you may want to get out of the business, but I digress. Usually it’s not just the outsourcer that is unhappy; the customers generally are too: they think the prices are too high, the service is lousy, and that they aren’t really getting what they need. You can rarely go back to the table and renegotiate a higher price for Tier 1 service, so how do you create a win-win situation?
I’d like to present one solution that has been used to good effect in the past: the introduction of a new, and necessary, tier of storage to the service catalog. I think this applies not just to outsourcers but to anyone who runs their environment in a service provider mode. A lot of customers I interact with complain that the performance of their Tier 1 storage is suboptimal and that their backups never finish on time, or aren’t validated, etc. While hardly a novel solution, appropriate archiving is the answer to these sorts of problems, and if you view it from a TCO perspective you can gather a lot of financial evidence for the executives on why it should be implemented.
I encourage my customers to think of their production data in terms of two classes, which in my opinion is the highest level of data classification: Operational Data and Reference Data. Operational Data is what the enterprise uses on a regular basis to run the business; the key is understanding where the cut-off is for “regular basis”. Reference Data is what is helpful to have around and might be used once in a while, for a quarter close or year-end analysis, but is ignored on a daily basis. Reference Data takes up valuable Tier 1 storage, backup bandwidth and storage, and as a result can lead to blown SLAs. The appropriate archiving of this data provides an opportunity to right-size the environment, delay the purchase of additional Tier 1 arrays, streamline the backup flow and improve Service Levels by administering data according to its value to the business. The creation of an Archive Tier (or Tiers) provides an opportunity to deliver a necessary service to the customer while also enabling the provider to structure it at an improved margin. Customers will want to archive Reference Data when they can link it to improved Tier 1 and backup performance; that drives archive utilization, and with it the improved margin, while also improving the margin on the other services thanks to fewer SLA misses and lower administration costs.
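As a minimal sketch of how that top-level classification might be put into practice, assuming last access is the yardstick for “regular basis”: the 90-day cut-off and the sample data sets below are illustrative assumptions, not a prescription.

```python
from datetime import datetime, timedelta

# Minimal sketch: classify data sets as Operational or Reference by last
# access. The 90-day cut-off and the sample data are illustrative assumptions.
OPERATIONAL_WINDOW = timedelta(days=90)

def classify(last_accessed: datetime, now: datetime) -> str:
    """Operational if touched within the window, otherwise Reference."""
    return "Operational" if now - last_accessed <= OPERATIONAL_WINDOW else "Reference"

now = datetime.now()
datasets = {
    "current_orders_db": now - timedelta(days=3),
    "prior_year_close": now - timedelta(days=200),
    "marketing_archive": now - timedelta(days=900),
}

for name, last_accessed in datasets.items():
    print(f"{name}: {classify(last_accessed, now)}")
```

Anything that lands in the Reference bucket becomes a candidate for the Archive Tier, which is where the margin and SLA improvements come from.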
I’ve been lucky enough to be in our industry for the last 17 or so years and I have seen all sorts of changes, as we all have. If I think back to my days as a research assistant at a university using the engineering lab SPARCs to create lab reports and pass emails back and forth with other researchers, I’d never have envisioned helping to design and run a system that would send out more than six million customized emails per hour less than ten years later.
In the early 90s IT departments, if you could call them that for most organizations, were necessary evils, a band of misfits who toted various cables and dongles and floppies around to who knew what ends. Today IT is at the heart of several large industries, the difference between successful, profitable businesses and those on the bubble. We’ve seen the industry evolve from sysadmins being a bunch of doctoral and master’s students to kids graduating from high school knowing how to program in a number of languages and holding a CCNA certification. When I try to imagine what the next 17 years will bring I’m mystified, to be honest; the change has been rapid and amazing.
There are a lot of challenges facing us as we move forward as a profession. The interconnectedness of today’s market means that everyone wants access to everything, NOW. Cell phones are becoming viable compute platforms, chip makers are fitting 32 cores on a chip, and we have a pretty ubiquitous, fast fabric tying most of it together. At the same time there is more regulation now than in pretty much the sum of recorded history up to about five years ago. My colleague, Chuck Hollis, talks a lot about the need for a CFO of Information, and I think he’s on the right track. But that new position requires tools for reporting and analysis that cut across the many silos that make up IT and the heterogeneous infrastructures supporting them.
No IT framework like ITIL or COBIT or MOF will act as a silver bullet, no off-the-shelf Resource Management system will give you all the insight you need, and no new analyst acronym like GRC will encapsulate everything you need to worry about. A change in the way we design, implement and manage our infrastructure is required to ensure that IT continues to be a source of business value and not just a cost center, or, worse, the place where Information goes to become confused, lost and irrelevant.