Solved Problems

I took time out a few weeks back to attend Edward Tufte’s One-Day Course on “Presenting Data and Information” and learned several new things, and had others reinforced, by the methods and examples that Edward used. One of my favorite things Edward brought up was encapsulated in this quote: “These are largely solved problems (displaying information); don’t get an original, get it right”. This of course immediately brought to mind the dreaded “Not Invented Here” syndrome and led me to think about how often I’ve encountered it in the IT world. On the other hand, innovation is terribly important and we take it very seriously at EMC - so how do you find the right balance of “solved problems” and innovation?


Join Us!

I’ve mentioned in the past just how much I enjoy working at EMC and since posting that I’ve been privileged to be able to continue hiring outstanding consultants and architects for EMC Consulting.  In addition to the satisfaction of having happy customers, being able to continue to grow the ranks of our talented organization is a real point of pride.  The Cloud and Virtual Data Center practice within EMC Consulting is currently hiring in North America and we are looking for flexible, creative subject matter experts who can help our customers achieve their aspirations while growing their careers within EMC.  I truly believe that EMC Consulting is the place for you if you are looking to help large companies plan and implement their next iteration of IT.  Please check out the positions listed below, or feel free to drop me a line at edward dot newman at emc dot com.

Southeast:
•    Sr. Practice Consultant – 61302   (4 open positions)
•    Practice Team Lead – 61306   (1 open position)
•    Practice Manager – 61301   (1 open position)
West:
•    Sr. Practice Consultant – 61314  (1 open position)
•    Practice Manager – 61315  (1 open position)
Central:
•    Practice Team Lead – 61316  (1 open position)
Northeast:
•    Sr. Practice Consultant – 60655 (3 open positions)
•    Practice Manager – 50999  (1 open position)

Applying to a position with EMC:
1.    Click on the following link – http://www.emc.com/about/jobs/index.htm
2.    Click the “Apply Now” button
3.    Enter the five-digit requisition number into the “Requisition ID” box
4.    Hit Search
5.    Check the box and submit to the position
6.    Candidates will need to register if they are not already in the system


What a View!

It’s been a great VMworld so far, and today’s announcements only add to all the buzz amongst the attendees. I’ve always seen VDI and application virtualization as a way to extend the security, compliance, and availability of the data center out to the end users, and VMware’s announcement of VMware View 4.5, with enhancements to security, “check in/check out” and an improved user experience, helps further that vision. Security and compliance have long been key drivers for the adoption of virtualized desktops, and VMware delivers with the ability to combine RSA enVision, SecurID and DLP, with guidance from an updated RSA SecurBook, into your desktop solution. I think that a ubiquitous and consistent end user experience is vital to the realization of Private Clouds, regardless of whether the user is on campus or not. It’s not just about a product, of course; although EMC and VMware together provide a very robust stack to build upon, you’ve got to approach your virtual desktop infrastructure as a transformation of the desktop, taking the design of the desktop, the virtual infrastructure acting as the delivery mechanism, deployment and migrations, application virtualization, security and systems management all into consideration for your solution.

Desktops have been a growing nightmare for IT organizations: so many different hardware profiles, OS builds, application portfolios, user communities, deployment methods, sprawl, process confusion and dubious security, not to mention spotty backup and recovery capability. Security has long focused on the end-points as a way to control risk; I’m willing to bet we’ve got more laptops and desktops than we do routers. And we hire very smart people and give them tools that may or may not meet their needs, a recipe for disaster really. Desktop and application virtualization affords us the opportunity to do a global reset on a lot of that, pulling back data, applications, and profiles to the virtualized data center or Private Cloud and giving your users a secure and compliant set of tools to do their work. You’ve got to provide an environment that’s not only trusted, but also predictable; IT needs to understand the performance, scalability and interoperability of its virtual environment and application portfolio.

Layer into all of this the fact that many organizations are looking to move to Windows 7 and want that process to be easier than the Vista and XP iterations were. A holistic approach to desktop virtualization can leverage VMware View 4.5 to provide an easier upgrade path for the OS and the opportunity to do security right, building it in from the design of the solution rather than bolting it on after deployment. The number of remote and mobile users is growing every year, and security, compliance, systems management and performance concerns are growing along with them. VMware View 4.5 and its tight integration with RSA’s security products, running on top of EMC Proven Solutions for intelligent information infrastructure, go a long way in providing the foundation for an engaging, secure, and compliant end user experience, one of the key promises of cloud computing.

Taking a targeted approach, implementing a virtual desktop infrastructure for the most sensitive or highest-change environments, like app/dev, is a good way to make use of the enhanced capabilities of VMware View 4.5 and the integration with RSA. Providing access to a dev/test environment via VDI is a great way to amp up security and compliance if you’ve got a lot of development initiatives always underway or you’re working with offshore resources. You can extend the security and compliance of a cloud service by making it accessible only via a VDI client: all data now lives in the cloud, secure and available. Developers, or consultants, working on multiple projects? Multiple VDI sessions rather than multiple laptops or desktops. Gain successes and efficiencies, and continue to expand the deployment of your VDI solution on their strength and the ever-important word of mouth. At EMC we’ve talked about the concept of Information Lifecycle Management for a long time: get the right information, to the right people, at the right time, with an optimized cost structure. Well, RSA takes that concept and extends it with their integration with VMware View 4.5: allow the right people access to the appropriate data via a trusted infrastructure.


Cloud Differentiators

I’ve spent a lot of time talking with my clients and partners lately about what makes a Private Cloud a cloud. There are many schools of thought on this, no end to the opinions really, but I think it comes down to a few differentiators between “just” virtualized infrastructure and a cloud. For me those differentiators are less about technology and more about how you manage and provision things. A virtual infrastructure is still managed and provisioned on a resource or asset basis, where a cloud is managed and provisioned as a service or by policy. A service is some aggregation of resources that delivers something meaningful to your customer. An integrated approach to Governance, Risk Management and Compliance (GRC) is required to accomplish management as a service or by policy. It’s not enough to have a dashboard that shows you the status of your environment; you need a console that reports and allows you to interact.
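
To make that distinction concrete, here is a minimal sketch, in Python, of what “provision by service and policy” might look like. The catalog entries, policy fields and function names are hypothetical illustrations, not any real EMC or VMware API:

```python
# A minimal sketch of "provision by service, not by asset".
# All names here are hypothetical illustrations, not a real product API.
from dataclasses import dataclass

@dataclass
class ServicePolicy:
    tier: str            # e.g. "gold"
    rpo_minutes: int     # recovery point objective
    encrypted: bool
    max_vms: int

@dataclass
class ServiceOffering:
    name: str            # catalog entry, e.g. "Order Processing"
    policy: ServicePolicy

def provision(offering: ServiceOffering, vm_count: int) -> list[str]:
    """Expand a catalog request into concrete resources.

    The caller asks for a *service*; the platform decides which
    hosts, datastores, and networks satisfy the policy.
    """
    if vm_count > offering.policy.max_vms:
        raise ValueError("request exceeds policy limit")
    # Placement, storage tiering, and replication are derived from
    # the policy rather than chosen by hand per asset.
    return [f"{offering.name}-vm{i}" for i in range(vm_count)]

catalog = ServiceOffering("Order Processing",
                          ServicePolicy("gold", 15, True, max_vms=20))
print(provision(catalog, 4))
```

The point is that the requester names a catalog entry; placement, tiering and replication fall out of the policy instead of being chosen asset by asset.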

The virtual infrastructure is a key enabler of the cloud, but it’s not the cloud. At EMC we’ve developed a product and services portfolio that enables the Private Cloud vision: any device, anywhere, accessing your information and your applications regardless of the infrastructure they happen to live on. Our Virtual Computing Environment coalition extends that enablement by combining unified internetworking and compute with the cloud operating system. Private Cloud is more expansive than VCE and its first technology solution, the Vblock. The real differentiator between the virtual infrastructure and the Private Cloud is that any device, anywhere, can access your applications and information with your governance controlling it, regardless of the underlying infrastructure, be it internal assets or those provided through the public clouds.

It’s the integration of GRC into the environment that delivers on the Private Cloud promise: all the agility, flexibility, scalability, multi-tenancy and automation associated with cloud computing, tempered with the security, availability, resiliency, and control of the data center. This means that getting to a Private Cloud has to be about a lot more than deploying new technologies; it’s a wholesale transformation of IT and a new way of interfacing with the Business and your customers. A lot of what has been promised and demanded by frameworks like ITIL, SOA, MOF, COBIT, etc. can now be delivered through the infrastructure and the toolsets supporting it. It’s possible to build the Service Catalog, and things like automatically approved changes, into the resource management infrastructure and begin to provide real self-service of IT where appropriate. The appeal of many existing public cloud solutions is the ease with which users can consume them: a credit card, a few clicks, and bam, you have storage, or a server, or a CRM system. An integrated approach to GRC can provide this same user experience, plus the enterprise necessities like Service Levels, Business Continuity, Data Protection and the like, for enterprise IT. This is the stuff that gets traction with the people I talk with about cloud, and to me is the real promise of Private Cloud, a promise that is actually deliverable today.
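
Here is an equally minimal sketch of what “automatically approved changes” behind a self-service portal could look like; the policy table and limits are invented for illustration:

```python
# A sketch of self-service with pre-approved changes, assuming a
# hypothetical policy table; not a real product workflow.
PREAPPROVED = {
    ("expand_storage", "silver"): 500,   # GB auto-approved per request
    ("add_vm", "silver"): 2,             # VMs auto-approved per request
}

def submit_request(action: str, tier: str, amount: int) -> str:
    """Route a self-service request: auto-approve within policy,
    otherwise fall back to the normal change-approval queue."""
    limit = PREAPPROVED.get((action, tier))
    if limit is not None and amount <= limit:
        return f"auto-approved: {action} x{amount} ({tier})"
    return f"queued for change board: {action} x{amount} ({tier})"

print(submit_request("add_vm", "silver", 1))              # credit-card-style UX
print(submit_request("expand_storage", "silver", 2000))   # needs human review
```

Within policy, the user gets the public-cloud experience; outside it, governance still gets its say.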


Accelerating the Journey to Private Cloud

I argue, frequently and with just about anyone who will engage, that Cloud Computing is the model and there are several different types of instantiations. This certainly isn’t a new or controversial idea, and not a sea change in and of itself. The same could be said for Web 2.0, SOA, N-Tier, Client-Server and back to the Platonic Ideal. The blogosphere and twitterdom are filled with talk of IaaS, PaaS, SaaS &c. as various forms of Cloud Computing, and those are interesting forms but not necessarily new ideas or modes of computing. EMC has laid out the vision for a Private Cloud; it’s rather well defined, and we have gathered together a number of partners to help us enable our customers in the creation and operation of private clouds. I’m certainly a proponent of Private Cloud, believe in the model and think that it is innovative and a new mode of computing, but I come here not to praise private cloud, but to enable it.

I’ve spent the last few months talking with customers all over the world about Cloud Computing in general and what EMC means by Private Cloud in particular. I’ve been fortunate enough to get a lot of feedback, from the CXO level down to the managers and administrators who will be tasked with running these clouds. A few common themes have emerged in these conversations. Rarely does the question “Why Cloud Computing?” come up; it’s almost as if Cloud is a foregone conclusion, hyped into the mainstream. I am almost consistently asked by people at every level, “So now what?”. EMC and our partners, and the market in general, have done a good job of laying out the groundwork and vision for Cloud Computing and its benefits, and a hardware and software portfolio to enable it. As with most paradigm shifts, the question becomes how you actually execute against the vision, with those products, to make it reality.

It seems to me that a lot of IT organizations are positioning themselves for Private Cloud, knowingly or unknowingly.  The virtualization of the data center, not just of servers, but real enterprise virtualization is a key milestone on the path to Private Cloud.  Not only does it provide the framework to build a Private Cloud on, it brings real benefits to the organization in terms of reduced Capital Expenses, Operating Expenses, time to provision, mean time to repair and improved customer satisfaction for internal and external customers.  These benefits are core to the allure of Private Cloud and IT is keen to realize them as quickly as possible.

I’ve often seen, and industry analysts seem to report weekly, that virtualization efforts hit a wall when around 20-30% of the workloads in the data center have been virtualized. There are many reasons for this, ranging from the limited applicability of earlier virtualization solutions to enterprise workloads, to insufficient buy-in from application owners and lines of business, which leads to a lack of approved downtime windows and applications never being approved for P2V. We’ve helped a number of customers push through this wall and drive towards their goals of 80-90% of workloads being virtualized through the development of enterprise virtualization programs and acceleration services: documenting the activities and processes surrounding the virtualization of servers and applications, training, and comprehensive communication and marketing plans to win the buy-in of stakeholders and application owners.

It’s not just driving enterprise virtualization that will help IT realize the benefits of Private Cloud, however. A lot of outsourcing companies operated for years on the concept of “your mess for less”. For this to be a real transformation it can’t just be the same old problems running on a shiny new architecture. A key component of the journey to Private Cloud has to be the rationalization of the application portfolio. We are constantly adding new applications, features and functionality into the environment, and for every “server hugger” out there I’d argue there’s an “application hugger”; we all have our babies, and we’re certainly not going to let them be torn from our arms.

A systematic review of the existing application portfolio, identifying opportunities for retirement, feature/functionality consolidation, replatforming, and virtualization of workloads running on proprietary UNIX systems, provides the roadmap for how many of the promised savings can be realized. If you want to embrace x86 as the chosen platform you have to figure out how to get as much of your application portfolio as possible onto it. Coupling this portfolio rationalization with a comprehensive business case for Private Cloud provides the framework for driving line of business and application team compliance, and for a realistic timeline of how quickly you can actually realize Private Cloud.
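
To show the shape of such a review, here is a toy disposition rubric in Python; the fields, thresholds and categories are illustrative assumptions on my part, not a formal methodology:

```python
# A toy rubric for application disposition; fields and thresholds are
# illustrative assumptions, not a formal assessment framework.
def disposition(app: dict) -> str:
    if app["active_users"] == 0:
        return "retire"
    if app["overlaps_with"]:                 # duplicate functionality
        return "consolidate"
    if app["platform"] == "proprietary-unix" and app["x86_port_exists"]:
        return "replatform to x86"
    if app["virtualization_blockers"] == 0:
        return "virtualize (P2V)"
    return "remediate, then revisit"

portfolio = [
    {"name": "legacy-reports", "active_users": 0, "overlaps_with": [],
     "platform": "x86", "x86_port_exists": False, "virtualization_blockers": 0},
    {"name": "order-entry", "active_users": 1200, "overlaps_with": [],
     "platform": "proprietary-unix", "x86_port_exists": True,
     "virtualization_blockers": 0},
]
for app in portfolio:
    print(app["name"], "->", disposition(app))
```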

So that accounts for the infrastructure and the applications; now for the trifecta, governance! A new model of computing requires a new model of governance and the associated tools and processes. Thousands of virtual machines crammed into a small number of cabinets, dynamically allocating and deallocating resources, is a daunting environment if your key governance tool is Microsoft Excel. The identification of appropriate services to provide, service levels to achieve, and a chargeback model to allocate costs is required, absolutely required, to have any chance of successfully building and operating a Private Cloud. This requires transparency into what you have, what you’re using, where it is, who owns it, what it requires, how it is to be measured and monitored, backed up, replicated, encrypted, allowed to grow or shrink, &c. Sounds scary, I’m sure.

The service catalog, an integrated management tool framework and automated processes allow you to monitor, maintain, provision and recover the costs of such an environment. Your administrators, engineers and operations teams need to be trained on the technologies, service levels and communications plan, and have their roles and responsibilities well documented, to empower them in this kind of model. New tools and proactive methods for communicating with your clients have to be developed and integrated to ensure they understand what services you are providing them, how they are being charged for them and what service levels you guarantee. I personally think that self-service plays a key role in the development of a Private Cloud, or most cloud models for that matter, and integration of Change, Release and Capacity Management into a self-service portal can make the difference in your clients’ adoption of this new paradigm.
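
As a sketch of the chargeback piece, assuming a hypothetical rate card and a metering feed, cost recovery can be as simple as rating usage against the catalog:

```python
# A minimal chargeback sketch: rate the catalog, meter usage, bill the
# line of business. Rates and fields are hypothetical.
RATES = {  # monthly unit rates by service tier
    ("vm", "gold"): 250.0,        # per VM
    ("storage_gb", "gold"): 0.90,
    ("vm", "silver"): 120.0,
    ("storage_gb", "silver"): 0.45,
}

def monthly_charge(usage: list[tuple[str, str, float]]) -> float:
    """usage: (resource, tier, quantity) tuples from the metering system."""
    return sum(RATES[(res, tier)] * qty for res, tier, qty in usage)

finance_dept = [("vm", "gold", 12), ("storage_gb", "gold", 4000)]
print(f"Finance owes ${monthly_charge(finance_dept):,.2f} this month")
```

The hard part, of course, isn’t the arithmetic; it’s the transparency and metering discipline that feed it.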

We’ve packaged these services up under the umbrella of Accelerating the Journey to Private Cloud, and have integrated our Technology Implementation Services and several new EMC Proven Solutions into a holistic stack to enable our customers. It’s not a light switch or a silver bullet; it is still a journey. But we’ve worked hard to take the lessons learned from many years of data center consolidations and migrations, process automation, custom reporting and dashboards, building innovative solutions and architectures, product training and managing transformative programs, and integrate them into an effective services and solutions stack that accelerates the journey to Private Cloud and delivers real benefits today.


Clouds on the horizon

There’s been a lot of discussion lately about clouds and the future of IT across the blogosphere: Chuck is always good for a post or two; IBM spoke up the other day; and there are even reports that “Hey, this is real!”. I can’t help but wonder if Cloud Computing is really just the marriage of flexible architecture, ubiquitous networks and IT Service Management. As has been noted on this blog I am highly infrastructure biased, but I think it is apparent that fast, readily available networks are changing IT; your phone, laptop, Kindle, &c. are now viable end devices for application and content delivery almost anywhere on the planet. Exciting times indeed!

If you scratch beneath the surface a bit the magic and mystery of the Cloud becomes a little more apparent: you have a high-performance, omnipresent network; a flexible delivery engine that is highly scalable and efficient; and a management framework that provides the appropriate Service Levels, security, compliance and communications the customer is seeking.  To truly deliver a cloud service you first have to identify and define a service that can be readily doled out to customers clamoring for it.  I can think of tons of services internal to an enterprise that would qualify for this designation, so I think the concept of a private cloud is a cogent one.  Take for example File Sharing, or Email, or Market Data, or Order Processing.

So why now? The emergence of good allocation and resource management tools certainly makes the management of the service a lot easier. Add adaptive authentication, identity management and role-based access, couple that with the virtualization capabilities and infrastructure components geared to hypervisors, and you have the recipe for easy-to-deploy private and public clouds. The market adoption of frameworks like ITIL and ISO 20000, with their focus on Service Level Management, provides the appropriate mindset for the IT organization looking to become service oriented. Now ride all of that on a ubiquitous, converged, highly available fabric and you can provide these services to pretty much any client, via any platform, anywhere.

Suddenly Clouds aren’t so amorphous, but are really the next logical progression of virtualized infrastructure, Service-Oriented Architecture, and IT Service Management.


Product Management

A while back I got a call on a Friday night that is familiar to many consultants: “Can you be in City X on Monday morning?” The program manager on the other end of the phone remembered hearing that I had a degree in Product Management and was eager to get me in front of his customer, who was looking to transform his organization into one that managed infrastructure according to a Product Management Lifecycle (PML). Now I admittedly view the world through PML-tinted glasses, but this concept had really piqued my interest. The idea was a pretty simple one: convert his organization to be product-oriented and merge the PML with the IT Infrastructure Library (ITIL) framework and the Software Development Lifecycle (SDLC) that the organization was already spottily using. As a Unified Field Theory devotee I was hooked!

The customer, like most, was approaching the development, testing and management of their infrastructure through a number of silos: people thinking about the long-term strategy; another group concerned with the implementation of systems; a group that tested the integrated infrastructure; a group responsible for the daily management of the environment; and an organization dedicated to interfacing with the customer to understand their requirements (and on occasion their satisfaction). Strategy, architecture, engineering and operations were divided across the organization, with several silos within each knowledge area. No one was incented to work together, no one had a vision of the entire infrastructure as a “system”, and finger pointing was the order of the day during any outage. Walking around the several floors the IT department was spread over, there was an air of discontent: people bolted for the door at 5pm at the latest, were largely disengaged, and took pride in the walls they put up around their particular part of the organization. Worst of all, the business, their customer, was unhappy and questioning why they were spending so much on that black box called IT.



Utilization

One of my biggest pet peeves over the years has been utilization, or capacity, reporting. I firmly believe that in order to figure out how to transform an environment into a more efficient one you have to first know what you’ve got. Over the years I’ve walked into customer after customer, or dealt with admins or peers when I was on the other side of the table, who couldn’t tell me how much storage they had on the floor, or how it was allocated, or what the utilization of their servers was. Part of the problem is that calculating utilization is one of those problems where perspective is reality; a DBA will have a much different idea of storage utilization than a sysadmin or a storage administrator. And depending on how these various stakeholders are incented to manage the environment, you will see a great disparity in the numbers you get back. It may sound like the most “no duh” advice ever given, but the definition of utilization metrics for each part of the infrastructure is a necessary first step. The second step is publishing those definitions to anyone and everyone and incorporating them into your resource management tools.
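
To illustrate why perspective is reality, here is a small Python sketch, with made-up numbers, showing how the same array yields four different “utilization” figures depending on who is measuring:

```python
# The same storage, four "realities": the numbers are invented to show
# why each layer needs its own published utilization definition.
raw_tb = 100.0          # installed capacity
configured_tb = 85.0    # after RAID and spares
allocated_tb = 60.0     # LUNs presented to hosts
fs_used_tb = 35.0       # filesystem blocks in use
db_used_tb = 20.0       # data actually stored in datafiles

print(f"storage admin: {allocated_tb / configured_tb:.0%} allocated")   # ~71%
print(f"sysadmin:      {fs_used_tb / allocated_tb:.0%} of LUNs used")   # ~58%
print(f"DBA:           {db_used_tb / fs_used_tb:.0%} of files filled")  # ~57%
print(f"end to end:    {db_used_tb / raw_tb:.0%} of raw spinning disk") # 20%
```

Publish which of these definitions applies at each layer and the disparity stops being an argument.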

Stephen Foskett has a great breakdown of the problem in his post “Storage Utilization Remains at 2001 Levels: Low!“, but I’d like to expand on his breakdown to include database utilization at the bottom of his storage waterfall. I often use the “waterfall” to explain utilization to our customers. In this case knowledge truly is power, and as Chris Evans mentions in his post on “Beating the Credit Crunch”, there is free money to be had in reclaiming storage in your environment.

It’s not just knowing about stale snapshots sitting out in the SAN; knowing how many copies of the data exist is imperative. One customer had a multi-terabyte database that was replicated to a second site, with two full exports on disk and replicated, a BCV at each location and backups to tape at each site. That’s 8 copies of the data on their most expensive disk. Now I’m all for safety, but that’s belt, suspenders and a flying buttress holding up those trousers. A full analysis of utilization needs to take these sorts of outdated/outmoded management practices into account for a full understanding of what is really on the floor.
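
The arithmetic behind that example is worth spelling out; a quick sketch, assuming a hypothetical 4 TB database:

```python
# Tallying the disk copies from the example above (tape excluded):
db_tb = 4.0  # hypothetical size of the "multi-terabyte" database
copies = {
    "primary": 1, "remote replica": 1,
    "exports on disk": 2, "exports replicated": 2,
    "BCV per site": 2,
}
total = sum(copies.values())
print(f"{total} disk copies -> {total * db_tb:.0f} TB of tier-1 disk "
      f"for {db_tb:.0f} TB of data ({db_tb / (total * db_tb):.1%} efficient)")
```

Eight copies of a 4 TB database is 32 TB of your most expensive disk holding 4 TB of information.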

Old paradigms regarding the amount of overhead at each layer of the utilization cake need to be updated. The concept of 15%-20% overhead for the environment is a great one, until that environment gets to be multi-petabyte; then you’re talking about hundreds of terabytes of storage sucking up your power and cooling. Of course storage virtualization is supposed to solve problems like this, but proper capacity planning and a transparent method of moving data between arrays and/or systems, with realistic service levels in place, can address it just as effectively.
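
The scale problem is plain arithmetic; the percentages and environment sizes below are illustrative:

```python
# How a fixed percentage of overhead grows with the environment:
for env_pb in (0.1, 1, 3):
    overhead_tb = env_pb * 1000 * 0.20   # a 20% headroom policy
    print(f"{env_pb:>4} PB environment -> {overhead_tb:,.0f} TB of idle overhead")
```

At 100 TB a 20% buffer is a rounding error; at 3 PB it is 600 TB of spinning disk doing nothing but drawing power.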


The changing nature of Information Technology

I’ve been lucky enough to be in our industry for the last 17 or so years and I have seen all sorts of changes, as we all have. If I think back to my days as a research assistant at a university using the engineering lab Sparcs to create lab reports and pass emails back and forth with other researchers, I’d never have envisioned helping to design and run a system that would send out more than six million customized emails per hour less than ten years later.

In the early 90s IT departments, if you could call them that for most organizations, were necessary evils, a band of misfits who toted various cables and dongles and floppies around to who knew what ends. Today IT is at the heart of several large industries, the difference between successful, profitable businesses and those on the bubble. We’ve seen the industry evolve from sysadmins being a bunch of doctoral and master’s students to kids graduating from high school knowing how to program in a number of languages, CCNA certification in hand. When I try to imagine what the next 17 years will bring I’m mystified, to be honest; the change has been rapid and amazing.

There are a lot of challenges facing us as we move forward as a profession. The interconnectedness of today’s market means that everyone wants access to everything, NOW. Cell phones are becoming viable compute platforms, manufacturers are fitting 32 cores on a chip, and we have a pretty ubiquitous, fast fabric tying most of it together. At the same time there is more regulation now than in pretty much the sum of recorded history up to about five years ago. My colleague, Chuck Hollis, talks a lot about the need for a CFO of Information, and I think he’s on the right track. But that new position requires tools for reporting and analysis that cut across the many silos that make up IT and the heterogeneous infrastructures supporting them.

No IT framework like ITIL or COBIT or MOF will act as a silver bullet, no off-the-shelf Resource Management system will give you all the insight you need, and no new analyst acronym like GRC will encapsulate everything you need to worry about. A change in the way we design, implement and manage our infrastructure is required to ensure that IT continues to be a source of business value and not just a cost center, or worse, the place where Information goes to become confused, lost and irrelevant.
