Getting started with VDI

Virtual Desktop Infrastructure (VDI) is on the front burner again. Though the technology has been available since server virtualization became popular in the data center, enterprise adoption has recently been on the rise. The past couple of years have been an exciting time for VDI because of the many feature advances from vendors, an improved and simplified licensing cost model and the emergence of mainstream, affordable hyper-converged technology. Planning the right VDI strategy can be a time-consuming and challenging undertaking for any level of IT resource. Selecting the right VDI implementation practices will reduce time, control costs and, most importantly, deliver the greatest value.

Have you ever heard the expression, “A happy customer will tell 1 person, an unhappy customer will tell 10”? In CMI’s experience, understanding users and their work habits contributes more to a successful project than the technology you are implementing. You can have the greatest assembly of hardware and software serving up virtual desktops to 1,000 users, but it only takes a handful of users with a bad VDI experience to label the project a failure. Believe us, this type of feedback makes it all the way to the CEO. Our advice? Spend collaborative, quality time with key users in various departments to understand how they use their applications and workstations. This exercise goes a long way toward identifying potential issues and challenges, as well as setting expectations and gaining alignment for users transitioning from a physical desktop to a virtual one.

Understanding user habits trickles down into other areas that are equally important to the success of a VDI rollout, including cost control. Most of what you read on the internet about VDI mentions the high cost of implementing the platform. One way to manage and predict up-front cost is to invest time creating a matrix of worker roles and their resource requirements (processor, memory and storage) based on workers’ productivity needs in your organization. Keep the list of worker roles small and manageable, at most four or five worker types. Remember, a major benefit of VDI is standardization; having too many user profiles erodes that benefit. This information contributes to cost management because the hardware and software are sized accurately for the project instead of estimated.

Worker role examples:

  • Front desk / temporary staff users = 1 vCPU, 2 GB RAM, 20 GB storage
  • Accounting users = 2 vCPU, 2 GB RAM, 30 GB storage
  • Power users = 2 vCPU, 3 GB RAM, 40 GB storage
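
As a rough illustration of how a matrix like this feeds sizing, here is a minimal Python sketch that tallies the aggregate resources a roster of users would request. The per-role specs mirror the examples above; the headcounts are hypothetical placeholders, not figures from any real project.

```python
# Rough VDI capacity sizing from a worker-role matrix.
# Per-role specs mirror the worker role examples above; the headcounts
# below are hypothetical placeholders -- substitute your own numbers.

ROLE_SPECS = {
    "front_desk_temp": {"vcpu": 1, "ram_gb": 2, "storage_gb": 20},
    "accounting":      {"vcpu": 2, "ram_gb": 2, "storage_gb": 30},
    "power_user":      {"vcpu": 2, "ram_gb": 3, "storage_gb": 40},
}

HEADCOUNT = {"front_desk_temp": 150, "accounting": 60, "power_user": 40}  # hypothetical

def size_cluster(role_specs, headcount):
    """Sum the raw vCPU, RAM and storage the virtual desktops would request."""
    totals = {"vcpu": 0, "ram_gb": 0, "storage_gb": 0}
    for role, users in headcount.items():
        for resource, amount in role_specs[role].items():
            totals[resource] += amount * users
    return totals

if __name__ == "__main__":
    t = size_cluster(ROLE_SPECS, HEADCOUNT)
    print(f"Requested: {t['vcpu']} vCPU, {t['ram_gb']} GB RAM, {t['storage_gb']} GB storage")
```

Keep in mind these are raw totals; the number of hosts you actually buy also depends on your CPU overcommit ratio, memory reservation policy and storage efficiency features, which is exactly where accurate role data pays off.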

A final consideration is the expertise that your company chooses to assist in planning and implementing your VDI solution. With CMI’s Adaptable Data Center framework, our VDI experts are in step with the latest practices and technologies available and can help you strategically plan and choose the right VDI features for your business.

Let the games begin. One of this year’s new buzz phrases is Software Defined Data Center (SDDC). With every buzz phrase there is some reality, some smoke, some mirrors, and lots of confusion. Sorting everything out will come with time and further success and failure within the data center – oh joy.

This blog considers four fundamentals to keep in mind when you find yourself scratching your head about SDDC.

Virtualization extends to all of IT

SDDC starts by abstracting, pooling, automating and monitoring IT infrastructure resources and offering them as services. One result of this abstraction is that expensive, premium hardware can be replaced with commodity infrastructure. To be successful you must become experienced with compute hypervisors; networking concepts such as software-defined networking, network functions virtualization and network encapsulation protocols; and storage topics such as storage hypervisors, disk architectures, replication and latency. Virtualizing compute, storage and networking is the ante into the game.

IT management gives way to automation

Give up the idea of “hand crafting” everything. Automation and orchestration are among the keys that have allowed Amazon, Google, Facebook and others to grow quickly and efficiently. This means that if you are going to do something more than once, figure out a way to automate it. To make this happen you have to take the time to move to a policy-based governance model and enhanced IT Service Management practices. This is tough sledding, but once embraced it will pay significant dividends in agility, efficiency and alignment with business needs.

For the techies reading this, explore different aspects of automation with some familiar and newer tools.

  • Scripting: PowerShell, Perl, Python, shell
  • Configuration management: Puppet, Chef, Salt, Ansible
  • APIs: JSON, XML, HTTP REST
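
To make the “automate anything you do more than once” idea concrete, here is a minimal sketch that drives a provisioning REST API with JSON over HTTP from Python. The endpoint, token and payload fields are hypothetical assumptions for illustration, not any particular vendor’s interface.

```python
# Minimal sketch: provisioning VMs through a hypothetical REST API.
# The base URL, token and payload fields are assumptions for illustration,
# not any specific vendor's interface.

import json
import urllib.request

API_BASE = "https://provisioning.example.com/api/v1"   # hypothetical endpoint
API_TOKEN = "replace-with-a-real-token"                 # hypothetical credential

def create_vm(name, vcpu, ram_gb):
    """POST a JSON request to create one VM and return the parsed response."""
    payload = json.dumps({"name": name, "vcpu": vcpu, "ram_gb": ram_gb}).encode()
    req = urllib.request.Request(
        f"{API_BASE}/vms",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Anything you would click through twice in a console belongs in a loop.
    for i in range(1, 4):
        result = create_vm(name=f"web-{i:02d}", vcpu=2, ram_gb=4)
        print("created:", result)
```

The same pattern applies whether the target is a hypervisor manager, a network controller or a configuration-management tool’s API; the win is that the loop is repeatable, reviewable and fast.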

Compatible hybrid cloud is ubiquitous

Compatible implies interoperable, not necessarily identical; the expectation that an enterprise will have a single cloud provider is about as realistic as having a single vendor or operating system. Interoperability involves multiple aspects, including management, execution and data compatibility. In an SDDC world, the location of infrastructure becomes irrelevant and applications and workloads will move as needed. Yeah, I know I am dreaming right now, though we see the path forward.

Application awareness is critical

Understanding which applications are ready for SDDC today, which need some transformational assistance, and which may never get there is another key to success. This is fundamental for applications going to the public cloud and equally relevant for SDDC. In addition to setting application priorities, we also need to understand their dependencies and relationships to get a complete landscape of business process support.
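
One lightweight way to capture those dependencies is a directed graph. The sketch below, with made-up application names, uses Python’s standard library to topologically sort applications so nothing is moved before the services it depends on; treat it as an illustration of the idea, not a discovery tool.

```python
# Sketch: ordering application moves by dependency.
# Application names and edges are invented for illustration.

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each application maps to the set of services it depends on.
DEPENDENCIES = {
    "crm_web":   {"crm_db", "auth"},
    "crm_db":    set(),
    "auth":      {"directory"},
    "directory": set(),
    "reporting": {"crm_db"},
}

def migration_order(deps):
    """Return applications ordered so dependencies come before dependents."""
    return list(TopologicalSorter(deps).static_order())

if __name__ == "__main__":
    # Prints one valid order; dependencies always appear before the apps that need them.
    print(" -> ".join(migration_order(DEPENDENCIES)))
```

In practice the edges come from discovery tooling, interviews and monitoring data, and the same graph also tells you which applications can move independently of the rest.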

After more than six decades of IT moving from mainframes to distributed networks and the Internet, we may finally be in position to deliver the right services, to the right resources, at the right time. The game does continue…

We have built data center architectures for the last two decades to serve applications with the same building blocks: servers, networking (LAN/SAN) and storage. Everyone will agree on that; we may not agree on which vendors to use in the data center, but that is OK.


Then the application landscape changed. Databases became bigger, and everyone wants data delivered faster. Does the three-tier architecture still hold up under these new requirements and constraints? You can certainly make the servers faster and add more memory, make the network low latency, and move the storage to flash to lower latency there too. In the end, is it worth the complexity and operational challenges you deal with? Ponder that. And while you do, consider moving to a two-tier approach. How? A truly converged solution built with web scale in mind. Pull a Google…


Converged architectures in the marketplace today are just new riffs on the same three-tier theme; see UCS, HP CloudSystem and the like. Then there is Nutanix. Nutanix is truly converged: software defined, with nothing more than a network connection. It is the closest you are going to come to condensing the three-tier challenge you are currently dealing with.


You may not be ready for this change today, but on your next refresh cycle I guarantee this will be a consideration. Invest wisely. The three-tier architecture is not completely dead; it’s on the back nine, though.

Recently the CMI team has seen a dramatic shift in how our customers approach IT.  New technologies in the datacenter, moves into various cloud models (private, off-prem, hybrid, etc.) and ever-growing virtual environments have opened the door to a new set of challenges and opportunities.  As our clients undergo these transformations, we constantly keep a finger on the pulse of potential solutions that will grow with them into the future.

It is with this in mind that CMI is excited to announce our new partnership with Nutanix, a hyper-converged platform that brings compute and storage into a single tier, offering predictable scalability and lower costs for virtualized environments.  Nutanix took the benefits behind the Google file system and brought them to mainstream organizations, delivering a cost-effective “web-scale” solution that is software-defined for resilience, distributed across the cluster for linear scalability, self-healing, and rich with automation and analytics tools.

Using their hyper-converged technology, we’ve seen Nutanix help clients increase datacenter efficiency and lower capital and operating expenses while maximizing performance and delivering linear scalability.  Some of the top workloads that take advantage of this technology are:

  • Virtual Desktop Infrastructure (VDI)
  • Big Data
  • Server Virtualization
  • Disaster Recovery (DR)
  • Enterprise Branch Offices

If you’d like more information, this YouTube video gives a high-level explanation of Nutanix in 2 Minutes.

And if you’d like to learn more about the Software-Defined Datacenter, this link brings you to a free soft copy of “Software-Defined Storage for Dummies”.  This book discusses how storage is evolving to become an on-demand service running on affordable, off-the-shelf x86 hardware and how to address current storage challenges while saving money.

And if web-scale, hyper-convergence sounds like a fit for you, give us a call – we’re here to help.