Comparison between a traditional IT business continuity (BC) plan and a VMware implementation

By Tom McDonald | Apr 15, 2011 12:17:00 PM

Many businesses' IT infrastructures are built around this setup, with the operating system bound to a specific set of hardware and a specific application bound to that OS. The server typically runs at about 5-10% of its capacity for most of the day, peaking only during heavy usage. Its data has to be backed up to a local SAN for recovery purposes, generally requiring special software to ensure the backups are complete and efficient.

If this is a vital server with a disaster recovery and business continuity plan in place to keep downtime as low as possible, then it will have an identical server installed for failover. This second server is used only if the original fails, but it still consumes power and rack space. Not only that, it has to be the identical model, with the same hardware configuration, firmware, and local storage, to guarantee immediate compatibility with the original server. This adds cost, since you need a second set of the same hardware, and it locks you into that model, limiting the business's upgrade path.

This setup generally falls into the “Boot and Pray” model of disaster recovery: the complexity of the configuration leaves the admin hoping the failover works rather than being able to guarantee a smooth transition from one server to the other. This has to be repeated for every vital server that needs a redundant backup, and each one has its own unique configuration, creating a large amount of complexity in managing all of these different machines. That complexity increases the company's RTO (recovery time objective) and RPO (recovery point objective) and makes recovery a much larger ordeal.

Read More >

5 ways a VDI (Virtualized Desktop Infrastructure) can improve IT for both users and admins

By Tom McDonald | Mar 28, 2011 3:14:00 PM

The benefits of virtualizing your desktop environment are numerous. In today's world, businesses' IT departments are growing by leaps and bounds, and the work needed to add, integrate, and maintain desktops can push IT resources to their limits. Virtualization was traditionally used to reduce the number of servers needed to run the IT environment, but as the software has become more advanced, the usefulness of a Virtualized Desktop Infrastructure (VDI) has become more apparent.

Read More >

Downtime not an option? Learn the basics of VMware's Fault Tolerance and what you will need to get up and running

By Tom McDonald | Mar 25, 2011 11:32:00 AM

Is a server crash not an option for your company? Is keeping your server up and running the life and soul of your business? Then you may want to consider VMware's Fault Tolerance (FT) feature. VMware Fault Tolerance is a step up from VMware High Availability (HA). High Availability is VMware's backup for a host crash: if a server running a VM goes down, the VM is rebooted on a different host. This allows for only a minute or two of downtime while the virtual machine starts up on a new server and the crashed host is restarted, if possible. This is extremely useful and can keep a business functioning with only a moment of downtime. What Fault Tolerance does is eliminate even that couple of minutes of downtime, so that if a server crashes, nothing is felt by the user. This feature gives companies that can't stop functioning, even for a minute, the security they need to run their businesses.
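
As a rough illustration of what turning FT on looks like in practice, here is a minimal sketch using pyVmomi, VMware's Python SDK for the vSphere API. The vCenter address, credentials, and VM name are placeholders, certificate checking is disabled for brevity, and error handling is omitted.

```python
# Minimal sketch: protect one VM with Fault Tolerance via pyVmomi.
# The vCenter host, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
content = si.RetrieveContent()

# Locate the VM by name with a container view over the whole inventory.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "critical-app-01")
view.DestroyView()

# CreateSecondaryVM_Task creates the lockstepped secondary copy of the VM;
# vCenter places it on a different host in the HA cluster.
WaitForTask(vm.CreateSecondaryVM_Task())
print(vm.runtime.faultToleranceState)  # 'running' once the VM is protected

Disconnect(si)
```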

How does FT work? With HA there is a primary host that runs the VM and a secondary host that is there in case of failure; if/when that failure occurs, the VM is restarted on the secondary host. The failure is detected using VMware's heartbeat function, which pings each host every second to ensure it is still active on the network; if a host stops responding, it is considered to have failed and its VMs are moved to a new machine. FT continues this trend, but instead of waiting for a host to fail and then restarting, it uses vLockstep to keep both hosts in sync, so that if one were to fail, the other would continue running without the user noticing the server failure. Because the VMs sit on shared, virtualized storage, all of their files are accessible to both hosts, and the primary host constantly updates the secondary in order to keep both hosts' RAM in sync. FT has a few rules to ensure it works properly (a configuration sketch follows the list):

  • Hosts must be in an HA cluster
  • Primary and secondary VMs must run on different hosts
  • Anti-affinity must be enabled (a setting that ensures the primary and secondary VMs cannot run on the same host)
  • The VMs must be stored on shared storage
  • A minimum of two Gigabit NICs, to allow for vMotion and FT logging traffic
  • Additional NICs for VM and management network traffic
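
To make these requirements a little more concrete, here is a hedged sketch of spot-checking a few of them with pyVmomi. It continues from the connection object si opened in the sketch earlier in this post; the VM name is again a placeholder, and it only checks HA cluster membership, shared storage, and the vMotion/FT-logging NICs, not every rule.

```python
# Sketch: spot-check some FT prerequisites for a VM. Assumes an existing
# pyVmomi connection `si` (see the earlier sketch); the VM name is a placeholder.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "critical-app-01")
view.DestroyView()

host = vm.runtime.host
cluster = host.parent

# 1. The host must belong to an HA-enabled cluster.
ha_ok = (isinstance(cluster, vim.ClusterComputeResource)
         and cluster.configuration.dasConfig.enabled)

# 2. The VM's datastores must be shared (visible to more than one host).
storage_ok = all(ds.summary.multipleHostAccess for ds in vm.datastore)

# 3. The host needs vmkernel NICs enabled for vMotion and FT logging.
nic_mgr = host.configManager.virtualNicManager
vmotion_ok = bool(nic_mgr.QueryNetConfig("vmotion").selectedVnic)
ftlog_ok = bool(nic_mgr.QueryNetConfig("faultToleranceLogging").selectedVnic)

print(f"HA cluster: {ha_ok}, shared storage: {storage_ok}, "
      f"vMotion NIC: {vmotion_ok}, FT logging NIC: {ftlog_ok}")
```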
Read More >

Prevent IT Disasters. How VMware High Availability protects your data center

By Tom McDonald | Mar 9, 2011 10:46:00 AM

VMware HA (High Availability) is a major step in setting up a disaster recovery objective. With HA enabled, each ESXi host checks in on the other hosts and watches for a failure; if a failure occurs, the VMs on the failed host are restarted on another server. To enable HA on your network, a few prerequisites must be met:

  • All VMs and their configuration files must reside on shared storage, so that every host has access to a VM if the host running it fails.
  • Each host in a VMware HA cluster must have a host name and a static IP address, which guarantees that the hosts can monitor each other without false failure alarms caused by a host changing its IP address.
  • Hosts must be configured to have access to the VM network.
  • Finally, VMware recommends a redundant network connection: if a network card fails, communication with its host can continue; without this redundancy, the host would be seen as failed.
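
For the "HA enabled" step itself, here is a minimal pyVmomi sketch, in the same spirit as the Fault Tolerance example above, that switches HA on for an existing cluster. The vCenter address, credentials, and cluster name are placeholders, and settings such as admission control and restart priority are left at their current values.

```python
# Sketch: enable HA (DAS) on an existing cluster via pyVmomi.
# The vCenter host, credentials, and cluster name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
content = si.RetrieveContent()

# Locate the cluster by name.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Production-Cluster")
view.DestroyView()

# Reconfigure the cluster with a spec that only turns HA on; other
# settings (admission control, restart priority) are left untouched.
spec = vim.cluster.ConfigSpecEx(dasConfig=vim.cluster.DasConfigInfo(enabled=True))
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
print(cluster.configuration.dasConfig.enabled)  # expect True when the task completes

Disconnect(si)
```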

Read More >