Replication with Hyper-V Replica - Part I

Replication Made Easy Step-By-Step

Overview: Disaster recovery scenarios, simple site-to-site replication, and Prod-to-Dev refresh scenarios are generally what drive IT administrators to look into virtual machine replication. We want to build our environments so that if something happens in our primary data center, our critical machines and data will be up and running somewhere else. Our developers may reside in a different location but want to work with the most recent datasets available. A slew of questions come up about how to deliver on these different types of requirements. Replication over wide area networks takes careful planning and consideration for any solution. In this article I focus on achieving results with Windows Server 2012 Hyper-V, but the methodology applies to almost any replication environment.

Important Questions: I was talking with a fellow IT Pro at one of our recent camps when he asked me, "How do I know what kind of bandwidth I need to perform replication from my main data center to my secondary site?"  Great question, and one of many that I have received in my past 7 years of virtualization consulting.  Many people go out and build an infrastructure to support replication, identify the virtual machines they want to replicate, and then just give it a whirl.  More often than not, they face long replication times, timeouts, and other logistical issues, if not immediately then a few weeks down the road.  It can be a discouraging process, I know, but I believe that with proper planning these scenarios are quite doable and may not require nearly as much budget as one would think.  Once we have identified the virtual machines that need to be replicated, the very next thing we should work out is how much time (and therefore data) can be lost in the event of an outage, and how quickly we can recover at the alternate location.  For those of you who have already defined your requirements and just want to get to the more advanced configurations, fast-forward to the Bandwidth Restrictions section in Part II of this series.  If you want to get started but still need a 180-day free trial of Windows Server 2012, click here.

So let's take a peek at the entire process.

1)   Identify the critical workloads and any dependencies they may have (e.g., Active Directory would need to be running before a file server).

2)   Identify the current and requested recovery point objective (RPO) for each workload. (i.e., how much time's worth of data can I afford to lose for this workload?)

3)   Identify the current and requested recovery time objective (RTO) for each workload.

a)   How fast can I recover to my RPO for this VM?

b)   This value may be more about your infrastructure's abilities than the request of the application owner.

4)   Determine the size of the actual footprint of the workload.

5)   Determine the amount of change occurring inside the given workloads.

6)   Review the requirements with the application owners.

a)   Hint: the application owner will always say they need 100% uptime, so we need to ask the proper questions.

b)   More on this topic later.

7)   Determine the amount of open bandwidth available, as well as the times of day and week when the most bandwidth is free (a quick sizing sketch follows this list).

8)   Test replication and bandwidth between site A and site B for performance and reliability.

9)   Document the steps necessary to fail over to the alternate site, then fail back to the production site per application.
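To put some numbers behind steps 4, 5, and 7, here is a rough back-of-the-envelope sketch. The article doesn't include any scripts, so this is just a minimal illustration in Python; the 200 GB footprint, 15 GB/day change rate, 20 Mbps link, and 70% efficiency figure are all made-up assumptions you would replace with your own measurements.

```python
# Back-of-the-envelope replication sizing. All figures here are illustrative
# assumptions, not measurements from a real Hyper-V environment.

def hours_to_seed(footprint_gb, link_mbps, efficiency=0.7):
    """Estimate initial replication time: VM footprint over the usable link speed."""
    usable_mbps = link_mbps * efficiency               # assume only ~70% of the pipe is really usable
    seconds = (footprint_gb * 8 * 1024) / usable_mbps  # GB -> megabits, then divide by Mb/s
    return seconds / 3600

def rpo_achievable(daily_change_gb, rpo_minutes, link_mbps, efficiency=0.7):
    """Check whether one RPO interval's worth of change can be shipped within the interval."""
    change_gb = daily_change_gb * (rpo_minutes / (24 * 60))   # change accumulated per interval
    usable_mbps = link_mbps * efficiency
    transfer_minutes = (change_gb * 8 * 1024) / usable_mbps / 60
    return transfer_minutes <= rpo_minutes, transfer_minutes

# Example: a 200 GB file server changing ~15 GB/day, replicated over a 20 Mbps WAN link.
print(f"Initial seed: ~{hours_to_seed(200, 20):.1f} hours")
ok, minutes = rpo_achievable(daily_change_gb=15, rpo_minutes=5, link_mbps=20)
print(f"5-minute replication interval sustainable: {ok} "
      f"(~{minutes:.1f} min of transfer per interval)")
```

In practice you would also factor in compression, working-hours bandwidth caps (covered under Bandwidth Restrictions in Part II), and the change rates you actually measured in step 5; the point is simply to sanity-check the numbers before you turn replication on.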

One of the most overlooked tasks in a project like this is working out how quickly you can fail back to your primary site when all is said and done! Windows Server 2012 takes this into consideration and allows for Reverse Replication automatically when a failback event occurs.

Now that we have a process to work from (and believe me, the process shown above can take many different turns and angles), we need to work with a set of tools. Since I work at Microsoft, the first tool that comes to mind is a spreadsheet! I just so happen to have said spreadsheet handy, and I will share it with you here.

[Image: replication planning spreadsheet]
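If you prefer to script that bookkeeping rather than keep it in a spreadsheet, a minimal stand-in for one piece of it, capturing the workload dependencies from step 1 and producing the failover order you document in step 9, might look like the sketch below. The workload names and dependencies are hypothetical.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical workloads mapped to the workloads that must be online before them (step 1).
dependencies = {
    "ActiveDirectory": set(),
    "SQL01":           {"ActiveDirectory"},
    "FileServer01":    {"ActiveDirectory"},
    "AppServer01":     {"SQL01", "FileServer01"},
}

failover_order = list(TopologicalSorter(dependencies).static_order())
print("Bring up at the recovery site in this order:", failover_order)
print("Fail back (or shut down) in reverse:", list(reversed(failover_order)))
```

It is no substitute for the spreadsheet, but it makes the recovery order explicit and repeatable for the runbook in step 9.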

Please continue reading at: Replication with Hyper-V Replica - Part II

More Stories By Tommy Patterson

Tommy Patterson began his virtualization adventure during the initial release of VMware's ESX Server. At a time when most admins were adopting virtualization as a lab-only solution, he pushed through the performance hurdles to quickly bring production applications into virtualization. Since the early 2000s, Tommy has spent most of his career in a consulting role, providing assessments, engineering, planning, and implementation assistance to many members of the Fortune 500. Troubleshooting complicated scenarios and incorporating best practices into customers' production virtualization systems has been his passion for many years. Now he shares his knowledge of virtualization and cloud computing as a Technology Evangelist on the Microsoft US Developer and Platform Evangelism team.
