Building an Automated Tiered Storage Lab with Windows Server 2012 R2

New R2 Release adds tiered storage that combines performance benefits of SSD with low cost of HDD

I’ve been speaking with lots of IT Pros over the past several weeks about the new storage improvements in Windows Server 2012 R2, and one feature in particular has gained a ton of attention: automated Storage Tiers as part of the new Storage Spaces feature set.

Automated Storage Tiers in Windows Server 2012 R2

However, not all of us have multiple SSD and HDD disks in our demo or lab gear, so it’s sometimes challenging to evaluate and demo this new feature.  In this article, I’ll step through the process of leveraging Virtual Hard Disks ( VHDs ) to simulate SSD and HDD tiers for the purpose of evaluating and demonstrating the functionality of Storage Tiers in a Windows Server 2012 R2 lab environment.

What are “Storage Tiers”?
Storage Tiers combine the performance of solid-state drives (SSDs) with the low cost-per-gigabyte of hard disk drives (HDDs) by providing the ability to create individual storage LUNs ( called “Virtual Disks” in Windows Server 2012 R2 – not to be confused with Hyper-V “Virtual Hard Disks” or VHDs ) that span SSD and HDD “tiers”.  Once a tiered LUN is created, Windows Server 2012 R2 analyzes disk IO requests on-the-fly and keeps the most frequently accessed data blocks on speedy SSDs while moving less frequently accessed data blocks to HDDs – all transparently to applications and users.
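The core idea is easy to sketch in code. Below is a toy Python model of frequency-based placement – the block granularity, access counts and greedy fill are illustrative assumptions only, not the actual Storage Spaces tiering algorithm:

```python
# Toy model of frequency-based tier placement (NOT the real Storage Spaces
# algorithm): the hottest blocks fill the SSD tier first, the rest go to HDD.

def place_blocks(access_counts, ssd_capacity_blocks):
    """Map block_id -> 'SSD' or 'HDD' based on access frequency."""
    by_heat = sorted(access_counts, key=access_counts.get, reverse=True)
    placement = {}
    for i, block in enumerate(by_heat):
        placement[block] = "SSD" if i < ssd_capacity_blocks else "HDD"
    return placement

# Five blocks, but the SSD tier only has room for two of them.
heat = {"b0": 120, "b1": 3, "b2": 45, "b3": 990, "b4": 7}
tiers = place_blocks(heat, ssd_capacity_blocks=2)
print(tiers["b3"], tiers["b0"], tiers["b1"])  # SSD SSD HDD
```

The two hottest blocks ( b3 and b0 ) land on the SSD tier; everything else stays on HDD. In the real feature, this re-evaluation happens transparently as access patterns change.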

You can learn more about Storage Improvements in Windows Server 2012 R2 in Chapter 3 of our latest free eBook, Introducing Windows Server 2012 R2.

The Net Result?
Most organizations tend to frequently access only a small subset ( generally 15% – 20% ) of their overall data storage.  Storage Tiers provide an ideal solution: all data stays online and accessible, the most frequently accessed data gets super-quick access, and administrators never need to manually move files between separate tiers of storage.  Because you only need enough SSD storage to hold your most frequently accessed data blocks, this dramatically reduces storage costs compared to investing in large amounts of SSDs.  In fact, the cost-performance ratio of tiered Storage Spaces also appears to obviate the need for expensive 15K SAS hard disks – by pairing a small amount of SSD for performance-driven needs with less-expensive high-capacity 7.2K hard disks for capacity-driven needs, I’m not seeing a practical need for more-expensive 15K disks – further helping to reduce costs.
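To put rough numbers on that, here’s a back-of-the-envelope comparison for 10 TB of data – the $/GB figures are purely illustrative assumptions, not real quotes:

```python
# Back-of-the-envelope cost comparison for 10 TB of data.
# The $/GB prices below are illustrative assumptions only.
TOTAL_GB = 10_000
HOT_FRACTION = 0.20            # ~15-20% of data is frequently accessed
SSD_PER_GB, HDD_PER_GB = 0.80, 0.05

all_ssd = TOTAL_GB * SSD_PER_GB
tiered = (TOTAL_GB * HOT_FRACTION * SSD_PER_GB
          + TOTAL_GB * (1 - HOT_FRACTION) * HDD_PER_GB)
print(f"All-SSD: ${all_ssd:,.0f}  Tiered: ${tiered:,.0f}")
# All-SSD: $8,000  Tiered: $2,000
```

Even with made-up prices, the shape of the result holds: paying SSD prices only for the hot 20% of your data is a fraction of the cost of an all-SSD build.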

But, I don’t have lots of SSDs for my Storage Tiers lab!
For production environments, you’ll definitely want to invest in SSDs and HDDs to gain the cost-performance benefits of Storage Tiers. But, when learning, evaluating and demonstrating Storage Tiers in a lab environment, you don’t necessarily need all these physical disks to just show Storage Tiers functionality.  Instead, with a Hyper-V VM, a set of Virtual Hard Disks and a bit of PowerShell “magic”, you can create a Storage Tiers demo lab environment.

Here are the steps to begin building a Storage Tiers demo lab without physical disks …

  1. Download Windows Server 2012 R2 installation bits

    Be sure to download the VHD distribution for the easiest provisioning as a virtual machine.
  2. In Hyper-V Manager, use the VHD downloaded in Step 1 above as the operating system disk to spin-up a new VM on a Hyper-V host.
  3. Once the new VM is provisioned, use Hyper-V Manager to modify the VM settings and hot-add 6 new virtual SCSI hard disks, as follows:

    - Add 3 x 250GB Dynamic VHDs ( these will be our simulated SSDs )
    - Add 3 x 500GB Dynamic VHDs ( these will be our simulated 7.2K HDDs )

    When completed, the settings of your VM should resemble the following:

    Hyper-V Manager: Hot-adding virtual SCSI hard disks

Now … a bit of PowerShell Magic!
After hot-adding the virtual SCSI hard disks above, you may notice that the Windows Server 2012 R2 guest operating system running inside the VM sees the new VHDs as SAS disks with an “Unknown” media type, as shown below using the Server Manager tool.

Virtual SCSI hard disks showing “Unknown” Media Type

Well, if the Windows Server 2012 R2 guest operating system doesn’t see these disks as “SSD” and “HDD” disks, it won’t know how to tier storage properly across them.  And, that’s where PowerShell comes in! Using PowerShell 4.0 in Windows Server 2012 R2, we can create a new Storage Pool and specifically tag the 250GB virtual hard disks with an SSD media type and tag the 500GB virtual hard disks with an HDD media type.  This will allow us to build a tiered storage lab environment with "simulated" SSD and HDD tiers.

Let’s get started with PowerShell …

  1. Launch the PowerShell ISE tool ( as Administrator ) from the Windows Server 2012 R2 guest operating system running inside the VM provisioned above.
  2. Set a variable to the collection of all virtual SCSI hard disks that we’ll use for creating a new Storage Pool:

    $pooldisks = Get-PhysicalDisk | ? {$_.CanPool -eq $true }
  3. Create a new Storage Pool using the collection of virtual SCSI hard disks set above:

    New-StoragePool -StorageSubSystemFriendlyName *Spaces* -FriendlyName TieredPool1 -PhysicalDisks $pooldisks
  4. Tag the disks within the new Storage Pool with the appropriate Media Type ( SSD or HDD ).  In the command lines below, I’ll set the appropriate tag by filtering on the size of each virtual SCSI hard disk:

    Get-PhysicalDisk | Where Size -EQ 267630149632 | Set-PhysicalDisk -MediaType SSD # 250GB VHDs

    Get-PhysicalDisk | Where Size -EQ 536065605632 | Set-PhysicalDisk -MediaType HDD # 500GB VHDs


    If you created your VHDs with different sizes than I’m using, you can determine the appropriate size values to include in the command lines above by running:

    (Get-PhysicalDisk).Size
  5. Create the SSD and HDD Storage Tiers within the new Storage Pool by using the New-StorageTier PowerShell cmdlet:

    $tier_ssd = New-StorageTier -StoragePoolFriendlyName TieredPool1 -FriendlyName SSD_TIER -MediaType SSD

    $tier_hdd = New-StorageTier -StoragePoolFriendlyName TieredPool1 -FriendlyName HDD_TIER -MediaType HDD
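A note on the size filtering in Step 4: matching on an exact Size value is brittle if your VHD sizes differ even slightly from mine. The same classification can be done by threshold instead, as this small Python sketch of the logic shows ( the sizes are the ones from my lab, and the 300 GiB cutoff is just an illustrative choice; in PowerShell you could filter similarly with Where-Object and a comparison ):

```python
# Classify pool disks as simulated SSD or HDD by a size threshold instead of
# exact byte counts -- more robust when VHD sizes vary slightly.
THRESHOLD = 300 * 1024**3    # 300 GiB: anything smaller plays the SSD role

def media_type(size_bytes):
    return "SSD" if size_bytes < THRESHOLD else "HDD"

# Values as reported by (Get-PhysicalDisk).Size in my lab
sizes = [267630149632, 267630149632, 536065605632]
print([media_type(s) for s in sizes])  # ['SSD', 'SSD', 'HDD']
```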

At this point, if you jump back into the Server Manager tool and refresh the Storage Pools page, you should see a new Storage Pool created and the Media types for each disk in the pool listed appropriately as shown below.

Server Manager – New Tiered Storage Pool

How do I demonstrate this new Tiered Storage Pool?
To demonstrate creating a new Storage LUN that uses your new Tiered Storage Pool, follow these steps:

  1. In Server Manager, right-click on your new Tiered Storage Pool and select New Virtual Disk…
  2. In the New Virtual Disk wizard, click the Next button until you advance to the Specify virtual disk name page.
  3. On the Specify virtual disk name page, complete the following fields:
    - Name: Tiered Virtual Disk 01
    - Check the checkbox option for Create storage tiers on this virtual disk
    Click the Next button to continue.
  4. On the Select the storage layout page, select a Mirror layout and click the Next button.
    Note: When using Storage Tiers, Parity layouts are not supported.
  5. On the Configure the resiliency settings page, select Two-way mirror and click the Next button.
  6. On the Provisioning type page, click the Next button.
    Note: When using Storage Tiers, only the Fixed provisioning type is supported.
  7. On the Specify the size of the virtual disk page, complete the following fields:
    - Faster Tier (SSD): Maximum Size
    - Standard Tier (HDD): Maximum Size
    Click the Next button to continue.
  8. On the Confirm selections page, review your selections and click the Create button to continue.
  9. On the View results page, ensure that the Create a volume when this wizard closes checkbox is checked and click the Close button.
  10. In the New Volume Wizard, accept the default values on each page by clicking the Next button.  On the Confirm Selections page, click the Create button to create a new Tiered Storage Volume.
    Note: When using Tiered Storage, the Volume Size should be set to the total available capacity of the Virtual Disk on which it is created.  This is the default value for Volume Size when using the New Volume Wizard.
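As a quick sanity check on the tier sizes the wizard offers, each tier’s usable capacity follows from the mirror layout: roughly the tier’s raw capacity divided by the number of mirror copies. A quick sketch of that arithmetic for this lab ( ignoring the small amount of pool metadata overhead, so real numbers will come in slightly lower ):

```python
# Approximate usable capacity per tier with a two-way mirror.
# Pool metadata overhead is ignored, so actual figures will be a bit lower.
ssd_raw_gb = 3 * 250     # three 250GB "SSD" VHDs
hdd_raw_gb = 3 * 500     # three 500GB "HDD" VHDs
MIRROR_COPIES = 2        # two-way mirror stores every block twice

ssd_usable = ssd_raw_gb / MIRROR_COPIES
hdd_usable = hdd_raw_gb / MIRROR_COPIES
print(ssd_usable, hdd_usable)  # 375.0 750.0
```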

Storage Tiering Completed!
Congratulations!
You now have a new Tiered Storage Virtual Disk and Volume.  Any disk IO written to or read from this new volume will automatically be processed across Storage Tiers based on frequency of access.  Keep in mind that this demo configuration is intended for functional demos only – because we’re not really using SSD disks in this configuration, you shouldn’t expect the same performance as in a production environment with real SSD and HDD disks.

What if I know a particular file is always frequently accessed?
Great question!
If there are particular files in your environment that you absolutely know will always be frequently accessed, such as a Parent VHD that is used by multiple linked Child VHDs via Differencing Disks, an IT Pro can pin those files to the SSD tier.  That way, Windows Server 2012 R2 doesn’t have to detect frequent access to keep the data blocks associated with those files in the fastest tier – they will always be there!
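Conceptually, pinning simply takes precedence over the frequency heuristic. Extending the earlier toy model in Python ( again, names and logic are illustrative only, not the Storage Spaces implementation ):

```python
# Toy model: pinned files always get the SSD tier regardless of heat;
# any remaining SSD capacity goes to the hottest unpinned files.

def place(files_heat, pinned, ssd_slots):
    placement = {f: "SSD" for f in pinned}   # pinned wins unconditionally
    remaining = ssd_slots - len(pinned)
    unpinned = sorted((f for f in files_heat if f not in pinned),
                      key=files_heat.get, reverse=True)
    for i, f in enumerate(unpinned):
        placement[f] = "SSD" if i < remaining else "HDD"
    return placement

# parent.vhdx is rarely "hot" by the counters, but we know it matters.
heat = {"parent.vhdx": 1, "logs.txt": 500, "db.mdf": 900, "old.bak": 2}
result = place(heat, pinned={"parent.vhdx"}, ssd_slots=2)
print(result["parent.vhdx"], result["old.bak"])  # SSD HDD
```

Note how parent.vhdx stays on the SSD tier even though its access count is the lowest of the bunch – that is exactly the behavior pinning gives you.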

To pin a file to the SSD tier, use the new Set-FileStorageTier PowerShell cmdlet, as follows:

Set-FileStorageTier -FilePath <PATH> -DesiredStorageTier $tier_ssd

To un-pin a file from the SSD tier, use the new Clear-FileStorageTier PowerShell cmdlet, as follows:

Clear-FileStorageTier -FilePath <PATH>


More Stories By Keith Mayer

Keith Mayer is a Technical Evangelist at Microsoft focused on Windows Infrastructure, Data Center Virtualization, Systems Management and Private Cloud. Keith has over 17 years of experience as a technical leader of complex IT projects, in diverse roles, such as Network Engineer, IT Manager, Technical Instructor and Consultant. He has consulted and trained thousands of IT professionals worldwide on the design and implementation of enterprise technology solutions.

Keith is currently certified on several Microsoft technologies, including System Center, Hyper-V, Windows, Windows Server, SharePoint and Exchange. He also holds other industry certifications from IBM, Cisco, Citrix, HP, CheckPoint, CompTIA and Interwoven.

Keith is the author of the IT Pros ROCK! Blog on Microsoft TechNet, voted as one of the Top 50 "Must Read" IT Blogs.

Keith also manages the Windows Server 2012 "Early Experts" Challenge - a FREE online study group for IT Pros interested in studying and preparing for certification on Windows Server 2012. Join us and become the next "Early Expert"!
