How Virtual Machine data is stored and managed within a HPE SimpliVity Cluster – Part 2 – Automatic Management

In my previous blog post my aim was to paint a picture of how the data that makes up the contents of a virtual machine is stored within an HPE SimpliVity Cluster. I introduced the concept of primary and secondary data containers, their creation process, their placement, and how we report on physical cluster capacity.

If you are unfamiliar with any of these concepts, I suggest reading that post here before continuing 🙂

Now that we have a greater understanding of how virtual machine data is stored, we can dig a little deeper to understand how the DVP automatically manages this data and, as administrators, how we can manage it as the environment grows.

There is a lot to cover in this topic. In the interest of keeping this post concise and building on core concepts incrementally, I will concentrate on the automatic management of data via IWO. I will cover other automatic data management features (via Auto Balancer) and manual management of data in future posts.

This post has been co-authored with my colleague Scott Harrison. A big thank you to Scott, who has provided and posed some interesting points to consider.

Automatic Management of Data

IWO, a closer look

As previously stated, a core feature of the HPE SimpliVity DVP is IWO (Intelligent Workload Optimizer).

IWO comprises two sub-components: the Resource Balancing Service and the VMware DRS / SCVMM PRO Integration Service. For this post I will focus on the VMware DRS integration service; however, the architecture remains analogous for Hyper-V.

IWO’s aim is twofold:

1. Ensure all resources of a cluster are properly utilized.

This process is handled by the Resource Balancing Service both at initial virtual machine creation and proactively throughout the life cycle of the VM and its associated backups. This is all achieved without end-user intervention.

2. Enforce data locality.

This process is handled by the VMware DRS / SCVMM PRO integration service by pinning (through VMware DRS affinity rules or Hyper-V SCVMM PRO) a virtual machine to the nodes that contain a copy of its data. Again, this is all achieved without end-user intervention.

Note! The Resource Balancing Service is an always-on service and does not rely on VMware DRS being active and enabled on a VMware cluster; it works independently of the DRS integration service.

Ensuring all resources of a cluster are properly utilized – VM & Data Placement scenarios

The primary goal of IWO (through its underlying Resource Balancer Service) is to ensure that no single node within an HPE SimpliVity cluster is over-burdened in any dimension: CPU, memory, storage capacity, or I/O.

The objective is not to ensure that each node experiences the same utilization across all resources or dimensions (that may be impossible), but instead to ensure that each node has sufficient headroom to operate. In other words, each node must have enough physical storage capacity to handle expected future demand, and no single node should handle a much larger number of I/Os relative to its cluster peers.

The Resource Balancer Service uses different optimization criteria for different scenarios: for example, initial VM placement on a new cluster (e.g., migration of existing VMs from a legacy system), best placement of a newly created VM within an existing system, Rapid Clone of an existing VM, and VDI-specific optimizations for handling linked clones.

Let’s explore the different scenarios.

Scenario #1 New VM Creation – VMware DRS enabled and set to Fully Automated

When creating a virtual machine on, or Storage vMotioning one to, an HPE SimpliVity Host/Datastore, vSphere DRS will automatically place the VM on a node with relatively low CPU and memory utilization according to its own (default) algorithms. No manual selection of a node is necessary.

DRS Set to Fully Automated – the VMware Cluster is selected as the compute resource – DRS automatically selects an ESXi server to run the VM

In the diagram below, VMware DRS has chosen Nodes 3 and 4 respectively to run VM-1 and VM-2. Independently, the DVP has chosen a pair of “least-utilized” nodes within the cluster (according to storage capacity and I/O demand) on which to place the data containers of those VMs.

The data containers of VM-1 and VM-2 have been placed on Nodes 1 & 2 and Nodes 3 & 4 respectively via the Resource Balancer service.
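To make the placement logic concrete, here is a minimal Python sketch of how a “least-utilized pair” decision could look. The node names, utilization figures, and scoring function are all hypothetical; the real Resource Balancer weighs more dimensions and its actual algorithm is not public.

```python
from itertools import combinations

def pick_node_pair(nodes):
    """Pick the pair of nodes with the lowest combined load score.

    `nodes` maps node name -> dict with hypothetical 'capacity_used'
    and 'io_demand' fractions (0.0-1.0). Illustrative only.
    """
    def score(name):
        n = nodes[name]
        return n["capacity_used"] + n["io_demand"]

    return min(combinations(nodes, 2),
               key=lambda pair: score(pair[0]) + score(pair[1]))

cluster = {
    "node1": {"capacity_used": 0.30, "io_demand": 0.20},
    "node2": {"capacity_used": 0.35, "io_demand": 0.25},
    "node3": {"capacity_used": 0.70, "io_demand": 0.60},
    "node4": {"capacity_used": 0.65, "io_demand": 0.55},
}
print(pick_node_pair(cluster))  # ('node1', 'node2')
```

With these invented numbers, Nodes 1 and 2 are the least loaded, matching the placement shown in the diagram above.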

IWO, via the DRS integration service, will pin VM-1 to Nodes 1 & 2 and VM-2 to Nodes 3 & 4 by automatically creating and populating DRS rules in vCenter. Let’s look at how that is achieved.

How are DRS Rules Created?

Each DRS rule consists of the following components:

  • A Virtual Machine DRS Group.
  • A Host DRS Group.
  • A Virtual Machines to Hosts rule. This consists of “must run on these hosts”, “should run on these hosts”, “must not run on these hosts”, or “should not run on these hosts”. For HPE SimpliVity we use the “should run on these hosts” rule.

In our example it is not optimal for VM-1 to be running on Node 4, as all of the I/O for the VM must be forwarded to either Node 1 or Node 2 in order to be processed. If the VM can be moved automatically to one of those nodes, then one hop is eliminated from the I/O path.

First, a Virtual Machine DRS Group is created. In our case the name of the Virtual Machine DRS Group will be SVTV_<hostID2>_<hostID3>, as we’re looking to make a group of virtual machines that will run optimally on these two nodes.

Below we can see VM-1 assigned to this VM Group. VM-1 will share this group with other virtual machines that have their data located on the same nodes. Note that the host ID is the GUID of the HPE SimpliVity node, not an IP address or hostname, which may appear confusing to the end user. Unfortunately, mapping the GUID to the hostname or IP address of the node is not possible through the GUI; if you do wish to identify the node IP, use the command dsv-balance-show --showNodeIP.

DRS VM Group containing VM-1

dsv-balance-show --showNodeIP – we can map the output of this command (node GUID) to the VM Group

Looking at the VM Group for VM-1, we can deduce that the data is stored on the nodes with GUIDs ending in “aaf” and “329”, which in turn equate to OVCs .185 and .186, which in turn live on the ESXi nodes ending in 81 and 82, as shown below.

Again, all of this is handled automatically for you; however, for the post to make sense it is important to know where these values come from.

Identifying the Node (Host) associated with the OVC VM

As the Host DRS Groups are created, they are named SVTH_<hostID2>_<hostID3>. A host group will only ever contain two nodes, as a virtual machine’s data resides on only two nodes. Several host groups will be created depending on how many hosts there are in the cluster – one host group for each combination of nodes. Here I have highlighted the host group for Hosts 81 and 82, to which VM-1 will be tied.
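The naming convention and the “one host group per node pair” point can be illustrated with a short Python sketch. The GUID suffixes are made up; only the SVTV_/SVTH_ prefixes come from the groups observed in vCenter.

```python
from itertools import combinations

def vm_group_name(host_id_a, host_id_b):
    # SVTV_<hostID>_<hostID> naming as seen in vCenter
    return f"SVTV_{host_id_a}_{host_id_b}"

def host_group_name(host_id_a, host_id_b):
    # SVTH_<hostID>_<hostID> naming as seen in vCenter
    return f"SVTH_{host_id_a}_{host_id_b}"

def host_groups_for_cluster(host_ids):
    """One host group per unique pair of nodes in the cluster."""
    return [host_group_name(a, b) for a, b in combinations(host_ids, 2)]

guids = ["aaf", "329", "e41", "b77"]  # hypothetical GUID suffixes
groups = host_groups_for_cluster(guids)
print(len(groups))  # a 4-node cluster yields C(4,2) = 6 host groups
```

So a 4-node cluster carries 6 host groups, an 8-node cluster 28, and so on.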

DRS Host Group containing Hosts 81 and 82

Lastly, a “Virtual Machines to Hosts” rule is made, consisting of an HPE SimpliVity host group and an HPE SimpliVity VM group. This rule directs DRS that VM-1 “should run on” Hosts 81 and 82.

DRS affinity rules are “should” rules and not “must” rules. This is an important distinction that we will discuss later in the post.

DRS Rule “Run VMs On Hosts” containing appropriate Host Group and VM Group for VM-1

If set to Fully Automated, VMware DRS will vMotion any VMs violating these rules to one of the data-container-holding nodes, thus aligning the VM with its data. In this case, VM-1 was violating the affinity rules by running on Node 4 and is automatically vMotioned to Node 2 by DRS.
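The alignment check that the affinity rules effectively encode can be sketched as follows: if a VM runs on a node outside its data-holding pair, move it to the less-loaded node of that pair. Function names and load values are illustrative, not the actual DRS logic.

```python
def violates_affinity(vm_host, data_holding_hosts):
    """True if the VM runs on a node without a local data container."""
    return vm_host not in data_holding_hosts

def vmotion_target(vm_host, data_holding_hosts, host_load):
    """If the rule is violated, pick the least-loaded data-holding node."""
    if not violates_affinity(vm_host, data_holding_hosts):
        return None  # already aligned with its data
    return min(data_holding_hosts, key=lambda h: host_load[h])

load = {"node1": 0.4, "node2": 0.3, "node3": 0.6, "node4": 0.5}
# VM-1 runs on node4, but its data containers live on nodes 1 and 2:
print(vmotion_target("node4", {"node1", "node2"}, load))  # node2
```

This mirrors the example above: VM-1 on Node 4 violates the “should run on” rule and ends up on Node 2.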


Scenario #2 New VM Creation – VMware DRS Disabled

As stated previously, if VMware DRS is not enabled, the Resource Balancer service continues to function and initial placement decisions continue to operate; however, DRS affinity rules will not be populated into vCenter.

DRS Disabled – User must select a compute resource within the cluster

In this scenario, when virtual machines are provisioned they may reside on a node where there is no data locality. An HPE SimpliVity alarm, “VM Data Access Not Optimized”, will be generated at the VM object layer within vCenter, alerting the user.

“Data Access Not Optimized” refers to a virtual machine running on a host where there is no local copy of the VM data

The HPE SimpliVity platform, through interaction with vCenter Tasks and Events, will generate an event with remediation steps directing you to the nodes that contain a copy of the virtual machine data. In the diagram below I have highlighted the “Data access is not optimized” event, which directs the user to vMotion the VM to one of the outlined hosts.

Data Access Not Optimized alarm directing the user to vMotion the VM to one of the outlined hosts

Rapid clone of an existing VM

We have shown how the Resource Balancer service behaves with regard to new VM creation; however, it takes a different approach for HPE SimpliVity clones and VMware clones of virtual machines (VMware clones can also be handled by HPE SimpliVity via the VAAI plugin for faster operation).

In this scenario the Resource Balancer service will leave cloned VM data on the same nodes as the parent, as this achieves the best possible cluster-wide storage utilization and deduplication ratios.

Resource Balancer service will place clones on the same nodes as their parents

If I/O demand exceeds node capabilities, the DVP will live-migrate data containers in the background to less-loaded node(s).
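As a rough illustration of that decision, the sketch below keeps a clone’s data with its parent (best deduplication) unless a parent node is over a hypothetical I/O threshold, in which case the two least-loaded nodes are chosen. The threshold and scoring are invented for the example; the DVP’s actual criteria are not public.

```python
def place_clone(parent_nodes, node_io, io_limit, candidates):
    """Keep clone data with its parent unless a parent node is over
    its I/O limit; otherwise pick the two least-loaded other nodes.
    Threshold and scoring are hypothetical."""
    if all(node_io[n] <= io_limit for n in parent_nodes):
        return parent_nodes  # best dedupe: same nodes as the parent
    others = sorted((n for n in candidates if n not in parent_nodes),
                    key=lambda n: node_io[n])
    return tuple(others[:2])

io = {"node1": 0.95, "node2": 0.90, "node3": 0.30, "node4": 0.25}
# Parent nodes 1 and 2 are saturated, so the clone's containers move:
print(place_clone(("node1", "node2"), io, 0.85, io))  # ('node4', 'node3')
```

Once the data containers move, the affinity rules are updated and DRS follows with a vMotion of the VM, as the next diagram shows.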

Automatic migration of cloned data containers, followed by automatic vMotion of the VM due to the automatic update of affinity rules after data container migration (nice!)

Live migration of a data container does not refer to VMware Storage vMotion; it refers to the active migration of a VM data container to another node.

VDI specific optimizations for handling linked clones

The scope of VDI and VDI best practices is beyond this post; however, I did want to mention how the HPE SimpliVity platform handles this scenario. More information on this topic can be found in this technical whitepaper: HPE Reference Architecture for VMware Horizon on HPE SimpliVity 380 Gen10.

A single datastore per HPE SimpliVity node within a cluster is required to ensure even storage distribution across cluster members. This is less important in a two-node HPE SimpliVity configuration; however, following this best practice will ensure a smooth transition to a three (or greater) node HPE SimpliVity environment should the environment grow over time. This best practice has been proven to deliver better storage performance: it is highly encouraged for management workloads and is a requirement for desktop-supporting infrastructure. VDI environments typically clone a VM, or golden image, many times. These clones (replicas) essentially become read-only templates for new desktops.

VDI Setup – Clone Templates from a Golden Image

As VDI desktops are deployed, linked clones are created on random hosts. Linked clones mostly read from the read-only templates and write locally, which causes proxying and adds extra load to the nodes that host the read-only templates.

Deployed VDI VMs – mainly reads from the cloned golden image, causing I/O to be proxied over the network

To mitigate this, the Resource Balancer service will automatically distribute the read-only master images across all nodes for even load, aligning linked clones with their parents to ensure node-local access. It is also worth noting that the Resource Balancer may also relocate linked clones.
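The idea can be sketched as two steps: spread the read-only replicas round-robin across the nodes, then place each linked clone on the node that holds its parent replica. This is an illustration of the behaviour described above, not the DVP’s actual algorithm; all names are hypothetical.

```python
def distribute_replicas(replicas, nodes):
    """Round-robin read-only replica images across nodes so no single
    node serves all linked-clone reads (illustrative only)."""
    placement = {}
    for i, replica in enumerate(replicas):
        placement[replica] = nodes[i % len(nodes)]
    return placement

def place_linked_clone(clone, parent_replica, placement):
    """A linked clone lands on the node holding its parent replica,
    keeping both reads and writes node-local."""
    return placement[parent_replica]

nodes = ["node1", "node2", "node3", "node4"]
placement = distribute_replicas(["replica-a", "replica-b", "replica-c"], nodes)
print(place_linked_clone("desktop-01", "replica-b", placement))  # node2
```

With the replicas spread out and clones co-located with their parents, no read I/O has to be proxied across the network.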

Linked clones automatically aligned with their parents to ensure node local Read/Writes
