
Monday, December 2, 2019

ExploreVM Podcast Season 3 Episode 7 - HCI Series - StorMagic

Today we tackle the last episode in the Hyperconverged series.

As I mentioned in previous episodes, the goal for this series is to help build an understanding of hyperconverged infrastructure, its place in the datacenter world, and to spotlight the various ways that vendors are deploying HCI.

Once again, I have contacted the vendors directly (or have been contacted by them) to participate.

That being said, this is NOT a sponsored series.
I am not receiving any sort of compensation for these appearances.
And all vendors appearing had a strict set of rules laid out for their appearance:

1) No bashing other HCI vendors
2) Keep it technical; this is a technical podcast
3) This is NOT a free commercial for your product
4) No bashing other HCI vendors


Links:

And thank you for listening.

This was the last scheduled episode in the HCI series, but if you would like to speak about your product, please do not hesitate to reach out. I would love to schedule more HCI-related episodes.

If you'd like to be a guest on the ExploreVM podcast, have a show idea for a future episode, or want to continue the conversation on today's topics, please contact me via Email, Facebook, or Twitter.

Thursday, December 27, 2018

ExploreVM Podcast - A VMworld 2018 Conversation with Mike Burkhart

As 2018 comes to an end, I look back at some sessions that haven't been featured on the podcast yet this season. This episode was originally intended to be a video featuring Mike Burkhart live at VMworld 2018. Unfortunately, due to some technical difficulties during the editing process, we can only enjoy it as an audio podcast.


Listen to "A VMworld 2018 Conversation with Mike Burkhart" on Spreaker.



My Guest:
Mike Burkhart

Links:
vBrownBag VMworld 2018 Tech Talks
VMworld US 2018 Day 1 Keynote
VMworld US 2018 Day 2 Keynote
Troubleshoot and Assess the Health of VMware Environments with Free Tools (VIN3257BU)

Do you have an idea or a topic for the show? Would you like to be a guest on the ExploreVM podcast? Or just keep up the conversation about VMworld 2018? If so, please contact me on Twitter, Email, LinkedIn, Instagram, or Facebook.

Thursday, August 2, 2018

Getting Started with StorMagic SvSAN - A Product Review



Recently, I had the opportunity to try out StorMagic SvSAN in my home lab to see how it stacks up. The following is an introduction to SvSAN and a description of the deployment, the testing, the results, and my findings.



What is StorMagic SvSAN 6.2?

StorMagic SvSAN is a hyperconverged solution designed with the remote office/branch office in mind. Two host nodes with onboard storage can be used in a shared-storage-style deployment in locations where a traditional three-tier architecture would be difficult to manage or cost-prohibitive. SvSAN is vendor agnostic, so it can be deployed onto existing infrastructure without the need to acquire additional hardware. The two storage nodes can scale out to support up to 64 compute-only nodes. Licensing is straightforward: one perpetual license per pair of clustered storage nodes. Initial pricing is also very accessible, starting at approximately $4,000 for the first 2TB license, and both licensing and capacity can scale beyond the initial 2TB.



When asked about their typical customer base, StorMagic provided the following response: "StorMagic SvSAN is designed for large organizations with thousands of sites and companies running small data centers that require a highly available, two-server solution that is simple, cost-effective and flexible. Our typical customers have distributed IT operations in locations like retail stores, branch offices, factories, warehouses and even wind farms and oil rigs. It is also perfect for IoT projects that require a small IT footprint, and the uptime and performance necessary to process large amounts of data at the edge."



Technical Layout of SvSAN

A typical SvSAN deployment consists of three base components: hypervisor integration, Virtual Storage Appliances (VSAs), and the Neutral Storage Host (NSH). In my lab environment I used VMware vSphere, but StorMagic offers support for Hyper-V as well. A plugin loaded into the vCenter Server provides the dashboard for managing and deploying the VSAs. Following the wizard, a Virtual Storage Appliance is deployed on each host and the local storage is presented to the VSA. Before creating storage pools, the witness service (the Neutral Storage Host) must be deployed external to the StorMagic cluster. The NSH can be deployed on a Windows or Linux server or PC, and it is lightweight enough to run on a Raspberry Pi.
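The plugin wizard handles the deployment itself, but it can be handy to confirm from the vSphere side that both hosts actually see the resulting shared datastore. Here's a minimal sketch using the pyVmomi library; the vCenter address, credentials, and datastore name are placeholders, so adjust them for your environment:

    # Sanity check: confirm every ESXi host sees the SvSAN shared datastore.
    # The vCenter address, credentials, and datastore name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; use valid certs in production
    si = SmartConnect(host="vcenter.lab.local",
                      user="administrator@vsphere.local",
                      pwd="password",
                      sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            mounted = [ds.summary.name for ds in host.datastore]
            print(host.name, "sees:", ", ".join(mounted))
            if "svsan-datastore-01" not in mounted:  # assumed datastore name
                print("  WARNING: SvSAN datastore not visible on this host")
        view.Destroy()
    finally:
        Disconnect(si)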



SvSAN 6.2 introduced the ability to encrypt data; a key management server (KMS) is required for encryption. For this evaluation, I installed Fornetix Key Orchestration as the KMS. The available encryption options include encrypting a new datastore, encrypting an existing datastore, re-keying a datastore, and decrypting a datastore. As I was curious what kind of performance hit encryption might impose on the environment, I ran my tests against the non-encrypted datastore, then again after encrypting it.



Deployment and Testing

The overall installation process is fairly straightforward. StorMagic provides an Evaluator's Guide which outlines the installation process, and their website has ample documentation for the product. I had to read through the documentation a couple of times to fully understand the nuances of the deployment. I did encounter a few hiccups during deployment: an IP issue, which I resolved, and a timeout on the VSA deployment. I did need to contact support to release the license for the Virtual Storage Appliance which timed out, but support was responsive and resolved my issue quickly. The timeout may have been tied to the IP issue, as the VSA deployed successfully on the second attempt.



With the underlying infrastructure in place, a shared datastore was deployed across both host nodes, and the testing could begin. A Windows Server 2012 R2 virtual machine was deployed on the SvSAN datastore to run performance tests against. The provided Evaluation Guide suggests many tests to put the SvSAN environment through its paces. As I mentioned previously, I ran the tests against an encrypted datastore, a non-encrypted datastore, and a local datastore.



Following the guidelines set forth by the Evaluation Guide, Iometer was the tool of choice for performance benchmarking. Below is a chart of the metrics used. Outside of the suggested performance testing, I also ran various tests to gauge what the end-user experience could feel like on an SvSAN-backed server. These tests included RDP sessions into the VM, continuous pings to locations internal and external to the network, and running various applications.
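To compare the runs side by side, it helps to pull the numbers into a quick script. The sketch below assumes the Iometer results have been exported to a simplified CSV with scenario and iops columns; Iometer's native results file is more verbose, so treat the layout and column names as illustrative:

    # Compare average IOPS across test scenarios from a simplified CSV export.
    # Assumed columns: scenario (e.g. local, svsan, svsan-encrypted) and iops.
    import csv
    from collections import defaultdict

    samples = defaultdict(list)
    with open("iometer_results.csv", newline="") as f:
        for row in csv.DictReader(f):
            samples[row["scenario"]].append(float(row["iops"]))

    averages = {name: sum(vals) / len(vals) for name, vals in samples.items()}
    baseline = averages.get("local")
    for name, avg in sorted(averages.items()):
        note = ""
        if baseline and name != "local":
            note = f" ({(avg - baseline) / baseline * 100:+.1f}% vs local)"
        print(f"{name:>18}: {avg:10.1f} IOPS{note}")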






The final tests run against the SvSAN cluster covered failure scenarios and how they would impact the virtual machine: drives were removed, connectivity to the Neutral Storage Host was severed, and iSCSI and cluster networking were disconnected. An interesting aspect of the guide is that it gives you testing options to cause failures that will affect VMs running on the SvSAN datastore, so you can see first-hand how the systems handle the loss of storage.



SvSAN Results & Final Thoughts


Performance testing ran against the VM on the SvSAN datastore provided positive results. I was curious as to whether passing through an additional step in the process would affect IOPS, but there were only nominal differences between the local storage and the SvSAN datastore. I found the same to be true when it came to running an encrypted versus a non-encrypted datastore. IOPS performance held steady across all testing scenarios.



The same was true of the user-experience testing. While running Iometer, Firefox, a popular chat application, and a continuous ping to a website, the following failures were introduced with no impact (a rough sketch of that kind of ping monitoring appears after the list):



  • hard drives were removed
  • a Virtual Storage Appliance was powered down
  • an ESXi host was shut down
  • connectivity to the Neutral Storage Host was severed
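
For the continuous ping, any scripted loop will do. Below is a minimal sketch in Python that shells out to the system ping and timestamps any lost packets; the target address is a placeholder, and the flags shown are Linux syntax:

    # Continuously ping a target and timestamp any lost packets so that
    # storage failover events can be correlated with network blips.
    # The target and ping flags are illustrative (Linux syntax shown).
    import subprocess
    import time
    from datetime import datetime

    TARGET = "8.8.8.8"  # placeholder target

    while True:
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", TARGET],
            capture_output=True,
        )
        if result.returncode != 0:
            print(f"{datetime.now().isoformat()} lost ping to {TARGET}")
        time.sleep(1)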



I was impressed with my experience with StorMagic's SvSAN. I went from no prior exposure to running production-ready datastores in approximately an hour, and the solution performed well under duress. Overall, StorMagic SvSAN is an excellent choice for those in need of a solid, reliable, and cost-effective remote office/branch office solution.



Lab Technology Specifications:

  • Two Dell R710s
  • 24 GB RAM each
  • 2x Intel Xeon X5570 CPUs (2.93 GHz, 8M cache, Turbo, HT, 1333 MHz) each
  • One 240 GB SSD for caching in each host
    • Presented as a single 240 GB pool from the RAID controller
  • 5x 600 GB 10k SAS drives configured in RAID 5
    • Presented as two pools: 400 GB & 1.8 TB
  • VMware vCenter Server Appliance 6.5
  • VMware ESXi 6.5 U2 Dell Custom ISO
  • Cisco Meraki MS220 1 Gb switching

Further reading on StorMagic:
SvSAN Lets You Go Sans SAN 
 
This blog was originally published at Gestalt IT as a guest blog post. 

If you'd like to continue the conversation about StorMagic SvSAN, do not hesitate to contact me via any of the channels provided below. Do you have an idea or a topic for the blog? Would you like to be a guest on the ExploreVM podcast? If so, please contact me on Twitter, Email, or Facebook.

Tuesday, April 17, 2018

Top 3 Features of Nutanix AOS 5.6

On April 16th, Nutanix released its newest version of AOS (Acropolis Operating System). The 5.6 release has 9+ new features, but rather than cover them all, I wanted to call out what I consider the top 3 features of this release.

One or Two Node Deployment for Remote / Branch Office

Many businesses face the challenge of maintaining IT resources in remote locations. Whether it be a branch office or a retail outlet, choosing the right solution to provide infrastructure that is easy to deploy and manage can prove difficult. Enter AOS 5.6. With this release, Nutanix now offers single- and dual-node solutions for ROBO deployments. The single-node deployment can support up to 5 VMs with disk-level resiliency; the two-node option supports up to 10 VMs and offers node-level resiliency. Both options are managed centrally via Prism Central, allowing admins to administer the nodes alongside their on-premises data center. ROBO sites can also utilize cross-hypervisor DR between the remote site and the DR location. And as with all other Nutanix deployments, ROBO sites can be remotely upgraded using the same one-click upgrades as the local site, removing the complexity of keeping a remote office up to date.

Microsegmentation hits GA

AHV users can now benefit from the GA release of microsegmentation. Built into 5.6 and managed via Prism, application-centric policy models can be deployed using a stateful distributed firewall. Single virtual machines or groups of VMs can be protected, including blocking east/west traffic between them. Microsegmentation offers granular application isolation and zoning without configuring VLANs.
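To make the policy model concrete, here's a rough sketch of what pushing an isolation-style rule to Prism Central's v3 REST API could look like using Python's requests library. The endpoint and payload shape are simplified assumptions from memory, so consult the Nutanix v3 API reference for the actual schema:

    # Hypothetical sketch: isolate two VM categories so no east/west traffic
    # flows between them. Field names are simplified assumptions; check the
    # Nutanix v3 API documentation for the real schema.
    import requests

    PRISM = "https://prism-central.lab.local:9440"  # placeholder address

    payload = {
        "metadata": {"kind": "network_security_rule"},
        "spec": {
            "name": "isolate-dev-from-prod",
            "resources": {
                "isolation_rule": {
                    "action": "APPLY",
                    "first_entity_filter": {
                        "type": "CATEGORIES_MATCH_ALL",
                        "kind_list": ["vm"],
                        "params": {"Environment": ["Dev"]},
                    },
                    "second_entity_filter": {
                        "type": "CATEGORIES_MATCH_ALL",
                        "kind_list": ["vm"],
                        "params": {"Environment": ["Prod"]},
                    },
                }
            },
        },
    }

    resp = requests.post(
        f"{PRISM}/api/nutanix/v3/network_security_rules",
        json=payload,
        auth=("admin", "password"),  # placeholder credentials
        verify=False,                # lab only
    )
    resp.raise_for_status()
    print(resp.json())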

Volume Group Load Balancing of vDisks 

For users with high-I/O VMs, AOS 5.6 includes the ability for AHV to load balance vDisks in a volume group. CPU and memory resources are pulled from multiple Controller VMs (CVMs); this distribution across CVMs helps improve virtual machine performance and reduce bottlenecks.
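As a purely conceptual illustration of the idea (a toy, not Nutanix's actual algorithm), distributing the vDisks of a volume group across several CVMs might look like this:

    # Toy illustration: spread the vDisks of a volume group across multiple
    # controller VMs round-robin. Conceptual only, not Nutanix's algorithm.
    from itertools import cycle

    cvms = ["cvm-a", "cvm-b", "cvm-c"]
    vdisks = [f"vdisk-{i}" for i in range(8)]

    assignment = dict(zip(vdisks, cycle(cvms)))
    for vdisk, cvm in assignment.items():
        print(f"{vdisk} -> {cvm}")  # each CVM serves a share of the I/O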

Related links

What's New | AOS 5.6 - Nutanix
VCDX133.com - Nutanix AOS 5.6 Released
VCDX56.com - Nutanix AOS & Prism Central Version 5.6 & Much More Released

Do you have an idea or a topic for the blog? Would you like to be a guest on the ExploreVM podcast? If so, please contact me on Twitter, Email, or Facebook.

Wednesday, January 10, 2018

Riverbed SteelFusion: A New Approach to Remote Office Infrastructure


 

Approaching a remote office/branch office (ROBO) deployment can be a more complicated task than it originally appears. Many vendors in this space offer a quick deployment but fail to look deeper into the needs of the business to provide a well-rounded solution. How does a business plan for disaster recovery at a remote site? What about offices in foreign countries? Data location and international laws could complicate the ability to protect the intellectual property of the business. Enter Riverbed SteelFusion. SteelFusion offers administrators simplified ROBO deployments and centralized management, and it provides options for problems that could easily be overlooked when planning for remote offices.

What is the SteelFusion Solution?


SteelFusion is a Software-Defined Edge solution consisting of two parts: Core and Edge. In the on-premises data center, the SteelFusion Core (a physical appliance or virtual machine) is connected to the SAN or NAS storage. It should be noted that the SteelFusion Core can also be linked to AWS or Azure storage for those with a multi-cloud solution. The SteelFusion Edge appliance is deployed at the remote site; once the base network configuration is made, the appliance is remotely managed from the Core.


Riverbed's Parimal Puranik whiteboarding SteelFusion at TFD15

How does SteelFusion differentiate from other ROBO solutions?


SteelFusion separates itself from the pack by keeping corporate data housed in the main data center. With the data on-premises, backup and disaster recovery planning is simplified to one site; there is no need to develop a complicated solution encompassing multiple remote offices. Similarly, since the data lives on the SAN/NAS, it is less vulnerable to foreign data laws. The data required by the ROBO site, whether files or VM disks, is replicated and cached at the Edge, and changes are written back to the SAN/NAS at the Core.
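That cache-at-the-edge, write-back-to-core pattern is the heart of the design. As a purely conceptual toy in Python (not Riverbed's implementation), the flow looks something like this:

    # Toy illustration of an edge cache with write-back to a central core.
    # Models the general pattern only; this is not Riverbed's implementation.
    class CoreStorage:
        """Stands in for the SAN/NAS behind the SteelFusion Core."""
        def __init__(self):
            self.blocks = {}

        def read(self, key):
            return self.blocks.get(key)

        def write(self, key, data):
            self.blocks[key] = data

    class EdgeCache:
        """Stands in for the Edge appliance: local reads, queued write-back."""
        def __init__(self, core):
            self.core = core
            self.cache = {}
            self.dirty = set()

        def read(self, key):
            if key not in self.cache:       # cache miss: fetch from the core
                self.cache[key] = self.core.read(key)
            return self.cache[key]

        def write(self, key, data):
            self.cache[key] = data          # acknowledged locally at the edge
            self.dirty.add(key)             # remember to sync back later

        def flush(self):
            for key in sorted(self.dirty):  # write changes back to the core
                self.core.write(key, self.cache[key])
            self.dirty.clear()

    core = CoreStorage()
    edge = EdgeCache(core)
    edge.write("vmdk-block-42", b"remote office data")
    edge.flush()  # changes are now protected centrally at the core
    assert core.read("vmdk-block-42") == b"remote office data"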

Aside from passing data to the remote site, the Edge appliance can also replace traditional compute solutions with Riverbed's Virtual Services Platform (VSP). VSP runs VMware ESXi on the SteelFusion Edge appliance, eliminating the need for additional hardware.

On paper the solution seems solid, but does it hold up in the real world? The answer appears to be yes. Based on statistics provided by Riverbed in September 2017, there are over 10,000 appliances deployed across 1,200 customers globally, and from experience in my professional life, installation and management are easy. I have encountered a few enterprises in my area currently utilizing SteelFusion, or with plans to implement it in 2018. While it may not be a perfect fit for every ROBO use case, SteelFusion is certainly worth investigating.

For a deeper overview of Riverbed's SteelFusion technology, check out their presentation at Tech Field Day 15 here.



Disclaimer: I was invited to participate as a Tech Field Day delegate as a guest of Gestalt IT. All expenses, including food, transportation, and hotel, were covered by Gestalt IT. I did not receive any compensation to write this post, nor was I asked to write it. The above post reflects my opinion and not that of Gestalt IT.