Considering VSAN

For us storage types, one of the more interesting announcements coming out of VMworld is VSAN (formal name "Virtual SAN"), which will now be available in beta form before too long.


Although VSAN does many of the things a traditional storage array does, it's not a traditional storage array: it's a software-based storage array that uses clusters of commodity server hardware to do its work.  Storage and compute workloads run together on the exact same resource pool.

Yes, it's a v1 product in many regards (that's to be expected!), but there are some great ideas to be found in what the team has done -- worthy of deeper understanding -- and I'm sure we'll see more of these newer concepts going forward.


The Basics

VSAN has been discussed quietly for a while, but now the covers are coming off.  The name might be a bit of a misnomer, as it really isn't a SAN in the familiar sense: it doesn't present the familiar LUN/block/volume abstraction.

I tend to describe it as a VMDK storage server.

For those of us with long storage backgrounds, VSAN forces us to re-think many of our long-held assumptions.  Personally, I found the reorientation refreshing.  

While I was often frustrated searching for familiar signposts (LUNs and filesystems, for example), I found many new concepts to consider.

Software-Defined Storage -- The Big Picture

While there is no industry consensus yet on what software-defined storage is (and what it is not), one common thread is the need for storage to be a dynamic, programmable resource in the same sense that compute is today.

Ideally, the compute experience and the storage experience would be completely converged in all aspects: resource pool, provisioning, monitoring, etc.  In this model, storage is no longer a separate domain; it's now just part of the virtual landscape in all regards.

If this is the primary goal of software-defined storage, VSAN nails this in a big way.  Conversely, if one insists on viewing VSAN through a historical storage array lens, much of what makes it unique and special isn't immediately apparent.

Along the way, VSAN offers extremely tight integration with vSphere (it should, as several key subsystems are kernel-based), as well as effective use of the same commodity server resource pool used for server virtualization.

VSAN -- Basic Architecture

A VSAN cluster initially comprises 3 to 8 compute nodes, each of which must have at least one SSD and one familiar SAS disk drive.   The compute nodes are not dedicated to VSAN: they support the normal mix of vSphere workloads.

VSAN is simply "turned on" at cluster creation time; doing so means that new storage resources are transparently added to the pool much the same way compute resources are.

Each server node running VSAN supports up to 5 disk groups.  Each disk group can have up to 7 physical disks but must have one SSD associated with it.  The disks can be internal, or -- shortly -- external via JBOD as qualifications progress.

The SSD functions as a distributed read/write cache, and isn't used to persist data.  In this release, a single SSD is supported per disk group: 70% of SSD capacity is used to cache reads, the remaining 30% to buffer writes.  Cache writes can be protected by mirroring them across two or more nodes prior to destaging to disk.  Multi-node mirroring is also used to protect against both disk failures and node failures.
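
To make the disk-group rules concrete, here's a minimal sketch in Python -- my own illustrative model, not VSAN's actual implementation -- of a single v1 disk group with the constraints and the 70/30 cache split described above:

```python
# Illustrative model of one v1 disk group: exactly one SSD plus 1-7
# magnetic disks (up to 5 such groups per node), with the SSD split
# 70/30 between read cache and write buffer.  The SSD contributes
# cache, not persistent capacity.
MAX_HDDS_PER_GROUP = 7

class DiskGroup:
    def __init__(self, ssd_gb, hdd_gb):
        if not 1 <= len(hdd_gb) <= MAX_HDDS_PER_GROUP:
            raise ValueError("a disk group takes 1-7 magnetic disks")
        self.ssd_gb = ssd_gb          # exactly one SSD per group in v1
        self.hdd_gb = hdd_gb

    @property
    def read_cache_gb(self):
        return self.ssd_gb * 7 // 10  # 70% of the SSD caches reads

    @property
    def write_buffer_gb(self):
        return self.ssd_gb - self.read_cache_gb  # remaining 30% buffers writes

    @property
    def raw_capacity_gb(self):
        return sum(self.hdd_gb)       # only the magnetic disks persist data

group = DiskGroup(ssd_gb=400, hdd_gb=[2000] * 7)
print(group.read_cache_gb, group.write_buffer_gb, group.raw_capacity_gb)
# -> 280 120 14000
```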

VSAN presents a single, aggregated datastore across all nodes for use by VMs and their VMDKs.

Multiple policies can be implemented (redundancy, performance) within the same VSAN datastore, without the need to pre-create the familiar storage pools: gold, silver, etc.

VSAN monitors the required policies, and self-tunes as needed provided there are sufficient resources to do so: striping a data object, using more SSD cache, etc.

In this release, there are no data services specific to VSAN; all are provided by vSphere (snaps, linked clones, replication, vSphere HA, DRS, VDP) or via third-party technology partners.   Additionally, VSAN has an elegant "node evacuation" capability that allows running processes and associated storage to be relocated prior to taking a node down for maintenance or replacement.

Not all nodes in a vSphere cluster need to have local storage; diskless nodes can access the VSAN datastore over the network.
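
Here's a toy model -- names and placement logic are my own invention, and a real placement engine would also respect policy constraints -- of how the aggregated datastore, diskless nodes, and node evacuation fit together:

```python
# Hypothetical cluster model: every node's local disks are pooled into
# one shared datastore; diskless nodes consume it over the network;
# evacuating a node re-homes its data components onto the other nodes.
class Node:
    def __init__(self, name, capacity_gb=0):
        self.name = name
        self.capacity_gb = capacity_gb   # 0 => diskless: consumes, never contributes
        self.components_gb = []          # sizes of data components stored here

    def free_gb(self):
        return self.capacity_gb - sum(self.components_gb)

def datastore_capacity_gb(cluster):
    # The single VSAN datastore is simply the sum of every node's local disks.
    return sum(n.capacity_gb for n in cluster)

def evacuate(node, cluster):
    # Move each component to whichever remaining node has the most free
    # space, so this node can be taken down for maintenance.
    others = [n for n in cluster if n is not node and n.capacity_gb > 0]
    for size in list(node.components_gb):
        target = max(others, key=Node.free_gb)
        if target.free_gb() < size:
            raise RuntimeError("not enough spare capacity to evacuate")
        target.components_gb.append(size)
        node.components_gb.remove(size)

cluster = [Node("esx1", 14000), Node("esx2", 14000),
           Node("esx3", 14000), Node("esx4")]          # esx4 is diskless
cluster[0].components_gb = [500, 500]
print(datastore_capacity_gb(cluster))                  # -> 42000
evacuate(cluster[0], cluster)
print([(n.name, n.components_gb) for n in cluster])
```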

Under the covers, VSAN uses the forthcoming VVOL abstraction between VMs and storage services.  Before long, VVOLs will be more generally supported for other storage providers, but you can get an early preview of what's to come simply by taking a look at VSAN.

Surfacing Capabilities To VMs

Capabilities are what VSAN can offer, based on the physical resources available.  For example, a capability might be performance- or protection-oriented.

A policy is a useful combination of capabilities (e.g. default performance, limited data protection).

And an object is typically a VMDK and its associated snaps.

Although it might sound complicated here, provisioning is straightforward. 

- First, establish an inventory of policies based on capabilities: either by using the existing defaults, or by creating your own.
- Second, at VM provisioning time, choose the policy that matches the application requirement.
- Finally, close the loop to ensure that the chosen policy is meeting application requirements.
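
A minimal sketch of that vocabulary -- the capability names and values are illustrative assumptions, not VSAN's actual policy schema: capabilities are key/value requirements, a policy bundles them, and each object (a VMDK) is provisioned against a chosen policy.

```python
# Step 1: an inventory of policies, each a bundle of capability settings.
default_policy = {
    "failures_to_tolerate": 1,      # survive one disk or node failure
    "stripe_width": 1,              # no extra back-end striping
    "read_cache_reservation": 0.0,  # no reserved SSD read cache
    "provisioned_pct": 0,           # fully thin
}

# A hypothetical VDI-oriented policy: same defaults, more cache and striping.
vdi_policy = {**default_policy, "read_cache_reservation": 0.10, "stripe_width": 2}

# Step 2: at VM provisioning time, attach the policy matching the application.
def provision_vmdk(name, size_gb, policy):
    return {"object": name, "size_gb": size_gb, "policy": policy}

vmdk = provision_vmdk("desktop-042.vmdk", 40, vdi_policy)

# Step 3: close the loop -- check delivered service against the policy.
assert vmdk["policy"]["failures_to_tolerate"] == 1
```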

The initial list of potential capabilities is enough to cover the basics in v1.  For example, you can indicate how many node failures you can tolerate, which drives the placement of additional copies of data across more nodes.

For performance, you can reserve read cache, and specify disk striping on the node(s) to increase back-end disk speed.   For efficiency, you can specify the initial provisioning amount, and leave the rest thinly provisioned.

At provisioning time, VSAN looks at its inventory of resources across all nodes, and does a best fit.  If insufficient capabilities exist, administrators can "force provisioning" regardless.   After provisioning, VSAN will monitor the service levels delivered, and self-tune if possible.

Just to be clear: (1) one VSAN cluster can support multiple per-VM policies, and (2) there's no pre-provisioning of storage resources into "policy pools" -- all of that is done dynamically at provisioning time.
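
The placement decision itself reduces to something like the following sketch -- my simplification, assuming failures-to-tolerate N means N+1 copies on distinct nodes, as described above:

```python
# Best-fit replica placement: tolerating N failures needs N + 1 copies,
# each on a different node.  If the cluster can't satisfy the policy,
# provisioning fails unless the administrator forces it.
def place_replicas(size_gb, failures_to_tolerate, node_free_gb, force=False):
    copies_needed = failures_to_tolerate + 1
    # Prefer the nodes with the most free space, one copy per node.
    candidates = sorted(node_free_gb, key=node_free_gb.get, reverse=True)
    placed = [n for n in candidates if node_free_gb[n] >= size_gb][:copies_needed]
    if len(placed) < copies_needed and not force:
        raise RuntimeError("insufficient capabilities; force-provision to override")
    return placed   # with force=True, a partial placement is fixed up later

free = {"esx1": 900, "esx2": 200, "esx3": 1200}
print(place_replicas(300, failures_to_tolerate=1, node_free_gb=free))
# -> ['esx3', 'esx1']: two copies, two different nodes
```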

Price / Performance

The VSAN team shared a few slides to give a sense of both comparison pricing as well as performance envelopes.

While there is no substitute for doing your own homework, it does give a sense of where VSAN fits in to the broader scheme of things.

One focused use case for VSAN is virtual desktops (VDI), and the comparison here is against an all-flash array using linked clones to save capacity, on a modest three-node cluster supporting 288 VMs.

While it's not entirely clear exactly what components went into the pricing model, it is intriguing -- especially if end-user VDI performance is equivalent to an all-flash external array.

The second view is a performance view, showing raw IOPS from an 8-node VSAN cluster.

Remember that (1) these nodes aren't dedicated to storage, they support application VMs concurrently, and (2) 8 nodes is where the product is today -- the goal is for VSAN to scale to match DRS clusters.

Since I'm a storage geek, I'd like to see more specifics around block size, precise read/write mix, and sequential vs. random access -- but nonetheless I'm intrigued: it appears that there is more than enough horsepower for a wide variety of workloads.
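
To show why those details matter, some back-of-envelope math (my own numbers, not from the slides): the same raw IOPS figure implies very different throughput and front-end work depending on block size and write mix.

```python
# Raw IOPS alone doesn't tell the story: throughput scales with block
# size, and mirrored writes cost extra back-end work.  The 2x write
# penalty below is an assumed multiplier for two-way mirroring, not a
# published VSAN figure.
def throughput_mb_s(iops, block_kb):
    return iops * block_kb / 1024

def frontend_iops(backend_iops, read_fraction, write_penalty=2):
    write_fraction = 1 - read_fraction
    return backend_iops / (read_fraction + write_fraction * write_penalty)

print(throughput_mb_s(100_000, 4))    # -> 390.625 MB/s at 4 KB blocks
print(throughput_mb_s(100_000, 64))   # -> 6250.0 MB/s at 64 KB blocks
print(frontend_iops(100_000, 0.7))    # -> ~76,923 front-end IOPS at a 70/30 mix
```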

Basic Architecture -- Personal Commentary

The current restriction of 8 nodes is not much of a limitation at all, especially for a v1 storage software product.

While I'm sure there are precise upper capacity bounds, as an example, seven 2TB drives in each of five disk groups across 8 nodes gives you over half a petabyte of raw capacity in a single VSAN cluster.
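
Spelled out:

```python
# Raw capacity of a maxed-out 8-node cluster at the drive sizes above.
drives_per_group, drive_tb, groups_per_node, nodes = 7, 2, 5, 8
print(drives_per_group * drive_tb * groups_per_node * nodes)
# -> 560 TB raw, before mirroring overhead
```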

Similarly, the one-per-disk-group SSD requirement is quite reasonable: the size and type are completely your choice -- go for the 2TB model if you've got the cache.  While there are no architectural restrictions on mixing and matching different server and storage types from the HCL in a single VSAN cluster, I'd tend to go easy on this to avoid operational surprises down the road.

The design of VSAN is based around data replication and striping; there are no parity or erasure coding capabilities currently planned.

While I was originally a bit surprised at the lack of a standard storage presentation (e.g. NAS, iSCSI, etc.), I now realize that in today's VM-centric world, a VMDK storage presentation makes perfect sense, and it's certainly one that enables a better alignment between compute and storage on VM boundaries.

I can see VSAN being attractive in a number of VM-centric use cases: VDI, development and testing, as well as potentially being at the remote end of a replication pair.  Branch office and remote location deployments look especially attractive: there's no longer a need to stand up a separate physical storage array at multiple locations.

Big Ideas In Play

There are a number of bigger ideas in VSAN that I think we'll undoubtedly see elsewhere before too long.

First and foremost is the use of commodity server hardware to provide storage services.   The same server pool is used to provide both compute and storage services: one combined resource pool, one integrated administrative view, etc.  Much simpler, and likely more cost effective from both a capex and opex perspective in many cases.

VSAN being implemented as an extension to the vSphere core provides an interesting model: one where storage resources and operations are fully integrated into compute activities.   Although it's possible to bridge the compute and storage domains with plug-ins and the like, there's nothing like being designed in from the get-go.

I'll be interested to see how people react to this integration.

The idea of carving multiple storage service levels from available resources isn't a new idea; doing so dynamically at the time of provisioning without having to pre-allocate and pre-configure the pools -- well, that's a new idea.  This on-demand approach to service level provisioning will likely prove to be more popular than its predecessors.

While not in the current product, I also see an interesting opportunity to create all sorts of rich data services that are directly supported (or invoked) from the hypervisor, and fully respect VM boundaries and their associated VMDKs. 

I do think the pricing model is worth noting: it's licensed per socket, the exact same way other VMware products are licensed.  I like it because (a) it frees us from capacity-based storage pricing -- something I have never liked -- and (b) it's consistent and darn simple to understand.

Finally, there's no restriction on using traditional external storage approaches (SAN, NAS) alongside VSAN, meaning that customers can use VSAN where and when it makes sense.  Use a little, use a lot -- it's sort of an incremental feature of the operating system vs. a big storage decision.

Room For Enhancements?

Naturally, I can spot a few v2+ enhancements that I'd like to see going forward.

First, there's replication: we're going to need the same sort of powerful snaps and remote replication capabilities (including continuous replication) that we routinely get on arrays today.  Whether those are part of the VSAN product -- or provided by partners -- doesn't matter that much.

Second, it would be great to provision, invoke and monitor these same advanced data protection capabilities in the same way that VSAN is managed today -- as an application-centric extension of compute.  Better yet -- bubble this all the way up to consumption portals, such as vCD.


Third, this seamless integration model should ideally be extensible to other storage stacks: those provided by VMware, and those provided by its partners.  You'd want to avoid a situation where popular storage products are seen as second-class citizens from an integration perspective.


That's in addition to the usual stuff: more capacity, more nodes, better low-level tools, alternative protection mechanisms, and so on.

The beta -- when it's available -- will be at vsanbeta.com.  Early testers have all remarked on just how easy it is to get up and running, and the tight integration of the provisioning experience.

I'm hoping more than a few people invest the cycles to give it a serious go, and provide feedback to the team.  It's a neat storage product -- and very different in several significant ways.

Industry Reaction?

One might be tempted to think that, with VSAN, VMware is now openly competing with its traditional storage partners: NetApp, EMC, HP and others.  That would be an incorrect conclusion.

First, keep in mind that VMware competes vigorously with other infrastructure software vendors: Microsoft, Red Hat and others.

These vendors are clearly extending their storage capabilities, and VMware must certainly do the same.  Taken in this light, VMware is clearly out ahead in this regard.

On a more pragmatic note, it's not hard to see that VSAN in its present form can't really be compared to a familiar storage array in many important regards.  One could make a long list of things that familiar arrays do well that VSAN doesn't attempt to do.   And vice versa.

That being said, VSAN is a seriously interesting product in many regards, and I think it will certainly find its place in the storage marketplace before too long.

But it really doesn't matter what I think. 

What really matters is what everyone else thinks ...
