I went through the following URL https://technet.microsoft.com/en-us/library/mt126109.aspx but instead of creating a 4 node Storage Spaces Direct cluster, I decided to try and see if a 3 node cluster would work. Microsoft's documentation says they will only support Storage Spaces Direct with at least 4 servers, but I figured it couldn't hurt to try 3 nodes... and it worked!!
I did this all with virtual machines and CTP4, so I skipped the RDMA part and just set up two virtual switches: one for internal traffic and one for external. I had to add some extra steps so the guests would see the virtual hard drives as either SSDs or HDDs. I also skipped the multi-resilient disks until after testing straight virtual disks.
So I have 3 nodes. Each node has one 400GB "SSD" and one 1TB "HDD".
- Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools
- #at this point, I hot added the 3 1TB disks to the VMs (roughly as sketched above)
- Test-Cluster -Node s2dtest01,s2dtest02,s2dtest03 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"
- New-Cluster -Name s2dtest -Node s2dtest01,s2dtest02,s2dtest03 -NoStorage -StaticAddress 192.168.1.213
- #ignore warnings
- #if disaggregated deployment, ensure ClusterAndClient access w/ Get-ClusterNetwork & Get-ClusterNetworkInterface. Not needed for hyper-converged deployments.
- Enable-ClusterS2D
- #this is just for SSD and HDD configs
- #optional parameters are required for all-flash or NVMe deployments
- New-StoragePool -StorageSubSystemName s2dtest.test.local -FriendlyName pool01 -WriteCacheSizeDefault 0 -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisk (Get-StorageSubSystem -Name s2dtest.test.local | Get-PhysicalDisk)
- Get-StoragePool -FriendlyName pool01 | Get-PhysicalDisk #should see the 3 1TB disks
- Get-PhysicalDisk | Where Size -EQ 1097632579584 | Set-PhysicalDisk -MediaType HDD #set the 1TB disks to HDD type
- #I hot added the 3 400GB disks to the VMs at this point
- Get-StoragePool -IsPrimordial $False | Add-PhysicalDisk -PhysicalDisks (Get-PhysicalDisk -CanPool $True) #add new disks to pool
- Get-StoragePool -FriendlyName pool01 | Get-PhysicalDisk #should see 3 1TB disks and 3 400GB disks, for a total of 6
- Get-PhysicalDisk | Where Size -EQ 427617681408 | Set-PhysicalDisk -MediaType SSD #set the 400GB disks to SSD type
- Get-StoragePool pool01 | Get-PhysicalDisk |? MediaType -eq SSD | Set-PhysicalDisk -Usage Journal
- New-Volume -StoragePoolFriendlyName pool01 -FriendlyName vd01 -FileSystem CSVFS_ReFS -Size 1000GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1 -NumberOfColumns 1
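If you want to sanity check what got built at this point, these should do it (my own checks, not from the TechNet doc):
- Get-ClusterNetwork | FT Name,Role #confirm the cluster picked up both networks
- Get-VirtualDisk vd01 | FT FriendlyName,ResiliencySettingName,NumberOfColumns,PhysicalDiskRedundancy,HealthStatus
- Get-ClusterSharedVolume #the new volume should show up as a CSV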
#scale out file server…
- New-StorageFileServer -StorageSubSystemName s2dtest.test.local -FriendlyName sofstest -HostName sofstest -Protocols SMB
- New-SmbShare -Name share -Path C:\ClusterStorage\Volume1\share\ -FullAccess s2dtest01$,s2dtest02$,s2dtest03$,test\administrator,s2dtest$,sofstest$
- Set-SmbPathAcl -ShareName share
Now I tested. Everything continued to work when any one of the nodes died! I killed each node one at a time, and the virtual disk, the volume and the SOFS share were all still up and still accessible. 2-way mirroring worked with a 3 node S2D setup.
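If you want to watch it happen, this is the sort of thing to check while a node is down (again, my own checks):
- Get-ClusterNode #the killed node should show as Down
- Get-VirtualDisk vd01 | FT FriendlyName,OperationalStatus,HealthStatus #degraded, but still attached
- Get-StorageJob #repair jobs kick off when the node comes back
- Test-Path \\sofstest\share #the share should still answer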
Then I created a single parity space and set up an SOFS share with the following:
- New-Volume -StoragePoolFriendlyName pool01 -FriendlyName vd02 -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Parity -PhysicalDiskRedundancy 1 -NumberOfColumns 3
- New-SmbShare -Name share2 -Path C:\ClusterStorage\Volume2\share\ -FullAccess s2dtest01$,s2dtest02$,s2dtest03$,test\administrator,s2dtest$,sofstest$
- Set-SmbPathAcl -ShareName share2
It also continued to work when any one of the nodes died! I killed each node one at a time, and the virtual disk, the volume and the SOFS share were all still up and still accessible. Single parity worked with a 3 node S2D setup.
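A neat way to compare the cost of the mirror versus the parity space is the FootprintOnPool property (my own check): a 2-way mirror should consume roughly 2x its size from the pool, while single parity with 3 columns (2 data + 1 parity) should consume roughly 1.5x.
- Get-VirtualDisk | FT FriendlyName,Size,FootprintOnPool #vd01 ~2x its size, vd02 ~1.5x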
I then added another 1TB disk to each of the 3 nodes and tried to create a 2-way mirror with 2 columns, a 3-way mirror with 1 column, a 3-way mirror with 2 columns and a parity space with 6 columns.
- Get-StoragePool -IsPrimordial $False | Add-PhysicalDisk -PhysicalDisks (Get-PhysicalDisk -CanPool $True)
- Get-PhysicalDisk | Where Size -EQ 1097632579584 | Set-PhysicalDisk -MediaType HDD
- Get-StoragePool -FriendlyName pool01 | Optimize-StoragePool #rebalance the pool across the new disks
- New-Volume -StoragePoolFriendlyName pool01 -FriendlyName vd03 -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 -NumberOfColumns 1
- New-Volume -StoragePoolFriendlyName pool01 -FriendlyName vd04 -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1 -NumberOfColumns 2
- New-Volume -StoragePoolFriendlyName pool01 -FriendlyName vd05 -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 -NumberOfColumns 2
- New-Volume -StoragePoolFriendlyName pool01 -FriendlyName vd06 -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Parity -PhysicalDiskRedundancy 1 -NumberOfColumns 6
- New-SmbShare -Name share3 -Path C:\ClusterStorage\Volume3\share\ -FullAccess s2dtest01$,s2dtest02$,s2dtest03$,test\administrator,s2dtest$,sofstest$
- New-SmbShare -Name share4 -Path C:\ClusterStorage\Volume4\share\ -FullAccess s2dtest01$,s2dtest02$,s2dtest03$,test\administrator,s2dtest$,sofstest$
- Set-SmbPathAcl -ShareName share3
- Set-SmbPathAcl -ShareName share4
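Before drawing conclusions, it's worth listing what actually got created (my own check):
- Get-VirtualDisk | Sort FriendlyName | FT FriendlyName,ResiliencySettingName,NumberOfColumns,PhysicalDiskRedundancy #vd05 and vd06 should be missing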
Well, the 6 column parity did not work and neither did the 3-way mirror with 2 columns; those PowerShell commands would not take. That was somewhat expected. It appears that the resiliency options depend on the fault domains (see the sketch below). The 2-way mirror with 2 columns and the 3-way mirror with 1 column were created though, and they both continued to work through any single node failure. The 3-way mirror could not withstand a two node failure though. Perhaps it could withstand two failed disks? Something to try another day. Next I wanted to see if multi-resilient disks would work in a 3 node S2D cluster with a single parity space, so I wiped away all the virtual disks and started over.
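On the fault domain point: in S2D each server is its own fault domain, and the copies and columns of a space have to spread across them, which would explain why 6 parity columns (and the wider 3-way mirror) have nowhere to go on 3 nodes. You should be able to list the fault domains with something like this (a sketch; I didn't capture this from my lab, and I'm not certain the cmdlet is in CTP4):
- Get-StorageFaultDomain -Type StorageScaleUnit | FT FriendlyName #should list the 3 nodes
The teardown: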
- Remove-SmbShare share
- Remove-SmbShare share2
- Remove-SmbShare share3
- Remove-SmbShare share4
- Remove-VirtualDisk vd01
- Remove-VirtualDisk vd02
- Remove-VirtualDisk vd03
- Remove-VirtualDisk vd04
- New-StorageTier -StoragePoolFriendlyName pool01 -FriendlyName MT -MediaType HDD -ResiliencySettingName Mirror -NumberOfColumns 2 -PhysicalDiskRedundancy 1
- New-StorageTier -StoragePoolFriendlyName pool01 -FriendlyName PT -MediaType HDD -ResiliencySettingName Parity -NumberOfColumns 3 -PhysicalDiskRedundancy 1
- $mt = Get-StorageTier MT
- $pt = Get-StorageTier PT
- New-Volume -StoragePoolFriendlyName pool01 -FriendlyName vd01_multiresil -FileSystem CSVFS_ReFS -StorageTiers $mt,$pt -StorageTierSizes 100GB, 900GB
- New-SmbShare -Name share -Path C:\ClusterStorage\Volume1\share\ -FullAccess s2dtest01$,s2dtest02$,s2dtest03$,test\administrator,s2dtest$,sofstest$
- Set-SmbPathAcl -ShareName share
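To confirm the two tiers landed as expected, something like this should do it (my own check; the exact output may differ in CTP4):
- Get-StorageTier | FT FriendlyName,ResiliencySettingName,NumberOfColumns,PhysicalDiskRedundancy #the vd01_multiresil tiers should show up alongside the MT/PT templates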
The multi-resilient disk appeared to create fine. I tested failing each node individually and it appeared to keep working. So in conclusion, it looks like you can build a 3 node Storage Spaces Direct cluster and use multi-resilient disks!!! Granted, you can only have one node failure, but that's fine by me.
I emailed Microsoft and asked them about supporting 3 node S2D. They said to stay tuned on support of 3 node deployments… Sounds like it will be coming!