Jan 29, 2020

Notes from the field: Implementing Storage Spaces Direct (S2D) 2019 + Lenovo + WSLab: 2. Setting up BMC/IMM

Out-of-band management for x86 servers (historically IMM) is called XClarity Controller in Lenovo's world. The default login is USERID with the password PASSW0RD (with a zero), and periodic password changes are enforced.

  • Changing IMM properties on every server one by one seems like a bad idea.
  • Not to mention that the purchase order may have been messed up, so the RAID for the boot partition (or the boot order in general) might not be set correctly from the factory; we are therefore looking for a unified way to configure everything.
Lenovo offers a great, easy-to-use, and free tool called XClarity Administrator, which you can use for:

  • Firmware updates
  • Driver updates (I never succeeded in making this work because of security requirements)
  • Health monitoring and reporting (email, SNMP, connector to Windows Admin Center...)
  • Auto-discovery
  • OS Deployments
  • Mass-configuration using templates


The last two (plus support) are licensed (around 50 EUR/server/year), but the 90-day trial period is enough for the initial configuration. On top of that, it can centrally manage identity and access.


For a small-scale deployment, you can set up the servers one by one, but here is the idea:

  • Assume all management ports of the servers and switches are connected to one dedicated management VLAN.
  • Download the appliance for Hyper-V, deploy it to the "Service Server", and connect it to the out-of-band server management VLAN (BMC/IMM/iLO, whatever the name). It is very easy to deploy, with a nice UX. Use IPv4 (static) + IPv6 (auto-config).
  • Log in via the IP you set during the process = https://[IPADDRESS]/ui/index.html
  • In XClarity Administrator, create the set of users you want to use (or, even better, connect it to an identity source such as Active Directory - and always create a backup local user, as required by disaster recovery!)
  • Instead of setting up DHCP on the management VLAN, simply use IPv6 for the initial auto-discovery (leave IPv6 auto-configuration enabled on the XClarity Administrator appliance); a quick reachability sketch follows this list.
  • Set up a naming convention for the managed servers = that is the name that will be visible on the server during start-up, in BIOS/UEFI, when accessing it via IMM, and in XClarity Administrator. I highly recommend keeping it the same everywhere, especially in Windows!
  • If needed, create a template to configure properties such as storage/RAID or the power/performance mode.
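
Before letting XClarity Administrator discover anything, it can be handy to sanity-check from the "Service Server" that the XCCs actually answer on the management VLAN. Here is a minimal PowerShell sketch; the hostnames and the 'Management' interface alias are placeholders for whatever naming convention you chose:

    # Sketch: verify each XCC/IMM answers on HTTPS before discovery.
    # Hostnames are hypothetical - substitute your own naming convention.
    $xccHosts = 'xcc-s2d-node01', 'xcc-s2d-node02', 'xcc-s2d-node03'

    foreach ($xcc in $xccHosts) {
        # Port 443 serves the XCC web UI (https://<host>/ui/index.html)
        $test = Test-NetConnection -ComputerName $xcc -Port 443 -WarningAction SilentlyContinue
        '{0}: {1}' -f $xcc, $(if ($test.TcpTestSucceeded) { 'reachable' } else { 'NOT reachable' })
    }

    # Before any IPv4/DNS exists, the XCCs show up as IPv6 link-local
    # neighbors on the management-facing NIC ('Management' is a
    # placeholder interface alias).
    Get-NetNeighbor -AddressFamily IPv6 -InterfaceAlias 'Management' |
        Where-Object State -ne 'Unreachable' |
        Select-Object IPAddress, LinkLayerAddress
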
A couple of nice features that come along:

  • Set up automatic sending of support requests so that no time is wasted calling support
  • A one-page view with all the support expirations, log downloads...
  • Specify each server's position in the rack, for remotely navigating a technician to the right machine




Jan 13, 2020

Notes from the field: Implementing Storage Spaces Direct (S2D) 2019 + Lenovo + WSLab: 1. Hardware consideration

Hardware consideration:

I am deploying a disaggregated S2D scenario, consisting of 3 clusters, each with 3 nodes.
  • 2x Compute Clusters (one with many CPU cores, the other with high-frequency CPUs), each node:
    • ThinkSystem SR630 2.5" Chassis with 8 Bays
    • 2x Intel Xeon Gold 6244 8C 150W 3.6GHz Processor
    • 24x ThinkSystem 64GB TruDDR4 2933MHz (2Rx4 1.2V) RDIMM
    • 1x ThinkSystem M.2 with Mirroring Enablement Kit
    • 2x ThinkSystem M.2 5100 240GB SATA 6Gbps Non-Hot Swap SSD
    • 2x Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port PCIe Ethernet Adapter
    • 2x ThinkSystem 750W (230/115V) Platinum Hot-Swap Power Supply
    • 1x ThinkSystem 1Gb 2-port RJ45 LOM
    • 1x ThinkSystem XClarity Controller Standard to Enterprise Upgrade
  • 1x S2D Cluster, each node:
    • ThinkAgile MX Certified Node - All Flash
    • 2x Intel Xeon Silver 4210 10C 85W 2.2GHz Processor
    • 12x ThinkSystem 32GB TruDDR4 2666MHz (2Rx4 1.2V) RDIMM
    • 3x ThinkSystem 430-8i SAS/SATA 12Gb HBA
    • 16x ThinkSystem 2.5" Intel S4510 3.84TB Entry SATA 6Gb Hot Swap SSD
    • 4x ThinkSystem U.2 Intel P4610 1.6TB Mainstream NVMe PCIe3.0 x4 Hot Swap SSD
    • 1x Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port PCIe Ethernet Adapter
    • 1x ThinkSystem 1Gb 2-port RJ45 LOM
    • 2x ThinkSystem 1100W (230V/115V) Platinum Hot-Swap Power Supply
    • 1x ThinkSystem M.2 with Mirroring Enablement Kit
    • 2x ThinkSystem M.2 5100 480GB SATA 6Gbps Non-Hot Swap SSD
    • 1x ThinkSystem XClarity Controller Standard to Enterprise Upgrade
  • Networking
    • 2x Lenovo NE2572, each:
      • 48x SFP28/SFP+ ports (25Gbit/s)
      • 6x QSFP28/QSFP+ ports (100Gbit/s)
  • Service Server
    • ThinkSystem SR630 2.5" Chassis with 8 Bays
    • 1x Intel Xeon Silver 4208 8C 85W 2.1GHz Processor
    • 4x ThinkSystem 32GB TruDDR4 2933MHz (2Rx4 1.2V) RDIMM
    • 1x ThinkSystem RAID 930-8i 2GB Flash PCIe 12Gb Adapter
    • 2x ThinkSystem 2.5" 5200 1.92TB Mainstream SATA 6Gb Hot Swap SSD
    • 1x ThinkSystem M.2 with Mirroring Enablement Kit
    • 2x ThinkSystem M.2 5100 240GB SATA 6Gbps Non-Hot Swap SSD
    • 1x ThinkSystem 1Gb 4-port RJ45 LOM
    • 2x ThinkSystem 750W (230/115V) Platinum Hot-Swap Power Supply
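
Once the nodes are racked, it is worth verifying that each S2D node actually presents the drive mix ordered above (16x SATA SSD for capacity + 4x NVMe for cache) before building anything on top of it. A minimal PowerShell sketch, run per node (or wrapped in Invoke-Command against all three):

    # Sketch: confirm the node sees the drive mix from the BOM above.
    Get-PhysicalDisk |
        Group-Object BusType, MediaType |
        Sort-Object Name |
        Format-Table Name, Count -AutoSize

    # Expected per S2D node, roughly:
    #   NVMe, SSD : 4    (P4610 1.6TB cache tier)
    #   SATA, SSD : 16   (S4510 3.84TB capacity tier)
    # Caveats: disks behind the 430-8i HBA can enumerate as SAS on some
    # firmware levels, and the two M.2 boot SSDs sit behind the mirroring
    # kit, so they surface as a single RAID1 boot device.

    # The Mellanox ConnectX-4 Lx ports should also report as RDMA-capable:
    Get-NetAdapterRdma | Format-Table Name, Enabled -AutoSize

Catching a mis-cabled backplane or a missing cache drive here is much cheaper than after the cluster exists.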


Notes from the field: Implementing Storage Spaces Direct (S2D) 2019 + Lenovo + WSLab

As you may or may not already know, the documentation for Storage Spaces Direct is just sad. While one may argue it is kept simple so that everyone can follow it, it is missing so many details that there is, in fact, a high probability you will implement it wrong if you follow it :-)

In the following blog posts, I will try to sum up the experience of deploying a disaggregated Hyper-V/S2D cluster. Most of the scripts and information are taken from the https://github.com/microsoft/WSLab project, which is just an amazing collection of information for S2D deployments.

  1. Hardware consideration
  2. Setting up BMC/IMM
  3. Bare-metal deployment with XClarity Administrator and Virtual Machine Manager 2019 
  4. Networking
  5. Performance testing