Microsoft Windows Best Practices
September 2022
Version 1.1
Always check support.ngxstorage.com for the latest version of the document.
Revisions
Date | Description
May 2021 | Initial release for NGX Storage AFA and H Series version 1.8.x and 2.1.x
September 2022 | Document version 1.1 for GEN1 Software 1.8.6 and GEN2 Software 2.2.0
Table of Contents
1.1. Fiber Channel Configuration
1.3.1 Provisioning SMB/CIFS Share to Microsoft Hosts
1.3.2 Modifying SMB/CIFS Share
1.4.1 Activating MPIO on Windows Server 2019
1.4.2 Configuring MPIO on Windows Server 2019
2. Administration and Maintenance
2.4. Active Directory Integration
3.3. Realtime Automated Storage Tiering (RAST)
Executive Summary
This guide provides best practices for connecting NGX Storage to Microsoft Windows hosts. Our focus is on tuning NGX Storage features to maximize the performance and usability of the storage. The document covers the adjustments relevant to the most common use cases. After reading this guide, a storage manager or system administrator can easily prepare NGX Storage for connections from Microsoft Windows hosts via Fiber Channel, iSCSI, or CIFS/SMB.
For specific environments and use cases, please contact NGX Storage via www.ngxstorage.com to discuss the applicability to your use case.
There are several ways to present NGX Storage volumes to Windows Server hosts with Multipath I/O. In a SAN configuration, FC and iSCSI are supported; in a NAS configuration, CIFS and NFS are supported. This section explains their configurations and related issues in detail.
Fiber Channel (FC) is a high-speed data transfer protocol that provides in-order, lossless delivery of raw block data. In a Storage Area Network (SAN), FC is primarily used to connect a storage array to hosts (servers). FC networks can use either a switched fabric or a point-to-point topology; NGX Storage supports both.
In a switched fabric, SAN switches operate in unison and all devices connect to Fiber Channel switches. A switched fabric has several advantages over the point-to-point topology. First, it can scale to tens of thousands of ports, whereas point-to-point offers only limited connectivity. In addition, FC switches manage the state of the fabric and provide optimized paths via the Fabric Shortest Path First (FSPF) routing protocol. Finally, multiple pairs of ports can communicate simultaneously in a fabric, and the failure of one port does not affect the operation of the others.
NGX Storage therefore recommends choosing the FC topology according to the size of the SAN. If there are only a couple of servers and a single storage array, the point-to-point topology can reduce costs. A larger SAN, however, calls for a switched topology because it provides performance optimization and easier management. To configure the topology, follow Maintenance > Services > SAN Settings (Figure 1).
Figure 1: SAN Service Settings
After creating a LUN and choosing the appropriate topology, create a new FC target to enable the connection between the Windows Server host and NGX Storage. When creating an FC target, use a target name that reminds you which Windows Server hosts connect to it. Then choose the FC ports that are connected to the correct Windows Server hosts. Redundancy is important here: make sure the connections between the Windows Server hosts and the storage controllers are redundant for each target. Finally, map the LUNs to the target that serves the correct Windows Server hosts. To create a new target, follow SAN > FC Targets > New Target (Figure 2).
Figure 2: Creating FC Target
The number of FC ports in an FC target and the number of FC targets in a SAN depend on the system design, which varies case by case with the devices, topology, hosts, and even applications. NGX Storage recommends that customers consult NGX Storage system engineers before deciding on these values.
iSCSI (Internet Small Computer Systems Interface) is an IP-based storage networking standard that provides block-level access to storage devices by carrying SCSI commands over a TCP/IP network. While Fiber Channel provides at most 32 Gbps of bandwidth per port, NGX Storage can provide 100 Gbps per port with iSCSI. Unlike Fiber Channel, iSCSI devices communicate via IP addresses. To assign data IPs to the interfaces, follow Configuration > Network Settings.
As a best practice, NGX Storage does not recommend aggregating interfaces for iSCSI. Each interface should get its own IP address, with multipathing applied between them. The NGX Storage GUI does not allow you to assign IP addresses in the same subnet to the management and data interfaces; as a best practice, they must be in different subnets.
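The same-subnet rule can be sanity-checked before entering addresses into the GUI. The following sketch is purely illustrative (the /24 prefix and the example addresses are assumptions, not NGX defaults; substitute your actual netmasks):

```python
import ipaddress

def subnets_distinct(mgmt_ip: str, data_ip: str, prefix: int = 24) -> bool:
    """Return True when the management and data IPs fall in different subnets."""
    mgmt_net = ipaddress.ip_interface(f"{mgmt_ip}/{prefix}").network
    data_net = ipaddress.ip_interface(f"{data_ip}/{prefix}").network
    return mgmt_net != data_net

# A management network and a separate data network pass the check;
# two addresses in the same /24 would be rejected by the GUI.
print(subnets_distinct("10.0.0.10", "10.0.1.10"))   # True: different subnets
print(subnets_distinct("10.0.0.10", "10.0.0.20"))   # False: same subnet
```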
An iSCSI target needs a portal group, which consists of data IPs. Portal groups should be specified with the groups of Windows Server hosts connected to the related LUNs in mind. For example, when a cluster of Windows Server hosts works together on a specific LUN or LUNs, a portal group should serve those hosts with IPs on the same subnet. To create a new portal group, follow SAN > Portal Groups > Add Group (Figure 3).
Figure 3: New Portal Group
Network settings and portal groups are prerequisites for iSCSI targets. Once they are in place, adding an iSCSI target is simple: follow SAN > iSCSI Targets > New Target (Figure 4).
Figure 4: New iSCSI Target
At this stage, specify a suitable iSCSI target name, select the correct portal group, and assign the LUN or LUNs you want to map to these Windows Server hosts. The iSCSI targets will then be visible from the Windows Server hosts, provided the network configuration and portal groups are correct.
NGX Storage supports Microsoft NAS SMB/CIFS protocol environments. When an SMB share is created, a CIFS server runs and announces itself to SMB/CIFS hosts on all TCP/IP interfaces unless an IP restriction is in place. To provide user authentication for Microsoft hosts, CIFS users can either be created manually or taken from a Windows Active Directory domain, in which case the storage array must be a member of that domain. SMB shares can be used as home directories and to host Hyper-V and SQL Server workloads.
1.3.1 Provisioning SMB/CIFS Share to Microsoft Hosts
First, IP addresses must be assigned to the data interfaces. NGX Storage does not allow the management and data interfaces to have IP addresses in the same subnet, so the network manager needs to pick addresses for them from different IP blocks. NGX Storage recommends aggregating interfaces so that traffic fails over when any single interface is lost. To aggregate data interfaces and assign an IP address to the aggregate, follow Configuration > Network Settings. NGX Storage also provides an option for jumbo frames; however, it requires that every device between the storage and the hosts operate with jumbo frames.
After the data interfaces are aggregated and IP assignment is complete, the storage manager can create an SMB share by following Shares > Shares > New Share (Figure 5). When creating an SMB share, a Hard Quota must be defined. A Soft Quota may also be defined and must be equal to or less than the Hard Quota; the Soft Quota covers only the space occupied by data, excluding snapshots. Reserve guarantees that the given amount of space is set aside for the dataset and its snapshots; in other words, Reserve enables thick provisioning. Quota and Reserve amounts should be determined after a needs analysis.
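The quota rules above can be summarized as a small validation routine. This is a sketch of the stated constraints only (the function name and GB units are our own, not part of the NGX Storage API):

```python
def validate_share_quotas(hard_gb, soft_gb=None, reserve_gb=0.0):
    """Check the quota rules described above; return a list of problems."""
    problems = []
    if hard_gb is None or hard_gb <= 0:
        problems.append("Hard Quota must be defined and positive")
    elif soft_gb is not None and soft_gb > hard_gb:
        problems.append("Soft Quota must be equal to or less than Hard Quota")
    if reserve_gb < 0:
        problems.append("Reserve cannot be negative")
    return problems

print(validate_share_quotas(hard_gb=500, soft_gb=400, reserve_gb=100))  # [] -> valid
print(validate_share_quotas(hard_gb=500, soft_gb=600))                  # soft > hard
```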
NGX Storage recommends enabling compression in fast mode because it compresses roughly 30% faster than competing implementations without degrading performance. The default block size is 128K; as a best practice, NGX Storage recommends keeping this default. After the configuration is saved, the SMB share is announced on all data interfaces, and Windows hosts in the same subnet can discover and connect to it.
1.3.2 Modifying SMB/CIFS Share
In production, shares may require modifications such as IP restrictions, user quotas, and so on. In the NGX Storage UI, the share list appears under Shares > Shares. On the right-hand side of each entry, click Modify > Modify (Figure 6).
In this window, the quotas, reserve, and block size can be changed, and Read/Write/Execute permissions can be set for user, group, and other. The DRAM and Flash Cache options are checked by default. Compression and deduplication can be enabled; NGX Storage recommends enabling compression in fast mode for all use cases but does not recommend deduplication in production environments because of the heavy load it places on the system.
Figure 6: Edit Share
In the same modification window, click the SMB tab at the top (Figure 7). This window lets the storage manager configure the following options:
- File extension restrictions: block determined extensions, such as doc or dll, from being reached by hosts.
- Read Only: hosts can only read from the share, not write to it.
- ABE (Access Based Enumeration): during share enumeration, the share is visible only to users who have read or write access to it.
- Home Directories: the share maps to a different directory per user, to hold home directories.
- Log Operations: collects logs of SMB share activity.
- Use Only Share Name for Export Path: only the share name, without the pool name, is exported and discovered by hosts.
- WORM: activates or deactivates Write Once Read Many for the share.
By default, "All CIFS / SMB or Active Directory Users" is checked. To grant access only to specific users, uncheck this option and choose the users by moving them to the right-hand side. If there is no network access restriction, the share is announced on all data interfaces and can be discovered by all hosts in the same subnet; to restrict access to specific hosts, add their IP addresses to the restriction list.
Figure 7: Edit SMB Share
To set a usage quota on the share for a specific user or group, click Modify > Set User Quota from the share list (Figure 8). In this list, choose a user or group of users and specify a usage quota in gigabytes (GB). By default, usage is unlimited. As a best practice, NGX Storage recommends analyzing each user's or group's capacity needs and setting their usage quotas one by one.
Figure 8: Setting User Quota
NGX Storage provides highly available, high-performance storage when there are multiple paths from the storage controllers to the Windows servers. Multipathing protects the system against hardware failures such as a cable break or the failure of a controller, HBA, or switch, and it also boosts performance by aggregating multiple channels into a wider bandwidth. When one channel or component becomes unavailable, the multipathing software automatically shifts the load to one of the remaining available channels. The MPIO feature of Windows Server provides this multipathing for storage resiliency and load balancing. NGX Storage strictly recommends that the MPIO feature be activated on Windows Servers.
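The failover behavior described above can be illustrated with a toy round-robin path selector. This is a conceptual sketch only, not how Windows MPIO is implemented; the path names are made up:

```python
from itertools import cycle

class MultipathIO:
    """Round-robin over paths; failed paths are skipped automatically."""
    def __init__(self, paths):
        self.paths = list(paths)
        self.failed = set()
        self._rr = cycle(self.paths)

    def fail_path(self, path):
        """Mark a path as unavailable (cable break, HBA failure, ...)."""
        self.failed.add(path)

    def next_path(self):
        """Pick the next healthy path in round-robin order."""
        for _ in range(len(self.paths)):
            p = next(self._rr)
            if p not in self.failed:
                return p
        raise RuntimeError("no available paths")

mpio = MultipathIO(["fc-port-0", "fc-port-1"])
print(mpio.next_path())        # fc-port-0
mpio.fail_path("fc-port-0")    # e.g. a cable break on the first path
print(mpio.next_path())        # fc-port-1: the load shifts to the surviving path
```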
1.4.1 Activating MPIO on Windows Server 2019
To activate MPIO on Windows Server 2019, complete the following steps. The procedure is similar on other Windows Server versions, such as 2012 R2 and 2016; if required, you can find it in the link.
- Log in to Windows Server 2019 as an administrator.
- Start Server Manager.
- From the Dashboard, click Add Roles and Features.
- The Add Roles and Features Wizard opens. Go to the Features section and find the Multipath I/O feature in the list (Figure 9).
- Continue with the Next button, then click the Install button.
- MPIO is now activated.
Figure 9: Activating Multipath I/O
1.4.2 Configuring MPIO on Windows Server 2019
If you are using the iSCSI protocol, you should apply multipath support to iSCSI devices. For this configuration, complete the following steps on Windows Server 2019. The steps are similar on other Windows Server versions; you can find the related document from this link.
- Log in to Windows Server 2019 as an administrator.
- Start Server Manager.
- From the Tools section, click MPIO.
- On the Discover Multi-Paths tab, check Add support for iSCSI devices and click the Add button (Figure 10).
- After restarting the Windows Server, the MPIO device is listed under the MPIO Devices tab.
Figure 10: MPIO Configuration
As a best practice, NGX Storage strictly recommends activating MPIO and configuring it with the Round Robin policy; this is a must for the system to work smoothly. Follow these steps to configure MPIO with the Round Robin policy.
- From the Tools section, click Computer Management.
- In the tree view, click Storage > Disk Management.
- Find the related disk, right-click it, and select Properties.
- On the MPIO tab, choose the Round Robin MPIO policy from the drop-down menu (Figure 11).
Figure 11: MPIO Policy – Round Robin
Note: Microsoft Cluster Shared Volumes (CSV) are officially not supported by NGX Storage with software prior to 1.8.4 for the Gen1 series and 2.1.0 for the Gen2 series. In some cases it has been observed that after a path is lost (due to storage, switch, HBA, etc.), some device paths do not recover even though Microsoft's native DSM recovers the paths; this may cause a lost device or even affect cluster health. To use CSV in production, use at least software version 1.8.6 for the Gen1 series and at least 2.2.0 for the Gen2 series.
To reduce the data footprint of a volume, take advantage of compression and deduplication. They are very useful in Windows Server environments for making the most of the available storage capacity.
Deduplication eliminates duplicate copies of repeating data in FC/iSCSI LUNs and NFS shares presented to Windows Server hosts. In the deduplication process, unique chunks of data are identified and stored only once. Deduplication is valuable both for efficient utilization of storage space and for data transfer over the network (Figure 12).
NGX Storage supports deduplication for FC/iSCSI LUNs and NFS shares allocated to Windows Server hosts. However, if the data is not repetitive, deduplication is inefficient and leads to intensive background processing and a loss of performance. Therefore, we recommend analyzing your data type before activating the deduplication feature.
As a general rule, we do not recommend deduplication for VM environments or for performance-intensive applications. Instead of deduplication, you can enable compression for data reduction.
Figure 12: Deduplication
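The idea of storing each unique chunk once can be sketched with content hashing. This is a generic illustration of chunk-level deduplication, not NGX Storage's actual implementation; the chunk size is an arbitrary example:

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4096):
    """Store each unique chunk once, keyed by its SHA-256 digest."""
    store = {}    # digest -> chunk; each unique chunk is stored only once
    recipe = []   # ordered digests needed to reconstruct the original data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

# Highly repetitive data: 100 identical chunks collapse to one stored copy.
data = b"A" * 4096 * 100
store, recipe = deduplicate(data)
print(len(recipe), len(store))   # 100 logical chunks, 1 unique chunk stored
```

With non-repetitive data the store grows as fast as the input, which is why deduplication helps little there while still costing processing time.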
Compression is the process of encoding data using fewer bits than the original representation. NGX Storage supports compression in fast mode and high mode. Regardless of data type, compression in fast mode is strictly recommended: NGX Storage assures that fast-mode compression reduces data size without ever losing performance on NGX Storage arrays (Figure 13).
The benefit of high-mode compression depends on the data type. Before activating this mode, consult NGX Storage engineers to analyze the data-efficiency and performance tradeoffs.
Figure 13: Compression
Data protection in a storage array is vital. To ease and automate data protection, NGX Storage uses several protection mechanisms: snapshots and synchronous/asynchronous replication.
Snapshots make it possible to establish a recovery point objective (RPO): if a virtual machine crashes or a datastore is lost, there is a point to which the storage manager can restore the system. NGX Storage supports an unlimited number of snapshots; however, it is recommended to keep a manageable number of snapshots per LUN or share so that system performance is not reduced. What counts as manageable varies case by case with the LUN's or share's capacity.
Figure 14: Taking Snapshot
NGX Storage lets storage managers schedule snapshots hourly, daily, weekly, monthly, or yearly. If necessary, a snapshot can be cloned, which restores it while keeping the original LUN/share intact. Snapshots are a data protection mechanism, but NGX Storage does not recommend relying on snapshots alone; storage managers should take snapshots alongside other data protection techniques. Snapshots can be found under Storage Tools > Snapshot Manager (Figure 14).
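Keeping the snapshot count manageable usually means pruning old snapshots as new ones are scheduled. A minimal retention sketch (the keep-N policy and the hourly schedule are example assumptions, not NGX defaults):

```python
from datetime import datetime, timedelta

def prune_snapshots(snapshot_times, keep):
    """Keep the `keep` most recent snapshots; return the ones to delete."""
    ordered = sorted(snapshot_times, reverse=True)   # newest first
    return ordered[keep:]                            # everything older gets pruned

# 48 hourly snapshots exist; a keep-24 policy prunes the 24 oldest.
now = datetime(2022, 9, 1)
hourly = [now - timedelta(hours=h) for h in range(48)]
to_delete = prune_snapshots(hourly, keep=24)
print(len(to_delete))   # 24
```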
NGX Storage offers a replication feature to transfer data synchronously or asynchronously to one or more targets. The replication target must be a secondary NGX Storage array. If this secondary array is located at a different geographical site, it can serve as a disaster recovery center.
Figure 15: Creating Replication Target
When creating a replication target from the production site, enter a memorable target name, the target admin password, and the target IP address. Replication targets can be found under Storage Tools > Replication Targets (Figure 15).
After defining a replication target, create a replication profile that specifies the target pool, the volume to be replicated to that pool, the maximum number of remote snapshots, the bandwidth, and the sync type (Figure 16).
Throughput between two replicating arrays can reach at most 40,000 Mbit/s. NGX Storage recommends analyzing the actual need and establishing a channel with just enough bandwidth to minimize costs.
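Sizing the replication link comes down to simple arithmetic on the data volume and the line rate. A back-of-the-envelope sketch (the 80% efficiency factor is an assumption to allow for protocol overhead; adjust it for your environment):

```python
def replication_hours(dataset_gib, link_mbit_s, efficiency=0.8):
    """Rough time in hours to replicate a dataset over a link."""
    bits = dataset_gib * 1024**3 * 8                      # dataset size in bits
    rate = link_mbit_s * 1_000_000 * efficiency           # usable bits per second
    return bits / rate / 3600

# Initial sync of 10 TiB over a 10 Gbit/s link at ~80% efficiency:
print(round(replication_hours(10 * 1024, 10_000), 1))    # ~3.1 hours
```

The same arithmetic shows why the 40,000 Mbit/s ceiling rarely matters for scheduled replication: the daily change rate, not the full dataset, is what crosses the link after the initial sync.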
The sync type lets storage managers replicate near-synchronously (continuous) or asynchronously (scheduled). For example, when a disaster recovery site is established, the production site should be replicated to it synchronously. On the other hand, if a storage manager wants to transfer a data backup to a third site, the transfer can be scheduled to run asynchronously. The choice therefore needs to be made case by case.
Figure 16: Replication Profile
Thin provisioning is a mechanism that allows FC/iSCSI LUNs to be allocated to Windows Server hosts on a just-enough, just-in-time basis. It optimizes the utilization of available storage by allocating blocks of data on demand, so no unused (never-written) space remains allocated. The traditional method, called "thick" or "fat" provisioning, does not allow allocated space to be mapped to other hosts even if it is never used.
To allocate FC/iSCSI LUNs to Windows Server hosts efficiently, NGX Storage supports and recommends thin provisioning.
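The accounting difference between thin and thick provisioning can be shown in a few lines. This is an illustrative model only (the class, LUN names, and sizes are invented for the example):

```python
class ThinPool:
    """Thin provisioning: LUNs consume physical space only as blocks are written."""
    def __init__(self, physical_gib):
        self.physical_gib = physical_gib
        self.written = {}    # LUN name -> GiB actually written

    def create_lun(self, name):
        # The advertised LUN size is not reserved up front.
        self.written[name] = 0.0

    def write(self, name, gib):
        if self.used + gib > self.physical_gib:
            raise RuntimeError("pool out of physical space")
        self.written[name] += gib

    @property
    def used(self):
        return sum(self.written.values())

pool = ThinPool(physical_gib=100)
pool.create_lun("sql-data")      # could be advertised to the host as 1 TiB
pool.create_lun("hyperv-vms")
pool.write("sql-data", 30)
print(pool.used)                 # 30.0: only written data consumes the pool
```

With thick provisioning, both LUNs would have claimed their full advertised size at creation time, regardless of how much was ever written.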
To integrate NGX Storage into your Active Directory environment, follow Maintenance > Services and click the edit button of the SMB/CIFS service. The SMB/CIFS settings let the storage administrator join the storage to an existing Active Directory (Figure 17). Fill in your Active Directory information similar to the example below.
Figure 17: Active Directory Integration
As a best practice, verify that the DNS and NTP settings are correct before integrating with Active Directory.
For DNS settings, follow Configuration > Network Settings and click the first IP address (the management IP address) under the IP Addresses section. In this window, enter the correct DNS IP addresses and DNS domain.
For NTP settings, follow Maintenance > Services and click the edit button of the NTP service. To synchronize the storage's time with the rest of the system, enter the NTP server IP address.
Figure 18: Wide Striping
To deliver maximum throughput and minimum latency, NGX Storage uses a wide striping architecture. With wide striping, volumes are striped across all disks, so for a single read/write, all disks work at the same time, increasing IOPS and throughput (Figure 18).
RAID group striping is the legacy technique, which groups disks physically when a RAID group is composed; read/write operations can then draw IOPS only from the disks in that RAID group. With wide striping, by contrast, a single physical disk may participate in multiple RAID groups. Each disk in the system carries an equal share of the load, which means the system has no hot spots and no stranded capacity: the performance resources of the entire system are equally available to every I/O.
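The IOPS advantage of striping across all disks rather than one RAID group is simple proportionality. A toy calculation (the disk counts and per-disk IOPS figure are illustrative assumptions, not measured NGX numbers):

```python
def stripe_time(total_io, disks_in_stripe, per_disk_iops):
    """Time units needed to serve `total_io` operations spread over a stripe."""
    return total_io / (disks_in_stripe * per_disk_iops)

# 24-disk system, 150 IOPS per disk, 10,000 operations to serve:
raid_group = stripe_time(10_000, 8, 150)   # legacy: only the 8-disk RAID group works
wide = stripe_time(10_000, 24, 150)        # wide striping: all 24 disks share the load
print(round(raid_group / wide, 1))         # 3.0: wide striping is 3x faster here
```

The ratio is just 24/8: with every disk participating, the whole system's IOPS budget backs every volume instead of one group's slice of it.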
NGX Storage provides hybrid drive types and unified storage services, which means you can run different workloads on a single system. However, you should assign storage resources to your environment carefully. As a general rule, we always recommend flash pools for Hyper-V; avoid mechanical drives for both virtualization and performance-hungry applications. Consult the NGX Storage engineering team for performance numbers and best practices to get optimal benefit.
FC/iSCSI targets need to be mapped to LUNs under a disk pool. To create a LUN, follow SAN > LUNs > Create a LUN in the NGX Storage GUI. After defining the name and size, as a best practice choose the MS_SQL application profile for Microsoft hosts running SQL Server, and the HYPER_V profile for Microsoft hosts acting as virtualization servers. These profiles are predefined, and NGX Storage recommends not changing the automatically set QoS Priority and Block Size values, which follow the Microsoft best practice guides.
I/O Optimization is automatically set to Transactional. If a disk pool includes flash SSDs, write cache disks, and so on, it is recommended to keep the Transactional I/O optimization. If a disk pool is composed purely of rotating disk drives (HDDs), NGX Storage recommends changing the I/O Optimization to Sequential (Figure 19).
Figure 19: I/O Optimization
NGX Realtime Automated Storage Tiering (RAST) automatically moves active data to high-performance storage tiers and inactive data to slower, low-cost tiers. NGX Storage recommends always enabling this feature because it boosts the performance of the storage array for hot datasets. RAST lets the storage exploit the disks' performance to the fullest, maximizing IOPS and throughput (Figure 20).
Figure 20: Realtime Automated Storage Tiering
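The tiering decision can be pictured as a heat-based placement rule. This is a conceptual sketch, not RAST's actual algorithm; the block names, access counts, and threshold are invented for the example:

```python
def assign_tier(access_counts, hot_threshold):
    """Place blocks whose recent access count meets the threshold on the fast tier."""
    return {block: ("ssd" if hits >= hot_threshold else "hdd")
            for block, hits in access_counts.items()}

# Recent access counts per block; anything at or above 50 hits goes to SSD.
heat = {"blk-a": 120, "blk-b": 3, "blk-c": 57}
placement = assign_tier(heat, hot_threshold=50)
print(placement)   # {'blk-a': 'ssd', 'blk-b': 'hdd', 'blk-c': 'ssd'}
```

Re-running the placement as access counts change is what makes the tiering "realtime": hot data migrates up and cold data migrates down automatically.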
COPYRIGHT
© 2022 NGX Teknoloji A.Ş. (NGX Storage). All rights reserved. Printed in Turkey. Specifications subject to change without notice. No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of NGX Storage.
Software derived from copyrighted NGX Storage material is subject to the following license and disclaimer:
THIS SOFTWARE IS PROVIDED BY NGX Storage “AS IS” AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NGX Storage BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NGX Storage reserves the right to change any products described herein at any time, and without notice. NGX Storage assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NGX Storage. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NGX Storage.
TRADEMARK
NGX Storage and the NGX Storage logo are trademarks of NGX TEKNOLOJI A.Ş. Other company and product names may be trademarks of their respective owners.