Configuring OpenStack Cinder iSCSI with NGX Storage Driver

INFO
To request access to this driver, please submit a support ticket.
This document provides instructions for installing, configuring and using the NGX Storage Gen2 Block Storage iSCSI driver for OpenStack. It includes setup details, configuration steps and usage guidelines for integrating NGX Storage arrays with OpenStack Cinder.
Driver Details
- Driver module path: cinder.volume.drivers.ngxstorage.iscsi.NGXStorageISCSIDriver
- Driver version: 1.0.0
- Protocol: iSCSI
Requirements
- OpenStack Cinder services installed (api, scheduler, volume)
- On compute and volume hosts: Open-iSCSI and (optionally) multipath-tools
- Network access from cinder-volume host(s) to both NGX Storage controllers over HTTPS (default API uses HTTPS)
- NGX Storage:
- API key with sufficient privileges
- Existing Pool (name or ID)
- Existing iSCSI Portal Group (name or ID)
Repository layout highlights:
- driver/ngxstorage/ — The driver sources
- scripts/add_openstack.bash — Helper to copy the driver into a system-installed Cinder site-packages and restart services
Install the driver
Optional helper (run on the cinder-volume host):
cd scripts && ./add_openstack.bash
If you prefer manual steps, copy driver/ngxstorage into the Cinder drivers path that your environment uses (for example /usr/lib/python3/dist-packages/cinder/volume/drivers/ngxstorage) and restart cinder-volume, cinder-scheduler, and cinder-api services.
$ sudo systemctl restart cinder-volume cinder-scheduler cinder-api
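Before restarting, you can confirm the copy landed where Cinder will find it. This sanity check is a sketch (not part of the driver): it asks the system Python whether the module path from this guide resolves.

```shell
# Check that the driver module resolves with the Python that runs cinder-volume.
# "driver NOT found" means the copy step or the site-packages path is wrong.
DRIVER_STATUS=$(python3 - <<'EOF'
import importlib.util
try:
    found = importlib.util.find_spec("cinder.volume.drivers.ngxstorage.iscsi") is not None
except ModuleNotFoundError:
    found = False
print("driver found" if found else "driver NOT found")
EOF
)
echo "$DRIVER_STATUS"
```

If your deployment runs Cinder in a virtualenv or container, run the check with that interpreter instead of the system python3.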
Configure cinder.conf (backend)
Create a backend section and enable it. Key settings required by the driver:
- ngxstorage_controller_A — Controller A hostname or IP
- ngxstorage_controller_B — Controller B hostname or IP
- ngxstorage_api_key — API key for NGXStorage REST
- ngxstorage_pool_name — Pool name or ID to provision in
- ngxstorage_portal_group_name — iSCSI Portal Group name or ID
Common/optional settings:
- volume_backend_name — Logical name used for scheduler routing
- use_chap_auth — Enable CHAP; driver will attach CHAP to array-side auth groups
- chap_username, chap_password — CHAP credentials (set if use_chap_auth = true)
- suppress_requests_ssl_warnings — If true, SSL verification warning logs are suppressed (default true).
- use_multipath_for_image_xfer, enforce_multipath_for_image_xfer — Controls os-brick multipath during controller-side image transfers
Minimal backend example
[DEFAULT]
enabled_backends = ngxstorage-iscsi-1,...
[ngxstorage-iscsi-1]
volume_driver = cinder.volume.drivers.ngxstorage.iscsi.NGXStorageISCSIDriver
# Use the storage serial number to identify multiple NGX Storage devices
volume_backend_name = NGXStorage-iSCSI-<SERIAL_NUMBER_OF_STORAGE>
ngxstorage_controller_A = 192.168.1.201
ngxstorage_controller_B = 192.168.1.202
ngxstorage_api_key = <API_KEY>
ngxstorage_pool_name = <POOL_NAME>
ngxstorage_portal_group_name = <PORTAL_GROUP_NAME>
# Suppress ssl warning logs
suppress_requests_ssl_warnings = true
Full sample backend (annotated)
The following is a full sample backend with recommended options inline.
[DEFAULT]
# You can use more than one backend; list them here
# enabled_backends = ngxstorage-iscsi-1, ngxstorage-iscsi-2, ...
enabled_backends = ngxstorage-iscsi-1
[ngxstorage-iscsi-1]
# Required
volume_driver = cinder.volume.drivers.ngxstorage.iscsi.NGXStorageISCSIDriver
# Use the storage serial number to identify multiple NGX Storage devices
volume_backend_name = NGXStorage-iSCSI-<SERIAL_NUMBER_OF_STORAGE>
ngxstorage_controller_A = 192.168.1.201
ngxstorage_controller_B = 192.168.1.202
ngxstorage_api_key = <API_KEY>
ngxstorage_pool_name = <POOL_NAME>
ngxstorage_portal_group_name = <PORTAL_GROUP_NAME>
# Optional: controller-side image transfer multipath
use_multipath_for_image_xfer = true
enforce_multipath_for_image_xfer = true
# Optional (recommended)
use_chap_auth = true
chap_username = cinder-chap
chap_password = REPLACE_WITH_A_STRONG_SECRET
# Suppress ssl warning logs
suppress_requests_ssl_warnings = true
After editing cinder.conf, restart Cinder services (volume, scheduler, api). On systemd-based hosts:
sudo systemctl restart cinder-volume cinder-scheduler cinder-api
Notes:
- The driver enforces that Pool owner matches the Portal Group owner during startup.
- If use_chap_auth = true and username/password aren’t set, the driver will generate random values in-memory (not persisted). Prefer setting explicit credentials.
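Because a missing required option only surfaces when cinder-volume starts, a quick local check can save a restart cycle. The helper below is a hypothetical convenience (not shipped with the driver); it parses cinder.conf with Python's configparser and reports any required option the stanza lacks.

```shell
# Hypothetical helper: report required driver options missing from a backend stanza.
check_ngx_backend() {  # usage: check_ngx_backend <path-to-cinder.conf> <section-name>
  python3 - "$1" "$2" <<'EOF'
import configparser, sys

conf, section = sys.argv[1], sys.argv[2]
required = [
    "volume_driver", "ngxstorage_controller_A", "ngxstorage_controller_B",
    "ngxstorage_api_key", "ngxstorage_pool_name", "ngxstorage_portal_group_name",
]
cp = configparser.ConfigParser()
cp.read(conf)
if not cp.has_section(section):
    print("MISSING SECTION: " + section)
    sys.exit(0)
missing = [k for k in required if not cp.has_option(section, k)]
print("OK" if not missing else "MISSING: " + ", ".join(missing))
EOF
}

check_ngx_backend /etc/cinder/cinder.conf ngxstorage-iscsi-1
```

Note that oslo.config accepts some syntax plain configparser does not, so treat a clean result as a hint rather than proof.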
iSCSI and multipath setup for Cinder and Nova hosts
This section describes how to configure iSCSI and multipath on your nodes to use volumes from this backend. Multipath is optional but recommended for high availability and performance. The driver returns multiple target portals when multipath is detected.
1) Install required packages
- RHEL / CentOS / Rocky
$ sudo yum install -y iscsi-initiator-utils device-mapper-multipath device-mapper-multipath-libs
$ sudo mpathconf --enable --with_multipathd y
- Ubuntu / Debian / Pardus
$ sudo apt-get update
$ sudo apt-get install -y open-iscsi multipath-tools
- SLES / openSUSE
$ sudo zypper -n install open-iscsi multipath-tools
2) Tune the iSCSI initiator
Edit /etc/iscsi/iscsid.conf and set:
...
node.session.cmds_max = 256
node.session.queue_depth = 128
...
node.session.auth.chap_algs = ...,MD5
...
3) Enable and start services
Enable and start iSCSI and multipath services (if using multipath) on all nodes:
- RHEL / CentOS / Rocky / Ubuntu / Debian / Pardus / SLES / openSUSE:
$ sudo systemctl enable --now iscsid
$ sudo systemctl enable --now multipathd
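A quick way to confirm both daemons came up (systemd assumed; a service that is not installed reports unknown):

```shell
# Report the state of the iSCSI initiator and multipath daemons.
SVC_REPORT=$(for svc in iscsid multipathd; do
  state=$(systemctl is-active "$svc" 2>/dev/null)
  echo "$svc=${state:-unknown}"
done)
echo "$SVC_REPORT"
```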
4) Configure multipath for NGX Storage
Recommended /etc/multipath.conf:
defaults {
polling_interval 5
user_friendly_names yes
}
devices {
device {
vendor "NGX-IO.*"
product "NGX-IO ISCSI"
path_grouping_policy group_by_prio
prio alua
path_checker tur
failback immediate
no_path_retry queue
flush_on_last_del yes
dev_loss_tmo infinity
detect_prio yes
}
}
Reload or restart multipath after changes:
sudo systemctl reload multipathd
sudo systemctl restart multipathd
5) Configure Nova to request multipath
Edit /etc/nova/nova.conf on all compute nodes (libvirt driver):
[libvirt]
volume_use_multipath = True
volume_enforce_multipath = True
Restart Nova compute:
sudo systemctl restart nova-compute
Notes:
- These options make Nova/libvirt ask os-brick for multipath connection data. The NGX driver will return multiple target portals, IQNs, and LUNs when multipath is requested.
- Cinder’s image transfer multipath settings affect controller-side image transfers, not host-side attachments.
6) Verify multipath for an attached volume
After attaching a volume to an instance on this compute node:
sudo multipath -ll
You should see a multipath device with multiple active paths (one per portal/IP).
Read this before creating a volume: Image clone/cache workflow
This driver accelerates “Create Volume from Image” by cloning from an on-array cached image LUN and snapshot. The first time you use a given Glance image, you must seed the cache on the array; afterwards, new volumes from that image are created quickly via a fast clone from the cache.
Important
Do not create multiple volumes from an image before that image is cached on the array. Doing so will import the image repeatedly on the backend and can waste capacity and time. Always bootstrap once (seed the cache), then create further volumes which will fast‑clone from the cache.
What this means in practice:
- Bootstrap once per image: Create one initial volume from the image on this backend. The driver will automatically create an internal image LUN and snapshot on NGXStorage and use it as a cache.
- Subsequent volumes from the same image: Are created by cloning from this cached snapshot and complete much faster with minimal space usage.
- Optional pre-seed: If you prefer, an administrator can pre-seed the cache by creating a one-off “bootstrap” volume from the image. After the cache exists, the bootstrap volume may be deleted; the internal cache objects remain available for clones.
Quick bootstrap example (once the backend above and the ngx-iscsi volume type below are configured):
1. Get the ID of the Glance image to cache:
$ openstack image list
+--------------------------------------+---------------------------------+--------+
| ID | Name | Status |
+--------------------------------------+---------------------------------+--------+
| dfc1dfb0-d7bf-4fff-8994-319dd6f703d7 | cirros-0.3.5-x86_64-uec | active |
| a3867e29-c7a1-44b0-9e7f-10db587cad20 | cirros-0.3.5-x86_64-uec-kernel | active |
| 4b916fba-6775-4092-92df-f41df7246a6b | cirros-0.3.5-x86_64-uec-ramdisk | active |
| d07831df-edc3-4817-9881-89141f9134c3 | myCirrosImage | active |
+--------------------------------------+---------------------------------+--------+
2. Create a one-off bootstrap volume from the image on this backend/type:
$ openstack volume create \
--type ngx-iscsi \
--size 10 \
--image <IMAGE_ID> \
ngxseed-<IMAGE_ID>
3. (Optional) Once subsequent clones are verified fast, delete the bootstrap:
$ openstack volume delete ngxseed-<IMAGE_ID>
Notes:
- The driver manages the internal cache LUN and snapshot names/IDs automatically; you don’t need to create array objects manually.
- Cache consumes capacity on the array. Plan pool capacity accordingly for frequently used images.
- If you create a volume from an image without a pre-existing cache, the driver will create the cache on first use and then clone from it.
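If you have several frequently used images, a loop can generate one bootstrap command per image. The sketch below only prints the commands so you can review them first; the image IDs are the example IDs from the listing above, and the type name ngx-iscsi and size are assumptions to adjust for your deployment.

```shell
# Print (do not yet run) one cache-seeding 'volume create' per image ID.
# Review the output, then pipe it to sh to execute.
IMAGE_IDS="dfc1dfb0-d7bf-4fff-8994-319dd6f703d7 d07831df-edc3-4817-9881-89141f9134c3"
SEED_CMDS=$(for img in $IMAGE_IDS; do
  printf 'openstack volume create --type ngx-iscsi --size 10 --image %s ngxseed-%s\n' "$img" "$img"
done)
echo "$SEED_CMDS"
```

Remember that each volume's size must be at least the image's minimum disk size.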
Create a Volume Type and QoS Spec (vendor properties) [Optional]
The driver reads vendor controls from a QoS Spec associated with your volume type. If none is associated, it uses built-in defaults.
- Create a volume type bound to this backend:
$ openstack volume type create ngx-iscsi
$ openstack volume type set ngx-iscsi --property volume_backend_name=NGXStorage-iSCSI-<SERIAL_NUMBER_OF_STORAGE>
Note: Set volume_backend_name to exactly match the value in your backend stanza in cinder.conf (for example, NGXStorage-iSCSI-ABCD12).
- Create and associate a QoS Spec with NGX vendor keys:
1. Create the QoS spec:
$ openstack volume qos create ngx-qos
2. Set NGX vendor properties (examples shown):
$ openstack volume qos set ngx-qos \
    --property NGX:blocksize=auto \
    --property NGX:qos_priority=16 \
    --property NGX:thin_provision=on \
    --property NGX:deduplication=off \
    --property NGX:io_type=auto \
    --property NGX:compression=off \
    --property NGX:dram_cache=on \
    --property NGX:rast=on
3. Get the QoS ID:
$ QOS_ID=$(openstack volume qos show -f value -c id ngx-qos)
4. Associate it with the type:
$ openstack volume qos associate $QOS_ID ngx-iscsi
Supported NGX properties (defaults in parentheses):
- NGX:blocksize (auto): one of 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k, auto
- NGX:qos_priority (16): one of 4, 8, 16, 32, 64, 128
- NGX:thin_provision (on): on|off|auto (driver uses on/off)
- NGX:deduplication (off): on|off
- NGX:io_type (auto): sequential|transactional|auto (auto resolves based on pool media)
- NGX:compression (off): on|off
- NGX:dram_cache (on): on|off
- NGX:rast (on): on|off (NGX Real-time Automated Storage Tiering / flash cache)
Basic usage
- Create a volume on this backend:
$ openstack volume create --type ngx-iscsi --size 10 demo-volume
- Attach from a host using standard OpenStack workflows. The driver returns standard iSCSI connection properties. Ensure your compute nodes have open-iscsi (and multipath, if used) enabled and started.
- Snapshots and clones are supported (create_snapshot, create_volume_from_snapshot).
- Image workflows:
- create_volume_from_image is supported
- The driver may cache an internal image LUN and snapshot to accelerate clones on subsequent requests
- Manage/Unmanage:
- The driver implements management of existing LUNs and snapshots. Use “manageable list” and “manage” commands from the OpenStack client appropriate for your deployment.
Troubleshooting (controller/driver)
- Logs: cinder-volume service logs include ngxstorage.* entries. Example (systemd):
$ sudo journalctl -u cinder-volume -f | grep -i ngxstorage
- Common init failures:
- InvalidConfigurationValue: one of the required options (controllers, API key, pool, portal group) is missing
- Pool and Portal Group must be in the same ownership: choose a portal group that matches the selected pool’s owner
- Connectivity:
- Verify the portal group’s listen IPs are reachable from compute nodes on TCP 3260
- For CHAP: ensure the credentials in cinder.conf match your security policies
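A minimal reachability probe from a compute node, using bash's /dev/tcp (the portal IP below is a placeholder; substitute your portal group's listen IPs). For a fuller check, sudo iscsiadm -m discovery -t sendtargets -p <portal-ip>:3260 should list the target IQNs.

```shell
# Probe TCP 3260 on a portal IP (placeholder address; substitute your own).
PORTAL=192.168.1.210
if timeout 2 bash -c "exec 3<>/dev/tcp/${PORTAL}/3260" 2>/dev/null; then
  RESULT=reachable
else
  RESULT=unreachable
fi
echo "portal ${PORTAL}:3260 is ${RESULT}"
```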
- SSL:
- When suppress_requests_ssl_warnings = true (the default), SSL verification warning logs are suppressed. For strict SSL, set it to false and ensure the array presents a trusted certificate.
Capabilities summary
- Provision/Extend/Delete volumes
- Snapshots: create/delete, create volume from snapshot, revert to snapshot
- Image import/export, cloning from images
- Manage/Unmanage volumes and snapshots
- iSCSI multipath support (returns multiple portals when multipath is detected)
- No multiattach (driver advertises multiattach = False)
Please open issues or PRs on your project hosting platform for fixes and enhancements. When reporting a problem, include relevant cinder-volume logs (with ngxstorage lines) and your backend stanza (with secrets redacted).
COPYRIGHT
© 2025 NGX Teknoloji A.Ş. (NGX Storage). All rights reserved. Printed in Turkey. Specifications subject to change without notice. No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of NGX Storage. Software derived from copyrighted NGX Storage material is subject to the following license and disclaimer:
THIS SOFTWARE IS PROVIDED BY NGX Storage “AS IS” AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NGX Storage BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NGX Storage reserves the right to change any products described herein at any time, and without notice. NGX Storage assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NGX Storage. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NGX Storage.
TRADEMARK
NGX Storage and the NGX Storage logo are trademarks of NGX TEKNOLOJI A.Ş. Other company and product names may be trademarks of their respective owners.