Gain access to post-sales support here: https://support.graidtech.com/
When a support ticket is raised, it is routed to the appropriate geographic support team, and you will receive a confirmation email. You can add comments to the case by replying ("reply all") to these emails.
If you need checksum verification on read operations, you can use SupremeRAID™ in combination with ZFS or Btrfs filesystems. These filesystems offer comprehensive features, including checksum verification, while SupremeRAID™ can offload RAID calculations and I/O operations to the GPU, resulting in a powerful and feature-rich storage solution.
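As a minimal sketch, assuming a SupremeRAID™ virtual drive has already been created and is exposed as /dev/gdg0n1 (the device name will differ depending on your configuration), a ZFS pool with checksum verification could be layered on top as follows:
$ sudo zpool create -o ashift=12 tank /dev/gdg0n1
$ sudo zfs set checksum=on tank
ZFS then verifies a checksum on every read from the pool, while SupremeRAID™ provides the parity calculation and redundancy underneath.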
Checksum verification on every read I/O is a feature some filesystems use to detect file corruption, such as the silent bit rot commonly associated with hard disks. Nevertheless, it is important to remember that the primary function of RAID is to provide disk redundancy and maintain service continuity; most contemporary RAID solutions, including traditional hardware RAID cards, do not include this feature.
Yes. Like most RAID solutions operating in write-through mode (i.e., without data caching), SupremeRAID™ provides write-hole protection by relying on a re-synchronization operation to ensure data and parity consistency after an unexpected shutdown. Because SupremeRAID™ does not use a memory cache to enhance performance, this same mechanism guarantees data consistency. SupremeRAID™'s distributed journaling feature adds further resiliency and reliability.
Physically moving the SSDs from one server to another is straightforward and is an operational requirement for some of our customers. The data and the SupremeRAID™ configuration details are stored on the drives themselves and are immediately usable when the destination server boots, provided the SupremeRAID™ software is installed, licensed, and active.
Yes, this is possible with the Linux driver. It is an active/active configuration in which each SSD is assigned to a primary SupremeRAID™ controller. If the primary controller fails, all of its SSDs are seamlessly managed by the remaining operational SupremeRAID™ controller. There is no need to set a preferred controller manually when creating a drive group because SupremeRAID™ automatically selects the optimal controller.
The process for replacing a SupremeRAID™ GPU is detailed in the user guide section "Replacing a SupremeRAID™ Card". Before doing this, please ensure you have the replacement license key on hand, as each SupremeRAID™ GPU requires a unique license key. Find the user guides here: https://docs.graidtech.com/
Yes, we provide SMTP integration so that alerts can be received by email. Alerts can be filtered by severity ("Info," "Warning," or "Error") and can include status changes of physical drives, drive groups, and virtual drives. This is available for both the Windows and Linux drivers.
With native Windows RAID5, for example, it can take 10-30 hours to create a RAID volume before it becomes usable. With SupremeRAID™ RAID5 on Windows, however, the volume is available for use immediately. The same is true when using SupremeRAID™ with Linux.
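As a rough sketch of the Linux workflow (device paths and IDs are illustrative, and the exact syntax may vary between driver versions; see the user guide), a RAID5 drive group and virtual drive can be created and put to use right away:
$ sudo graidctl create physical_drive /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
$ sudo graidctl create drive_group raid5 0,1,2,3
$ sudo graidctl create virtual_drive 0
$ sudo graidctl list virtual_drive
The resulting virtual drive can be formatted and mounted immediately, without waiting for a lengthy volume-creation step.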
To check the SMART information for a gpd device, use the nvme smart-log or smartctl command as detailed in the manual. To avoid data loss, please do not use third-party tools to manage the drives; tools such as Solidigm Storage Tool on Windows should not be used while SupremeRAID™ is managing the drives.
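For example, assuming one of the managed physical drives is exposed as /dev/gpd0 (the index will vary on your system), either of the following reads the SMART data:
$ sudo nvme smart-log /dev/gpd0
$ sudo smartctl -a -d nvme /dev/gpd0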
Once the SupremeRAID™ GPU and drivers have been installed, the license key must be applied. The license key is tied to the serial number of the GPU, so any replacement GPU will also be accompanied by a new license key. For example: $ sudo graidctl apply license [LICENSE_KEY]
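If you need to confirm the GPU serial number that the license is tied to, one way on Linux, assuming the NVIDIA driver is already installed, is:
$ nvidia-smi -q | grep -i "Serial Number"
The key itself is supplied by Graid Technology and applied with the graidctl command shown above.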
SupremeRAID™ drivers and manuals can be found here: https://docs.graidtech.com/
Buffering and caching are techniques that manage temporary data storage. While these processes introduce overhead and extra latency, they are quite beneficial for traditional hard disk drives due to the substantial latency difference between the hard disk and host memory. However, with high-speed NVMe SSDs, the additional latency can outweigh the benefits of caching, leading to performance degradation and a potential bottleneck.
SupremeRAID™'s unique architecture allows NVMe SSDs to be directly connected to the CPU. The GPU handles most of the operational control, but data transfer occurs directly between the NVMe SSDs and host memory. This direct connection allows data to be transferred at the full PCIe bandwidth of every NVMe SSD, maximizing performance.
If a hot spare is configured, a rebuild happens automatically; otherwise, the failed drive must first be physically removed and replaced. For modern enterprise-grade SSDs in a RAID group of 10 to 20 drives, the typical rebuild speed is approximately 1 GB/s, which translates to around 5 hours for a 20TB drive group. If multiple physical drives in the same drive group require rebuilding, they are rebuilt simultaneously. To change the speed at which the rebuild happens, the following command can be used: graidctl edit drive_group [DG_ID] rebuild_speed {low|normal|high}
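For example, to raise the rebuild priority on drive group 0 (substitute your own drive group ID):
$ sudo graidctl edit drive_group 0 rebuild_speed high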
For Windows installations, the dependencies are Visual C++ and the NVIDIA GPU driver. There are several software dependencies for Linux, including automake, dkms, gcc, make, tar, and sqlite-libs. Graid Technology has provided a pre-installation script that automates the installation of all these dependencies.
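If you prefer to install the Linux dependencies manually instead of running the pre-installation script, a sketch for RHEL-family distributions would be (package names shown are the RHEL/Rocky ones; Debian/Ubuntu equivalents differ, e.g. libsqlite3-0 instead of sqlite-libs):
$ sudo dnf install -y automake dkms gcc make tar sqlite-libs
Note that dkms typically comes from the EPEL repository on RHEL-family systems.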
For sales inquiries, email info@graidtech.com, or complete the form on our Contact page. Our team will contact you shortly.