DDN IME flash cache, SFA operating system get updates

DataDirect Networks (DDN) updated its Infinite Memory Engine (IME) flash cache and Storage Fusion Architecture (SFA) operating system to improve performance and resilience for higher-capacity flash storage systems.

Version 1.1 of the DDN IME NVMe-based flash cache and I/O processing system adds support for the latest Intel Xeon Phi x86 processors (code named “Knights Landing”) and 40 and 100 Gigabit Ethernet networking. IME already supported 100 Gbps InfiniBand EDR and Intel’s OmniPath Architecture.
DDN sells IME as software-only or on an appliance. The updated software lets a cluster handle a node failure non-disruptively, redistributing journaled data to the remaining nodes, and boosts metadata and flash performance. IME 1.1 also adds customer-configurable erasure coding options, offering protection against the failure of one, two or three drives or nodes.
The DDN IME software has server and client components and deploys in front of a file system or parallel file system. The IME client intercepts I/O fragments, applies erasure coding, and then delivers the fragments to IME servers. The DDN IME servers manage the flash drive pool and internal metadata and arrange the I/O for optimal performance before synchronizing the data with the backend file storage.
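As a rough illustration of the client-side flow described above — split a write into fragments, encode them, ship them to servers — here is a minimal sketch using single-parity XOR encoding. This is a simplification for illustration only: DDN's actual erasure codes protect against up to three failures and their implementation is not public.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length fragments."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(buf: bytes, k: int) -> list[bytes]:
    """Split a write buffer into k data fragments plus one XOR parity
    fragment; any single lost fragment can then be reconstructed."""
    size = -(-len(buf) // k)  # ceiling division for fragment size
    frags = [buf[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    return frags + [reduce(xor_bytes, frags)]

def rebuild(frags: list, lost: int) -> bytes:
    """Recover the fragment at index `lost` by XOR-ing the survivors."""
    return reduce(xor_bytes, (f for i, f in enumerate(frags) if i != lost))
```

For example, after `frags = encode(buf, k=3)`, losing any one of the four fragments still leaves enough information to call `rebuild` and recompute it — the same property, generalized to multiple parities, that lets an IME cluster survive drive or node failures.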

DDN IME enables faster rebuilds

DDN claims the IME improvements enable customers to rebuild 1 TB of data in less than four minutes, compared with rebuild times of roughly 2.5 hours for hard disk drives under traditional RAID.
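A quick back-of-envelope check puts those two figures in perspective as sustained rebuild throughput:

```python
# Implied rebuild throughput from the figures quoted above (1 TB = 10**12 bytes).
TB = 10**12

ime_rate = TB / (4 * 60)       # 1 TB in under 4 minutes
hdd_rate = TB / (2.5 * 3600)   # 1 TB in about 2.5 hours

print(f"IME rebuild rate: {ime_rate / 1e9:.1f} GB/s")   # ~4.2 GB/s
print(f"HDD RAID rate:    {hdd_rate / 1e9:.2f} GB/s")   # ~0.11 GB/s
print(f"speedup:          {ime_rate / hdd_rate:.0f}x")  # ~38x
```

In other words, the claim amounts to sustaining about 4 GB/s of aggregate rebuild bandwidth — plausible when the work is spread across many flash drives, but far beyond what a single HDD RAID group can deliver.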
Laura Shepard, senior director of product marketing at DDN, said the IME enhancements would help customers who are increasing the amount of storage they deploy for analytics and machine learning applications.
DDN IME 1.1 is due for general release in the third quarter.

“We’re working on availability at scale with erasure coding everywhere across the product line,” Shepard said. “Erasure coding is the data protection of choice on IME and also on our [WOS] object storage, and now we’re also adding it for our persistent file system tier on our SFA product line.”

Shepard said declustered RAID shards parity across a large pool of drives in software, rather than across the small high-availability group used in traditional RAID. DDN will start with support for the equivalent of RAID 1, 5 and 6 and release more options later, she said.
“You can have a much lower percentage of your overall capacity dedicated to parity and still have a very high level of data protection,” Shepard said of declustered RAID. “Plus, because parity can be distributed widely among a much larger number of drives than in a traditional RAID [configuration], you can rebuild much smaller bits from each drive, making the rebuilds much faster.”
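Shepard's rebuild argument can be made concrete with a toy placement model (the drive counts and stripe widths below are hypothetical, and real declustering schemes are more sophisticated than random placement):

```python
import random

def place_stripes(n_drives: int, width: int, n_stripes: int, seed: int = 0):
    """Scatter each stripe's chunks (data + parity) over a random subset
    of a large drive pool, rather than a fixed small RAID group."""
    rng = random.Random(seed)
    return [rng.sample(range(n_drives), width) for _ in range(n_stripes)]

def rebuild_load(placement, failed: int) -> dict:
    """Count the chunks each surviving drive must read to rebuild
    the stripes that touched the failed drive."""
    load = {}
    for stripe in placement:
        if failed in stripe:
            for d in stripe:
                if d != failed:
                    load[d] = load.get(d, 0) + 1
    return load

placement = place_stripes(n_drives=60, width=10, n_stripes=6000)
load = rebuild_load(placement, failed=0)
print(len(load), "drives share the rebuild")  # all 59 survivors pitch in
print(max(load.values()), "chunks read by the busiest survivor")
```

Because the failed drive's chunks are scattered over every stripe group, all 59 survivors each read only a small slice of the lost data in parallel — versus a traditional RAID group, where the handful of drives in one group must re-read everything.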
Shepard said DDN uses a technique called vertical rotation with its declustered RAID to mitigate latency. She said the system offsets writes from one drive to the next so the drive’s on-board cache is not overwhelmed.
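DDN has not published the details of vertical rotation, but the staggering idea Shepard describes can be sketched as rotating each successive stripe's starting drive, so back-to-back writes never land on the same drive first:

```python
def rotated_stripe(stripe_index: int, n_drives: int, width: int) -> list:
    """Rotate the starting drive for each successive stripe so consecutive
    writes hit different drives and no single drive's on-board cache
    absorbs back-to-back bursts. (Illustrative only; not DDN's code.)"""
    start = stripe_index % n_drives
    return [(start + i) % n_drives for i in range(width)]

for s in range(4):
    print(s, rotated_stripe(s, n_drives=10, width=4))
# 0 [0, 1, 2, 3]
# 1 [1, 2, 3, 4]
# 2 [2, 3, 4, 5]
# 3 [3, 4, 5, 6]
```

Each stripe still spans `width` drives, but the burst of writes for stripe *n+1* begins one drive over from stripe *n*, giving every drive's cache a gap to drain between hits.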
“The adaptive resilience features really help the end user tailor how performant they want their storage to be versus how much redundancy they want built in,” said Addison Snell, CEO of Intersect360 Research in Sunnyvale, Calif. “That’s just a slider bar that people can tune on their own.”

DDN steps up Lustre support

DDN also spotlighted its ExaScaler Enterprise Lustre Distribution and its work in the open source Lustre community in the wake of Intel’s April announcement that it would discontinue its commercially supported Lustre distribution.
“They’re really helping provide a landing space for the stewardship of high-performance Lustre for enterprise in a supported way,” Snell said. He said Intel offered the most significant commercially supported Lustre option, and DDN would provide “a safe haven” for high-performance-computing users that “want an actual enterprise that’s backing and providing support and contributing back to the open source community.”
