It's time for enterprise applications and storage to work more closely together, even to the point where SSDs become a pool of computing power, according to Samsung Semiconductor.
The company wants industry standards for greater coordination between those elements, aiming to make data centers more efficient. In the shorter term, that could mean CPUs communicating more closely with SSDs (solid-state drives); later, SSD controllers could take on a share of application processing. The company gave no target dates for what would necessarily be a long-term effort, but it is calling on several industry groups to cooperate to make it a reality.
When HDDs (hard disk drives) were the only option, storage performance lagged behind computing and memory, so the two functions stayed separate. But the advent of various tiers of solid-state storage and memory has changed the equation, said Bob Brennan, a senior vice president at Samsung Semiconductor who leads the company's Memory Solutions Lab. There are now faster drives with more built-in computing power and faster connections, he said. Servers aren't getting as much performance out of storage as they could, he told an audience Tuesday at the Flash Memory Summit in Santa Clara, California.
Samsung is looking at two possible ways to improve that picture.
One is to have applications communicate better with storage about their needs. For example, each application should be able to tell an SSD when it needs all the performance of the SSD controller, Brennan said. With that information, the controller could put off housekeeping tasks that can be done at any time, such as "garbage collection," the background work of rearranging data on the drive so it performs well later.
"For a fully loaded SSD, if you can control garbage collection at the application level, you get about 1000x reduction in latency," or the delay in delivering bits to where they are needed for processing, Brennan said.
Farther out, better coordination could allow enterprises to tap into SSD controllers as an added computing resource in their data centers. There often is excess capacity in those processors, according to Insight64 research fellow Nathan Brookwood. Even as fast as flash media can deliver bits, the highly tuned, specialized silicon in SSD controllers can work much faster, he said.
What's more, giving computing tasks to SSD controllers could mean having those chips work with the data right under their noses in the drive, slashing the delays that come from transporting bits.
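To make the appeal of working on data in place more concrete, here is a deliberately toy sketch: filtering blocks on the host means every block crosses the bus, while pushing the same filter down to the drive's controller would move only the matches back. The in-memory FAKE_DRIVE and the "drive-side" function are pure stand-ins for illustration; no standard interface for this kind of offload exists today.

```python
BLOCK = 4096
# Simulated drive contents: 1,024 blocks of 4 KB each.
FAKE_DRIVE = [bytes([i % 256]) * BLOCK for i in range(1024)]

def host_side_scan(predicate):
    # Today's model: every block crosses the bus before the CPU filters it.
    transferred = 0
    matches = []
    for block in FAKE_DRIVE:
        transferred += len(block)      # whole block moved to host memory
        if predicate(block):
            matches.append(block)
    return matches, transferred

def drive_side_scan(predicate):
    # The idea in miniature: the controller filters in place, and only
    # matching blocks travel back to the host.
    matches = [b for b in FAKE_DRIVE if predicate(b)]
    transferred = sum(len(b) for b in matches)
    return matches, transferred

if __name__ == "__main__":
    wanted = lambda b: b[0] == 0x7F    # toy predicate: first byte matches
    _, host_bytes = host_side_scan(wanted)
    _, drive_bytes = drive_side_scan(wanted)
    print(f"host-side scan moved {host_bytes} bytes; "
          f"drive-side scan moved {drive_bytes} bytes")
```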
The big hurdle to making that work would be the very specialization that makes those controllers so well suited to their main tasks. Unlike the server chips that do the number-crunching in data centers, they aren't x86 processors.
"Everybody in the industry has a slightly different architecture for their controller," Brennan said in an interview at the event. Splitting up tasks between an x86 CPU and an SSD controller would be a far cry from distributing them across multiple chips and cores.
"It's a non-trivial problem," said Arun Taneja, founder and consulting analyst at Taneja Group. "I don't see a very clear path."
