For many years, computer scientists have used the Linpack benchmark to measure the performance of supercomputers. The results are used to compile the twice-yearly TOP500 list, which carries significant consequences for industry investment and direction. However, according to one of its developers, Jack Dongarra, professor of computer science at the University of Tennessee, Linpack may be too outdated to effectively evaluate next-generation high performance computing systems, the university's website reported. A change in measurement could have far-reaching implications for data center finance and data center design.
As supercomputing systems continue to evolve, the standards used to measure them must keep pace, the UT website reported. Linpack, which measures how efficiently a system solves dense systems of linear equations, says little about performance on more complex calculations that depend on low latency and high memory bandwidth.
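For illustration, the kind of work Linpack times can be sketched as a dense solve: Gaussian elimination marching through contiguous rows, where floating-point throughput dominates. This is a minimal stdlib-only sketch, not the actual HPL benchmark code; the function name and the tiny problem size are illustrative.

```python
def solve_dense(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.

    Linpack-style workloads are dominated by dense operations like this:
    the inner loops stream through contiguous rows, so arithmetic, not
    memory access, is the bottleneck.
    """
    n = len(A)
    # Work on copies so the caller's data is untouched.
    A = [row[:] for row in A]
    x = b[:]
    for k in range(n):
        # Partial pivoting: pick the largest remaining entry in column k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        x[k], x[p] = x[p], x[k]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            x[i] -= m * x[k]
    # Back substitution.
    for k in range(n - 1, -1, -1):
        s = sum(A[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (x[k] - s) / A[k][k]
    return x
```

The elimination step alone costs roughly 2n³/3 floating-point operations, which is why a machine tuned for Linpack is, above all, a machine tuned for raw arithmetic rate.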
"This is an important issue to address since we are seeing more applications being dominated by differential equations, and thus, each iteration of the TOP500 will show increasing gaps between real versus Linpack performance," Dongarra stated.
Dongarra and Michael Heroux, a staff member at Sandia National Laboratories in Albuquerque, N.M., are hard at work creating a new performance metric, the High Performance Conjugate Gradient (HPCG), designed to address these concerns, according to UT. They anticipate that it will be ready for use alongside the Linpack method in the next TOP500 rankings, which are scheduled for a November release. HPCG, Dongarra said, will be able to capture the computation and data access patterns used in real-life applications today. That matters for data center design, as owners and operators could otherwise be making strategic choices about access and networking informed by a metric that is inconsistent with 21st-century computing demands.
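By contrast, the conjugate gradient iteration at the heart of HPCG spends most of its time on sparse matrix-vector products and vector updates: very little arithmetic per byte moved, so memory latency and bandwidth dominate. The sketch below shows the standard conjugate gradient method for a symmetric positive-definite system; it is a minimal stdlib-only illustration of the technique, not the actual HPCG benchmark, and the names and tolerances are assumptions.

```python
def conjugate_gradient(spmv, b, iters=50, tol=1e-12):
    """Solve Ax = b for symmetric positive-definite A, given only a
    matrix-vector product `spmv` (typically sparse).

    Each iteration performs one matvec plus a handful of dot products
    and vector updates -- the low-arithmetic, bandwidth-bound pattern
    HPCG is meant to reward.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]        # residual r = b - A x (x starts at zero)
    p = r[:]        # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = spmv(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # New direction is the residual, A-conjugated against the old one.
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

Because the matrix is touched only through `spmv`, the routine's speed hinges on how fast the machine can feed irregular sparse data to the processor, not on peak floating-point rate, which is exactly the gap between Linpack numbers and real-application performance that Dongarra describes.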
"We have reached a point where designing a system for good Linpack performance can actually lead to design choices that are wrong for the real application mix, or add unnecessary components or complexity to the system," Dongarra said. "The hope is that this new rating system will drive computer system design and implementation in directions that will better impact performance improvement for real applications."
The effects of performance evaluation on data center research
Designing "future-proof" data center facilities and systems is a high priority for owners and operators, and drives investment choices for clients seeking data centers for lease that can satisfy long-term scalability objectives. Having two different metrics feed into the TOP500 rankings could create conflict, noted PCWorld contributor James Niccolai. Since HPCG and Linpack will both be used in the rankings, the two measurements could identify different supercomputers as the fastest. Additionally, HPCG will be rolled out gradually, and its universal adoption could be a long time coming. This could fuel competition in the industry, as two different organizations could each claim to operate the fastest supercomputer in the world.
"I think individuals will have to then evaluate what number makes sense for their particular mix of problems," stated Dongarra, according to Niccolai. "And over time I would hope that the new [benchmark] would carry more weight."