The federal government, academia and industry struck a public-private partnership to give researchers tackling COVID-19 access to high performance computing resources, the White House Office of Science and Technology Policy announced Monday. Under the initiative, called the COVID-19 High Performance Computing Consortium, researchers can submit proposals to gain access to high-end computing resources.
Members of the consortium include IBM, Amazon Web Services, Google Cloud and the Department of Energy national labs, which host some of the world's most powerful supercomputers. The consortium represents 16 systems accounting for more than 330 petaflops, 775,000 CPU cores and 34,000 GPUs, Dario Gil, director of IBM Research, said in an announcement.
Already, researchers at the Oak Ridge National Laboratory — home to Summit, the world's most powerful supercomputer — and the University of Tennessee have used the computing resources to screen 8,000 compounds to determine which are "most likely to bind to the main 'spike' protein of the coronavirus, rendering it unable to infect host cells," Gil said. From the results, researchers recommended 77 small-molecule drug compounds for experiments.
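The screening step described above boils down to ranking a compound library by a predicted binding score and shortlisting the strongest candidates for lab work. A minimal sketch of that ranking logic, with entirely invented compound names and stand-in scores (the actual Summit workflow uses molecular docking simulations far beyond this illustration):

```python
# Hypothetical sketch: given predicted binding scores for a compound library
# (lower score = stronger predicted binding), rank the compounds and keep
# the top candidates for laboratory follow-up.
# All names and score values below are invented for illustration.

def top_candidates(docking_scores, n=77):
    """Return the n compound names with the lowest (strongest) predicted scores."""
    ranked = sorted(docking_scores.items(), key=lambda kv: kv[1])
    return [name for name, _ in ranked[:n]]

# Stand-in library of 8,000 compounds with arbitrary placeholder scores.
scores = {f"compound_{i}": (i * 37) % 101 for i in range(8000)}
shortlist = top_candidates(scores, n=77)
print(len(shortlist))  # 77
```

The compute-heavy part in practice is producing the scores, not sorting them; each score comes from simulating how the molecule physically interacts with the spike protein.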
Companies across sectors are dedicating resources to aid COVID-19 recovery. Manufacturers are turning production to medical supplies, and software vendors are offering free tiers of collaboration tools. One of the most valuable assets big tech can offer is high-end computing power.
It's worth taking a step back to understand what supercomputers do. A far cry from the average server farm, supercomputers represent the best of compute, networking and memory put together, Chirag Dekate, senior director analyst with Gartner, told CIO Dive. Thousands of these servers are connected and exposed as a single compute instance.
The U.S. hosts the No. 1 and No. 2 supercomputers in the world — Summit and Sierra, both built by IBM and operated by national laboratories.
Supercomputers tend to operate under public-private partnerships, where vendors pair hardware with software and operating systems for optimal computing, said Dekate. While the systems are run by research facilities, the vendors usually maintain long-term strategic tie-ins, helping to solve technical challenges or tweak the systems.
The machines are used to solve complex problems. Whether in pharmaceuticals, automotive or aerospace, companies can use supercomputers to run physics-based simulations.
To simulate which compound is right before investing resources in the manufacturing process, organizations can use supercomputers to validate hypotheses and designs, Dekate said. Especially in pharma, one simulation can span tens of thousands of nodes, running on a large array for days or weeks at a time.