
Everywhere we look these days, data-intensive applications are growing at breakneck speed. One of the companies at the center of this development is Nvidia Corp., which has been riding high lately because of the chips it makes to power artificial intelligence.
Recently, Nvidia held an analyst briefing, where John Zedlewski, senior director of data science engineering at Nvidia, presented how the company is approaching accelerated computing. This post contains some thoughts on the walkthrough.
Nvidia was on the ground floor of accelerated computing a few decades ago. It has come a long way in that time, and the pace has picked up considerably in the last year or two. When talking about system architectures, Zedlewski made an interesting point.
“All that hardware is amazing and sometimes exotic, but it’s not successful without the software to run it,” he said. “We want to make it really easy for developers to get maximum performance out of this incredibly sophisticated hardware and make it easy not to lose performance in your application domain.”
Nvidia packages its offering into platforms such as Nvidia AI and end-to-end frameworks such as NeMo for large language models and MONAI for medical imaging, Zedlewski noted. Most people think of Nvidia as a graphics processing unit manufacturer, and while it is arguably the best in class in this area, its systems approach has kept it ahead of its competitors, Intel Corp. and Advanced Micro Devices Inc.
Nvidia packages its GPUs with software development kits, acceleration libraries, system software and hardware for a complete solution. This simplifies the process of using Nvidia technology as it becomes almost ‘plug and play’.
Zedlewski added that before training a large language model, you start by figuring out what data set you need—perhaps even something as broad as all text on the Internet—which presents massive data science and data management problems.
“If you want to do it efficiently, if you want to be able to iterate, refine and improve your data, you need a way to accelerate it so you’re not waiting months for each iteration,” he said. “We hear this from our forecasting partners all the time. They say, ‘Look, we have legacy systems that have been good at doing monthly and weekly forecasts.’”
These partners need a way to build and run models fast enough to forecast not monthly, weekly or daily, but in real time. Speed is also critical in other applications, such as fraud detection, genomics and cybersecurity, where massive amounts of data must be analyzed as events unfold. The traditional tools that data scientists use cannot keep up with the need to comb through such large data stores.
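To give a sense of what that acceleration looks like in practice, here is a minimal sketch of GPU-side feature engineering with RAPIDS cuDF, which mirrors the pandas API. The file name and column names are hypothetical, purely for illustration.

```python
# Minimal sketch: pandas-style feature engineering on the GPU with RAPIDS cuDF.
# The CSV path and column names below are hypothetical, for illustration only.
import cudf

sales = cudf.read_csv("store_sales.csv")          # loads directly into GPU memory

# Aggregate daily demand per store and product, then derive a simple feature.
daily = (
    sales.groupby(["store_id", "product_id", "date"])
         .agg({"units_sold": "sum", "price": "mean"})
         .reset_index()
)
daily["revenue"] = daily["units_sold"] * daily["price"]

print(daily.head())
```

Because the interface matches pandas, teams can often move an existing CPU pipeline to the GPU by changing little more than the import.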
Nvidia’s Triton, an open-source inference platform specializing in deep learning inference, has been extended to support many of the tree-based models that data scientists and machine learning engineers are still building across the industry.
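Triton’s tree-model support comes through its FIL backend, built on the Forest Inference Library that also ships with RAPIDS cuML. As a hedged illustration of the same capability from Python, the sketch below loads a pre-trained XGBoost model for batched GPU inference; the model file is hypothetical and the exact load() arguments vary between RAPIDS releases.

```python
# Illustrative sketch: GPU inference on a tree-based model with cuML's
# Forest Inference Library (FIL). The model file and feature count are
# hypothetical, and load() arguments differ slightly across RAPIDS releases.
import numpy as np
from cuml import ForestInference

fil_model = ForestInference.load(
    "fraud_xgboost.model",   # hypothetical pre-trained XGBoost model file
    output_class=True,       # return class labels instead of raw scores
    model_type="xgboost",
)

batch = np.random.rand(100_000, 32).astype(np.float32)  # synthetic feature batch
predictions = fil_model.predict(batch)                   # batched GPU inference
```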
“We’re seeing increasing interest in implementation frameworks that include vector search, whether it’s a large language model that has a vector search component, image search, or a recommender system,” Zedlewski told me. “So we also have accelerators for vector search in RAPIDS RAFT.”
Nvidia enables data scientists to work comfortably with datasets of hundreds of millions of rows. The company also recognizes that no one tool can do everything. So it has more than 100 integrations with open source and commercial software. Zedlewski told me that these integrations are in place to make work smooth and seamless, making it easier to build complex multi-component pipelines.
The company’s RAPIDS open-source project has 350 contributors on GitHub, which speaks to the adoption of its open-source tools. More than 25% of Fortune 500 companies use RAPIDS, and enterprise adoption is accelerating, according to Zedlewski. Companies using RAPIDS include Adobe Inc., Walmart Inc. and AstraZeneca PLC.
With a central processing unit-based model, Walmart simply couldn’t churn through enough data in its fixed nightly window to predict how many perishables should be shipped to its stores, a decision that could have significant financial consequences. So, to fit the time window, Walmart’s data scientists compromised on model quality.
That approach didn’t work, so the company became one of the first users of RAPIDS. As a result, Walmart achieved 100x faster feature engineering and 20x faster model training with RAPIDS.
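Walmart hasn’t published its pipeline at that level of detail, but the general pattern, GPU feature engineering feeding GPU model training, looks roughly like the following cuML sketch. The estimators mirror scikit-learn, and the data here is synthetic rather than anything Walmart actually uses.

```python
# Rough sketch of GPU model training with RAPIDS cuML, whose estimators mirror
# scikit-learn. The dataset is synthetic; this is not Walmart's pipeline.
import cupy as cp
from cuml.ensemble import RandomForestRegressor

# Synthetic demand-forecasting features and targets, generated on the GPU.
X = cp.random.rand(1_000_000, 20, dtype=cp.float32)
y = cp.random.rand(1_000_000, dtype=cp.float32)

model = RandomForestRegressor(n_estimators=100, max_depth=16)
model.fit(X, y)                      # training runs on the GPU
forecast = model.predict(X[:1000])   # inference on a small slice
```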
Zedlewski told me that he hears from large partners that when they experiment with adding graph features to their models, or integrate graph analysis steps into their data pipelines, it increases model accuracy, especially for fraud detection and cybersecurity.
For such a challenge, RAPIDS cuGraph can handle the pre-processing, post-processing and traditional algorithms needed for modern graph analysis. It supports graphs with more than a trillion edges, works with familiar application programming interfaces and runs as much as 85 times faster than on a CPU.
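To make that concrete, here is a minimal cuGraph sketch that builds a graph from an edge list held in a cuDF dataframe and runs PageRank. The transaction-style edge data is synthetic, and PageRank is just one example of the kind of graph feature a fraud model might consume.

```python
# Minimal sketch: building a graph from a cuDF edge list and running PageRank
# with RAPIDS cuGraph. The edge data below is synthetic, for illustration only.
import cudf
import cugraph

edges = cudf.DataFrame({
    "src": [0, 1, 1, 2, 2, 3],   # e.g. accounts sending transactions
    "dst": [1, 2, 3, 0, 3, 0],   # e.g. accounts receiving them
})

G = cugraph.Graph(directed=True)
G.from_cudf_edgelist(edges, source="src", destination="dst")

# PageRank scores could be joined back onto tabular data as graph features.
scores = cugraph.pagerank(G)
print(scores.sort_values("pagerank", ascending=False).head())
```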
The RAPIDS RAFT accelerator handles another challenging problem: sifting through hundreds of millions or even a billion pieces of content, perhaps a product, an image or a piece of text, using nearest-neighbor and approximate nearest-neighbor methods. Nvidia says this delivers 10 times higher throughput and 33 times faster index build times, and what used to consume a bank of servers can now happen on a single machine.
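RAFT itself is primarily a C++ library with Python bindings, but the shape of the workload also shows up in cuML’s nearest-neighbor estimators, which draw on the same GPU primitives in recent releases. The sketch below uses synthetic embeddings and brute-force search purely as an illustration; billion-scale deployments would rely on approximate indexes instead.

```python
# Illustrative sketch: GPU nearest-neighbor search with cuML over synthetic
# embeddings. Real vector-search deployments at billion scale would use
# approximate indexes rather than this brute-force example.
import cupy as cp
from cuml.neighbors import NearestNeighbors

corpus = cp.random.rand(1_000_000, 128, dtype=cp.float32)   # item embeddings
queries = cp.random.rand(1_000, 128, dtype=cp.float32)      # query embeddings

index = NearestNeighbors(n_neighbors=10)
index.fit(corpus)

# For each query, return the distances and ids of its ten closest items.
distances, neighbors = index.kneighbors(queries)
```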
I’m curious to see how the adoption of Nvidia RAPIDS will continue, and I look forward to more stories about new applications. I’m also interested to see whether it integrates with the Ultra Ethernet Consortium’s work, which promises to be better suited to accelerated computing and AI than InfiniBand.
I asked Nvidia about it and was told, “We share the vision that Ethernet needs to evolve in the era of AI, and our Quantum and Spectrum-X end-to-end platforms already embody these AI computing fabrics. These platforms will continue to evolve and we will support new standards that may emerge.”
That said, network vendors have been trying to displace InfiniBand with Ethernet for decades, and Ethernet has yet to replace it for high-performance workloads. Nvidia has been good at doing what’s best for the customer, so if Ultra Ethernet lives up to its promise, I’m sure the company will support it. Until then, proven InfiniBand is there.
We see rapid developments almost daily, but it is important to remember that we are at the beginning of accelerated computing. It’s kind of like the web in 1994. Let’s see where the next 30 years take us.
Zeus Kerravala is principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this article for SiliconANGLE.