Nvidia has split its global channel chief role into two, moving the GPU giant’s top channel leader to manage relationships with global systems integrators and service delivery partners while tapping another executive to handle its remaining partners.
Alvin Da Costa was vice president of Nvidia’s global partner organization until June, when he became vice president of a newly formed group, the global consulting organization. As the new group’s leader, Da Costa is now chiefly focused on relationships with global systems integrators (GSIs) like Deloitte, Infosys, Wipro and Capgemini as well as the global expansion of service delivery partners like Quantiphi, Slalom Consulting and Data Monsters, he said.
Darrin Neil Chen, formerly the head of worldwide channels for Nvidia’s networking business, is taking over relationships with Nvidia’s remaining partners, which include solution providers like World Wide Technology and systems integrators like Mark III Systems. As the new vice president of go-to-market and operations for global partners, Chen is also now in charge of the Nvidia Partner Network program.
Craig Weinstein, Nvidia’s North America channel chief, will continue to report directly to Paul Bommarito, vice president of Americas enterprise sales, according to Da Costa. On the channel side, however, Weinstein will work more closely with Chen instead of Da Costa.
The role changes were made as Nvidia views partners as a critical way to sell its growing portfolio of components, systems, software and services for AI and other accelerated computing workloads, which has allowed the company to transform from a GPU designer to a “full-stack computing company.”
In an exclusive interview with CRN last week, Da Costa said he moved into the new role because he wanted to focus on Nvidia’s growing number of GSI and service delivery partners.
“We were able to significantly grow it to the point where now I’m building a brand-new organization around it because we see the potential with them as well,” he said.
Da Costa said because of the partner growth, it made sense to divide responsibilities between him, Chen and Geoff Fancher, who has led global distribution since 2017.
“What we’ve done is we decided to balance out the workload—it makes it easier to have three leaders as opposed to me doing everything,” he said.
Da Costa said Nvidia’s broader set of partners can benefit from the company’s expanding relationships with service delivery partners because a solution provider or systems integrator can subcontract a service delivery partner to fill gaps in services they may have for Nvidia customer deals.
“So, for example, WWT may be a very large partner that gets into a very specific deal but needs help on [Large Language Model] services. Well, they may contract Quantiphi to help them because they’re a very niche player with LLM services,” he said.
“And same thing with our GSIs. When you have a very large GSI, they can subcontract to one of these smaller service partners to help them with an implementation. So it actually benefits the whole channel across the board,” he added.
Nvidia Hunts For New Cloud-Focused Partners Amid Broader Investments
Nvidia made the organizational changes as it continues to “invest in all partner types,” which have grown “significantly over the last three years,” according to Da Costa. These investments have resulted in the company expanding its partner-facing teams.
“We’re growing our programs team. We have some head count there. We’re adding people to our operations team” because the company wants to make it easier for partners to do business with Nvidia, he said. “And then we’re growing the services organization as well, both from the GSI side as well as the [service delivery partner] side.”
The company declined to share any figures illustrating partner growth.
One area where Nvidia is looking to recruit partners is companies focused on providing IT services around cloud computing infrastructure, according to Da Costa. The company plans to identify recruits in this area by looking at who is doing significant work with major cloud service providers.
“We’re working with companies like [Amazon Web Services, Google Cloud and Microsoft Azure] to find out what are the top service partners that they would typically work with that are maybe more cloud-specific, so we’re going to be looking at on-boarding some new ones as well,” he said.
Da Costa credited Nvidia CEO Jensen Huang for giving the company’s partner organizations the resources they need to grow with the channel.
“Jensen has been a very big advocate of partners, and he’s really giving us the latitude to make our partners profitable, to make them successful,” he said.
Nvidia’s partner efforts have come a long way since the company hired veteran channel executives like Da Costa and Weinstein several years ago and started the Nvidia Partner Network program, Da Costa added.
“Nvidia is really a channel company,” he said.
Nvidia’s Channel Changes Reflect Growing Software, Cloud Focus
Nvidia has made a major push in recent years into data center software for hybrid cloud environments. This includes Nvidia AI Enterprise, which equips organizations with the software they need to develop and run AI applications, and Nvidia Omniverse, which does the same but for 3-D internet applications.
More recently, the company released the DGX Cloud supercomputing service, which gives enterprises quick access to the tools and GPU-powered infrastructure to create and run AI applications for a monthly cost starting at $36,999. The service is available on Oracle Cloud Infrastructure, and it will eventually expand to Microsoft Azure, Google Cloud and other cloud service providers.
An executive at a top Nvidia service delivery partner told CRN that the organizational changes reflect the fact that the GPU designer is increasingly focused on selling software and cloud services as a full-stack computing company—and doing so with partners represents a “whole different motion.”
This is a big change from when the Nvidia partner ecosystem was mostly focused on “delivering value-added infrastructure and services to customers at the hardware and infrastructure layer,” said Asif Hasan, co-founder at Marlborough, Mass.-based Quantiphi, which won Nvidia’s Service Delivery Partner of the Year award for the second time in a row earlier this year.
“Now, it’s at an application layer, which really needs deep industry expertise to be able to understand the problems that customers are having to solve for,” he said.
This focus on the application layer requires partners like Quantiphi to provide “advisory, application-level engineering and machine learning-level engineering services in addition to all of the hardware and infrastructure services that” other partners have historically provided, Hasan added.
As a result of Nvidia’s growing investments in software and services, Quantiphi’s business with the company has grown roughly sixfold over the past 18 to 24 months, according to Hasan.
More than half of the revenue Quantiphi makes through Nvidia is around Large Language Models, Hasan said. Other significant areas where the company makes money are around Nvidia’s software development kits for speech-to-text and text-to-speech applications as well as industry-specific SDKs like Nvidia Clara for drug discovery and MONAI for AI-assisted radiology imaging.
The main way Quantiphi makes money through these services and SDKs is the Nvidia AI Enterprise software suite, for which the chip designer charges thousands of dollars per GPU for a subscription or perpetual license in addition to hourly consumption-based pricing for cloud marketplaces.
Quantiphi also makes money through Nvidia Omniverse, which charges hundreds to thousands of dollars per user based on the user type and subscription length.
All of these services drive consumption of Nvidia’s GPUs, whether they’re being purchased in a server or rented through a cloud instance, which can help boost Quantiphi’s business too.
“The entry point of most of our end customers is more from an industry application standpoint, but that also has an effect on the number of GPUs that are being consumed in various scenarios, and we get attribution, regardless of whether that’s happening in the cloud or on-premises,” Hasan said.
Dylan Martin is a senior editor at CRN covering the semiconductor, PC, mobile device, and IoT beats. He has distinguished his coverage of the semiconductor industry thanks to insightful interviews with CEOs and top executives; scoops and exclusives about product, strategy and personnel changes; and analyses that dig into the why behind the news. He can be reached at [email protected].