Executive Vice President, Chief Financial Officer at NVIDIA
Thanks, Simona. Q1 revenue was $7.19 billion, up 19% sequentially and down 13% year-on-year. Strong sequential growth was driven by record data center revenue, with our gaming and professional visualization platforms emerging from channel inventory corrections. Starting with data center: record revenue of $4.28 billion was up 18% sequentially and up 14% year-on-year, on strong growth of our accelerated computing platform worldwide. Generative AI is driving exponential growth in compute requirements and a fast transition to NVIDIA accelerated computing, which is the most versatile, most energy-efficient and lowest-TCO approach to train and deploy AI.
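The growth rates above imply the prior-period revenue levels; a minimal sketch of that arithmetic (the helper name is an illustrative assumption, and the reported figures are rounded):

```python
def implied_prior(current, growth_pct):
    """Back out the prior-period revenue implied by a reported growth rate."""
    return current / (1 + growth_pct / 100.0)

# Q1 revenue of $7.19B, up 19% sequentially and down 13% year-on-year,
# implies roughly $6.04B in the prior quarter and $8.26B a year ago.
prior_quarter = implied_prior(7.19, 19)
year_ago_quarter = implied_prior(7.19, -13)
```

These back-of-the-envelope figures will differ slightly from the exact reported prior-period numbers because the stated percentages are rounded.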
Generative AI drove significant upside in demand for our products, creating opportunities and broad-based global growth across our markets. Let me give you some color across our three major customer categories: cloud service providers, or CSPs; consumer Internet companies; and enterprises. First, CSPs around the world are racing to deploy our flagship Hopper and Ampere architecture GPUs to meet the surge in interest from both enterprise and consumer AI applications for training and inference. Multiple CSPs announced the availability of H100 on their platforms, including private previews at Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, upcoming offerings at AWS, and general availability at emerging GPU-specialized cloud providers like CoreWeave and Lambda.
In addition to enterprise AI adoption, these CSPs are seeing strong demand for our H100 from generative AI pioneers.
Second, consumer Internet companies are also at the forefront of adopting generative AI and deep learning-based recommendation systems, driving strong growth. For example, Meta has now deployed its H100-powered Grand Teton AI supercomputer for its AI production and research teams.
Third, enterprise demand for AI and accelerated computing is strong. We are seeing momentum in verticals such as automotive, financial services, healthcare and telecom, where AI and accelerated computing are quickly becoming integral to customers' innovation roadmaps and competitive positioning. For example, Bloomberg announced it has a 50 billion parameter model, BloombergGPT, to help with financial natural language processing tasks such as sentiment analysis, named entity recognition, news classification and question answering.
Auto insurance company CCC Intelligent Solutions is using AI for estimating repairs, and AT&T is working with us on AI to improve fleet dispatches so their field technicians can better serve customers. Among other enterprise customers using NVIDIA AI are Deloitte, for logistics and customer service, and Amgen, for drug discovery and protein engineering.
This quarter, we started shipping DGX H100, our Hopper-generation AI system, which customers can deploy on-prem. And with the launch of DGX Cloud through our partnership with Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, we deliver the promise of NVIDIA DGX to customers from the cloud. Whether customers deploy DGX on-prem or via DGX Cloud, they get access to NVIDIA AI software, including NVIDIA Base Command and AI frameworks and pretrained models. We provide them with the blueprint for building and operating AI, spanning our expertise across systems, algorithms, data processing and training methods. We also announced NVIDIA AI Foundations, which are model foundry services available on DGX Cloud that enable businesses to build, refine and operate custom large language models and generative AI models trained with their own proprietary data, created for unique domain-specific tasks. They include NVIDIA NeMo for large language models, NVIDIA Picasso for images, video and 3D, and NVIDIA BioNeMo for life sciences.
Each service has six elements: pretrained models; frameworks for data processing and curation; proprietary knowledge-base databases; systems for fine-tuning, aligning and guardrailing; optimized inference engines; and support from NVIDIA experts to help enterprises fine-tune models for their custom use cases. ServiceNow, a leading enterprise services platform, is an early adopter of DGX Cloud and NeMo. They are developing custom large language models trained on data specifically for the ServiceNow platform. Our collaboration will let ServiceNow create new enterprise-grade generative AI offerings with the thousands of enterprises worldwide running on the ServiceNow platform, including for IT departments, customer service teams, employees and developers.
Generative AI is also driving a step-function increase in inference workloads. Because of their size and complexity, these workloads require acceleration. The latest MLPerf industry benchmarks, released in April, showed NVIDIA's inference platforms deliver performance that is orders of magnitude ahead of the industry, with unmatched versatility across diverse workloads. To help customers deploy generative AI applications at scale, at GTC we announced four major new inference platforms that leverage the NVIDIA AI software stack. These include the L4 Tensor Core GPU for AI video, L40 for Omniverse and graphics rendering, H100 NVL for large language models, and the Grace Hopper Superchip for LLMs as well as recommendation systems and vector databases.
Google Cloud is the first CSP to adopt our L4 inference platform with the launch of its G2 virtual machines for generative AI inference and other workloads such as Google Cloud Dataproc, Google AlphaFold and Google Cloud Immersive Stream, which renders 3D and AR experiences. In addition, Google is integrating our Triton Inference Server with Google Kubernetes Engine and its cloud-based Vertex AI platform.
In networking, we saw strong demand from both CSPs and enterprise customers for generative AI and accelerated computing, which require high-performance networking like NVIDIA's Mellanox networking platforms. Demand related to general-purpose CPU infrastructure remained soft. As generative AI applications grow in size and complexity, high-performance networks become essential for delivering accelerated computing at data center scale to meet the enormous demand for both training and inferencing. Our 400-gig Quantum-2 InfiniBand platform is the gold standard for AI-dedicated infrastructure, with broad adoption across major cloud and consumer Internet platforms such as Microsoft Azure.
With the combination of in-network computing technology and the industry's only end-to-end, data center scale, optimized software stack, customers routinely enjoy a 20% increase in throughput for their sizable infrastructure investment. For multi-tenant clouds transitioning to support generative AI, our high-speed Ethernet platform with BlueField-3 DPUs and Spectrum-4 Ethernet switching offers the highest available Ethernet network performance. BlueField-3 is in production and has been adopted by multiple hyperscale and CSP customers, including Microsoft Azure, Oracle Cloud, [Indecipherable], Baidu and others. We look forward to sharing more about our 400-gig Spectrum-4 accelerated AI networking platform next week at the Computex conference in Taiwan.
Lastly, our Grace data center CPU is sampling with customers. At this week's International Supercomputing Conference in Germany, the University of Bristol announced a new supercomputer based on the NVIDIA Grace CPU Superchip, which is six times more energy-efficient than their previous supercomputer. This adds to the growing momentum for Grace, with both CPU-only and CPU-GPU opportunities across AI, cloud and supercomputing applications. The coming wave of BlueField-3, Grace and Grace Hopper Superchips will enable a new generation of super energy-efficient accelerated data centers.
Now let's move to gaming. Gaming revenue of $2.24 billion was up 22% sequentially and down 38% year-on-year. Strong sequential growth was driven by sales of the 40 Series GeForce RTX GPUs for both notebooks and desktops. Overall end demand was solid and consistent with seasonality, demonstrating resilience against a challenging consumer spending backdrop. GeForce RTX 40 Series GPU laptops are off to a great start, featuring four NVIDIA inventions: RTX path tracing, DLSS 3 AI rendering, Reflex ultra-low-latency rendering, and Max-Q energy-efficient technologies. They deliver tremendous gains in industrial design, performance and battery life for gamers and creators.
Like our desktop offerings, 40 Series laptops support the NVIDIA Studio platform of software technologies, including acceleration for creative, data science and AI workflows, and Omniverse, giving content creators unmatched tools and capabilities. In desktop, we ramped the RTX 4070, which joined the previously launched RTX 4090, 4080 and 4070 Ti GPUs. The RTX 4070 is nearly three times faster than the RTX 2070 and offers our large installed base a spectacular upgrade. Last week, we launched the 4060 family, the RTX 4060 and 4060 Ti, bringing our newest architecture to the world's core gamers starting at just $299. These GPUs for the first time provide two times the performance of the latest gaming console at mainstream price points. The 4060 Ti is available starting today, and the 4060 will be available in July. Generative AI will be transformative to gaming and content creation, from development to runtime. At the Microsoft Build developer conference earlier this week, we showcased how Windows PCs and workstations with NVIDIA RTX GPUs will be AI-powered [Indecipherable].
NVIDIA and Microsoft have collaborated on end-to-end software engineering, spanning from the Windows operating system to the NVIDIA graphics drivers and NeMo LLM framework, to help make Windows on NVIDIA RTX Tensor Core GPUs a supercharged platform for generative AI. Last quarter, we announced a partnership with Microsoft to bring Xbox PC games to GeForce NOW. The first game from this partnership, Gears 5, is now available, with more set to be released in the coming months. There are now over 1,600 games on GeForce NOW, the richest content available on any cloud gaming service.
Moving to pro visualization. Revenue of $295 million was up 31% sequentially and down 53% year-on-year. Sequential growth was driven by stronger workstation demand across both mobile and desktop form factors, with strength in key verticals such as public sector, healthcare and automotive. We believe the channel inventory correction is behind us. The ramp of our Ada Lovelace GPU architecture in workstations kicked off a major product cycle. At GTC, we announced six new RTX GPUs for laptop and desktop workstations, with further rollout planned in the coming quarters. Generative AI is a major new workload for NVIDIA-powered workstations. Our collaboration with Microsoft transforms Windows into the ideal platform for creators and designers harnessing generative AI to elevate their creativity and productivity.
At GTC, we announced NVIDIA Omniverse Cloud, an NVIDIA fully managed service running in Microsoft Azure that includes the full suite of Omniverse applications and NVIDIA OVX infrastructure. Using this full-stack cloud environment, customers can design, develop, deploy and manage industrial metaverse applications. NVIDIA Omniverse Cloud will be available starting in the second half of this year. Microsoft and NVIDIA will also connect Office 365 applications with Omniverse. Omniverse Cloud is being used by companies to digitalize their workflows, from design and engineering to smart factories and 3D content generation. The automotive industry has been a leading early adopter of Omniverse, including companies such as BMW Group, Geely Lotus, General Motors, and Jaguar Land Rover.
Moving to automotive. Revenue was $296 million, up 1% sequentially and up 14% from a year ago. Our strong year-on-year growth was driven by the ramp of NVIDIA DRIVE Orin across a number of new energy vehicles. As we announced in March, our automotive design-win pipeline over the next six years now stands at $14 billion, up from $11 billion a year ago, giving us visibility into continued growth over the coming years. Sequentially, growth moderated as some NEV customers in China are adjusting their production schedules to reflect slower-than-expected demand growth. We expect this dynamic to linger for the rest of the calendar year. During the quarter, we expanded our partnership with BYD, the world's leading manufacturer of NEVs. Our new design win will extend BYD's use of DRIVE Orin to its next-generation, high-volume Dynasty and Ocean series of vehicles, set to start production in calendar 2024.
Moving to the rest of the P&L. GAAP gross margins were 64.6% and non-GAAP gross margins were 66.8%. Gross margins have now largely recovered to prior peak levels, as we have absorbed higher costs and offset them by innovating and delivering higher-value products, as well as products incorporating more and more software. Sequentially, GAAP operating expenses were down 3% and non-GAAP operating expenses were down 1%. We've held opex at roughly the same level over the past four quarters while working through the inventory corrections in gaming and professional visualization. We now expect to increase investments in the business while also delivering operating leverage. We returned $99 million to shareholders in the form of cash dividends. At the end of Q1, we had approximately $7 billion remaining under our share repurchase authorization through December 2023.
Let me turn to the outlook for the second quarter of fiscal '24. Total revenue is expected to be $11 billion, plus or minus 2%. We expect this sequential growth to largely be driven by data center, reflecting a steep increase in demand related to generative AI and large language models. This demand has extended our data center visibility out a few quarters, and we have procured substantially higher supply for the second half of the year. GAAP and non-GAAP gross margins are expected to be 68.6% and 70%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $2.71 billion and $1.9 billion, respectively. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $90 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 14%, plus or minus 1%, excluding any discrete items. Capital expenditures are expected to be approximately $300 million to $350 million.
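The guided figures above are midpoint-plus-tolerance bands, some expressed as a percentage of the midpoint (revenue) and some as an absolute tolerance (gross margin, in basis points). A minimal sketch of that arithmetic (the helper name is an illustrative assumption):

```python
def band(midpoint, tolerance):
    """Return the (low, high) range for a guided midpoint plus or minus an absolute tolerance."""
    return midpoint - tolerance, midpoint + tolerance

# Revenue: $11 billion plus or minus 2% -> the tolerance is 2% of the midpoint.
revenue_low, revenue_high = band(11.0, 11.0 * 0.02)   # in $ billions
# GAAP gross margin: 68.6% plus or minus 50 basis points (0.5 percentage points).
margin_low, margin_high = band(68.6, 0.5)             # in percent
```

So guided revenue spans roughly $10.78 billion to $11.22 billion, and GAAP gross margin spans 68.1% to 69.1%.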
Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight some upcoming events. Jensen will give the Computex keynote address in person in Taipei this coming Monday, May 29th, local time, which will be Sunday evening in the US. In addition, we will be attending the BofA Global Technology Conference in San Francisco on June 6th, the Rosenblatt Virtual Technology Summit on the Age of AI on June 7th, and the New Street Future of Transportation Virtual Conference on June 12th. Our earnings call to discuss the results of our second quarter of fiscal '24 is scheduled for Wednesday, August 23rd.
Well, that covers our opening remarks. We will now open the call for questions. Operator, would you please poll for questions?