NASDAQ:ALAB Astera Labs Q2 2024 Earnings Report

Astera Labs EPS Results: Actual EPS $0.13; Consensus EPS $0.11; Beat by +$0.02; One Year Ago EPS: N/A
Astera Labs Revenue Results: Actual Revenue $76.90 million; Expected Revenue $72.41 million; Beat by +$4.49 million; QoQ Revenue Growth +17.80%
Announcement Details: Quarter Q2 2024; Date 8/6/2024; Time: After Market Closes; Conference Call: Tuesday, August 6, 2024, 4:30PM ET

Astera Labs Q2 2024 Earnings Call Transcript, provided by Quartr
August 6, 2024

There are 14 speakers on the call.

Operator (00:00:00): Thank you. I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.

Speaker 1 (00:00:08): Good afternoon, everyone, and welcome to the Astera Labs Q2 2024 Earnings Conference Call. Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President, Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations and the markets in which we operate. These forward-looking statements reflect management's current beliefs, expectations and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and the periodic reports and filings we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO. It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements.

Speaker 1 (00:01:23): In light of these risks, uncertainties and assumptions, the results, events or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call, whether as a result of new information, future events or changes in our expectations, except as required by law. Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance.
These non-GAAP measures are provided in addition to, and not as a substitute for or superior to, financial results prepared in accordance with U.S. GAAP.

Speaker 1 (00:02:15): A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website and will also be included in our filings with the SEC, which will also be accessible through the Investor Relations portion of our website. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra?

Speaker 2 (00:02:44): Thank you, Leslie. Good afternoon, everyone, and thanks for joining our Q2 conference call for fiscal 2024. AI continues to drive a strong investment cycle as entire industries look to expand their creative output and overall productivity. The velocity and dynamic nature of this investment in AI infrastructure is generating highly complex and diverse challenges for our customers. Astera Labs' intelligent and flexible connectivity solutions are developed from the ground up to navigate these fast-paced, complicated deployments.

Speaker 2 (00:03:18): We are working closely with our hyperscaler customers to help them solve these challenges across diverse AI platform architectures that feature both third-party and internally developed accelerators. In addition to these favorable secular trends, we are also benefiting from new company-specific product cycles across multiple technologies, which will also contribute to our growth in the form of higher average silicon content per AI platform. A strong leadership position and great execution by our team resulted in record revenue for Astera Labs in the June quarter, supports our strong outlook for Q3 and gives us confidence in our ability to continue outperforming industry growth rates. Astera Labs delivered strong Q2 results, setting our fourth consecutive record for quarterly revenue, with strong non-GAAP operating margin and positive operating cash flows. Our revenue in Q2 was $76.9 million, up 18% from the previous quarter and up 619% from the same period in 2023.

Speaker 2 (00:04:24): Non-GAAP operating margin was 24.4%, and we delivered $0.13 of non-GAAP diluted earnings per share. Operating cash flow generation was also strong during the quarter, coming in at $29.8 million. With continued business momentum and a broadening set of growth opportunities, we are investing in our customers by rapidly scaling the organization. During the quarter, we expanded our cloud-scale interop lab with Saewon and announced the opening of a new R&D center in India. We also announced the appointment of Bethany Mayer to our Board of Directors, bringing additional strategic leadership to the company. Today, Astera Labs is focused on three core technology standards: PCI Express, Ethernet and Compute Express Link.

Speaker 2 (00:05:11): We are shipping three separate product families supporting these different connectivity protocols, all generating revenue and in various stages of adoption. Let me touch upon our business with each of these product families and how we support them with our differentiated architecture and Cosmos software suite. Then I will turn the call over to Sanjay to delve deeper into our growth strategy. Finally, Mike will provide additional details on our Q2 results and our Q3 financial guidance. First, let's talk about PCI Express.
Speaker 2 (00:05:44): During the quarter, we saw continued strong demand for our Aries product family to drive reliable PCIe Gen 5 connectivity in AI systems by delivering robust signal integrity and link stability. While merchant GPU suppliers drove early adoption of PCIe Gen 5 into AI systems over the past year, we are now also seeing our hyperscaler customers introduce and ramp new AI server programs based upon their internally developed accelerators utilizing PCIe Gen 5. Looking ahead, AI accelerator processing power is continuing to increase at an incredible pace. The next milestone for the AI technology evolution is the commercialization of PCIe Gen 6, which doubles the connectivity bandwidth within AI servers, creating new challenges for link reach, reliability and latency. Our Aries 6 PCIe retimer family helps to solve these challenges with the next generation of our software-defined architecture, offering a seamless upgrade path from our widely deployed and field-tested Gen 5 solutions.

Speaker 2 (00:06:52): We have started shipping initial quantities of preproduction orders of our PCIe Gen 6 solution, Aries 6. These shipments support our hyperscaler customers' initial program developments that are based on NVIDIA's Blackwell platform, including GB200. We look forward to supporting more significant production ramps in the quarters to come. Next, let's talk about Ethernet. Our portfolio of Taurus Ethernet Smart Cable Modules helps relieve connectivity bottlenecks by overcoming reach, signal integrity and bandwidth issues, enabling robust 100-gig-per-lane connectivity over copper cables, or AECs.

Speaker 2 (00:07:34): Today, we are pleased to announce that our 400-gig Taurus Ethernet SCMs have shifted into volume production with an expected ramp through the back half of 2024. This ramp is happening across multiple platforms in multiple cable configurations, and we are working with multiple cable partners to support the expected volumes. Taurus will be ramping across a multitude of 400-gig applications to scale out connectivity on both AI compute platforms as well as general-purpose compute systems. We are excited about the breadth and diversity of our product design wins and expect the product family to be accretive to our corporate growth rate going forward. Next is Compute Express Link, or CXL.

Speaker 2 (00:08:20): We continue to work closely with our hyperscaler customers on a variety of use cases and applications for CXL. In Q2, we shipped material volume of our Leo products for preproduction rack-scale deployment in data centers. We expect to see data center platform architects utilize CXL technology to solve memory bandwidth and capacity bottlenecks using our Leo family of products. The initial deployments are targeting memory expansion use cases, with production ramp starting in 2025 when new CXL-capable CPUs are broadly available. Finally, I would like to spend a moment on Cosmos, which is a software platform that brings all of our product families together.

Speaker 2 (00:09:04): We have discussed how Cosmos not only runs on our chips, but also in our customers' operating stacks to deliver seamless customization, optimization and monitoring. The combination of our semiconductor and hardware solutions with Cosmos software enables our products to become the eyes and ears of connectivity infrastructure, helping fleet managers to ensure their AI and cloud infrastructure is operating at peak utilization.
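To make the fleet-management idea above concrete, here is a minimal sketch of what polling link-health telemetry into a monitoring stack might look like. All names and interfaces below are invented for illustration; Astera Labs has not published a public Cosmos API, so treat this purely as a mental model of device-level link telemetry, not the company's actual software.

```python
# Illustrative only: hypothetical names, not a real Astera Labs / Cosmos API.
# Sketch of a fleet manager polling link-health telemetry from devices in the
# data path and flagging links that have degraded before they cause downtime.
from dataclasses import dataclass

@dataclass
class LinkTelemetry:
    device_id: str
    link_width: int          # negotiated lane count, e.g. 16
    link_speed_gts: float    # negotiated rate in GT/s, e.g. 32.0 for Gen 5
    correctable_errors: int  # recoverable errors since the last poll

def is_degraded(t: LinkTelemetry, want_width: int = 16,
                want_speed: float = 32.0, err_budget: int = 100) -> bool:
    # A link that trained down in width or speed, or that is steadily
    # accumulating correctable errors, is a candidate for retraining or service.
    return (t.link_width < want_width
            or t.link_speed_gts < want_speed
            or t.correctable_errors > err_budget)

# Example with stubbed readings; a real stack would query each device's
# management interface instead of hard-coding values.
fleet = [
    LinkTelemetry("gpu0-riser", 16, 32.0, 3),
    LinkTelemetry("gpu1-riser", 8, 32.0, 0),   # trained down to x8
]
for t in fleet:
    if is_degraded(t):
        print(f"alert: {t.device_id} degraded "
              f"(width={t.link_width}, errors={t.correctable_errors})")
```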
By improving the efficiency of their data centers, our customers are able to generate higher ROI and reduce downtime. To summarize: sustained secular trends in AI adoption, design wins across diverse AI platforms at hyperscalers featuring both third-party and internally developed accelerators, and increasing average dollar content in next-generation GPU-based AI platforms give us confidence in our ability to outperform industry growth rates. With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss some of our recent product announcements and our long-term growth strategy.

Speaker 3 (00:10:12): Thanks, Jitendra, and good afternoon, everyone.

Speaker 3 (00:10:15): We are pleased with our robust Q2 results and strong top-line outlook for Q3, but we are even more excited about the volume and breadth of opportunities that lie ahead. Today, I will focus on five growth vectors that we believe will help us to grow our business faster than industry growth rates over the long term. First, Astera Labs is in a unique position with design wins across diverse AI platform architectures, featuring both third-party and internally developed accelerators. This diversity gives us multiple paths to grow our business. This hybrid approach of using third-party and internally developed accelerators allows hyperscalers to optimize their fleet to support unique workload requirements and infrastructure limitations, while also improving capital investment efficiency.

Speaker 3 (00:11:14): Our intelligent connectivity platform, with its flexible software-based architecture, enables portability and seamless reuse between platforms while creating growth opportunities for all our product families. In addition to the third-party GPU platforms, we also expect to see several large deployments based on internally developed AI accelerators hitting production volume over the next few quarters and driving incremental PCIe and Ethernet volumes for us. Second, we see increasing content on next-generation AI platforms. NVIDIA's Blackwell GPU architecture is particularly exciting for us, as we expect to see strong growth opportunities based on our design wins as hyperscalers compose solutions based on Blackwell GPUs, including GB200, across their data center infrastructure. To support various AI workloads, infrastructure challenges, software, power and cooling requirements, we expect multiple deployment variants for this new GPU platform.

Speaker 3 (00:12:33): For example, NVIDIA cited 100 different configurations for Blackwell in their most recent earnings call. This growing trend of complexity and diversity presents an exciting opportunity for Astera Labs, as our flexible silicon architecture and Cosmos software suite can be harnessed to customize the connectivity backbone for a diverse set of deployment scenarios. Overall, we expect our business to benefit from the Blackwell introduction with higher average dollar content of our products per GPU, driven by a combination of increasing volumes and higher ASPs. The next growth vector is the broadening applications and use cases for our Aries product family. Aries is in its third generation now and represents the gold standard for PCIe retimers in the industry.

Speaker 3 (00:13:33): The introduction of the new Aries 6 retimers, built upon the company's widely deployed and battle-tested PCIe Gen 5 retimers, and the industry transition to PCIe Gen 6 will be a catalyst for increasing PCIe retimer content for Astera.
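For context on why the Gen 5 to Gen 6 transition is such a catalyst, the per-lane transfer rate doubles with each PCIe generation. The short calculation below uses the published per-lane rates; the x16 figures are raw, one-direction bandwidth before encoding and protocol overhead, so real-world throughput is somewhat lower.

```python
# Approximate PCIe per-lane rates and raw x16 one-direction bandwidth.
# Gen 6 doubles Gen 5's rate by moving to PAM4 signaling at the same Nyquist
# frequency, which is what tightens the signal-integrity budget and drives
# retimer content in these systems.
RATES_GTS = {"Gen4": 16, "Gen5": 32, "Gen6": 64}
for gen, gts in RATES_GTS.items():
    raw_gb_per_s = gts * 16 / 8   # GT/s ~ Gb/s per lane; 16 lanes; /8 -> GB/s
    print(f"{gen}: {gts} GT/s per lane, ~{raw_gb_per_s:.0f} GB/s raw per direction (x16)")
```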
Our learnings from hundreds of design wins and production deployments over the last several years enable us to quickly deploy PCIe Gen 6 technology at scale. As Jitendra noted, we are now shipping initial quantities of preproduction volume for Aries 6 and currently have meaningful backlog in place to support the initial deployment of hyperscaler AI servers featuring NVIDIA's Blackwell GPUs, including GB200. We're also very excited about the incremental PCIe connectivity market expansion that will be driven by multi-rack GPU clustering. Similar to the dynamic within the Ethernet AEC business, the reach limitations of passive PCIe copper cables are a bottleneck for the number of GPUs that can be clustered together.

Speaker 3 (00:14:53): Our purpose-built Aries Smart Cable Modules solve these issues by providing robust signal integrity and link stability over materially longer distances, improving rack airflow while actively monitoring and optimizing link health. This PCIe AEC opportunity is in the early stages of adoption and deployment, and we view the multi-rack GPU clustering application as a new and growing market opportunity for our Aries product family. In June, we announced the industry's first demonstration of end-to-end PCIe optical connectivity to provide unprecedented reach for larger GPU clusters. We are proud to broaden our PCIe leadership once again by demonstrating robust PCIe links over optical interconnects between GPUs, CPUs, CXL memory devices and other PCIe endpoints. This breakthrough expands our intelligent connectivity platform to allow customers to seamlessly scale and extend high-bandwidth, low-latency PCIe interconnects over optics.

Speaker 3 (00:16:14): Overall, we expect our Aries PCIe retimer business to deliver strong growth as system complexity, platform diversity and speeds continue to increase and, on average, result in higher retimer content per GPU in next-generation AI platforms. Next, in addition to the strong growth prospects of our Aries product family across the PCIe ecosystem, we're also seeing our Taurus product family for Ethernet AEC applications start to meaningfully contribute to growth in the back half of 2024. What is exciting about these ramps is the diversity in applications and use cases. We are seeing demand for our Taurus product family for both AI and general compute platforms. We are supporting the market with multiple cable configurations, including straight, Y and X cables.

Speaker 3 (00:17:20): We will be shipping volume into hyperscaler build-outs, supporting multiple cable vendors to enable a diverse supply chain that is crucial for hyperscalers. Overall, we are very excited about Taurus becoming yet another engine of growth as we look to expand the top line while also diversifying our product family contributions. Last but not least, CXL is an important technology to solve memory bandwidth and capacity bottlenecks in compute platforms. We are working closely with our hyperscaler partners to demonstrate various use cases for this technology and are starting to deploy our Leo CXL controllers in preproduction racks in data centers. We have incorporated the learnings, customization and security requirements into our Cosmos software and have the most robust cloud-ready CXL solution in the industry.

Speaker 3 (00:18:26): We have demonstrated that our Leo CXL Smart Memory Controllers improve application performance and reduce TCO in compute platforms.
Very importantly, we can accomplish many of these performance gains with zero application-level software changes or upgrades. Overall, we remain very excited about the potential of CXL in data center applications. Finally, our close collaboration and front-row seat with hyperscalers and AI platform providers continue to yield valuable insights regarding the direction of compute technologies and the connectivity topologies that will be required to support them. This close collaboration is helping us identify new product and business opportunities and additional engagement models across our entire intelligent connectivity platform, which we believe will drive strong long-term growth for Astera.

Speaker 3 (00:19:34): With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q2 financial results and our Q3 outlook.

Speaker 4 (00:19:43): Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q2 financial results and Q3 guidance will be on a non-GAAP basis. The primary difference in Astera Labs' non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today's press release, available on the Investor Relations section of our website, for more details on both our GAAP and non-GAAP Q3 financial outlook, as well as a reconciliation of the GAAP to non-GAAP financial measures presented on this call. For Q2 of 2024, Astera Labs delivered record quarterly revenue of $76.9 million, which was up 18% from the previous quarter and 619% higher than the revenue in Q2 of 2023.

Speaker 4 (00:20:36): During the quarter, we shipped products to all major hyperscalers and AI accelerator manufacturers. We recognized revenue across all three of our product families during the quarter, with the Aries product family being the largest contributor, benefiting from continued momentum in AI-based platforms. In Q2, Taurus revenues continued to primarily ship into 200-gig Ethernet-based systems, and we expect Taurus revenue to now diversify further as we begin to ship volume into 400-gig Ethernet-based systems in the third quarter. Q2 Leo revenues were largely from customers purchasing preproduction volumes for the development of their next-generation CXL-capable compute platforms, with our customers' production launch timing being dependent on the data center server CPU refresh cycle. Q2 non-GAAP gross margin was 78%, down 20 basis points compared to 78.2% in Q1 of 2024 and better than our guidance of 77%.

Speaker 4 (00:21:43): Non-GAAP operating expenses for Q2 were $41.2 million, up from $35.2 million in the previous quarter and consistent with our guidance. Within non-GAAP operating expenses, R&D expense was $27.1 million, sales and marketing expense was $6.3 million, and general and administrative expense was $7.8 million. Non-GAAP operating margin for Q2 was 24.4%. Interest income in Q2 was $10.3 million. Our non-GAAP tax provision was $6.8 million for the quarter, which represents a tax rate of 23% on a non-GAAP basis. Non-GAAP fully diluted share count for Q2 was 175.3 million shares, and our non-GAAP diluted earnings per share for the quarter was $0.13. Cash flow from operating activities for Q2 was $29.8 million, and we ended the quarter with cash, cash equivalents and marketable securities of just over $830 million. Now turning to our guidance for Q3 of fiscal 2024. We expect Q3 revenue to increase to within a range of $95 million to $100 million, up roughly 24% to 30% sequentially from the prior quarter.
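As a quick consistency check, the Q2 figures reported above tie out arithmetically: applying the stated gross margin, operating expenses, interest income, tax provision and share count reproduces the reported 24.4% non-GAAP operating margin and $0.13 of diluted EPS, with small differences due to rounding.

```python
# Recompute Q2 FY2024 non-GAAP results from the figures stated on the call.
revenue   = 76.9e6
gross     = revenue * 0.78            # 78% non-GAAP gross margin
opex      = 41.2e6                    # non-GAAP operating expenses
op_income = gross - opex
op_margin = op_income / revenue       # reported as 24.4%
pretax    = op_income + 10.3e6        # plus interest income
net       = pretax - 6.8e6            # less non-GAAP tax provision (~23%)
eps       = net / 175.3e6             # non-GAAP fully diluted shares
print(f"op margin {op_margin:.1%}, EPS ${eps:.2f}")  # -> op margin 24.4%, EPS $0.13
```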
Speaker 4 (00:23:06): We believe our Aries product family will continue to be the largest component of revenue and will be the primary driver of sequential growth in Q3, driven by growing volume deployment within our customers' AI servers. We also expect our Taurus family to drive solid growth quarter over quarter as design wins within new 400-gig Ethernet-based systems ramp into volume production. We expect non-GAAP gross margin to be approximately 75%. The sequential decline in gross margin is being driven by an expected product mix shift towards hardware solutions during the quarter. We expect non-GAAP operating expenses to be in the range of approximately $46 million to $47 million as we remain aggressive in expanding our R&D resource pool across headcount and intellectual property.

Speaker 4 (00:23:57): Interest income is expected to be approximately $10 million. Our non-GAAP tax rate should be approximately 20%, and our non-GAAP fully diluted share count is expected to be approximately 177 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share in a range of approximately $0.16 to $0.17. This concludes our prepared remarks. And once again, we are very much appreciative of everyone joining the call. And now we will open the line for questions. Operator?

Operator (00:24:33): Thank you. We'll take our first question from Harlan Sur at JPMorgan.

Speaker 5 (00:24:44): Good afternoon. Thanks for taking my question. Congratulations on the strong results. During the quarter, there were lots of concerns around your large GPU customer and one of their next-generation GPU SKUs, the GB200. Glad that the team could clarify that your dollar content across all Blackwell GPU SKUs is actually rising versus the prior-generation Hopper. But as you guys mentioned, AI ASIC accelerator mix is rapidly rising and, we believe, actually outgrowing GPUs both this year and next year, accounting for something like 50% of the XPU mix next year, right?

Speaker 5 (00:25:22): And with ASICs, it's 100% PCIe based. And as you pointed out, right, many of these ASIC customers are still in the early stages of the ramp. So given all of this, and given some of the new product ramps with your AEC solutions, what's the team's visibility and confidence level on driving continued quarter-on-quarter growth from here, maybe over the next several quarters?

Speaker 2 (00:25:49): Harlan, thank you so much for the question. It's great to be in the place that we are here today. We feel very confident about what is to come. Clearly, we don't guide more than one quarter out, so please don't take this as any guidance. But we really believe that we are in the early innings of AI here.

Speaker 2 (00:26:09): All of the hyperscalers are increasing their CapEx targets for the rest of the year. 2025 is expected to be even higher. We heard that the Llama model required 10 times more compute. So all of these trends are basically driving a shift in technology. We are seeing, as you correctly pointed out, a lot of our hyperscaler customers ramp their internally developed AI accelerators in addition to deploying third-party AI accelerators.

Speaker 2 (00:26:39): And we are very pleased that we have design wins across all of these different platforms. Our customers are ramping their platforms, and we are ramping multiple product families. As Sanjay mentioned, both Aries and Taurus are ramping into these new platforms.
So we feel very good about what is in store for the future and feel that, with the rising content on a per-GPU basis, we'll be able to outpace the market growth in the long term.

Speaker 5 (00:27:07): I appreciate that. And on top of the strong AI demand trends, on top of the new product ramps that you guys articulated today, one thing I haven't baked into my model is the penetration of your retimer technology into general-purpose servers, right? And the good news is that we're finally starting to see the flash vendors aggressively bringing Gen 5 PCIe SSDs to the market, which could potentially unlock some retimer opportunities in general-purpose servers, where the Gen 5 retimer content today is still zero. So what's the team's outlook? Do you see that there may be some penetration starting in 2025 of your retimer solution into general-purpose servers, and maybe could you size that potential opportunity for us?

Speaker 3 (00:28:01): Yes, absolutely, Harlan. Sanjay here. Good to hear your voice. Yes, so in general, that's a correct statement. We have several design wins on the compute side.

Speaker 3 (00:28:12): Just for reasons like you highlighted, either SSDs not being Gen 5 ready or dollars being sort of sucked away into the AI platforms, there has been slower-than-expected growth on the general compute side. But at some point, like we keep saying, the servers are going to fall off the racks, given how long they have been in the fleet. So we do expect general compute to start picking up, especially as both AMD and Intel get to production with the Turin- and Granite Rapids-based CPUs. So overall, in 2025, we do expect that the compute platform will start contributing meaningful revenue growth. Like I noted, we do have design wins already in these platforms for Aries retimers.

Speaker 3 (00:29:04): But also, I would like to add that we do have design wins for our Taurus Ethernet module applications in general compute as well. So we should see sort of a two-engine growth story, with general compute going along with all of the things we shared on AI, both for third-party or merchant GPUs as well as the big change that we're seeing now, the ramp in the internally developed accelerators, those being a meaningful and significant driver for our growth.

Speaker 5 (00:29:36): Thank you.

Operator (00:29:41): We'll move next to Joe Moore at Morgan Stanley.

Speaker 6 (00:29:45): Great. Thank you. I wonder if you could talk about the competitive dynamics within PCIe Gen 5 retimers. A number of people have qualified solutions, in China, in the U.S. Are you seeing any encroachment there?

Speaker 6 (00:29:59): And then in terms of PCIe Gen 6, can you talk about the prospects for when you start to see volume there?

Speaker 3 (00:30:07): Absolutely. Let me take that, Joe. So overall, this is a big and growing market. I think that fact is clear. I mean, the fact that you have larger names jumping into the mix sort of validates the market that the retimer represents.

Speaker 3 (00:30:26): Now, a couple of points to keep in mind is that connectivity products, especially PCI Express, tend to have a certain nuance to them, which is the fact that we are the device in the middle. We're always in between the GPU, storage, networking and so on. And interoperation, especially at high-volume, cloud-scale deployment, becomes critical.
So what we have done in the last three, four years is really work shoulder to shoulder with our hyperscaler and AI platform providers to ensure that the interoperation is met. The platform-level deployment, whether it is diagnostics, telemetry or firmware management, is all addressed, including with the Cosmos software that we provide for fleet management and diagnostic types of capability.

Speaker 3 (00:31:23): Those all have been integrated into our customers' operating stacks. So in general, the picture I'm trying to paint here is that the tribal knowledge that we have built, the penetration that we have, not just with the silicon but also software, does give us a significant advantage compared to our competitors. Now having said that, we will continue to work hard. We have several design wins for PCIe Gen 6, like we shared in today's call, that are all designed around the next-generation GPU platforms, specifically the Blackwell-based GPUs from NVIDIA, which are publicly noted to support Gen 6. So we'll continue to work through them.

Speaker 3 (00:32:09): We are currently shipping preproduction volume to support some of the initial ramps, including for GB200-based platforms. So overall, we feel good about the position that we are in, both in terms of Gen 5 as well as transitioning those designs into Gen 6 as the platforms develop and grow.

Speaker 6 (00:32:36): Great. Thank you.

Operator (00:32:40): We'll move next to Blayne Curtis at Jefferies LLC.

Speaker 7 (00:32:45): Hey, thanks for taking my question. I just want to ask you, in terms of the September outlook, you talked about meaningful revenue from AECs. I mean, I think the other point was that the gross margin guide was because of mix, which I'm assuming is because of that ramp. But just trying to size it, I know you don't break out the segments, but can you kind of just give us some broad strokes as to how much of the growth is coming from retimers, or Aries, in September?

Speaker 4 (00:33:09): Yes. Hey, Blayne. The margins will come down to the extent we sell more hardware versus silicon. So Taurus is definitely one of those drivers. Also, we do modules on the Aries side, and we're seeing growth in both of those.

Speaker 4 (00:33:25): So when you look at the growth guidance we're giving in Q3, you have the contribution from Taurus, you have the incremental modules on Aries, but also we're seeing a lot of growth just from Aries Gen 5 going into AI servers and a lot of new platforms, and the platforms generally are getting more content per platform. So when you look at the growth, I think it's kind of balanced between those three drivers largely.

Speaker 7 (00:33:53): Got you. Thanks. And then I want to ask on the Gen 6 adoption moving from preproduction to production. The main GPU in the market supports Gen 6. I think the CPUs that would talk Gen 6 are going to be a bit of a ways away.

Speaker 7 (00:34:08): So I'm just kind of curious about the catalyst there. Do you expect Gen 6 to be in the market next year even if there aren't CPUs that speak Gen 6?

Speaker 2 (00:34:19): Yeah, Blayne, that's a great observation. Let me say that as these compute platforms get more and more powerful to address these growing AI models, the only way to keep them fed, to keep these GPUs utilized, is to get more and more data in and out of these platforms. So in the past, the CPU played a very central role in terms of being the conduit for all of this information.
But with the new accelerated compute architecture, the CPU is largely an orchestration or control engine; for the most part, you know, it does do a few other things.

Speaker 2 (00:34:53): But in general, you are trying to get the data in and out of the GPU using the scale-out and scale-up networks that are made up of, you know, either PCI Express, Ethernet or enabling protocols. And as these protocols go faster and faster, we end up seeing more and more demand for the products that we have. And as a result, as these new systems get deployed, we see higher content for us on a per-GPU basis. And it's largely to improve GPU utilization through these increased data rates.

Speaker 8 (00:35:26): Thanks so much.

Speaker 3 (00:35:28): And then if I can add one more point, you didn't quite directly ask this about the September quarter growth, but I do want to be abundantly clear on one point, which is that the growth we are forecasting for the September quarter is based upon not just preproduction sampling, but all of the additional production ramps that we are seeing, for both the third-party platforms and the internally developed accelerators. That is what is modeling and driving the growth that we're highlighting for September, although there may be other things that you can look at in the overall picture.

Operator (00:36:08): We'll move to our next question from Tom O'Malley at Barclays.

Speaker 9 (00:36:17): I just wanted to ask a broader network architecture question. So you talked a little bit more about PCIe over optical. And when you look at the back end today, I think there's a lot of effort to improve the Ethernet offering as it compares to kind of the leader in the market as they expand NVLink. Could you talk about when you see the inflection point with PCIe over optical kind of being the majority of the back end? Is that something that's coming sooner?

Speaker 9 (00:36:43): Just kind of the time frame there. And then, just to explain a little further, I think you mentioned that it comes with a lot of additional retiming content when you use those cables. Just anything additional there? And then I have a follow-up.

Speaker 2 (00:36:55): Yes. Let me take that. This is Jitendra, Tom. The architectures for AI systems are definitely evolving. And actually, I would say they're evolving at a very rapid pace.

Speaker 2 (00:37:07): Different customers use different architectures to craft their systems. If you look at NVIDIA-based systems, they do use NVLink, which is, of course, a proprietary, closed interface. The rest of the world largely uses protocols that are either PCI Express or Ethernet, or are based on PCI Express and Ethernet. And the choice of a particular protocol is really dependent upon the infrastructure that the hyperscalers have and how they choose to deploy this technology. Clearly, we play in both.

Speaker 2 (00:37:34): Our Taurus Ethernet Smart Cable Modules support Ethernet, and now, with our Aries Smart Cable Modules, we are able to support PCI Express as well. If you think about the evolution, we started with Aries retimers for driving mostly within-the-box connectivity and shorter-distance connectivity over passive cables. As these networking architectures evolved and you needed to cluster more GPUs together, we went with the Aries Smart Cable Modules, which allow you to connect multiple racks together over up to 7 meters of copper cables. And as it expands to even further distances, we go into optical, where we demonstrated running a very popular GPU over 50 meters of optical fiber.
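The progression described here maps to three reach tiers, summarized below using only the distances quoted on the call; the 50-meter figure refers to the optical demonstration, not a product specification.

```python
# PCIe connectivity reach tiers as described on the call. Distances are the
# ones quoted above (7 m copper AEC, 50 m optical demo); treat them as
# illustrative rather than datasheet limits.
REACH_TIERS = [
    ("Aries retimer (in-box / passive cable)", "inside the chassis or rack"),
    ("Aries Smart Cable Module (copper AEC)",  "multi-rack, up to ~7 m"),
    ("PCIe over optics (demonstrated)",        "cluster scale, ~50 m shown"),
]
for product, reach in REACH_TIERS:
    print(f"{product:42s} -> {reach}")
```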
So these are all of the tools that we are making available to our hyperscaler partners for them to craft their solutions and deploy AI at data center scale.

Speaker 9 (00:38:27): Helpful. As a follow-up, I know this is a bit of a tougher question, but I do think that there's a lot of confusion out there, and I would just appreciate your thoughts. You mentioned in the prepared remarks hundreds of different types of deployment styles for the GB200. Obviously, certain hyperscalers are going to do it their way, and then certain hyperscalers are going to take what is called kind of the entire system, so the 36 or the 72. Can you talk about your assumptions for what you think will be the percentage that goes towards the full system and then kind of towards the hyperscalers that use their own methods?

Speaker 9 (00:38:58): And talk about the content opportunities that would kind of play out in those two scenarios. I do think that NVIDIA and others are talking potentially about more systems than historically, but just maybe the puts and takes on how different hyperscalers will architect systems and what it means for your content? Thank you.

Speaker 2 (00:39:18): Yes, great questions. And as you pointed out, a lot of moving pieces, obviously, right? But here is, I think, what we know and what we can comment on. First of all, all the hyperscalers are indeed deploying new AI platforms that are based on merchant silicon, or third-party accelerators, as well as their own accelerators. And overall, we do expect our retimer content to go up.

Speaker 2 (00:39:39): Now, if you double-click specifically on NVIDIA or the Blackwell system, it comes in many, many different flavors. If you think about the overall Blackwell platform, it is really pushing the technology boundaries. And what that is doing is creating more challenges for the hyperscalers, whether it is power delivery or thermals, software complexities or connectivity. As these systems grow bigger, they run faster and become more complex. We absolutely think that the need for retimers goes up, and that drives our content higher on a per-GPU basis.

Speaker 2 (00:40:16): Now, it is harder to predict which particular platform will have what kind of share. That's not really our business to predict. What we are doing is supporting our customers, our AI platform providers as well as hyperscalers, to make sure that these kinds of high-tech platforms can be deployed as easily as possible. And at the end of the day, what you will find is hyperscalers will have to either adapt their data centers to these new technologies or they'll have to adapt this new technology to their data centers. And that creates a great opportunity for our products.

Speaker 2 (00:40:51): We already have design wins across multiple form factors of hyperscaler GPUs as well as the third-party GPUs. And overall, we expect our business to continue to grow strongly. Very exciting times for us.

Speaker 9 (00:41:06): Thank you very much.

Operator (00:41:10): Our next question comes from Tore Svanberg at Stifel Nicolaus.

Speaker 10 (00:41:15): Yes, good afternoon. This is Jeremy calling in for Tore. And let me also add my congratulations on a very strong quarter and outlook. A couple of questions. First, could you provide maybe a revenue breakout between your three product segments here?

Speaker 10 (00:41:32): I'm not sure if that was covered at all.

Speaker 4 (00:41:36): Yes. We don't break out specifically the revenue by product.
But like we said on the call, the Q2 revenues were driven heavily by the AI growth for Gen 5 and then a broadening out of our design win portfolio. When you look into Q3, the three main drivers are the initial Taurus ramp in 400 gig, the broadening out of AI servers for Gen 5 in both merchant as well as internally developed accelerator programs, and then also the back-end clustering we're doing with our Aries SCM modules.

Speaker 4 (00:42:15): So when you look at that, those three drivers are mainly giving us the growth in Q3.

Speaker 10 (00:42:23): Great. And then, I guess, maybe looking more into the Leo CXL, I understand you're shipping preproduction. When are you expecting to see more of a material ramp for Leo?

Speaker 3 (00:42:41): Yes. So, in terms of a material ramp, it's a function of CPUs being available that support CXL 2.0. So we are, of course, tracking the announcements from AMD and Intel to essentially get to production in the second half of this year with Turin and Granite Rapids. So in general, these things will take a little time in terms of engineering them into platforms.

Speaker 3 (00:43:08): So what we are guiding is that 2025 is when we expect production ramps to pick up on CXL.

Speaker 10 (00:43:18): Great. Thank you. And if I could squeeze one last question in. Can you give us maybe a sense of your revenue and how it might break out between modules and standalone retimers? Is there a way to kind of look at revenues in that way and how that can impact your growth over time?

Speaker 10 (00:43:40): Thank you.

Speaker 4 (00:43:40): Yes. I mean, Taurus predominantly is modules. On Aries, we're doing the back-end clustering of GPUs with modules, but the bulk of the revenues is standalone retimers in that product family. Leo, once it ramps, we'll do add-in cards and silicon, but it'll be heavily skewed towards silicon.

Operator (00:44:08): We'll move next to Quinn Bolton at Needham and Company.

Speaker 11 (00:44:14): Hey guys, thanks for taking my question. I guess maybe a follow-up just on the Blackwell question. It looks like there have been some recent architectural or system-level changes at NVIDIA, with sort of the introduction of the GB200A that looks like it uses PCI interconnect, or PCI Express, to connect the GPUs and the CPUs, and perhaps a de-emphasis of the HGX platforms. Just wondering if you see any shifts in content, if that's favorable, or if it's about a wash going from one platform to the other? And then I've got a follow-up.

Speaker 2 (00:44:49): Yes. Thank you, Quinn. Unfortunately, it would not be appropriate for us to kind of comment on rumors and third-party information that seems to be circulating around. What we will say is that we are committed to whatever platforms our customers want to deploy. Whether it's a full rack or it's an HGX server or something in between, we are working with them very, very closely, shoulder to shoulder, every day.

Speaker 2 (00:45:13): As Sanjay mentioned, we already have multiple design wins in the Blackwell family, including the GB200. We are shipping initial quantities of preproduction to the early adopters, and we do have backlog in place that serves the Blackwell platform, including GB200.

Speaker 11 (00:45:32): Got it. Okay. Thank you for that. And just maybe a clarification on the Taurus 400-gig ramp as well as the Aries SCM ramps. Are those ramping across multiple hyperscalers?

Speaker 11 (00:45:45): Or are they driven by a lead hyperscaler initially, and then you would expect a broadening out to other hyperscalers as we move into 2025?
Speaker 3 (00:45:55): Okay. Good question. Let me take that. So if you think about AECs, in general, 800 gig, where you're running 100 gig per lane, is the first broad use case that we see for AEC applications. If you look at data rates lower than that, let's say 400 gig and so on, it tends to be, very frankly, case by case, depending on the topology, application and so on.

Speaker 3 (00:46:24): So the good thing about the design wins we have is that they scale across multiple platforms, both from an AI and general compute standpoint, and support various different topologies. And the revenue drivers that we are essentially highlighting for 3Q and beyond are based on supporting these applications. With 800 gig, it becomes much broader, with several different customers essentially requiring AECs.

Speaker 11 (00:46:56): And is it similar for the Aries SCMs for back-end clustering as well?

Speaker 3 (00:47:02): Exactly. It depends on the topology, in terms of how the back-end networks are designed for the AI subsystems. In general, all of this, when it comes to active cabling types of technology, becomes case by case depending on the infrastructure and how exactly systems are being put together, compared to a component like a PCIe retimer that goes across a broad array of use cases across multiple different deployment scenarios. So that's the nuance to keep in mind when you look at AEC markets.

Speaker 11 (00:47:38): Got it. Thank you.

Speaker 3 (00:47:41): Yes. But still, the volume and the deployment scale tend to be very broad, right, if you're looking at how infrastructures are being put together. So it is one of those things where you look at it case by case, but as long as you're able to address a wide variety of applications, it does very significantly add up.

Operator (00:48:04): We'll take our next question from Ross Seymore at Deutsche Bank.

Speaker 12 (00:48:09): Hi, guys. Thanks for letting me ask a question. Apologies to go back to one that's been hit on a couple of times, but I want to do it nonetheless: kind of the Blackwell topic and the content topic. You guys gave us the punch line that you believe your content on average will go up per GPU, generation to generation.

Speaker 12 (00:48:23): It also seems like you're getting across that the customization of it is still very broad-based, and so just looking at the vanilla system SKUs and reference designs NVIDIA itself has might be misleading. Two-part question to this. Are you of the belief that your content is equal across the board in the same way it was in Hopper? Or do things get more skewed, where there'll be places where you'll have a significant step up in content in some configurations and others where you'd have a significant step down?

Speaker 12 (00:48:52): And the difference between those two might be where investors are getting a little bit confused.

Speaker 3 (00:48:58): Let me try to add a little bit more color on that. But before I do that, let me remind you of two data points we've already covered in the Q&A so far. First point, let's be very clear that our PCIe retimer content per GPU on average will continue to grow as AI systems scale across various different topologies. And this applies to both third-party, like standard merchant GPUs, as well as internally developed GPUs. The second reminder that I want to note is that, specifically for Blackwell, we expect our PCIe content per GPU to go up.

Speaker 3 (00:49:38): Now, what you're asking about specifically is the deployment scenarios, which right now are evolving, right?
So we have design wins for several different topologies, including the GB200. But if you look at the various different options that NVIDIA is offering and how those are being composed and considered by the hyperscalers, that situation is evolving at the moment. The key message that we want to deliver is that, overall, our PCIe content is going to be higher than in the Hopper generation. We expect that the design wins that we're starting to see, and that we're starting to ship from a production and preproduction standpoint, are all meaningful and will essentially allow us to continue to have a robust growth engine as far as our PCIe retimer business is concerned.

Speaker 12 (00:50:33): Thanks for that. And I guess, as a follow-up, you guys have focused more on this call on the internally developed accelerators than you have in calls in the past. And I realize there haven't been too many since your IPO. But are you trying to get across the key message that those are really growing as a percentage of your mix, that those are penetrating the market and kind of catching up and taking relative share from the GPU side of things? Or is your commentary meant to get across that Astera itself, with its retimers and other components, will take significant share in that kind of ASIC market relative to the GPU side?

Speaker 3 (00:51:11): Yes. It's probably both, to be honest with you. In the sense that we do see it, it's no secret, right? I think many of the hyperscalers are doing their own accelerators, which are driven by the workloads or the business models that they pursue. I think that will continue as a macro trend in terms of internally developed accelerators going hand in hand with GPUs that are available from NVIDIA or AMD or others.

Speaker 3 (00:51:41): So that's the model that we believe will be here to stay, that hybrid approach. And for us, really, the reason we are highlighting it is that, of course, we have had a significant business that has grown in the last year or two from the designs that we have been supporting with the merchant GPU deployments that have happened. But at the same time, now we are reaching a point where the accelerator volumes are also starting to ramp up. And for us, the good news is that we are on all the major AI accelerator platforms from a design win standpoint, or at least all the major ones that are out there. And for us, we have multiple paths to grow our business, and as new CPU and GPU architectures come about, just like NVIDIA's Blackwell platform, we do expect to gain from them, both on the retimer content as well as the other products that we can serve into this space.

Speaker 8 (00:52:53): Thank you.

Operator (00:52:56): We'll take our next question from Richard Shannon at Craig Hallum Capital Group.

Speaker 8 (00:53:02): Hi, guys. Thanks for taking my question. Maybe a question on PCI Express Gen 6 here. Last call, you talked about some of the design wins being decided in the next 6 to 9 months, or obviously 3 to 6 months now, three more months farther forward here. Obviously, you've got some wins already on Gen 6, but I guess I want to get a sense of the share of the market, kind of looking backwards.

Speaker 8 (00:53:29): How much of that market has been decided versus up for grabs? Can you help characterize what's left to win here in the next 3 to 6 months?

Speaker 3 (00:53:39): I'm trying to see how best to answer that question. Let me try to provide some color.
The design win window for these platforms is such that, once GPUs become available, you're looking at 6 to 12 months before they go to production. So that's one thing to keep in mind. But also, please think about how hyperscalers go about doing their stuff, right?

Speaker 3 (00:54:06): Everyone is in an arms race right now, getting to production as quickly as possible in many different situations, given the number of platforms and how quickly everyone is trying to move. And to that point, what is happening is that many of those engineers are familiar with our Gen 5 retimers. They've designed them across multiple platforms. They've built software tools and capabilities around them.

Speaker 3 (00:54:37): And now our Gen 6 retimers are essentially a seamless upgrade, from a software standpoint and from a hardware standpoint. So it does offer the lowest-risk and fastest path for our customers. And that plays well within their own objectives of trying to get something out quickly while dealing with resources that might not be available at the levels that are required. So overall, we are planning to gain from it, and we are essentially the leader in the space, the one that is getting the first crack at these opportunities. And we are doing everything we can to convert those things into design wins and revenue.

Speaker 8 (00:55:23): Okay, great. My follow-on question is a pretty simple one. Just looking at the Taurus line here, it's great to see the ramp here at 400 gig, and I don't want to get too far ahead of what looks to be a pretty nice ramp here in the second half of the year, but I think you've talked about the 800-gig generation ramping later in 2025. Any update on that timing, and how are your design wins looking so far?

Speaker 2 (00:55:49): Yes. Good question. So the 800-gig timing, we believe, is going to be late in 2025. Right now, what we are seeing is 400-gig applications for some of the AI systems, and actually we are seeing them for general-purpose compute as well, where you are doing the traditional server to top-of-the-rack connection. So that will continue on for the rest of this year for 400-gig deployments.

Speaker 2 (00:56:14): And then as we get some of the newer NICs that are capable of 100 gig and 200 gig per lane, et cetera, getting to 800 gig is where we see a broadening of this market and more deployments across different hyperscalers, across different platforms, in the latter half of 2025.

Speaker 8 (00:56:34): Okay, great. Thanks, guys.

Operator (00:56:38): We'll go next to Suji Desilva at ROTH Capital.

Speaker 13 (00:56:42): Hi, Jitendra, Sanjay and Mike. Congrats on the progress here. This question may not have been asked explicitly, but can you give us a relative content framework for internally developed versus third-party processors, accelerators? Is it higher for internally developed on average, or is it hard to generalize like that?

Speaker 2 (00:57:03): I would say it's a little bit hard to generalize. It varies quite a bit. Even one particular platform can have different form factors. Even if you look at, let's say, Blackwell, you have HGX, you have MGX, you have NVL, you have custom racks that are getting deployed. And if you look at each one of them, you will find different amounts of content.

Speaker 2 (00:57:23): The number of retimers will vary; where they get placed will vary. But what is very consistent is that the overall content does go up for us. The other factor to consider is the choice of back-end network.
Again, for example, if you look at the Blackwell family, they use NVLink, which is a closed, proprietary standard that we do not participate in. But when somebody uses PCI Express, or a PCI Express-based protocol, for their back-end connectivity, then our content goes up pretty significantly, because now we are shipping not only our retimers but also the Aries Smart Cable Modules into that application.

Speaker 2 (00:57:59): Similarly, if the back-end interconnect is Ethernet, that will benefit our Taurus family of product lines. So it really varies greatly based on what the architecture of the platform is and what form factor it is getting deployed in.

Speaker 13 (00:58:14): Okay, great. That's very helpful color. Thanks. And then just a quick follow-up here. Was there something inherent in the Blackwell transition from Hopper that made this much platform diversification and architecture diversification possible?

Speaker 13 (00:58:26): Or was it just the hyperscalers getting more sophisticated about what they're trying to do? Or was it the availability of things like Astera's PCIe products? Any color there would be helpful as to how this kind of proliferation of architectures came about.

Speaker 2 (00:58:41): I mean, if you look at the Blackwell family, it's like a marvel of technology. The amount of content that is being pushed into that platform is incredible. And as I mentioned earlier, that does create other problems, right? There is so much compute packed in such a small space.

Speaker 2 (00:58:55): Delivering power to those GPUs themselves, and to the CPUs, is a challenge. How to cool them becomes a challenge. And the fact is that existing data centers are just not equipped to handle many of these issues. So what the hyperscalers are doing is taking these raw platforms, the raw technology, and trying to adapt it so that it fits into their data centers. And that's where we see a lot of opportunity for our existing products, the ones that we've talked about, as well as some new products that we've been working on.

Speaker 2 (00:59:28): Again, shoulder to shoulder with our hyperscaler and AI platform customers. So we are very excited to see how these new platforms will get rolled out, including Blackwell, including the hyperscalers' internally developed AI platforms, and the increased content that we have there.

Speaker 13 (00:59:44): Okay. So Blackwell pushed the envelope. Great. Thanks for the color there.

Operator (00:59:50): And finally, we'll move to Quinn Bolton at Needham and Company.

Speaker 5 (00:59:54): Hey, guys. Just a quick follow-up.

Speaker 11 (00:59:56): I know you had the potential for an early lockup expiring Thursday morning. Just wanted to see if you guys could confirm: are we still within the 10-day measuring period so that you could trigger that early lockup? Or does the release of second-quarter results sort of end that period, and we're now looking at a September 16 lockup expiration? Thank you.

Speaker 4 (01:00:20): Yes. The release of our earnings today releases the lockup, so that opens up on Thursday.

Speaker 8 (01:00:30): It does open Thursday. Okay. Thank you.

Speaker 2 (01:00:33): And the other lockup already expired long ago.

Operator (01:00:43): And there are no further questions at this time. I will turn the call back over to Leslie Green for closing remarks.

Speaker 1 (01:00:49): Thank you, everyone, for your participation and questions. We look forward to updating you on our progress during our Q3 earnings conference call later this fall. Thank you.

Operator (01:01:00): And this concludes today's conference call.
Thank you for your participation. You may now disconnect.