NASDAQ:ALAB Astera Labs Q2 2025 Earnings Report

Astera Labs EPS Results
Actual EPS: $0.44
Consensus EPS: $0.33
Beat/Miss: Beat by +$0.11
One Year Ago EPS: $0.13

Astera Labs Revenue Results
Actual Revenue: $191.93 million
Expected Revenue: $172.46 million
Beat/Miss: Beat by +$19.47 million
YoY Revenue Growth: +149.50%

Astera Labs Announcement Details
Quarter: Q2 2025
Date: 8/5/2025
Time: After Market Closes
Conference Call Date: Tuesday, August 5, 2025
Conference Call Time: 4:30 PM ET

Astera Labs Q2 2025 Earnings Call Transcript
Provided by Quartr
August 5, 2025

Key Takeaways
Positive Sentiment: Astera Labs reported Q2 revenue of $191.9 million, up 20% sequentially and 150% year-over-year, significantly exceeding guidance.
Positive Sentiment: The Scorpio P Series switches supporting PCIe 6 entered volume production, contributed over 10% of total revenue, and marked the fastest product ramp in company history.
Positive Sentiment: For Q3, management guides revenue of $203 million to $210 million (up roughly 6%–9% QoQ) and expects a non-GAAP gross margin of ~75%.
Neutral Sentiment: Non-GAAP operating expenses are guided to rise to $76 million–$80 million in Q3, from $70.7 million in Q2, as R&D headcount expands to support "AI Infrastructure 2.0."
Positive Sentiment: Astera Labs deepened ecosystem partnerships with NVIDIA (NVLink Fusion), Alchip (ASIC development), AMD (UA Link), and SAP/Microsoft (CXL memory expansion).

There are 4 speakers on the call.

Speaker 200:00:00Good afternoon, my name is Rebecca and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs Second Quarter Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After management remarks, there will be a question and answer session. If you would like to ask a question during this time, simply press star followed by the number one on your telephone keypad. If you would like to withdraw your question, press the pound key. Thank you. I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin. Thank you, Rebecca. Good afternoon, everyone, and welcome to the Astera Labs second quarter 2025 earnings conference call. Speaker 200:00:51Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder, Sanjay Gajendra, President, Chief Operating Officer and Co-Founder, and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations, and the markets in which we operate.
These forward-looking statements reflect management's current beliefs, expectations, and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in our most recent annual report on Form 10-K and our upcoming filing on Form 10-Q. Speaker 200:01:47It is not possible for the Company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements. In light of these risks, uncertainties, and assumptions, the results, events, or circumstances reflected in the forward-looking statements discussed during this call may not occur and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today and the Company undertakes no obligation to update such statements after the date of this call except as required by law. Also during this call we will refer to certain non-GAAP financial measures, which we consider to be important measures of the Company's performance. Speaker 200:02:42These non-GAAP financial measures are provided in addition to, and not as a substitute for, financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the investor relations portion of our website. With that I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Speaker 300:03:11Thank you, Leslie.
Good afternoon everyone and thanks for joining our second quarter conference call for fiscal year 2025. Today I'll provide an overview of our Q2 results followed by a discussion around our rack scale connectivity vision. I will then turn the call over to Sanjay to walk through Astera Labs' near and long term growth profile. Finally, Mike will give an overview of our Q2 2025 financial results and provide details regarding our financial guidance for Q3. Astera Labs delivered strong results in Q2 with all financial metrics coming in favorable to our guidance. Quarterly revenue of $191.9 million was up 20% from the prior quarter and up 150% versus Q2 of last year. Growth within the quarter was driven by both our signal conditioning and switch fabric product line, establishing a meaningful new revenue baseline for the company to build upon. Speaker 300:04:06This quarter we achieved a key milestone with our market leading Scorpio P Series switches supporting PCIe 6 scale-out applications ramping into volume production to support the deployment and general availability of customized rack scale AI system designs based on merchant GPUs. Strong demand for our PCIe 6 solutions helped to drive material top line upside. During the quarter, Scorpio exceeded 10% of total revenue, making it the fastest ramping product line in the history of Astera Labs. Furthermore, we continue to see strong activity and engagement across both our Scorpio P Series and X Series PCIe fabric switches, and we are pleased to report that we won new designs across multiple new customers during the quarter. We remain on track for Scorpio to exceed 10% of total revenue in 2025 while becoming the largest product line for Astera Labs over the next several years. Speaker 300:05:03Our Aries product family grew during the quarter and continues to diversify across both GPU and custom ASIC-based systems for a variety of applications including scale-up and scale-out connectivity. 
Additionally, our first to market Aries 6 solutions supporting PCIe 6 began volume ramp during the quarter within rack scale merchant GPU-based systems. Our Taurus product family demonstrated strong growth driven by AEC demand supporting the latest merchant GPUs, custom AI accelerators, as well as general purpose compute platforms. LEO continues to ship in pre-production quantities as customers expand their development rack clusters to qualify new systems leveraging the recently introduced CXL capable data center CPU platforms. In addition to strong financial and operational performance during Q2, we continue to expand our strategic relationships across both customers and ecosystem partners as the industry pushes forward with innovative new technologies. Speaker 300:06:07First, we broadened our collaboration with NVIDIA to support NVLink Fusion, providing additional optionality for customers to deploy NVIDIA AI accelerators by leveraging high performance scale up networks based on NVLink technology. Next, we announced a partnership with Alchip Technologies to advance the silicon ecosystem for AI rack scale infrastructure by combining our comprehensive connectivity portfolio with their custom ASIC development capabilities within the CXL ecosystem. Industry progress continues with SAP recently highlighting their collaboration with Microsoft featuring Intel's Xeon 6 processors to optimize SAP HANA database performance by utilizing CXL memory expansion. Lastly, we joined AMD on stage during their Advancing AI 2025 keynote presentation as a trusted partner to showcase UA Link, which is the only truly open memory semantics-based scale-up fabric purpose built for AI workloads. Speaker 300:07:09To continue the relentless pursuit of AI model performance, data center infrastructure providers are beginning a transformation to what we call AI Infrastructure 2.0.
We define this AI Infrastructure 2.0 transition as the proliferation of open standards based AI rack scale platforms that leverage broad innovation, interoperability, and a diverse multi vendor supply chain. This transition is in its early stages and we are strategically crafting our roadmaps to help lead these secular connectivity trends over the coming years. The transition to AI Infrastructure 2.0 is especially significant at the rack level as modern AI workloads demand ultra low latency communication between hundreds of tightly integrated accelerators over a scale up network. Astera Labs is well positioned to support this infrastructure transformation as an anchor solution partner with expertise across the entire connectivity stack. Speaker 300:08:09First, we support a variety of interconnect protocols including UA Link and PCIe for scale up, Ethernet for scale out, and CXL for memory. We are very excited about the momentum behind the UA Link scale up connectivity standard, which exemplifies the open ecosystem approach by combining the low latency of PCIe and the fast data rates of Ethernet to deliver best in class end to end latency and bandwidth. Next, we provide a broad suite of intelligent connectivity products to address the entire rack across both purpose-built silicon and hardware solutions, all featuring our Cosmos software for best-in-class fleet monitoring and management. Lastly, our deep partnerships across the entire ecosystem continue to expand as we work closely with ASIC and GPU vendors to align features, interoperability, and roadmaps to solve the rack-scale connectivity challenges of tomorrow. Speaker 300:09:06In summary, Astera Labs has demonstrated strong momentum in our business and the prospects for continued diversification and scale are driving our roadmaps in R&D investment. We are in the early stages of the AI Infrastructure 2.0 transformation, which Astera Labs is uniquely positioned to help proliferate over the coming years. 
Scale-up connectivity for rack-scale AI infrastructure alone will add close to $5 billion of market opportunity for us by 2030, and we remain committed to supporting our customers as they choose the architectures and technologies that best suit their AI performance goals and business objectives. With that, let me turn the call over to our President and COO, Sanjay Gajendra, to outline our vision for growth over the next several years. Operator00:09:52Thanks Jitendra and good afternoon everyone. Today I want to provide an update on our recent execution, followed by an overview of the meaningful market opportunities and growth catalysts that Astera Labs will address within the forthcoming transition to AI Infrastructure 2.0. Our goal is to deliver a purpose-built connectivity platform that includes silicon hardware and software solutions for rack-scale AI deployments. To achieve this goal, our approach has been to increase our addressable dollar content in AI servers by rapidly expanding our product lines to provide a comprehensive connectivity platform and capture higher value sockets that include smart cable modules, gearboxes, and fabric solutions. We also see increasing attach rates driven by higher speed interconnects in platforms deployed by customers who are collectively investing hundreds of billions of dollars on AI infrastructure annually. Operator00:11:03Starting in Q2 of 2025, Astera Labs executed the next step in its high growth evolution by ramping our PCIe 6 Scorpio fabric switches and Aries 6 retimers into volume production. This latest wave of growth has further diversified our overall business as we now have three product lines contributing above 10% of total sales. During this transition, our silicon dollar content opportunity has expanded into the range of multiple hundreds of dollars per AI accelerator, which has effectively established a new revenue baseline for the company.
Looking ahead, we are excited about the opportunities enabled by scale-up interconnect topologies. Given the extreme importance of scale-up connectivity to overall AI infrastructure performance and productivity, we see Scorpio X Series solutions as the anchor socket within next generation AI rack. Operator00:12:13We are engaged with over 10 unique AI platform and cloud infrastructure providers who are looking to utilize our fabric solution for their scale-up networking requirement. We look for Scorpio X Series to begin shipping for customized scale-up architectures in late 2025, with a shift to high volume production over the course of 2026. With the ramp of Scorpio X Series for scale-up connectivity topologies next year, we expect our overall silicon dollar content opportunity per AI accelerator to significantly increase. Overall, we expect this to be another step up from a baseline revenue standpoint. Also, given the size of the scale-up connectivity opportunity, we expect our Scorpio X Series revenue to quickly outgrow Scorpio P Series revenue in 2026 and beyond. Cloud platform providers and hyperscalers will begin to deploy next-generation platforms as the industry transitions to AI Infrastructure 2.0. Operator00:13:30We believe the fastest path to this transformation lies in purpose-built solutions developed within open ecosystems with a multi-vendor supply chain. For Astera Labs, this transformation will be the catalyst for the next wave of overall market opportunity and revenue growth. Our expertise and support for major interconnect protocols including PCIe, Ethernet, CXL, and UA Link puts us in an excellent position to participate in these next-generation design conversations. UA Link represents the cleanest and most optimized scale-up strategy for AI accelerator providers given its robust performance potential, open ecosystem, diverse supply chain, and purpose-built approach. 
Early industry momentum has been very encouraging with multiple hyperscalers and several compute platform providers looking to incorporate UA Link into their accelerator roadmap and engaging with RFPs as an indication of strong interest. Operator00:14:45As a leading promoter of UA Link, Astera Labs is committed to developing and commercializing a broad portfolio of UA Link connectivity solutions ranging from AI fabrics to signal conditioning solutions and other IO components. Proliferation of UA Link in 2027 and beyond will represent a long-term growth vector for Astera Labs. In conclusion, we are proud of our execution over the past several years, demonstrating strong and profitable revenue growth, diversification of customers and applications, and exposure to a broadening range of AI infrastructure applications and use cases. We believe this momentum is in its early stages as we fully embrace an industry transition to AI Infrastructure 2.0 which will expand our opportunity across even more customers and platforms over the next several years. Operator00:15:50We look to build upon this newly established baseline of business as we partner tightly with our customers and the broader ecosystem to deliver and deploy best-in-class rack-scale solutions to fuel the next wave of AI evolution. With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q2 financial results and our Q3 outlook. Speaker 100:16:19Thanks Sanjay and thanks to everyone for joining the call. This overview of our Q2 financial results and Q3 guidance will be on a non-GAAP basis. The primary difference in Astera Labs non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today's press release available on the Investor Relations section of our website for more details on both our GAAP and non-GAAP Q3 financial outlook, as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. 
For Q2 of 2025, Astera Labs delivered quarterly revenue of $191.9 million, which was up 20% versus the previous quarter and 150% higher than the revenue in Q2 of 2024. During the quarter, we enjoyed revenue growth from both our Aries and Taurus product lines, supporting both scale-up and scale-out PCIe and Ethernet connectivity for AI rack-level configurations. Speaker 100:17:21Scorpio smart fabric switches transitioned to volume production in Q2 with our P Series product line for PCIe 6 scale-out applications deployed within leading GPU customized rack-scale systems. LEO CXL controllers shipped in pre-production volumes as customers continue to work towards qualifying platforms ahead of volume deployment. Q2 non-GAAP gross margin was 76% and was up 110 basis points from March quarter levels, with product mix remaining largely constant across higher volumes. Non-GAAP operating expenses for Q2 of $70.7 million were up roughly $5 million from the previous quarter as we continue to scale our R&D organization to expand and broaden our long-term market opportunity. Within Q2 non-GAAP operating expenses, R&D expenses were $48.9 million, sales and marketing expenses were $9.4 million, and general and administrative expenses were $12.4 million. Non-GAAP operating margin for Q2 was 39.2%, up 550 basis points from the previous quarter. Speaker 100:18:36Interest income in Q2 was $10.9 million. Our non-GAAP tax rate for Q2 was 9.4%. Non-GAAP fully diluted share count for Q2 was 178.1 million shares, and our non-GAAP diluted earnings per share for the quarter was $0.44. Cash flow from operating activities for Q2 was $135.4 million, and we ended the quarter with cash, cash equivalents, and marketable securities of $1.07 billion. Now turning to our guidance for Q3 of fiscal 2025, we expect Q3 revenues to increase to within a range of $203 million to $210 million, up roughly 6% to 9% from the second quarter levels. 
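The Q2 non-GAAP figures quoted above tie together arithmetically. The short script below is an editor's sanity check, not part of the call; it uses the rounded numbers as stated on the call (revenue $191.9M, 76% gross margin, $70.7M opex, $10.9M interest income, 9.4% tax rate, 178.1M diluted shares), so small rounding differences against the actual filings are expected.

```python
# Sanity check of Astera Labs' Q2 2025 non-GAAP results,
# using the rounded figures quoted on the call ($ in millions).
revenue = 191.9
gross_margin = 0.76        # non-GAAP gross margin
opex = 70.7                # non-GAAP operating expenses
interest_income = 10.9
tax_rate = 0.094           # non-GAAP tax rate
shares = 178.1             # non-GAAP fully diluted shares (millions)

gross_profit = revenue * gross_margin
operating_income = gross_profit - opex
operating_margin = operating_income / revenue          # ~39.2%, as stated
pretax_income = operating_income + interest_income
net_income = pretax_income * (1 - tax_rate)
eps = net_income / shares                              # ~$0.44, as stated

print(f"operating margin: {operating_margin:.1%}")
print(f"non-GAAP EPS:     ${eps:.2f}")
```

Run as-is, this reproduces the reported 39.2% operating margin and $0.44 diluted EPS to the stated precision.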
For Q3, we expect Aries, Taurus, and Scorpio to provide growth in the quarter. For Aries, we are seeing growth from a number of end customer platforms where we support scale-up and scale-out connectivity. Taurus growth is driven by new designs going into volume production for scale-out connectivity. Speaker 100:19:43Scorpio will primarily be driven by the continued deployment of our P Series solutions for scale-out applications on third-party GPU platforms. We expect non-GAAP gross margins to be approximately 75%, with the mix between our silicon and hardware module businesses remaining largely consistent with Q2. We expect third quarter non-GAAP operating expenses to be in the range of approximately $76 million to $80 million. Operating expense growth in Q3 is driven by the continued investment in our research and development function as we look to expand our product portfolio and grow our addressable market opportunity. Interest income is expected to be $10 million. Our non-GAAP tax rate should be approximately 20%. The increase in our non-GAAP Q3 tax rate reflects the impact of the recent change in the tax law passed in July, with the expectation that our full year non-GAAP tax rate for 2025 will now be approximately 15%. Speaker 100:20:50Following this tax law change, our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share in the range of $0.38 to $0.39. This concludes our prepared remarks and once again we appreciate everyone joining the call and now we will open the line for questions. Speaker 200:21:14At this time I would like to remind everyone, in order to ask a question, press star then the number one on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. Your first question comes from the line of Harlan Sur with JP Morgan. Your line is open. Speaker 300:21:53Good afternoon.
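Mike's Q3 guidance pieces can be cross-checked the same way. The sketch below is an editorial illustration (the midpoints are an assumption; the company guided ranges only) showing that the guided revenue, gross margin, opex, interest income, tax rate, and share count imply EPS inside the $0.38–$0.39 range:

```python
# Midpoint check of the Q3 2025 non-GAAP guidance ($ in millions).
# Using range midpoints is an editorial assumption, not company math.
revenue_mid = (203 + 210) / 2      # guided $203M-$210M revenue
gross_margin = 0.75                # ~75% non-GAAP gross margin
opex_mid = (76 + 80) / 2           # guided $76M-$80M operating expenses
interest_income = 10.0
tax_rate = 0.20                    # Q3 non-GAAP tax rate
shares = 180.0                     # ~180M fully diluted shares

operating_income = revenue_mid * gross_margin - opex_mid
net_income = (operating_income + interest_income) * (1 - tax_rate)
eps = net_income / shares
print(f"implied Q3 EPS at guidance midpoints: ${eps:.2f}")
```

At the midpoints this lands at roughly $0.386, consistent with the guided $0.38 to $0.39.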
Operator00:21:54Congratulations on the very strong results. You know, within your Scorpio family of switching products, it's good to see the strong ramp of Scorpio P this past quarter. Within the same portfolio, looks like the team is qualified and set to ramp its Scorpio X Series for XPU to XPU ASIC connectivity. You talked about 10 platform wins. What's been the biggest differentiator? Is it performance, that is, latency, throughput? Is it that it's fully optimized with your signal conditioning products? Is that a consideration, and how much does the familiarity with Cosmos software play a role? You guys have always called this an anchor product which pulls in more of your solutions alongside your Cosmos software suite. Is this how it's playing out with your ASIC XPU customers? You lead with Scorpio X, and you've been successful at driving higher attach rates for other products. Speaker 300:22:49Thank you so much for the question. You're absolutely right. The success that we have enjoyed so far is rooted in primarily, I would say, three things. First is just our closeness to our customers. Over this time period, we earned kind of a trusted partner status with our customers. We get a ringside view of what their plans are, what it is that they're planning to deploy, and when. The second part of that is really our execution track record. We have shown time and again that our team executes with purpose, and we deliver to our promises. With both of these, we get the first sort of call for developing new products, for going into new product platforms at our customers. That's where the Cosmos software suite comes in. Cosmos, for the audience here, is our software suite that unites all of our products together. Speaker 300:23:41This is how we allow our products to be customized, optimized for unique applications, as well as collect a lot of very rich diagnostics information that allows our customers to really see how their connectivity infrastructure is operating.
With the use of Cosmos, we can customize our products to deliver higher performance, which translates to sometimes lower latency, sometimes higher throughput, sometimes different diagnostics features for our customers. As a result of that, we've been able to use Scorpio as an anchor socket in these applications because this is something that gets designed in upfront, and then we figure out signal conditioning opportunities with our Aries and Taurus products in these platforms. With Scorpio X in particular, because the customers use kind of derivatives of PCIe, we have been able to customize Scorpio X to deliver this lower latency and higher throughput. Operator00:24:40Thank you for that. Very insightful. For my second question, just over the past 90 days there's been a lot of focus and announcements on scale-up networking connectivity, on UA Link, as you mentioned. The team did the Wall Street teach-in back in May. Obviously the team is a key member of the UA Link consortium. AMD recently fully endorsed UA Link as its scale-up networking architecture of choice for all future generations of its rack-scale solutions. We know of at least one other ASIC XPU vendor that's going to be moving to UA Link as well. Beyond this, what's been the reception and interest level on UA Link? Can or will the Astera Labs team speed up its time to market on UA Link-based products, or is the timing still to sample products next year with volume deployments in calendar 2027? Yeah, Harlan, this is Sanjay here. Operator00:25:33Thank you for the question. To your point, absolutely. We see a tremendous amount of interest with UA Link. There are obviously the technical advantages that you get with low latency and familiarity with how the transport layer works based on its roots, which is PCIe. Also, the fact that it supports memory semantics natively is a strong reason why customers are liking that interface.
The big upside, of course, is the physical layer, which now has been upgraded to support up to 200 gig on the Ethernet side. There are several technical reasons that are going in favor of UA Link. Customers that were using PCIe or PCIe-lite fabrics see this as a natural progression in order to support the AI infrastructure needs going forward. Operator00:26:27What we'll also note is that it's not just about technical stuff, it's about ecosystem and the broad availability of components that are required for scale-up. That's again where UA Link shines in the sense that it's truly an open standard, it's truly a multi-vendor supply chain. Those are additional reasons why customers tend to gravitate towards UA Link. We do have, like noted, several customers—we're counting 10 plus right now—that are looking at leveraging some of the open standards, whether it's PCIe in the short term, a combination of PCIe and UA Link in the midterm, and transitioning perhaps to a broader UA Link deployment in 2027 and later. Overall, I think the momentum is shifting positively and we are excited to be in the middle of it and driving the adoption of open and scalable supply chain in the market. Great, thank you. Speaker 200:27:34Your next question comes from the line of Ross Seymore with Deutsche Bank. Your line is open. Speaker 100:27:42Hi guys, thanks for letting me ask a couple questions and congrats on the strong results and guidance. Maybe to no surprise, I wanted to stay on the Scorpio family. The diversity of engagements is also interesting to me. As far as you're talking about it as an anchor tenant, I wondered if you could go into a little. Operator00:27:58Bit of the profile, the types of. Speaker 100:27:59Customers, how it's changed from your initial customer, and then perhaps how much incremental business and interest those customers are showing in other products as they realize as well it's an anchor tenant sort of. 
How are you leveraging that Scorpio relationship to bring in more business? Any sort of illustrations of that would be helpful. Operator00:28:18Yeah, absolutely. Again, thank you for that question. Just to kind of remind everyone, we have two product series within Scorpio. One is the Scorpio P Series that just started ramping to production to support some of the third party GPUs that are ramping. The P Series is designed for scale-out connectivity, a very broad use case, from interconnecting GPUs, to NICs, to storage and things like that. Scorpio P Series, we have a broad base of customers that are leveraging that solution, designing in, going to production, deep in technical evaluations and so on. That would be a broad play for us with PCIe-based scale-out interconnect and storage type of interconnect. Then there is the Scorpio X Series, which is designed for scale-up networking to interconnect the GPUs and accelerators. This we see, like you noted, as an anchor socket because that is truly the socket that holds all the GPUs together. Operator00:29:24Today, like we noted, we have 10+ customers that we are engaging when it comes to scale-up networking using Scorpio X Series. This is also pulling in the rest of our products, both because of the advantages that Cosmos brings to the table by unifying all of our products, plus at the same time the fact that if someone is using a fabric solution, they would need a gearbox or a retimer or other controller type of products. Those are all playing into having that first call with the customer or having that early access at an architectural stage, which translates into an opportunity for us where we can not only offer the fabric device but also the surrounding components that come along with it as a connectivity platform. Speaker 100:30:17Thanks for that color. I guess as my second question, one for Mike. I think the first one's going to be pretty quick, so I might have a clarification in there as well.
Speaker 100:30:25The gross margin has beaten, and you're staying solidly above your 70% long-term target. Is there anything that slows down your trajectory to the 70%? The clarification would be the tax rate at 20%. Is that this year, but not next year? Which is the number we should think of going forward, the 15, the 20, or the 10? Thank you. Speaker 100:30:44Okay, thanks Ross. I'll start with the taxes. The 20% is specifically to Q3 because that was the quarter that the tax law changed. We have to catch up for the previous two quarters. For Q4, you should expect it to normalize around 15%. Longer term with this new tax law in place, it is probably around the 13% range. For the gross margins, when we have an inflection up in revenues like we did, you do have the benefit of higher revenues over fixed operating costs. That was the incremental benefit for us. We do expect to see some pretty good growth from our hardware modules going into the back half of this year into 2026. As we make it through 2026, we still encourage people to think of our long term target model, 70%, as something that we'll be delivering. Operator00:31:38Thank you. Speaker 200:31:43Your next question comes from the line of Blayne Curtis with Jefferies. Your line is open. Speaker 100:31:49Guys, I'll echo the congrats on the results. I guess I want to ask on the Scorpio products. I mean I think 10% in the June quarter was ahead of what many people were looking at. Maybe you could just help us with the shape of that product. You still said 10% for the year. I'm assuming it's greater than 10%, but I'm sure it's much greater than that. Can you help us a little bit with, as you look to September, you know you have $15 million of growth. How to think about Aries versus Scorpio, and any kind of thoughts on how to guide us to model this Scorpio product line this year?
Speaker 100:32:22Yeah, this is Mike Tate. For Q2, the Scorpio P launched into volume production a little ahead of what we anticipated, so that provided the upside in the quarter. From this base level, it continues to grow in Q3 and Q4. We have more P Series designs kind of coming into play that will layer on top of that. That's more in 2026. For the X Series, we do have pre-production volumes here, but really that starts to go into high volume production during the course of 2026 and layer in even more growth. Ultimately, what we called out is the X Series is going to grow to be bigger than P Series. It's a very exciting opportunity just given the dollar value of the design opportunities is much higher than the P Series, just given the use cases of the scale-up connectivity. Both will grow. Speaker 100:33:15We did reiterate that it will exceed 10% of our revenues for the year, which is quite an accomplishment for the first year out of a product line. It is poised to be our largest product line of the company as we make it through the following two years. Thanks. Speaker 100:33:32I just want to ask, I think in terms of the scale-up opportunity, clearly you were clear that X will be more material next year, kind of pre-production this year. Just want to ask this because there was a lot of rumors out there in terms of are there any opportunities for scale-up with Scorpio P, or maybe, in short, are you going to be shipping anything material this year for scale-up versus the scale-out you already talked about? Speaker 100:34:00The scale-up this year is predominantly pre-production volumes, and these systems are pretty complex that they're shipping into. We try to be conservative on how we, you know, telegraph those going forward. The volume opportunity, scale-up connectivity for switching, is a much bigger dollar opportunity for us as we look forward.
Those designs really will start to enter full volume production during the course of 2026, so not a driver in the next couple of quarters. Thanks, Mike. Speaker 200:34:40Your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open. Speaker 100:34:48Great, thank you. I wonder if you could talk about UA Link versus other architectures, and your involvement with NVLink Fusion. Are you agnostic to those various solutions? Are you more favorable towards open standards or proprietary ones? Just walk us through the potential outcomes for you with these battles that are being fought. Speaker 300:35:10Yeah, this is Jitendra, happy to do that. Let's start with NVLink, because NVLink is perhaps the most widely deployed scale-up architecture available today, and we are very happy to be part of the NVLink Fusion ecosystem. If you look at the history of NVLink, it really is a fabric built from the ground up for AI. It uses memory semantics to make sure that all of the GPUs can be addressed as if they are one large GPU. It has low latencies. It adds Ethernet-based SerDes to get to the higher speeds, and of course NVIDIA has popularized it with the NVL72 deployment. If you go from there to, let's say, UA Link, you find many similarities. UA Link also has its genesis in PCIe. It is a memory-semantics-based protocol.
We know that many hyperscalers are among the promoter board members, as well as many vendors, frankly, who are working to deploy solutions for UA Link. As a result, we expect to see a very vibrant ecosystem of providers, vendors, and customers around UA Link. I think that will be a defining characteristic and why we believe UA Link will be adopted widely over time. Speaker 300:36:52As promoter members of the UA Link consortium ourselves, we are very happy to participate in the standard, and not only participate, but come up with a full portfolio of solutions that includes switches, retimers, cables, and what have you to enable our customers to build a full UA Link system. To answer your question, with UA Link we have a lot of dollar content opportunity, but at the same time we will continue to service our customers who are today using PCIe, where we have a huge opportunity, as well as Ethernet for scale-out and cabling applications, and over time also NVLink Fusion. Operator00:37:32That's very helpful, thank you. Speaker 100:37:33I get this question a lot: can you size your exposure to merchant GPU platforms versus ASICs? I know there's probably a little higher content opportunity for you on the ASIC side. Any sense for what that split looks like and where it may be going over time? Speaker 300:37:50Yeah, Joe, we do address both of these opportunities. Our opportunity on the merchant GPU platform comes when our customers customize the rack design. This is the opportunity for both our Aries and Scorpio P Series products that Sanjay and Mike touched on earlier, and we saw a lot of ramp there this last quarter. In addition to that, we are also shipping the Taurus Ethernet cables for scale-out applications. When you go to scale-up, that becomes a very big opportunity for us because of the density of interconnect when you're trying to connect all of these GPUs together.
When that network happens to be based on PCIe, we have an even larger attach rate, which drives our dollar content on these XPU platforms into several hundreds of dollars per XPU. Speaker 300:38:38Over time, we do see the Scorpio X Series as our largest revenue contributor, largely deployed on XPUs. Operator00:38:48Great, thank you very much. Speaker 200:38:52Your next question comes from the line of Thomas O'Malley with Barclays. Your line is open. Speaker 100:38:59Hey guys, thanks for taking my question. You mentioned that you were engaged with 10+ customers on the X Series. Could you give us a picture of how many of those are engaged on PCIe today and how many on the UA Link side? If you're engaged with one on PCIe, are you often engaged with them on UA Link as well? Can you talk about that split right now? Operator00:39:21Yeah, this is Sanjay here. The 10+ opportunities that we highlighted are both hyperscalers and AI platform providers, and these are all based on PCIe today. These are the nearer-term opportunities that we're tracking. That noted, as Jitendra highlighted, UA Link is an open standard that contemplates the requirements of scale-up networking in terms of speed and other capabilities going forward. Many of the customers we're engaging with today on PCIe are also looking at UA Link. Some of them might continue to stay with PCIe, and some will transition to UA Link in the midterm. Longer term, as the UA Link ecosystem develops and matures, we do expect UA Link to be a solution that both merchant GPU and custom accelerator providers will standardize on. Helpful. Speaker 100:40:32As my follow-up, I'm curious: there have obviously been a lot of news articles intra-quarter about switching attach rates with XPUs and also general-purpose silicon.
If you look at the large player in the market, in a 72-GPU array there are nine switch trays with a couple of switches per tray, so roughly a 25% switching attach rate per XPU or general-purpose piece of silicon. In that instance, when you're ramping an XPU with a custom silicon customer, could you walk us through, specifically with the X switch, whether that attach rate is higher or lower, and why? That would be super helpful. Thank you. Operator00:41:09We don't comment on individual platforms and customer deployment scenarios, but in general the Scorpio X Series switches interconnect GPUs, and depending on the platform there are different configurations for the number of GPUs in a pod. The product portfolio we are developing within Astera is designed to address a variety of different use cases, and the attach rate varies, so that is necessarily a broad answer to your question. In general, we have the engagements and we have the design wins. Now it's a matter of all of these platforms getting qualified and ramping to production. In due course, as they get into production, we'll be able to add more color on how that's shaping our revenue and our growth. Speaker 200:42:10Your next question comes from the line of Tore Svanberg with Stifel Financial Corp. Your line is open. Operator00:42:18Yes, thank you, and let me add my congratulations as well. My first question is on this new revenue base. You now have three product lines in production, which obviously doubled your revenue base. Now you're talking about AI Infrastructure 2.0 and the Scorpio P Series and X Series creating a new revenue level. Should we infer that you will double the run rate again as the X Series starts to ramp? Is that the way we should look at it? Yeah, great question. I always like to make this correction: it's not "retiment," it's "retimer," just to keep our engineering folks happy.
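The ~25% attach rate cited in the question above is simple arithmetic. As an editorial illustration only (the tray and switch counts are the questioner's figures, not company disclosures), a minimal sketch in Python:

```python
# Reproducing the questioner's switch attach-rate arithmetic for a 72-GPU rack.
# Assumed inputs (9 switch trays, 2 switches per tray) come from the question
# above, not from Astera Labs disclosures.
switch_trays = 9
switches_per_tray = 2
gpus = 72

switches = switch_trays * switches_per_tray   # 18 switches in the rack
attach_rate = switches / gpus                 # switches per GPU

print(f"{switches} switches / {gpus} GPUs = {attach_rate:.0%} attach rate")
# prints: 18 switches / 72 GPUs = 25% attach rate
```

The same ratio would scale with whatever tray and GPU counts a given platform actually uses, which is why management's answer stresses that the attach rate varies by configuration.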
You make a great point. That's exactly what we believe is the beauty of our business model, where we have approached the business in a series of growth steps. Operator00:43:17We started the journey on all the NVIDIA-based platforms with the PCIe retimers, which got the company off the ground from a revenue growth standpoint. The second step was to expand our PCIe retimer and Ethernet retimer business to go after custom ASICs; this transition happened in Q3 of last year. Now where we are is the third step in that growth journey, where we have ramped up our Scorpio P Series PCIe-based fabric switch products along with our Gen 6 retimers. Those are going on all the third-party NVIDIA-based GPU platforms that are ramping up. The fourth step that we are highlighting as part of today's call is the Scorpio X Series, which is designed for scale-up networking. Operator00:44:14That transition is currently underway in the sense that we are still in pre-production, and like we highlighted, throughout 2026 we expect that wave to transition to high-volume production, providing us a new baseline for revenue. These are of course higher-value sockets, meaning the dollar content with the Scorpio X Series switches is significantly higher than what we have done so far, and you could expect that to play into our overall revenue projections as we get towards 2026. The fifth step that we called out is UA Link. Operator00:44:55That is going to be a growth story in 2027 and a greenfield application for us, with a much broader deployment of scale-up networking along with a variety of other products that we intend to develop for UA Link. That is the fifth step that we are executing towards. Yeah, thank you for walking through all that, Sanjay. I really appreciate it. As my follow-up, also related to UA Link, it does feel like the standard is regaining a lot of traction. I'm just curious why that is.
Is it because of AI moving more into inferencing? Is it because of the 128-gig version? It just feels like there's been a bit of a change in the last few months. Any color you can add would be great. If you don't mind, could you repeat your question? Operator00:45:42We didn't quite get the question that you asked. Yeah, I was asking about UA Link regaining a lot of traction, at least that's the way it feels to us, and I'm just wondering why that is. Is it because of AI moving more towards inferencing? Is it because of the 128-gig version? Or is there anything else going on there? Speaker 300:46:02Thank you for clarifying that. UA Link is actually gaining a lot of traction. Just as a reminder, the UA Link specification was only introduced towards the end of Q1 of this year. Since then, it has gained a tremendous amount of traction. AMD talked about it very recently in Taipei as part of the OCP Summit, and several of the hyperscalers are very closely engaged in figuring out what their roadmap intercepts will look like for UA Link, for all the reasons we talked about earlier in the call. I will also say that the majority of these engagements are at 200 gigabits per second per lane, not at 128. Operator00:46:46Perfect. Thank you. Speaker 200:46:49Your next question comes from the line of Sebastian Nagy with William Blair. Your line is open. Speaker 100:46:58Good afternoon. Thank you for taking the questions. A lot of the focus is rightfully on the AI tailwinds, but could you maybe comment on what you're seeing in non-AI adoption, in particular PCIe Gen 5 adoption and general-purpose server drivers? Could that be a meaningful contributor to Aries growth going forward? Operator00:47:19Yeah, absolutely. Thanks for highlighting that.
General compute gets overlooked nowadays, but to your point, that's a transition we're tracking. AMD released their Venice CPU, which supports PCIe Gen 6 as well. We do see that playing out in terms of design opportunities and a new set of production ramps for our Aries product line, both the retimer-class devices and the other sockets we develop, whether it is the Taurus modules or Gearbox devices. In general, those are additional opportunities for us to grow our business, and we're tracking them as part of our overall outlook. Let's not forget the LEO products, which are our CXL controllers, designed for memory expansion for CPUs in particular. Finally, we have CPUs that support CXL technology and are ready for deployment. Operator00:48:27We are excited about the opportunities that we're tracking across all three product lines, Aries, Taurus, and LEO, going into the general compute use cases. Great. Speaker 100:48:41Okay, that's really helpful. If I could ask a second question, I want to ask about the use of Ethernet in scale-up going forward. You have Broadcom positioning itself to address both the scale-out and scale-up parts of the network with its latest generation of Ethernet chips. How do you see scale-up Ethernet potentially eating into the PCIe part of the market where Astera Labs has such a strong position? Speaker 300:49:06This is Jitendra, maybe I'll take this question. If you look at our customers today, they are deploying the scale-up network with the technologies available to them, which is NVLink for NVIDIA designs, of course, and PCIe for several of the customers we touched on earlier in the call. Some of the customers are also using Ethernet.
Largely this has to do with the availability of the switching infrastructure. The two protocols, PCIe as well as NVLink, are basically custom built for memory access, for memory semantics. You can use that to make the multiple GPUs in a cluster look like one large GPU. Ethernet is a fantastic protocol, but it was never designed for scale-up; it was designed for large-scale Internet traffic, and it is very, very good at that. Speaker 300:49:52However, because of the availability of the switches, some customers have tried to run RDMA and other proprietary protocols over Ethernet to do scale-up. In that scenario it suffers from higher latencies and lower throughput. I think what you are referring to is Scale-Up Ethernet, where Broadcom has tried to borrow several of the same features that are present in PCIe and UA Link, such as memory semantics, lossless networking, and so on, and put them on top of Ethernet. At that point, it looks like something quite different from Ethernet, and the switching infrastructure as well as the XPU infrastructure has to evolve for somebody to use it. I believe the real differentiation between the two has to do with the openness of the ecosystem. Speaker 300:50:37Scale-Up Ethernet is still dominated by Broadcom, whereas if you look at UA Link, it's a very open, very vibrant ecosystem, with multiple vendors working on products and multiple hyperscalers looking to take their destiny into their own hands and rely on UA Link over time. Speaker 100:50:57Great, that's really helpful. Operator00:50:58Thank you so much, and congrats on the quarter. Thank you. Speaker 200:51:04Your next question comes from the line of Quinn Bolton with Needham and Company. Your line is open. Speaker 100:51:12Hey guys, I just wanted to follow up on that question about Scale-Up Ethernet.
Broadcom introduced their Tomahawk Ultra switch recently with a 250-nanosecond latency, which seems like it significantly reduces the latency problems that Ethernet has traditionally had. Can you give us some sense of how that 250-nanosecond latency compares to what you're able to achieve on PCIe and UA Link? And I have a follow-up. Speaker 300:51:39Yes, we are able to achieve even lower latencies with some of the products we have and others we have in development. It comes back to designing something that is purpose-built for AI. It is not just about the point-to-point latency; if you look at the end-to-end latency in the system, we believe that UA Link, and indeed PCIe today, is going to be lower latency. The second point is utilization of bandwidth. The current offering from Broadcom uses 100 gigabits per second per lane, but over time every standard will migrate towards 200 gigabits per second per lane; UA Link and Ethernet, as well as NVLink, are already there today. However, how efficiently you use that raw data rate varies from protocol to protocol. Speaker 300:52:24UA Link has been designed to be extremely efficient and to achieve very high utilization of the data pipe that is available. On a technical basis, I do think that UA Link will be superior to other protocols. The big advantage of UA Link is its openness: it's an open standard, so our customers, the hyperscalers, can build their infrastructure once and then ideally plug in whichever GPU or XPU they want that supports an open, interoperable ecosystem like UA Link. Speaker 100:53:01My follow-up question: I think in the script you talked about an expansion in the opportunities with Taurus, and I'm wondering if you could expand on that.
Are you seeing adoption of higher per-lane speeds on the Taurus product and adoption of 800-gig cables? Are you seeing adoption beyond your lead customer in Taurus? Any additional color you could provide on Taurus would be helpful. Thank you. Operator00:53:30Yeah. Like you correctly said, and as we have shared in the past, we expect broader adoption of AECs when the Ethernet data rate transitions to 800 gig. That's starting to happen, and we expect most of the deployments to be ramping up in volume in 2026. To that point, we're tracking and engaged with the customers that are deploying it. One point to keep in mind is that our business model for AECs is designed for scale. In other words, we develop cable modules that fit into the cable assemblies of existing cable vendors, and there are a variety of them that service the data center market. Our business model is to go after the ramp, not necessarily the initial low volumes that might be deployed. To that point, we're tracking and engaged with the right customers. Operator00:54:37As the volume starts ramping, we do expect significant diversification and growth in our Taurus module business. Most of this we are modeling in 2026 versus this year. Got it. Speaker 100:54:51It sounds like the volume this year continues to be more 50 gig per lane, and then you see that diversification in 2026 as 100 gig per lane sees wider adoption. Operator00:55:02Exactly. Our business model, as noted, is designed for that multi-vendor cable supply chain, and we believe that's the right strategy; it's what hyperscalers look for. For the initial POC limited-volume deployment they might go with one vendor, but very quickly each of these hyperscalers wants the diversity as well as the supply chain capacity to drive volume.
That has essentially been our focus when it comes to the business model on the AEC side. Got it. Thank you. Speaker 200:55:44Your next question comes from the line of Suji Desilva with Citi. Your line is open. Operator00:55:53Thank you for taking my question, and congrats on the great result. My first question follows your recent announcement of a partnership with a high-performance ASIC leader. Can you touch a little more on the extent of that collaboration? Is it more at a chip level, an I/O chiplet type of partnership, or more at a device level with your Aries and Scorpio portfolio? Yeah, I'll answer that question by sharing the vision and goal that we're executing towards. Our vision is to provide a purpose-built connectivity platform for AI infrastructure that includes silicon products, hardware products, and software products. Of course, the focus for us has been on the connectivity side of the AI rack. Operator00:56:52When you think of an AI rack, there are other components that go in, primarily the compute nodes, whether based on third-party merchant GPUs and CPUs or on the custom ASICs that Alchip and others develop for hyperscalers. We are strong believers that the AI rack, the way it's defined today, is not scalable, in the sense that it's proprietary. As the industry transitions to what we are calling AI Infrastructure 2.0, the entire AI rack has to be based on an open, scalable, multi-vendor approach. To that point, we are not only developing the connectivity products to address the various aspects of an AI rack, whether it's scale-up, scale-out, or other connectivity; at the same time, we are partnering with third-party GPU vendors. We talked about the announcement we made with AMD.
Operator00:57:55We're also engaging with custom ASIC providers, including Alchip, so that at the end of the day the hyperscalers, who are our common customers, get a rack that is well tested and interoperable, where the software is all consistent, and so on, to ensure that it delivers the highest level of performance. That is the scope of the collaboration we're having with Alchip and other providers. Over time you will see us announce more partnerships as we seek to establish the open rack that we believe is critical for deploying AI at scale. Speaker 300:58:35Got it. Operator00:58:35That's very helpful. If I can squeeze in one more, and this might be more for Mike, on the gross margin: it seems like over the last two quarters, particularly since the Scorpio announcement, gross margin keeps going up. In the September quarter you are guiding to 75%, which at the midpoint seems to be down a little bit. I'm just curious for any additional color, because by all indications Scorpio will continue to go up, and the mix trend we are seeing currently seems to be moving in the same direction in September as well. We're just curious about that guide-down in gross margin in the September quarter. Speaker 100:59:17Yeah, we do see growth from Scorpio, but we also see good solid growth in Taurus during the quarter. Taurus is a module, it's hardware, so it carries a little bit lower gross margin than standalone silicon. You'll see that dynamic play out to a small extent in the quarter. As we move into 2026, we still want people thinking of us going towards our longer-term model of 70%. Speaker 200:59:56Your next question comes from the line of Suji Desilva with Roth Capital. Your line is open. Operator01:00:05Hi Jitendra, Sanjay, Mike, congrats on the strong quarter here. Speaker 101:00:08Maybe you could give us a framework on the retimer content for a link.
That's for scale-out versus scale-up. Maybe it's similar, but maybe there are some differences. I'd be curious to understand what the unit opportunities might be and how they might differ. Speaker 301:00:22Yeah, when you look at the retimers, the contrast with the switches is this: the switches get designed in right at inception, at the architecture stage. Customers will think about how they're going to connect either their GPU to other GPUs in a scale-up system, or the GPU to NICs or storage as part of the scale-out system. Once the switch is designed in and the rack starts to get put together, we look at the question of reach. Sometimes you find that you need retimers in a link, and other times you don't. Sometimes the retimers go on the board in a chip-down format; at other times they are better suited to be put in cables, in an AEC format. Speaker 301:01:05The good news with Astera Labs is that we provide this full portfolio of devices for our customers to choose from. From switches to gearboxes to chip-down retimers to retimers in active electrical cables, they can look to one company, one Astera Labs, for all the solutions at the rack level. Operator01:01:27Just to clarify, neither one would necessarily be higher than the other? Speaker 301:01:35Can you repeat that? Operator01:01:38Neither one would be higher than the other, scale-up versus scale-out, necessarily. Speaker 301:01:42Yeah, it really depends on the system architecture. In scale-up there are many, many more links than in scale-out. However, it is prohibitive from a power standpoint to put retimers on all the links. Typically, the links that are shorter, where you are able to go from the switch to the GPU over a shorter distance, will not use retimers, while the links that are longer will potentially use retimers.
Sometimes we have scale-up domains that exceed one rack; you might have two racks side by side that are part of one scale-up domain, in which case you end up with a cable solution and you need retimers in the scale-up domain in those scenarios. Operator01:02:19Helpful. Thanks. Speaker 101:02:22My follow-up is on Scorpio. You talked about 10 customer engagements; I'm wondering if that implies multiple programs per customer, and whether they're going to standardize on you across their platforms. Any color on how those are shaping up, programs versus customers, would be helpful. Operator01:02:26Yes, the 10+ we noted are unique customers. Within each customer, there are multiple opportunities that we're tracking. Some of them are design wins, some are ramping to production, some are design-ins going through qualification, and some are early engagements. In general, we are very pleased with the amount of traction we're seeing for our Scorpio family. Excellent. Thanks, Sanjay. Thanks, everybody. Thank you. Thanks. Speaker 201:03:11There are no further questions at this time. I will turn the call back over to Leslie Green for closing remarks. Thank you, everyone, for your participation and questions today. Please refer to our investor relations website for information regarding upcoming financial conferences and events. Thanks so much. This concludes today's conference call. You may now disconnect.
Speaker 2 00:00:51 These forward-looking statements reflect management's current beliefs, expectations, and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in our most recent annual report on Form 10-K and our upcoming filing on Form 10-Q.
Speaker 2 00:01:47 It is not possible for the Company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements. In light of these risks, uncertainties, and assumptions, the results, events, or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the Company undertakes no obligation to update such statements after the date of this call except as required by law. Also, during this call we will refer to certain non-GAAP financial measures, which we consider to be important measures of the Company's performance. Speaker 2 00:02:42 These non-GAAP financial measures are provided in addition to, and not as a substitute for, financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the investor relations portion of our website. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Speaker 3 00:03:11 Thank you, Leslie. Good afternoon everyone, and thanks for joining our second quarter conference call for fiscal year 2025. Today I'll provide an overview of our Q2 results, followed by a discussion around our rack-scale connectivity vision. I will then turn the call over to Sanjay to walk through Astera Labs' near- and long-term growth profile. Finally, Mike will give an overview of our Q2 2025 financial results and provide details regarding our financial guidance for Q3.
Astera Labs delivered strong results in Q2, with all financial metrics coming in favorable to our guidance. Quarterly revenue of $191.9 million was up 20% from the prior quarter and up 150% versus Q2 of last year. Growth within the quarter was driven by both our signal conditioning and switch fabric product lines, establishing a meaningful new revenue baseline for the company to build upon. Speaker 3 00:04:06 This quarter we achieved a key milestone, with our market-leading Scorpio P Series switches supporting PCIe 6 scale-out applications ramping into volume production to support the deployment and general availability of customized rack-scale AI system designs based on merchant GPUs. Strong demand for our PCIe 6 solutions helped to drive material top-line upside. During the quarter, Scorpio exceeded 10% of total revenue, making it the fastest-ramping product line in the history of Astera Labs. Furthermore, we continue to see strong activity and engagement across both our Scorpio P Series and X Series PCIe fabric switches, and we are pleased to report that we won new designs across multiple new customers during the quarter. We remain on track for Scorpio to exceed 10% of total revenue in 2025 while becoming the largest product line for Astera Labs over the next several years. Speaker 3 00:05:03 Our Aries product family grew during the quarter and continues to diversify across both GPU and custom ASIC-based systems for a variety of applications, including scale-up and scale-out connectivity. Additionally, our first-to-market Aries 6 solutions supporting PCIe 6 began their volume ramp during the quarter within rack-scale merchant GPU-based systems. Our Taurus product family demonstrated strong growth, driven by AEC demand supporting the latest merchant GPUs, custom AI accelerators, as well as general-purpose compute platforms.
LEO continues to ship in pre-production quantities as customers expand their development rack clusters to qualify new systems leveraging the recently introduced CXL-capable data center CPU platforms. In addition to strong financial and operational performance during Q2, we continued to expand our strategic relationships across both customers and ecosystem partners as the industry pushes forward with innovative new technologies. Speaker 3 00:06:07 First, we broadened our collaboration with NVIDIA to support NVLink Fusion, providing additional optionality for customers to deploy NVIDIA AI accelerators by leveraging high-performance scale-up networks based on NVLink technology. Next, we announced a partnership with Alchip Technologies to advance the silicon ecosystem for AI rack-scale infrastructure by combining our comprehensive connectivity portfolio with their custom ASIC development capabilities. Within the CXL ecosystem, industry progress continues, with SAP recently highlighting their collaboration with Microsoft featuring Intel's Xeon 6 processors to optimize SAP HANA database performance by utilizing CXL memory expansion. Lastly, we joined AMD on stage during their Advancing AI 2025 keynote presentation as a trusted partner to showcase UA Link, which is the only truly open, memory-semantic-based scale-up fabric purpose-built for AI workloads. Speaker 3 00:07:09 To continue the relentless pursuit of AI model performance, data center infrastructure providers are beginning a transformation to what we call AI Infrastructure 2.0. We define this AI Infrastructure 2.0 transition as the proliferation of open-standards-based AI rack-scale platforms that leverage broad innovation, interoperability, and a diverse multi-vendor supply chain. This transition is in its early stages, and we are strategically crafting our roadmaps to help lead these secular connectivity trends over the coming years.
The transition to AI Infrastructure 2.0 is especially significant at the rack level, as modern AI workloads demand ultra-low-latency communication between hundreds of tightly integrated accelerators over a scale-up network. Astera Labs is well positioned to support this infrastructure transformation as an anchor solution partner with expertise across the entire connectivity stack. Speaker 3 00:08:09 First, we support a variety of interconnect protocols, including UA Link and PCIe for scale-up, Ethernet for scale-out, and CXL for memory. We are very excited about the momentum behind the UA Link scale-up connectivity standard, which exemplifies the open ecosystem approach by combining the low latency of PCIe and the fast data rates of Ethernet to deliver best-in-class end-to-end latency and bandwidth. Next, we provide a broad suite of intelligent connectivity products to address the entire rack across both purpose-built silicon and hardware solutions, all featuring our Cosmos software for best-in-class fleet monitoring and management. Lastly, our deep partnerships across the entire ecosystem continue to expand as we work closely with ASIC and GPU vendors to align features, interoperability, and roadmaps to solve the rack-scale connectivity challenges of tomorrow. Speaker 3 00:09:06 In summary, Astera Labs has demonstrated strong momentum in our business, and the prospects for continued diversification and scale are driving our roadmaps and R&D investment. We are in the early stages of the AI Infrastructure 2.0 transformation, which Astera Labs is uniquely positioned to help proliferate over the coming years. Scale-up connectivity for rack-scale AI infrastructure alone will add close to $5 billion of market opportunity for us by 2030, and we remain committed to supporting our customers as they choose the architectures and technologies that best suit their AI performance goals and business objectives.
With that, let me turn the call over to our President and COO, Sanjay Gajendra, to outline our vision for growth over the next several years. Operator 00:09:52 Thanks, Jitendra, and good afternoon everyone. Today I want to provide an update on our recent execution, followed by an overview of the meaningful market opportunities and growth catalysts that Astera Labs will address within the forthcoming transition to AI Infrastructure 2.0. Our goal is to deliver a purpose-built connectivity platform that includes silicon, hardware, and software solutions for rack-scale AI deployments. To achieve this goal, our approach has been to increase our addressable dollar content in AI servers by rapidly expanding our product lines to provide a comprehensive connectivity platform and capture higher-value sockets that include smart cable modules, gearboxes, and fabric solutions. We also see increasing attach rates driven by higher-speed interconnects in platforms deployed by customers who are collectively investing hundreds of billions of dollars in AI infrastructure annually. Operator 00:11:03 Starting in Q2 of 2025, Astera Labs executed the next step in its high-growth evolution by ramping our Scorpio PCIe fabric switches and Aries 6 retimers into volume production. This latest wave of growth has further diversified our overall business, as we now have three product lines contributing above 10% of total sales. During this transition, our silicon dollar content opportunity has expanded into the range of multiple hundreds of dollars per AI accelerator, which has effectively established a new revenue baseline for the company. Looking ahead, we are excited about the opportunities enabled by scale-up interconnect topologies. Given the extreme importance of scale-up connectivity to overall AI infrastructure performance and productivity, we see our Scorpio X Series solutions as the anchor socket within next-generation AI racks.
Operator 00:12:13 We are engaged with over 10 unique AI platform and cloud infrastructure providers who are looking to utilize our fabric solutions for their scale-up networking requirements. We look for Scorpio X Series to begin shipping for customized scale-up architectures in late 2025, with a shift to high-volume production over the course of 2026. With the ramp of Scorpio X Series for scale-up connectivity topologies next year, we expect our overall silicon dollar content opportunity per AI accelerator to significantly increase. Overall, we expect this to be another step up from a baseline revenue standpoint. Also, given the size of the scale-up connectivity opportunity, we expect our Scorpio X Series revenue to quickly outgrow Scorpio P Series revenue in 2026 and beyond. Cloud platform providers and hyperscalers will begin to deploy next-generation platforms as the industry transitions to AI Infrastructure 2.0. Operator 00:13:30 We believe the fastest path to this transformation lies in purpose-built solutions developed within open ecosystems with a multi-vendor supply chain. For Astera Labs, this transformation will be the catalyst for the next wave of overall market opportunity and revenue growth. Our expertise in and support for major interconnect protocols, including PCIe, Ethernet, CXL, and UA Link, puts us in an excellent position to participate in these next-generation design conversations. UA Link represents the cleanest and most optimized scale-up strategy for AI accelerator providers given its robust performance potential, open ecosystem, diverse supply chain, and purpose-built approach. Early industry momentum has been very encouraging, with multiple hyperscalers and several compute platform providers looking to incorporate UA Link into their accelerator roadmaps and engaging with RFPs as an indication of strong interest.
Operator 00:14:45 As a leading promoter of UA Link, Astera Labs is committed to developing and commercializing a broad portfolio of UA Link connectivity solutions, ranging from AI fabrics to signal conditioning solutions and other IO components. Proliferation of UA Link in 2027 and beyond will represent a long-term growth vector for Astera Labs. In conclusion, we are proud of our execution over the past several years, demonstrating strong and profitable revenue growth, diversification of customers and applications, and exposure to a broadening range of AI infrastructure applications and use cases. We believe this momentum is in its early stages as we fully embrace an industry transition to AI Infrastructure 2.0, which will expand our opportunity across even more customers and platforms over the next several years. Operator 00:15:50 We look to build upon this newly established baseline of business as we partner tightly with our customers and the broader ecosystem to deliver and deploy best-in-class rack-scale solutions to fuel the next wave of AI evolution. With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q2 financial results and our Q3 outlook. Speaker 1 00:16:19 Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q2 financial results and Q3 guidance will be on a non-GAAP basis. The primary difference in Astera Labs' non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today's press release, available on the Investor Relations section of our website, for more details on both our GAAP and non-GAAP Q3 financial outlook, as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q2 of 2025, Astera Labs delivered quarterly revenue of $191.9 million, which was up 20% versus the previous quarter and 150% higher than the revenue in Q2 of 2024.
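As an editorial aside, the stated growth rates imply the comparison-period revenue bases; a quick back-of-envelope sketch (my own arithmetic, using only the figures quoted on the call):

```python
# Back-of-envelope: what prior-period revenues are implied by the stated
# growth rates? (My arithmetic; only the $191.9M figure comes from the call.)
q2_2025_revenue = 191.9  # $M, as reported

# "Up 20% versus the previous quarter" implies Q1 2025 revenue of ~$160M.
implied_q1_2025 = q2_2025_revenue / 1.20

# "150% higher than Q2 of 2024" implies a year-ago base of ~$77M.
implied_q2_2024 = q2_2025_revenue / 2.50
```

The implied year-ago base of roughly $77 million is consistent with the roughly 150% year-over-year growth cited throughout the call.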
During the quarter, we enjoyed revenue growth from both our Aries and Taurus product lines, supporting both scale-up and scale-out PCIe and Ethernet connectivity for AI rack-level configurations. Speaker 1 00:17:21 Scorpio smart fabric switches transitioned to volume production in Q2, with our P Series product line for PCIe 6 scale-out applications deployed within leading GPU customized rack-scale systems. LEO CXL controllers shipped in pre-production volumes as customers continue to work towards qualifying platforms ahead of volume deployment. Q2 non-GAAP gross margin was 76%, up 110 basis points from March quarter levels, with product mix remaining largely constant across higher volumes. Non-GAAP operating expenses for Q2 of $70.7 million were up roughly $5 million from the previous quarter as we continue to scale our R&D organization to expand and broaden our long-term market opportunity. Within Q2 non-GAAP operating expenses, R&D expenses were $48.9 million, sales and marketing expenses were $9.4 million, and general and administrative expenses were $12.4 million. Non-GAAP operating margin for Q2 was 39.2%, up 550 basis points from the previous quarter. Speaker 1 00:18:36 Interest income in Q2 was $10.9 million. Our non-GAAP tax rate for Q2 was 9.4%. Non-GAAP fully diluted share count for Q2 was 178.1 million shares, and our non-GAAP diluted earnings per share for the quarter was $0.44. Cash flow from operating activities for Q2 was $135.4 million, and we ended the quarter with cash, cash equivalents, and marketable securities of $1.07 billion. Now turning to our guidance for Q3 of fiscal 2025, we expect Q3 revenues to increase to a range of $203 million to $210 million, up roughly 6% to 9% from second quarter levels. For Q3, we expect Aries, Taurus, and Scorpio to provide growth in the quarter. For Aries, we are seeing growth from a number of end-customer platforms where we support scale-up and scale-out connectivity.
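As a sanity check, the Q2 non-GAAP figures above tie together arithmetically; a minimal sketch in Python, using only the numbers quoted on the call (the intermediate steps and rounding are my own):

```python
# Back-of-envelope check that the reported Q2 non-GAAP figures tie out.
# All inputs are figures quoted on the call; the arithmetic is mine.
revenue = 191.9            # $M
gross_margin = 0.76        # non-GAAP
opex = 48.9 + 9.4 + 12.4   # R&D + S&M + G&A = $70.7M
interest_income = 10.9     # $M
tax_rate = 0.094           # non-GAAP
diluted_shares = 178.1     # M

gross_profit = revenue * gross_margin
operating_income = gross_profit - opex
operating_margin = operating_income / revenue   # ~39.2%, as reported

pretax_income = operating_income + interest_income
net_income = pretax_income * (1 - tax_rate)
eps = net_income / diluted_shares               # ~$0.44, as reported
```

Operating margin works out to about 39.2% and diluted EPS to about $0.44, both matching the reported non-GAAP results.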
Taurus growth is driven by new designs going into volume production for scale-out connectivity. Speaker 1 00:19:43 Scorpio will primarily be driven by the continued deployment of our P Series solutions for scale-out applications on third-party GPU platforms. We expect non-GAAP gross margins to be approximately 75%, with the mix between our silicon and hardware module businesses remaining largely consistent with Q2. We expect third quarter non-GAAP operating expenses to be in the range of approximately $76 million to $80 million. Operating expense growth in Q3 is driven by continued investment in our research and development function as we look to expand our product portfolio and grow our addressable market opportunity. Interest income is expected to be $10 million. Our non-GAAP tax rate should be approximately 20%. The increase in our non-GAAP Q3 tax rate reflects the impact of the recent change in the tax law passed in July, with the expectation that our full-year non-GAAP tax rate for 2025 will now be approximately 15%. Speaker 1 00:20:50 Following this tax law change, our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share in the range of $0.38 to $0.39. This concludes our prepared remarks. Once again, we appreciate everyone joining the call, and now we will open the line for questions. Speaker 2 00:21:14 At this time, I would like to remind everyone that in order to ask a question, you press star then the number one on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. Your first question comes from the line of Harlan Sur with JP Morgan. Your line is open. Speaker 3 00:21:53 Good afternoon. Operator 00:21:54 Congratulations on the very strong results. You know, within your Scorpio family of switching products, it's good to see the strong ramp of Scorpio P this past quarter.
Within the same portfolio, it looks like the team is qualified and set to ramp its Scorpio X Series for XPU-to-XPU ASIC connectivity. You talked about 10 platform wins. What's been the biggest differentiator? Is it performance, that is, latency and throughput? Is it that it's fully optimized with your signal conditioning products? Is that a consideration, and how much does familiarity with the Cosmos software play a role? You guys have always called this an anchor product which pulls in more of your solutions alongside your Cosmos software suite. Is this how it's playing out with your ASIC XPU customers? You lead with Scorpio X, and you've been successful at driving higher attach of your other products? Speaker 3 00:22:49 Thank you so much for the question. You're absolutely right. The success that we have enjoyed so far is rooted in primarily, I would say, three things. First is just our closeness to our customers. Over this time period, we have earned a trusted-partner status with our customers. We get a ringside view of what their plans are, what it is that they're planning to deploy, and when. The second part of that is really our execution track record. We have shown time and again that our team executes with purpose, and we deliver on our promises. With both of these, we get the first call for developing new products, for going into new product platforms at our customers. That's where the Cosmos software suite comes in. Cosmos, for the audience here, is our software suite that unites all of our products together. Speaker 3 00:23:41 This is how we allow our products to be customized and optimized for unique applications, as well as collect a lot of very rich diagnostics information that allows our customers to really see how their connectivity infrastructure is operating.
With the use of Cosmos, we can customize our products to deliver higher performance, which translates to sometimes lower latency, sometimes higher throughput, sometimes different diagnostics features for our customers. As a result of that, we've been able to use Scorpio as an anchor socket in these applications, because this is something that gets designed in upfront, and then we figure out signal conditioning opportunities with our Aries and Taurus products in these platforms. With Scorpio X in particular, because the customers use derivatives of PCIe, we have been able to customize Scorpio X to deliver this lower latency and higher throughput. Operator 00:24:40 Thank you for that. Very insightful. For my second question, just over the past 90 days there has been a lot of focus and announcements on scale-up networking connectivity and on UA Link, as you mentioned. The team did the Wall Street teach-in back in May. Obviously the team is a key member of the UA Link consortium. AMD recently fully endorsed UA Link as its scale-up networking architecture of choice for all future generations of its rack-scale solutions. We know of at least one other ASIC XPU vendor that's going to be moving to UA Link as well. Beyond this, what's been the reception and interest level on UA Link? Can or will the Astera Labs team speed up its time to market on UA Link-based products, or is the timing still to sample products next year with volume deployments in calendar 2027? Operator 00:25:33 Yeah, Harlan, this is Sanjay here. Thank you for the question. To your point, absolutely, we see a tremendous amount of interest in UA Link. There are obviously the technical advantages that you get with low latency and familiarity with how the transport layer works based on its roots, which is PCIe. Also, the fact that it supports memory semantics natively is a strong reason why customers are liking that interface.
The big upside, of course, is the physical layer, which now has been upgraded to support up to 200 gig on the Ethernet side. There are several technical reasons that are going in favor of UA Link. Customers that were using PCIe or PCIe-like fabrics see this as a natural progression in order to support the AI infrastructure needs going forward. Operator 00:26:27 What we'll also note is that it's not just about the technical side, it's about the ecosystem and the broad availability of components that are required for scale-up. That's again where UA Link shines, in the sense that it's truly an open standard and it's truly a multi-vendor supply chain. Those are additional reasons why customers tend to gravitate towards UA Link. We do have, like noted, several customers, 10-plus right now, that are looking at leveraging some of the open standards, whether it's PCIe in the short term, a combination of PCIe and UA Link in the midterm, and transitioning perhaps to a broader UA Link deployment in 2027 and later. Overall, I think the momentum is shifting positively and we are excited to be in the middle of it, driving the adoption of an open and scalable supply chain in the market. Great, thank you. Speaker 2 00:27:34 Your next question comes from the line of Ross Seymore with Deutsche Bank. Your line is open. Speaker 1 00:27:42 Hi guys, thanks for letting me ask a couple questions, and congrats on the strong results and guidance. Maybe to no surprise, I wanted to stay on the Scorpio family. The diversity of engagements is also interesting to me. As far as you're talking about it as an anchor tenant, I wondered if you could go into a little bit of the profile, the types of customers, how it's changed from your initial customer, and then perhaps how much incremental business and interest those customers are showing in other products as they realize as well it's an anchor tenant of sorts.
How are you leveraging that Scorpio relationship to bring in more business? Any sort of illustrations of that would be helpful. Operator 00:28:18 Yeah, absolutely. Again, thank you for that question. Just to remind everyone, we have two product series within Scorpio. One is the Scorpio P Series, which just started ramping to production to support some of the third-party GPUs that are ramping. The P Series is designed for scale-out connectivity, a very broad use case, from interconnecting GPUs to custom NICs to storage and things like that. For the Scorpio P Series, we have a broad base of customers that are leveraging that solution, designing it in, going to production, deep in technical evaluations, and so on. That would be a broad play for us with PCIe-based scale-out interconnect and storage types of interconnect. Then there is the Scorpio X Series, which is designed for scale-up networking to interconnect the GPUs and accelerators. This we see, like you noted, as an anchor socket, because that is truly the socket that holds all the GPUs together. Operator 00:29:24 Today, like we noted, we have 10+ customers that we are engaging with when it comes to scale-up networking using the Scorpio X Series. This is also pulling in the rest of our products, both because of the advantages that Cosmos brings to the table by unifying all of our products, plus at the same time the fact that someone using a fabric solution would need a gearbox or a retimer or other controller-type products. Those all play into having that first call with the customer, or having that early access at an architectural stage, which translates into an opportunity for us where we can not only offer the fabric device but also the surrounding components that come along with it as a connectivity platform. Speaker 1 00:30:17 Thanks for that color. I guess as my second question, one for Mike. I think the first one's going to be pretty quick, so I might have a clarification in there as well.
Speaker 1 00:30:25 The gross margin has beaten, and you're staying solidly above your 70% long-term target. Is there anything that slows down your trajectory to the 70%? The clarification would be the tax rate at 20%. Is that this year, but not next year? Which is the number we should think of going forward, the 15, the 20, or the 10? Thank you. Speaker 1 00:30:44 Okay, thanks Ross. I'll start with the taxes. The 20% is specific to Q3, because that was the quarter that the tax law changed, and we have to catch up for the previous two quarters. For Q4, you should expect it to normalize around 15%. Longer term, with this new tax law in place, it is probably in around the 13% range. For the gross margins, when we have an inflection up in revenues like we did, you do have the benefit of higher revenues over fixed operating costs. That was the incremental benefit for us. We do expect to see some pretty good growth from our hardware modules going into the back half of this year into 2026. As we make it through 2026, we still encourage people to think of our long-term target model, 70%, as something that we'll be delivering. Operator 00:31:38 Thank you. Speaker 2 00:31:43 Your next question comes from the line of Blayne Curtis with Jefferies. Your line is open. Speaker 1 00:31:49 Guys, I'll echo the congrats on the results. I guess I want to ask on the Scorpio products. I mean, I think 10% in the June quarter was ahead of what many people were looking at. Maybe you could just help us with the shape of that product. You still said 10% for the year; I'm assuming it's greater than 10%, but I'm sure it's much greater than that. Can you help us a little bit with, as you look to September, you know, you have $15 million of growth, how to think about Aries versus Scorpio, and any kind of thoughts on how to guide us to model this Scorpio product line this year?
Speaker 1 00:32:22 Yeah, this is Mike Tate. For Q2, the Scorpio P launched into volume production a little ahead of what we anticipated, so it provided the upside in the quarter from this base level. Now it continues to grow in Q3 and Q4. We have more P Series designs coming into play that will layer on top of that; that's more in 2026. For the X Series, we do have pre-production volumes here, but really that starts to go into high-volume production during the course of 2026 and later, with even more growth. Ultimately, what we called out is that the X Series is going to grow to be bigger than the P Series. It's a very exciting opportunity, just given that the dollar value of the design opportunities is much higher than the P Series, given the use cases of the scale-up connectivity. Both will grow. Speaker 1 00:33:15 We did reiterate that it will exceed 10% of our revenues for the year, which is quite an accomplishment for the first year out of a product line. It is poised to be the largest product line of the company as we make it through the following two years. Thanks. Speaker 1 00:33:32 I just want to ask, I think in terms of the scale-up opportunity, clearly you were clear that X will be more material next year, kind of pre-production this year. Just want to ask this because there was a lot of rumors out there: are there any opportunities for scale-up with Scorpio P, or, in short, are you going to be shipping anything material this year for scale-up versus the scale-out you already talked about? Speaker 1 00:34:00 The scale-up this year is predominantly pre-production volumes, and these systems are pretty complex that they're shipping into. We try to be conservative on how we, you know, telegraph those going forward. The volume opportunity, scale-up connectivity for switching, is a much bigger dollar opportunity for us as we look forward.
Those designs really will start to enter into full volume production during the course of 2026; they're not a driver in the next couple quarters. Thanks, Mike. Speaker 2 00:34:40 Your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open. Speaker 1 00:34:48 Great, thank you. I wonder if you could talk about UA Link versus other architectures, and I guess your involvement with NVLink Fusion. Are you agnostic to those various solutions? Are you more favorable towards open source or proprietary? Just walk us through the potential outcomes for you with these battles that are being fought. Speaker 3 00:35:10 Yeah, this is Jitendra, happy to do that. Let's start with NVLink, just because NVLink is perhaps the most widely deployed scale-up architecture that's available today. We are very happy to be part of the NVLink Fusion ecosystem. If you look at the history of NVLink, it really is a fabric that is built ground-up for AI. It uses memory semantics to make sure that all of the GPUs can be addressed as if they are one large GPU. It has low latencies. It does add Ethernet-based SerDes to get to the higher speeds, and of course NVIDIA has popularized that with their NVL72 deployment. If you go from there to, let's say, UA Link, you find many similarities. UA Link also has its genesis in PCIe. It is a memory-semantics-based protocol. Speaker 3 00:36:00 It uses lossless networking and several other technical advancements that are suitable for AI workloads, and the whole protocol is really custom-built for optimizing throughput for AI types of traffic. I think it does offer several advantages over other, more proprietary protocols, some of which happen to be Ethernet-based and some of which are completely proprietary as well. The other advantage of UA Link is that it's an open ecosystem.
We know that many hyperscalers are promoter board members, as well as many vendors, frankly, who are working to deploy solutions for UA Link. As a result, we expect to see a very vibrant ecosystem of providers, vendors, and customers around UA Link. I think that will be a defining characteristic and why we believe UA Link will be adopted widely over time. Speaker 300:36:52As promoter members of the UA Link consortium ourselves, we are very happy to participate in this standard, and not only participate, but come up with a full portfolio of solutions that includes switches, retimers, cables, and what have you to enable our customers to build a full UA Link system. To answer the question that you asked, with UA Link we have a lot of dollar content opportunity, but at the same time we will continue to service our customers who are today using PCIe, where we have a huge opportunity, as well as Ethernet for scale-out applications, for cabling applications, and over time also with NVLink Fusion. Speaker 100:37:32That's very helpful, thank you. I get this question a lot: can you size your exposure to merchant GPU platforms versus ASICs? I know there's probably a little bit higher content opportunity for you on the ASIC side. Any sense for what that split looks like and where that may be going over time? Speaker 300:37:50Yeah, Joseph, we do address both of these opportunities. Our opportunity on the merchant GPU platforms comes when our customers customize the rack design. This is the opportunity for both our Aries and Scorpio P Series that Sanjay and Mike touched upon earlier. We saw a lot of ramp happening with that this last quarter. In addition to that, we are also shipping the Taurus Ethernet cables for scale-out applications. When you go to the scale-up, that becomes a very big opportunity for us just because of the density of interconnect when you're trying to connect all of these GPUs together.
When that network happens to be based on PCIe, we have an even larger attach rate, which drives our dollar content on these XPU platforms into several hundreds of dollars per XPU. Speaker 300:38:38Over time, we do see the Scorpio X Series as our largest revenue contributor, largely deployed on XPUs. Operator00:38:48Great, thank you very much. Speaker 200:38:52Your next question comes from the line of Thomas O'Malley with Barclays. Your line is open. Speaker 100:38:59Hey guys, thanks for taking my question. You mentioned that you were engaged with 10+ customers on the X Series. Could you just give us a picture of how many of those are engaged on PCIe today and how many of those are engaged on the UA Link side? If you're engaged with one on PCIe, are you often engaged with one on UA Link as well? Can you maybe talk about that split right now? Operator00:39:21Yeah. This is Sanjay here. What I can note is that the 10+ opportunities that we highlighted are both hyperscalers as well as AI platform providers. These are all today based on PCIe. These are nearer-term opportunities that we're tracking. Having noted that, like Jitendra highlighted, UA Link is an open standard that contemplates the requirements of scale-up networking in terms of speed and other capabilities going forward. Many of these customers that we're engaging with today on PCIe are also looking at UA Link. Some of them might continue to stay with PCIe; some of them will transition to UA Link in the midterm. Longer term, as the UA Link ecosystem develops and matures, we do expect that UA Link will continue to be a solution that both the merchant GPU as well as custom accelerator providers will standardize on. Helpful. Speaker 100:40:30As my follow up, I'm curious, and there's been obviously a lot of news articles intra-quarter about switching attach rates with XPUs and then also general purpose silicon.
If you look at the large guy in the market, in a 72 array there's nine switch trays, a couple of switches per tray, so like a 25% switching attach rate to a single XPU or general piece of silicon. In that instance, when you're ramping an XPU with a custom silicon customer, can you maybe walk us through, specifically with the X switch, whether that attach rate is higher or lower, and what's the reason for that? That'd be super helpful. Thank you. Operator00:41:09We don't comment on individual platforms and customer deployment scenarios, but in general the Scorpio switches, the X Series switches, interconnect GPUs, and depending on the platform there are different configurations for the number of GPUs in a pod. Within Astera and the product portfolio that we are developing, it is designed in a way that it addresses a variety of different use cases, and the attach rate varies. That is probably a broad answer to your question. In general, we have the engagements, we have the design wins. Now it's a matter of all of these platforms getting qualified and ramping to production. In due course, as they get into production, we'll be able to add more color on how that's shaping our revenue and our growth. Speaker 200:42:10Your next question comes from the line of Tore Svanberg with Stifel Financial Corp. Your line is open. Operator00:42:18Yes, thank you. Let me add my congratulations as well. I guess my first question is on the new revenue base you talked about. I mean, you now have three product lines in production, which obviously doubled your revenue base. Now you're talking about AI Infrastructure 2.0 and the Scorpio P Series, or X Series really, you know, sort of creating a new revenue level. Should we infer from that that you will double the sort of run rate again as the X Series starts to ramp? Is that the way we should look at it? Yeah, great question. I always like to make this correction: it's not retiment, it's retimer. Just to keep our engineering folks happy.
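For reference, the attach-rate arithmetic embedded in the analyst's question works out as follows. This is a minimal sketch using only the figures the questioner cites; the tray and switch counts are the analyst's assumptions, not company guidance:

```python
# Attach-rate arithmetic from the analyst's question: a 72-accelerator rack
# with 9 switch trays and a couple (2) of switches per tray.
switch_trays = 9
switches_per_tray = 2
xpus = 72

total_switches = switch_trays * switches_per_tray  # 18 switches in the rack
attach_rate = total_switches / xpus                # switches per XPU

print(f"{total_switches} switches / {xpus} XPUs = {attach_rate:.0%} attach rate")
# 18 switches / 72 XPUs = 25% attach rate
```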
You make a great point. That's exactly what we believe is the beauty of our business model, where we have approached the business in a series of growth steps. Operator00:43:17We started the journey being on all the NVIDIA based platforms with the PCIe retimers, which got the company off the ground from a revenue growth standpoint. The second step that we hit was to expand our PCIe retimer and Ethernet retimer business to go after custom ASICs. This transition happened in Q3 of last year. Now where we are is our third step in that growth journey, where we have ramped up our Scorpio P Series PCIe based fabric switch products along with our Gen 6 retimers. That's going on all the third party NVIDIA based GPU platforms that are ramping up. The fourth step that we are highlighting as part of the call today is the Scorpio X Series, which is designed for scale-up networking. Operator00:44:14That transition is currently underway in the sense that we are still in pre-production and, like we highlighted, throughout 2026 we expect that wave to transition to high volume production, providing us a new baseline for revenue. These are of course higher value sockets, meaning the dollar content with the Scorpio X Series switches is significantly higher than what we have done so far. You could expect that to play into the overall revenue projections that we would have as we get towards 2026. The fifth step that we called out as part of the communication is UA Link. Operator00:44:55That is going to be a growth story in 2027, and that is a greenfield application for us, with a much broader deployment of scale-up networking along with a variety of other products that we intend to develop for UA Link. That is going to be the fifth step that we are executing towards. Yeah, thank you for walking through all that, Sanjay. I really appreciate it. As my follow up, and related to UA Link, it does feel like the standard is sort of regaining a lot of traction. I'm just curious why that is.
Is it because of AI moving more into inferencing? Is it because of the 128 gig version? It just feels like there's been a little bit of a change in the last few months. Any color you can add on that would be great. If you don't mind, could you repeat your question? Operator00:45:42We didn't quite get the question that you asked. Yeah, I was asking about UA Link sort of regaining a lot of traction. At least that's the way it feels to us, and I'm just wondering why that is. Is it because of AI moving more towards inferencing? Is it because of the 128 gig version? Or is there anything else that's going on there? Speaker 300:46:02Thank you for clarifying that. UA Link is actually gaining a lot of traction. Just as a reminder, the UA Link specification was only introduced towards the end of Q1 of this year. Since then, it has gained a tremendous amount of traction. You know, AMD talked about it very recently in Taipei as part of the OCP Summit, and several of the hyperscalers are very closely engaged in figuring out what their roadmap intercepts will look like for UA Link, for all the reasons that we talked about earlier in the call. I will also say that the majority of these engagements are at 200 gigabits per second per lane, and not at 128. Operator00:46:46Perfect. Thank you. Speaker 200:46:49Your next question comes from the line of Sebastian Nagy with William Blair. Your line is open. Speaker 100:46:58Good afternoon. Thank you for taking the questions. A lot of the focus is rightfully on the AI tailwinds, but could you maybe comment on what you're seeing in non-AI adoption, and in particular what you might be seeing on Gen 5 PCIe adoption and general purpose server drivers? Could that be a meaningful contributor to Aries growth going forward? Operator00:47:19Yeah, absolutely. Thanks for highlighting that.
We all tend to overlook general compute nowadays, but to your point, that's a transition that we're tracking. AMD released their Venice CPU, which does support PCIe Gen 6 as well. We do see that sort of playing out in terms of design opportunities and a new set of production ramps happening for our Aries product line, both on the retimer class devices as well as other sockets that we develop, whether it is the Taurus modules or Gearbox devices. In general, those are additional opportunities for us to grow our business, and we're tracking those things as part of our overall outlook. Let's not forget the LEO products, which are our CXL controllers. These are designed for memory expansion for CPUs in particular. Finally, we have CPUs that support CXL technology and are ready for deployment. Operator00:48:27We are excited about the opportunities that we're tracking between all three product lines, Aries, Taurus, and LEO, going into the general compute use cases. Great. Speaker 100:48:41Okay, that's really helpful. If I could, a second question: I want to ask about the use of Ethernet in scale-up going forward. You have Broadcom positioning itself to address both the scale-out and scale-up parts of the network with its latest generation of Ethernet chips. I'm wondering, how do you see scale-up Ethernet potentially eating into that PCIe part of the market where Astera Labs has such a strong position? Speaker 300:49:06This is Jitendra. Maybe I'll take this question. If you look at our customers today, they are deploying the scale-up network with the technologies that are available to them, which is NVLink for NVIDIA designs, of course, and PCIe for several of the customers that we touched upon earlier in the call. Some of the customers are also using Ethernet.
Largely this has to do with the availability of the switching infrastructure. The two protocols, PCIe as well as NVLink, are basically kind of custom built for memory access, for memory semantics. You can use that to make multiple GPUs in a cluster look like one large GPU. Ethernet is a fantastic protocol, but it was never designed for scale-up. It was designed for kind of large-scale Internet traffic, and it is very, very good at that. Speaker 300:49:52However, because of the availability of the switches, some of the customers have tried to run RDMA and other proprietary protocols over Ethernet to do scale-up. In that scenario it does suffer from higher latencies and lower throughput. I think what you are referring to is scale-up Ethernet, where Broadcom has tried to actually borrow several of the same features that are present in PCIe and UA Link, such as memory semantics, lossless networking, etc., and put them on top of Ethernet. At that point, it looks like something quite different from Ethernet. The switching infrastructure as well as the XPU infrastructure has to evolve for somebody to use that. I believe that the real differentiation between the two has to do with the openness of the ecosystem. Speaker 300:50:37Scale-up Ethernet is still dominated by Broadcom, whereas if you look at UA Link, it's a very open ecosystem, a very vibrant ecosystem, with multiple vendors working on products and multiple hyperscalers looking to really take their destiny in their own hands and, you know, relying on UA Link over time. Speaker 100:50:57Great, that's really helpful. Thank you so much, and congrats on the quarter. Thank you. Speaker 200:51:04Your next question comes from the line of Quinn Bolton with Needham and Company. Your line is open. Speaker 100:51:12Hey, I just wanted to follow up on that question about scale-up Ethernet.
Speaker 100:51:16Broadcom introduced their Tomahawk Ultra switch recently with a 250 nanosecond latency, which seems like it significantly reduces the latency problems that Ethernet has traditionally had. Can you give us some sense of how that 250 nanosecond latency compares to what you're able to achieve on PCIe and UA Link? I have a follow up. Speaker 300:51:39Yes, we are able to achieve even lower latencies with some of the products that we have and other products that we have in development. It comes back to designing something that is purpose built for AI. It is not about just the point-to-point latency. If you look at the end-to-end latency in the system, we believe that UA Link, and indeed PCIe today, is going to be lower latency. The second point about that is utilization of bandwidth. Even though the current offering from Broadcom uses 100 gigabits per second per lane, over time every standard will migrate towards 200 gigabits per second per lane. Both UA Link and Ethernet, as well as NVLink, are already there today. However, how efficiently you use that raw data rate varies from protocol to protocol. Speaker 300:52:24UA Link has been designed to be extremely efficient with that and really achieve very high utilization of the data pipe that is available. On a technical basis, I do think that UA Link will be superior to other protocols. The big advantage of UA Link is in its openness: it's an open standard, and our customers, the hyperscalers, can build their infrastructure once and then ideally plug in whichever GPU or XPU they want that supports an open, interoperable ecosystem like UA Link. Speaker 100:53:01My follow up question: I think in the script you guys talked about an expansion in the opportunities with Taurus, and I'm kind of wondering if you could expand on that.
Speaker 100:53:13Are you seeing sort of adoption of higher per-lane speeds on that Taurus product and adoption of 800 gig cables? Are you seeing adoption beyond your lead customer in Taurus? Just any additional color you could provide on Taurus would be helpful. Thank you. Operator00:53:30Yeah. Like you correctly said, and what we have shared in the past as well, we expect broader adoption of AECs when the Ethernet data rate transitions to 800 gig. That's starting to happen. We expect most of the deployments to be ramping up in volume in 2026. To that standpoint, we're tracking and we're engaged with the customers that are deploying it. One point to keep in mind is that our business model for AECs is designed for scale. In other words, we developed these cable modules that fit into the cable assemblies of existing cable vendors, and there are a variety of them that service the data center market. Our business model is to go after the ramp and not necessarily the initial low volumes that might be deployed. To that standpoint, we're tracking and we're engaged with the right customers. Operator00:54:37As the volume starts ramping, we do expect to have significant diversification and growth in our Taurus module business. Most of this we are modeling in 2026 versus this year. Got it. Speaker 100:54:51It sounds like the volume this year continues to be more 50 gig per lane, and then you see that diversification in 2026 as 100 gig per lane sees wider adoption. Operator00:55:02Exactly. Our business model, like noted, is designed for that multi-vendor cable supply chain. We do believe that's the right strategy. That's what hyperscalers look for. For the initial POC, limited-volume deployments, they might go with one vendor, but very quickly each one of these hyperscalers wants to have the diversity as well as the supply chain capacity to drive volume.
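As an aside, the 800-gig transition discussed above can be pictured in lane counts: an 800 Gbps AEC needs half as many serial lanes at 100 gig per lane as it does at 50 gig per lane. A quick illustrative sketch (the helper function is hypothetical, not an Astera Labs tool; the 50G and 100G lane rates are the ones discussed on the call):

```python
# How many serial lanes an AEC needs for a given aggregate cable rate.
def lanes_needed(cable_gbps: int, lane_rate_gbps: int) -> int:
    """Lanes required so that lanes x lane rate equals the cable rate."""
    assert cable_gbps % lane_rate_gbps == 0, "lane rate must divide cable rate"
    return cable_gbps // lane_rate_gbps

print(lanes_needed(800, 50))   # 16 lanes at 50 gig per lane
print(lanes_needed(800, 100))  # 8 lanes at 100 gig per lane
```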
That has essentially been our focus when it comes to the business model on the AEC side. Got it. Thank you. Speaker 200:55:44Your next question comes from the line of Suji Desilva with Citi. Your line is open. Operator00:55:53Thank you for taking my question, and congrats on the great result. I guess my first question, following your recent announcement of a partnership with a high-performance ASIC leader: can you touch a little bit more on the extent of that collaboration? Is it more at a chip level, in terms of an IO chip type of partnership, or is it more at a device level with your Aries and Scorpio portfolio? Yeah, I'll answer that question by sort of sharing the vision and goal that we're executing towards. Our vision is to provide a purpose-built connectivity platform for AI infrastructure that includes silicon products, hardware products, and software products. Of course, the focus for us has been on the connectivity side of the AI rack. Operator00:56:52When you think of an AI rack, there are other components that go in, which primarily include the compute nodes, whether based on third-party merchant GPUs and CPUs or the custom ASICs that Alchip and others develop for hyperscalers. We are strong believers that the AI rack, the way it's defined today, is not scalable, in the sense that it's more proprietary. As the industry transitions to what we are calling AI Infrastructure 2.0, the entire AI rack has to be based on an open, scalable, multi-vendor type of approach. To that standpoint, what we're doing is not only developing the connectivity products for addressing the various aspects of an AI rack, whether it's scale-up or scale-out and other connectivity. At the same time, we are partnering with third-party GPU vendors. We talked about the announcement that we did with AMD.
Operator00:57:55We're also engaging with custom ASIC providers, including Alchip, so that, at the end of the day, the hyperscalers, who are our common customers, get a rack that is well tested and interoperable, where the software is all consistent, and so on, to ensure that it delivers the highest level of performance. That is the scope of the collaboration that we're having with Alchip and other providers. Over time you will see us announce more partnerships as we seek to establish the open rack that we believe is critical for deploying AI at scale. Operator00:58:35Got it. No, that's very helpful. If I can squeeze in just one more, and this might be more for Mike: on the gross margin, it seems like over the last two quarters, particularly since the Scorpio announcement, gross margin keeps going up. In the September quarter, you are guiding it to 75%, which, at least at the midpoint, seems to be down a little bit. I'm just curious on any additional color on that, because it seems like by all indications Scorpio will continue to go up, and the mix trend we are seeing currently seems to be moving in the same direction in September as well. We're just curious on that guide down in gross margin in the September quarter. Speaker 100:59:17Yeah, we do see growth from Scorpio, but we also see good, solid growth in Taurus during the quarter. You know, Taurus is a module, it's hardware, so it carries a little bit lower gross margin than standalone silicon. You'll see that dynamic play out to a smaller extent in the quarter. As we move into 2026, we still want to have people thinking of us going towards our longer-term model of 70%. Speaker 200:59:56Your next question comes from the line of Suji Desilva with Roth Capital. Your line is open. Operator01:00:05Hi Jitendra, Sanjay, Mike, congrats on the strong quarter here. Speaker 101:00:08Maybe you could give us a framework on the retimer content for a link.
Speaker 101:00:12That's for scale-out versus scale-up. Maybe it's similar, but maybe there are some differences. I'd be curious to understand what the unit opportunities might be and how they might be different. Speaker 301:00:22Yeah, so when you look at the retimers, you know, the contrast with the switches is the following: the switches get designed in right at the inception, at the architecture stage. Customers will think about how they're going to connect either their GPU to other GPUs in a scale-up, or the GPU to NICs or storage as part of that scale-out system. Once the switch is designed in and the rack starts to get put together, we look at the question of reach, and sometimes you find that you need retimers in a link, and other times you actually don't need retimers in the link. Sometimes the retimers go on the board in a kind of chip-down format. At other times they are better suited to be put in cables, in an AEC format. Speaker 301:01:05The good news with Astera Labs is that we provide this full portfolio of devices for our customers to choose from. From switches to gearboxes to chip-down retimers to retimers in active electrical cables, they can look to, you know, one company, one Astera Labs, to figure out all the solutions at the rack level. Operator01:01:27Just trying to clarify, neither one would be higher than the other necessarily, just to be clear. Speaker 301:01:35Can you repeat that? Neither one will be higher than the other? Operator01:01:38Scale-up versus scale-out, necessarily. Speaker 301:01:42Yeah, it really depends upon the system architecture. In scale-up there are many, many more links than there are in scale-out. However, it is prohibitive from a power standpoint to put retimers on all the links. Typically, the links that are shorter, where you are able to go from the switch to the GPU over a shorter distance, will not use retimers. The links that are longer will potentially use retimers.
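The reach-driven retimer decision Jitendra describes can be sketched as simple logic: short switch-to-GPU links need no retimer, longer in-rack links get a chip-down retimer, and cabled links use retimers in an AEC format. A minimal sketch, where the 0.5 m cutoff and the return labels are illustrative assumptions, not product specifications:

```python
# Illustrative sketch of the reach-driven retimer decision described on the
# call. The 0.5 m cutoff is a hypothetical threshold, not an Astera Labs spec.
def link_solution(reach_meters: float, crosses_rack: bool) -> str:
    if crosses_rack:
        return "AEC (retimer in the cable)"   # cabled scale-up domain
    if reach_meters < 0.5:
        return "no retimer needed"            # short switch-to-GPU link
    return "chip-down retimer"                # longer in-rack link

print(link_solution(0.3, False))  # no retimer needed
print(link_solution(1.2, False))  # chip-down retimer
print(link_solution(2.5, True))   # AEC (retimer in the cable)
```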
Sometimes we have scale-up domains that exceed one rack. You might have two racks side by side that are part of a scale-up domain, in which case you end up with a cable solution, and you need retimers in the scale-up domain in those scenarios. Operator01:02:19Helpful. Thanks. Speaker 101:02:20My follow up is on Scorpio. You talked about 10 customer engagements. I'm wondering if that implies multiple programs per customer, if they're going to think about using you as standard in their platforms. Any color on how those are kind of shaping up, in programs versus customers, would be helpful. Yes, the 10+ we noted are unique customers. Now, within each customer, there are multiple opportunities that we're tracking. Some of them are design wins, and some of them are ramping to production. Some of them are design-ins going through qualification. Some of those are early engagements. In general, we are very pleased with the amount of traction that we're seeing for our Scorpio family. Excellent. Thanks, Sanjay. Thanks, everybody. Thank you. Thanks. Speaker 201:03:11There are no further questions at this time. I will turn the call back over to Leslie Green for closing remarks. Thank you, everyone, for your participation today and your questions. Please refer to our investor relations website for information regarding upcoming financial conferences and events. Thanks so much. This concludes today's conference call. You may now disconnect.