Nebius Group Q3 2024 Earnings Call Transcript

Key Takeaways

  • Positive Sentiment: Revenue in Q3 grew 2.7 times sequentially; the company holds a cash position of approximately US$2.3 billion and has invested US$400 million in CapEx year to date, with Q4 investments set to accelerate.
  • Positive Sentiment: Plans to deploy over 20,000 GPUs by year-end and secure next-generation chips including NVIDIA Blackwell to meet surging AI compute demand.
  • Positive Sentiment: Data center footprint is expanding globally, including a tripled capacity in Finland, a new Paris colocation facility, and upcoming US greenfield and build-to-suit projects.
  • Positive Sentiment: Introduced a purpose-built AI cloud platform and the Nebius AI Studio inference service, enhancing flexibility and performance for generative AI workloads.
  • Positive Sentiment: Non-core units also excelled, with Toloka’s revenue up 4× YoY, Avride partnering with Uber on autonomous delivery robots, and TripleTen tripling bootcamp enrollments.

There are 6 speakers on the call.

Operator

Hello everyone, and welcome to the Nebius Group Third Quarter 2024 Earnings Call, our first results call since our return to NASDAQ. My name is Julia Baumgartner and I represent the Investor Relations team. You can find our earnings release, published earlier today, on our IR website. Now, let me quickly walk you through the Safe Harbor statement. Various remarks that we make during the call regarding our financial performance and operations may be considered forward-looking, and such statements involve a number of risks and uncertainties that could cause actual results to differ materially.

Operator

For more information, please refer to the Risk Factors section of our most recent annual report on Form 20-F filed with the SEC. You can find the full forward-looking statements disclaimer in our press release. During the call we will be referring to certain non-GAAP financial measures. You can find a reconciliation of non-GAAP to GAAP measures in the earnings release we published today. With that, let me turn the call over to our host, Tom Blackwell, our Chief Communications Officer.

Speaker 1

Thanks very much, Julia, and hello to everyone. Let me quickly introduce the other speakers that we have on the line today. I'm pleased to have here with me in San Francisco this morning our CEO, Arkady Volozh, and our Chief Business Officer, Roman Chernin. And joining us from our office in Amsterdam, we have Ophir Nabe, our COO; Andrey Korolenko, our Chief Product and Infrastructure Officer; and our CFO, Ron Jacobs. You've probably seen that we put out quite a lot of material on our business a couple of weeks ago, ahead of the resumption of trading on NASDAQ.

Speaker 1

And you've hopefully had a chance to have a quick look at our Q3 results release this morning. Some of you have already sent in your questions, thank you for that. For those who haven't, you can submit any further questions via the Q and A tab below at any point during our call, and we'll get to them. My suggestion is that we keep our opening remarks relatively brief to make sure we have plenty of time for Q and A. And so on that note, let me hand straight over to Arkady.

Speaker 2

Thank you, Tom, and welcome, everyone. I'm very excited to be having this first earnings call since the resumption of trading on NASDAQ almost two weeks ago. Let me briefly highlight our key points to set the scene for today's discussion. Our ambition is to build one of the world's largest AI infrastructure companies. This entails building data centers, providing AI compute infrastructure and a wide range of value-added services to the global AI industry.

Speaker 2

We have a proven track record and significant expertise in running data centers with hundreds of megawatts of power load, so we know what we're doing here. We're already pushing full steam ahead: securing plots for new data centers, ensuring we have stable power supplies, and confirming orders for the latest GPUs, while also launching new software and other value-added services that address the needs of the AI industry. In short, we're working hard to rapidly put in place the infrastructure that will underpin our future success. Let me turn briefly to today's financial and operating results before we go to Q and A.

Speaker 2

First, I will start with our core infrastructure business. As you can see, we're growing rapidly. Revenue grew 2.7 times compared to the previous quarter. We have a strong cash position: cash and cash equivalents as of September 30 stood at around US$2.3 billion. Capital expenditures totaled around US$400 million for the first 9 months of 2024.

Speaker 2

And looking forward, we anticipate capital expenditures in Q4 to exceed this amount as we plan to accelerate investments in GPU procurement and data center capacity expansion. This includes tripling the capacity of our existing data center in Finland. We also recently announced a new colocation data center facility in Paris, with more to come and be announced very soon. We expect to have deployed more than 20,000 GPUs by the end of this year. But really, we're just warming up, as you understand.

Speaker 2

While our financial performance has been strong, what's even more important at this stage is what we have been doing on the product side. In the last quarter, we introduced a number of strategic product developments. For example, we launched the first cloud computing platform built from scratch specifically for AI. This platform offers increased flexibility and performance and will help us to further expand our customer base. Under this model, we sell GPU time by the hour, and customers can now buy both managed services and self-service access with the latest H200 GPUs.

Speaker 2

We also launched Nebius AI Studio, a high-performance, cost-efficient, self-service inference platform for users of foundation models and for AI application developers. This allows businesses of all sizes, big and small, to use generative AI quickly and easily. Here, we provide a full-stack solution and use a different business model: we sell access to Gen AI models by tokens. Outside of our core AI infrastructure business, our other businesses also performed well.

Speaker 2

Toloka offers solutions that provide high-quality expert data at scale for the Gen AI industry, and it grew revenue around 4 times year over year. Avride is one of the most experienced teams developing autonomous driving technology, both for self-driving cars and delivery robots. Earlier this month, Avride announced a multi-year strategic partnership with Uber in the U.S. And we also just rolled out our new generation of delivery robots, offering improved energy efficiency and maneuverability.

Speaker 2

TripleTen is a leading educational technology player. In the last quarter, the number of students enrolled in its express bootcamps tripled year over year across its key markets, the US and Latin America. In summary, we have been busy delivering strong results, but this is just the start of our journey. The big opportunities are still to come. And with that, let me wrap up and hand back over to Tom for the Q and A.

Speaker 1

Thank you very much, Arkady. And so just a reminder to everyone that you can submit your questions through the Q and A function below. But we have a few questions that have come in already, so we'll get going. The first question relates to the latest status of the buyback and whether that's something that's still under consideration. Ophir, can I suggest that you take that question?

Speaker 3

Yes, sure. Thanks, Tom. This is actually a great question because it has a direct impact on our 2025 guidance. But first, maybe it will be a good idea just to take a step back and to remind all of us where the idea of the buyback actually came from. After the divestment of our Russian business, we viewed a potential buyback as an instrument to provide our legacy shareholders an opportunity to exit our business, especially in the absence of trading.

Speaker 3

And as we all know, at our latest AGM, our shareholders authorized a potential buyback within certain parameters. One of them is a maximum price of $10.5 per share, which represents the pro rata share of the net cash proceeds of the divestment transaction at closing. It does not ascribe any value whatsoever to the business that we are actually building. Our shares resumed trading on NASDAQ about two weeks ago, and we are very happy to see investors' interest in our story. We also see strong liquidity levels.

Speaker 3

We hope that this is a sign that our investors see the great opportunity in our business. And if this is the case and the market for our shares remains strong, then a buyback may not actually be required to accomplish the purpose for which it was originally planned. In that case, we may have the opportunity to allocate much more capital to our AI infrastructure and deliver our plans even faster. But let's try to put this into actual numbers. As probably everyone knows, we originally provided a $500,000,000 to $1,000,000,000 ARR guidance range for 2025.

Speaker 3

In this scenario, where a buyback ultimately will not be required, we actually estimate that we will be able to deliver above the midpoint of this $500,000,000 to $1,000,000,000 ARR range. And we think that this is actually very exciting.

Speaker 1

Great. Thank you very much for that, Ophir. So the next question is really around competition and how we're seeing the market. Arkady, I'll come to you on this. Specifically, the question is: how does Nebius differentiate against the hyperscalers and the other private competitors in the GPU cloud space?

Speaker 2

Yes, that's a great question. The question is really why we believe Nebius will be among the strongest independent GPU cloud providers. I usually have several answers to this. First, we provide a full-stack solution, which means that we build data centers, we build the hardware, the servers and racks inside those data centers, and we build the AI cloud platform on top of it, a full cloud.

Speaker 2

And we build services: we have the expertise to build services for those who train models and build applications, because we have our own experience in this area. So we have a full-stack solution, and this translates into better operational efficiency. We believe that our costs may be lower because of this, and our product portfolio is much stronger. So the full stack gives us the first block of differentiation.

Speaker 2

Second, there is the platform itself, the solution we're providing. It was built from scratch in the last several months. It's a fully integrated solution for AI infrastructure, built new, without any need to support unnecessary functions or old code. It's a brand-new solution. And what is more important, it's specifically targeted at the tasks of a very dynamic AI industry.

Speaker 2

So we can be much more flexible, we can offer better pricing, and so on. And the third block of differentiation, I would say, comes from the fact that we have a very strong team: more than 500 specialists, around 400 of them AI and cloud engineers, who are ready to support our growth, who have experience building systems even larger than what we have today, and who have the ambition to build much larger systems. This translates into faster time to market when we launch new products, better customer support, and a better understanding of our clients, because we are builders just like them.

Speaker 2

So these are the three major things we see as our competitive advantage: a full-stack solution, a brand-new, specifically tailored platform that we just launched, and us, the people and engineers who really understand the area.

Speaker 1

Thank you very much, Arkady. So the next question is around our data center expansion strategy. Andrey, I'll come to you on this. Specifically, how do we think about it overall? How do we think about geographic locations? And what are the potential constraints in terms of rolling out this strategy?

Speaker 5

Yes. Thanks, Tom. Hi, everyone. Andrey Korolenko speaking. So we have three ways to source data center capacity.

Speaker 5

These are colocations, where we simply rent space in someone else's data center; build-to-suit projects, where someone else builds the data center with their CapEx according to our design under a long lease contract with us; and greenfield projects, where we build the data center from scratch and operate it ourselves. Basically, our preference is greenfield. This allows us to realize the value of the full-stack approach and reach maximum efficiency, as the team has decades of experience designing, building and operating data centers. But that is subject to the availability of capital, and greenfield projects have longer delivery times.

Speaker 5

So we're going to use all three of these ways to get capacity, but we view colocations as a shorter-term solution, mostly for the next few quarters while the other two are kicking in. Our first data center is in Finland, where we have already commenced the capacity expansion. We are tripling its capacity over the next year: the first phase will come in the middle of next year and the last phase of that site later in 2025. I also wanted to mention that it is fully capable of supporting liquid cooling technologies for the newer GPUs and the newer trends in that area.

Speaker 5

In the midterm, we plan to actively engage in build-to-suit arrangements. It's a less capital-intensive alternative to greenfield, but it still allows us to stick to our design and maintain most of our operational effectiveness, I would say, while keeping us more flexible in terms of capital. Talking about geographical locations, Finland was our home base and our first data center. A couple of months ago we announced the Paris location, which is coming into operation as we speak. The next one will be announced in the U.S.

Speaker 5

And looking forward, we will be building infrastructure mostly in Europe and in the U.S.

Speaker 2

That's it.

Speaker 5

I think I've covered it.

Speaker 1

No, I think that's very good. Thank you very much, Andrey. Just for people's reference, Andrey referred to our Finnish data center, and there was a press release a few weeks ago with more specific detail around the expansion plans there. There was also an announcement a few weeks ago about the Paris data center, if people want to refer to that for additional detail. For the next question, we've actually received a few questions around the NVIDIA partnership and relationship.

Speaker 1

So I'm going to combine them into one if that's okay. Andrey, maybe I can stick with you here. First of all, the questions are really: what is the history and current status of the NVIDIA partnership and what it brings to Nebius, the current status of NVIDIA orders and shipments, and our ability to secure new GPU generations going forward, including Blackwell. So let me give you that set of questions if I can.

Speaker 5

Yeah. Thanks, Tom. So talking about our collaboration with NVIDIA, our team has long-term experience working with NVIDIA, more than a decade of building GPU clusters and running them at a pretty significant scale. As for the partnership, we are an official NVIDIA cloud partner (NCP) and OEM partner, which helps us to develop our data center and rack designs, to get all the advantages of the NVIDIA software stack, and to collaborate on both the technical and business sides.

Speaker 5

As for future shipments, GPU availability is always a tricky one. But we have had a good track record of shipments throughout this year, and from our conversations with NVIDIA we feel confident that the Q1 and Q2 shipments of the newer generations are secured for

Speaker 1

us. Okay, thank you, Andrey. So Roman, I'll come to you. We've had a couple of questions around GPU pricing: generally, how we see the evolution of GPU pricing, and how we think about the sustainability of pricing in light of the regular flow of new-generation launches and so on.

Speaker 1

So, Roman, perhaps I can come to you to address that.

Speaker 4

Yes. Thank you, Tom. So talking about pricing, I think it's important to talk in terms of generations. For the next-generation GB200 Blackwells next year, the pricing is not fully set yet. But we can expect there will be a premium and margin, as normally happens at the beginning of a generation.

Speaker 4

For Hopper, which is the current active generation that everybody is talking about, I would say that for H100s, the most popular model today, pricing has come to a fairly stable level at the moment. There was obviously a very high premium, which has now come down, but prices are still at levels that allow healthy unit economics. And we also have H200s, which are not yet at such big volumes in the market, and for them we see pricing that is also pretty healthy.

Speaker 4

It's important to mention here that we were a little bit late on the Hopper generation compared to some of our competitors. And next year, most of our fleet will be Blackwells, which gives us the advantage of being at the beginning of this generation without a large legacy fleet. Looking forward, I would say it's normal that when chip generations shift, prices go down. But our angle is that we invest a lot in the software layer to prolong the life cycle of the previous generation, so that we can provide the service to the customer not as raw compute on specific chips but as a service. As Arkady mentioned, we launched our token-as-a-service platform for inference, and down the road there could be new services that hide the specific chip models under the hood and let us extract value for customers.

Speaker 1

Great. Thank you very much, Roman. And actually, Roman, I'll stay with you, because the next question is really around the client base as of today: what are the typical contract terms and durations for current customers, and how do we think that will evolve over time?

Speaker 4

Yes. Just to remind everyone, we started less than a year ago; our first commercial customer started in Q4 2023. As of now, we have something like 40 managed clients. And it's important to mention that the customer base is well diversified.

Speaker 4

We don't have any single dominating client. If we talk about the customer profile, most of our customers are AI-centric, Gen AI developers, people for whom AI is their bread and butter. We also see our exposure to enterprise customers growing step by step. Talking about contracts: as of today, since the fleet is H100s, most of our contracts are under one year.

Speaker 4

The reason is that in this market, customers don't feel comfortable committing to H100s for more than a year, which is natural. Again, I want to remind you that next year the fleet will be mostly Blackwells, and we expect that Blackwell contracts will again move toward longer-term arrangements. It's also important to say that for H200s, still Hopper, we see a lot of discussions in the pipeline for one- to two-year contracts, with healthy prices and healthy durations. We also have a lot of on-demand customers, and this is honestly part of our strategy to position ourselves as the most flexible GPU cloud provider today. We really want our customers to benefit from a platform that lets them combine reservations and pay-as-you-go and be more flexible in their capacity planning, because this is a real pain point in the market that we address.

Speaker 4

And again, with Blackwell, we anticipate that contract durations will grow significantly. And when most of the fleet has shifted to Blackwells, the contract mix in the portfolio will shift toward longer terms.

Speaker 1

Great. Thank you very much, Roman. So the next question is about Arkady. But since we have Arkady on the line, I'll let him field it. The question was: how engaged is Arkady with the business today, and what are his plans for the future with respect to Nebius?

Speaker 2

How much engaged? Well, I'm fully engaged. If you ask my family, maybe too much engaged. But seriously, it's a totally new venture, and it's a new startup. If you look at the team and the enthusiasm and the mood, it's just a nice feeling to be there.

Speaker 2

But it's not just a startup. It's a very unusual startup. It's a new project on the one hand; on the other hand, we start with a huge amount of resources. It's not just the team.

Speaker 2

It's also the platform that we have, the hardware and software, and a lot of capital. And this is a huge opportunity to build something really, really big here in the AI infrastructure space, something that will last and will be visible. So I'm engaged. It's a very interesting new game, a new startup. And aside from pure enthusiasm, let me remind you that I personally have a lot riding on this.

Speaker 2

I would say, I don't know, something like maybe 90% of my personal wealth is in this company. So I truly believe that we're building something big and serious here, and we have great prospects. We are very enthused, and we want to get this thing going.

Speaker 1

Thank you very much, Arkady. Ophir, I'll come to you. We have a question around margins and unit economics. Specifically: can you elaborate on Nebius' gross margin and unit economics for the GPUs? And what are the returns on invested capital or payback expectations?

Speaker 3

These are actually three questions.

Speaker 1

Yes, sorry.

Speaker 3

So let's start with the unit economics. Our unit economics are actually different from most of the reference points that investors have. We see that investors compare us to data center providers on the one hand, or to plain-vanilla GPU-as-a-service players, bare metal as we call them, on the other. Most of these players are actually sitting on very long-term contracts with fixed unit economics and margins. We are neither of these two.

Speaker 3

We are actually a truly full-stack provider. So what does that mean? Regarding our unit economics, obviously we do not disclose specific numbers, but let me try to share with you how we think about this. To start with, we believe we are more efficient than our peers.

Speaker 3

Why? We have efficient data centers, we have in-house designed hardware, and we have full-stack capability. Furthermore, we also create value on top of our core GPU cloud. This is already part of our unit economics, and we anticipate that this part will grow. And there is another benefit.

Speaker 3

It allows us to serve a wider customer base, and we believe that this customer base will drive demand in this space in the future. So those are a few words about the unit economics. Now on our potential returns: our returns are already solid, but we are yet to get access to the new generations of GPUs, for which we see huge demand. And with our intention to continue developing our software stack and value-added services, we believe we will be able to improve our returns on invested capital even further. And I think you also asked about payback expectations.

Speaker 3

So the payback period obviously depends on the generation of the GPUs. It can be somewhere around two years for the older ones and much less for the new ones. So it's a little bit premature to talk about it, but we will probably be in a much better position to share specifics once we deploy and sell our first GB200s. Fortunately, we expect to be among the first to do so.

Speaker 3

So hopefully it will not be too long. I hope that answered the questions.

Speaker 1

No, I think you did. There was indeed a lot in the question, but well unpacked. Thank you. So Roman, let me come back to you. So there's a question which is basically in the context of our 2025 guidance range.

Speaker 1

Can you help us understand what is already contractually secured versus elements that might still be uncertain? I guess this relates to things like power supply, GPU procurement, client contracts, etcetera.

Speaker 4

Yeah, thank you, Tom. I think there are really three lines that determine the growth. One is DC capacity, data center capacity, with access to power as a part of it.

Speaker 4

And as Andrey shared before, we have secured the expansion of our core facility in Finland, and we are now in advanced discussions to add more colocation capacity in the U.S.; I expect we will disclose more by the end of the year. On the GPU side, as we already mentioned today, we have a long-standing relationship with NVIDIA, which lets us be in the first line to bring the state-of-the-art new NVIDIA Blackwell platform to customers, and we expect to double down on it in 2025. And client-wise, on the demand side, I think we have quite good visibility through the end of this year.

Speaker 4

For next year, our forecast is based mostly on the capacity available. We believe it's still more of a supply-driven model, because given our current size and the total addressable market, we don't see real limitations to securing enough demand during the next year. So again, three lines: to grow, you need enough DC space, you need secure GPU supply, and you need demand. And we feel pretty comfortable on all three.

Speaker 1

No, that's great. And I suppose I could point out that we already have Blackwell secured, with definitely more to come in 2025. So anyway, the next question is around what remaining links there are back into Russia following the divestment. I can probably take that one. I think the simple answer is that the remaining links are no longer there. In reality, the separation started back in early 2022, when we embarked on the divestment process.

Speaker 1

When that divestment came to completion in July of this year, it severed all of the remaining links. Just to put into context what that means: we don't have any assets in Russia, we don't have any revenue in Russia, and we don't have any employees in Russia.

Speaker 1

And at the technological and data level, all of the links are broken at this stage. So effectively, it was a clean and comprehensive break. It's also probably good to point out that this is not just our own self-assessment. First of all, the divestment transaction that I referred to, which was the largest corporate exit from Russia since the start of the war, had broad support from Western regulators. And the resumption of trading on NASDAQ followed a fairly extensive review process.

Speaker 1

And eventually, they concluded that we were in full compliance with the listing criteria; in other words, the Russia nexus was considered to be gone at that point. So our Russia chapter is over, and we look forward to the next chapter, one that we're very excited about. So next question, Ophir, I'll come back to you: how long will your cash balances last, what are your investment plans for 2016 and beyond, and will you need to raise more external capital beyond that, and in what form?

Speaker 1

And again, apologies, it's a few questions baked into 1.

Speaker 3

And I guess that you meant 2026, not 2016.

Speaker 1

Yes. Sorry, 2026 indeed, yes.

Speaker 3

We have no plans for 2016, actually. But for the future, our first priority by far is clearly CapEx investment into our core Nebius business. So for this reason, our cash sufficiency period at the end of the day is basically a function of how aggressive we want to be in our investment in data centers and in GPUs. Now, given the strong demand for our products and services that we see in the market, our plan is actually to invest aggressively. On the other hand, it is important for us to make sure that we maintain sufficient liquidity to cover our cash burn for a reasonable period of time. But I think it's worth mentioning in this context, as we previously disclosed, that together with Goldman Sachs, our financial advisor, we are exploring different strategic options to accelerate our investment in AI infrastructure even further. And our public status provides us with access to a wide range of instruments and options.

Speaker 3

So to summarize, we plan to move aggressively into AI infrastructure, while keeping sufficient cash for our burn rate and exploring other potential options to move even faster on our plans.

Speaker 1

That's great. Thank you, Ophir. So the next question is around the ARR guidance range of $500,000,000 to $1,000,000,000 for next year. I think Ophir actually covered this to some degree in his first answer, but let me add a couple of points on top of that. That guidance took into account a range of possible scenarios, including the timing of GB200 deliveries, but the key factor here is really the availability of capital.

Speaker 1

There are a couple of things to think about here. As of now, we have around $2,000,000,000 on the balance sheet, and the question is how much of this we can allocate to CapEx. Here there are a couple of points. We've made reference to a potential withholding tax that we may have to cover.

Speaker 1

Depending on how the discussions with the Dutch tax authorities go, a reasonable share of what has been allocated for a potential tax payment could be reallocated towards CapEx. And as Ophir pointed out, the key point here is that there could be a scenario where we don't have any impact from a potential buyback, which would mean we have an opportunity to allocate a lot more capital to AI infrastructure CapEx and deliver on our plans even faster. But again, I think Ophir covered that well in his first answer. So in that scenario, where we're able to reallocate, we estimate that we'll be able to deliver above the midpoint of the $500,000,000 to $1,000,000,000 ARR guidance range that we gave for year-end 2025. Okay.

Speaker 1

Very good. So moving on, for the next question, Ophir, if I can come back to you: what is the strategy for the non-core business units, and are there any plans for those portfolio companies? Just for clarity here, we're talking about Avride, TripleTen and Toloka.

Speaker 3

Yes. First, we truly believe that each one of these businesses is among the leaders in its field, and each one of them has great prospects. That said, as we have said time and again, the majority of our focus and capital is being allocated toward our core AI infrastructure.

Speaker 3

And for this reason, we are very flexible in the strategic development of our other businesses. As one example, for some of these businesses this may include joining forces with strategic partners or seeking external investment, etcetera. So again, we truly believe these businesses will do great and will be profitable for us, but our main focus, in terms of both management attention and capital, is our core AI infrastructure business.

Speaker 1

Very good. Thank you. The next question, which I can take, is around our thinking on Investor Relations going forward. Specifically, do we expect to see broker research coming out soon? And more generally, what are the plans around investor relations over the coming months as we reintroduce our company to the markets?

Speaker 1

So indeed, we had a fairly lengthy and slightly strange period where we were dark while we were finishing the divestment and putting in place all of the infrastructure for this new company. We were very pleased to get back onto NASDAQ, and that marks very much a new chapter and a return to, I would say, slightly more normal life. Exciting, but maybe a bit more normal. So we're definitely starting to re-engage with the various banks to reinitiate sell-side research coverage. That's a process that's underway right now.

Speaker 1

So expect to see more coming out over the coming months as we get back into a more normalized IR cadence, with quarterly reporting going forward. Look out for us at investor conferences and so on over the coming months. And so, yes, apologies for the blackout period, but we're back, and we'll be doing all of that. We've also had a lot of inbound interest from investors, so we're going to be doing a lot of one-on-one calls over the coming weeks.

Speaker 1

So feel free to get in touch with us and we'll engage as much as we can. It's a busy time, but we'll find time. So that's on the IR side. Ophir, I'll come to you perhaps. There's a question about the ClickHouse stake; I'll remind people that we have an approximately 28% stake in ClickHouse.

Speaker 1

The question is: can we give more details on that business and how it's performing? What is the revenue of ClickHouse? And is there a plan to go public? We probably can't address all of that, but Ophir, perhaps you can comment to the extent that we can.

Speaker 3

Yes. Actually, it's very simple. We treat our stake in ClickHouse as a passive investment. So first of all, we don't have any immediate plans for it; right now, we continue to own it.

Speaker 3

Now, as a minority shareholder, we are not in a position to provide any more details on the business, its projections, its business plans, etcetera. It's not for us to say. But we can say, and we are actually very happy to say, that to our understanding the business is well regarded by partners and other market participants. So we are very happy about that.

Speaker 1

Thank you, Ophir. So for the next question, Andrey, I'll come to you on this one. It's around power supply and power access: do we think that we have sufficient access to support future computing needs and GPU requirements?

Speaker 5

Yes, thanks, Tom. Well, I would say that, as I just mentioned, in the short term we are relying on rented capacity plus the expansion of our Finnish data center, and then switching to greenfield and build-to-suit projects, again subject to the available capital. But generally, in the midterm, we don't see problems supporting the growth, even if we are talking about growing by orders of magnitude. So the only challenging time might be the next three to four quarters.

Speaker 5

But I truly believe that we are in a very good position not to be blocked by data center capacity availability. That's it, in short.

Speaker 1

Great. Thank you, Andrey. So the next question is about the U.S.: what expansion plans do you have in the United States?

Speaker 1

Do you already have corporate customers in the U.S.? And generally, how do you see development there? Roman, maybe I can come to you to have a crack at that.

Speaker 4

Yeah, thank you. It's super relevant, since we are in San Francisco right now. I think we can say that we already have a very strong focus on the U.S.

Speaker 4

We see that organically, many of our customers are coming from the U.S. We don't have that much awareness here yet; we've just started, but already a big portion of our customers come from the U.S.

Speaker 4

And in the previous questions, we said that a big part of our customers are AI-focused companies, and obviously many of them are here. So we are developing the team, and we are planning to expand capacity here, as Andrey mentioned already. So yes, I think it will be a super important part of the game.

Speaker 1

Fantastic. And the next question flips to the other side of the pond, to Europe. Arkady, maybe I can come to you on this one: what's the rationale for building the infrastructure currently in Europe? And generally, how do we see the opportunity in Europe?

Speaker 2

Well, first of all, on the corporate side, we are a Dutch company traded on NASDAQ, so historically a European company. Then, after the big split, we inherited a big data center in Finland, which is, as you know, also in Europe, and which we're now tripling into a pretty big facility. We also recently launched Paris, which is also in Europe. And we're discussing several greenfield projects, which will also be in Europe in particular, but not only.

Speaker 2

So Europe definitely has some advantages for us in terms of competition and easy access to stable and cheap power supplies. But at the same time, although we started our infrastructure in Europe, we are building a global business. First of all, we have global customers; more than half of our customers today, I think, come from outside of Europe, from the US first of all. And going forward, we are actively expanding our geography to become a truly global AI infrastructure provider, in particular in the US.

Speaker 2

And there will be some news following very soon about our infrastructure expansion here. We also already announced that we follow our customers: we have opened several offices in the U.S., in San Francisco and Dallas, with New York coming soon. These are mostly sales and services offices. Again, the infrastructure is coming here as well, not only in Europe, and customer services are moving to the US.

Speaker 2

But again, it's not just Europe and not just the US; it will be a global network of data centers and a global service provider. We are looking into other regions as well, pretty actively. So, yeah. Great.

Speaker 1

Sorry, I didn't mean to cut you off.

Speaker 2

No, no, that's actually it. Yeah. So just watch for our coming announcements, which will come very soon.

Speaker 1

Very good. So, Roman, maybe I can come to you. There's a question about how you see customer needs and use cases evolving. Obviously this is a rapidly developing industry, so any color that you can add around that.

Speaker 4

Yes, this is actually a brilliant question that I'd love to answer. I think the most significant shift we see now is a lot of inference scenarios coming. If some time ago most GPU hours were consumed by large training jobs, that is, developing the products, now we see that a lot of compute is consumed serving customers, which we consider a great development of the market and of the industry in general. For us, this shift is also super important, because since we are very much a software platform, when it comes to more complex scenarios we can create much more value for our customers. Another thing to mention is that the range of scenarios and verticals is also diversifying.

Speaker 4

We see, for example, a lot of customers coming from life sciences, biotech and health tech. We see a lot of interest in robotics. Video generation is also booming right now. So we think there will be a lot of sectors and niches where AI penetrates, and our mission here is to support the people who build these products with infrastructure and to develop the platform together with them.

Speaker 1

Great. Thank you. And Roman, let me stay with you, because there's a follow-on from this one: how do you think about the evolution of the customer base going forward over time?

Speaker 4

Yes. I think we have mostly covered it already, but there are a few dimensions. One is the structure of the contracts: we said that down the road, with the new generation of chips, we will again see more long-term contracts, large training jobs will come, and so on.

Speaker 4

Then, from a scenario perspective, we see that the shift to inference is the most important. And from a market-sector perspective, I think we see a more and more diversified portfolio of scenarios and types of tasks that people address with AI. And I think the next big thing is when AI starts to be adopted more in enterprises: right now, most of the customers are AI-native, and then we'll see a lot of adoption in enterprises.

Speaker 4

So that will be an important shift, maybe during the next year.

Speaker 1

Okay. Very good. So, Andrey, maybe I'll come to you. There's a specific question here: do you have the capability to support heterogeneous GPUs?

Speaker 5

In short, yes. First of all, I would just like to mention that we follow the demand that we see in the market, and as the market develops, we are developing what we can provide. At this point in time, NVIDIA is the state-of-the-art solution. But in our R&D, we have a lot of different options in development, and we'll follow the demand and try to deliver the best possible solution to customers.

Speaker 1

Great. So we're coming up on time here. There were a few questions remaining around the number of GPUs: how many we have in operation now, how many we anticipate having by year end, and what the outlook is going into 2025. I'll refer people to the materials we disclosed a couple of weeks ago, because we go into quite a bit of detail around the specific capacity numbers there. So I think you'll find the answers to those questions there.

Speaker 1

But if you have any follow-ups, don't hesitate to come to us. Otherwise, we're coming up on 6 a.m. San Francisco time. So let me thank everybody, management and all of our investors, current and potential, for joining us. We're very happy to be back in the public markets. And as I think Arkady said, this is really just the beginning.

Speaker 1

So we're very excited about continuing the discussion with all of you. With that, thank you very much, and we wish everybody a good rest of the day.