Inside Amazon’s race to beat OpenAI’s “Stargate” project

Rami Sinno was crouched beside a filing cabinet, wrestling with a beach-ball-sized box, when a loud boom echoed around his laboratory.

“I just dropped tens of thousands of dollars’ worth of material,” he says, laughing.

Straightening up, Sinno reveals the goods: a gleaming silicon wafer, shimmering under the lab’s fluorescent lights. The circular disc is divided into roughly 100 rectangular tiles, each containing billions of microscopic switches. These are the brains of Amazon’s most advanced chip to date: the Trainium 2, announced in December.

For years, artificial intelligence companies have depended on a single firm, Nvidia, to design the cutting-edge chips required to train the world’s most powerful AI models. But as the AI race heats up, cloud giants like Amazon and Google have accelerated their internal efforts to design chips of their own, seeking to capture market share in the rapidly growing cloud-computing industry, which was worth some $900 billion at the beginning of 2025.

This unassuming laboratory in Austin, Texas, is where Amazon is stepping up its push for semiconductor sovereignty. Sinno is a key player. He is the director of engineering at Annapurna Labs, the chip-design subsidiary of Amazon’s cloud arm, Amazon Web Services (AWS). After donning ear protection and swiping his badge to enter a secure room, Sinno proudly shows off a set of Trainium 2 chips that he helped design, running the way they typically would in a data center. He has to shout to be heard over the cacophony of whirring fans, which push the hot air generated by the chips’ intense demand for energy into the building’s air-conditioning system. Each chip could fit comfortably in Sinno’s palm, but the computing infrastructure that surrounds them, the boards, memory, data cables, fans, sensors, transistors, and power supplies, means that a rack of just 64 chips towers over him.

Impressive as this unit may be, it is only a miniature mockup of its natural habitat. Soon, thousands of these refrigerator-sized supercomputers will be installed at several unannounced sites in the United States and linked together to form “Project Rainier,” one of the largest clusters of data centers ever built anywhere in the world, named after the giant mountain that looms on the horizon beyond Amazon’s Seattle headquarters.

Project Rainier is Amazon’s answer to Stargate, the $100 billion project from OpenAI and Microsoft that President Trump announced at the White House in January. Meta and Google are currently building similar so-called “hyperscale” data centers, at a cost of tens of billions of dollars each, to train the next generation of powerful AI models. Big technology companies spent the past decade amassing enormous piles of cash; now they are all spending it in a race to build the vast physical infrastructure needed to create AI systems that, they believe, will fundamentally change the world. Computing infrastructure at this scale is unprecedented in human history.

The exact number of chips involved in Project Rainier, the data centers’ total cost, and their locations are closely guarded secrets. (Although Amazon will not comment on Rainier’s cost specifically, the company has said it expects to invest some $100 billion in 2025, with the majority going toward AWS.) The sense of competition is fierce. Amazon claims the finished project will be “the world’s largest AI compute cluster”: bigger, yes, than Stargate. Employees here reach for fighting words when asked about challengers like OpenAI. “It’s easy to announce Stargate,” says Gadi Hutt, a product lead at Annapurna. “Let’s see it get built first.”

Amazon is building Project Rainier specifically for a single customer: the AI company Anthropic, which has signed a long-term lease on the massive sites. (How long is classified, too.) The chips inside Rainier will give Anthropic five times more computing power than the systems used to train its best current models. “It’s way, way, way bigger,” says Tom Brown, an Anthropic co-founder.

Nobody knows what this huge leap in firepower will produce. Anthropic’s CEO, Dario Amodei, has publicly predicted the arrival of “powerful AI” (the term he prefers to artificial general intelligence, or AGI: technology that can perform most tasks better and faster than human experts) as early as 2026. That means Anthropic believes there is a strong possibility that Project Rainier, or one of its competitors, will be the place where AGI is born.

The flywheel effect

Anthropic is not just an Amazon customer; it is also partly owned by the tech giant. Amazon has invested $8 billion in Anthropic for a minority stake in the company. Much of that money will, in a roundabout way, end up being spent on AWS data-center rental costs. This strange relationship reveals an interesting aspect of the forces driving the AI industry: Amazon is essentially using Anthropic as proof of concept that its AI data-center business works.

It is a dynamic similar to Microsoft’s relationship with OpenAI and Google’s relationship with its subsidiary DeepMind. “Having a frontier lab on your cloud is a way of improving your cloud,” says Brown, the Anthropic co-founder who manages the company’s relationship with Amazon. He compares it to AWS’s partnership with Netflix: in the early 2010s, Netflix was among AWS’s first major customers. Because of the enormous infrastructure challenge of delivering fast video streaming to users all over the world, “it meant AWS got all the feedback they needed to make all their different systems work at that scale,” Brown says. “They blazed the trail for the whole cloud industry.”

Brown says all the cloud providers are now trying to repeat that playbook in the AI era. “They want someone who will go through the woods and use a machete to cut a path, because no one has gone down this path before. But once you do it, there’s a nice trail, and everyone can follow you.” By investing in Anthropic, which spends most of that money on AWS, Amazon is building what it likes to call a flywheel: a self-reinforcing process that helps it develop more advanced chips and data centers, which lowers the cost of the compute required to run AI systems, which in turn attracts more customers over the long term. Startups like OpenAI and Anthropic get the glory, but the real winners are the big technology companies that run the world’s major cloud platforms.

To be sure, Amazon still relies heavily on Nvidia chips. Meanwhile, Google’s custom chips, known as TPUs, are considered by many in the industry to be superior to Amazon’s. Nor is Amazon the only big technology company with a stake in Anthropic: Google has also invested some $3 billion for a 14% share. Anthropic uses both Google’s and Amazon’s clouds in an effort to avoid depending on either one. For all that, Project Rainier and the Trainium 2 chips that will fill its data centers are the culmination of Amazon’s effort to accelerate its flywheel into pole position.

The Trainium 2 chips, Sinno says, were designed with the help of intensive feedback from Anthropic, which shared details with AWS about how its software interacted with Trainium 1 hardware and made suggestions for improving the next generation of chips. Such close collaboration is not typical for an AWS customer, Sinno says, but it is essential for anyone competing at the AI frontier. A model’s capabilities are largely a function of how much compute is spent training and running it, so the more compute you have, the better your AI. “At the scale they run at, every percentage point of performance improvement is of immense value,” Sinno says of Anthropic. “The more they can squeeze out of the infrastructure, the better the return on investment for them, as a customer.”

The more chips Amazon designs in-house, the less it needs to rely on market leader Nvidia, where demand far outstrips supply, meaning Nvidia can pick and choose its customers while charging far above production cost. But there is another dynamic at play, too, one that Annapurna employees hope will give Amazon a long-term structural advantage. Nvidia sells physical chips (known as graphics processing units, or GPUs) directly to customers, which means each GPU must be optimized to run on its own. Amazon, by contrast, doesn’t sell its training chips at all. It simply sells access to them, running in AWS data centers. That means Amazon can pursue efficiencies that Nvidia would find hard to replicate. “We have many degrees of freedom,” Hutt says.

Back in the lab, Sinno returns the silicon wafer to its box and moves to another part of the room, pointing out the various stages of the design process that may, very soon, summon powerful new AIs into existence. He enthusiastically rattles off statistics about Trainium 3, expected later this year, which he says will be twice as fast and 40% more energy-efficient than its predecessor. Neural networks running on Trainium 2 chips, he says, helped the team design the next chip. It is a sign of how AI is already accelerating its own development, in a process that gets faster and faster. “It’s a flywheel,” Sinno says. “Absolutely.”
