My takeaways from Empire of AI: the book anyone building with AI should read
Last year, AI became my daily companion. Research, writing, deep thinking on behavioral changes, and quite a big dose of FOMO about not being able to keep up the pace. And I know I am not alone: most people around me have started the same relationship with their chatbots.
At the beginning I did not really reflect on it. It was something new, shiny, and exciting that I needed to explore. But after the first big wave of "run, run, don't think" I started to feel like I was in Inception. You know the moment when the characters start questioning things, can't remember how they got there, and realize they're in a dream?
That's how I felt about AI: how did we end up here? When did this become the most important technology ever, something we have to adopt or get left behind?
In the middle of my questions, two people recommended Empire of AI by journalist Karen Hao to me. With journalistic rigor, she digs into how OpenAI was born, the dynamics driving one of the biggest AI companies, and the personal stories of the people building it.
If you want to understand the what and who behind the chatbots, and get a glimpse of the incentives and motivations, I highly recommend reading the book.
Warning: parts of this book are hard to process.
There are stories of dominance and abuse that made me shake. This is not an easy read, but it is a really important one for everyone using, thinking about, and building with AI.
Here are my main takeaways.
My main takeaways from Empire of AI
The book mainly focuses on the rise of OpenAI and mass-marketed chatbots and LLMs. Hao's main thesis is that the companies building these models operate as imperialists.
They come, take what they need without asking, and sacrifice whatever stands in the way.
If that feels dramatic, think about it:
LLMs are trained on data: who gave consent for it to be used?
Data centers need land and drinkable water: were the people living on that land and drinking that water informed?
The models need moderation, at scale and at low cost: who pays the price and has to look at the abusive content?
The empire strikes and takes it all, but we hardly know, reflect, or think about it when using a chatbot.
# 1- The scale that consumes everything
OpenAI was born as a nonprofit, for humanity, with the goal of doing good for the world. They promised AGI, a superior intelligence that would help humanity. Exactly how it would help, and with what, was (and still remains) unclear. But it doesn't matter anymore: along the way, the dream of AGI has been sacrificed on the altar of scale and money.
To understand why scale became everything, we need to understand the power and market dynamics.
After ChatGPT launched, the competition became fierce, and a new vicious circle was unlocked as the race for a dominant market position started:
to get money to finance it all, you need a dominant position
to have a dominant position, you need a product people use and you can monetize
to have a product that people use and you can monetize, you need a better algo
to have a better algo, you need more data
to have more data, you need to take more and more
after taking more, you need more resources to process it all
after the resources, you need speed to market
to get faster to market, you need to cut corners
to cut corners, ethical, safety, and privacy concerns go out the window
One important point: the technology in itself isn't the problem; there are ethical players out there. The challenge is the speed and scale at which the technology is forced to move by the big players.
This scale demands too much. Data centers consume drinkable water before people can. Content moderators are paid as little as possible, exposed to the worst imaginable for a few cents per task: digital labor that feels like digital slavery.
Hao calls this hyper-extraction, and it is something I now constantly think of before mindlessly asking a chatbot to "fix a quick email" because I have become too lazy to think. I know, it might sound like a drop in the ocean, but even drops add up.
The thinking point here is that scale doesn't come for free, and it is important to always keep in mind the four main consequences this development has on society:
It generates an enormous environmental footprint
It demands an amount of data that is becoming toxic and abusive
It creates toxic content, as it is hard to control and filter the data
It gives the illusion of being factual, when it is not
These chatbots can feel like a helping hand, but what is the real cost? And what happens when millions of people use tools that create the illusion of thinking and feeling, but don't?
# 2- Welcome to the era of taking without asking
By now most of us know the basics: LLMs take information from the internet, clean it, organize it, and then… use it for lucrative AI products.
But where does all the training content come from?
At first it was from open sources, people who made their work available. But soon that wasn't enough (remember the scale needed?). The companies building the models needed more, and more, so they started taking without asking.
Love how the chatbot writes? That's the work of countless authors.
Impressed by the artwork it creates? That's the talent of artists, taken to train the models.
Feel like you have your own developer now? That knowledge came from GitHub's public repositories.
The list goes on. Every "wow" moment from a chatbot comes at someone's expense, work taken in most cases without consent. The rights? Nobody cares.
And the taking doesn't stop at artists and creators; sometimes it happens in an extremely subtle way. Remember the mannequin challenge that went viral in 2016? That data was used to train AI models.
Like the old empires, everything is taken without asking or caring: there is no consent, nor anybody who bothers to ask. Everything is taken because scale and speed demand it.
Are you outraged? You should be.
As an author, creator, thinker, or simply someone with principles, this matters. We should demand clarity and fairness. Especially because this spiral of taking is accelerated by something else: greed for power.
# 3- The hyper consolidation: more for fewer
One thing is becoming clear: this AI race is concentrating more and more resources in fewer hands.
At the current pace and scale (and with the lack of profit we have seen so far) only a handful of companies can afford to invest in this technology. This means two things:
More concentration, less diversity
The commercial race for better algorithms became a race to secure the best talent. And the best talent follows the money.
More professors now have double affiliations with universities and private companies, because universities cannot afford the chips or electricity needed for cutting-edge AI research.
As a result of this hyper-concentration, the models are less diverse, creating more and more bias in the algorithms we increasingly use for everything.
More data, more control
Looking at who is behind the technology, we find only a handful of companies, mostly US-based. This concentration is driven by money and resources, and reinforced by political powers adopting private AI technology in a "first one wins" battle against China.
At the heart of it all is a misalignment of incentives: a technology that is still developing gets less regulation so it can reach results faster, is supported by governments, but is optimized for user growth and revenue targets rather than the public's best interest.
Where all this investment is leading, and where the profit will come from, is still unclear. But the person who proposed the scenario that is, in my opinion, closest to what will happen is Benedict Evans. He predicted a new wave of advertising hyper-personalization.
So putting his take and the book's insights together, you get a Black Mirror scenario: these companies collect data as they please, then sell it back to you. We could call this a new frontier of colonization.
# 4- It is not the models, it is the people building them
While writing this I caught myself "humanizing" the models, as if they had their own will. "The model did this, or that." The reality is that the model in itself doesn't do anything.
This is a technique often used to bury the human choices behind "the technology". The latest case at hand is Grok creating abusive images of women without their consent, and Musk downplaying the issue rather than acknowledging the conscious choices his team made to cut corners on safety.
We need to stop humanizing the technology, and start holding humans accountable. In the book, Karen Hao points at some very intentional choices that were made and brought us to where we are today:
Valuing perceived user value over data and privacy concerns, following Silicon Valley's "move fast and break things" philosophy.
Cutting privacy and ethical corners to get to market faster, because the battle to win market share and consumer preference is the only thing that matters (to gain more funding and continue the race).
Building a "cult" where you follow your leader, almost blindly, and without questioning his authority too much.
Feeding hyper-egos of people who want to prove themselves the smartest, biggest, fastest in the world.
One core principle stood out for me throughout the book: always ask where the incentives are, and what is being optimized for. Many of the companies driving and pushing this technology on us are optimizing for profit and scale. That's it.
What started as a technology for humanity's greatest good is now the biggest money bet of the decade, concentrating even more wealth in fewer hands.
So much is happening, so fast, and it is easy to lose sight of what is happening behind the curtains. But understanding matters, especially for those of us building AI products. Be the one who cares, where nobody else does.
Karen Hao ends her book on a hopeful note: another future for AI is possible. But it requires understanding how the system works and intentionally redistributing power, knowledge, and resources outside the empire. Legislation that demands transparency, limits on mass data usage, strong labour and environmental protections. And teaching people how things work, helping them make their own choices.
I'd add one thing for all of us building and scaling products. We've seen the negative consequences that digital products can have, and we are now aware that when we break things it is not only about a database or a line of code, it is about people's wellbeing.
We can choose to care, we can choose to respect, we can choose intention over shortcuts. It won't break the empire overnight, but it matters how we choose to not be part of it.