AI's Insatiable Appetite: Are We Trading Bandwidth for Brainpower?
Amazon's plan for a new trans-Atlantic subsea cable to support AI workloads is just the latest symptom of a much larger trend: AI is becoming a data hog of unprecedented proportions. The numbers are staggering, and frankly, they're starting to look unsustainable. It's not just about faster downloads or smoother streaming anymore; it's about the fundamental infrastructure required to feed the AI beast.
The Terrestrial Cost of Digital Dreams
The article in The Guardian lays it out: Nvidia's valuation exceeds Germany's entire annual economic output. (That's Germany, the powerhouse of Europe.) And that's just one company. The cloud infrastructure deals—OpenAI promising to spend nearly $600 billion with various providers—are equally mind-boggling. Alphabet alone has revised its capital expenditure upward, to as much as $93 billion for the coming year. Why? AI. All those parameters, the trillions of unseen knobs tweaked inside large language models, require massive computing power and, critically, massive data transfer.
The physical footprint is equally staggering. The Tahoe-Reno Industrial Center, home to data centers for Google, Microsoft, and Tesla, spans tens of thousands of acres. Dara Kerr's description of data centers "multiple football fields long" snaking through the Nevada desert isn't just a vivid image; it's a tangible representation of AI's material cost. It's easy to get lost in the digital abstraction, but these are real buildings consuming real energy and requiring real resources.
The Truthiness Problem
And here's where I start to get concerned. The article "The number one sign you're watching an AI video" highlights a disturbing trend: AI-generated content is often passed off as genuine, and one of the easiest ways to spot it right now is poor image quality. But Hany Farid, a computer-science professor at Berkeley, admits this tip will soon be useless. AI is getting better, faster. The implication is clear: we're heading toward a world where discerning reality from fabrication becomes increasingly difficult. That has real-world consequences for trust, for journalism, and for the very fabric of our society.

But let's pause and ask a crucial question: how is "image quality" even being measured here? Is it pixel density? Color accuracy? The presence of artifacts? The lack of specific metrics makes it hard to quantify the problem, but the anecdotal evidence—the proliferation of deepfakes and AI-generated "news"—is hard to ignore. And this is the part of the analysis that I find genuinely puzzling. We're investing trillions in infrastructure to support AI, but are we investing enough in the tools to detect AI-generated fakery? The discrepancy between the investment in creating AI and the investment in verifying AI seems dangerously skewed.
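For what it's worth, the metrics question does have concrete candidate answers. Pixel-level fidelity, for instance, is often summarized with PSNR (peak signal-to-noise ratio), which scores how closely a candidate image matches a reference. Here's a minimal sketch; the pixel values are invented for illustration, and real detection pipelines are far more sophisticated:

```python
import math

def psnr(reference, candidate, max_value=255):
    """Peak signal-to-noise ratio (in dB) between two equal-sized pixel lists."""
    if len(reference) != len(candidate):
        raise ValueError("images must be the same size")
    # Mean squared error between corresponding pixels
    mse = sum((r - c) ** 2 for r, c in zip(reference, candidate)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

# Hypothetical example: an original 8-pixel patch vs. a slightly noisy copy.
original = [120, 125, 130, 128, 122, 119, 131, 127]
noisy = [118, 127, 129, 130, 121, 120, 129, 126]
print(round(psnr(original, noisy), 1))  # higher dB means closer to the reference
```

The catch, of course, is that PSNR measures degradation against a known original; spotting a wholly synthetic video has no reference to compare against, which is exactly why the detection problem is so much harder than the generation problem.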
The other problem? AI's "essential use case," as The Guardian points out, seems to be "cheating on homework assignments." According to MIT researchers, 95% of corporate AI pilots have failed. So we're building this massive infrastructure, and consuming vast resources, to support a technology that has yet to prove its real-world utility.
Are We Being Fooled?
The AI boom is creating a massive demand for bandwidth and computing power. Amazon's new subsea cable is a direct response to this demand. But are we sure that the benefits of AI justify the costs? Are we blindly investing in a technology that will ultimately erode trust and consume unsustainable amounts of resources?
I'm not suggesting that we abandon AI development altogether. But we need to be more critical, more data-driven, and more aware of the potential pitfalls. We need to ask tougher questions about the real-world applications of AI, the resources it consumes, and the societal impact it will have.
