The Biggest Energy Miscalculation of 2024 by Global Leaders – Artificial Intelligence

From EnergyNow.ca

By Maureen McCall

It’s generally accepted that Artificial Intelligence (AI) was launched as a field at Dartmouth College in a 1956 workshop that brought together leading thinkers in computer science and information theory to map out future paths of investigation. Workshop participants John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude E. Shannon coined the term “artificial intelligence” in the proposal they wrote for that conference. The workshop established AI as a field of study, and John McCarthy is generally considered the father of AI.

AI was developed through the 1960s, but in the 1970s and 1980s, a period generally referred to as “the AI Winter,” development stalled amid a focus on the limitations of neural networks. In the late 1980s, advancements resumed with the emergence of connectionism and neural networks. The 1990s and 2000s are considered the beginning of the AI/machine learning renaissance. In the 2010s, further growth was spurred by the expansion of Big Data, deep learning, computing power and large-scale data sets. In 2022, an AI venture capital frenzy took off (the “AI frenzy”), and by 2023 AI had plunged into the mainstream, according to Forbes, which was already tracking applications of AI across various industries.

By early 2024, the implementation of AI across industries was well underway in healthcare, finance, creative fields and business. In the energy industry, digitalization conferences were addressing digital transformation in the North American oil and gas sector, drawing speakers and attendees from E&P majors, midstream, pipeline and LNG companies, along with multiple AI application providers; many of the companies speaking and attending already had AI implementations well underway.

So how did global leaders not perceive the sudden and rapid rise of AI and the power commitments it requires?

How did the 2022 “AI frenzy” of investment and the subsequent industrial adoption stay off the radar of global policymakers until just recently? Venture capital is widely recognized as a driver of innovation and new company formation, and leaders should have foreseen the surge of AI improvement and implementation by “following the money,” so to speak. Perhaps the incessant focus on “blaming and shaming” industry for climate change blinded leaders to the rapid escalation of AI development signaled by the 2022 AI frenzy.

As just one example of this lack of foresight, Canada’s grossly delayed 2024 Fall Economic Statement contained a last-minute insertion of “up to $15 billion in aggregate loan and equity investments for AI data center projects.” This policy afterthought comes two years after the onset of the AI frenzy and more than twelve months behind the industrial adoption of AI. In addition, the Trudeau/Guilbeault partnership is still miscalculating the enormous power requirements of AI.

As an example of the scale of AI’s power requirements, consider the Wonder Valley project, the world’s largest AI data center industrial park, planned for the Greenview Industrial Gateway near Grande Prairie, Alberta. It is intended to “generate and offer 7.5 GW of low-cost power to hyperscalers over the next 5-10 years.” The cost of this one project alone is well beyond the funding offered in the 2024 Fall Economic Statement.

“We will engineer and build a redundant power solution that meets the modern AI compute reliability standard,” said Kevin O’Leary, Chairman of O’Leary Ventures. “The first phase of 1.4 GW will be approximately US$ 2 billion with subsequent annual rollout of redundant power in 1 GW increments. The total investment over the lifetime of the project will be over $70 billion.”
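
To put those figures in context, here is a minimal back-of-the-envelope sketch in Python of the buildout O’Leary describes. It assumes, purely for illustration, that roughly one 1 GW increment comes online per year after the 1.4 GW first phase; the project has not published a firm year-by-year schedule, so the pacing is an assumption, not a reported fact.

# Rough sketch of the Wonder Valley buildout described above.
# Figures from the article: 1.4 GW first phase (~US$2B), then ~1 GW added
# per year toward a 7.5 GW target. The one-increment-per-year pacing is an
# assumption used only for illustration.
TARGET_GW = 7.5
PHASE_ONE_GW = 1.4
PHASE_ONE_COST_USD = 2e9
ANNUAL_INCREMENT_GW = 1.0

print(f"First phase: ~${PHASE_ONE_COST_USD / PHASE_ONE_GW / 1e9:.1f}B per GW of capacity")

capacity_gw, year = PHASE_ONE_GW, 1
print(f"Year {year}: ~{capacity_gw:.1f} GW online")
while capacity_gw < TARGET_GW:
    year += 1
    capacity_gw = min(capacity_gw + ANNUAL_INCREMENT_GW, TARGET_GW)
    print(f"Year {year}: ~{capacity_gw:.1f} GW online")

print(f"~{year} years to reach {TARGET_GW} GW at this pace")

At that pacing the 7.5 GW target lands roughly eight years out, consistent with the 5-10 year window quoted above, and the $70 billion lifetime figure underscores how far beyond the Fall Economic Statement’s $15 billion this single project reaches.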

To further explore the huge power requirements of AI, one can compare individual AI queries with traditional non-AI searches. As reported by Bloomberg, “Researchers have estimated that a single ChatGPT query requires almost 10 times as much electricity to process as a traditional Google search.” Multiply this electricity demand by millions of industrial users as AI implementation continues to expand worldwide. The same Bloomberg article adds: “By 2034, annual global energy consumption by data centers is expected to top 1,580 terawatt-hours—about as much as is used by all of India—from about 500 today.”
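
A minimal sketch of that arithmetic in Python, assuming an illustrative baseline of about 0.3 Wh for a traditional search (a commonly cited estimate; the article itself only gives the roughly tenfold ratio) and a hypothetical daily query volume:

# Back-of-the-envelope arithmetic behind the Bloomberg figures quoted above.
# The 0.3 Wh baseline and the 1B daily query volume are assumptions used only
# to illustrate scale; the article gives just the ~10x ratio and the
# 500 -> 1,580 TWh data-center trajectory.
WH_PER_TRADITIONAL_SEARCH = 0.3   # assumed baseline, watt-hours
AI_MULTIPLIER = 10                # per the article: ~10x a traditional search
DAILY_QUERIES = 1_000_000_000     # hypothetical volume

extra_wh = (AI_MULTIPLIER - 1) * WH_PER_TRADITIONAL_SEARCH * DAILY_QUERIES
print(f"~{extra_wh / 1e6:,.0f} MWh/day of added demand if those searches became AI queries")
print(f"Equivalent to ~{extra_wh / 1e6 / 24:,.0f} MW of around-the-clock generation")

today_twh, twh_2034 = 500, 1_580
print(f"Data centers: ~{twh_2034 / today_twh:.1f}x today's consumption by 2034")

Even under these deliberately modest assumptions, the added load amounts to a power plant’s worth of continuous generation, which is the 24/7 demand the article goes on to describe.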

This is the exponential demand for electricity that North American and global leaders did not see coming: a 24/7 demand that cannot be satisfied by unreliable and costly green energy projects alone and instead requires an “all energies” approach. Exponential AI demand threatens to gobble up supply and dramatically increase electricity prices for consumers. Likewise, leadership does not perceive that North American grids are vulnerable and outdated, and would be unable to deliver the reliable supply required by AI data centers that cannot tolerate even a few seconds of power outage. Grid interconnections are unreliable, as noted in the following excerpt from a September 2024 article on cleanenergygrid.org.

“Our grid, for all of its faults, is now a single interconnected “machine” over a few very large regions of the country. Equipment failures in Arizona can shut the lights out in California, just as overloaded lines in Ohio blacked out 55 million people in eight states from Michigan to Boston – and the Canadian province of Ontario – in 2003.”

AI’s power demands are motivating tech companies to develop more efficient means of building and running AI. Along with pressure to keep fossil fuels in the mix, billions are being invested in alternative energy solutions such as nuclear power produced by small modular reactors (SMRs).

Despite SMR optimism, the reality is that no European or North American SMRs are in operation yet. Only Russia and China have operating SMRs, and most data centers are focusing on affordable natural gas power as the reality sets in that nuclear energy cannot scale quickly enough to meet urgent electricity needs. New SMR plants could possibly be built and operational by 2034, but in 2025 Canada’s power grid is already strained, with electricity demand expected to grow significantly, driven by electric vehicles and data centers for AI applications.

AI has a huge appetite for other resources as well. For example, the most energy- and cost-efficient ways to chill the air in data centers rely on huge quantities of potable water, and the exponential amount of data AI produces will require dramatic expansion of internet networks, along with growing demand for computer chips and the metals they require. There is also an intense talent shortage, creating recruitment competition for the pool of individuals trained by companies like Alphabet, Microsoft and OpenAI.

AI development is now challenging the public focus on climate change. In Canada, as in the U.S. and globally, left-leaning elected officials who focused keenly on policies to advance the elimination of fossil fuels were oblivious to the tsunami of AI energy demand about to swamp their boats. Canadian Member of Parliament Greg McLean, who has served on the House of Commons Standing Committees on Environment, Natural Resources, and Finance, and as Natural Resources critic for His Majesty’s Loyal Opposition, has insight into the reason for the change in focus.

“Education about the role of all forms of energy in technology development and use has led to the logical erosion of the ‘rapid energy transition’ mantra and a practical questioning of the intents of some of its acolytes. The virtuous circle of technological development demanding more energy, and then delivering solutions for society that require less energy for defined tasks, could not be accomplished without the most critical input – more energy. This has been a five-year journey, swimming against the current — and sometimes people need to see the harm we are doing in order to objectively ask themselves ‘What are we accomplishing?’ … ‘What choices are being made, and why?’…. and ‘Am I getting the full picture presentation or just the part someone wants me to focus on?’”

With the election of Donald Trump, the “Trump Transition” now competes with the “Energy Transition” focus, changing the narrative in the U.S. to energy dominance. For example, as reported by Reuters, the U.S. solar industry is now downplaying climate change messaging.

“The U.S. solar industry unveiled its lobbying strategy for the incoming Trump administration, promoting itself as a domestic jobs engine that can help meet soaring power demand, without referencing its role in combating climate change.”

It’s important to note here that the future of AI is increasingly subject to societal considerations as well as technological advancements. Political, ethical, legal, and social frameworks will increasingly impact AI’s development, enabling or limiting its implementations. Since AI applications involve “human teaming” to curate and train AI tools, perceptions of the intent of AI implementations are key. In the rush to implementation, employees at many companies are experiencing changing roles with increased demand for workers to train AI tools and curate results. Will tech optimism be blunted by the weight of extra tasks placed on workers and by suspicions that those workers may ultimately be replaced? Will resistance develop as humans and AI are required to work together more closely?

Business analyst Professor Henrik von Scheel of the Arthur Lok Jack Global School of Business describes the importance of the human factor in AI adoption.

“It’s people who have to manage the evolving environment through these new tools,” von Scheel explains. “It’s been this way ever since the first caveperson shaped a flint, only now the tools are emerging from the fusion of the digital, physical and virtual worlds into cyber-physical systems.”

A conversation with a recent graduate who questioned the implementation of AI, including the design of guardrails and regulations by members of an older generation in management, made me wonder: is a generational conflict brewing? There may be a lack of trust between the large proportion of baby boomers in the workforce, predominantly in management, and younger workers who may not have confidence in the ability of mature management to fully understand and embrace AI technology and to make informed decisions about regulating it.

It’s something to watch in 2025.

Maureen McCall is an energy professional who writes on issues affecting the energy industry.


DeepSeek: The Rise of China’s Open-Source AI Amid US Regulatory Shifts and Privacy Concerns

DeepSeek offers open-source generative AI with localized data storage but raises concerns over censorship, privacy, and disruption of Western markets.

A recent regulatory clampdown in the United States on TikTok, a Chinese-owned social media platform, triggered a surge of users migrating to another Chinese app, Rednote. Now another significant player has entered the spotlight: DeepSeek, a Chinese-developed generative artificial intelligence (AI) platform that is rapidly gaining traction. DeepSeek’s growing popularity raises questions about the effectiveness of measures like the TikTok ban and their ability to curtail Americans’ use of Chinese digital services.

President Donald Trump has called attention to a recent Chinese AI development, describing it as a “wake-up call” for the US tech industry.

Speaking to Republican lawmakers in Florida on Monday evening, the president emphasized the need for America to strengthen its competitive edge against China’s advancements in technology.

During the event, Trump referenced the launch of DeepSeek AI, highlighting its potential implications for the global tech landscape. “Last week, I signed an order revoking Joe Biden’s destructive artificial intelligence regulations so that AI companies can once again focus on being the best, not just being the most woke,” Trump stated. He continued by explaining that he had been closely following developments in China’s tech sector, including reports of a faster and more cost-effective approach to AI.

“That’s good because you don’t have to spend as much money,” Trump remarked, adding that while the claims about this Chinese breakthrough remain unverified, the idea of achieving similar results with lower costs could be seen as an opportunity for US companies. He stressed, “The release of DeepSeek AI from a Chinese company should be a wake-up call for our industries, that we need to be laser-focused on competing to win because we have the greatest scientists in the world.”

Trump also pointed to what he views as a recognition by China of America’s dominance in scientific and engineering talent. “This is very unusual, when you hear a DeepSeek when you hear somebody come up with something, we always have the ideas,” he said. “We’re always first. So I would say that’s a positive that could be very much a positive development.”

DeepSeek, created by a Chinese AI research lab backed by a hedge fund, has made waves with its open-source generative AI model. The platform rivals offerings from major US developers, including OpenAI. To circumvent US sanctions on hardware and software, the company allegedly implemented innovative solutions during the development of its models.

DeepSeek’s approach to sensitive topics raises significant concerns about censorship and the manipulation of information. By mirroring state-approved narratives and avoiding discussions on politically charged issues like Tiananmen Square or Winnie the Pooh’s satirical association with Xi Jinping, DeepSeek exemplifies how AI can be wielded to reinforce government-controlled messaging.

This selective presentation of facts, or outright omission of them, deprives users of a fuller understanding of critical events and stifles diverse perspectives. Such practices not only limit the free flow of information but also normalize propaganda under the guise of fostering a “wholesome cyberspace,” calling into question the ethical implications of deploying AI that prioritizes political conformity over truth and open dialogue.

While DeepSeek provides multiple options for accessing its AI models, including downloadable local versions, most users rely on its mobile apps or web chat interface.

The platform offers features such as answering queries, web searches, and detailed reasoning responses. However, concerns over data privacy and censorship are growing as DeepSeek collects extensive information and has been observed censoring content critical of China.

DeepSeek’s data practices raise alarm among privacy advocates. The company’s privacy policy explicitly states, “We store the information we collect in secure servers located in the People’s Republic of China.”

This includes user-submitted data such as chat messages, prompts, uploaded files, and chat histories. While users can delete chat history via the app, privacy experts emphasize the risks of sharing sensitive information with such platforms.

DeepSeek also gathers other personal information, such as email addresses, phone numbers, and device data, including operating systems and IP addresses. It employs tracking technologies, such as cookies, to monitor user activity. Additionally, interactions with advertisers may result in the sharing of mobile identifiers and other information with the platform. Analysis of DeepSeek’s web activity revealed connections to Baidu and other Chinese internet infrastructure firms.

While such practices are common in the AI industry, privacy concerns are heightened by DeepSeek’s storage of data in China, where stringent cybersecurity laws allow authorities to demand access to company-held information.

The safest option is running local or self-hosted versions of AI models, which prevent data from being transmitted to the developer.

And with DeepSeek, this is simple, as its models are open-source.
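
As a minimal sketch of what self-hosting can look like in practice, the snippet below loads an open-weight DeepSeek model with Hugging Face’s transformers library, so prompts never leave the local machine. The model identifier is illustrative only; check the deepseek-ai organization on Hugging Face for the exact repository names and hardware requirements.

# Minimal local-inference sketch using the Hugging Face transformers library.
# Once the weights are downloaded, nothing here sends prompt data to a remote
# service. The model name below is an assumption/illustration; confirm the
# exact repo under the "deepseek-ai" organization, and note that even small
# distilled models need a capable GPU or ample RAM.
from transformers import pipeline

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative

generator = pipeline("text-generation", model=MODEL_ID)
reply = generator(
    "In one paragraph, explain why running a model locally keeps prompts private.",
    max_new_tokens=120,
)
print(reply[0]["generated_text"])

Tools such as Ollama or llama.cpp offer a similar local-only workflow for users who would rather not write Python.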

Open-source AI stands out as the superior approach to artificial intelligence because it fosters transparency, collaboration, and accessibility. Unlike proprietary systems, which often operate as opaque black boxes, open-source AI allows anyone to examine its code, ensuring accountability and reducing biases. This transparency builds trust, while the collaborative nature of open-source development accelerates innovation by enabling researchers and developers worldwide to contribute to and improve upon existing models.

Additionally, open-source AI democratizes access to cutting-edge technology, empowering startups, researchers, and underfunded regions to harness AI’s potential without the financial barriers of proprietary systems.

It also prevents monopolistic control by decentralizing AI development, reducing the dominance of a few tech giants.

Everyone is freaking out over DeepSeek. Here’s why

From The Deep View

$600 billion collapse

Volatility is kind of a given when it comes to Wall Street’s tech sector. It doesn’t take much to send things soaring; it likewise doesn’t take much to set off a downward spiral.
After months of soaring, Monday marked the possible beginning of a spiral, and a Chinese company seems to be at the center of it.
Alright, what’s going on: A week ago, Chinese tech firm DeepSeek launched R1, a so-called reasoning model that, according to DeepSeek, has reached technical parity with OpenAI’s o1 across a few benchmarks. But unlike its American competition, DeepSeek open-sourced R1 under an MIT license, making it significantly cheaper and more accessible than any of the closed models coming from U.S. tech giants.
  • But the real punchline here doesn’t have to do with R1 at all, but with a previous language model — called V3 — that DeepSeek released in December. DeepSeek was reportedly able to train V3 using a small collection of older Nvidia chips (about 2,000 H800s) at a cost of about $5.6 million.
  • Still, training is only one of many costs tied to AI development and deployment; while the costs of researching, developing, training and operating both R1 and V3 remain unknown or unconfirmed, DeepSeek’s apparent ability to reach technical parity at a far lower cost, without state-of-the-art GPU chips or massive GPU clusters, has major implications for America’s now tenuous position in AI leadership. (Though DeepSeek describes the models as open source, the company did not release its training data.)
Since the release of R1, DeepSeek has become the top free app in Apple’s App Store, bumping ChatGPT to the number two slot. In the midst of its spiking popularity, DeepSeek restricted new sign-ups due to large-scale cyberattacks against its servers. And, as Salesforce CEO Marc Benioff noted, “no Nvidia supercomputers or $100M needed,” a point that the market heard loud and clear.
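
For a sense of the gap the market reacted to, here is a one-line Python comparison of the reported V3 training cost against the roughly $100 million scale Benioff alludes to; both figures are as reported above and remain unverified.

# Ratio of DeepSeek's reported ~$5.6M V3 training run to the ~$100M scale
# referenced above. Both figures are as reported and unverified.
reported_v3_cost, referenced_frontier_cost = 5.6e6, 100e6
print(f"Reported cost is ~{reported_v3_cost / referenced_frontier_cost:.0%} of a $100M training run")
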
What happened: Led by Nvidia, a series of tech and chip stocks, in addition to the three major stock indices, fell hard in pre-market trading early Monday morning. All told, $1.1 trillion of U.S. market cap was erased within a half hour of the opening bell.
  • Performance didn’t get better throughout the day. Nvidia closed Monday down 17%, erasing some $600 billion in market capitalization, a Wall Street record. TSMC was down 14%, Arm was down 11%, Broadcom was down 17%, Google was down 4% and Microsoft was down 2%. The S&P fell 1.4% and the Nasdaq fell 3.3%. An Nvidia spokesperson called R1 an “excellent AI advancement.”
  • This is all going into a week of Big Tech earnings, where Microsoft and Meta will be held to account for the billions of dollars ($80 billion and $65 billion, respectively) they plan to spend on AI infrastructure in 2025, a cost that Wall Street no longer seems to feel quite so good about.
It’s hard to miss the political tensions underlying all of this. The tail end of former President Joe Biden’s time in office was marked in part by an increasingly tense trade war with China, in which both countries issued bans on the export of materials needed to build advanced AI chips. And with President Trump hell-bent on maintaining American leadership in AI, Chinese companies, despite the chip restrictions in place, seem to be turning hardware constraints into motivation for innovation that challenges the American lead, a point they seem keen to drive home.
R1, for instance, was announced at around the same time as OpenAI’s $500 billion Project Stargate: two starkly divergent approaches to AI development.
What’s happening here is that the market has finally come around to the idea that maybe the cost of AI development (hundreds of billions of dollars annually) is too high, a recognition “that the winners in AI will be the most innovative companies, not just those with the most GPUs,” according to Writer CTO Waseem Alshikh. “Brute-forcing AI with GPUs is no longer a viable strategy.”
Wedbush analyst Dan Ives, however, thinks this is just a good time to buy into Nvidia — Nvidia and the rest are building infrastructure that, he argues, China will not be able to compete with in the long run. “Launching a competitive LLM model for consumer use cases is one thing,” Ives wrote. “Launching broader AI infrastructure is a whole other ballgame.”
“I view cost reduction as a good thing. I’m of the belief that if you’re freeing up compute capacity, it likely gets absorbed — we’re going to need innovations like this,” Bernstein semiconductor analyst Stacy Rasgon told Yahoo Finance. “I understand why all the panic is going on. I don’t think DeepSeek is doomsday for AI infrastructure.”
Somewhat relatedly, Perplexity has already added DeepSeek’s R1 model to its AI search engine. And DeepSeek on Monday launched another model, one capable of competitive image generation.
Last week, I said that R1 should be enough to make OpenAI a little nervous. This anxiety spread way quicker than I anticipated; DeepSeek spent Monday dominating headlines at every publication I came across, setting off a debate and panic that has spread far beyond the tech and AI community.
Some are concerned about the national security implications of China’s AI capabilities. Some are concerned about the AI trade. Granted, there are more unknowns here than knowns; we do not know the details of DeepSeek’s costs or technical setup (and the costs are likely way higher than they seem). But this does read like a turning point in the AI race.
In January, we talked about reversion to the mean. Right now, it’s too early to tell how long-term the market impacts of DeepSeek will be. But, if Nvidia and the rest fall hard and stay down — or drop lower — through earnings season, one might argue that the bubble has begun to burst. As a part of this, watch model pricing closely; OpenAI may well be forced to bring down the costs of its models to remain competitive.
At the very least, DeepSeek appears to be evidence that scaling is, one, not a law and, two, not the only (or best) way to develop more advanced AI models, which rains heavily on the parade of OpenAI and company, since it runs contrary to everything OpenAI has been saying for months. Funnily enough, it actually seems like good news for the science of AI, possibly lighting a path toward systems that are less resource-intensive (which is much needed!).
It’s yet another example of the science and the business of AI not being on the same page.