
Artificial Intelligence

Poll: Despite global pressure, Americans want the tech industry to slow down on AI


From The Deep View

A little more than a year ago, the Future of Life Institute published an open letter calling for a six-month moratorium on the development of AI systems more powerful than GPT-4. Of course, the pause never happened (and we didn’t seem to stumble upon superintelligence in the interim, either), but it did elicit a narrative from the tech sector that, for a number of reasons, a pause would be dangerous.
  • One of these reasons was simple: sure, the European Union could potentially institute a pause on development, and maybe the U.S. could do so as well, but nothing would require other countries to pause, which would let those countries (namely, China and Russia) get ahead of the U.S. in the ‘global AI arms race.’
As the Pause AI organization themselves put it: “We might end up in a world where the first AGI is developed by a non-cooperative actor, which is likely to be a bad outcome.”
But new polling shows that American voters aren’t buying it.
The details: A recent poll conducted by the Artificial Intelligence Policy Institute (AIPI) — and first published by Time — found that Americans would rather fall behind in that global race than skimp on regulation.
  • 75% of Republicans and 75% of Democrats said that “taking a careful controlled approach” to AI — namely by curtailing the release of tools that could be leveraged by foreign adversaries against the U.S. — is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.”
  • A majority of voters are also in favor of the application of more stringent security measures at the labs and companies developing this tech.
The polling additionally found that 50% of voters surveyed think the U.S. should use its position in the AI race to prevent other countries from building powerful AI systems by enforcing “safety restrictions and aggressive testing requirements.”
Only 23% of Americans polled believe that the U.S. should eschew regulation in favor of being the first to build a more powerful AI.
  • “What I perceive from the polling is that stopping AI development is not seen as an option,” Daniel Colson, the executive director of the AIPI, told Time. “But giving industry free rein is also seen as risky. And so there’s the desire for some third way.”
  • “And when we present that in the polling — that third path, mitigated AI development with guardrails — is the one that people overwhelmingly want.”
This comes as federal regulatory efforts in the U.S. remain stalled, with the focus shifting to uneven state-by-state regulation.
Previous polling from the AIPI has found that a vast majority of Americans want AI to be regulated and wish the tech sector would slow down on AI; they don’t trust tech companies to self-regulate.
Colson has told me in the past that the American public is hyper-focused on security, safety and risk mitigation; polling published in May found that “66% of U.S. voters believe AI policy should prioritize keeping the tech out of the hands of bad actors, rather than providing the benefits of AI to all.”
Underpinning all of this is a layer of hype and an incongruity of definition. It is not clear what “extremely powerful” AI means, or how it would be different from current systems.
Unless artificial general intelligence is achieved (and agreed upon in some consensus definition by the scientific community), I’m not sure how you measure “more powerful” systems. As current systems go, “more powerful” doesn’t mean much more than predicting the next word at slightly greater speeds.
  • Aggressive testing and safety restrictions are a great idea, as is risk mitigation.
  • However, I think it remains important for regulators and constituents alike to be aware of what risks they want mitigated. Is the focus on mitigating the risk of a hypothetical superintelligence, or is it on mitigating the reality of algorithmic bias, hallucination, environmental damage, etc.?
Do people want development to slow down, or deployment?
To once again call back Helen Toner’s comment of a few weeks ago: how is AI affecting your life, and how do you want it to affect your life?
Regulating a hypothetical is going to be next to impossible. But if we establish the proper levels of regulation to address the issues at play today, we’ll be in a better position to handle that hypothetical if it ever does come to pass.



Everyone is freaking out over DeepSeek. Here’s why


From The Deep View

$600 billion collapse

Volatility is kind of a given when it comes to Wall Street’s tech sector. It doesn’t take much to send things soaring; it likewise doesn’t take much to set off a downward spiral.
After months of soaring, Monday marked the possible beginning of a spiral, and a Chinese company seems to be at the center of it.
Alright, what’s going on: A week ago, Chinese tech firm DeepSeek launched R1, a so-called reasoning model, that, according to DeepSeek, has reached technical parity with OpenAI’s o1 across a few benchmarks. But, unlike its American competition, DeepSeek open-sourced R1 under an MIT license, making it significantly cheaper and more accessible than any of the closed models coming from U.S. tech giants.
  • But the real punchline here doesn’t have to do with R1 at all, but with a previous language model — called V3 — that DeepSeek released in December. DeepSeek was reportedly able to train V3 using a relatively small collection of Nvidia’s reduced-capability, export-compliant H800 chips (about 2,000 of them) at a cost of about $5.6 million.
  • Still, training is only one cost of many tied to AI development and deployment; while the costs associated with researching, developing, training and operating both R1 and V3 remain either unknown or unconfirmed, DeepSeek’s apparent ability to reach technical parity at a far lower cost, without state-of-the-art GPU chips or massive GPU clusters, has a lot of implications for America’s now tenuous position in AI leadership. (Though DeepSeek says R1 is open-source, the company did not release its training data.)
Since the release of R1, DeepSeek has become the top free app in Apple’s App Store, bumping ChatGPT to the number two slot. In the midst of its spiking popularity, DeepSeek restricted new sign-ups due to large-scale cyberattacks against its servers. And, as Salesforce CEO Marc Benioff noted, “no Nvidia supercomputers or $100M needed,” a point that the market heard loud and clear.
What happened: Led by Nvidia, a series of tech and chip stocks, in addition to the three major stock indices, fell hard in pre-market trading early Monday morning. All told, $1.1 trillion of U.S. market cap was erased within a half hour of the opening bell.
  • Performance didn’t get better throughout the day. Nvidia closed Monday down 17%, erasing some $600 billion in market capitalization, a Wall Street record. TSMC was down 14%, Arm was down 11%, Broadcom was down 17%, Google was down 4% and Microsoft was down 2%. The S&P fell 1.4% and the Nasdaq fell 3.3%. An Nvidia spokesperson called R1 an “excellent AI advancement.”
  • This is all going into a week of Big Tech earnings, where Microsoft and Meta will be held to account for the billions of dollars ($80 billion and $65 billion, respectively) they plan to spend on AI infrastructure in 2025, a cost that Wall Street no longer seems to feel quite so good about.
It’s hard to miss the political tensions underlying all of this. The tail end of former President Joe Biden’s time in office was marked in part by an increasingly tense trade war with China, wherein both countries issued bans on the export of materials needed to build advanced AI chips. And with President Trump hell-bent on maintaining American leadership in AI, Chinese companies, despite the chip restrictions in place, seem to be turning hardware challenges into motivation for innovation that threatens the American lead, something they seem keen to drive home.
R1, for instance, was announced at around the same time as OpenAI’s $500 billion Project Stargate, two starkly divergent approaches.
What’s happening here is that the market has finally come around to the idea that maybe the cost of AI development (hundreds of billions of dollars annually) is too high, a recognition “that the winners in AI will be the most innovative companies, not just those with the most GPUs,” according to Writer CTO Waseem Alshikh. “Brute-forcing AI with GPUs is no longer a viable strategy.”
Wedbush analyst Dan Ives, however, thinks this is just a good time to buy into Nvidia — Nvidia and the rest are building infrastructure that, he argues, China will not be able to compete with in the long run. “Launching a competitive LLM model for consumer use cases is one thing,” Ives wrote. “Launching broader AI infrastructure is a whole other ballgame.”
“I view cost reduction as a good thing. I’m of the belief that if you’re freeing up compute capacity, it likely gets absorbed — we’re going to need innovations like this,” Bernstein semiconductor analyst Stacy Rasgon told Yahoo Finance. “I understand why all the panic is going on. I don’t think DeepSeek is doomsday for AI infrastructure.”
Somewhat relatedly, Perplexity has already added DeepSeek’s R1 model to its AI search engine. And DeepSeek on Monday launched another model, one capable of competitive image generation.
Last week, I said that R1 should be enough to make OpenAI a little nervous. This anxiety spread way quicker than I anticipated; DeepSeek spent Monday dominating headlines at every publication I came across, setting off a debate and panic that has spread far beyond the tech and AI community.
Some are concerned about the national security implications of China’s AI capabilities. Some are concerned about the AI trade. Granted, there are more unknowns here than knowns; we do not know the details of DeepSeek’s costs or technical setup (and the costs are likely way higher than they seem). But this does read like a turning point in the AI race.
In January, we talked about reversion to the mean. Right now, it’s too early to tell how long-term the market impacts of DeepSeek will be. But, if Nvidia and the rest fall hard and stay down — or drop lower — through earnings season, one might argue that the bubble has begun to burst. As a part of this, watch model pricing closely; OpenAI may well be forced to bring down the costs of its models to remain competitive.
At the very least, DeepSeek appears to be evidence that scaling is, one, not a law and, two, not the only (or best) way to develop more advanced AI models. That rains heavily on the parade of OpenAI and co., since it runs contrary to everything OpenAI has been saying for months. Funnily enough, it actually seems like good news for the science of AI, possibly lighting a path toward systems that are less resource-intensive (which is much needed!).
It’s yet another example of the science and the business of AI not being on the same page.

World Economic Forum pushes digital globalism that would merge the ‘online and offline’


From LifeSiteNews

By Frank Wright

If we do not limit the freedom of reach of AI now, we will have neither liberty nor security. The digital world is already here. Who will watch whom, and according to whose rules? With the World Economic Forum, you get policed by liberal extremists.

The real-world influence of the World Economic Forum (WEF) is certainly waning – which may explain a fresh report of its push towards digital globalism.

A white paper published by the WEF last November is a roadmap for a transition from the real to the virtual world. This transition is not only about methods of governing, of course.

It means the mass migration of humanity into a virtual world.

As the document says, the World Economic Forum is calling for “global collaboration” to “redefine the norms” of a future digital state, which it calls “the metaverse.”

Merging online and offline

Titled “Shared Commitments in a Blended Reality: Advancing Governance in the Future Internet,” this agenda presumes a borderless reality for humans in which “online and offline” are merged.

As usual, there is a disturbing method in the diabolical madness of the WEF. Saying that the required technology has already arrived, it urges “aligning global standards and policies of internet governance” to moderate our increasingly digital lives.

Yet this is not about policing online speech. It is about ruling the new “blended reality.”

Mentioning mobile phones, virtual reality and the refinement of artificial intelligence in predicting and reproducing human activity, the WEF report states: “These technologies are blurring the line between online and offline lives, creating new challenges and opportunities … that require a coordinated approach from stakeholders for effective governance.”

Stakes and their holders

Yet the people holding the stakes in this online and offline game of life are not only globalists like Schwab and Soros. The vampire hunters of populism are all strong critics of globalism – the replacement of all nation states with a single world government.

It would seem that the WEF’s dream of digital globalism may be terminally interrupted by the new software running through the machinery of power.

Yet digital globalism is not the only game in town.

Amidst the welcome relief and tremendous hope sparked in the West by Trump’s “Common Sense Revolution,” there is a devil in the details of the death of the liberal order.

The algorithm of power is not going anywhere. It is here, now, and it is simply a question of how far it goes.

Digital globalism, or national digitalism?

Digital globalism may simply be swapped for national digitalism – government by algorithm in one country. Its values are not liberal, which is a change. Yet neither are the values of China, where a form of digitalism has long been established.

It is worthwhile taking a look at the community whose guidelines may rule your “online and offline” life in the absence of those of the globalists.

Here is an announcement from one globalist “datagarch,” Oracle’s Larry Ellison, one of the billionaires whose monopoly on your data has enriched their lives at the expense of the capture of yours. Ellison says “citizens will be on their best behavior” with an all-pervasive AI surveillance system.


Oracle’s founder CEO has said a government powered by AI could make everyone safer – because everyone would be under permanent surveillance. Comforting, isn’t it?

Ellison was named after his place of arrival in the U.S. – Ellis Island. In 2017 he donated $16 million to the Israeli army, calling Israel “our home.”

Wikipedia states, “As of January 20, 2025, he is the fourth-wealthiest person in the world, according to Bloomberg Billionaires Index, with an estimated net worth of US$188 billion, and the second wealthiest in the world according to Forbes, with an estimated net worth of $237 billion.”

In 2021, he offered Benjamin Netanyahu a “lucrative position on the board of Oracle.” That may help explain why Netanyahu, with such friends in very high places, has such an extraordinary influence on almost every single member of the U.S. House and Senate.

Ellison’s Oracle was named after a database he created for the CIA in his first major programming project. In fact, “the CIA made Larry Ellison a billionaire,” as Business Insider reported.

What kind of values inspire his vision of digital governance? His biography supplies one answer:

“Ellison says that his fondness for Israel is not connected to religious sentiments but rather due to the innovative spirit of Israelis in the technology sector.”

Israel has a massive, lucrative military-industrial complex and related software industry, as revealed in “The Palestine Laboratory: How Israel Exported Its Occupation to the World” by Antony Loewenstein, one of many Jewish writers who have become highly critical of the surveillance industry.

Israel’s “innovation” includes the use of predictive AI to identify, target and kill people, and systems like Pegasus – which can enter literally any phone or computer undetected and read everything. It is an astonishingly powerful program that sells for a high price and earns Israel a lot of income.

The company that makes the “no-click” spyware Pegasus is called NSO. This Israeli company was blacklisted by the U.S. in 2021 to prevent its undetectable intrusion into phones and computers from being used against Americans by any company or agency that buys it.

On January 10, an Israeli report said that Donald Trump’s Gaza ceasefire deal could see these sanctions lifted.

Do you buy the idea that this will make you safe? Do you think AI will be effective? Ellison thinks so. He says AI can produce “new mRNA vaccines in 48 hours to cure cancer.”

Do you want to live in his world? 

Buyer beware

Buyer – beware. The algorithm of digital power is here, and it is powered by data mined from your life.

People like Oracle’s Ellison, Palantir’s Alex Karp, Facebook’s Mark Zuckerberg, and Google’s Larry Page and Sergey Brin are all data miners. So is X’s Elon Musk – who is the only one of the data oligarchs warning you that AI needs to be controlled by humans – and not the other way around.


Two forms of digital tyranny

So what are the dangers? Under the “metaverse” proposed by the WEF, your life can be partnered with a “digital twin.”

This is the symbiotic merger of human with machine presented as the vision of our future by Klaus Schwab and the digital globalists.

Of course, your online life can be suspended or even ended if you violate the community guidelines. These rules are not written by people who agree with you.

Some people you may agree with are proposing quite the reverse. Under the algorithm of the “national digigarchy,” you will be watched, recorded, filed, and assessed for the potential commission of future crimes. You will be free to say what you like online, but depending on what you say, maybe only the algorithm will see you.

And what it sees it will never forget.

Limiting the reach of AI

If we do not limit the freedom of reach of artificial intelligence now, we will have neither liberty nor security.

The digital world is already here. Who will watch whom, and according to whose rules? With the World Economic Forum, you get policed by liberal extremists. You will be free to agree with Net Zero, degeneracy, denationalization, and a diet of meat-like treats supplied to the wipe-clean mausoleum in which you will cleanly and efficiently live.

Yet the emerging alternative also says that the rule of machines will make everything safe and effective.

Safe and effective AI?

Alex Karp sells his all-seeing Palantir as the only guarantee of public safety. He also says your secrets are safe with him – because he is “a deviant” who might like to take drugs or have an affair.

After years of crisis manufactured by policy, and with the West sick of liberal insanity, this moment of tremendous relief contains a serious threat. More people than ever have the number of the globalists, and it is not a number most faithful Christians would want to call.

People generally have seen what the WEF is selling, and they are not buying it. The danger presented by the likes of Schwab is now out in the open, shouting the quiet part out loud.

As liberal-globalist bureaucracies like these become more isolated in the Trump Revolution, they will fight for their lives. In doing so, they are displaying their true intentions. This is the only thing they can do to survive.

Everyone will see what is really on offer, few will want this devil’s bargain, and so the business model will go bust.

Yet this is not the only dangerous game being played with your life.

Beware the specter at the feast

The data miners whose programs refine the algorithm of power are selling you a new digital reality. They are telling you that it will make you safe – because everyone will be watched, forever, by machines which have no values and no heart at all, whether liberal or otherwise.

If we are not watching out, no one will notice that the new algorithm of digital power has simply been limited to the West.

In Shakespeare’s play it was the guilty man, Macbeth, who saw the specter at the feast he held for his coronation.

The ghost in the machine is not dead. The danger is that the innocent may not see it or may foolishly not want to see it. Yet it sees you. This is the algorithm of power, and for now – but not for long – we still have the power to say who it watches – and where.
