
Poll: Despite global pressure, Americans want the tech industry to slow down on AI


From The Deep View

A little more than a year ago, the Future of Life Institute published an open letter calling for a six-month moratorium on the development of AI systems more powerful than GPT-4. Of course, the pause never happened (and we didn’t seem to stumble upon superintelligence in the interim, either), but it did elicit a narrative from the tech sector that, for a number of reasons, a pause would be dangerous.
  • One of these reasons was simple: sure, the European Union could institute a pause on development, and maybe the U.S. could as well, but nothing would require other countries to pause, which would let those countries (namely, China and Russia) get ahead of the U.S. in the ‘global AI arms race.’
As the Pause AI organization itself put it: “We might end up in a world where the first AGI is developed by a non-cooperative actor, which is likely to be a bad outcome.”
But new polling shows that American voters aren’t buying it.
The details: A recent poll conducted by the Artificial Intelligence Policy Institute (AIPI) — and first published by Time — found that Americans would rather fall behind in that global race than skimp on regulation.
  • 75% of Republicans and 75% of Democrats said that “taking a careful controlled approach” to AI — namely by curtailing the release of tools that could be leveraged by foreign adversaries against the U.S. — is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.”
  • A majority of voters are also in favor of the application of more stringent security measures at the labs and companies developing this tech.
The polling additionally found that 50% of voters surveyed think the U.S. should use its position in the AI race to prevent other countries from building powerful AI systems by enforcing “safety restrictions and aggressive testing requirements.”
Only 23% of Americans polled believe that the U.S. should eschew regulation in favor of being the first to build a more powerful AI.
  • “What I perceive from the polling is that stopping AI development is not seen as an option,” Daniel Colson, the executive director of the AIPI, told Time. “But giving industry free rein is also seen as risky. And so there’s the desire for some third way.”
  • “And when we present that in the polling — that third path, mitigated AI development with guardrails — is the one that people overwhelmingly want.”
This comes as federal regulatory efforts in the U.S. remain stalled, with the focus shifting to uneven state-by-state regulation.
Previous polling from the AIPI has found that a vast majority of Americans want AI to be regulated and wish the tech sector would slow down on AI; they don’t trust tech companies to self-regulate.
Colson has told me in the past that the American public is hyper-focused on security, safety and risk mitigation; polling published in May found that “66% of U.S. voters believe AI policy should prioritize keeping the tech out of the hands of bad actors, rather than providing the benefits of AI to all.”
Underpinning all of this is a layer of hype and an incongruity of definition. It is not clear what “extremely powerful” AI means, or how it would be different from current systems.
Unless artificial general intelligence is achieved (and given a consensus definition by the scientific community), I’m not sure how you measure “more powerful” systems. As current systems go, “more powerful” doesn’t mean much more than predicting the next word at slightly greater speeds.
  • Aggressive testing and safety restrictions are a great idea, as is risk mitigation.
  • However, I think it remains important for regulators and constituents alike to be aware of what risks they want mitigated. Is the focus on mitigating the risk of a hypothetical superintelligence, or is it on mitigating the reality of algorithmic bias, hallucination, environmental damage, etc.?
Do people want development to slow down, or deployment?
To once again call back Helen Toner’s comment of a few weeks ago: how is AI affecting your life, and how do you want it to affect your life?
Regulating a hypothetical is going to be next to impossible. But if we establish the proper levels of regulation to address the issues at play today, we’ll be in a better position to handle that hypothetical if it ever does come to pass.



US House report exposes Biden admin push to use AI for censorship of ‘misinformation’


From LifeSiteNews

By Didi Rankovic

In a recent report, the U.S. House Select Subcommittee on the Weaponization of the Federal Government included proposed steps to ensure that future federal governments do not use AI for censorship, such as new legislation and decentralized development.

For a while now, the Biden-Harris administration, along with the EU, the UK, Canada, the UN, and others, has treated emerging AI as a scourge that powers dangerous forms of “disinformation,” one that should be dealt with accordingly.

According to those governments and entities, the only “positive use” for AI, as far as social media and online discourse go, would be to power more effective censorship (“moderation”).

A new report from the U.S. House Judiciary Committee and its Select Subcommittee on the Weaponization of the Federal Government identifies the push to use this technology for censorship as the explanation for the often disproportionate alarm over its role in “disinformation.”

We obtained a copy of the report for you here.

The interim report’s name spells out its authors’ views on this quite clearly: the document is called, “Censorship’s Next Frontier: The Federal Government’s Attempt to Control Artificial Intelligence to Suppress Free Speech.”

The report’s main premise is well-known – that AI is now being funded, developed, and used by the government and third parties to add speed and scale to their censorship, and that the outgoing administration has been putting pressure on AI developers to build censorship into their models.

What’s new are the proposed steps to remedy this situation and make sure that future federal governments do not use AI for censorship. To this end, the Committee wants to see new legislation passed in Congress and AI development that respects the First Amendment and is open, decentralized, and “pro-freedom.”

The government should also be prohibited from funding censorship-related research or collaboration with foreign entities on AI regulation that leads to censorship.

Lastly, “[a]void needless AI regulation that gives the government coercive leverage,” the document recommends.

The Committee notes that the Biden-Harris administration made a number of direct moves to regulate the space to its political satisfaction via executive orders, and also pushed its policy through by giving out grants via the National Science Foundation aimed at building AI tools that “combat misinformation.”

But “[i]f allowed to develop in a free and open manner, AI could dramatically expand Americans’ capacity to create knowledge and express themselves,” the report states.

Reprinted with permission from Reclaim The Net.



World’s largest AI chip builder Taiwan wants Canadian LNG


Taiwan Semiconductor Manufacturing Company’s campus in Nanjing, China

From the Canadian Energy Centre

By Deborah Jaremko

Canada inches away from first large-scale LNG exports

The world’s leading producer of semiconductor chips wants access to Canadian energy as demand for artificial intelligence (AI) rapidly advances.  

Specifically, Canadian liquefied natural gas (LNG).  

The Taiwan Semiconductor Manufacturing Company (TSMC) produces at least 90 per cent of advanced chips in the global market, powering tech giants like Apple and Nvidia.  

Taiwanese companies together produce more than 60 per cent of chips used around the world. 

That takes a lot of electricity: so much that TSMC alone is on track to account for nearly one-quarter of Taiwan’s energy demand by 2030, according to S&P Global.

“We are coming to the age of AI, and that is consuming more electricity demand than before,” said Harry Tseng, Taiwan’s representative in Canada, in a webcast hosted by Energy for a Secure Future. 

According to Taiwan’s Energy Administration, the country’s electricity today is supplied primarily by coal (42 per cent), natural gas (40 per cent), renewables (9.5 per cent) and nuclear (6.3 per cent).

The government is working to phase out both nuclear energy and coal-fired power.  

“We are trying to diversify the sources of power supply. We are looking at Canada and hoping that your natural gas, LNG, can help us,” Tseng said. 

Canada is inches away from its first large-scale LNG exports, expected mainly to travel to Asia.  

The Coastal GasLink pipeline, which connects to the LNG Canada terminal, is now officially in commercial service, and the terminal’s owners are ramping up natural gas production to record rates, according to RBN Energy.

RBN analyst Martin King expects the first shipments to leave LNG Canada by early next year, setting up for commercial operations in mid-2025.  
