
Artificial Intelligence

Yuval Noah Harari warns against AI’s ‘ability to manipulate people,’ pretend to be human


From LifeSiteNews

By Emily Mangiaracina

The transhumanist has highlighted the fact that AI has a real ability to deceive human beings. The question is, who is using AI, and for what purposes?

Transhumanist philosopher and World Economic Forum (WEF) senior adviser Yuval Noah Harari recently warned on MSNBC that AI, which has already been shown to be capable of impersonating a human, can be used to manipulate us.

He shared the story of how the AI tool GPT-4 was programmed to seek out a real human being — a TaskRabbit worker — to convince them to solve a CAPTCHA puzzle that is designed to distinguish between human beings and AI.

“It asked a human worker, ‘Please solve the CAPTCHA puzzle for me,’” shared Harari. “This is the interesting part. The human got suspicious. It asked GPT-4, ‘Why do you need somebody to do this for you? Are you a robot?’ GPT-4 told the human, ‘No, I’m not a robot, I have a vision impairment, so I can’t see the CAPTCHA puzzles, this is why I need help.’”

The human fell for the AI tool’s lie and completed the CAPTCHA puzzle on its behalf, he recounted, pointing out that this is evidence that AI is “able to manipulate people.”

He further warned that AI has a newfound ability to “understand and manipulate” human emotions, which he said could be employed for good purposes, such as in AI “teachers” and “doctors,” but could also be used to “sell us everything from products to politicians.”

Harari suggested that regulations by which AI would be legally required to identify itself for what it is — artificial intelligence — would be a desirable solution to this potential problem.

“AI should be welcome to human conversations as long as it identifies itself as AI,” said Harari, adding that this is something both Republicans and Democrats can get behind.

What the WEF adviser did not reveal during this particular interview, however, is that he believes speech on social media should be censored under the pretext of regulating AI.

He recently argued regarding social media, “The problem is not freedom of speech. The problem is that there are algorithms on Twitter, Facebook, and so forth that deliberately promote information that captures our attention even if it’s not true.”

Harari, an atheist, has previously claimed that AI can manipulate human beings to such a degree that it renders democratic functioning as well as free will obsolete. He explained to journalist Romi Noimark in 2020, “If you have enough data and you have enough computing power, you can understand people better than they understand themselves. And then you can manipulate them in ways which were previously impossible … And in such a situation, the old democratic situation stops functioning.”

Acclaimed author and investigative reporter Leo Hohmann points to the human beings behind AI as the real manipulators and real danger to the masses, rather than characterizing AI itself as a prime danger.

Hohmann believes that AI “may very well turn out to be the nerve center of the coming beast system” — referring to a potential AI system with centralized access to intimate information about ourselves, as well as the power to manipulate or control our behavior — and that in the hands of globalists like the WEF, “its core mission is to eliminate free will in the human being.”

Alberta

AI-driven data centre energy boom ‘open for business’ in Alberta


From the Canadian Energy Centre

By Deborah Jaremko and Will Gibson

“These facilities need 24/7, super-reliable power, and there’s only one power generation fuel that has any hope of keeping up with the demand surge: natural gas”

Data centres – the industrial-scale technology complexes powering the world’s growing boom in artificial intelligence – require reliable, continuous energy. And a lot of it.

“Artificial Intelligence is the next big thing in energy, dominating discussions at all levels in companies, banks, investment funds and governments,” says Simon Flowers, chief analyst with energy consultancy Wood Mackenzie.

The International Energy Agency (IEA) projects that the power required globally by data centres could double in the next 18 months. That is not surprising, given that a search query using AI can consume up to 10 times as much energy as a conventional search engine query.
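For a rough sense of what that tenfold difference implies at scale, here is a minimal back-of-the-envelope sketch; the per-query energy figures and the global query volume are illustrative assumptions for the arithmetic, not figures from the IEA.

```python
# Back-of-the-envelope estimate of extra electricity demand if search
# traffic shifts to AI-assisted answers. All inputs below are assumed
# illustrative values, not IEA data.

CONVENTIONAL_WH_PER_QUERY = 0.3   # assumed energy per regular search (Wh)
AI_WH_PER_QUERY = 3.0             # assumed ~10x energy per AI-assisted query (Wh)
QUERIES_PER_DAY = 8.5e9           # assumed global daily search volume

def annual_twh(wh_per_query: float, queries_per_day: float) -> float:
    """Convert per-query watt-hours into terawatt-hours per year."""
    wh_per_year = wh_per_query * queries_per_day * 365
    return wh_per_year / 1e12  # 1 TWh = 1e12 Wh

baseline = annual_twh(CONVENTIONAL_WH_PER_QUERY, QUERIES_PER_DAY)
ai_case = annual_twh(AI_WH_PER_QUERY, QUERIES_PER_DAY)

print(f"Conventional search: ~{baseline:.1f} TWh/year")
print(f"AI-assisted search:  ~{ai_case:.1f} TWh/year")
print(f"Incremental demand:  ~{ai_case - baseline:.1f} TWh/year")
```

Even under these rough assumptions, the tenfold per-query gap translates into several additional terawatt-hours of annual demand, which is the dynamic driving data centre power planning.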

The IEA estimates more than 8,000 data centres now operate around the world, with about one-third located in the United States. About 300 centres operate in Canada.

It’s a growing opportunity in Alberta, where unlike anywhere else in the country, data centre operators can move more swiftly by “bringing their own power.”

In Alberta’s deregulated electricity market, large energy consumers like data centres can build the power supply they need by entering project agreements directly with electricity producers instead of relying solely on the existing grid.

Between 2018 and 2023, data centres in Alberta generated approximately $1.3 billion in revenue, growing on average by about eight percent per year, lawyers with Calgary-based McMillan LLP wrote in July.

“Alberta has a long history of building complex, multi-billion-dollar infrastructure projects with success and AI data centres could be the next area of focus for this core competency,” McMillan’s Business Law Bulletin reported.

In recent years, companies such as Amazon and RBC have negotiated power purchase agreements for renewable energy to power local operations and data centres, while supporting the construction of some of the country’s largest renewable energy projects, McMillan noted.

While established data centres have generally clustered near telecommunications infrastructure, the next wave of projects is increasingly seeking sites with electricity infrastructure and a reliable power supply to keep their servers running.

The intermittent nature of wind and solar is challenging for growth in these projects, Rusty Braziel, executive chairman of Houston, Texas-based consultancy RBN Energy, wrote in July.

“These facilities need 24/7, super-reliable power, and there’s only one power generation fuel that has any hope of keeping up with the demand surge: natural gas,” Braziel said.

TC Energy chief operating officer Stan Chapman sees an opportunity for his company’s natural gas delivery in Canada and the United States.

“In Canada, there’s around 300 data centre operations today. We could see that load increasing by one to two gigawatts before the end of the decade,” Chapman said in a conference call with analysts on August 1.

“Never have I seen such strong prospects for North American natural gas demand growth,” CEO François Poirier added.

Alberta is Canada’s largest natural gas producer, and natural gas is the backbone of the province’s power grid, supplying about 60 percent of its power needs, followed by wind and solar at 27 percent.

“Given the heavy power requirements for AI data centres, developers will likely need to bring their own power to the table and some creative solutions will need to be considered in securing sufficient and reliable energy to fuel these projects,” McMillan’s law bulletin reported.

The Alberta Electric System Operator (AESO), which operates the province’s power grid, is working with the proponents of at least six proposed data centres, according to the latest public data.

“The companies that build and operate these centres have a long list of requirements, including reliable and affordable power, access to skilled labour and internet connectivity,” said Ryan Scholefield, the AESO’s manager of load forecasting and market analytics.

“The AESO is open for business and will work with any project that expresses an interest in coming to Alberta.”


Artificial Intelligence

Poll: Despite global pressure, Americans want the tech industry to slow down on AI


From The Deep View

A little more than a year ago, the Future of Life Institute published an open letter calling for a six-month moratorium on the development of AI systems more powerful than GPT-4. Of course, the pause never happened (and we didn’t seem to stumble upon superintelligence in the interim, either), but it did elicit a narrative from the tech sector that, for a number of reasons, a pause would be dangerous.
  • One of these reasons was simple: sure, the European Union could potentially institute a pause on development — maybe the U.S. could do so as well — but there’s nothing that would require other countries to pause, which would let those other countries (namely, China and Russia) get ahead of the U.S. in the ‘global AI arms race.’
As the Pause AI organization themselves put it: “We might end up in a world where the first AGI is developed by a non-cooperative actor, which is likely to be a bad outcome.”
But new polling shows that American voters aren’t buying it.
The details: A recent poll conducted by the Artificial Intelligence Policy Institute (AIPI) — and first published by Time — found that Americans would rather fall behind in that global race than skimp on regulation.
  • 75% of Republicans and 75% of Democrats said that “taking a careful controlled approach” to AI — namely by curtailing the release of tools that could be leveraged by foreign adversaries against the U.S. — is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.”
  • A majority of voters are also in favor of the application of more stringent security measures at the labs and companies developing this tech.
The polling additionally found that 50% of voters surveyed think the U.S. should use its position in the AI race to prevent other countries from building powerful AI systems by enforcing “safety restrictions and aggressive testing requirements.”
Only 23% of Americans polled believe that the U.S. should eschew regulation in favor of being the first to build a more powerful AI.
  • “What I perceive from the polling is that stopping AI development is not seen as an option,” Daniel Colson, the executive director of the AIPI, told Time. “But giving industry free rein is also seen as risky. And so there’s the desire for some third way.”
  • “And when we present that in the polling — that third path, mitigated AI development with guardrails — is the one that people overwhelmingly want.”
This comes as federal regulatory efforts in the U.S. remain stalled, with the focus shifting to uneven state-by-state regulation.
Previous polling from the AIPI has found that a vast majority of Americans want AI to be regulated and wish the tech sector would slow down on AI; they don’t trust tech companies to self-regulate.
Colson has told me in the past that the American public is hyper-focused on security, safety and risk mitigation; polling published in May found that “66% of U.S. voters believe AI policy should prioritize keeping the tech out of the hands of bad actors, rather than providing the benefits of AI to all.”
Underpinning all of this is a layer of hype and an incongruity of definition. It is not clear what “extremely powerful” AI means, or how it would be different from current systems.
Unless artificial general intelligence is achieved (and agreed upon in some consensus definition by the scientific community), I’m not sure how you measure “more powerful” systems. As current systems go, “more powerful” doesn’t mean much more than predicting the next word at slightly greater speeds.
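To make the “predicting the next word” point concrete, below is a purely illustrative sketch of greedy next-word generation using a hand-written bigram table in place of a trained model; real systems learn probabilities over enormous vocabularies with neural networks, but the generation loop has the same basic shape.

```python
# Toy illustration of next-word prediction: a hand-written bigram table
# stands in for a trained language model. Generation is a loop of
# "pick the most likely next word, append it, repeat".

BIGRAM_PROBS = {
    "the":  {"cat": 0.5, "dog": 0.3, "data": 0.2},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "dog":  {"ran": 0.6, "sat": 0.4},
    "data": {"centre": 0.9, "point": 0.1},
    "sat":  {"down": 1.0},
    "ran":  {"away": 1.0},
}

def generate(start: str, max_tokens: int = 5) -> list[str]:
    """Greedy decoding: always choose the highest-probability next word."""
    tokens = [start]
    for _ in range(max_tokens):
        candidates = BIGRAM_PROBS.get(tokens[-1])
        if not candidates:
            break  # no known continuation; stop generating
        tokens.append(max(candidates, key=candidates.get))
    return tokens

print(" ".join(generate("the")))   # -> "the cat sat down"
print(" ".join(generate("data")))  # -> "data centre"
```

Scaling that loop up, with learned probabilities, vastly larger vocabularies and faster hardware, is largely what “more powerful” means in practice for current systems.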
  • Aggressive testing and safety restrictions are a great idea, as is risk mitigation.
  • However, I think it remains important for regulators and constituents alike to be aware of what risks they want mitigated. Is the focus on mitigating the risk of a hypothetical superintelligence, or is it on mitigating the reality of algorithmic bias, hallucination, environmental damage, etc.?
Do people want development to slow down, or deployment?
To once again call back Helen Toner’s comment of a few weeks ago: how is AI affecting your life, and how do you want it to affect your life?
Regulating a hypothetical is going to be next to impossible. But if we establish the proper levels of regulation to address the issues at play today, we’ll be in a better position to handle that hypothetical if it ever does come to pass.
