
Artificial Intelligence

OpenAI and Microsoft negotiations require definition of “artificial general intelligence”


From The Deep View

By Ian Krietzberg

OpenAI’s bargaining chip 

A couple of relatively significant stories broke late last week concerning the — seemingly tenuous — partnership between OpenAI and Microsoft.
The background: OpenAI first turned to Microsoft back in 2019, after the startup lost access to Elon Musk’s billions. Microsoft — which has now sunk more than $13 billion into the ChatGPT-maker — has developed a partnership with OpenAI, where Microsoft provides the compute (and the money) and OpenAI gives Microsoft access to its generative technology. OpenAI’s tech, for instance, powers Microsoft’s Copilot.
According to the New York Times, OpenAI CEO Sam Altman last year asked Microsoft for more cash. But Microsoft, concerned about the highly publicized boardroom drama that was rocking the startup, declined.
  • OpenAI recently raised $6.6 billion at a $157 billion valuation. The firm expects to lose around $5 billion this year, and it expects its expenses to skyrocket over the next few years before finally turning a profit in 2029.
  • According to the Times, tensions have been steadily mounting between the two companies over issues of compute and tech-sharing; at the same time, OpenAI, focused on securing more computing power and reducing its enormous expense sheet, has been working for the past year to renegotiate the terms of its partnership with the tech giant.
Microsoft, meanwhile, has been expanding its portfolio of AI startups, recently bringing the bulk of the Inflection team on board in a $650 million deal.
Now, the terms of OpenAI’s latest funding round were somewhat unusual. The investment was predicated on an assurance that OpenAI would transition into a fully for-profit corporation. If the company has not done so within two years, investors can ask for their money back.
According to the Wall Street Journal, an element of the ongoing negotiation between OpenAI and Microsoft has to do with this restructuring: specifically, how Microsoft’s $14 billion investment will convert into equity in the soon-to-be for-profit company.
  • According to the Journal, both firms have hired investment banks to help advise them on the negotiations; Microsoft is working with Morgan Stanley and OpenAI is working with Goldman Sachs.
  • Amid a number of wrinkles — the fact that OpenAI’s non-profit board will still hold equity in the new corporation; the fact that Altman will be granted equity; the risks of antitrust scrutiny, depending on the amount of equity Microsoft receives — there is another main factor that the two parties are trying to figure out: what governance rights either company will have once the dust settles.
Here’s where things get really interesting: OpenAI isn’t a normal company. Its mission is to build a hypothetical artificial general intelligence, a theoretical technology that pointedly lacks any sort of universal definition. The general idea is that it would possess, at least, human-adjacent cognitive capabilities; some researchers don’t think it’ll ever be possible.
There’s a clause in OpenAI’s contract with Microsoft stating that if OpenAI achieves AGI, Microsoft gets cut off. As OpenAI puts it, its “board determines when we’ve attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”
To quote from the Times: “the clause was meant to ensure that a company like Microsoft did not misuse this machine of the future, but today, OpenAI executives see it as a path to a better contract, according to a person familiar with the company’s negotiations.”
This is a good example of why the context behind definitions matters so much when discussing anything in this field. There is a definitional problem throughout the field of AI. Many researchers dislike the term “AI” itself; it’s a misnomer — we don’t have an actual artificial intelligence.
The term “intelligence” is itself vague and open to the interpretation of the developer in question.
And the term “AGI” is as formless as it gets. Unlike physics, for example, where gravity is a known, hard, agreed-upon concept, AGI is theoretical, hypothetical science; further, it is a theory bounded by resource limitations and by massive gaps in our understanding of human cognition, sentience, consciousness and intelligence, and how these all fit together physically.
This doesn’t erase the fact that the labs are trying hard to get there.
But what this environment could allow for is a misplaced, contextually unstable definition of AGI that OpenAI pens either as a ticket out from under Microsoft’s thumb or as a means of negotiating the contract of Sam Altman’s dreams.
In other words, OpenAI saying it has achieved AGI doesn’t mean that it has.
As Thomas G. Dietterich, Distinguished Professor Emeritus at Oregon State University, said: “I always suspected that the road to achieve AGI was through redefining it.”


Artificial Intelligence

Canadian Court Upholds Ban on Clearview AI’s Unconsented Facial Data Collection



Facial recognition company Clearview AI has suffered a legal setback in Canada, where the Supreme Court of British Columbia decided to throw out the company’s petition aimed at cancelling an Information and Privacy Commissioner’s order.

The order aims to prevent Clearview AI from collecting facial biometric data for biometric comparison in the province without the targeted individuals’ consent.


The controversial company markets itself as “an investigative platform” that helps law enforcement identify suspects, witnesses, and victims.

Privacy advocates critical of Clearview AI’s activities, however, see it as a major component in the burgeoning facial surveillance industry, stressing in particular the need to obtain consent – via opt-ins – before people’s facial biometrics can be collected.

And Clearview AI is said to be subjecting billions of people to this without their consent. From there, the implications for privacy, free speech, and even data security are evident.

The British Columbia Commissioner appears to have been thinking along the same lines when issuing the order, which bans Clearview from selling biometric facial arrays taken from non-consenting individuals to its clients.

In addition, the order instructs Clearview to “make best efforts” to stop its practices to date – the collection, use, and disclosure of personal data – and to delete this type of information already in the company’s possession.

Right now, there is no time limit to how long Clearview can retain the data, which it collects from the internet using an automated “image crawler.”

Clearview moved to have the order dismissed as “unreasonable,” arguing on the one hand that it is unable to tell whether an image of a person’s face is that of a Canadian, while also claiming that no Canadian law is broken since this biometric information is publicly available online.

The legal battle, however, revealed that images of the faces of British Columbia residents, children included, are among Clearview’s database of more than three billion photos of Canadians – while the total figure is over 50 billion.

The court also found the Commissioner’s order to be very reasonable indeed – including in rejecting “Clearview’s bald assertion” that, in British Columbia, “it simply could not do” what it already does in the US state of Illinois to comply with the Biometric Information Privacy Act (BIPA).


Artificial Intelligence

Death of an Open A.I. Whistleblower


By John Leake

Suchir Balaji was trying to warn the world of the dangers of Open A.I. when he was found dead in his apartment. His story suggests that San Francisco has become an open sewer of corruption.

According to Wikipedia:

Suchir Balaji (1998 – November 26, 2024) was an artificial intelligence researcher and former employee of OpenAI, where he worked from 2020 until 2024. He gained attention for his whistleblowing activities related to artificial intelligence ethics and the inner workings of OpenAI.

Balaji was found dead in his home on November 26, 2024. San Francisco authorities determined the death was a suicide, though Balaji’s parents have disputed the verdict.

Balaji’s mother just gave an extraordinary interview with Tucker Carlson that is well worth watching.

If her narrative is indeed accurate, it indicates that someone has induced key decision makers within the San Francisco Police and Medical Examiner’s Office to turn a blind eye to the obvious indications that Balaji was murdered. Based on the story that his mother told Tucker Carlson, the key corrupt figure in the medical examiner’s office is David Serrano Sewell—Executive Director of the Office of the Chief Medical Examiner.

A quick Google search of Mr. Serrano Sewell resulted in a Feb. 8, 2024 report in the San Francisco Standard headlined “San Francisco official likely tossed out human skull, lawsuit says.” According to the report:

The disappearance of a human skull has spurred a lawsuit against the top administrator of San Francisco’s medical examiner’s office from an employee who alleges she faced retaliation for reporting the missing body part.

Sonia Kominek-Adachi alleges in a lawsuit filed Monday that she was terminated from her job as a death investigator after finding that the executive director of the office, David Serrano Sewell, may have “inexplicably” tossed the skull while rushing to clean up the office ahead of an inspection.

Kominek-Adachi made the discovery in January 2023 while doing an inventory of body parts held by the office, her lawsuit says. Her efforts to raise an alarm around the missing skull allegedly led up to her firing last October.

If the allegations of this lawsuit are true, they suggest that Mr. Serrano Sewell is an unscrupulous and vindictive man. According to the SF Gov website:

Serrano Sewell joined the OCME with over 16 years of experience developing management structures, building consensus, and achieving policy improvements in the public, nonprofit, and private sectors. He previously served as a Mayor’s aide, Deputy City Attorney, and a policy advocate for public and nonprofit hospitals.

In other words, he is an old denizen of the San Francisco city machine. If a mafia-like organization has penetrated the city administration, it would be well-served by having a key player run the medical examiner’s office.

According to Balaji’s mother, Poornima Ramarao, his death was an obvious murder that was crudely staged to look like a suicide. The responding police officers spent only forty minutes examining the scene, and then left the body in the apartment to be retrieved by medical examiner field agents the next day. If true, this was an act of breathtaking negligence.

I have written a book about two murders that were staged to look like suicides, and to me, Mrs. Ramarao’s story sounds highly credible. Balaji kept a pistol in his apartment for self-defense because he felt that his life was possibly in danger. He was found shot in the head with this pistol, which was purportedly found in his hand. If his death was indeed a murder staged to look like a suicide, it raises the suspicion that the assailant knew that Balaji possessed this pistol and where he kept it in his apartment.

Balaji was found with a gunshot wound to his head—fired from above, the bullet apparently traversing downward through his face and missing his brain. However, he had also sustained what—based on his mother’s testimony—sounds like a blunt force injury on the left side of the head, suggesting a right-handed assailant initially struck him with a blunt instrument that may have knocked him unconscious or stunned him. The gunshot was apparently inflicted after the attack with the blunt instrument.

A fragment of a bloodstained wig found in the apartment suggests the assailant wore a wig to disguise himself in the event he was caught on the surveillance camera placed at the building’s main entrance. No surveillance camera was positioned over the entrance to Balaji’s apartment.

How did the assailant enter Balaji’s apartment? Did Balaji know the assailant and let him in? Alternatively, did the assailant somehow—perhaps through a contact in the building’s management—obtain a key to the apartment?

All of these questions could probably be easily answered with a proper investigation, but it sounds like the responding officers hastily concluded it was a suicide, and the medical examiner’s office hastily confirmed their initial perception. If good crime scene photographs could be obtained, a decent bloodstain pattern analyst could probably reconstruct what happened to Balaji.

Vernon J. Geberth, a retired Lieutenant Commander of the New York City Police Department, has written extensively about how homicides are often erroneously perceived as suicides by responding officers. The initial perception of suicide at a death scene often results in a lack of proper analysis. His essay “The Seven Major Mistakes in Suicide Investigation” should be required reading for every police officer whose job includes examining the scenes of unattended deaths.

However, judging by his mother’s testimony, Suchir Balaji’s death was obviously a murder staged to look like a suicide. Someone in a position of power decided it was best to perform only the most cursory investigation and to rule the manner of death suicide based on the mere fact that the pistol was purportedly found in the victim’s hand.

Readers who are interested in learning more about this kind of crime may wish to watch my documentary film, in which I examine two murders that were staged to look like suicides. Incidentally, the film is now showing in the Hollywood North International Film Festival.

