
Death of an OpenAI Whistleblower


By John Leake

Suchir Balaji was trying to warn the world of the dangers of OpenAI when he was found dead in his apartment. His story suggests that San Francisco has become an open sewer of corruption.

According to Wikipedia:

Suchir Balaji (1998 – November 26, 2024) was an artificial intelligence researcher and former employee of OpenAI, where he worked from 2020 until 2024. He gained attention for his whistleblowing activities related to artificial intelligence ethics and the inner workings of OpenAI.

Balaji was found dead in his home on November 26, 2024. San Francisco authorities determined the death was a suicide, though Balaji’s parents have disputed that finding.

Balaji’s mother just gave an extraordinary interview to Tucker Carlson that is well worth watching.

If her narrative is indeed accurate, it indicates that someone has induced key decision makers within the San Francisco Police and Medical Examiner’s Office to turn a blind eye to the obvious indications that Balaji was murdered. Based on the story that his mother told Tucker Carlson, the key corrupt figure in the medical examiner’s office is David Serrano Sewell—Executive Director of the Office of the Chief Medical Examiner.

A quick Google search of Mr. Serrano Sewell turned up a Feb. 8, 2024 report in the San Francisco Standard headlined “San Francisco official likely tossed out human skull, lawsuit says.” According to the report:

The disappearance of a human skull has spurred a lawsuit against the top administrator of San Francisco’s medical examiner’s office from an employee who alleges she faced retaliation for reporting the missing body part.

Sonia Kominek-Adachi alleges in a lawsuit filed Monday that she was terminated from her job as a death investigator after finding that the executive director of the office, David Serrano Sewell, may have “inexplicably” tossed the skull while rushing to clean up the office ahead of an inspection.

Kominek-Adachi made the discovery in January 2023 while doing an inventory of body parts held by the office, her lawsuit says. Her efforts to raise an alarm around the missing skull allegedly led up to her firing last October.

If the allegations of this lawsuit are true, they suggest that Mr. Serrano Sewell is an unscrupulous and vindictive man. According to the SF Gov website:

Serrano Sewell joined the OCME with over 16 years of experience developing management structures, building consensus, and achieving policy improvements in the public, nonprofit, and private sectors. He previously served as a Mayor’s aide, Deputy City Attorney, and a policy advocate for public and nonprofit hospitals.

In other words, he is an old denizen of the San Francisco city machine. If a mafia-like organization has penetrated the city administration, it would be well-served by having a key player run the medical examiner’s office.

According to Balaji’s mother, Poornima Ramarao, his death was an obvious murder that was crudely staged to look like a suicide. The responding police officers only spent forty minutes examining the scene, and then left the body in the apartment to be retrieved by medical examiner field agents the next day. If true, this was an act of breathtaking negligence.

I have written a book about two murders that were staged to look like suicides, and to me, Mrs. Ramarao’s story sounds highly credible. Balaji kept a pistol in his apartment for self-defense because he felt his life might be in danger. He was found shot in the head with this pistol, which was purportedly found in his hand. If his death was indeed a murder staged to look like a suicide, it raises the suspicion that the assailant knew that Balaji possessed this pistol and where he kept it in his apartment.

Balaji was found with a gunshot wound to his head, fired from above; the bullet apparently traversed downward through his face, missing his brain. However, based on his mother’s testimony, he had also sustained what sounds like a blunt-force injury to the left side of the head, suggesting a right-handed assailant initially struck him with a blunt instrument that may have knocked him unconscious or stunned him. The gunshot was apparently inflicted after the attack with the blunt instrument.

A fragment of a bloodstained wig found in the apartment suggests the assailant wore a wig to disguise himself in the event he was caught on the surveillance camera placed at the building’s main entrance. No surveillance camera was positioned over the entrance to Balaji’s apartment.

How did the assailant enter Balaji’s apartment? Did Balaji know the assailant and let him in? Alternatively, did the assailant somehow—perhaps through a contact in the building’s management—obtain a key to the apartment?

All of these questions could probably be easily answered with a proper investigation, but it sounds like the responding officers hastily concluded it was a suicide, and the medical examiner’s office hastily confirmed their initial perception. If good crime scene photographs could be obtained, a decent bloodstain pattern analyst could probably reconstruct what happened to Balaji.

Vernon J. Geberth, a retired Lieutenant-Commander of the New York City Police Department, has written extensively about how homicides are often erroneously perceived to be suicides by responding officers. The initial perception of suicide at a death scene often results in a lack of proper analysis. His essay “The Seven Major Mistakes in Suicide Investigation” should be required reading for every police officer whose job includes examining the scenes of unattended deaths.

However, judging by his mother’s testimony, Suchir Balaji’s death was obviously a murder staged to look like a suicide. Someone in a position of power decided it was best to perform only the most cursory investigation and to rule the manner of death suicide based on the mere fact that the pistol was purportedly found in the victim’s hand.

Readers interested in learning more about this kind of crime may wish to watch my documentary film, in which I examine two murders that were staged to look like suicides. Incidentally, the film is now showing at the Hollywood North International Film Festival. Please click on the image below to watch the film.

If you don’t have a full forty minutes to spare to watch the entire picture, please consider devoting just one second of your time to click on the vote button. Many thanks!


Todayville is a digital media and technology company. We profile unique stories and events in our community. Register and promote your community event for free.


Wonder Valley – Alberta’s $70 Billion AI Data Center


From the YouTube page of Kevin O’Leary

Interview with Kyle Reiling, Executive Director of the Greenview Industrial Gateway. 

“This is the only place on earth that can do something this scale”

When Kevin O’Leary heard Alberta Premier Danielle Smith reveal just how much energy Alberta has, he knew Alberta had the solution for the coming explosion in energy consumption.

Kevin O’Leary: The demand for AI is skyrocketing—and America is out of power. Enter Alberta, with abundant natural gas and a bold premier. I’m raising $70 billion to create the world’s lowest-cost, highest-efficiency data center. Hyperscalers like Tesla, Microsoft, and Google need it, and we’re making it happen. This is how you lead the AI revolution.



The Biggest Energy Miscalculation of 2024 by Global Leaders – Artificial Intelligence


From EnergyNow.ca

By Maureen McCall

It’s generally accepted that the launch of Artificial Intelligence (AI) occurred at Dartmouth College in a 1956 AI workshop that brought together leading thinkers in computer science and information theory to map out future paths for investigation. Workshop participants John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude E. Shannon coined the term “artificial intelligence” in the proposal they wrote for that conference. It established AI as a field of study, with John McCarthy generally considered the father of AI.

AI developed through the 1960s, but in the 1970s-1980s, a period generally referred to as “the AI winter,” development stalled amid a focus on the limitations of neural networks. In the late 1980s, advancement resumed with the emergence of connectionism and neural networks. The 1990s-2000s are considered the beginning of the AI/machine learning renaissance. In the 2010s, further growth was spurred by the expansion of Big Data, deep learning, computing power and large-scale data sets. In 2022, an AI venture capital frenzy took off (the “AI frenzy”), and AI plunged into the mainstream in 2023, according to Forbes, which was already tracking applications of AI across various industries.

By early 2024, the implementation of AI across industries was well underway in healthcare, finance, creative fields and business. In the energy industry, digitalization conferences were addressing digital transformation in the North American oil & gas sector, drawing speakers and attendees from E&P majors, midstream, pipeline and LNG companies, along with multiple AI application providers; the companies speaking and attending already had AI implementations well underway.

So how did global leaders not perceive the sudden and rapid rise of AI and the power commitments it requires?

How has the 2022 “AI frenzy” of investment and subsequent industrial adoption been off the radar of global policymakers until just recently? Venture capital is widely recognized as a driver of innovation and new company formation, and leaders should have foreseen the surge of AI improvement and implementation by “following the money,” so to speak. Perhaps the incessant focus on “blaming and shaming” industry for climate change blinded leaders to the rapid escalation of AI development signaled by the 2022 AI frenzy.

Just as an example of this lack of foresight, in Canada, the grossly delayed 2024 Fall Economic Statement had a last-minute insertion of “up to $15 billion in aggregate loan and equity investments for AI data center projects”. This policy afterthought is two years behind the onset of the AI frenzy and more than 12 months behind the industrial adoption of AI. In addition, the Trudeau/Guilbeault partnership is still miscalculating the enormous AI power requirements.

As an example of the size of AI’s power requirements, one can look at the Wonder Valley project, the world’s largest AI data center industrial park, in the Greenview Industrial Gateway near Grande Prairie, Alberta. It is planned to “generate and offer 7.5 GW of low-cost power to hyperscalers over the next 5-10 years.” The cost of this one project alone is well beyond the funding offered in the 2024 Fall Economic Statement.

“We will engineer and build a redundant power solution that meets the modern AI compute reliability standard,” said Kevin O’Leary, Chairman of O’Leary Ventures. “The first phase of 1.4 GW will be approximately US$ 2 billion with subsequent annual rollout of redundant power in 1 GW increments. The total investment over the lifetime of the project will be over $70 billion.”

To further explore the huge power requirements of AI, one can compare individual AI queries/searches with traditional non-AI queries. As reported by Bloomberg, “Researchers have estimated that a single ChatGPT query requires almost 10 times as much electricity to process as a traditional Google search.” Multiply this electricity demand by the millions of industrial users as industrial AI implementation continues to expand worldwide. As the same Bloomberg article notes: “By 2034, annual global energy consumption by data centers is expected to top 1,580 terawatt-hours—about as much as is used by all of India—from about 500 today.”
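The scale of the growth quoted above is easy to check with back-of-the-envelope arithmetic. A minimal sketch follows; note that the 0.3 Wh figure for a traditional search is a commonly cited outside estimate, not a number from this article, and is used here only for illustration:

```python
# Per-query comparison (assumption: ~0.3 Wh per traditional search,
# an often-cited outside estimate; the "almost 10 times" multiplier
# is from the research quoted by Bloomberg).
GOOGLE_SEARCH_WH = 0.3
AI_MULTIPLIER = 10
ai_query_wh = GOOGLE_SEARCH_WH * AI_MULTIPLIER  # ~3 Wh per AI query

# Data-center demand growth quoted in the article:
# ~500 TWh today rising to ~1,580 TWh by 2034.
today_twh, future_twh = 500, 1_580
years = 10
growth_multiple = future_twh / today_twh          # ~3.16x overall
cagr = growth_multiple ** (1 / years) - 1         # implied annual growth

print(f"AI query: ~{ai_query_wh:.1f} Wh vs {GOOGLE_SEARCH_WH} Wh per search")
print(f"Demand growth: {growth_multiple:.2f}x over {years} years, "
      f"~{cagr:.1%} compounded per year")
```

In other words, the Bloomberg projection implies data-center electricity demand compounding at roughly 12% per year for a decade, which is the kind of sustained growth curve grid planners rarely build for.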

This is the exponential demand for electricity that North American and global leaders did not see coming: a 24/7 demand that cannot be satisfied by unreliable and costly green energy projects and instead requires an “all energies” approach. Exponential AI demand threatens to gobble up supply and dramatically increase electricity prices for consumers. Likewise, leadership does not perceive that North American grids are vulnerable and outdated, unable to deliver the reliable supply required by AI data centers that cannot tolerate even a few seconds of power outage. Grid interconnections are unreliable, as noted in the following excerpt from a September 2024 article on cleanenergygrid.org.

“Our grid, for all of its faults, is now a single interconnected “machine” over a few very large regions of the country. Equipment failures in Arizona can shut the lights out in California, just as overloaded lines in Ohio blacked out 55 million people in eight states from Michigan to Boston – and the Canadian province of Ontario – in 2003.”

AI’s power demands are motivating tech companies to develop more efficient ways of building AI. Along with pressure to keep fossil fuels in the mix, billions are being invested in alternative energy solutions like nuclear power produced by small modular reactors (SMRs).

Despite SMR optimism, the reality is that no European or North American SMRs are yet in operation; only Russia and China have operating SMRs. Most data centers are focusing on affordable natural gas power as the reality sets in that nuclear energy cannot scale quickly enough to meet urgent electricity needs. New SMR plants could possibly be built and operational by 2034, but in 2025 Canada’s power grid is already strained, with electricity demand expected to grow significantly, driven by electric vehicles and data centers for AI applications.

AI has a huge appetite for other resources as well. For example, the most energy- and cost-efficient ways to chill the air in data centers rely on huge quantities of potable water. The exponential amount of data AI produces will also require dramatic expansion of internet networks, along with growing demand for computer chips and the metals they require. There is also an intense talent shortage, creating AI recruitment competition for the pool of individuals trained by companies like Alphabet, Microsoft and OpenAI.

AI development is now challenging the public focus on climate change. In Canada as well as in the U.S. and globally, left-leaning elected officials who focused keenly on policies to advance the elimination of fossil fuels were oblivious to the tsunami of AI energy demand about to swamp their boats. Canadian Member of Parliament Greg McLean, who has served on the House of Commons Standing Committees of Environment, Natural Resources, and Finance, and as the Natural Resources critic for His Majesty’s Loyal Opposition, has insight into the reason for the change in focus.

“Education about the role of all forms of energy in technology development and use has led to the logical erosion of the ‘rapid energy transition’ mantra and a practical questioning of the intents of some of its acolytes. The virtuous circle of technological development demanding more energy, and then delivering solutions for society that require less energy for defined tasks, could not be accomplished without the most critical input – more energy. This has been a five-year journey, swimming against the current — and sometimes people need to see the harm we are doing in order to objectively ask themselves ‘What are we accomplishing?’ … ‘What choices are being made, and why?’…. and ‘Am I getting the full picture presentation or just the part someone wants me to focus on?’”

With the election of Donald Trump, the “Trump Transition” now competes with the “Energy Transition” focus, changing the narrative in the U.S. to energy dominance. For example, as reported by Reuters, the U.S. solar industry is now downplaying climate change messaging.

“The U.S. solar industry unveiled its lobbying strategy for the incoming Trump administration, promoting itself as a domestic jobs engine that can help meet soaring power demand, without referencing its role in combating climate change.”

It’s important to note here that the future of AI is increasingly subject to societal considerations as well as technological advancements. Political, ethical, legal, and social frameworks will increasingly impact AI’s development, enabling or limiting its implementations. Since AI applications involve “human teaming” to curate and train AI tools, perceptions of the intent of AI implementations are key. In the rush to implementation, employees at many companies are experiencing changing roles with increased demand for workers to train AI tools and curate results. Will tech optimism be blunted by the weight of extra tasks placed on workers and by suspicions that those workers may ultimately be replaced? Will resistance develop as humans and AI are required to work together more closely?

Business analyst Professor Henrik von Scheel of the Arthur Lok Jack Global School of Business describes the importance of the human factor in AI adoption.

“It’s people who have to manage the evolving environment through these new tools,” von Scheel explains. “It’s been this way ever since the first caveperson shaped a flint, only now the tools are emerging from the fusion of the digital, physical and virtual worlds into cyber-physical systems.”

A conversation with a recent graduate who questioned the implementation of AI, including the design of guardrails and regulations by members of an older generation in management, made me wonder: is there a generational conflict brewing from a lack of trust between the large proportion of baby boomers in the workforce, predominantly in management, and a younger generation that may not have confidence in mature management’s ability to fully understand and embrace AI tech and make informed decisions to regulate it?

It’s something to watch in 2025.

Maureen McCall is an energy professional who writes on issues affecting the energy industry.
