AI Regulation Chronicles --- EXTINCTION RISK: the AGI Frankenstein scenario

 


“Hundreds of leading figures in artificial intelligence”, as the Financial Times put it, issued a statement in May 2023 describing the existential threat that the technology they helped to create poses to humanity.

 “Mitigating the risk of extinction from AI should be a global priority,” it said, “alongside other societal-scale risks such as pandemics and nuclear war.” 

A shame they forgot climate change, which, as you may have noticed, is already arriving!

So, what’s your p(AI doom)?

After the shock of the extinction risk statement, the West Coast AI community immediately became embroiled in heated conversations about doomsday scenarios with a runaway superintelligence.

Symptomatic of this Silicon Valley excitement is a sort of fashionable “parlor game” called p(AI doom), in which a participant provides a personal estimate of the “probability” of the destruction of humanity by a genocidal AGI. On a Hard Fork podcast in late May, Kevin Roose of The New York Times set his p(AI doom) at 5%. Ajeya Cotra, an AI safety specialist with Open Philanthropy, set hers at 20 to 30%.

But wait! Worse is yet to come! 

A while ago, what we would call rumours (picked up by the indispensable Dr Gary Marcus) began floating around social media that “multiple unnamed AI experts allegedly (a) think there is a 75% extinction risk, (b) continue to work on AI, and (c) refuse to report their estimates publicly.” No time frame was given, but it still sounds pretty awful.

The most succinct answer to all this talk about genocidal AGI comes from the Chief AI Scientist of Meta, Yann LeCun, who has dismissed existential fears around AI as “preposterous” and akin to an “apocalyptic cult”. Back in 2019, LeCun co-authored an article in Scientific American entitled “Don’t Fear the Terminator”. The key point is that “Artificial intelligence never needed to evolve, so it didn’t develop the survival instinct that leads to the impulse to dominate others.”

We agree, but would like to add a few points. 


Our POV on “AGI Extinction Risk” 

Not a “well defined event”

The hypothetical arrival of a genocidal AI is not the sort of “well defined event” for which assigning probabilities even makes sense. As mathematics professor Noah Giansiracusa wrote in an extended exchange on this question on Twitter: “What irks me (as a mathematician) is so many people rush to state their p(AI doom) without defining what the heck this is. A probability estimate is meaningless if the event is not well defined.”

But then putting a quantitative (subjective) probability on this sort of personal speculation makes it all sound so very scientific and precise… which of course it is not.

While some AI thought leaders feel that p(AI doom) cannot be zero and should not be dismissed outright, we would say that, without objective evidence, a more appropriate statement would be p(AI doom) = undefined. Even so, we support continued precautionary study of a hypothetical AI Extinction Risk, but NOT making it the center of the worldwide AI risk and regulation conversation.

Too many definitions of AGI

A related problem was raised in a recent post by Jack Clark, Policy Director at OpenAI for four years and a co-founder of Anthropic: “Discussions about AGI tend to be pointless as no one has a precise definition of AGI, and most people have radically different definitions. In many ways, AGI feels more like a shibboleth used to understand if someone is in- or out-group with some issues.”

In addition to the very ill-defined AGI, which might (or might not) run amok, people are also throwing around terms like “Frontier AI” and now “Superintelligence”, which OpenAI claims will be even more awesome than AGI. None of them are well defined scientifically, but they do sound intimidating…

Origin story: “AGI” as a rebrand for AI

To get some additional perspective, we went back to an article on the origin of the term “Artificial General Intelligence”: “In the mid-2000s, Ben Goertzel (a cognitive scientist and AI researcher) and others felt the need to rebrand AI in a way more suited to the grand possibilities of the more human-like AI that lay ahead.

They settled on the term ‘artificial general intelligence’, aka AGI. The important point is that the term did not refer to a specific technology or set of mechanisms… It was an aspirational term, not a technical one.”

Our stab at an AGI definition

Since we have been criticizing how everyone else is using “AGI”, it is only fair to give our own position. In our opinion (take it or leave it), it will make sense to talk about an AGI as a real technological inflexion point, an AI that is radically new and more powerful, when and if it attains consciousness. If it can be done, we think it will have a lot to do with neuroscience.

According to an article in The Gradient, “An Introduction to the Problems of AI Consciousness”: “Once considered a forbidden topic in the AI community, discussions around the subject of AI consciousness are now taking center stage, marking a significant shift since the current AI resurgence began over a decade ago.”

Given the difficulty of this topic, we asked for help from some of our talented AI friends. First, we asked for input from Nat, a fine tech woman who runs an always useful technology newsletter, The AI Observer, on Substack. She generously shared a good amount of research by herself and others, in which we found an article by Susan Schneider published on Edge, entitled The Future of the Mind, that is particularly relevant to this conversation.

Nat told us that Susan Schneider is a “visionary woman” who inspires her greatly. Schneider is the Director of the AI, Mind and Society Group at the University of Connecticut, a Distinguished Scholar at the US Library of Congress, and the author of the book Artificial You.

As Dr Schneider describes her work: “I think about the fundamental nature of the mind and the nature of the self.” Her provocative article raised some crucial, and still very open, questions about consciousness and AGI:

Question 1: Conscious experience is the felt quality of your mental life. As AI gets more sophisticated, one thing that I’ve been very interested in is whether the beings that we might create could have conscious experiences.

Question 2: If we have AGI, intelligence that’s capable of flexibly connecting ideas across different domains and maybe having something like sensory experience, what I want to know is whether it would be conscious or if it would all just be computing in the dark—engaging in things like visual recognition tasks from a computational perspective and thinking sophisticated thoughts, but not truly being conscious.

Question 3: Unlike many philosophers, I tend to take a wait-and-see approach about machine consciousness. For one thing, I reject a full skeptical line… but I think it’s too early to tell. There will be many variables that determine whether there will be conscious machines.

Question 4: For another thing, we have to ask whether it is even compatible with the laws of nature to create machines that are conscious. We just don’t know if consciousness can be something that’s implemented in other substrates. …we don’t know what an AGI would be made out of. So, until that time, it’s awfully difficult for us to say that something which is highly intelligent would even be conscious.

Question 5: It is probably safest right now to drive a conceptual wedge between the idea of sophisticated intelligence on the one hand and consciousness on the other. … for all we currently know… the most sophisticated intelligences won't be conscious. There are a lot of issues, and not just issues involving substrates, that will determine whether conscious machines are possible.

These are all great questions that not very many people are talking about. Our favorite is Question 5, where Dr Schneider writes: “for all we currently know… the most sophisticated intelligences won’t be conscious.” This is a hugely important point, since we consider consciousness to be a make-or-break criterion for an AI to be an AGI.

Around the same time, we got some feedback from a heavy-hitting tech and biotech friend, Eveline Ruehlin, about current research on consciousness and AI. In a short LinkedIn post, “Thoughts on Artificial General Intelligence (#AGI) and #Consciousness”, she wrote:

“Notably, with regard to AI and consciousness, leading scientists like Yann LeCun and Yoshua Bengio, who are currently working on integrating conscious processing into AI systems using neural networks, indicate ‘we are not there yet.’ When faced with complex challenges, machines are still unable to consciously reason and adapt the way humans can.”

After which, Eveline thoughtfully added: “The challenge of building morally responsible machines starts with the individuals building them, their ethical standards, civility, morals…”

We will come back to this great point from Eveline at the end of this Chronicle.

Can a genocidal AGI run amok and kill us all?

A recent Reuters/Ipsos poll found that more than two-thirds of Americans are concerned about the negative effects of AI, and 61% believe it could threaten civilization.

Tech CEOs are busily warning clueless politicians around the world about the terrible dangers of genocidal AGI. They say, in effect: put enough data into one end of our increasingly sophisticated AI models with billions or even trillions of parameters, and a genocidal AGI Frankenstein may well walk out the other end, ready and willing to destroy us all.

Given the roles of tech CEOs in this crazy mess, their dire warnings are not lacking in irony…

They are also complete nonsense, according to the Silicon Valley legend Marc Andreessen (he who famously predicted “software is eating the world”). “AI does not want, it does not have goals, it does not want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.”

Also according to Andreessen, in WIRED, the idea that AI could even decide to kill us all is a “category error”: it assumes AI has a mind of its own. Rather, he says, AI “is math—code—computers, built by people, owned by people, used by people, controlled by people.”

The apocalypse cult of AI Risk

So how does all this add up to a worldwide phenomenon of psychosocial technobabble? We will again cite Andreessen, since he knows all about tech creativity and craziness:

"The reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation…. a set of AI risk doomers who sound so extreme (because) they’ve whipped themselves into a frenzy…. It turns out that this type of cult is not new – there is a longstanding Western tradition of millenarianism, which generates apocalypse cults. The AI risk cult has all the hallmarks of a millenarian apocalypse cult.”

Andreessen adds: “Some of these true believers are even …arguing for a variety of bizarre and extreme restrictions on AI ranging from a ban on AI development, all the way up to military airstrikes on datacenters and nuclear war.”

This way surely lies madness…

Now let’s take a look at how Big Tech has gotten involved in this crazy business. 

Big Tech and the regulatory capture playbook

It is mainly the Big Tech companies that are busily warning the world about the terrible dangers we are facing due to technologies… which they themselves developed. And they want tough regulation (so they say) in order to protect everyone from the damage that may (or may not) happen as a result.

That, of course, is not the whole story.

According to the highly respected AI expert François Chollet: “The extinction narrative is being fed to you by a small set of LLM companies, both as a marketing ploy (our tech is so powerful it could destroy everything!) and as a way to gain the attention of (endlessly gullible) lawmakers in order to achieve regulatory capture.”

Is it any surprise that “in the world of generative AI, it is the big names that get the most airtime”? Big Tech players like Microsoft and lavishly funded startups like OpenAI have earned invitations to the White House and to the earliest of what will likely be many, many congressional hearings. They are the ones that get big profile pieces to discuss how their technology will end humanity.

It’s no coincidence that OpenAI recently introduced its new superalignment initiative, a four-year effort involving the formation of a new team to help humanity ensure that AI systems much smarter than humans (especially a superintelligence) follow human intent, and thus avoid an AI doomsday scenario. As one tech magazine wrote on July 7th, “OpenAI Launches 'Superalignment' to Guard Against AI Destroying Humanity”.

We can also note that as Amazon gets into the large AI model game, it has invested up to $4 billion for a minority stake in Anthropic, a key player in AI safety, along with deeper AWS integration. This is a very smart move in today’s paranoid environment.

To wrap this up, we will cite a world-class expert who only recently joined the conversation: Andrew Ng, a professor at Stanford University who taught machine learning to the likes of OpenAI co-founder Sam Altman, and who himself co-founded Google Brain. An interview with him was published on October 30th, entitled “Google Brain founder says big tech is lying about AI extinction danger”. Beyond the incendiary title, Prof Ng said:

“There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction.” He added: “There’s a standard regulatory capture playbook that has played out in other industries, and I would hate to see that executed successfully in AI.”

BOTTOM LINE: We would never claim that this AI panic was started as a conspiracy by industry players in a backroom. We do believe, however, that a purely hypothetical genocidal AGI risk has been turned into a “millenarian apocalypse cult” AND that Big Tech has done a great job of leveraging it for its own designs.

After all, the “AGI Frankenstein scenario” is a great “two for the price of one” opportunity for the big players (with enormous financial and regulatory compliance resources) to ensure that there won’t be much space in the market for pesky open source competitors, nor for that matter for customer choice. 


Concluding remarks

In the upcoming Chronicles, we will continue to focus on analyzing large-scale AI risks before getting around to dissecting the different regulatory approaches, especially the EU AI Act. We will cover several more large-scale, international AI risks:

- Protecting Privacy and Intellectual Property rights

- Mitigating the negative Environmental impact of large AI models

- Dealing with Labor Market turbulence and transformation

We will NOT try to take on all the risks related to LLM hallucinations, biased data, and assorted dysfunctional systems. Most of them are intimately tied to specific AI use cases, which is where the regulation should happen… but probably will not.

Coming back to our AGI Frankenstein scenario, we believe that the dire warning being pushed by the AI Risk Cultists and Big Tech - in which an immensely intelligent and capable AGI acquires or is built with agency and then decides to run amok, out of all control - is only a wacky narrative that has had a certain success. We would put it in the category of Black Swan events: however unlikely, we should still keep an eye on it.

To conclude, we asked our friend Bill Mew, a well-known European visionary thinker and tireless advocate for ethics in tech, for his take:

“I don’t think that machines are a real threat, but humans operating machines are.”

Bill then added: “Rather than fear machines that are far from sentient, biased or self-serving, I fear the programmers that introduce bias and their overlords who task them to utilise the machines (and weapons of war) for their own, only-too-sentient and self-serving aims.”

 

