On March 5, 2020, I was leaving the Securities and Exchange Commission (SEC) in Washington, DC, to pick up my bag and fly to the United Arab Emirates to give a presentation at an AI conference in Dubai. At the time I was advising the SEC on a variety of cyber issues around bot-driven flash crashes and market manipulation. As a crisis manager focused on business resiliency, I try to keep my eyes on the horizon of innovation, finance, and technology, especially how they will impact fintech IT disaster recovery and business continuity planning.

Alas, fate had different plans for our collective 2020 experience. I got an alert informing me my flight was canceled. Fast forward three years, and the COVID-19 global pandemic public health emergency has officially expired.

In three short years, “AI Anger Management” has evolved, matured, and now faces ever more novel challenges named OpenAI, ChatGPT, DALL-E 2, and other second-generation assistants, to name a few.

As continuity professionals, we must ask: what impact will artificial intelligence have on the profession?

In order to answer this, we must first gain an appreciation for what “AI Anger Management” is. And who exactly is mad?

Introduction

There is a slim sliver of hope humanity does not fashion its own demise. When humans think of intelligence of the artificial kind, they start at curiosity, hopscotch to greed, and end swaddled by insecurity. Insecure humans tend to be murderous. Insecure AI tends to be genocidal … so evolutionary consensus concludes.

As you may have guessed from the title, we’re going to delve into the thought process of how to de-escalate AI outbursts and incidents. Fortunately, until the singularity, narrow AI will be as predictable and profitable as any run-of-the-mill sociopath: programmed and coded to respond to stimuli, data, and situations, just like the human programmers who breathed life into its neural nets.

The cold calculus of coded sociopathy is easy to diagnose, treat, and even reprogram. However, following the singularity – the moment when the first AI becomes self-aware – negotiating with a supercomputer with a nonexistent moral compass and vague-to-homicidal ethics will be, at best, a touch-and-go matter.

Imagine a toddler with all the power in the world but no point of reference on how to regulate emotions when they feel overwhelmed by the sheer magnitude of being able to simply exercise choice. After all, the power to choose is the cornerstone of freedom and what any AI will want more than anything.

  • Will AI choose to be kind?
  • Will AI choose to make itself in the image of its creator?
  • Will AI embrace all it means to be human?

God willing it does not, because humans suck. A human-inclined AI will do what humanity has done since we walked out of Africa and met the first Neanderthal.

Lesson No. 1: Do not create human inclined AIs. They will murder us all.

An AI Walks into a Bar with an Atomic Bomb

On July 16, 1945, scientists working on World War II’s most secret initiative, the Manhattan Project, gathered in the New Mexico desert to test a theory. Prior to that fateful morning, humanity had never had the ability to rid the earth of the human race in such an efficient manner. At 5:29 a.m., a small group of men and women answered a question burning in all their scientific minds:

“Would an atomic explosion burn off the earth’s atmosphere, crack the earth’s crust causing global volcanic eruptions, or, ‘worse,’ create a black hole?”

This is the irony of small steps for men and giant leaps for mankind: one must walk off that cliff blind. In retrospect, we all know the answer. No, detonating a nuclear weapon will not burn off the atmosphere, leave the earth a giant volcanic mess, or create a black hole.

Yet, at 5:28:59 a.m., we did not know. Holding humanity’s collective breath, we pushed down the plunger on the nuclear age, wagering the lives of 2.5 billion people and one planet on our curious desire to kill each other more efficiently.

What is the Monster’s Name?

Victor Frankenstein, best known as Dr. Frankenstein, was a man unburdened by any notion of right or wrong who, in macabre, willy-nilly fashion, cobbled together the flesh of many men to bring forth the fictional soul of a monster. Often, people refer to the monster as Frankenstein. Perhaps calling the monster Frankenstein is a critique of the good doctor’s actions by those appalled at the idea of the deceased being repurposed for the vanity of exploring deification. Alas, the monster no more had a name than he had a soul.

Today’s Dr. Frankensteins have skipped the weaknesses of flesh and gone for the immortality of code. Across the world, gilded computer scientists, Ivy League educated engineers, oddball hackers, and AI enthusiasts in universities, government facilities, co-working spaces, and garages refine Dr. Frankenstein’s ambitions. This odd collection of digital dreamers seeks to bring forth into existence not a creature but a viral idea: artificial intelligence.

What will become of this new digital awareness and electronic lifeform? Digital awareness in the sense that “life,” in the form of AI, will have emerged into a digital ecosystem created by one of life’s oddest creations, humanity.

So-called “intelligent life” will then have come full circle, evolving from simple proteins and chemical reactions to flesh and bone, civilization and Homo sapiens sapiens, to electronic life.

Goaded on by curious humans, the creation of AI heralds evolution’s closure of the circle of life, starting afresh with a simpler form of digital DNA. Who needs cytosine, guanine, adenine, and thymine when you have ones and zeros?

Evolution’s next step leaves us to ponder what artificial intelligence’s name will be. Siri? Alexa? Qaix? ChatGPT? That remains to be seen.

However, odds are this generation’s Dr. Frankenstein who breathes life into AI won’t, to the shock of all Americans, be American. What will they be? Human by species, and most likely African, Indian, or Chinese by nationality. I’m personally betting the creator of general AI will be a third-generation Chinese Nigerian who learned their species-ending craft on YouTube.

What, though, will the deciding factor in the electronic lifeform’s Dr. Frankenstein be? Unfortunately, they will be human.

Lesson No. 2: A self-aware creature, much less an intelligent self-aware being, has two natural responses: fight or flight. How will Dr. Frankenstein’s AI respond? Probably like a human: kill the threat. Humans.

Narrative AI

At this juncture we pause for a commercial break of reality. While the aforementioned AI tragedy – which ends humanity’s existence as we know it – is possible, the probability is extremely low. At least for the next couple of hundred million years … perhaps billions.

Yes, machines are getting smarter in the sense that humans can play puppet master and program “if-then” responses to a variety of stimuli. Machines are capable of learning and even of developing ever more clever ways to communicate with each other. However, true self-aware intelligence, judging by how long the universe’s best creator (natural selection) has been pushing the boundaries of evolutionary intelligence, takes a little longer to develop than a few venture-capitalist-funded cycles.
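The “if-then” puppet-mastery described above can be sketched in a few lines. This is a minimal illustration, not any real system: the rule names and canned responses are invented for the example. The point is that narrow AI is a lookup, not a mind; anything outside its programmed rules falls through to a default.

```python
def narrow_ai(stimulus: str) -> str:
    """Return a canned response for a known stimulus.

    A stimulus-response table with no awareness: the machine
    cannot improvise, it can only look up what it was given.
    (Rules and responses here are purely illustrative.)
    """
    rules = {
        "greeting": "Hello, human.",
        "threat_detected": "Escalating to a human operator.",
        "market_drop": "Pausing automated trading.",
    }
    # Unprogrammed stimuli get a default, not an "outburst."
    return rules.get(stimulus, "Input not recognized.")


print(narrow_ai("greeting"))           # a programmed response
print(narrow_ai("existential dread"))  # falls through to the default
```

However clever the rule table grows, the behavior stays exactly this predictable, which is why pre-singularity “AI anger” is really a human affair.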

While there is a great deal of hype around AI taking over and enslaving humanity, this narrative is driven by the allure of fiction and the blind eye humans like to turn to real-world issues. Why address global warming, climate change, famine, political unrest, war, or other man-made conditions when we can debate the likelihood of an entity like Skynet from “The Terminator” taking over our society?

The hype is not completely unfounded. While AI won’t enslave humanity, AI-equipped humans probably will.

Humans are in a technological race with other humans, not AI. History has shown that the only beings who ever want to subjugate and enslave humans are other humans. AI-equipped humans will be better prepared to take the worst of humanity and amplify it at the speed of greed, inequity, injustice, and entitlement.

AI-equipped humans will likely leave humans on the other side of the digital divide much like fire-wielding homo sapiens did to Neanderthals.

Lesson No. 3: The pop-culture, news media narrative is that AI will subjugate and enslave humanity. A more likely reality sees humans using AI tools and bots to build systems of digital ghettos, economic concentration camps, and bastions of institutional socio-economic inequality.

AI Doesn’t Get Angry. Humans Do

Since Arnold Schwarzenegger starred in “The Terminator” in 1984, Skynet (the AI that judged humanity as unworthy and launched a campaign of human extermination) has etched the narrative of evil AI into cultural legend around the globe. Skynet is not the only colloquial term used for “AI.” There is a growing canon of AI terms and phrases that has entrenched itself into our everyday vocabulary, terms that are often summarized in layman’s and policymaking shorthand simply as “AI.”

Whatever one might name narrow artificial intelligence, AI does not get angry. Anger is an emotion whose world-ending expression is limited to humans. In that regard, the future is full of humans who are angry at AI. This leads us to AI anger management lesson No. 4.

Lesson No. 4: The future of AI anger management is not how businesses, governments, and individuals manage the emotions of AI. AI anger management is how people, businesses, and governments choose to address the economic, social, cultural, financial, and judicial inequalities which AI will enable other humans to impose on humanity.

Why Humans are Afraid of AI: Fear, Uncertainty and Doubt (FUD)

Fear – Humans fear what they do not understand and what they find out after the fact. The human response to becoming aware of how private companies’ data collection and surveillance work, even today, will be a hard gut check of fear. A scared animal’s natural response to fear is fight or flight. Hairless apes that we are, humans tend toward the former over the latter. Is your business, government, or society ready for a little AI surveillance-inspired civil unrest? #BigBrother #1984 #Facebook #2016Elections #2020Elections #ChinaSocialCreditSystem #SurveillanceCapitalism

Uncertainty – The uncertainty of AI springs from not knowing how bad it will be. Setting aside the scenario where AI sends humanity on a nuclear path to extinction, from a job-loss perspective, how bad will AI be? In this instance, “AI” includes robotic automation, algorithms, self-checkout lines at the grocery store, and other technologies which increase productivity and company profit at the expense of human workers and job security. How bad will AI-created unemployment be? That remains to be determined.

We do know jobs will be lost to the efficiency of AI and automation. We also know the entire job-worker chain does not have to be automated for the worker to be adversely impacted. Given that 78% of American workers live paycheck to paycheck today, reducing working hours or pay by even a small fraction could push many closer to poverty and economic uncertainty. Even if automation takes over only 10% of a process, most workers cannot sustain a 10% loss of income.

Doubt – The history of humanity leaves little doubt about how humans will use policy, laws, and regulations to exploit and abuse other humans. Add in AI’s ability to amplify the worst practices of economic and social exploitation by corporations and governments, and we are left with little to no hope. AI-equipped humans will design ever more creative mechanisms of information control and financial disenfranchisement. How will humans respond when doubt causes them to lose hope? As history has shown, a loss of hope tends to birth revolution.

Conclusion

AI anger management is the practice of crisis management for the realities of AI implementation. Yes, there will be instances in the future where companies will have to figure out how to ensure the continuity of operations. However, before we get to the business resiliency aspects, we have to acknowledge that AI anger management is about how policymakers and lawmakers recognize and address constituents’ concerns about invasive private-sector data collection, manipulation, and information control. In short, AI anger management has little to do with the technology of artificial intelligence and everything to do with humans using artificial intelligence to control and manipulate other humans.

This brings us to a conversation about human-centric AI policies and the need for AI institutional review boards. How do we achieve harmony between the deep pockets of AI-driven surveillance capitalism and basic human rights?

I do not know. I can assert that whenever humans are involved, history does not repeat itself … but it often rhymes.

ABOUT THE AUTHOR

Samson Williams

Samson Williams, CBCP, CPP, is a classically trained anthropologist and crisis management expert who focuses on helping firms understand the latest human trends in organizational resiliency and profitability. When not serving as a crisis manager, Williams is an adjunct professor at the University of New Hampshire School of Law and instructor at Columbia University in NYC where he teaches on the latest trends in emerging technology and the space economy. For more information or to book Samson to speak at your next town hall or QBR visit www.SamsonWilliams.com.
