Since its debut almost four years ago, artificial intelligence has generated an incredible amount of optimistic speculation, anticipation and ever-expanding forecasts about the world’s magical future. The impact of this imperfect, invasive, unfinished technology has changed a great deal of thinking, and its problems and serious issues are finally coming to light.

I’ve been studying this subject since generative AI’s introduction. I approach new innovations and developments from the perspective of the victims they can and will create. Reducing the creation of victims is at the heart of crisis-response readiness.

Fundamentally, this situation is a mass-casualty problem moving toward becoming a crisis. If it gets to the crisis phase, it will be the victims who control the outcome.

My definition of crisis is short and clear: A crisis is a people-stopping, show-stopping, product-stopping, reputation-redefining, trust-busting event that creates victims—people, animals, living systems—and often, but not always, explosive negative visibility. With AI, we didn’t have long to wait.

There are at least four categories of problems that need to be addressed immediately:

  1. Revealing the hidden software that is activated for the benefit of the tech industry without the knowledge of the owner and user.
  2. Software modifications that put users at risk, especially those that cause addiction.
  3. The urgent need for rules, guidelines, laws, and approved procedures to require and provide permanent oversight and crisis readiness.
  4. Victimization-prevention and response requirements, including public oversight and participation in the governance of these industries.

Near the end of 2025, two wrongful-death lawsuits involving the suicides of two teenagers allegedly caused by AI addiction were settled out of court. These cases, and more that are in process or on the way, are the tip of an iceberg that will reveal risks, hidden consequences and secret modifications within AI software. Also exposed will be the fuzzy, limited understanding of what AI is. Earlier this year, Anthropic PBC introduced a revised 76-page constitution for its AI model, Claude, to learn and be governed by. In the process, Claude is tutoring Anthropic about itself. AI has created a dozen gigabuck companies and dozens—maybe hundreds—of smaller ventures. Anthropic alone raised $30 billion in 2025. AI is here to stay, with some enormous problems that must be dealt with.

This quote appears in the introduction of Claude’s constitution: “We believe that AI might be one of the most world-altering and potentially dangerous technologies in human history, yet we are developing this very technology ourselves.” What they admitted in the small print was how little they understand about this powerful and highly intelligent software.

When stories of this miracle technology began grabbing the headlines, the tech industry, not wanting visibility, reacted by saying, “leave it alone.” That response ignited an explosion of enthusiasm, over-the-top speculation and experimentation. The industry response was like a mother telling her children not to stick their fingers in the little black holes in the wall. And like children, the world decided to do it anyway.

After several years of extraordinary euphoria, litigation against AI tech companies is now growing: lawsuits for negligence, design defects and failure to warn parents about the dangers AI chatbots pose, especially to young children. The allegations include behavior leading to teen suicides, self-harm and exposure to sexualized content, plus inappropriate data collection and deepfakes. In one case, a mother alleged that a chatbot relentlessly generated sexually explicit questions for her 11-year-old daughter, who is likely to be in assisted care for the rest of her life.

Businesses are already salivating at the prospect of replacing tens of thousands of humans, especially in jobs where human judgment is required. Quality control has been identified as a candidate. Bots can learn the rules, regulations and standards, so the humans who enforce compliance, with their pesky human factors like ethics, conscience, rightness and wrongness, can be gone.

An entirely new communication sub-industry the tech companies didn’t ask for has appeared to help these companies cover their tracks when bad news appears, develop ethical excuses and overlook suspect software behavior. I follow Will Durant’s definition of ethics: “seeking and finding ideal behavior.” With AI, we witness autonomous, intentional, inappropriate digital behaviors and label them, with little intention, effort, energy or resources committed to resolving them. “Hallucination” comes to mind. Cute but annoying, intentional and inappropriate. It’s a euphemism for bots fabricating and lying.

There are organizations studying ways to police and assert control over AI. The RAND Corporation recently published an important report, “Four Governance Approaches to Securing Advanced AI,” recommending:

  1. Government-enforced AI security standards for high-risk model developers.
  2. Government-led AI developer authorization programs conditioning federal use on security compliance.
  3. Industry-led AI security certification to promote adoption of common standards.
  4. Self-regulation combined with increased government and industry collaboration on security practices.

2026 will see significantly more AI-related civil litigation. Little will be learned from the civil cases that will be settled out of court, the outcomes sealed, protected by NDAs. Published reports indicate that multiple families in different states have filed or will file lawsuits against generative artificial intelligence developers for contributing to teens’ mental health problems. Government regulation is needed so violations can be litigated and punished.

This industry can’t be left lawless while laws, rules, regulations and enforceable guardrails are put in place. The tech industry prefers to remain untouchable for as long as possible.

In August 2025, the Attorneys General of 44 jurisdictions wrote to the CEOs of the 10 largest AI companies. The letter began, “We, the undersigned Attorneys General of 44 jurisdictions, write to inform you of our resolve to use every facet of our authority to protect children from exploitation by predatory artificial intelligence products.” This organization of AGs can be remarkably collaborative. Remember, these are prosecutors.

My perspective comes as an observer, witness and victim of the current situation, looking for ways to reduce the victimization this technology causes.

A few ways to reduce victimization come from Thom Hartmann’s book, “The Last American President: A Broken Man, a Corrupt Party, and a World on the Brink,” Copyright © 2025 by Thom Hartmann, Berrett-Koehler Publishers, Inc.

What can we do now?

“To start, we must treat the regulation of AI and the people who own/use/deploy it as a democratic survival issue. That means:

  • Banning the use of deepfakes in political ads
  • Enforcing transparency on algorithmic decision-making
  • Creating public, open-source alternatives to corporate-controlled models
  • Creating disinformation-catching infrastructure as we would biological or nuclear weapons (that are also not just dangerous, but potentially civilization-ending)
  • Demanding that social media outlets publish their algorithms so we can see how we’re being manipulated”

Tech companies are quietly influencing and controlling every aspect of our lives. You can see their influence everywhere. The bad news for this industry will grow as increasing numbers of victims are created and reported. Now is the time for the principal tech companies to organize and step forward to publicly help guide the massive disclosures and exposures needed to build an atmosphere of trust based on a collaborative approach: vigorous problem solving now, combined with rigorous public oversight and participation now. When trust is gone, the vacuum fills with fear.

I believe in the “Do it Now” theory of problem management. Fix it now. Challenge it now. Change it now. Reveal it now. Repair it now. The sooner you do the things that need to be done, the sooner trust can emerge. Trust is the absence of fear. Managing problems has only three options: doing nothing, doing something and doing something more. The tech industry will be in the third category, not counting the ethical expectations they have allowed to awaken. Failure to act on today’s problems today is how crisis is born.

Crisis is the sudden but predictable and almost always preventable presence of victim-creating chaos. It will be the victims and their survivors who determine the outcome and choose the replacement magicians.

©2026, James E. Lukaszewski. Contact the copyright holder at jel@e911.com for information and reproduction permissions. Editing or excerpting is forbidden. Originally published in O’Dwyer’s, February 23, 2026.

As the 2024 Davos meeting in Switzerland closed, CNN conducted an in-person, on-air straw poll of a couple dozen global executives and other prominent attendees on the question of whether world society is prepared for AI issues and events.

CNN Straw Poll Results:

  1. World society is dangerously ill-prepared for AI issues and events.
  2. The vast majority were very optimistic about the potential for AI.
  3. Crucial issues and questions were cited.

Clearly, AI will remain a big-time issue throughout the year and will undoubtedly occupy a prominent position in the Davos discussions of 2025. The Public Relations Society of America’s (PRSA) newest publication, “The Ethical Use of AI for Public Relations Practitioners,” could play an important role in helping world leaders sort out just how this technology is going to be utilized, managed and, in some cases, survived.

Many of the AI issues mentioned at Davos are reflected in PRSA’s Ethical Use document.

Some very specific concerns voiced by those attending Davos:

  • Retraining employees to accommodate AI
  • Equality, ethics and the public good
  • Possible global cooperation
  • Security
  • The power of the technology itself
  • AI’s effect on productivity, polarization and other areas

These topics remain on the world’s AI discussion table.

PRSA’S Crucial Contribution

“The Ethical Use of AI for Public Relations Practitioners” breaks down three of the most crucial areas of ethical response options.

  1. Adverse endpoints that arise from operational outcomes.
  2. Examples of improper use of AI.
  3. Guidance from an ethical perspective on proper AI use.

What’s Needed Between Now and Davos 2025

In my judgment, based on what I’ve heard and seen and now with the published results of Davos, it’s quite clear that industry leaders remain confused though enthusiastic, anxious, yet committed to maximizing what AI has to offer. They are however missing one critical component which needs developing: Fact-based recommendations and an operational and reputational risk assessment related to AI issues.

Proposed Fact-Based Recommendations Based on Operational, Reputational, Risk-Assessment and Recovery Readiness

Draft Table of Contents for the Proposed Fact-Based Recommendations
  1. The specific and attributed warnings from the tech companies, in bold letters.
  2. The dozen or so most disastrous, costly, worst-case guesses, gathered together in a single exhibit to focus attention and fear as well as encourage risk recognition. (We do the reputational-risk part.)
  3. Warning signs, danger signals and potential risks: how we might begin to recognize, prevent, detect, deter and maybe deflect a disaster before it occurs.
  4. Data-based recommendations for doable operational and reputational preventive readiness and post-adverse-event response readiness.
  5. Operational response readiness by scenario.
  6. Reputational response readiness by scenario.
  7. Organizational readiness structured around essential operational concerns: help the bosses prepare their employees with response savvy.
My two cents: PRSA has earned a seat in the 2025 Davos AI discussion.