Generative AI: Rewards, Risks and New EU Legislation

Depending on who you talk with or what stories you read, OpenAI and ChatGPT may be the greatest things in the world, or the beginning of the end for humanity.

Question: When you think about generative AI, or more specifically ChatGPT (or a competitor product), how do you primarily feel?
  1. Excitement, opportunity, more productivity, finally.  
  2. Possibilities, new career beginning, game-changer, disruptive, paradigm shift.  
  3. Fear, job losses, need to reskill.
  4. Increased fraud, more cyberattacks, scarier phishing attacks, ID theft, bad guys win.
  5. Overwhelmed, urge to hide, leave me alone and just let me retire. Not another big turn.
  6. Overhyped, Trojan horse, can’t control AI, wrong direction, humanity is doomed.
  7. What is this? Please explain more. Why should I care?   
  8. All of the above, depending on the day.

From Washington, D.C., to San Francisco and from Phoenix to Lille, France, I have heard all of the sentiments above regarding ChatGPT, OpenAI and other generative AI applications and products over the past three months as I’ve traveled the world. A common answer is No. 8, and the pendulum seems to swing daily as new headlines and articles are released and adoption grows.

One thing is clear: Almost everyone is talking about generative artificial intelligence (or "GenAI" for short) and making plans to course-correct whatever they were doing before.

MORE DETAILS, PLEASE...


Here are some of the top stories I have been reading on this topic over the past two weeks, with some excerpts to help.

Wall Street Journal (WSJ.com) - Wendy’s, Google Train Next-Generation Order Taker: an AI Chatbot: “Wendy’s is automating its drive-through service using an artificial-intelligence chatbot powered by natural-language software developed by Google and trained to understand the myriad ways customers order off the menu.

“With the move, Wendy’s is joining an expanding group of companies that are leaning on generative AI for growth.

“The Dublin, Ohio-based fast-food chain’s chatbot will be officially rolled out in June at a company-owned restaurant in Columbus, Ohio, Wendy’s said. The goal is to streamline the ordering process and prevent long lines in the drive-through lanes from turning customers away, said Wendy’s Chief Executive Todd Penegor.”

DigitalTrends.com - ChatGPT: How to use the AI chatbot that’s changing everything: “ChatGPT has continued to dazzle the internet with AI-generated content, morphing from a novel chatbot into a piece of technology that is driving the next era of innovation. Not everyone’s on board yet, though, and you’re probably wondering: What’s ChatGPT all about?

“Made by OpenAI, well-known for having developed the text-to-image generator DALL-E, ChatGPT is currently available for anyone to try out for free. Here’s what ChatGPT is, how to use it, and how it could change the future of the internet.”
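
For readers who want to go a step beyond the chat window, the model family behind ChatGPT can also be reached programmatically. The snippet below is a minimal, illustrative sketch assuming the pre-1.0 "openai" Python package and an API key you supply yourself; the model choice, prompt text and variable names are my own placeholders, not something drawn from the article quoted above.

```python
# Minimal sketch: calling the model family behind ChatGPT via the pre-1.0
# "openai" Python package. Assumes `pip install "openai<1.0"` and an API key
# exported as the OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # supplied by you, never hard-coded

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain generative AI in two sentences."},
    ],
    temperature=0.7,
)

# The reply text lives in the first choice's message content.
print(response["choices"][0]["message"]["content"])
```

Note that newer versions of the SDK change this interface, so check the current OpenAI documentation before reusing the call.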

TechTarget.com - 7 generative AI challenges that businesses should consider: “The promise of revolutionary, content-generating AI models has a flip side: the perils of misuse, algorithmic bias, technical complexity and workforce restructuring.”

CNBC.com - Workers are secretly using ChatGPT, AI and it will pose big risks for tech leaders: “Soaring investment from big tech companies in artificial intelligence and chatbots — amid massive layoffs and a growth decline — has left many chief information security officers in a whirlwind. With OpenAI’s ChatGPT, Microsoft’s Bing AI, Google’s Bard and Elon Musk’s plan for his own chatbot making headlines, generative AI is seeping into the workplace, and chief information security officers need to approach this technology with caution and prepare with necessary security measures.”

Yahoo.com - AI could replace 80% of jobs 'in next few years': “Artificial intelligence could replace 80 percent of human jobs in the coming years — but that's a good thing, says U.S.-Brazilian researcher Ben Goertzel, a leading AI guru.”

HOW ABOUT CYBERSECURITY AND GENERATIVE AI?


Nextgov.com - Officials Warn of 'Power' and 'Failure' of Imagination in AI Regulation: “Federal policy leaders working in emerging technology plan to prioritize collaboration over competition with allies, as systems like artificial intelligence continue their rapid innovation with limited legal guardrails.

“Cybersecurity and Infrastructure Security Agency Director Jen Easterly and Ambassador at Large for Cyberspace and Digital Policy Nathaniel Fick discussed their views on how AI stands to impact the international cybersecurity landscape during a Special Competitive Studies Project event on Tuesday.

“While not explicitly naming Congress as the regulatory source to manage burgeoning AI systems, Easterly did express the need for guidance to continue motivating innovation while simultaneously protecting individuals from algorithmic-based harms.

"'We need to look at a framework that enables us to protect it, but still take advantage of the amazing innovation,' she said.”

APNews.com - How Europe is leading the world in building guardrails around AI: “Authorities around the world are racing to draw up rules for artificial intelligence, including in the European Union, where draft legislation faced a pivotal moment on Thursday.

“A European Parliament committee voted to strengthen the flagship legislative proposal as it heads toward passage, part of a yearslong effort by Brussels to draw up guardrails for artificial intelligence. Those efforts have taken on more urgency as the rapid advances of chatbots like ChatGPT highlight benefits the emerging technology can bring — and the new perils it poses."

You can also take a look at the European Union's new AI Act website here.

Here is a description from the website: “The AI Act is a proposed European law on artificial intelligence (AI) — the first law on AI by a major regulator anywhere. The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”
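
To make the three tiers easier to keep straight, here is a small, purely illustrative Python sketch of the classification that the summary describes. The example applications and their assignments are hypothetical illustrations based on the examples quoted above (social scoring, CV scanning); they are not an official taxonomy from the Act.

```python
# Illustrative sketch of the AI Act's three risk tiers as described in the
# summary above. The example applications and their tier assignments are
# hypothetical, for demonstration only.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g., government-run social scoring)"
    HIGH = "permitted, but subject to specific legal requirements"
    MINIMAL = "not explicitly banned or listed as high-risk; largely unregulated"


# Hypothetical mapping for demonstration purposes only.
EXAMPLE_APPLICATIONS = {
    "government social scoring system": RiskTier.UNACCEPTABLE,
    "CV-scanning tool that ranks job applicants": RiskTier.HIGH,
    "email spam filter": RiskTier.MINIMAL,
}


def describe(application: str) -> str:
    """Return a one-line description of an application's assumed risk tier."""
    tier = EXAMPLE_APPLICATIONS.get(application, RiskTier.MINIMAL)
    return f"{application}: {tier.name} -- {tier.value}"


if __name__ == "__main__":
    for app in EXAMPLE_APPLICATIONS:
        print(describe(app))
```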

The Washington Post (via MSN.com) - Cybersecurity faces a challenge from artificial intelligence’s rise: “Earlier this year, a sales director in India for tech security firm Zscaler got a call that seemed to be from the company’s chief executive.

“As his cellphone displayed founder Jay Chaudhry’s picture, a familiar voice said, 'Hi, it’s Jay. I need you to do something for me,' before the call dropped. A follow-up text over WhatsApp explained why. 'I think I’m having poor network coverage as I am traveling at the moment. Is it okay to text here in the meantime?'

“Then the caller asked for assistance moving money to a bank in Singapore. Trying to help, the salesman went to his manager, who smelled a rat and turned the matter over to internal investigators. They determined that scammers had reconstituted Chaudhry’s voice from clips of his public remarks in an attempt to steal from the company.

“Chaudhry recounted the incident last month on the sidelines of the annual RSA cybersecurity conference in San Francisco, where concerns about the revolution in artificial intelligence dominated the conversation.”

SHOULD WE PAUSE GENAI RESEARCH?


I recently posted a CNN article on LinkedIn titled "Why the ‘Godfather of AI’ decided he had to ‘blow the whistle’ on the technology." I led the post with this comment: “A few years ago, I joined several colleagues in attempting to 'cut the Fear, Uncertainty and Doubt (FUD)' surrounding cybersecurity online. I still hold to that. But now ... Here comes AI FUD. Not sure what my view is on this yet, but what do you think about this scary article? Thoughts on this?”

The comments were all over the map, but many people do fear that we will not be able to control AI in a few years. Others agree with Bill Gates that fears are overblown. Everyone seems to agree that this is a work in progress and that the stakes are very high.

Regardless of whether industry voices are positive or negative, this GenAI train continues to move forward, and it seems nothing will slow it down soon, unless a major issue hits the news.

FINAL THOUGHTS


Like many readers, I continue to learn more and more every day about this paradigm-breaking technology we call "GenAI," and how it is impacting, or will impact, different industries.

All across the globe, experiments are unfolding, such as this hacker event, which tests the limits of AI technology.

Yes, this is a work in progress for all of us.

My advice: Get on board this GenAI boat that is leaving the dock fast, at least to learn the new rules that are emerging for the tech and cyber industries. As I wrote in a blog back in January, there are serious concerns and much fact-checking is needed, but ignoring the ChatGPT and GenAI trend would be a big mistake.

Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.