Since its public launch in November last year, ChatGPT has been dominating conversations in and outside of business. Whatever you think of it, it is definitely a game-changer. I wrote this post at the beginning of February, straight from the heart. Developments around the AI tool and the ripple effects it is causing are moving at lightning speed. I will have more to add from a purpose-driven business point of view, but here are my initial thoughts, which remain basically unchanged.
OpenAI, the company behind ChatGPT, states on its website that its mission is “to ensure that artificial general intelligence benefits all of humanity”. That’s really great. But isn’t that a bit naïve coming from creators who built a chatbot that mimics human language so eerily well that it is a perfect tool for manipulation and deception, and then released it into the public domain?
To be honest, the recent interview in TIME Magazine with Mira Murati, chief technology officer at OpenAI, offers little reassurance.*
*Oh, and just in case you were wondering: No, this is NOT a ChatGPT-generated text answering the question: “What does the CTO at OpenAI think about ChatGPT?”
(Take note: at some point in the future it may be hard to come by opinion pieces produced by humans.)
Secondary school kids
As a parent, I am deeply worried when I watch secondary school kids on the news with my teenage daughter (about to enter secondary school herself) and hear them say how super cool ChatGPT is: a tool that provides them with all the material they need for their papers and essays, so they don’t have to waste their time doing the research and writing themselves.
These 15- and 16-year-olds say this on camera, self-confident and without blinking an eye. So here is the new generation of future workers and leaders being educated, and apparently they have no notion of the value of critical thinking, fact-checking, autonomous reasoning, or creating.
Conversations in parallel universes
And as a professional, I am amazed to see two parallel universes evolving around two separate conversations, each led by equally fervent aficionados. In the first, authenticity and real connection are the talk of the day: businesses need to be “authentic”, show their human side, and engage as people with people, because workers and customers crave honest and caring relationships – even more so after the pandemic, when isolation and loneliness caused so much distress.
And in the second universe, artificial intelligence and all its fancy new tools, such as ChatGPT, are glorified as the key to increased automation and efficiency, risk minimization, smarter decision-making, and a reduction in human error – in other words, the answer to all the nasty flaws real people have.
Not the universe where I belong.
“Regulators need to step in”
So what does Mira Murati think?
Well, the CTO behind the tool that has taken the internet by storm admits in TIME Magazine that one of the “core challenges” of OpenAI’s baby ChatGPT, now that it’s out in the open, “is that it may make up facts”. Yes, there are a lot of unsolved “ethical questions”, and no, its creators have no answer to the risk of the tool being used by bad actors.
But hey, they are just “a small group of people” anyway. Definitely, “regulators and governments and everyone else” need to step in to make sure that its use will be “controlled and responsible,” Murati concludes.
Caught up in a rat race
Surprise! The first in line to step in are the big tech companies that already dominate the world wide web. Microsoft has poured an initial 10 billion dollars into the small company, Elon Musk was among its first backers, and now Google and Meta are caught up in a rat race to launch their own versions of ChatGPT.
We all know that ethics and benefiting all of humanity are not the first things these companies and their owners deeply care about.