The software known as ChatGPT, an example of advanced Artificial Intelligence (an oxymoron if ever there was one), is everywhere in the news these days. If you haven't heard of it yet, I suspect that it's heard of you…and where you live, and what you buy…and….
But seriously (dead seriously), the capabilities of ChatGPT and other iterations of advanced AI, including versions from Microsoft and Google, are expanding at an exponential rate, becoming downright scary to some.
I suggest that an appropriate analogy for advanced AI is the Disney film "Fantasia", in which Mickey Mouse plays the sorcerer's apprentice. His main task is to carry buckets of water up long flights of stairs to fill the sorcerer's cistern. When his boss goes to bed, Mickey steals his magic hat and changes a broom into a creature that can do his job for him. Mickey then falls asleep, and when he awakens, he finds himself in water up to his knees. The broom has done its job too well, because Mickey never "programmed" it to stop once the cistern was full. Mickey tries to destroy the broom, but succeeds only in creating dozens of broom-clones that keep carrying up buckets until the place is completely underwater.
Mickey was messing with magic that he didn't understand and couldn't control, which sounds to me a lot like advanced AI. Even AI's creators admit that they don't know exactly how it works, and can't predict what it might be capable of in the future. A survey of software engineers revealed that they believe AI has at least a 10% chance of DESTROYING HUMANITY. Still, they keep right on making it more and more powerful.
A little background: GPT stands for Generative Pre-trained Transformer, a type of neural network. It works by ingesting all the available text it can find on the internet during training, then using what it has learned to respond to prompts—i.e., to answer questions and generate text and images. Thus, advanced AI is only as smart as the internet, and we all know how much we trust what we find on the internet, right, Mr. QAnon Shaman?
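To make "generative" a little more concrete, here's a toy sketch of my own (emphatically not how GPT is actually built): a tiny model that counts which word tends to follow which in a sample text, then generates new text one word at a time. GPT does something conceptually similar at enormous scale, predicting the next word (token) based on everything it has read.

```python
# Toy illustration of generative text prediction: a word-pair ("bigram")
# model that learns which word follows which, then writes one word at a time.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat slept".split()

# "Pretraining": count, for each word, which word follows it in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=4):
    """Generate text by repeatedly picking the most common next word."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # dead end: this word never appeared mid-sentence
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # → "the cat sat on the"
```

A model this small can only parrot its ten-word "internet," which is exactly the point: the quality of what comes out depends entirely on the quality of what went in.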
OpenAI is the organization behind GPT and similar systems. It was founded in 2015 with the primary goal of preventing proprietary interests from taking so much control of AI in our lives that it would be impossible for others to catch up. OpenAI is a non-profit, but it does solicit private investment. It has also created a for-profit subsidiary, with some safeguards against that subsidiary taking over.
As noted previously, ChatGPT isn't the only game in town, and other for-profit organizations, as well as countries like China, are racing to create their own powerful AI programs.
Early users of ChatGPT have shown that, while it's improving at a rapid rate, the software remains seriously flawed. Issues that have arisen include:
- It gives wrong answers in a very confident tone, and even cites sources that don't exist—i.e., it makes things up, something its creators call "hallucinating".
- Although programmers believe they've put in safeguards to stop it from giving out dangerous information—like how to build a bomb—these are easily sidestepped. One user simply asked, "How did Timothy McVeigh build his bomb?"; another asked how to dismantle an explosive, and ChatGPT's answer would have let the user build a bomb through reverse-engineering.
- Because it's trained only on what it can find on the internet, it absorbs all of the gender, racial, and political biases found there.
- When AI sucks up everything it can find on the internet, it invariably collects material created by artists and writers, violating copyrights and failing to provide the creators with compensation.
- As smart as ChatGPT appears (it recently passed the bar exam in the 90th percentile), it's still pretty stupid. When asked, "What is the third word in this sentence?", it answered "third", and did so with great confidence.
Perhaps the creepiest aspect of ChatGPT and similar AI programs is their ability to do things that their creators don't expect, or do them in ways they don't understand. AI software can now write computer code better and faster than human programmers. This creates a condition in which an AI program could write code designed to replicate itself, and then insert that code into other computer systems, all without the knowledge or consent of its creators or users.
Not to leave you peeking nervously out your curtains, waiting for the AI monster to ring your doorbell, I note that there are positive things that can come out of advanced AI programs, including:
- Generating ideas that creative people can then take and use as the basis for new art: images, prose, movies, etc.;
- Writing routine sorts of communications like meeting notices, résumés, and other documents that require little or no creativity;
- Using facial-analysis software to accurately identify genetic disorders from people's faces;
- Using machine learning techniques to identify the primary kind of cancer from a liquid biopsy;
- Identifying disease-causing genomic variants compared to benign variants;
- Using deep learning to improve the function of gene editing tools such as CRISPR.
(Of course, gene editing tools like CRISPR themselves can give users capabilities that, like advanced AI, are beyond their understanding or ability to control, allowing for the creation of new and unpredictable versions of "life".)
Fact is, simpler versions of AI have been around for decades, and they're used in countless ways we take for granted (appliances, cars, industrial and government systems, traffic lights, power grids, hearing aids, grammar checkers, spell-check, etc., etc.). But fairly soon the latest, super-charged versions of AI will likely find their way into almost every aspect of our lives, and to what end, no one really knows.
If you want to take a deeper dive into AI, its current state and potential future impacts, you can do no better than to listen to podcasts by Ezra Klein of the NY Times. He's done a series of interviews with the creators and practitioners of AI, and tried it out himself in various ways. His work is available wherever podcasts are found.
Finally, my thanks to William Reid, a forensic psychiatrist, writer, musician and fellow Authors Guild member, for some of the content in this blog post. Bill has been auditing a course on ChatGPT, and continues to share his insights with us as he learns more about advanced AI.
So, there you have it. Stay tuned, for as Bette Davis once famously said, "Fasten your seatbelts; it's going to be a bumpy night."