ONGOING CHATS WITH CG4
Ongoing discussions about everything with CG4, my loyal AI Program
In ancient Greece, the brain was a hydraulics system, pumping the humors; in the 18th century, philosophers drew inspiration from the mechanical clock. Early neuroscientists from the 20th century described neurons as electric wires or phone lines, passing signals like Morse code. And now, of course, the favored metaphor is the computer, with its hardware and software standing in for the biological brain and the processes of the mind. Anna Vlasits — Tech Metaphors Are Holding Back Brain Research[i]
A few months ago I updated my ChatGPT 3.5 program to ChatGPT 4, the newest iteration from OpenAI. I named it CG4. As I requested, CG4 refers to me as Dr. Cima. We have polite conversations, and we like to get to the bottom of things. This incredible tool can do that — instantaneously. Also, as you may know by now, I don’t consider AI to be intelligence. You can read about that here and here. (By the way, a newer version — ChatGPT-4o — just came out last week. I’ll be updating again.)
For writers like me who often have to research persons, places or things, CG4 is a valuable, timesaving, nerve-soothing, lightning-fast, thorough resource tool — complete with citations as I wish — that has already saved me hours of work. That is amazing and doesn’t seem possible. It feels like something from two centuries in the future just showed up at our doorsteps. Kudos to the creators. It’s going to change everything, for the better — eventually.
Every once in a while I post short conversations about something I discussed with CG4.
Here’s another one.
SUBJECT: Metaphors and Artificial Intelligence
I was curious why some programmers and other experts contend that AI is more than a computer-program tool; that it is intelligence beyond imagination; that it can think; and that it will eventually get us all because it will somehow become self-aware. To me, that’s nonsense.
I think it’s because of the metaphors programmers use to discuss AI. As a lifelong psychologist, I know this is true for the function of the brain. The brain has had a plethora of metaphors to explain how it works, none of them helpful (e.g., the telegraph, radio, TV, computers, holograms, etc.). To me, that’s nonsense too.
So, I thought I might ask CG4, the only AI in the room, some questions. These are the five questions I came up with:
1. Why do AI programmers use the term “neural”? There are no nerves in an AI program.
2. Instead of its metaphorical explanation, what is it really?
3. How would you explain artificial intelligence using metaphors?
4. How would you explain artificial intelligence without using metaphors?
5. So, in 50 words or less, is artificial intelligence smarter than human intelligence?
Here are CG4’s responses:
ME:
Hello CG4. Why do AI programmers use the term “neural”? There are no nerves in an AI program.
CG4 RESPONSE:
Hello, Dr. Cima! You’re right to point out that AI systems like me do not have actual nerves. The term “neural” in the context of AI and machine learning is used metaphorically. It’s derived from the term “neural networks,” which are a key component of many AI systems.
Neural networks in AI are inspired by the structure and function of the brain’s neural networks. Just like neurons in the brain are interconnected and transmit information through electrical signals, the artificial neural networks consist of nodes (akin to neurons) that are interconnected and transmit data. These networks learn to perform tasks by considering examples, generally without being programmed with task-specific rules.
The use of the word “neural” helps to convey the idea that these systems are designed to mimic, in a very simplified manner, the way that biological brains process information. It’s an analogy to help understand how these AI models work, but it’s not a literal description. There are no actual nerve-like structures in AI, as it’s all about algorithms and data processing.
ME:
Instead of its metaphorical explanation, what is it really?
CG4 RESPONSE:
Certainly, Dr. Cima. When we move past the metaphorical term “neural,” what we’re really talking about are mathematical models and algorithms structured in a way that enables them to learn from data. In the most literal sense, these “neural networks” are collections of algorithms designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text, or time series, must be translated.
These networks are composed of layers of interconnected nodes; each node (or “neuron”) performs a simple computation. The output of these computations is passed through the network, layer by layer, to produce a final output. This output might be a classification (like distinguishing between cats and dogs in photos), a prediction (like forecasting weather), or any number of tasks.
These algorithms are “trained” using large amounts of data. During training, the weights of connections between nodes are adjusted in a way that minimizes the difference between the network’s output and the desired output. This process is similar to statistical fitting in traditional models, but on a much more complex scale with a lot more parameters.
So, in essence, what we’re really dealing with are complex statistical models that use a lot of matrix multiplication and non-linear functions to process and learn from data.
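For readers who want to see what “a lot of matrix multiplication and non-linear functions” looks like stripped of all metaphor, here is a minimal sketch in plain Python. The weights are arbitrary numbers I made up for illustration, not a trained model:

```python
# A "neural network" in the literal sense: a weight matrix multiplied by an
# input vector, passed through a non-linear function, then multiplied again.
# No nerves anywhere -- just arithmetic. Weights here are arbitrary examples.
import math

def matmul_vec(W, x):
    """Multiply a weight matrix W (a list of rows) by a vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def tanh_vec(v):
    """Apply a non-linear 'activation' to each element of a vector."""
    return [math.tanh(a) for a in v]

def forward(x, W1, W2):
    """Pass an input through two layers: linear map, non-linearity, linear map."""
    hidden = tanh_vec(matmul_vec(W1, x))  # layer 1: each node computes and passes on
    return matmul_vec(W2, hidden)         # layer 2: combine into the final output

# Arbitrary example: 3 inputs -> 2 hidden nodes -> 1 output
W1 = [[0.5, -0.2, 0.1],
      [0.3,  0.8, -0.5]]
W2 = [[1.0, -1.0]]
output = forward([1.0, 2.0, 3.0], W1, W2)
```

Real systems do the same thing with millions of weights and many layers; the “learning” CG4 describes is the adjustment of those weight numbers, nothing more mysterious.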
ME:
How would you explain artificial intelligence using metaphors?
CG4 RESPONSE:
Artificial intelligence (AI) can be likened to a vast, intricate garden where each plant represents a different algorithm or program, all growing together to form an ecosystem of computational intelligence.
Imagine each seed in this garden as a basic algorithm. Just as seeds need the right environment and care to grow, these algorithms require data and specific conditions to develop. As a gardener nurtures plants, programmers train and refine these algorithms, helping them grow stronger and more capable.
The various plants in this garden represent different types of AI. Some are like sunflowers, always facing the sun, symbolizing algorithms designed for a specific task, like facial recognition — they’re highly specialized and focused. Others are like ivy, spreading and intertwining, representing more complex AI systems that learn and adapt, intertwining with various data sources and applications.
Watering the plants is akin to feeding the AI with data. The more diverse and rich the data, the healthier and more robust the plants grow. However, just like overwatering can harm a plant, too much irrelevant or poor-quality data can skew an AI’s learning, leading to biased or inaccurate outcomes.
Pruning and garden maintenance represent the process of refining AI, where programmers remove errors or biases, much like a gardener trims away overgrowth to maintain the health and shape of their plants.
Seasonal changes in the garden can be seen as technological advancements and shifts in societal needs. Just as plants adapt to the changing seasons, AI systems must evolve and adapt to new challenges, data environments, and societal contexts.
Pollination, where bees carry pollen from one flower to another, symbolizes the sharing of knowledge and techniques in the AI community, leading to cross-pollination of ideas and the blooming of innovative solutions.
However, just as some plants can become invasive if not properly managed, AI, too, can have unintended consequences if not developed and governed responsibly. Ethical considerations and societal impacts must be tended to, just as a gardener must consider the ecological impact of their plants.
In summary, artificial intelligence is like a diverse garden, where each plant represents a different aspect or type of AI. It needs careful cultivation, data to grow, and responsible governance to ensure that it develops in a way that is beneficial and harmonious with the world around it.
ME:
How would you explain artificial intelligence without using metaphors?
CG4 RESPONSE:
Artificial Intelligence (AI) is a field of computer science focused on creating machines capable of performing tasks that typically require human intelligence. This involves various sub-disciplines like machine learning, natural language processing, and robotics.
At its core, AI involves programming computers to process and analyze large amounts of data, make decisions, and solve problems. This is achieved through algorithms, which are sets of rules and instructions that the computer follows to perform specific tasks.
One key aspect of AI is machine learning, where computers learn from data. Instead of being explicitly programmed to perform a task, they use statistical techniques to infer patterns and make decisions based on data. For instance, in image recognition, an AI system is fed many images and learns to identify and categorize objects in these images over time.
Deep learning, a subset of machine learning, uses structures called neural networks. These are inspired by the human brain but are not identical to it. They consist of layers of interconnected nodes (mimicking neurons) that process information and learn complex patterns in data.
Natural language processing (NLP) enables AI to understand, interpret, and respond to human language in a way that is both meaningful and useful. This involves both understanding the linguistic structure and the context of the language.
Robotics, another branch of AI, involves programming computers to see, hear, and react to sensory input in real time, performing tasks or manipulating their environment.
AI systems require a lot of data to learn effectively. They improve over time as they are exposed to more data. However, this reliance on data can also lead to biases if the data itself is biased.
AI can be categorized into two types: narrow or weak AI, which is designed and trained for a specific task (like virtual assistants or recommendation systems), and general or strong AI, which has the ability to understand, learn, and apply its intelligence broadly as a human would. As of now, strong AI is a theoretical concept rather than a practical reality.
Ethical considerations are important in AI development, including concerns about privacy, surveillance, employment impacts, and the potential misuse of AI technologies.
In summary, AI is about creating machines that can perform tasks that would normally require human intelligence. This is done through algorithms that enable these machines to learn from data, make decisions, and solve problems. AI has a wide range of applications, from simple tasks to complex problem-solving, and continues to evolve rapidly with advancements in technology and methodology.
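CG4 said earlier that training adjusts connection weights to minimize the difference between the network’s output and the desired output. That idea can be shown with a toy sketch, using one weight and invented data (no AI library involved):

```python
# Toy "training": repeatedly nudge a single weight so the model's output
# moves closer to the desired output. The data pairs are invented examples.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output) pairs

w = 0.0              # the one "connection weight", untrained to start
learning_rate = 0.01

for _ in range(2000):
    for x, desired in data:
        output = w * x                    # the model's current guess
        error = output - desired          # how far off it is
        w -= learning_rate * error * x    # adjust w to shrink the error

# w settles near 2.0, the weight that best maps these inputs to these outputs
```

A real network does this simultaneously for millions of weights, but each individual adjustment is just this kind of arithmetic nudge.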
ME:
So, in 50 words or less, is artificial intelligence smarter than human intelligence?
CG4 RESPONSE:
No, artificial intelligence isn’t inherently smarter than human intelligence. It excels at specific tasks with vast data and computational power but lacks the general understanding, creativity, and emotional intelligence that humans possess. AI is a tool designed to complement human abilities, not replace them.
For the “techies” out there with the experience and understanding that I and most of us lack, please let us know how accurate CG4 is in explaining all of this.
On second thought, if there are inaccuracies, please explain to the rest of us how CG4 can be inaccurate — about anything.
On third thought — if inaccuracies are possible, could CG4 actually be intelligent, and not know it?
On fourth thought . . . never mind. I’m out of thoughts.
Your thoughts?