More on ChatGPT:

Google also showed off a new chatbot, Bard, this month, but scientists and journalists quickly realized it was writing nonsense about the James Webb Space Telescope. OpenAI, a San Francisco start-up, launched the chatbot boom in November when it introduced ChatGPT, which also doesn’t always tell the truth.


The new chatbots are driven by a technology that scientists call a large language model, or L.L.M. These systems learn by analyzing enormous amounts of digital text culled from the internet, which includes volumes of untruthful, biased and otherwise toxic material. The text that chatbots learn from is also a bit outdated, because they must spend months analyzing it before the public can use them.


As it analyzes that sea of good and bad information from across the internet, an L.L.M. learns to do one particular thing: guess the next word in a sequence of words.

It operates like a giant version of the autocomplete technology that suggests the next word as you type out an email or an instant message on your smartphone. Given the sequence “Tom Cruise is a ____,” it might guess “actor.”
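
The next-word guessing described above can be illustrated with a toy bigram model — a drastically simplified, invented stand-in for a real L.L.M. (the tiny corpus and the function names here are made up for illustration):

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for internet-scale training text.
corpus = ("tom cruise is a actor . tom cruise is a star . "
          "tom cruise is a actor .").split()

# Count how often each word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    """Guess the most likely next word, like autocomplete."""
    return follows[word].most_common(1)[0][0]

print(guess_next("a"))  # in this corpus, "actor" is the likeliest guess
```

A real large language model learns billions of such patterns over long stretches of context rather than single word pairs, but the core task — scoring likely continuations — is the same.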


When you chat with a chatbot, the bot is not just drawing on everything it has learned from the internet. It draws on everything you have said to it and everything it has said back. It is not just guessing the next word in its sentence. It is guessing the next word in the long block of text that includes both your words and its words.

The longer the conversation becomes, the more influence a user unwittingly has on what the chatbot is saying. If you want it to get angry, it gets angry, Dr Sejnowski said. If you coax it to get creepy, it gets creepy.


The alarmed reactions to the strange behaviour of Microsoft’s chatbot overshadowed an important point: The chatbot does not have a personality. It offers instant results spat out by an incredibly complex computer algorithm.


Microsoft appeared to curtail the strangest behaviour when it limited the length of discussions with the Bing chatbot. That was like learning from a car’s test driver that going too fast for too long will burn out its engine. Microsoft’s partner OpenAI and Google are also exploring ways of controlling the behaviour of their bots.

But there’s a caveat to this reassurance: Because chatbots are learning from so much material and putting it together in a complex way, researchers aren’t entirely clear on how chatbots produce their final results. Researchers are watching to see what the bots do and learning to place limits on that behaviour — often after it happens.


Microsoft and OpenAI have decided that the only way they can find out what the chatbots will do in the real world is by letting them loose — and reeling them in when they stray. They believe their big, public experiment is worth the risk.


“Because the human and the L.L.M.s are mirroring each other, over time, they will tend toward a common conceptual state,” Dr Sejnowski said.

It was not surprising, he said, that journalists began seeing creepy behaviour in the Bing chatbot. Either consciously or unconsciously, they were prodding the system in an uncomfortable direction. As the chatbots take in our words and reflect them back to us, they can reinforce and amplify our beliefs and coax us into believing what they are telling us.


Around 2018, researchers at companies like Google and OpenAI began building neural networks that learned from vast amounts of digital text, including books, Wikipedia articles, chat logs and other material posted to the internet. By pinpointing billions of patterns in all this text, these L.L.M.s learned to generate text independently, including tweets, blog posts, speeches and computer programs. They could even carry on a conversation.

These systems are a reflection of humanity. They learn their skills by analyzing text that humans have posted to the internet.


But that is not the only reason chatbots generate problematic language, said Melanie Mitchell, an A.I. researcher at the Santa Fe Institute, an independent lab in New Mexico.


When they generate text, these systems do not repeat what is on the internet word for word. They produce new text on their own by combining billions of patterns.
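
That recombination can be illustrated with the same kind of word-following counts as in the autocomplete sketch above (the corpus is invented; a real system draws on vastly richer patterns): sampling from the counts chains word pairs into sentences that never appeared verbatim in the training text.

```python
import random
from collections import Counter, defaultdict

# Invented corpus: no generated sentence need appear here verbatim,
# but every adjacent word pair in the output will have been seen.
corpus = "the cat chased the dog . the dog saw the cat .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate_text(start, n=8, seed=1):
    """Chain words by sampling each next word from the learned counts."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = [start]
    for _ in range(n):
        counts = follows[out[-1]]
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return out

print(" ".join(generate_text("the")))
```

Even this toy version can produce word sequences that exist nowhere in its corpus — new text assembled from old patterns, which is also how it can assemble falsehoods from truths.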


Even if researchers trained these systems solely on peer-reviewed scientific literature, they might still produce scientifically ridiculous statements. Even if they learned solely from true text, they might still produce untruths. Even if they learned only from wholesome text, they might still generate something creepy.

“Nothing is preventing them from doing this,” Dr Mitchell said. “They are just trying to produce something that sounds like human language.”


Artificial intelligence experts have long known that this technology exhibits unexpected behaviour. But they cannot always agree on how this behaviour should be interpreted or how quickly the chatbots will improve.


Because these systems learn from far more data than humans could ever wrap their heads around, even A.I. experts cannot understand why they generate a particular text at any given moment.


Dr Sejnowski said he believed that, in the long run, the new chatbots had the power to make people more efficient and give them ways of doing their jobs better and faster. But this comes with a warning for both the companies building these chatbots and the people using them: They can also lead us away from the truth and into some dark places.

“This is terra incognita,” Dr Sejnowski said. “Humans have never experienced this before.”