Artificial Intelligence (AI)

The Rise of the Chatbot


In 2014, Microsoft launched a hugely successful AI bot named Xiaoice in China. Across more than forty million conversations, users often described feeling as though they were interacting with a real human. Microsoft co-founder Bill Gates described it this way: “Xiaoice has attracted 45 million followers and is quite skilled at multitasking. And I’ve heard she’s gotten good enough at sensing a user’s emotional state that she can even help with relationship breakups.”


Xiaoice has published poetry, recorded musical albums, hosted television shows, and released audiobooks for children. She’s such a superstar that it’s easy to forget she is merely a set of complex algorithms. Some have pointed to this as evidence of the ELIZA effect: our tendency to attribute human understanding and emotion to software that simply mimics conversation.


In 2016, Microsoft hoped to duplicate Xiaoice’s success by introducing Tay in the United States. This would be the virtual assistant of the future: part influencer, part helper, part content creator, and part counsellor. However, within hours, Tay began posting sexist and racist rants on Twitter, spouting unhinged conspiracy theories and engaging in trolling behaviours.


So, what happened? As trolls and bots flooded Tay with offensive content, the AI began to mimic it. In attempting to “learn” how to interact with a community, the bot picked up the mores and norms of a group that was deliberately feeding it racist and sexist material. By the time Microsoft shut Tay down, it had begun promoting Nazi ideology.

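To see the mechanism, here is a deliberately toy sketch in Python of a bot that “learns” by memorising whatever users send and replying with the phrases it has seen most often. It is purely illustrative and bears no resemblance to Tay’s actual architecture; the point is that, without a filtering step, whoever spams the bot the most ends up writing its script.

```python
# Toy illustration only (not Tay's actual design): a bot that "learns"
# by counting user messages and echoing back what it has seen most.
from collections import Counter
import random

class ParrotBot:
    def __init__(self) -> None:
        self.memory = Counter()  # phrase -> times seen; no moderation step

    def learn(self, message: str) -> None:
        self.memory[message] += 1  # absorbs anything, offensive or not

    def reply(self) -> str:
        # Sample proportionally to frequency, so the loudest group
        # dominates the bot's output.
        phrases = list(self.memory)
        weights = [self.memory[p] for p in phrases]
        return random.choices(phrases, weights=weights, k=1)[0]

bot = ParrotBot()
for msg in ["hello!", "lovely day"] + ["<offensive meme>"] * 50:
    bot.learn(msg)
print(bot.reply())  # almost certainly the spammed phrase
```

Real systems are far more sophisticated, but the failure mode is the same: a model that optimises for imitating its inputs will imitate poisoned inputs just as faithfully.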

While this might be an extreme example, deep learning systems will always contain biases. There’s no such thing as a “neutral” AI, because it pulls its data from the larger culture and will “learn” social norms and values from its vast data set. It’s easy to miss the subtler biases and misinformation embedded within generative AI precisely because it so often produces accurate content. In that sense, ChatGPT poses a different challenge than Tay: the bias is less overt but still powerful.


And yet, when people interact with AI bots, they are more likely to assume that the information is unbiased and objective. I’ve already noticed people saying things like, “I treat ChatGPT like a search engine” or “AI can teach you pretty much anything.” Both remarks rest on the assumption that AI is inherently unbiased and accurate, which is a dangerous place to be.


In the upcoming decade, AI will fundamentally change the larger media landscape. Our students will inhabit a space where generative AI can create instant content that seems inherently human. They’ll have to navigate a world of deepfakes and misinformation. But this isn’t a new phenomenon; AI has been changing the information landscape for decades. Before diving into deepfakes, let’s explore how AI has already transformed the way we interact with media. Ultimately, if we want to think about how generative AI will influence information literacy, we need to recognize how much rudimentary forms of AI have already shaped our current information landscape.


Is It Too Late to Jump Out of the Pot?

The story goes that if a frog is placed in boiling water, it will immediately leap out to escape the danger. However, if the frog is put in warm water that is slowly heated to a boil, it will not perceive the gradual increase in temperature and will ultimately meet its demise, unaware of the looming peril until it’s too late. A bit like dozing off in a jacuzzi.


Similarly, the rise of AI has been, and continues to be, a gradual escalation. It was initially introduced through simple examples we could all understand (maps and navigation, face detection), and its presence was like the tepid water for the frog: comfortable and seemingly harmless. Society welcomed AI for its convenience, efficiency, and the promise of a better future. We’ve watched it evolve from simple machine learning algorithms to complex systems capable of emulating human-like decision-making and action, moving from the background of our digital experiences to the forefront of our daily lives.


Much like the frog that doesn't realize the water is heating up, we may not fully grasp the profound changes AI brings to our social fabric, economy, and privacy. The transformation is so incremental that each advancement seems like a natural progression, not a cause for alarm. We celebrate the conveniences and efficiencies it brings, often overlooking the broader implications, such as the displacement of jobs, ethical dilemmas around privacy and autonomy, and the potential for AI to make decisions beyond human control or understanding.


I feel that we are far too comfortable. We are frogs living our best lives. Startups are rolling out tools; we’re using Claude, ChatGPT, Copilot, and whichever tool comes next before it goes extinct, gets acquired, or is sunsetted. We’re drinking the Kool-Aid.


The boiling point we are approaching is the juncture where AI’s influence and control over society become all-encompassing and possibly irreversible, with significant consequences for human autonomy, employment, and even our social structures. Like the frog that fails to jump out of the boiling water, humanity might not take decisive action until it’s too late to fully reverse or mitigate the impacts.


AI’s evolution in our everyday lives tells me one thing: the bar for entry is so low that AI is going to be ubiquitous across jobs and society. We will get so comfortable that those once perceived as intellectuals will lose their edge.


We must redefine and evolve our capabilities today and structure ourselves in ways that best mesh skillsets together. Over the last 18 months I have met with teams and companies, some of which still have hierarchical organisational structures where decisions must pass through layers of middle management for approval. Conversely, you have those leading from the ground up, because their goals are clear. If we don’t have clarity in the why and the how, companies that embed AI in their thinking and use innovative techniques will win over those still waiting for a decision from manager number two, who happens to be on holiday with an “out of office” reply directing you to an assistant if urgent. We are not quite at this precipice, but I will make the point anyway: bureaucracy will die, because the speed of AI and the evolution of a new model will win.


Where we are going is different from what we have experienced in these crazy two years. Large Language Models will evolve into Large Action Models. A Large Action Model (LAM) can be defined as an advanced AI system that understands and generates human-like text (as LLMs do) and can also perform actions in the real world or in digital environments based on that understanding. These actions could range from making online reservations and scheduling appointments to interacting with other software systems to complete complex tasks. The key difference is the LAM’s ability to directly execute actions or commands based on the processed information, moving beyond mere suggestions or text-based outputs.

In education, that might be a teacher asking a service to “prep my lesson,” with the agent knowing what to prepare from the next day’s curriculum. It could also be an agent that books a restaurant for dinner tonight, catering for the preferences of those attending, the numbers, and the location. The LAM acts not just as an information processor but as an active agent that executes tasks on behalf of the user, interacting with various digital platforms and services. It exemplifies the transition from AI as a tool for understanding and generating language to AI as an autonomous actor within predefined domains.

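To make the distinction concrete, here is a minimal, purely illustrative sketch of a LAM-style loop in Python. Everything in it is an assumption for illustration: plan_action stands in for a real language model, and the two tools (book_restaurant, prep_lesson) are hypothetical stubs rather than real services or APIs.

```python
# Illustrative sketch of a Large Action Model (LAM) style agent loop.
# `plan_action` is a stand-in for a real language model; both tools are
# hypothetical stubs, not real services or APIs.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Action:
    tool: str             # which registered tool to invoke
    args: Dict[str, str]  # arguments extracted from the request

def book_restaurant(args: Dict[str, str]) -> str:
    return f"Booked a table for {args['party_size']} near {args['location']} tonight."

def prep_lesson(args: Dict[str, str]) -> str:
    return f"Prepared materials for tomorrow's lesson on {args['topic']}."

# The predefined domains the agent is allowed to act in.
TOOLS: Dict[str, Callable[[Dict[str, str]], str]] = {
    "book_restaurant": book_restaurant,
    "prep_lesson": prep_lesson,
}

def plan_action(request: str) -> Action:
    """Stand-in for the LLM step: map natural language to a structured action.
    A real LAM would use a model here; keyword matching is just a placeholder."""
    if "restaurant" in request.lower() or "dinner" in request.lower():
        return Action("book_restaurant", {"party_size": "4", "location": "the city centre"})
    return Action("prep_lesson", {"topic": "fractions"})

def run_agent(request: str) -> str:
    action = plan_action(request)           # understand (what an LLM already does)
    return TOOLS[action.tool](action.args)  # act (the step that makes it a LAM)

print(run_agent("Book a restaurant for dinner tonight"))
print(run_agent("Prep my lesson"))
```

The design point is the last line of run_agent: a plain LLM would stop after producing the plan, while a LAM routes that plan to a registered tool and executes it.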

Concurrently, as the software evolves, so does the hardware: AR/VR tools, tracking rings and wristbands, and cameras tracking our every move. Beyond the Humane AI Pin, there are the Ray-Ban Meta smart glasses, iOS 18, and talk of Apple AirPods with cameras and sensors, all of which highlight the deepening integration of AI into daily life, offering personalized suggestions and actions. The proliferation of devices equipped with cameras and sensors will enable a form of intelligence that adapts to and learns from user behaviour, enhancing the human-AI relationship.


AI is everywhere, and we’re letting it in because it feels comfortable, “cool,” and efficient. We need to navigate these turbulent waters with a transformed consciousness, seeking to chart a course through the emerging landscape and shape it for the better.


The Frog in the Water is a cautionary tale, urging us to remain vigilant and proactive in shaping the development and integration of AI. By being aware of the temperature of the water, so to speak, we can ensure that AI serves to enhance human society rather than endanger it, maintaining a balance that fosters innovation while preserving our core values and autonomy. And don’t doze off at the edge of the jacuzzi.


Stay Curious, don’t get boiled, and don’t forget to be amazing,