Virtual AI assistants are poised to become companions in people’s lives, but legislators and regulators should adopt flexible principles to promote technological innovation while minimising risks, ensuring a favourable future for the virtual AI assistant revolution, writes Christophe Carugati.
Christophe Carugati is an affiliate fellow at the think tank Bruegel, working on competition and digital issues. Bruegel’s mission is to improve the quality of economic policy with open and fact-based research, analysis and debate.
A decade ago, when Joaquin Phoenix portrayed Theodore in the science fiction movie ‘Her,’ his character developed a deep connection with a virtual AI assistant, ‘Samantha’. Today, with remarkable advancements in artificial intelligence (AI), the fiction is becoming real.
People can now engage in text, voice, and image-based interactions with an AI application, ushering in a new era where these virtual companions are poised to be indispensable in people’s lives. However, this transformation has potential benefits and pitfalls that regulators should quickly address with flexible principles.
AI is already present in people’s lives. When people shop or date online, AI shapes their online interaction with products, services, and other individuals through recommendations.
When OpenAI released ChatGPT in November 2022, AI became even more visible in people’s lives. The chatbot enables seamless human-like interactions. Its output is so human-like that OpenAI developed a tool to distinguish AI-written text from human-written text, which was rapidly withdrawn due to its low accuracy rate.
Remarkably, ChatGPT is the fastest-growing application in history, with over 100 million active users in just two months. Yet, chatbots are merely the tip of the iceberg in the virtual AI assistant revolution.
Virtual AI assistants are poised to become virtual companions. They can assist people in planning trips, responding to emails, and even serving as virtual friends through human-like interactions that go beyond the task-specific interactions of today’s virtual assistants, reminiscent of the movie ‘Her’.
Microsoft co-founder Bill Gates’ vision of a personal agent is now materialising and might profoundly impact how people interact with their online environment, as the app revolution did.
In recent weeks, several tech firms, including Microsoft, OpenAI (backed by Microsoft), Amazon, Meta, and Google, have unveiled a wave of novel services and products empowering people to engage with a ‘Samantha’.
‘Samantha’ is no longer a mere fiction but a tangible reality. This comes with potential benefits and pitfalls. In the best-case scenario, people will have a choice amongst several virtual AI assistant providers, fostering competition and innovation. In this environment, providers will open their ecosystems to third parties and compete to provide high privacy and safety, avoiding data misuse and addictive behaviours. The worst-case scenario presents a stark contrast: a few dominant providers stifle business access and compromise user privacy and safety, because they have little incentive to open their ecosystems to potential rivals or to provide a high level of protection, mirroring current digital competition problems.
Fast-moving developments in AI make it challenging to predict the likely scenario. Existing information already suggests that both are plausible.
Numerous providers already offer voice assistants, with Amazon, Google, and Apple leading the European market in 2022, but new entrants like Meta and OpenAI are shaking up the market.
Some firms like Meta allow third-party business access and are committed to safety.
Studies have also found that virtual companions, like those provided by the social chatbot Replika, can positively impact mental health by alleviating loneliness. However, negative outcomes have also emerged, with some people developing an emotional dependence on the chatbot.
Despite this uncertainty, there are avenues to ensure positive outcomes are more likely than negative ones. Laws like the Digital Markets Act (DMA) and the General Data Protection Regulation (GDPR), which guarantee open and fair digital markets and user privacy, could force the market to deliver the best-case scenario.
Still, these laws might not address or prevent all potential adverse outcomes, like barriers to third-party access or the possibility of emotional dependence. Furthermore, the pace of technological advancements outstrips the ability of legislators and regulators to respond to these harms promptly with sufficient knowledge, as shown by the ongoing discussions about foundation models that power these virtual AI assistants in the context of the European proposal for an AI Act.
In this context, legislators and regulators should prepare for the upcoming virtual AI assistant revolution. They should steer the trajectory toward positive outcomes by adopting flexible principles like the access principle to ensure third-party access or the safety-first principle to limit harmful addictive functionalities.
This is more efficient and appropriate than rigid rules that can hardly keep pace with market developments. Drawing inspiration from the United Kingdom’s competition authority, which recently embraced such an approach in its initial report on foundation models, these principles can form the foundation for industry-specific codes of conduct.
Amid these fast-moving developments, such principles will promote technological innovation while minimising risks, ensuring a favourable future for the virtual AI assistant revolution.