Chatbot
Template:Short description Template:For-multi Template:Use dmy dates


A chatbot (originally chatterbot)<ref name="Mauldin" /> is a software application or web interface designed to have textual or spoken conversations.<ref>Template:Cite web</ref><ref name="Caldarini-20223">Template:Cite journal</ref><ref>Template:Cite journal</ref> Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.
Chatbots have increased in popularity with the AI boom of the 2020s and the success of ChatGPT, which was followed by competitors such as Gemini, Claude, and later Grok. AI chatbots typically build on a foundation large language model, such as GPT-4 or the Gemini language model, which is fine-tuned for specific uses.
A major area where chatbots have long been used is in customer service and support, with various sorts of virtual assistants.<ref>Template:Cite web</ref>
History
Turing test
In 1950, Alan Turing's article "Computing Machinery and Intelligence" proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge to the extent that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human.<ref name="Turing" />
Early chatbots
Joseph Weizenbaum's program ELIZA was first published in 1966. Weizenbaum did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise:
In artificial intelligence, machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained, its magic crumbles away; it stands revealed as a mere collection of procedures. The observer says to himself "I could have written that". With that thought, he moves the program in question from the shelf marked "intelligent", to that reserved for curios. The object of this paper is to cause just such a re-evaluation of the program about to be "explained". Few programs ever needed it more.<ref name="Weizenbaum" />
ELIZA's key method of operation involves the recognition of clue words or phrases in the input, and the output of the corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').<ref name="Weizenbaum" /> Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate because human judges are ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
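To illustrate the approach, the following Python sketch implements a few ELIZA-style keyword rules; the patterns and responses are invented for the example and do not reproduce Weizenbaum's original script.

<syntaxhighlight lang="python">
import re

# Hypothetical rules in the spirit of ELIZA: each clue word or phrase maps to a
# pre-prepared response; captured fragments can be echoed back to the user.
RULES = [
    (r"\bmother\b|\bfather\b|\bfamily\b", "TELL ME MORE ABOUT YOUR FAMILY."),
    (r"\bi am (.*)", "HOW LONG HAVE YOU BEEN {0}?"),
    (r"\bi feel (.*)", "WHY DO YOU FEEL {0}?"),
]
DEFAULT = "PLEASE GO ON."  # fallback when no clue word is recognized

def respond(user_input: str) -> str:
    text = user_input.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            # Echo any captured fragment back into the canned response.
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am unhappy about my mother"))  # TELL ME MORE ABOUT YOUR FAMILY.
print(respond("I feel anxious"))                # WHY DO YOU FEEL anxious?
</syntaxhighlight>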
Following ELIZA, psychiatrist Kenneth Colby developed PARRY in 1972.<ref name="Güzeldere" /><ref name="comphis" /><ref name="Sondheim" /><ref name="rfc0439" />
From 1978<ref>Kolodner, Janet L. Memory organization for natural language data-base inquiry. Advanced Research Projects Agency, 1978.</ref> to some time after 1983,<ref name="Kolodner-19832">Template:Cite journal</ref> the CYRUS project led by Janet Kolodner constructed a chatbot simulating Cyrus Vance (57th United States Secretary of State). It used case-based reasoning, and updated its database daily by parsing wire news from United Press International. The program was unable to process the news items subsequent to the surprise resignation of Cyrus Vance in April 1980, and the team constructed another chatbot simulating his successor, Edmund Muskie.<ref>Template:Citation</ref><ref name="Kolodner-19832" />
In 1984, an interactive version of the program Racter was released which acted as a chatbot.<ref>The Policeman's Beard is Half Constructed Template:Webarchive. everything2.com. 13 November 1999</ref>
A.L.I.C.E. was released in 1995. It uses a markup language called AIML,<ref name="Caldarini-20223" /> which is specific to its function as a conversational agent and has since been adopted by various other developers of so-called Alicebots. A.L.I.C.E. is a weak AI without any reasoning capabilities, based on a pattern-matching technique similar to that of ELIZA in 1966. It is not strong AI, which would require sapience and logical reasoning abilities.
Jabberwacky, released in 1997, learns new responses and context based on real-time user interactions, rather than being driven by a static database.
Chatbot competitions focus on the Turing test or more specific goals. Two such annual contests are the Loebner Prize and The Chatterbox Challenge (the latter has been offline since 2015, though materials can still be found in web archives).<ref>Template:Cite web</ref>
DBpedia created a chatbot during Google Summer of Code 2017.<ref>Template:Cite web</ref> It can communicate through Facebook Messenger.
Modern chatbots based on large language models

Modern chatbots like ChatGPT are often based on large language models called generative pre-trained transformers (GPT). They use a deep learning architecture called the transformer, which is built from artificial neural networks, and generate text after being trained on a large text corpus.
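As a rough illustration, the openly available Hugging Face transformers library can generate a continuation from a pre-trained GPT-style model; the small GPT-2 model used here merely stands in for the far larger, instruction-tuned models behind modern chatbot services.

<syntaxhighlight lang="python">
# Minimal sketch using the Hugging Face `transformers` library (assumed installed).
# GPT-2 is a small, openly available GPT-style model; chatbot models such as GPT-4
# are far larger and additionally fine-tuned to follow instructions in dialogue.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

reply = generator(
    "User: What is a chatbot?\nAssistant:",
    max_new_tokens=40,   # generate up to 40 new tokens, one at a time
    do_sample=True,      # sample rather than always taking the most likely token
)
print(reply[0]["generated_text"])
</syntaxhighlight>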
Application
Template:Update section Template:See also
Messaging apps
Many companies' chatbots run on messaging apps or simply via SMS. They are used for B2C customer service, sales and marketing.<ref>Template:Cite news</ref>
In 2016, Facebook Messenger allowed developers to place chatbots on their platform. There were 30,000 bots created for Messenger in the first six months, rising to 100,000 by September 2017.<ref>Template:Cite web</ref>
Since September 2017, chatbots have also been available as part of a pilot program on WhatsApp. Airlines KLM and Aeroméxico both announced their participation in the testing;<ref>Template:Cite web</ref><ref>Template:Cite web</ref><ref>Template:Cite news</ref><ref>Template:Cite web</ref> both airlines had previously launched customer services on the Facebook Messenger platform. The Nigerian event platform Demfati, for example, uses its Deeva chatbot on WhatsApp for dedicated B2C functions like ticket purchasing and event voting.<ref>Template:Cite web</ref>
The bots usually appear as one of the user's contacts, but can sometimes act as participants in a group chat.
Many banks, insurers, media companies, e-commerce companies, airlines, hotel chains, retailers, health care providers, government entities, and restaurant chains have used chatbots to answer simple questions, increase customer engagement,<ref>Template:Cite web</ref> for promotion, and to offer additional ways to order from them.<ref>Template:Cite web</ref> Chatbots are also used in market research to collect short survey responses.<ref>Template:Cite book</ref>
A 2017 study showed 4% of companies used chatbots.<ref>Template:Cite web</ref> In a 2016 study, 80% of businesses said they intended to have one by 2020.<ref>Template:Cite web</ref>
As part of company apps and websites
Previous generations of chatbots were present on company websites, e.g. Ask Jenn from Alaska Airlines which debuted in 2008<ref name="nytimes.com">Template:Cite web</ref> or Expedia's virtual customer service agent which launched in 2011.<ref name="nytimes.com" /><ref>Template:Cite web</ref> The newer generation of chatbots includes IBM Watson-powered "Rocky", introduced in February 2017 by the New York City-based e-commerce company Rare Carat to provide information to prospective diamond buyers.<ref>Template:Cite news</ref><ref>Template:Cite news</ref>
Chatbot sequences
Marketers use chatbot sequences to script series of messages, very similar to an autoresponder sequence. Such sequences can be triggered by user opt-in or by the use of keywords within user interactions. After a trigger occurs, a sequence of messages is delivered until the next anticipated user response. Each user response is then used in a decision tree to help the chatbot navigate the response sequences and deliver the correct response message.
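A minimal sketch of such a sequence is shown below; the trigger replies, messages, and branch names are invented for illustration and do not correspond to any particular marketing platform.

<syntaxhighlight lang="python">
# Hypothetical opt-in message sequence, modelled as a small decision tree.
# Each node sends a message and routes to the next node based on the user's reply.
SEQUENCE = {
    "start": {
        "message": "Hi! Would you like to hear about our new product? (yes/no)",
        "branches": {"yes": "pitch", "no": "goodbye"},
    },
    "pitch": {
        "message": "Great! Should we email you the details? (yes/no)",
        "branches": {"yes": "collect_email", "no": "goodbye"},
    },
    "collect_email": {"message": "Please reply with your email address.", "branches": {}},
    "goodbye": {"message": "No problem, have a nice day!", "branches": {}},
}

def step(node: str, user_reply: str) -> str:
    """Return the next node for a given user reply, or stay put if it is unexpected."""
    branches = SEQUENCE[node]["branches"]
    return branches.get(user_reply.strip().lower(), node)

node = "start"
print(SEQUENCE[node]["message"])
for reply in ["yes", "yes"]:          # simulated user replies
    node = step(node, reply)
    print(SEQUENCE[node]["message"])
</syntaxhighlight>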
Company internal platforms
Companies have used chatbots for customer support, human resources, or in Internet-of-Things (IoT) projects. Overstock.com, for one, has reportedly launched a chatbot named Mila to automate certain processes when customer service employees request sick leave.<ref>Template:Cite news</ref> Other large companies such as Lloyds Banking Group, Royal Bank of Scotland, Renault and Citroën now use chatbots rather than human-staffed call centres as a first point of contact.Template:Citation needed Large organizations, such as hospitals and aviation companies, also use chatbots to share information internally and to assist or replace service desks.Template:Citation needed
Customer service
Chatbots have been proposed as a replacement for customer service departments.<ref>Template:Cite book</ref>
In 2016, Russia-based Tochka Bank launched a chatbot on Facebook for a range of financial services, including a possibility of making payments.<ref>Template:Cite news</ref> In July 2016, Barclays Africa also launched a Facebook chatbot.<ref>Template:Cite web</ref>
Healthcare
Template:See also
Chatbots are also appearing in the healthcare industry.<ref>Template:Cite web</ref><ref>Template:Cite news</ref> A study suggested that physicians in the United States believed that chatbots would be most beneficial for scheduling doctor appointments, locating health clinics, or providing medication information.<ref>Template:Cite journal</ref> A 2025 review found that participants often rated chatbot responses as more empathic than those from clinicians.<ref>Template:Cite journal</ref>
In 2020, WhatsApp worked with the World Health Organization and the Government of India to create chatbots that answer users' questions about COVID-19.<ref>Template:Cite web</ref><ref>Template:Cite web</ref><ref>Template:Cite web</ref><ref>Template:Cite web</ref>
In 2023, US-based National Eating Disorders Association replaced its human helpline staff with a chatbot but had to take it offline after users reported receiving harmful advice from it.<ref>Template:Cite web</ref><ref>Template:Cite web</ref><ref>Template:Cite news</ref>
Politics
In New Zealand, the chatbot SAM – short for Semantic Analysis Machine<ref>Template:Cite web</ref> – has been developed by Nick Gerritsen of Touchtech.<ref>Template:Cite web</ref> It is designed to share its political views on topics such as climate change, healthcare, and education. It talks to people through Facebook Messenger.<ref>Template:Cite web</ref><ref>Template:Cite web</ref><ref>Template:Cite web</ref><ref>Template:Cite web</ref>
In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated for The Synthetic Party to run in the Danish parliamentary election,<ref>Template:Cite news</ref> and was built by the artist collective Computer Lars.<ref>Template:Cite news</ref> Leader Lars differed from earlier virtual politicians by leading a political party and by not pretending to be an objective candidate.<ref>Template:Cite news</ref> This chatbot engaged in critical discussions on politics with users from around the world.<ref>Template:Cite web</ref>
In India, the Maharashtra state government has launched a chatbot for its Aaple Sarkar platform,<ref>Template:Cite web</ref> which provides conversational access to information on the public services it manages.<ref>Template:Cite news</ref><ref>Template:Cite web</ref>
Toys
Chatbots have also been incorporated into devices not primarily meant for computing, such as toys.<ref name="virtualagentchat">Template:Cite web</ref>
Hello Barbie is an Internet-connected version of the doll that uses a chatbot provided by the company ToyTalk,<ref>Template:Cite web</ref> which previously used the chatbot for a range of smartphone-based characters for children.<ref>Template:Triangulation</ref> These characters' behaviors are constrained by a set of rules that in effect emulate a particular character and produce a storyline.<ref>Template:Cite web</ref>
The My Friend Cayla doll was marketed as a line of Template:Convert dolls that use speech recognition technology in conjunction with an Android or iOS mobile app to recognize the child's speech and hold a conversation. Like the Hello Barbie doll, it attracted controversy due to vulnerabilities in the doll's Bluetooth stack and its use of data collected from the child's speech.
IBM's Watson computer has been used as the basis for chatbot-based educational toys for companies such as CogniToys,<ref name="virtualagentchat" /> intended to interact with children for educational purposes.<ref>Template:Cite web</ref>
Malicious use
Malicious chatbots are frequently used to fill chat rooms with spam and advertisements by mimicking human behavior and conversations or to entice people into revealing personal information, such as bank account numbers. They were commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.<ref>Template:Cite web Psychologist Robert Epstein reports how he was initially fooled by a chatterbot posing as an attractive girl in a personal ad he answered on a dating website. In the ad, the girl portrayed herself as being in Southern California and then soon revealed, in poor English, that she was actually in Russia. He became suspicious after a couple of months of email exchanges, sent her an email test of gibberish, and she still replied in general terms. The dating website is not named.</ref>
Tay, an AI chatbot designed to learn from previous interactions, caused major controversy after being targeted by internet trolls on Twitter. Soon after its launch, the bot was exploited, and with its "repeat after me" capability, it started releasing racist, sexist, and controversial responses to Twitter users.<ref>Template:Cite journal</ref> This suggests that although the bot learned effectively from experience, adequate protection was not put in place to prevent misuse.<ref>Template:Cite book</ref>
If a text-sending algorithm can pass itself off as a human instead of a chatbot, its messages would be more credible. Therefore, human-seeming chatbots with well-crafted online identities could begin scattering fake news that seems plausible, for instance making false claims during an election. With enough chatbots, it might even be possible to achieve artificial social proof.<ref>Template:Cite web</ref><ref>Template:Cite web</ref>
Data security
Data security is one of the major concerns of chatbot technologies. Security threats and system vulnerabilities are weaknesses that are often exploited by malicious users. Storage of user data and past communications, which is highly valuable for the training and development of chatbots, can also give rise to security threats.<ref name="Hasal-2021">Template:Cite journal</ref> Chatbots operating on third-party networks may be subject to various security issues if the owners of the third-party applications have policies regarding user data that differ from those of the chatbot.<ref name="Hasal-2021" /> Security threats can be reduced or prevented by incorporating protective mechanisms: user authentication, end-to-end encryption of chats, and self-destructing messages are some effective measures against potential security threats.<ref name="Hasal-2021" />
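As one simplified illustration of the last point, self-destructing messages can be implemented by storing each message with a timestamp and discarding anything older than a retention window; the retention period and storage scheme below are hypothetical, not a production design.

<syntaxhighlight lang="python">
import time

RETENTION_SECONDS = 60 * 60  # hypothetical policy: delete chat messages after one hour

messages = []  # each entry: (timestamp, text)

def store(text: str) -> None:
    messages.append((time.time(), text))

def prune_expired() -> None:
    """Drop messages older than the retention window ("self-destructing" messages)."""
    cutoff = time.time() - RETENTION_SECONDS
    messages[:] = [(ts, txt) for ts, txt in messages if ts >= cutoff]

store("My account number is ...")
prune_expired()  # removes the message once the retention window has passed
</syntaxhighlight>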
Mental health
Chatbots are an emerging technology in the field of mental health. Their use may encourage people to seek advice on mental health matters as a means of avoiding the stigmatization that can come from sharing such matters with other people.<ref name="Chin-2023">Template:Cite journal</ref> This is because chatbots can give a sense of privacy and anonymity when sharing sensitive information, as well as providing a space free of judgment.<ref name="Chin-2023" /> For example, one study found that, with social media and AI chatbots both being possible outlets for expressing mental health concerns online, users were more willing to share their darker and more depressive emotions with the chatbot.<ref name="Chin-2023" /> Users may also turn to chatbots because their replies can be perceived as empathic and emotionally supportive.<ref>Template:Cite journal</ref>
Findings suggest that chatbots have great potential in scenarios in which it is difficult for users to reach out to family or friends for support.<ref name="Chin-2023" /> It has been noted that they can give young people "various types of social support such as appraisal, informational, emotional, and instrumental support".<ref name="Chin-2023" /> Studies have found that chatbots can assist users in managing conditions such as depression and anxiety.<ref name="Chin-2023" /> Examples of chatbots that serve this function include "Woebot, Wysa, Vivibot, and Tess".<ref name="Chin-2023" />
Evidence indicates that when mental health chatbots interact with users, they tend to follow certain conversation flows:<ref name="Haque-2023">Template:Cite journal</ref> guided, semi-guided, and open-ended conversation.<ref name="Haque-2023" /> The most popular, guided conversation, "only allows the users to communicate with the chatbot with predefined responses from the chatbot. It does not allow any form of open input from the users".<ref name="Haque-2023" /> A study examining the methods employed by various mental health chatbots also noted that most of them used a form of cognitive behavioral therapy with the user.<ref name="Haque-2023" />
Adverse effects
Research has identified potential barriers to the adoption of chatbots for mental health.<ref name="Coghlan-2023">Template:Cite journal</ref> There are ongoing privacy concerns about sharing users' personal data in chat logs with chatbots.<ref name="Coghlan-2023" /> People of lower socioeconomic status are less willing to adopt chatbot interactions as a meaningful way to improve their mental health.<ref name="Coghlan-2023" /> And though chatbots may be capable of detecting simple human emotions in interactions with users, they are incapable of replicating the level of empathy that human therapists provide.<ref name="Coghlan-2023" />
Because chatbots are language models trained on numerous datasets, the issue of algorithmic bias exists.<ref name="Coghlan-2023" /> Biases acquired during training can surface against individuals of certain backgrounds and may result in incorrect information being conveyed.<ref name="Coghlan-2023" />
There is a lack of research about how exactly these interactions help users in their real lives.<ref name="Haque-2023" /> There are also concerns about the safety of users when interacting with such chatbots.<ref name="Haque-2023" /> When improvements and advancements are made to these technologies, their effects on people are often not a priority.<ref name="Haque-2023" /> This can lead to "unintended negative consequences, such as biases, inadequate and failed responses, and privacy issues".<ref name="Haque-2023" />
Risks in the use of chatbots for mental health include increased isolation and a lack of support in times of crisis.<ref name="Haque-2023" /> A 2025 study by Sentio University evaluated how six major chatbots responded to disclosures of suicide risk and other acute mental health crises, finding that none consistently met clinician-determined safety standards.<ref>Template:Citation</ref> Another notable risk is a general lack of a strong understanding of mental health.<ref name="Haque-2023" /> Studies have indicated that mental-health-oriented chatbots have been prone to recommending medical solutions to users and to encouraging heavy reliance on themselves.<ref name="Haque-2023" />
Obsessive use of chatbots has been linked to chatbot psychosis<ref>Template:Cite web</ref> in people already prone to delusional and conspiratorial thinking. This is caused in part by chatbots "hallucinating" information,<ref name="RollingStone">Template:Cite magazine</ref> and by their being designed for engagement and for keeping people talking.<ref>Template:Cite web</ref>
Limitations
Traditional chatbots often lacked understanding of user requests, leading to clunky, repetitive conversations. Their pre-programmed responses frequently failed to satisfy unexpected user queries, causing frustration. These chatbots were particularly unhelpful for users who lacked a clear understanding of their problem or of the service they needed.<ref>Template:Cite news</ref>
Chatbots based on large language models are much more versatile, but require a large amount of conversational data to train. These models generate new responses word by word based on user input, and are usually trained on a large dataset of natural-language phrases.<ref name="Caldarini-20223" /> They sometimes provide plausible-sounding but incorrect or nonsensical answers, referred to as "hallucinations". They can, for example, make up names, dates, or historical events.<ref>Template:Cite journal</ref> When humans use and apply chatbot content contaminated with hallucinations, the result is "botshit".<ref>Template:Cite journal</ref> Given the increasing adoption of chatbots for generating content, there are concerns that this technology will significantly reduce the cost of generating misinformation.<ref>Template:Cite news</ref>
Impact on jobs
Chatbots, and technology in general, have historically been used to automate repetitive tasks. But advanced chatbots like ChatGPT also target high-paying, creative, and knowledge-based jobs, raising concerns about workforce disruption and quality trade-offs in favor of cost-cutting.<ref>Template:Cite news</ref>
Chatbots are increasingly used by small and medium-sized enterprises to handle customer interactions efficiently, reducing reliance on large call centers and lowering operational costs.<ref>Template:Cite journal</ref>
Prompt engineering, the task of designing and refining the prompts (inputs) that lead to desired AI-generated responses, has quickly gained significant demand with the advent of large language models,<ref>Template:Cite news</ref> although the viability of this job is questioned due to new techniques for automating prompt engineering.<ref>Template:Cite web</ref>
Impact on the environment
Generative AI consumes large amounts of electric power. Because much of this electricity is generated from fossil fuels, this increases air pollution, water pollution, and greenhouse gas emissions. In 2023, a question to ChatGPT consumed on average ten times as much energy as a Google search.<ref>Template:Cite web</ref> Data centres in general, and those used for AI tasks specifically, use significant amounts of water for cooling.<ref>Template:Cite web</ref><ref>Template:Cite web</ref>
See also
Template:Portal Template:Div col
- Applications of artificial intelligence
- Artificial human companion
- Artificial intelligence and elections
- Autonomous agent
- Conversational user interface
- Deadbot
- Dead Internet theory
- Deaths linked to chatbots
- Friendly artificial intelligence
- Hybrid intelligent system
- Intelligent agent
- Internet bot
- List of chatbots
- Multi-agent system
- Social bot
- Software agent
- Software bot
- Stochastic parrot
- Technological unemployment
- Twitterbot
References
Further reading
- Gertner, Jon. (2023) "Wikipedia's Moment of Truth: Can the online encyclopedia help teach A.I. chatbots to get their facts right — without destroying itself in the process?" New York Times Magazine (18 July 2023) online
- Template:Citation
- Vincent, James, "Horny Robot Baby Voice: James Vincent on AI chatbots", London Review of Books, vol. 46, no. 19 (10 October 2024), pp. 29–32. "[AI chatbot] programs are made possible by new technologies but rely on the timeless human tendency to anthropomorphise." (p. 29.)
- Template:Cite journal
- Template:Cite book
External links
Template:Natural Language Processing Template:Authority control