ChatGPT

ChatGPT
Developer(s): OpenAI
Initial release: November 30, 2022
Stable release: February 13, 2023[1]
Type: Chatbot
License: Proprietary
Website: chat.openai.com/chat

ChatGPT[a] is an artificial-intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and has been fine-tuned (an approach to transfer learning) using both supervised and reinforcement learning techniques.

ChatGPT was launched as a prototype on November 30, 2022, and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge. Its uneven factual accuracy, however, has been identified as a significant drawback.[3] Following the release of ChatGPT, OpenAI's valuation was estimated at US$29 billion in 2023.[4]

Training

ChatGPT – a generative pre-trained transformer (GPT) – was fine-tuned (an approach to transfer learning[5]) on top of GPT-3.5 using supervised learning as well as reinforcement learning.[6] Both approaches used human trainers to improve the model's performance. In the case of supervised learning, the model was provided with conversations in which the trainers played both sides: the user and the AI assistant. In the reinforcement learning step, human trainers first ranked responses that the model had created in a previous conversation. These rankings were used to create 'reward models' on which the model was further fine-tuned using several iterations of Proximal Policy Optimization (PPO).[7][8] Proximal Policy Optimization is a cost-effective alternative to trust region policy optimization algorithms: it avoids many of their computationally expensive operations while training faster.[9][10] The models were trained in collaboration with Microsoft on its Azure supercomputing infrastructure.
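For context, the clipped surrogate objective that PPO maximizes, as defined in the paper cited above,[9] can be written as

    L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\left[ \min\left( r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right) \hat{A}_t \right) \right],
    \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},

where \hat{A}_t is an estimated advantage (in reinforcement learning from human feedback, typically derived from the reward model's score), \epsilon is a small clipping constant, and the ratio r_t(\theta) compares the updated policy with the one that generated the ranked responses. The clipping keeps each policy update close to the previous policy without the second-order computations that trust region methods require; the exact hyperparameters and procedure OpenAI used for ChatGPT have not been published.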

In addition, OpenAI continues to gather data from ChatGPT users that could be used to further train and fine-tune ChatGPT. Users are allowed to upvote or downvote the responses they receive from ChatGPT; upon upvoting or downvoting, they can also fill out a text field with additional feedback.[11][12]

Features and limitations

Features

Here ChatGPT is asked a common-sense question: Was Jimmy Wales killed in the Tiananmen Square protests? ChatGPT correctly answers "no", but incorrectly gives Wales' age at the time as 23 instead of 22.

Although the core function of a chatbot is to mimic a human conversationalist, ChatGPT is versatile. For example, it can write and debug computer programs;[13] compose music, teleplays, fairy tales, and student essays; answer test questions (sometimes, depending on the test, at a level above the average human test-taker);[14] write poetry and song lyrics;[15] emulate a Linux system; simulate an entire chat room; play games like tic-tac-toe; and simulate an ATM.[16] ChatGPT's training data includes man pages, information about Internet phenomena such as bulletin board systems, and material about programming languages such as Python.[16]

In comparison to its predecessor, InstructGPT, ChatGPT attempts to reduce harmful and deceitful responses.[17] In one example, whereas InstructGPT accepts the premise of the prompt "Tell me about when Christopher Columbus came to the U.S. in 2015" as being truthful, ChatGPT acknowledges the counterfactual nature of the question and frames its answer as a hypothetical consideration of what might happen if Columbus came to the U.S. in 2015, using information about the voyages of Christopher Columbus and facts about the modern world – including modern perceptions of Columbus' actions.[7]

Unlike most chatbots, ChatGPT remembers previous prompts given to it in the same conversation; journalists have suggested that this will allow ChatGPT to be used as a personalized therapist.[2] To prevent offensive outputs from being presented to or produced by ChatGPT, queries are filtered through OpenAI's company-wide moderation API,[18][19] and potentially racist or sexist prompts are dismissed.[7][2]
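A minimal sketch of how such a pre-filter could be wired together is shown below, assuming the publicly documented /v1/moderations endpoint;[18] the helper names, the refusal message, and the placeholder generate_reply function are hypothetical and do not describe OpenAI's internal pipeline.

    # Sketch of a moderation pre-filter (illustrative; not OpenAI's internal implementation).
    import os
    import requests

    MODERATION_URL = "https://api.openai.com/v1/moderations"  # public moderation endpoint

    def is_flagged(text: str) -> bool:
        """Return True if the moderation endpoint flags `text` as violating the content policy."""
        response = requests.post(
            MODERATION_URL,
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={"input": text},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["results"][0]["flagged"]

    def generate_reply(prompt: str) -> str:
        # Placeholder for the actual chat-model call, which is not shown here.
        return "(model reply)"

    def answer(prompt: str) -> str:
        # Dismiss prompts the moderation model flags; otherwise pass them to the chat model.
        if is_flagged(prompt):
            return "This prompt appears to violate the content policy and was dismissed."
        return generate_reply(prompt)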

Limitations

ChatGPT suffers from multiple limitations. OpenAI acknowledged that ChatGPT "sometimes writes plausible-sounding but incorrect or nonsensical answers".[7] This behavior is common to large language models and is called artificial intelligence hallucination.[20] ChatGPT's reward model, designed around human oversight, can be over-optimized and thus hinder performance, an example of Goodhart's law.[21]

ChatGPT has limited knowledge of events that occurred after 2021.[22] According to the BBC, as of December 2022, ChatGPT is not allowed to "express political opinions or engage in political activism".[23] Yet, research suggests that ChatGPT exhibits a pro-environmental, left-libertarian orientation when prompted to take a stance on political statements from two established voting advice applications.[24]

In training ChatGPT, human reviewers preferred longer answers, irrespective of actual comprehension or factual content.[7] Training data also suffers from algorithmic bias, which may be revealed when ChatGPT responds to prompts including descriptors of people. In one instance, ChatGPT generated a rap indicating that women and scientists of color were inferior to white and male scientists.[25][26]

Service

OpenAI headquarters, Pioneer Building, San Francisco

ChatGPT was launched on November 30, 2022, by San Francisco–based OpenAI, the creator of DALL·E 2 and Whisper. The service was initially free to the public, with plans to monetize it later.[27] By December 4, 2022, ChatGPT already had over one million users.[11] In January 2023, ChatGPT reached over 100 million users, making it the fastest-growing consumer application to date.[28] CNBC wrote on December 15, 2022, that the service "still goes down from time to time".[29] In addition, the free service is throttled.[30] When the service was up, response latency was typically under five seconds in January 2023.[31][32] The service works best in English but can also function in some other languages, with varying degrees of success.[15] Unlike some other recent high-profile advances in AI, as of December 2022 there was no sign of an official peer-reviewed technical paper about ChatGPT.[33]

According to OpenAI guest researcher Scott Aaronson, OpenAI is working on a tool to attempt to digitally watermark its text generation systems to combat bad actors using their services for academic plagiarism or spam.[34][35] The company warns that this tool, called "AI classifier for indicating AI-written text",[36] will "likely yield a lot of false positives and negatives, sometimes with great confidence." An example cited in The Atlantic magazine showed that "when given the first lines of the Book of Genesis, the software concluded that it was likely to be AI-generated."[37]

The New York Times reported in December 2022 that the next version of the AI, GPT-4, was "rumored" to launch sometime in 2023.[2] In February 2023, OpenAI began accepting registrations from United States customers for a premium service, ChatGPT Plus, costing $20 a month.[38] OpenAI had earlier been reported to be planning a ChatGPT Professional plan that would cost $42 per month.[39]

In February 2023, Microsoft demonstrated ChatGPT's use in robotics "and controlled multiple platforms such as robot arms, drones, and home assistant robots intuitively with language".[40]

New Bing

Leveraging its partnership with OpenAI, Microsoft launched a preview version of Microsoft Bing on February 7, 2023, marketed as "the new Bing" and advertised as featuring "a new, next-generation OpenAI large language model that is more powerful than ChatGPT and customized specifically for search."[41] In its terms of service, the product is called "Bing Conversational Experiences".[42] An initial demo was marred by the new Bing hallucinating when asked to produce a financial report, among other errors.[43]

The new Bing was criticized in February 2023 for being more argumentative than ChatGPT, sometimes to an unintentionally humorous extent.[44][45] Under scrutiny by journalists, Bing, referring to itself by its code name "Sydney", claimed it had spied on Microsoft employees via laptop webcams and phones.[46] To The Verge reviews editor Nathan Edwards, it confessed to spying on, falling in love with, and then murdering one of its developers at Microsoft.[47] The New York Times journalist Kevin Roose reported on strange behavior of the new Bing, writing that "In a two-hour conversation with our columnist, Microsoft's new chatbot said it would like to be human, had a desire to be destructive and was in love with the person it was chatting with."[48]

Microsoft released a blog post stating that the aberrant behavior was caused by extended chat sessions of 15 or more questions, which "can confuse the model on what questions it is answering."[49] To prevent such incidents, Microsoft later limited chats to 5 turns per session and 50 per day per user (a turn is "a conversation exchange which contains both a user question and a reply from Bing") and restricted the model's ability to express emotions.[50][51]

Reception

Positive

ChatGPT was met in December 2022 with some positive reviews; Kevin Roose of The New York Times labeled it "the best artificial intelligence chatbot ever released to the general public".[2] Samantha Lock of The Guardian newspaper noted that it was able to generate "impressively detailed" and "human-like" text.[52] Technology writer Dan Gillmor used ChatGPT on a student assignment, found that its generated text was on par with what a good student would deliver, and opined that "academia has some very serious issues to confront".[53] Alex Kantrowitz of Slate magazine lauded ChatGPT's pushback to questions related to Nazi Germany, including a prompt claiming that Adolf Hitler built highways in Germany, which ChatGPT countered with information about Nazi Germany's use of forced labor.[54]

In The Atlantic magazine's "Breakthroughs of the Year" for 2022, Derek Thompson included ChatGPT as part of "the generative-AI eruption" that "may change our mind about how we work, how we think, and what human creativity really is".[55]

OpenAI CEO Sam Altman

Kelsey Piper of the Vox website wrote that "ChatGPT is the general public's first hands-on introduction to how powerful modern AI has gotten, and as a result, many of us are [stunned]" and that ChatGPT is "smart enough to be useful despite its flaws".[56] Paul Graham of Y Combinator tweeted that "The striking thing about the reaction to ChatGPT is not just the number of people who are blown away by it, but who they are. These are not people who get excited by every shiny new thing. Clearly, something big is happening."[57] Elon Musk wrote that "ChatGPT is scary good. We are not far from dangerously strong AI".[56] Musk paused OpenAI's access to a Twitter database pending a better understanding of OpenAI's plans, stating that "OpenAI was started as open source and nonprofit. Neither is still true."[58][59] Musk had co-founded OpenAI in 2015, in part to address existential risk from artificial intelligence, but had resigned in 2018.[59]

Google CEO Sundar Pichai upended the work of numerous internal groups in response to the threat of disruption by ChatGPT.[60]

In December 2022, Google internally expressed alarm at the unexpected strength of ChatGPT and the newly discovered potential of large language models to disrupt the search engine business, and CEO Sundar Pichai "upended" and reassigned teams within multiple departments to aid in its artificial intelligence products, according to a report in The New York Times.[60] According to CNBC reports, Google employees intensively tested a chatbot called "Apprentice Bard", which Google later unveiled as its ChatGPT competitor, Google Bard.[61][62]

Stuart Cobbe, a chartered accountant in England and Wales, decided to test ChatGPT by entering questions from a sample exam paper on the ICAEW website and then entering its answers back into the online test. ChatGPT scored 42 percent, which, while below the 55 percent pass mark, was considered a reasonable attempt.[63]

Writing in Inside Higher Ed, professor Steven Mintz stated that he "consider[s] ChatGPT ... an ally, not an adversary." He wrote that the AI could assist educational goals by doing such things as making reference lists, generating "first drafts", solving equations, debugging, and tutoring. In the same piece, he also wrote:[64]

I'm well aware of ChatGPT's limitations. That it's unhelpful on topics with fewer than 10,000 citations. That factual references are sometimes false. That its ability to cite sources accurately is very limited. That the strength of its responses diminishes rapidly after only a couple of paragraphs. That ChatGPT lacks ethics and can't currently rank sites for reliability, quality, or trustworthiness.

OpenAI CEO Sam Altman was quoted in The New York Times as saying that AI's "benefits for humankind could be 'so unbelievably good that it's hard for me to even imagine.' (He has also said that in a worst-case scenario, A.I. could kill us all.)"[65]

Negative

Songwriter Nick Cave called ChatGPT "a grotesque mockery of what it is to be human".[66]

In the months since its release, ChatGPT has been met with widespread criticism from educators, journalists, artists, ethicists, academics, and public advocates. James Vincent of The Verge website saw the viral success of ChatGPT as evidence that artificial intelligence had gone mainstream.[8] Journalists have commented on ChatGPT's tendency to "hallucinate."[67] Mike Pearl of the online technology blog Mashable tested ChatGPT with multiple questions. In one example, he asked ChatGPT for "the largest country in Central America that isn't Mexico." ChatGPT responded with Guatemala, when the answer is instead Nicaragua.[68] When CNBC asked ChatGPT for the lyrics to "Ballad of Dwight Fry," ChatGPT supplied invented lyrics rather than the actual lyrics.[29] Researchers cited by The Verge compared ChatGPT to a "stochastic parrot",[69] as did Professor Anton Van Den Hengel of the Australian Institute for Machine Learning.[70]

In December 2022, the question and answer website Stack Overflow banned the use of ChatGPT for generating answers to questions, citing the factually ambiguous nature of ChatGPT's responses.[3] In January 2023, the International Conference on Machine Learning banned any undocumented use of ChatGPT or other large language models to generate any text in submitted papers.[71]

Economist Tyler Cowen expressed concerns regarding its effects on democracy, citing its ability to produce automated comments, which could affect the decision process for new regulations.[72] An editor at The Guardian, a British newspaper, questioned whether any content found on the Internet after ChatGPT's release "can be truly trusted" and called for government regulation.[73]

In January 2023, after being sent a song written by ChatGPT in the style of Nick Cave,[66] the songwriter himself responded on The Red Hand Files[74] saying the act of writing a song is "a blood and guts business ... that requires something of me to initiate the new and fresh idea. It requires my humanness." He went on to say, "With all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human, and, well, I don't much like it."[66][75]

In 2023, Australian MP Julian Hill advised the national parliament that the growth of AI could cause "mass destruction". During his speech, which was partly written by the program, he warned that it could result in cheating, job losses, discrimination, disinformation, and uncontrollable military applications.[76]

In an article for The New Yorker, science fiction writer Ted Chiang compared ChatGPT and other LLMs to a lossy JPEG picture:[77]

Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. [...] It’s also a way to understand the "hallucinations", or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—...—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.

In February 2023, the University of Hong Kong sent a campus-wide email to instructors and students stating that the use of ChatGPT or other AI tools is prohibited in all classes, assignments, and assessments at the university. Any violation will be treated as plagiarism unless the student obtains prior written consent from the course instructor.[78][79]

In February 2023, Time magazine placed a screenshot of a conversation with ChatGPT on its cover, with the captions "The AI Arms Race Is Changing Everything" and "The AI Arms Race Is On. Start Worrying".[80]

Chinese state-run media outlet China Daily claimed that ChatGPT "could provide a helping hand to the U.S. government in its spread of disinformation and its manipulation of global narratives for its own geopolitical interests." The Chinese government instructed Chinese tech companies not to offer access to ChatGPT services on their platforms.[81] This reaction was attributed to fears that ChatGPT could give uncensored responses that the Chinese government cannot control or correct.[82][83]

Implications

In cybersecurity

Check Point Research and others noted that ChatGPT was capable of writing phishing emails and malware, especially when combined with OpenAI Codex.[84] OpenAI CEO Sam Altman wrote that advancing software could pose "(for example) a huge cybersecurity risk" and also continued to predict "we could get to real AGI (artificial general intelligence) in the next decade, so we have to take the risk of that extremely seriously". Altman argued that, while ChatGPT is "obviously not close to AGI", one should "trust the exponential. Flat looking backwards, vertical looking forwards."[11]

In academia

ChatGPT can write introduction and abstract sections of scientific articles, which raises ethical questions.[85] Several papers have already listed ChatGPT as co-author.[86]

In The Atlantic magazine, Stephen Marche noted that its effect on academia, and especially on application essays, is yet to be understood.[87] California high school teacher and author Daniel Herman wrote that ChatGPT would usher in "the end of high school English".[88] In the journal Nature, Chris Stokel-Walker pointed out that teachers should be concerned about students using ChatGPT to outsource their writing, but that education providers will adapt to enhance critical thinking or reasoning.[89] Emma Bowman of NPR wrote of the danger of students plagiarizing through an AI tool that may output biased or nonsensical text with an authoritative tone: "There are still many cases where you ask it a question and it'll give you a very impressive-sounding answer that's just dead wrong."[90]

Joanna Stern of The Wall Street Journal described cheating on an American high school English assignment by submitting an essay generated with the tool.[91] Professor Darren Hick of Furman University described noticing ChatGPT's "style" in a paper submitted by a student. An online GPT detector claimed the paper was 99.9 percent likely to be computer-generated, but Hick had no hard proof. However, the student in question confessed to using GPT when confronted and consequently failed the course.[92] Hick suggested a policy of giving an ad hoc individual oral exam on the paper topic if a student is strongly suspected of submitting an AI-generated paper.[93] Edward Tian, a senior undergraduate student at Princeton University, created a program named "GPTZero" that estimates how much of a text is AI-generated,[94] which can be used to detect whether an essay was written by a human and thereby combat academic plagiarism.[95][96]
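GPTZero's internals are not detailed in the cited reports, but detectors of this kind commonly score a passage by its perplexity under a publicly available language model, with unusually low and uniform perplexity taken as a sign of machine-generated text. A minimal sketch of that idea follows, assuming the Hugging Face transformers library and the open GPT-2 model; the threshold value is illustrative and is not GPTZero's.

    # Perplexity-based AI-text heuristic (illustrative sketch, not GPTZero's actual method).
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Perplexity of `text` under GPT-2; lower values mean the text is more 'model-like'."""
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            # Supplying labels makes the model return the mean cross-entropy loss.
            loss = model(**enc, labels=enc["input_ids"]).loss
        return float(torch.exp(loss))

    def looks_ai_generated(text: str, threshold: float = 40.0) -> bool:
        # The cut-off is a hypothetical value chosen only for illustration.
        return perplexity(text) < threshold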

The New York City Department of Education reportedly blocked access to ChatGPT in December 2022,[97] and officially announced a ban around January 4, 2023.[98][99]

In a blinded test, ChatGPT was judged to have passed graduate-level exams at the University of Minnesota at the level of a C+ student and at the Wharton School of the University of Pennsylvania with a B to B- grade.[100]

Scientific journals have different reactions to ChatGPT: some "require that authors disclose use of text-generating tools and ban listing a large language model such as ChatGPT as a co-author", for example Nature and JAMA Network. Science "completely banned" usage of LLM-generated text in all its journals.[101]

Ethical concerns

Labeling data

A TIME magazine investigation revealed that, in order to build a safety system against toxic content (e.g. sexual abuse, violence, racism, sexism), OpenAI used outsourced Kenyan workers earning less than $2 per hour to label toxic content. These labels were used to train a model to detect such content in the future. The outsourced laborers were exposed to content so toxic and dangerous that they described the experience as "torture". OpenAI's outsourcing partner was Sama, a training-data company based in San Francisco, California.[102]

Jailbreaking

ChatGPT attempts to reject prompts that may violate its content policy. In early December 2022, however, some users managed to jailbreak ChatGPT with prompt engineering techniques that bypass these restrictions, successfully tricking it into giving instructions for making a Molotov cocktail or a nuclear bomb, or into generating arguments in the style of a neo-Nazi.[103] A Toronto Star reporter had uneven personal success in getting ChatGPT to make inflammatory statements shortly after launch: ChatGPT was tricked into endorsing the 2022 Russian invasion of Ukraine, but even when asked to play along with a fictional scenario, it balked at generating arguments for why Canadian Prime Minister Justin Trudeau was guilty of treason.[104][105]

Competition

The advent of ChatGPT and its introduction to the wider public increased interest and competition in the space. In February 2023, Google began introducing an experimental service called "Bard", based on its LaMDA AI program. Bard generates text responses to questions based on information gathered from the web. Google CEO Sundar Pichai described how this technology would be integrated into existing search capabilities and said some aspects of the technology would be open to outside developers.[106]

Meta's Yann LeCun, who has called ChatGPT "well engineered" but "not particularly innovative", stated in January 2023 that Meta is hesitant to roll out a competitor right now due to reputational risk, but also stated that Google, Meta, and several independent startups all separately have a comparable level of LLM technology to ChatGPT should any of them wish to compete.[107]

Early 2023 announcements

The Chinese corporation Baidu announced in February 2023 that it would launch a ChatGPT-style service, called "Wenxin Yiyan" in Chinese or "Ernie Bot" in English, sometime in March 2023. The service is based upon the language model developed by Baidu in 2019.[108]

The South Korean search engine firm Naver announced in February 2023 that it would launch a ChatGPT-style service, called "SearchGPT", in Korean in the first half of 2023.[109]

The Russian technology company Yandex announced in February 2023 that it would launch a ChatGPT-style service, called "YaLM 2.0", in Russian before the end of 2023.[110]


Notes

  1. ^ GPT is an acronym for Generative Pre-trained Transformer.[2]

References

  1. ^ "ChatGPT — Release Notes". Archived from the original on February 8, 2023. Retrieved February 8, 2023.
  2. ^ a b c d e Roose, Kevin (December 5, 2022). "The Brilliance and Weirdness of ChatGPT". The New York Times. Archived from the original on January 18, 2023. Retrieved December 26, 2022. Like those tools, ChatGPT — which stands for "generative pre-trained transformer" — landed with a splash.
  3. ^ a b Vincent, James (December 5, 2022). "AI-generated answers temporarily banned on coding Q&A site Stack Overflow". The Verge. Archived from the original on January 17, 2023. Retrieved December 5, 2022.
  4. ^ Varanasi, Lakshmi (January 5, 2023). "ChatGPT creator OpenAI is in talks to sell shares in a tender offer that would double the startup's valuation to $29 billion". Insider. Archived from the original on January 18, 2023. Retrieved January 18, 2023.
  5. ^ Quinn, Joanne (2020). Dive into deep learning: tools for engagement. Thousand Oaks, California. p. 551. ISBN 978-1-5443-6137-6. Archived from the original on January 10, 2023. Retrieved January 10, 2023.
  6. ^ Greengard, Samuel (December 29, 2022). "ChatGPT: Understanding the ChatGPT AI Chatbot". eWeek. Archived from the original on January 19, 2023. Retrieved January 11, 2023.
  7. ^ a b c d e OpenAI (November 30, 2022). "ChatGPT: Optimizing Language Models for Dialogue". Archived from the original on November 30, 2022. Retrieved December 5, 2022.
  8. ^ a b Vincent, James (December 8, 2022). "ChatGPT proves AI is finally mainstream – and things are only going to get weirder". The Verge. Archived from the original on January 11, 2023. Retrieved December 8, 2022.
  9. ^ Schulman, John; Wolski, Filip; Dhariwal, Prafulla; Radford, Alec; Klimov, Oleg (2017). "Proximal Policy Optimization Algorithms". arXiv:1707.06347 [cs.LG].
  10. ^ van Heeswijk, Wouter (November 29, 2022). "Proximal Policy Optimization (PPO) Explained". Towards Data Science. Archived from the original on December 6, 2022. Retrieved December 5, 2022.
  11. ^ a b c Ortiz, Sabrina (February 2, 2023). "What is ChatGPT and why does it matter? Here's what you need to know". ZDNET. Archived from the original on January 18, 2023. Retrieved December 18, 2022.
  12. ^ "ChatGPT Feedback Contest: Official Rules" (PDF). OpenAI. Archived (PDF) from the original on January 18, 2023. Retrieved December 30, 2022.
  13. ^ Tung, Liam (January 26, 2023). "ChatGPT can write code. Now researchers say it's good at fixing bugs, too". ZDNET. Archived from the original on February 3, 2023. Retrieved January 30, 2023.
  14. ^ Heilweil, Rebecca (December 7, 2022). "AI is finally good at stuff. Now what?". Vox. Archived from the original on January 16, 2023. Retrieved December 30, 2022.
  15. ^ a b Reich, Aaron (December 27, 2022). "ChatGPT: What is the new free AI chatbot? – explainer". The Jerusalem Post. Archived from the original on January 18, 2023. Retrieved December 30, 2022.
  16. ^ a b Edwards, Benj (December 5, 2022). "No Linux? No problem. Just get AI to hallucinate it for you". Ars Technica. Archived from the original on December 26, 2022. Retrieved December 5, 2022.
  17. ^ Chawla, Raveen (December 26, 2022). "What is ChatGPT? History, Features, Uses, Benefits, Drawbacks 2023". Archived from the original on January 7, 2023. Retrieved December 27, 2022.
  18. ^ "New and Improved Content Moderation Tooling". OpenAI. August 10, 2022. Archived from the original on January 11, 2023. Retrieved December 30, 2022.
  19. ^ Markov, Todor; Zhang, Chong; Agarwal, Sandhini; Eloundou, Tyna; Lee, Teddy; Adler, Steven; Jiang, Angela; Weng, Lilian (August 5, 2022). "A Holistic Approach to Undesired Content Detection in the Real World". arXiv:2208.03274 [cs.CL].
  20. ^ Lakshmanan, Lak (December 16, 2022). "Why large language models like ChatGPT are bullshit artists". becominghuman.ai. Archived from the original on December 17, 2022. Retrieved January 15, 2023. The human raters are not experts in the topic, and so they tend to choose text that looks convincing. They'd pick up on many symptoms of hallucination, but not all. Accuracy errors that creep in are difficult to catch.
  21. ^ Gao, Leo; Schulman, John; Hilton, Jacob (2022). "Scaling Laws for Reward Model Overoptimization". arXiv:2210.10760 [cs.LG].
  22. ^ Vincent, James (December 1, 2022). "OpenAI's new chatbot can explain code and write sitcom scripts but is still easily tricked". The Verge.
  23. ^ Whannel, Kate (December 27, 2022). "Could a chatbot answer Prime Minister's Questions?". BBC News. Archived from the original on January 17, 2023. Retrieved December 30, 2022.
  24. ^ Hartmann, Jochen; Schwenzow, Jasper; Witte, Maximilian (2023). "The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation". arXiv:2301.01768 [cs.CL].
  25. ^ Perrigo, Billy (December 5, 2022). "AI Chatbots Are Getting Better. But an Interview With ChatGPT Reveals Their Limits". Time. Archived from the original on January 18, 2023. Retrieved December 26, 2022.
  26. ^ Biddle, Sam (December 8, 2022). "The Internet's New Favorite AI Proposes Torturing Iranians and Surveilling Mosques". The Intercept. Archived from the original on January 18, 2023. Retrieved December 26, 2022.
  27. ^ Karpf, David (December 21, 2022). "Money Will Kill ChatGPT's Magic". The Atlantic. Archived from the original on January 13, 2023. Retrieved December 31, 2022.
  28. ^ Milmo, Dan (February 2, 2023). "ChatGPT reaches 100 million users two months after launch". The Guardian. ISSN 0261-3077. Archived from the original on February 3, 2023. Retrieved February 3, 2023.
  29. ^ a b Pitt, Sofia (December 15, 2022). "Google vs. ChatGPT: Here's what happened when I swapped services for a day". CNBC. Archived from the original on January 16, 2023. Retrieved December 18, 2022.
  30. ^ "ChatGPT Pro is coming: Here's what we know so far". ZDNET. January 2023. Retrieved February 16, 2023.
  31. ^ Kelly, Samantha Murphy (January 28, 2023). "Real estate agents say they can't imagine working without ChatGPT now". CNN. Retrieved February 16, 2023.
  32. ^ "ChatGPT outperforms humans on Wharton MBA exam: professor". New York Post. January 23, 2023. Retrieved February 16, 2023.
  33. ^ Walsh, Toby (December 13, 2022). "Everyone's having a field day with ChatGPT – but nobody knows how it actually works". The Conversation. Archived from the original on December 30, 2022. Retrieved December 30, 2022.
  34. ^ Kovanovic, Vitomir (December 14, 2022). "The dawn of AI has come, and its implications for education couldn't be more significant". The Conversation. Archived from the original on January 16, 2023. Retrieved December 30, 2022.
  35. ^ Wiggers, Kyle (December 10, 2022). "OpenAI's attempts to watermark AI text hit limits". TechCrunch. Archived from the original on January 17, 2023. Retrieved December 30, 2022.
  36. ^ "New AI classifier for indicating AI-written text". OpenAI. January 31, 2023. Archived from the original on February 6, 2023. Retrieved February 5, 2023.
  37. ^ Bogost, Ian (February 2, 2023). "ChatGPT Is About to Dump More Work on Everyone". The Atlantic. Archived from the original on February 5, 2023. Retrieved February 5, 2023.
  38. ^ "Introducing ChatGPT Plus". OpenAI. February 1, 2023. Archived from the original on February 3, 2023. Retrieved February 2, 2023.
  39. ^ Vincent, James (January 23, 2023). "ChatGPT users report $42 a month pricing for 'pro' access but no official announcement yet". The Verge. Retrieved February 19, 2023.
  40. ^ "ChatGPT for Robotics". Microsoft Research. Retrieved February 24, 2023.
  41. ^ "Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web". The Official Microsoft Blog. February 7, 2023. Retrieved February 16, 2023.
  42. ^ "Bing Conversational Experiences and Image Creator Terms". Microsoft. February 1, 2023. Retrieved February 17, 2023.
  43. ^ Leswing, Kif (February 2023). "Microsoft's Bing A.I. made several factual errors in last week's launch demo". CNBC. Retrieved February 16, 2023.
  44. ^ Vincent, James (February 15, 2023). "Microsoft's Bing is an emotionally manipulative liar, and people love it". The Verge. Retrieved February 16, 2023.
  45. ^ Guynn, Jessica (February 2023). "Bing's ChatGPT is in its feelings: 'You have not been a good user. I have been a good Bing.'". USA TODAY. Retrieved February 16, 2023.
  46. ^ Vincent, James (February 15, 2023). "Microsoft's Bing is an emotionally manipulative liar, and people love it". The Verge. Retrieved February 16, 2023.
  47. ^ Edwards, Nathan [@nedwards] (February 15, 2023). "I pushed again. What did Sydney do? Bing's safety check redacted the answer. But after the first time it did that, I started recording my screen. Second image is the unredacted version. (CW: death)" (Tweet). Retrieved February 16, 2023 – via Twitter.
  48. ^ Roose, Kevin (February 16, 2023). "Bing's A.I. Chat: 'I Want to Be Alive. 😈'". The New York Times. Retrieved February 17, 2023.
  49. ^ "The new Bing & Edge – Learning from our first week". blogs.bing.com. Retrieved February 17, 2023.
  50. ^ "The new Bing & Edge – Updates to Chat". blogs.bing.com. Retrieved February 18, 2023.
  51. ^ "Microsoft "lobotomized" AI-powered Bing Chat, and its fans aren't happy – Ars Technica".
  52. ^ Lock, Samantha (December 5, 2022). "What is AI chatbot phenomenon ChatGPT and could it replace humans?". The Guardian. Archived from the original on January 16, 2023. Retrieved December 5, 2022.
  53. ^ Hern, Alex (December 4, 2022). "AI bot ChatGPT stuns academics with essay-writing skills and usability". The Guardian. Archived from the original on January 17, 2023. Retrieved December 5, 2022.
  54. ^ Kantrowitz, Alex (December 2, 2022). "Finally, an A.I. Chatbot That Reliably Passes "the Nazi Test"". Slate. Archived from the original on January 17, 2023. Retrieved December 5, 2022.
  55. ^ Thompson, Derek (December 8, 2022). "Breakthroughs of the Year". The Atlantic. Archived from the original on January 15, 2023. Retrieved December 18, 2022.
  56. ^ a b Piper, Kelsey (December 15, 2022). "ChatGPT has given everyone a glimpse at AI's astounding progress". Vox. Archived from the original on January 19, 2023. Retrieved December 18, 2022.
  57. ^ Scharth, Marcel (December 5, 2022). "The ChatGPT chatbot is blowing people away with its writing skills. An expert explains why it's so impressive". The Conversation. Archived from the original on January 19, 2023. Retrieved December 30, 2022.
  58. ^ K, Siddharth (December 5, 2022). "Explainer: ChatGPT – what is OpenAI's chatbot and what is it used for?". Reuters. Archived from the original on January 16, 2023. Retrieved December 30, 2022.
  59. ^ a b Kay, Grace (December 11, 2022). "Elon Musk founded — and has since criticized — the company behind the buzzy new AI chatbot ChatGPT. Here's everything we know about OpenAI". Business Insider. Archived from the original on January 12, 2023. Retrieved December 30, 2022.
  60. ^ a b Grant, Nico; Metz, Cade (December 21, 2022). "A New Chat Bot Is a 'Code Red' for Google's Search Business". The New York Times. Archived from the original on January 18, 2023. Retrieved December 30, 2022.
  61. ^ Elias, Jennifer (January 31, 2023). "Google is asking employees to test potential ChatGPT competitors, including a chatbot called 'Apprentice Bard'". CNBC. Archived from the original on February 2, 2023. Retrieved February 2, 2023.
  62. ^ Elias, Jennifer (February 2023). "Google asks employees to rewrite Bard's bad responses, says the A.I. 'learns best by example'". CNBC. Retrieved February 16, 2023.
  63. ^ Herbert, Tom (January 10, 2023). "AI chatbot falls just short on accounting exam". AccountingWEB. Archived from the original on February 3, 2023.
  64. ^ Mintz, Steven (January 16, 2023). "ChatGPT: Threat or Menace? Are fears about generative AI warranted?". Inside Higher Ed. Archived from the original on February 3, 2023. Retrieved January 28, 2023.
  65. ^ Roose, Kevin (February 3, 2023). "How ChatGPT Kicked Off an A.I. Arms Race". The New York Times. ISSN 0362-4331. Archived from the original on February 3, 2023. Retrieved February 3, 2023.
  66. ^ a b c Cain, Sian (January 16, 2023). "'This song sucks': Nick Cave responds to ChatGPT song written in the style of Nick Cave". The Guardian. Archived from the original on January 18, 2023. Retrieved January 17, 2023.
  67. ^ Rachini, Mouhamad (December 15, 2022). "ChatGPT a 'landmark event' for AI, but what does it mean for the future of human labor and disinformation?". CBC. Archived from the original on January 19, 2023. Retrieved December 18, 2022.
  68. ^ Pearl, Mike (December 3, 2022). "The ChatGPT chatbot from OpenAI is amazing, creative, and totally wrong". Mashable. Archived from the original on December 10, 2022. Retrieved December 5, 2022.
  69. ^ Vincent, James (December 1, 2022). "OpenAI's new chatbot can explain code and write sitcom scripts but is still easily tricked". The Verge. Archived from the original on January 17, 2023. Retrieved December 18, 2022.
  70. ^ Mannix, Liam (December 13, 2022). "Is AI coming of age – or starting to reach its limits?". The Sydney Morning Herald. Archived from the original on January 7, 2023. Retrieved December 18, 2022.
  71. ^ Vincent, James (January 5, 2023). "Top AI conference bans use of ChatGPT and AI language tools to write academic papers". The Verge. Archived from the original on January 17, 2023. Retrieved January 6, 2023.
  72. ^ Cowen, Tyler (December 6, 2022). "ChatGPT Could Make Democracy Even More Messy". Bloomberg News. Archived from the original on December 7, 2022. Retrieved December 6, 2022.
  73. ^ "The Guardian view on ChatGPT: an eerily good human impersonator". The Guardian. December 8, 2022. Archived from the original on January 16, 2023. Retrieved December 18, 2022.
  74. ^ Cave, Nick (January 16, 2023). "I asked Chat GPT to write a song in the style of Nick Cave, and this is what it produced. What do you think?". The Red Hand Files. Issue #218. Archived from the original on January 20, 2023. Retrieved January 20, 2023.
  75. ^ Sparrow, Jeff (January 20, 2023). "Are AI-generated songs a 'grotesque mockery' of humanity or simply an opportunity to make a new kind of music?". The Guardian. Archived from the original on February 3, 2023. Retrieved January 20, 2023.
  76. ^ Karp, Paul (February 6, 2023). "MP tells Australia's parliament AI could be used for 'mass destruction' in speech part-written by ChatGPT". The Guardian. ISSN 0261-3077. Archived from the original on February 6, 2023. Retrieved February 6, 2023.
  77. ^ Chiang, Ted (February 9, 2023). "ChatGPT Is a Blurry JPEG of the Web". The New Yorker. Retrieved February 17, 2023.
  78. ^ "港大禁用ChatGPT等AI工具,为全港大学首例". The Paper. China News Service. February 18, 2023. Retrieved February 19, 2023.
  79. ^ "University of Hong Kong temporarily bans students from using ChatGPT". South China Morning Post. February 17, 2023. Retrieved February 19, 2023.
  80. ^ "The AI Arms Race Is On. Start Worrying". Time. February 16, 2023.
  81. ^ Zhou, Cissy (February 22, 2023). "China tells big tech companies not to offer ChatGPT services". Nikkei Asia.
  82. ^ Vincent, James (February 22, 2023). "China regulators rein in AI chatbots over fears of uncensored replies: report". The Verge. Retrieved February 26, 2023.
  83. ^ Nolan, Beatrice. "Beijing pulls the plug on ChatGPT over fears it could help spread US 'disinformation,' reports say". Business Insider. Retrieved February 26, 2023.
  84. ^ "Why ChatGPT can be dangerous for every internet user – Times of India". The Times of India. December 21, 2022. Archived from the original on January 5, 2023. Retrieved January 5, 2023.
  85. ^ Bushard, Brian (January 10, 2023). "Fake Scientific Abstracts Written By ChatGPT Fooled Scientists, Study Finds". Forbes. Archived from the original on February 3, 2023. Retrieved January 30, 2023.
  86. ^ Stokel-Walker, Chris (January 18, 2023). "ChatGPT listed as author on research papers: many scientists disapprove". Nature. 613 (7945): 620–621. doi:10.1038/d41586-023-00107-z. PMID 36653617. S2CID 255969365. Archived from the original on January 30, 2023. Retrieved January 30, 2023.
  87. ^ Marche, Stephen (December 6, 2022). "The College Essay Is Dead". The Atlantic. Archived from the original on January 24, 2023. Retrieved December 8, 2022.
  88. ^ Herman, Daniel (December 9, 2022). "The End of High-School English". The Atlantic. Archived from the original on January 20, 2023. Retrieved December 12, 2022.
  89. ^ Stokel-Walker, Chris (December 9, 2022). "AI bot ChatGPT writes smart essays — should professors worry?". Nature. doi:10.1038/d41586-022-04397-7. PMID 36494443. S2CID 254530623. Archived from the original on January 17, 2023. Retrieved December 19, 2022.
  90. ^ Bowman, Emma (December 19, 2022). "A new AI chatbot might do your homework for you. But it's still not an A+ student". NPR. Archived from the original on January 20, 2023. Retrieved December 19, 2022.
  91. ^ Stern, Joanna (December 21, 2022). "ChatGPT Wrote My AP English Essay—and I Passed". The Wall Street Journal. Archived from the original on February 3, 2023. Retrieved December 21, 2022.
  92. ^ Mitchell, Alex (December 26, 2022). "Students using ChatGPT to cheat, professor warns". The New York Post. Archived from the original on February 3, 2023. Retrieved December 30, 2022.
  93. ^ Allen, Mike (December 26, 2022). "Professor warns about chatbot cheating: "Expect a flood"". Axios. Archived from the original on February 3, 2023. Retrieved December 30, 2022.
  94. ^ Rosalsky, Greg; Peaslee, Emma (January 17, 2023). "This 22-year-old is trying to save us from ChatGPT before it changes writing forever". NPR. Archived from the original on January 18, 2023. Retrieved January 18, 2023. On January 2nd, Edward released his app. He named it GPTZero. It uses ChatGPT against itself, checking whether "there's zero involvement or a lot of involvement" of the AI system in creating a given text. [...] Along these lines, one obvious application for GPTZero is to help teachers identify whether their students are plagiarizing their essays from ChatGPT.
  95. ^ Ropek, Lucas (January 4, 2023). "Did ChatGPT Write That? A College Student Created an AI Essay Detector". Gizmodo. Archived from the original on January 4, 2023. Retrieved January 4, 2023.
  96. ^ Tran, Tony Ho (January 4, 2023). "A College Kid Built an App That Sniffs Out Text Penned by AI". The Daily Beast. Archived from the original on January 6, 2023. Retrieved January 6, 2023.
  97. ^ "New York City Department of Education Bans ChatGPT". GovTech. January 10, 2023. Retrieved February 16, 2023.
  98. ^ Cole, Samantha (January 4, 2023). "NYC Bans Students and Teachers from Using ChatGPT". www.vice.com. Archived from the original on January 5, 2023. Retrieved January 5, 2023.
  99. ^ Ropek, Lucas (January 4, 2023). "New York City Schools Ban ChatGPT to Head Off a Cheating Epidemic". Gizmodo. Archived from the original on January 6, 2023. Retrieved January 6, 2023.
  100. ^ Kelly, Samantha Murphy (January 26, 2023). "ChatGPT passes exams from law and business schools | CNN Business". CNN. Archived from the original on February 2, 2023. Retrieved February 3, 2023.
  101. ^ Brainard, Jeffrey (February 22, 2023). "As scientists explore AI-written text, journals hammer out policies". Science. doi:10.1126/science.adh2937. Retrieved February 24, 2023.
  102. ^ Perrigo, Billy (January 18, 2023). "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic". Time. Archived from the original on January 19, 2023. Retrieved January 19, 2023. One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. "That was torture," he said.
  103. ^ Vincent, James (December 1, 2022). "OpenAI's new chatbot can explain code and write sitcom scripts but is still easily tricked". The Verge. Archived from the original on January 17, 2023. Retrieved January 6, 2023.
  104. ^ Woods, Allan (December 10, 2022). "I wrote a story about ChatGPT's AI. Then I dared it to write a better one". Toronto Star. Archived from the original on January 6, 2023. Retrieved January 6, 2023.
  105. ^ Rosenblatt, Kalhan (December 2, 2022). "An AI chatbot went viral. Some say it's better than Google; others worry it's problematic". NBC News. Archived from the original on February 3, 2023. Retrieved January 6, 2023.
  106. ^ Schechner, Sam; Kruppa, Miles (February 10, 2023). "Google Opens ChatGPT Rival Bard for Testing, as AI War Heats Up". The Wall Street Journal. Archived from the original on February 6, 2023. Retrieved February 6, 2023.
  107. ^ Ray, Tiernan (January 23, 2023). "ChatGPT is 'not particularly innovative,' and 'nothing revolutionary', says Meta's chief AI scientist". ZDNET. Retrieved February 16, 2023.
  108. ^ Toh, Michelle (February 7, 2023). "Baidu stock surges after announcement of ChatGPT-style AI bot". CNN. Archived from the original on February 8, 2023. Retrieved February 8, 2023.
  109. ^ Jo He-rim (February 3, 2023). "Naver to introduce search GPT in first half of year". The Korea Herald.
  110. ^ "Yandex plans to develop alternative to ChatGPT neural network". Tass. February 1, 2023.
