“ChatGPT, an innovative technology developed by OpenAI, has made a tremendous impact on the field of artificial intelligence (AI). It has revolutionized the way we approach natural language processing (NLP), chatbots, language translation, and more.”
We can’t have an article about ChatGPT without some output generated by the “scary good” and close to “dangerously strong AI” model itself, as it was described by Elon Musk, an OpenAI co-founder who has since departed. For those of us who have avoided signing up for the personal-assistant-like AI and still exhaustingly draft our own emails and grocery lists, a bit of background may be needed.
What’s all the fuss about ChatGPT?
ChatGPT combines existing and novel techniques to generate human-like responses in text-based conversations (think: a customer-service chatbot). Its uniqueness is attributed to its use of Reinforcement Learning from Human Feedback (RLHF), which allows it to learn from human feedback and improve its responses over time. OpenAI also builds on the Transformer architecture introduced by Google researchers, in the form of its Generative Pre-trained Transformer (GPT) models, which are trained on large amounts of text data using unsupervised learning. The text data collected from the internet includes content created only up until 2021, and the released model does not perform live internet searches to produce its responses.
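The core RLHF idea can be sketched in miniature: a reward model, trained on human rankings of candidate responses, scores new candidates, and the chatbot is nudged toward the higher-scoring ones. The toy sketch below is purely illustrative; the heuristic reward function and all names are our own invention, not OpenAI’s actual implementation.

```python
# Toy sketch of the RLHF preference step. A real reward model is a learned
# neural network; here a hand-written heuristic stands in for it.

def reward_model(response: str) -> float:
    """Stand-in for a learned reward model: prefers answers that are
    specific rather than bare apologies (illustrative heuristic only)."""
    score = 0.0
    if "sorry" not in response.lower():
        score += 1.0  # penalize unhelpful refusals
    score += min(len(response.split()), 30) / 30  # mildly reward detail
    return score

def pick_preferred(candidates: list[str]) -> str:
    """Choose the candidate the reward model scores highest, mimicking
    the human preference signal used during fine-tuning."""
    return max(candidates, key=reward_model)

candidates = [
    "Sorry, I can't help with that.",
    "To reset your password, open Settings, choose Security, and select Reset.",
]
print(pick_preferred(candidates))
```

In the real pipeline, the preferred responses are not just selected but used as a training signal to update the model’s weights, which is why responses improve over time.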
Should I be worried?
As you sign up to start requesting customized workout plans from ChatGPT, you are reminded of its many limitations. Some users experience issues with:
- Plausible-sounding but incorrect or nonsensical answers: ChatGPT may unintentionally fool users with confidently written answers rooted in unreliable training material. Training the model to be more cautious can also cause it to decline questions it could have answered correctly.
- ChatGPT needs a helping hand: Certain phrasings of an input prompt may take a few trial iterations before the model fully understands the question. It may claim not to know the answer until a reworded prompt elicits a proper and/or accurate response.
- Unexpected context: Instead of asking clarifying questions about an ambiguous query, the model guesses the user’s intended context and answers accordingly.
- The Moderation API is not perfect: Warnings or blocks on unsafe outputs, such as harmful instructions or biased responses, are not always applied, letting false positives and false negatives slip through.
Clearly, ChatGPT can create some unfavorable scenarios as users fold it into their personal and professional work, leaving jobs such as telemarketing vulnerable. However, the convenience of receiving a single (potentially incorrect) answer, instead of sifting through Google’s endless list of relevant links, has made it a daily virtual companion for many users.
First essays, now hacker threats?
While some schools have blocked access to ChatGPT on their networks to combat low-quality AI-written essays, others have recommended using it as a consulting aid. But if you’re out of school and have dabbled in Women Who Code’s technical workshops, you might find ChatGPT a competent tool for debugging code. It ranks competitively against automated and deep-learning program-repair techniques, with the added touch of explaining how to optimize your code for better performance. Due to its generally unreliable output, however, Stack Overflow banned posting ChatGPT-generated solutions as the volume of inaccurate answers exploded.
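To give a sense of what “debugging aid” means in practice, here is the kind of off-by-one bug ChatGPT can often spot and explain. The snippet and its fix are our own illustration, not actual model output.

```python
# A classic off-by-one bug of the sort a code-assistant chatbot can
# usually identify and explain. (Illustrative example only.)

def sum_first_n_buggy(values, n):
    # Bug: range(1, n) skips index 0 and stops at n - 1,
    # so it never sums the first n items.
    return sum(values[i] for i in range(1, n))

def sum_first_n_fixed(values, n):
    # Fix: range(n) covers indices 0 .. n - 1, i.e. the first n items.
    return sum(values[i] for i in range(n))

data = [10, 20, 30, 40]
print(sum_first_n_buggy(data, 3))  # 50 -- wrong: misses values[0]
print(sum_first_n_fixed(data, 3))  # 60 -- correct
```

A chatbot answer would typically pair the corrected code with a short explanation of why `range(1, n)` drops the first element, which is exactly the explanatory touch that sets it apart from classic program-repair tools.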
As this technology levels the playing field for those with zero coding experience, forum posts have surfaced showing individuals using the model to help write malware. Cybersecurity risks are rising as cybercriminals can more easily produce scripts for online text-based platforms or create realistic-looking social profiles. Tackling this ongoing threat is no easy task; on the defensive side, however, using ChatGPT to compose live incident reports of attacks can significantly speed up assessment and response by security teams.
The arms race has begun as Microsoft releases a web and mobile test program integrating ChatGPT with the Bing search engine. Users can now receive human-like responses drawn from live internet searches, along with links to the model’s sources. The chatbot allows only very limited use for now, as controversial responses show it is far from public-ready. With open-source variants of ChatGPT in development, the technology sits at the forefront of technological discourse as OpenAI CEO and former Y Combinator president Sam Altman voices his concern for “how people of the future will view us.”