Generative AI tools like ChatGPT can quickly produce many kinds of content, including answers to questions, cover letters, poems, short stories, outlines, essays, and reports. However, generated content should be checked carefully, as it may contain errors, false claims, or plausible-sounding but completely incorrect or nonsensical answers.
Generative AI can also be used to create fake images and videos so convincing that they are increasingly difficult to detect. Be careful which images and videos you trust, as they may have been created to spread disinformation.
Generative AI relies on the information it finds on the Internet to create new output. As information is often biased, the newly generated content may contain a similar kind of bias, including gender bias, racial bias, cultural bias, political bias, and religious bias. Scrutinize AI-generated content closely for inherent biases.
AI-generated content may also be selective, since it depends on the tool's algorithms to create responses. Although these tools draw on a huge amount of information found on the Internet, they may not be able to access subscription-based information secured behind paywalls or firewalls.
Content may also lack depth, be vague rather than specific, and be full of clichés, repetitions, and even contradictions.
AI tools may not always use the most current information in the content they create. When the free version of ChatGPT was first created, its knowledge cut-off date was September 2021. The knowledge cut-off date is now January 2022 (as of 2/26/2024).
In most disciplines, having the most recent, up-to-date information is crucial. For example, COVID-19 research evolved swiftly, and it was essential to have the most recent comprehensive and reliable data available. Technology is another constantly changing area: information that is valid one year may not be correct the next. You must supplement information from AI tools with your own research to ensure you haven't missed anything important.
Generative AI tools don't always include citations to their sources of information. These tools are also known to create incorrect citations and to make up citations to non-existent sources (sometimes called AI hallucination).
Failing to credit sources and fabricating citations are both forms of plagiarism and, therefore, breaches of academic integrity.
Generative AI tools rely on what they can find in their vast knowledge repositories to create new work, and that work may infringe copyright if it incorporates copyrighted material.
For example, there have been several lawsuits against tech companies that used images found on the Internet to train their AI tools. In one such case, Getty Images accuses Stability AI, the maker of Stable Diffusion, of using millions of pictures from Getty's library to train its AI tool; Getty is claiming $1.8 trillion in damages. There is also much debate about who owns the copyright to a product created by AI: the person who wrote the code for the AI tool, the person who came up with the prompt, or the AI tool itself?