Inaccuracy and Bias are Built-In
AI models are trained on vast amounts of human-generated text; as a result, they may reflect and perpetuate biases and errors present in that data. Companies that produce AI models often put safeguards in place to reduce inaccuracy and bias, but these problems remain difficult to detect and correct.
Hallucination
When AI generates incorrect information, it's called a "hallucination." Because it is often difficult to tell where an AI tool is sourcing its information, you, as a researcher, have the responsibility to review and verify sources.
Errors
AI models make mistakes. Although they process and analyze data at incredible speed, they lack the intuitive understanding and judgment of a human. Always review and contextualize their outputs.
Academic Integrity
Your professor may have included a statement in the course syllabus that addresses the use of AI tools. If not, ask them.
"All members of Tulane University have a responsibility to protect university data from unauthorized access or disclosure. Consistent with Tulane’s data governance, data management, and data classification policies, data classified as Level 2- Internal, Level 3-Confidential Data, or Level 4- Restricted should not be entered into publicly available generative AI tools. Information shared with generative AI tools using default settings is not private and could result in unauthorized access or disclosure of university proprietary, confidential or restricted data." (Source: https://ai.tulane.edu)
By design, ChatGPT and other generative AI tools collect the data you enter to improve their models, so it's important to limit the amount of personal information you share. Omitting personal details from your queries is a good first step (see the sketch below). Also review your tool's settings to see how you can protect your privacy; for example, to limit what ChatGPT can save and use from your queries, you can turn off chat history and model training.
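As a purely illustrative sketch, not a feature of ChatGPT or any particular tool, the following Python snippet shows one way to scrub obvious personal details, such as email addresses and phone numbers, from a prompt before pasting it into a public AI tool. The regex patterns, the scrub function, and the sample text are assumptions made for demonstration; real PII detection is much harder than a pair of regexes.

```python
import re

# Illustrative regex patterns for common personal details. Real PII
# detection is much harder; these are simplified stand-ins.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US-style phone numbers
]

def scrub(prompt: str) -> str:
    """Replace obvious personal details before the prompt is shared
    with a publicly available generative AI tool."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    # Hypothetical example; the address and number are made up.
    raw = "Email jdoe@tulane.edu or call 504-555-0123 about my grant application."
    print(scrub(raw))  # Email [EMAIL] or call [PHONE] about my grant application.
```

A scrubbing step like this only catches patterns you anticipate; it is a complement to, not a substitute for, reviewing what you paste into a public tool.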