AI models return results based on their training data and algorithms, which may be inaccurate or biased. Companies producing AI models often put safeguards in place to reduce inaccuracy and bias, but these problems can still be difficult to detect and correct.
When AI generates incorrect information, it's called a "hallucination." Since it can be difficult to tell where an AI tool is sourcing its information, as a researcher you have the responsibility to review and verify sources whenever possible.
Your professor may have included a statement in your syllabus that specifically addresses the use of ChatGPT or other AI tools.
Also review Tulane's guidance on generative AI and university data:
"All members of Tulane University have a responsibility to protect university data from unauthorized access or disclosure. Consistent with Tulane’s data governance, data management, and data classification policies, data classified as Level 2- Internal, Level 3-Confidential Data, or Level 4- Restricted should not be entered into publicly available generative AI tools. Information shared with generative AI tools using default settings is not private and could result in unauthorized access or disclosure of university proprietary, confidential or restricted data." (Source: https://ai.tulane.edu)
By design, ChatGPT and other generative AI tools collect the data you input to improve their models, so it's important to limit the amount of personal information you share. As a user, omitting personal details from your queries is a good first step. Review your tool's settings to see how you can protect your privacy; for example, to limit what ChatGPT can save and use from your queries, you can turn off chat history and model training.