Last edited on January 18, 2024


To the research community

You may already be aware of the ongoing conversations surrounding the use of generative AI tools, including GPTs and other large language models (LLMs), and their implications for academic and research activities.

These guidelines apply to everyone in the research community, including faculty, staff, students at all levels, guest researchers (such as unpaid volunteers, interns, and visiting academics), and collaborators and consultants undertaking research at our institution. Our aim with these guidelines is to set the foundation for the principled and judicious use of AI in academic research.

It's essential that you familiarize yourself with these guidelines and weave them into your academic and research activities. Adapt them where necessary to align with the conventions and practices of your scholarly discipline. We also urge mentors and advisors to regularly discuss with their students and other research trainees the role of generative AI in their research endeavors.

Given the rapid pace at which new AI tools are being developed, we expect these guidelines to undergo periodic revision.

Implementing guidelines

  1. Understanding AI operations:
    1. Be aware that the workings behind AI-generated outputs might be opaque to the user.
    2. Consequently, the content produced may not be easily validated against primary sources.
  2. Recognizing biases and limitations:
    1. Understand that AI-generated content reflects the biases present in the data the underlying model was trained on.
    2. Researchers should critically assess and acknowledge these biases. The output might sometimes be incorrect or wholly fabricated, even when it seems trustworthy.
  3. Privacy and confidentiality:
    1. Treat inputting private or confidential data into public AI tools as public disclosure.
    2. Understand that uploading information to these tools, including submitting queries to GPTs and other LLM-based services, means releasing that data to a third party.
    3. Do not assume that generative AI tools adhere to privacy regulations like HIPAA and FERPA.
    4. Always exercise caution: disclosing sensitive information can lead to privacy and security incidents, including data breaches and the exposure of intellectual property.
    5. Generative AI may produce content that infringes on intellectual property or copyrighted material. Using such content could expose the researcher to accusations of plagiarism or misconduct.
  4. Adherence to evolving norms:
    1. Recognize that standards for using generative AI continually change, varying by application, context, and discipline.
    2. Ensure that AI use aligns with the standards and policies of your research context and discipline.
    3. Be aware of the positions and guidelines from journals, publishers, and professional organizations regarding generative AI use.
  5. Researcher accountability:
    1. Stay updated on policies governing generative AI use in your research.
    2. You're accountable for the work you create and share, including ensuring accuracy, proper attribution, and disclosure of AI involvement.
    3. This responsibility extends to everyone in the research community, from faculty to research trainees.
  6. Transparent disclosure and documentation:
    1. Clearly indicate and record any use of generative AI in research activities.
    2. Note that documentation requirements can differ by context and discipline, but when in doubt, choose transparency.
  7. Open communication with teams:
    1. Supervisors and senior researchers should foster open discussions with their teams about the potentials and pitfalls of using generative AI in research.
  8. Continuous learning:
    1. Given the rapid advancements in AI technology and changing standards, keep yourself informed.
    2. Engage in professional development opportunities to strengthen your knowledge and skills related to AI integration in research.

AI usage guidance

Specific examples of how AI can be used in different research areas (e.g., data analysis, modeling, simulation):

  • Example 1: Using AI to analyze large datasets and identify patterns (a brief sketch follows this list).
  • Example 2: Using AI to accelerate the discovery of new drugs and treatments.
  • Example 3: Using AI to create simulations that can be used to study complex phenomena.
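
To make Example 1 concrete, the snippet below is a minimal, illustrative sketch of pattern discovery via clustering. The dataset is synthetic, and the library (scikit-learn) and algorithm (k-means) are assumptions chosen for demonstration, not a prescribed method.

```python
# Illustrative sketch only: finding patterns (clusters) in a large tabular dataset.
# The data is synthetic and the choice of k-means is an assumption for demonstration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler

# Stand-in for a real research dataset: 10,000 samples with 4 numeric features.
X, _ = make_blobs(n_samples=10_000, n_features=4, centers=5, random_state=0)

# Standardize features so no single measurement scale dominates the clustering.
X_scaled = StandardScaler().fit_transform(X)

# Fit a simple clustering model to surface groups ("patterns") in the data.
model = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_scaled)

# Cluster sizes give a first look at the structure the model found.
for label, count in zip(*np.unique(model.labels_, return_counts=True)):
    print(f"cluster {label}: {count} samples")
```

As with any AI-assisted analysis, the patterns surfaced this way should be checked against domain knowledge and primary sources before they inform research conclusions.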

Potential biases and limitations of AI in research, and how to account for them:

  • Potential biases
    • Data bias: AI models are trained on data and can inherit and amplify societal biases present in that data. This can lead to unfair and discriminatory outcomes in research, such as biased decision-making, inaccurate predictions, and skewed results.
    • Algorithmic bias: AI algorithms themselves can be biased due to the choices made in their design and development. This includes biases in the selection of training data, the choice of algorithm parameters, and the design of the evaluation criteria.
    • Human bias: Researchers may unconsciously introduce bias into their research through their own assumptions, interpretations, and data analysis methods. This can be amplified by AI tools that are not designed to detect and mitigate human bias.
  • Limitations
    • Lack of explainability: Many AI algorithms are complex and lack transparency, making it difficult to understand how they arrive at their conclusions. This can make it challenging to interpret research results, identify potential biases, and build trust in AI-generated findings.
    • Overfitting: AI models can sometimes overfit to the training data, leading to poor performance on new data. This can result in inaccurate and unreliable research findings that cannot be generalized to broader populations or contexts (see the sketch after this list).
    • Limited applicability: AI tools may not be suitable for all types of research questions or methodologies. Additionally, they may require specialized expertise and resources that are not readily available to all researchers.
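
The overfitting limitation can be made visible by evaluating a model on data it never saw during training. The sketch below is a minimal illustration; the synthetic dataset, the decision-tree model, and the use of scikit-learn are all assumptions for demonstration only.

```python
# Illustrative sketch: a model that fits its training data almost perfectly
# can still generalize poorly (overfitting). All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# An unconstrained tree memorizes the training set (near-perfect train accuracy)
# but typically scores noticeably lower on the held-out test set.
deep_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", deep_tree.score(X_train, y_train))
print("test accuracy: ", deep_tree.score(X_test, y_test))

# Limiting model complexity narrows the gap, trading training fit for generalization.
shallow_tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("test accuracy (depth-limited):", shallow_tree.score(X_test, y_test))
```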

To address the potential biases and limitations of AI in research, several strategies can be implemented:

  • Data curation: Carefully selecting and curating high-quality data that is diverse, representative, and as free from bias as possible.
  • Algorithmic transparency: Using AI algorithms that are designed to be transparent and explainable, allowing researchers to understand how they reach their conclusions.
  • Human oversight: Maintaining human oversight and involvement in all stages of research, from data collection and analysis to interpretation and reporting of results.
  • Validation and evaluation: Validating and evaluating AI-generated findings through rigorous testing and comparison with alternative methods (see the sketch after this list).
  • Continuous improvement: Continuously monitoring and improving AI tools and methods to address emerging biases and limitations.
  • Collaboration: Fostering collaboration among researchers, ethicists, and policymakers to develop responsible and ethical approaches to using AI in research.
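
As a hedged illustration of the validation and evaluation strategy, the sketch below compares an AI model against a trivial baseline using cross-validation. The dataset, the models, and scikit-learn are placeholders chosen for the example, not a required toolchain.

```python
# Illustrative sketch: validate an AI model against a simple baseline before
# trusting its output. Dataset and model choices are assumptions for demonstration.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1_000, n_features=15, random_state=0)

baseline = DummyClassifier(strategy="most_frequent")
model = RandomForestClassifier(n_estimators=100, random_state=0)

# 5-fold cross-validation estimates out-of-sample performance; a model that does
# not clearly beat the baseline should not drive research conclusions.
print("baseline accuracy:", cross_val_score(baseline, X, y, cv=5).mean())
print("model accuracy:   ", cross_val_score(model, X, y, cv=5).mean())
```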

Additional resources