Guidelines for use of artificial intelligence in research

At Arizona State University, we recognize the rapidly evolving landscape of artificial intelligence and its potential to advance knowledge, research, and scholarly work. We support the responsible and ethical use of AI tools to facilitate research activities. As the use cases of AI tools for research become better understood and federal agencies release guidance, we will be providing regular updates, guidance, and resources to keep the ASU Research Community up to date.

Before starting any research project that involves the use of AI tools, it is strongly recommended that you discuss the appropriateness of using the technology with your co-investigators, collaborators, and field experts. If you decide to use generative AI in your research, keep in mind the following items:

  • Use only tools that have been approved for use at ASU (see AI Tools for the ASU Community).
  • Before using any generative AI tools, review the AI Digital Trust guidelines and Data Handling Standard to ensure you understand the responsibilities and expectations for data privacy, copyright and intellectual property protections at ASU.
  • Many Federal agencies have tools to detect AI-generated content. Be aware of these tools and their potential impact on your research.
  • Many Federal agencies are developing standards and guidelines around the use of AI in research projects. Before deciding to work with a federal agency, review where the agency is in developing its standards and guidelines. Review the award terms and conditions of the research project for any AI-related standards, or seek guidance from your federal program officer.
  • Content generated by AI often paraphrases other sources, which could raise concerns regarding plagiarism and intellectual property rights.
  • Content generated by AI may be inaccurate or biased. It is important to validate AI-provided content against other reliable and verifiable sources.
  • Do not rely solely on generative AI for decision-making purposes. Use the results to inform your research while making decisions based on additional factors and evidence.
  • Do not place federal, state, or ASU data into externally sourced generative AI tools. AI tools can easily violate educational privacy laws (FERPA) and data protection regulations (GDPR, HIPAA). Once data is placed into an AI tool, it becomes a Public Record and open source. This could lead to unintended disclosure of sensitive information, lack of control or accountability, uncertainty over data retention and deletion, and potential data misuse. This occurs, for example, with ChatGPT, Bard, Bing, or GPT, as well as with prompts to generative image tools such as DALL-E. Additionally, the data may be subject to other terms and conditions, which may include a Data Use Agreement.
    • Note: There are large language models that are HIPAA-compliant and support PHI. For questions on LLMs, see Get Protected.
  • Do not place any US export-controlled data into any generative AI tool. Placing such data into AI tools will violate US export control laws.
  • For meetings that will involve discussions of a sensitive nature (e.g., personal, confidential, financial, IP, proprietary, personnel, export-controlled), do not use AI automated meeting tools (Read.ai, Otter.ai, Fireflies.ai, MeetGeek) to record and capture discussions, measure attendee engagement, etc., as the data generated by these tools will be considered Public Records and will pose broad privacy and data security risks.
  • Be cognizant of virtual meetings where AI meeting tools may be used. These tools monitor schedules and automatically join meetings to produce real-time transcripts and summaries. Inquire with the meeting host about the use of an AI tool, and request removal of the tool or decline participation in the meeting if the information shared is of a sensitive nature. Many research projects require compliance with confidentiality and data security requirements as part of the award terms and conditions, or may require approval from the federal program officer or institutional review board (IRB) before any research data is provided outside of secure ASU systems.
  • When working with vendors or subcontractors, inquire about their AI usage practices. Additional terms and conditions may need to be included in any resulting agreement to ensure responsible and ethical use of AI tools by collaborating organizations.

By following these guidelines, researchers can leverage the benefits of generative AI in their research while ensuring the safe, responsible, and ethical use of this technology.

Resources:

AI Tools for the ASU Community

ASU Artificial Intelligence

Office of the Provost, Generative AI

Get Protected

Blueprint for an AI Bill of Rights

National Science Foundation, Artificial Intelligence

Questions?

  • General questions about generative AI tools (e.g., access, functionality, support, safeguarding data): Contact your local IT support, the Enterprise Technology ET Routing Matrix, or the Get Protected team.
  • Questions about the use of generative AI in research projects involving human subjects (e.g., IRB studies): Email the ASU IRB at [email protected].
  • Questions about US export controls related to research projects involving specific AI technologies, hardware, or software: Email the Export Control team at [email protected].
  • Questions from external funding agencies about ASU’s use of AI in research: Reach out to Heather Clark, Chief Research Compliance Officer (CRCO), at [email protected].