Considerations for the Responsible and Ethical Use of AI
Drafted: Nov. 25, 2025 | Last Revised: Feb. 16, 2026
Virginia Tech’s Considerations for Researchers on the Use of AI translates the university’s Responsible and Ethical AI Framework (2025) and the Generative AI Delegation Taxonomy (GAIDeT) (Suchikova et al., 2025) into practical steps for researchers. It provides considerations for using generative AI responsibly at each of the seven stages of the research lifecycle identified by the GAIDeT: conceptualization, literature review, methodology, software development, data management, writing, and ethical oversight. Each stage is discussed in terms of the seven principles in Virginia Tech’s Responsible and Ethical AI Framework. In addition to the seven GAIDeT stages, we acknowledge that researchers engage in peer and proposal review as part of their academic responsibilities. Many journals and sponsors prohibit the use of AI tools in these contexts, and researchers should therefore exercise heightened caution and follow all applicable confidentiality and integrity requirements.
In these considerations we construe Artificial Intelligence (AI) broadly and note that this term, coined in 1955, includes a vast array of methods and technologies. AI, in its simplest definition, refers to any technology or machine that can perform complex tasks typically associated with human intelligence. These considerations focus on the responsible use of AI tools in research, not on the development, training, or fine-tuning of AI or machine-learning models.
The resource helps faculty, students, and research staff apply AI transparently and in compliance with university and sponsor requirements and recommendations. Reflection prompts and real-world tips support ethical decision-making, documentation, and data protection. Together, these practices uphold Virginia Tech’s commitment to Ut Prosim, the Principles of Community, and leadership in responsible innovation.
In addition to these considerations, researchers are encouraged to review AI-related position statements, ethical codes, or policy guidance issued by their professional or disciplinary societies. Many organizations, such as the Human Factors and Ergonomics Society (HFES), the Institute of Electrical and Electronics Engineers (IEEE), the American Psychological Association (APA), the American Medical Association (AMA), and the Association for Computing Machinery (ACM), have published discipline-specific standards for responsible AI design, testing, and use. These professional resources complement Virginia Tech’s institutional framework by clarifying expectations and ethical norms within specific research domains. Researchers are also encouraged to pursue training in ethical AI use and data protection as appropriate for their discipline and research activities.
These considerations are not intended to serve as legal advice or as an exhaustive set of best practices; they should be viewed as a living document. This is not a comprehensive manual on how to conduct research using AI, nor does it provide technical instruction on model development, machine learning methodologies, or other specialized AI practices. Instead, it offers ethical and procedural considerations to support responsible use of AI tools within research workflows. Researchers engaged in AI development should consult discipline-specific best practices, Virginia Tech’s Responsible and Ethical AI Framework, and relevant professional society guidelines for technical or methodological considerations beyond the scope of this document.
Given the rapidly changing landscape of AI, these considerations will be reviewed and updated on a routine basis. Feedback from the research community is encouraged and will be reviewed as part of the ongoing update process. Questions and recommendations should be submitted to the Privacy and Research Data Protection Program in the Office of Research and Innovation (prdp@vt.edu).
Conceptualization

Purpose and Context
AI can help researchers generate ideas, refine research questions, identify novel directions, and explore feasibility. Used responsibly, it can broaden creativity while keeping human expertise at the center of the research process. Ethical use at this stage means allowing AI to support, not drive, the intellectual foundation of your work.
| Virginia Tech Responsible and Ethical AI Principle1 | Application | Tips and Considerations |
|---|---|---|
| Mission Alignment | Use AI to explore ideas that advance Virginia Tech’s research and service missions. | |
| Innovation for Good | Balance innovation with ethical reflection and feasibility. | |
| Human-Centered Benefit | Maintain human creativity and disciplinary insight as the core of idea generation. | |
| Responsible and Ethical Use | Consider who or what could be affected by an AI-inspired idea. | |
| Fairness and Transparency | Acknowledge AI’s role in idea creation to promote integrity and reproducibility. | |
| Human Judgment and Accountability | Researchers remain responsible for determining which ideas are valid and feasible. | |
| Data Security and Privacy | Protect unpublished ideas and early concepts shared with AI systems. | |
1 Principles in this table are drawn from Virginia Tech’s Responsible and Ethical AI Framework (v1.0, 2025). This guidance operationalizes those principles for research contexts across the stages of the research lifecycle.
Questions to Consider
- Does this AI-assisted idea align with my discipline’s standards and Virginia Tech’s mission?
- Have I documented AI’s role?
- Am I using AI to inspire creativity or to outsource thinking?
- Can I explain results generated in part or whole using AI tools? Did I check their validity? Can I predict the implications of their use?
Real-World Tips
Do not enter or upload proprietary, confidential, or fundable concepts as prompts in non-Virginia-Tech–approved AI tools. Once entered, that information may be stored on external servers, shared with third parties, or used to train future models, potentially compromising intellectual property protection and your research ideas.
Use your research notebook (paper or secure digital system) to capture how AI informed your thinking, rather than recording it in the AI platform itself. Keeping this record in your notebook not only ensures transparency and reproducibility but also helps establish evidence of your original contribution and supports your claim to intellectual property.
Literature Review

Purpose and Context
AI can assist researchers in conducting literature searches, summarizing findings, identifying trends or notable gaps, and organizing references. If used responsibly, AI can expand the scope and efficiency of your review. Individual researchers are responsible for maintaining accuracy, scholarly judgment, and integrity at all times.
Note: Formal evidence synthesis research methods (systematic reviews, meta-analyses, and similar) may be assisted by AI tools in specific, auditable ways. However, as these are considered methods for conducting original research, the use of AI should be fully guided by human researchers in a manner that does not compromise rigor or increase risk of bias. Learn more about these methods, and find contact information for partnership and support services, in the Libraries’ Systematic Reviews and Meta-Analyses Guide.
The considerations below do not apply to formal evidence synthesis methods.
| Virginia Tech Responsible and Ethical AI Principle | Application | Tips and Considerations |
|---|---|---|
| Mission Alignment | Use AI to deepen understanding of the scholarly landscape, broaden discovery, and serve the scholarly and global community. | |
| Innovation for Good | Leverage AI tools to explore interdisciplinary connections or emerging areas of research. | |
| Human-Centered Benefit | Maintain human expertise as the foundation for synthesis and interpretation. | |
| Responsible and Ethical Use | Respect copyright, licensing, and data-use agreements when using AI tools for literature searches. | |
| Human Judgment and Accountability | Retain full accountability for the accuracy, currency, and completeness of your literature review. | |
| Data Security and Privacy | Protect reference lists, annotations, and draft materials in accordance with university security standards. | |
Questions to Consider
- Does my use of AI align with the standards and expectations of relevant stakeholders (e.g., publisher guidelines, conduct and reporting guidelines, funder expectations)?
- Have I verified all AI-generated summaries and citations against original sources?
- Am I transparently documenting how AI contributed to my search or synthesis process?
- Does my review include diverse perspectives and not rely solely on AI-curated materials?
Real-World Tips
Create a simple tracking system to document which AI-suggested references you have verified.
For example, add a column in your reference manager or spreadsheet labeled “Verified” to mark each source after you:
- Locate the actual publication link or DOI, and
- Confirm that the AI summary accurately reflects the article’s content.
Keep notes and document if an AI-generated summary is misleading or incomplete. This helps you refine prompts and demonstrate due diligence in your literature review.
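One lightweight way to keep such a tracking system is a small script that appends AI-suggested references to a CSV and flips a verified flag once you have located the publication and checked the summary. The sketch below is illustrative only; the filename and column names are assumptions, not a prescribed format.

```python
import csv
from pathlib import Path

# Illustrative filename and columns; adapt to your reference manager's export.
LOG = Path("reference_check_log.csv")
FIELDS = ["citation", "doi_or_link", "verified", "notes"]

def add_reference(citation, doi_or_link=""):
    """Record an AI-suggested reference as unverified."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({"citation": citation, "doi_or_link": doi_or_link,
                         "verified": "no", "notes": ""})

def mark_verified(citation, notes=""):
    """Flip the verified flag after locating the DOI and checking the summary."""
    with LOG.open(newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        if row["citation"] == citation:
            row["verified"] = "yes"
            row["notes"] = notes
    with LOG.open("w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
```

A shared spreadsheet or a column in your reference manager serves the same purpose; the point is that every AI-suggested source carries an explicit verified/unverified status until a human has checked it.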
Methodology

Purpose and Context
AI tools are increasingly used to assist in experiment design, modeling, selecting analytical methods, refining research questions, and generating or reviewing code for data analysis. This section focuses primarily on the ethical use of generative AI tools—such as large language models and code generators—in research design and analysis planning, while recognizing that similar principles apply to other AI systems used for modeling, prediction, or analysis. Used responsibly, AI can improve rigor, reproducibility, and efficiency by helping researchers explore alternative methods, identify potential limitations, or document analytical decisions more clearly. Research methods are also where AI use has a particularly strong influence on research integrity. Overreliance on opaque, unverified, or poorly understood AI-assisted methods can undermine the validity and reproducibility of a study.
| Virginia Tech Responsible and Ethical AI Principle | Application | Tips and Considerations |
|---|---|---|
| Mission Alignment | Use AI tools to strengthen methodological rigor, reproducibility, and alignment with the research mission. | |
| Innovation for Good | Use AI to explore novel methodological approaches that expand knowledge responsibly. | |
| Human-Centered Benefit | Use AI in ways that enhance human creativity, insight, and problem-solving. | |
| Responsible and Ethical Use | Evaluate potential risks, ethical implications, and regulatory requirements of AI-influenced designs and methods. | |
| Fairness and Transparency | Clearly disclose the role of AI in shaping your research design or analysis plan and evaluate whether underlying training data or algorithms could introduce bias. | |
| Human Judgment and Accountability | Retain full responsibility for the validity, accuracy, and reproducibility of AI-assisted methods. | |
| Data Security and Privacy | Protect all data, algorithms, and design files used with AI systems under Virginia Tech’s data-handling standards. | |
Questions to Consider
- Have I verified that all AI-assisted methods are reproducible, explainable, and aligned with disciplinary standards? If human verification is not feasible, have I documented the limitations and explained how the results were validated?
- Have I transparently documented AI’s role in my methods section for others to understand and replicate?
- Am I balancing innovation with ethical responsibility and compliance?
- When collecting or selecting a dataset for training, am I considering whether its composition might inadvertently exclude certain populations or introduce bias?
- Am I complying with Virginia Tech’s Risk Classification Standard, disciplinary standards, and research sponsor and other contractual data security requirements?
Real-World Tips
- When using AI to generate code, analytic approaches, or research designs, create a method-verification log. Each time you implement or modify an AI-suggested step, record:
- The AI tool used and date
- The prompt or query (if applicable) you used to generate the AI response
- Whether the output was verified or edited by a human
- Who conducted the verification, and how
- When developing or sharing datasets that could be used in future AI training, include a simple README or dataset documentation file that describes key elements such as the dataset’s purpose, composition, data sources, collection methods, known limitations, and any potential biases. A README can accompany the dataset in your repository or project folder and should enable another researcher to understand how the data were created and how they should, or should not, be reused. Established frameworks such as Datasheets for Datasets, and The Data Nutrition Project can be consulted for additional guidance.
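As an illustration only, a short script can scaffold a README covering the elements listed above so that no section is forgotten before a dataset is shared. The section headings and filename below are assumptions, not a required template; frameworks such as Datasheets for Datasets offer fuller question sets.

```python
from pathlib import Path

# Section names mirror the elements suggested above; they are illustrative.
DATASHEET_SECTIONS = [
    "Purpose", "Composition", "Data Sources",
    "Collection Methods", "Known Limitations", "Potential Biases",
]

def write_dataset_readme(folder, entries):
    """Write a README.md skeleton documenting a dataset.

    `entries` maps section names to short descriptions; any missing
    section is emitted as a TODO placeholder for the researcher to fill in.
    """
    lines = ["# Dataset Documentation", ""]
    for section in DATASHEET_SECTIONS:
        lines.append(f"## {section}")
        lines.append(entries.get(section, "TODO: describe before sharing."))
        lines.append("")
    path = Path(folder) / "README.md"
    path.write_text("\n".join(lines), encoding="utf-8")
    return path
```

The TODO placeholders make incomplete documentation visible at a glance, which helps ensure another researcher can understand how the data were created before reusing them.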
Before incorporating AI into research design, confirm that the tool and its use align with local, state, and federal regulations, sponsor requirements, and Virginia Tech policies. Review project contracts, data management plans, and data sharing agreements, and if uncertain, contact your sponsor’s program officer, the Office of Sponsored Programs (OSP), or the Privacy and Research Data Protection Program (prdp@vt.edu).
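The method-verification log described above can be as simple as an append-only JSON Lines file. In this sketch the field names mirror the four items in the checklist; the filename and fields are illustrative rather than mandated.

```python
import json
from datetime import date
from pathlib import Path

# Illustrative filename; a lab notebook or version-control log works equally well.
LOG_FILE = Path("method_verification_log.jsonl")

def log_ai_step(tool, prompt, verified_by, how_verified, edited=False):
    """Append one method-verification entry covering the checklist items above."""
    entry = {
        "date": date.today().isoformat(),  # date of use
        "tool": tool,                      # AI tool used
        "prompt": prompt,                  # prompt or query, if applicable
        "output_edited": edited,           # was the output edited by a human?
        "verified_by": verified_by,        # who conducted the verification
        "how_verified": how_verified,      # how the verification was done
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Because each entry is a single line of JSON, the log is easy to diff, audit, and include alongside a methods section or data management plan.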
Software Development

Purpose and Context
AI technologies, including generative models, machine learning systems, and other algorithmic tools, can assist researchers by writing code, automating data processing, optimizing algorithms, developing predictive models, and building research tools. When used responsibly, they can enhance productivity, reproducibility, accessibility, and discovery. However, automating research processes also introduces ethical, legal, and security risks, including errors that can propagate quickly, hidden bias in AI-generated code or training data, copyright or licensing violations, and compliance gaps when automated tools handle restricted data. Ethical use of AI in software development and automation means maintaining human oversight, verification, and accountability for all code, models, and processes, ensuring they comply with university policies, data classifications, and sponsor requirements.
| Virginia Tech Responsible and Ethical AI Principle | Application | Tips and Considerations |
|---|---|---|
| Mission Alignment | Use AI tools to assist in building or improving research software, scripts, or automated workflows in ways that advance Virginia Tech’s missions of discovery, learning, and service. | |
| Innovation for Good | Employ AI to innovate responsibly in code generation, automation, and prediction, balancing creativity with ethical foresight. | |
| Human-Centered Benefit | Ensure AI automation supports learning and mentorship. | |
| Responsible and Ethical Use | Confirm that AI-generated software complies with licenses and security standards. | |
| Fairness and Transparency | Maintain transparency in code authorship and model development, and disclose AI assistance in automation or algorithm design. | |
| Human Judgment and Accountability | Humans remain responsible and accountable for all AI-generated code and models. | |
| Data Security and Privacy | Securely manage all code, data, and systems involved in automation, including model training and storage. | |
Questions to Consider
- Have I verified that all AI- or machine learning-generated code, models, or automated processes meet cybersecurity and data protection requirements?
- Have I documented the role of AI or machine learning in code creation, model development, or workflow automation?
- Does the research team retain appropriate expertise and oversight to evaluate and validate AI-generated outputs, even when the AI performs tasks beyond what a human could manually produce?
- Could automated or predictive systems unintentionally introduce bias or inequity into research results?
Real-World Tips
Before integrating AI- or machine learning-generated code, models, or automated processes into your research, perform a code and compliance check:
- Review the tool’s terms of service and licensing to ensure compatibility with university and sponsor policies.
- Test AI- or machine learning-generated code and models in a controlled environment before deployment.
- Confirm that the platform and data storage align with the risk level of your project and have been approved by Virginia Tech.
- Document human verification steps and approvals in your version control or lab notebook.
This practice protects research integrity, data security, and intellectual property while reinforcing accountability across collaborative teams.
Data Management

Purpose and Context
AI tools can assist researchers with nearly every stage of data management, from collection and cleaning to curation, analysis, and visualization. Used responsibly, AI can help improve reproducibility, enhance efficiency, identify patterns, and ensure quality control. However, using AI for data management also requires ethical and compliance responsibilities. Researchers must confirm that any AI tool or automated workflow complies with Virginia Tech data classification standards, privacy protections, and sponsor or regulatory requirements. Additionally, human oversight remains essential for verifying accuracy, ensuring fairness, and interpreting AI-generated insights responsibly. Ethical use of AI in data management and analysis means using these tools to support, not replace, human expertise and ethical judgment, maintaining transparency, reproducibility, and trust in research outcomes.
| Virginia Tech Responsible and Ethical AI Principle | Application | Tips and Considerations |
|---|---|---|
| Mission Alignment | Use AI to manage and analyze data in ways that advance Virginia Tech’s missions of discovery, learning, and service to the public good. | |
| Innovation for Good | Apply AI to explore new approaches for data collection, processing, or visualization while balancing innovation with ethical awareness. | |
| Human-Centered Benefit | Use AI-assisted data management to enhance human understanding and engagement rather than automating interpretation or decision-making. | |
| Responsible and Ethical Use | Handle all data, including AI-assisted processing, in compliance with regulatory, sponsor, and institutional requirements. | |
| Fairness and Transparency | Evaluate and disclose the role of AI in data analysis, addressing potential bias in datasets, training data, or algorithms. | |
| Human Judgment and Accountability | Researchers remain accountable for verifying AI-generated analyses, visualizations, and conclusions. | |
| Data Security and Privacy | Protect all data used in AI workflows in accordance with Virginia Tech’s data-handling policies and sponsor requirements. | |
Questions to Consider
- Have I confirmed that the AI tools used for data collection, analysis, or visualization are approved for the data’s risk level and comply with privacy or sponsor requirements?
- Have I validated AI-assisted data analyses or visualizations through independent verification?
- Does my use of AI in data management enhance, rather than replace, human interpretation and skill development?
- Have I documented how AI influenced data cleaning, transformation, or analysis for transparency and reproducibility?
Real-World Tips
Before processing or analyzing data with AI tools, perform a data authorization and verification check:
- Confirm the data classification level using Virginia Tech’s Risk Classification Standard.
- Verify that your AI platform or computing environment is approved by Virginia Tech for that classification.
- Ensure that human review is built into every stage of data analysis, especially when AI identifies patterns or outliers.
- Document the AI system used, version, and date of analysis in your research records to support transparency and reproducibility.
These steps protect data privacy, ensure compliance, and reinforce Virginia Tech’s commitment to responsible innovation and research integrity.
Writing

Purpose and Context
AI tools can assist researchers in planning, drafting, editing, summarizing, translating, and formatting research materials. Used thoughtfully, AI can improve clarity, accessibility, and productivity in scholarly communication. However, using AI during the writing process raises ethical considerations involving authorship, accountability, attribution, confidentiality, tone, and integrity. Ethical use of AI in writing and editing means using these tools to support—rather than replace—human scholarship, creativity, and responsibility. Researchers must ensure that AI contributions are transparent, verifiable, and compliant with Virginia Tech’s academic and research integrity policies, including the Virginia Tech Policy on Misconduct in Research (13020), the Virginia Tech Undergraduate Honor Code, and the Graduate Honor System at Virginia Tech. This section covers responsible AI use in writing tasks such as text generation, proofreading and editing, summarizing, formulating conclusions, adjusting tone, translation, reformatting, and preparing press releases or outreach materials.
| Virginia Tech Responsible and Ethical AI Principle | Application | Tips and Considerations |
|---|---|---|
| Mission Alignment | Use AI tools to enhance research communication in ways that advance Virginia Tech’s missions of discovery, learning, and service. | |
| Innovation for Good | Use AI to responsibly explore new ways to create, summarize, and share research findings. | |
| Human-Centered Benefit | Use AI to enhance human creativity and communication, not replace scholarly expertise or skill development. | |
| Responsible and Ethical Use | Ensure AI use in writing and editing complies with academic integrity, sponsor requirements, and data security policies. | |
| Fairness and Transparency | Disclose how AI contributed to writing, editing, translation, or summarization to maintain transparency and trust. | |
| Human Judgment and Accountability | Researchers remain accountable for all written work, regardless of AI assistance. | |
| Data Security and Privacy | Protect unpublished manuscripts, research data, and correspondence when using AI systems. | |
Questions to Consider
- Have I clearly disclosed where and how AI contributed to my writing, summarization, or translation?
- Have I verified every AI-assisted summary, reference, and translation for factual accuracy and context?
- Does AI use in my writing align with Virginia Tech’s Principles of Community and standards of scholarly authorship?
- Am I ensuring that AI complements my expertise instead of replacing critical writing and reasoning skills?
- Does the AI-generated text reflect how I would explain it to a colleague in my field or to the target audience?
- Does the AI-generated text sound like me and accurately reflect my work?
Real-World Tips
Before submitting a manuscript, outreach piece, or translation that involved AI assistance:
- Document how AI was used (e.g., “ChatGPT was used to suggest grammar edits for the abstract”).
- Disclose AI involvement using the format recommended by Virginia Tech Libraries’ AI Citation Guidance and ensure the disclosure also meets the specific requirements of the journal, publisher, or sponsor.
- Verify every statement, citation, and translation for accuracy and meaning.
- Retain ownership of scholarly authorship; AI can assist with structure or style, but you are responsible for the ideas and conclusions.
- Protect both confidentiality and data integrity. Do not use public AI systems for unpublished, restricted, or proprietary research materials. Ensure that datasets and findings remain secure until they are cleared for public release or publication.
These steps preserve authorship integrity, comply with institutional and sponsor requirements, and model responsible AI use for students and collaborators.
Ethical Oversight

Purpose and Context
AI can assist researchers in examining and documenting ethical considerations across the research lifecycle. When used responsibly, AI tools might help identify potential bias, organize risk summaries, assist with drafting compliance materials, or help detect and monitor possible confidentiality issues. However, ethical evaluation and oversight, including determinations about whether research meets regulatory thresholds or requires IRB, IACUC, or IBC review, must remain human-led.
These activities can strengthen ethical awareness and transparency when used appropriately, but researchers must maintain oversight, verify accuracy, and ensure that all ethical or regulatory decisions are made by the appropriate boards.
| Virginia Tech Responsible and Ethical AI Principle | Application | Tips and Considerations |
|---|---|---|
| Mission Alignment | Use AI in ways that strengthen ethical reflection and uphold Virginia Tech’s commitment to integrity, respect, and responsible discovery. | |
| Innovation for Good | Apply AI responsibly to enhance ethical awareness, bias detection, and compliance documentation. | |
| Human-Centered Benefit | Keep human ethical reasoning and accountability central when using AI for review or compliance support. | |
| Responsible and Ethical Use | Ensure all AI use in ethics-related activities complies with institutional, regulatory, and sponsor requirements. | |
| Human Judgment and Accountability | Ensure that ethical determinations, regulatory classifications, and compliance decisions remain the responsibility of qualified human researchers and institutional review bodies. | |
| Fairness and Transparency | Use AI to support fairness and disclosure in ethical analysis and documentation. | |
| Data Security and Privacy | Protect all data and ethical review materials and consider how AI might assist with identifying or preventing confidentiality risks. | |
Questions to Consider
- Have I used AI only to support, not replace, my ethical reasoning and oversight responsibilities?
- Did AI help me identify potential ethical issues or areas of bias that I then reviewed manually?
- Have I verified that AI-assisted tools did not process confidential or identifiable data in unapproved environments?
- Have I referred decisions about whether my research requires IRB, IACUC, or IBC review to the appropriate compliance office or review board?
Real-World Tips
Do not use Copilot, ChatGPT, or other AI tools to decide which ethical reviews are required. Follow Virginia Tech’s compliance requirements and seek guidance from the appropriate review boards.
As of January 2026, federal research compliance regulations do not permit AI to determine whether research plans meet regulatory requirements.
Publication Support

Purpose and Context
AI tools can assist researchers in preparing materials for publication and communicating results to different audiences. When used responsibly, AI can help draft or format cover letters, generate plain-language summaries, check journal compliance requirements, or organize responses to reviewers. However, researchers must maintain authorship integrity, ensure factual and ethical accuracy, and never delegate human reasoning, interpretation, or scholarly responsibility to AI systems.
AI use in publication support should always comply with Virginia Tech’s research integrity standards, publisher and sponsor disclosure requirements, and confidentiality obligations when handling peer reviews, revisions, or embargoed materials. Researchers are responsible for verifying all content and disclosing AI assistance transparently.
| Virginia Tech Responsible and Ethical AI Principle | Application | Tips and Considerations |
|---|---|---|
| Mission Alignment | Use AI to support clear, accurate, and ethical dissemination of research findings that contribute to scholarship and the public good. | |
| Innovation for Good | Use AI to expand the accessibility and efficiency of the publication process. | |
| Human-Centered Benefit | Keep human authorship, expertise, and critical thinking at the center of all publication-related activities. | |
| Responsible and Ethical Use | Ensure that AI use in publication preparation and peer communication complies with ethical, institutional, and publisher standards. | |
| Human Judgment and Accountability | Maintain human responsibility for the accuracy, integrity, authorship, and ethical compliance of all publication-related materials. | |
| Fairness and Transparency | Promote fairness, inclusivity, and transparency in AI-assisted publication communication. | |
| Data Security and Privacy | Protect confidential, proprietary, or unpublished research materials when using AI systems for publication support. | |
Questions to Consider
- Have I verified that all AI-assisted materials (cover letters, summaries, etc.) accurately represent my work?
- Have I followed journal, sponsor, and institutional policies on AI disclosure and data protection?
- Have I disclosed AI assistance when required, and reviewed all language for tone, accuracy, and fairness?
- Have I ensured that confidential or unpublished research materials were not entered into public or non-secure AI systems?
- Have I retained human authorship, intellectual ownership, and final approval over all published content?
Real-World Tips
AI can assist with publication tasks but cannot represent your research for you. Use AI to help format documents, draft cover letters, or check journal requirements, but always review and approve the final version yourself. Never upload reviewer comments, confidential correspondence, or unpublished work into public or unapproved AI systems. Follow publisher, sponsor, and Virginia Tech compliance requirements for disclosure and authorship integrity.
Suchikova Y, Tsybuliak N, Teixeira da Silva JA, Nazarovets S. GAIDeT (Generative AI Delegation Taxonomy): A taxonomy for humans to delegate tasks to generative artificial intelligence in scientific research and publishing. Account Res. 2025 Aug 8:1-27. doi: 10.1080/08989621.2025.2544331
Responsible and Ethical AI Framework for Virginia Tech (v1.0)
Data Risk Classification Standard. VT Division of IT
Using (and Citing) AI: Tips & Tools from Your Librarians
Virginia Tech Policy 13020: Misconduct in Research
Virginia Tech Responsible and Ethical AI Principles
GAIDeT Declaration Generator: An open-source web tool for documenting and disclosing generative AI use throughout the research lifecycle (Suchikova et al., 2025).
Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for Datasets. arXiv.
The Data Nutrition Project: An open-source initiative offering tools and templates for creating dataset “nutrition labels,” supporting responsible data stewardship, including bias detection, metadata quality, and dataset documentation. Retrieved from https://datanutrition.org on Dec. 17, 2025.
AI Tools and Access at Virginia Tech | Artificial Intelligence | Virginia Tech
Examples of Discipline-Specific Guidance
The following examples illustrate how professional organizations are establishing discipline-specific standards and ethical expectations for AI. Researchers should review the guidance relevant to their field to ensure their use of AI aligns with both Virginia Tech policies and their profession’s standards of practice.
- American Psychological Association (APA). Ethical Guidance for AI in the Professional Practice of Health Service Psychology
- Association for Computing Machinery (ACM). ACM-USTPC GenAI Principles June 2023
- American Medical Association (AMA). AMA Principles for Augmented Intelligence Development, Deployment, and Use
- Human Factors and Ergonomics Society (HFES). HFES Policy Statement: AI Guardrails for Human Use (Rev. 7)
- International Association of Business Communicators (IABC). Ethical Use of AI
- United Nations Educational, Scientific and Cultural Organization (UNESCO). Ethics of Artificial Intelligence
Note: This list is illustrative, not exhaustive. Researchers are responsible for identifying and adhering to the AI-related standards and policies established by their own professional or disciplinary organizations.