
Considerations for the Responsible and Ethical Use of AI

Drafted: Nov. 25, 2025 | Last Revised: Feb. 16, 2026

Virginia Tech’s Considerations for Researchers on the Use of AI translates the university’s Responsible and Ethical AI Framework (2025) and the Generative AI Delegation Taxonomy (GAIDeT) (Suchikova et al., 2025) into practical steps for researchers. It provides stage-specific considerations for using generative AI responsibly across the stages of the research lifecycle as identified by the GAIDeT. These seven stages include conceptualization, literature review, methodology, software development, data management, writing, and ethical oversight. Each of the seven stages is discussed in terms of the seven principles in VT's Responsible and Ethical AI Framework. In addition to the seven GAIDeT stages, we acknowledge that researchers engage in peer and proposal review as part of their academic responsibilities. Many journals and sponsors prohibit the use of AI tools in these contexts, and researchers should therefore exercise heightened caution and follow all applicable confidentiality and integrity requirements.

In these considerations we construe Artificial Intelligence (AI) broadly and note that this term, coined in 1955, includes a vast array of methods and technologies. AI, in its simplest definition, refers to any technology or machine that can perform complex tasks typically associated with human intelligence. These considerations focus on the responsible use of AI tools in research, not on the development, training, or fine-tuning of AI or machine-learning models.

The resource helps faculty, students, and research staff apply AI transparently and in compliance with university and sponsor requirements and recommendations. Reflection prompts and real-world tips support ethical decision-making, documentation, and data protection. Together, these practices uphold Virginia Tech’s commitment to Ut Prosim, the Principles of Community, and leadership in responsible innovation.

In addition to these considerations, researchers are encouraged to review AI-related position statements, ethical codes, or policy guidance issued by their professional or disciplinary societies. Many organizations, such as the Human Factors and Ergonomics Society (HFES), the Institute of Electrical and Electronics Engineers (IEEE), the American Psychological Association (APA), the American Medical Association (AMA), and the Association for Computing Machinery (ACM), have published discipline-specific standards for responsible AI design, testing, and use. These professional resources complement Virginia Tech’s institutional framework by clarifying expectations and ethical norms within specific research domains. Researchers are also encouraged to pursue training in ethical AI use and data protection as appropriate for their discipline and research activities.

These considerations are not intended to serve as legal advice or as an exhaustive set of best practices, and they should be viewed as a living document. This resource is not a comprehensive manual on how to conduct research using AI, nor does it provide technical instruction on model development, machine learning methodologies, or other specialized AI practices. Instead, it offers ethical and procedural considerations to support responsible use of AI tools within research workflows. Researchers engaged in AI development should consult discipline-specific best practices, Virginia Tech’s Responsible and Ethical AI Framework, and relevant professional society guidelines for technical or methodological considerations beyond the scope of this document.

Given the rapidly changing landscape of AI, these considerations will be reviewed and updated on a routine basis. Feedback from the research community is encouraged and will be reviewed as part of the ongoing update process. Questions and recommendations should be submitted to the Privacy and Research Data Protection Program in the Office of Research and Innovation (prdp@vt.edu).

Conceptualization

Purpose and Context

AI can help researchers generate ideas, refine research questions, identify novel directions, and explore feasibility. Used responsibly, it can broaden creativity while keeping human expertise at the center of the research process. Ethical use at this stage means allowing AI to support, not drive, the intellectual foundation of your work.

Virginia Tech Responsible and Ethical AI Principle1 | Application | Tips and Considerations
Mission Alignment Use AI to explore ideas that advance Virginia Tech’s research and service missions
  • Ask if AI-generated ideas reflect Ut Prosim and the goal of improving the human condition.
  • Pursue AI-inspired ideas for scholarly or societal merit, not solely for novelty or convenience.
Innovation for Good Balance innovation with ethical reflection and feasibility.
  • Use AI to explore existing research and refine early ideas.
  • Transparently document how AI contributed to idea generation.
  • Reflect on potential harms or unintended consequences that could affect individuals, groups, or systems beyond the research team, including societal, environmental, or ethical impacts, before pursuing AI-generated ideas.
Human-Centered Benefit Maintain human creativity and disciplinary insight as the core of idea generation.
  • Treat AI suggestions as inputs for brainstorming, not as research decisions.
  • Avoid relying on AI to define your research problem.
Responsible and Ethical Use Consider who or what could be affected by an AI-inspired idea.
  • Ask early questions about possible ethical or societal impacts before deciding to pursue the idea.
  • Use caution when advancing AI-generated directions that could pose ethical or dual-use risks (research that has legitimate scientific value but could be misapplied to cause harm or create security vulnerabilities).
Fairness and Transparency Acknowledge AI’s role in idea creation to promote integrity and reproducibility.
  • Keep notes on when AI contributed to conceptualization.
  • Follow University Libraries’ Guidelines for Citing AI.
  • Do not present AI-generated hypotheses as your own intellectual output.
Human Judgment and Accountability Researchers remain responsible for determining which ideas are valid and feasible.
  • Archive AI prompts or outlines in research notebooks.
  • Make final decisions about study focus and goals as a research team.
  • Independently verify the accuracy of any AI-generated facts, claims, or citations. Treat all AI outputs as provisional until confirmed, as tools can introduce errors, omissions, or fabricated references.
Data Security and Privacy Protect unpublished ideas and early concepts shared with AI systems.
  • Use only Virginia Tech-approved AI tools.
  • Do not enter confidential or proprietary information (including grant concepts, unpublished data, inventions, or other information that cannot be shared publicly) into tools that are not Virginia Tech-approved.

AI Tools and Access at Virginia Tech

1 Principles in this table are drawn from Virginia Tech’s Responsible and Ethical AI Framework (v1.0, 2025). This guidance operationalizes those principles for research contexts across the stages of the research lifecycle.

Questions to Consider

  • Does this AI-assisted idea align with my discipline’s standards and Virginia Tech’s mission? 
  • Have I documented AI’s role?
  • Am I using AI to inspire creativity or to outsource thinking? 
  • Can I explain results generated in part or whole using AI tools? Did I check their validity? Can I predict the implications of their use?

Real-World Tips 

Do not enter or upload proprietary, confidential, or fundable concepts as prompts into AI tools that are not Virginia Tech-approved. Once entered, that information may be stored, shared with external servers, or used to train future models, potentially compromising intellectual property protection and your research ideas.

Use your research notebook (paper or secure digital system) to capture how AI informed your thinking, rather than recording it in the AI platform itself.  Keeping this record in your notebook not only ensures transparency and reproducibility but also helps establish evidence of your original contribution and supports your claim to intellectual property. 

Literature Review

Purpose and Context

AI can assist researchers in conducting literature searches, summarizing findings, identifying trends or notable gaps, and organizing references. If used responsibly, AI can expand the scope and efficiency of your review. Individual researchers are responsible for maintaining accuracy, scholarly judgment, and integrity at all times.

Note: Formal evidence synthesis research methods (systematic reviews, meta-analyses, and similar) may be assisted by AI tools in specific, auditable ways. However, as these are considered methods for conducting original research, the use of AI should be fully guided by human researchers in a manner that does not compromise rigor or increase risk of bias. Learn more about these methods and contact information for partnership or support services via the Libraries’ Systematic Reviews and Meta-Analyses Guide.

The considerations below do not apply to formal evidence synthesis methods. 

Virginia Tech Responsible and Ethical AI Principle | Application | Tips and Considerations
Mission Alignment Use AI to deepen understanding of the scholarly landscape, broaden discovery, and serve the scholarly and global community.
  • Combine AI-assisted searches with traditional internet, digital library, and database methods to ensure completeness.
  • Avoid relying solely on AI summaries or rankings/ratings to determine relevance.
  • If you are conducting a systematic review or other formal evidence synthesis research, consult guidance or get assistance in planning use of AI within your process.
Innovation for Good Leverage AI tools to explore interdisciplinary connections or emerging areas of research.
  • Use AI to identify underexplored topics or cross-disciplinary trends.
  • Verify that AI-suggested sources and topics align with your research goals and ethical standards.
Human-Centered Benefit Maintain human expertise as the foundation for synthesis and interpretation.
  • Use AI to organize valid references, extract information, prepare draft summaries, and identify themes, not to replace critical reading and evaluation.
  • Always read and assess primary sources yourself.
Responsible and Ethical Use Respect copyright, licensing, and data-use agreements when using AI tools for literature searches.
  • Disclose AI assistance in your methods or acknowledgments section when appropriate.
  • Be aware of potential bias in AI literature discovery tools. For example, many overrepresent English-language content or publications from high-impact journals.
Human Judgment and Accountability Retain full accountability for the accuracy, currency, and completeness of your literature review.
  • Verify all AI-generated references and claims against original sources as AI tools can introduce errors, omissions, or fabricated references.
  • Educate yourself regarding the date range and sources used by the AI tool to understand its limitations and biases.
  • Do not cite unverified or AI-generated materials as primary evidence.
  • Be aware that AI tools might have limited access to recent or specialized publications. AI tools can overemphasize highly cited or mainstream publications, reinforcing existing perspectives and missing emerging or diverse scholarship.
  • Be aware that a valid link or real citation does not guarantee that the article supports the point the AI suggests. You must read the source and confirm that its findings, context, and conclusions align with your interpretation.
Data Security and Privacy Protect reference lists, annotations, and draft materials in accordance with university security standards.
  • Use only Virginia Tech–approved AI systems for managing literature and notes.
  • Avoid using personal or public accounts for collaborative writing or AI-assisted editing.
AI Tools and Access at Virginia Tech

Questions to Consider 

  • Does my use of AI align with the standards and expectations of relevant stakeholders (e.g., publisher guidelines, conduct and reporting guidelines, funder expectations)?
  • Have I verified all AI-generated summaries and citations against original sources?
  • Am I transparently documenting how AI contributed to my search or synthesis process?
  • Does my review include diverse perspectives and not rely solely on AI-curated materials?

Real-World Tips

Create a simple tracking system to document which AI-suggested references you have verified. 
For example, add a column in your reference manager or spreadsheet labeled “Verified” to mark each source after you:

  1. Locate the actual publication link or DOI, and
  2. Confirm that the AI summary accurately reflects the article’s content.
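The two-step check above can be sketched as a small script. This is an illustrative example only; the field names ("doi", "link_found", "summary_ok") and the example entries are our own choices, not a prescribed format.

```python
import csv
import io

# Each AI-suggested reference starts unverified; the example titles and
# DOIs below are placeholders, not real publications.
references = [
    {"title": "Example Study A", "doi": "10.1000/example1", "link_found": False, "summary_ok": False},
    {"title": "Example Study B", "doi": "10.1000/example2", "link_found": False, "summary_ok": False},
]

def is_verified(ref):
    # A reference counts as verified only when both checks pass:
    # (1) the publication link or DOI was located, and
    # (2) the AI summary was confirmed against the article's actual content.
    return ref["link_found"] and ref["summary_ok"]

# After locating the DOI and reading Study A ourselves:
references[0]["link_found"] = True
references[0]["summary_ok"] = True

# Export with a "Verified" column, mirroring the spreadsheet approach.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Title", "DOI", "Verified"])
for ref in references:
    writer.writerow([ref["title"], ref["doi"], "yes" if is_verified(ref) else "no"])
csv_text = buffer.getvalue()
```

A reference manager's custom-field or tag feature can serve the same purpose; the point is that "verified" is only set after both steps are complete.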

Keep notes and document if an AI-generated summary is misleading or incomplete. This helps you refine prompts and demonstrate due diligence in your literature review.

Methodology

Purpose and Context

AI tools are increasingly used to assist in experiment design, modeling, selecting analytical methods, refining research questions, and generating or reviewing code for data analysis. This section focuses primarily on the ethical use of generative AI tools—such as large language models and code generators—in research design and analysis planning, while recognizing that similar principles apply to other AI systems used for modeling, prediction, or analysis. Used responsibly, AI can improve rigor, reproducibility, and efficiency by helping researchers explore alternative methods, identify potential limitations, or document analytical decisions more clearly. Research methods are also where AI use has a particularly strong influence on research integrity.  Overreliance on opaque, unverified, or poorly understood AI-assisted methods can undermine the validity and reproducibility of a study. 

Virginia Tech Responsible and Ethical AI Principle | Application | Tips and Considerations
Mission Alignment Use AI tools to strengthen methodological rigor, reproducibility, and alignment with the research mission.
  • Use AI to streamline simulations or optimize workflows that support scientific advancement.
  • Apply AI to improve research design, transparency, and collaboration in ways that advance knowledge, serve the public good, and uphold Ut Prosim.
  • Do not use methods that prioritize efficiency over accuracy or research quality.
Innovation for Good Use AI to explore novel methodological approaches that expand knowledge responsibly.
  • Experiment with AI-driven modeling or simulation tools that enhance discovery.
  • Reflect on how new AI-assisted methods could create inequities, environmental burdens, or unintended ethical consequences.
Human-Centered Benefit Use AI in ways that enhance human creativity, insight, and problem-solving.
  • Evaluate whether AI assistance improves or hinders understanding and accessibility.
  • Use AI to explore new analytical directions or generate design ideas that expand your team’s capability.
  • Avoid using AI in ways that deskill researchers, obscure human contribution, or prevent students and trainees from developing the methodological expertise essential to their disciplines. Responsible AI use should enhance, not replace, mentorship and skill-building.
Responsible and Ethical Use Evaluate the potential risks, ethical implications, and regulatory requirements of AI-influenced designs and methods.
  • Consider safety, consent, and compliance implications early, especially for studies involving human data or interventions.
  • Carefully consider and document potential dual-use applications (ideas with both beneficial and harmful potential). If dual-use applications exist, use extreme caution when using (especially uploading) any research data. If research is funded, consult with your project/program officer or funding entity for guidance on AI use.
Fairness and Transparency Clearly disclose the role of AI in shaping your research design or analysis plan and evaluate whether underlying training data or algorithms could introduce bias.
  • Include a short statement in your methods section noting where AI contributed to design or parameter selection.
  • Review available documentation from the AI tool to understand its training data sources, known limitations, and potential bias.
  • Assess whether the AI’s outputs might systematically favor or exclude certain populations, variables, or perspectives relevant to your field.
  • When developing or sharing data that could contribute to future AI training, document the dataset’s origins, composition, and intended use. Clearly describe potential limitations or biases (for example, demographic underrepresentation) and include metadata that guides responsible reuse. This transparency supports fairness, reproducibility, and ethical downstream AI development.
  • Do not present AI-generated code or models as fully original.
  • Do not rely solely on AI-generated methods or models without considering and reporting transparently how bias in the training data could affect validity, generalizability, or equity in your results.
Human Judgment and Accountability Retain full responsibility for the validity, accuracy, and reproducibility of AI-assisted methods.
  • Independently verify AI-generated results whenever possible, and ensure that analytical steps are transparent and well documented. If human verification is not feasible, ensure transparency by documenting the limitations and describe how you validated the results.
  • Review AI-generated analyses with disciplinary experts before adoption.
  • Document model parameters and data inputs so results can be reviewed or replicated.
  • Eligibility criteria for research participation must be applied in accordance with the IRB (Institutional Review Board)-approved study protocol. Automated screening tools (such as rule-based surveys) may be used when approved by the IRB. However, AI systems should not independently determine or infer participant eligibility, as a qualified researcher must retain oversight of inclusion and exclusion criteria.
Data Security and Privacy Protect all data, algorithms, and design files used with AI systems under Virginia Tech’s data-handling standards.
  • Use AI platforms that have been approved by Virginia Tech to collect, process, and store data at the appropriate risk level. For example, the Virginia Tech Advanced Research Computing (ARC) environment includes multiple clusters, each approved for specific data types (e.g., Controlled Unclassified Information (CUI), Moderate Risk data, and one for High-Risk identifiable data from humans).
  • Never input regulated, confidential, or export-controlled data into tools that have not been approved to process them.
AI Tools and Access at Virginia Tech

Questions to Consider

  • Have I verified that all AI-assisted methods are reproducible, explainable, and aligned with disciplinary standards? If human verification is not feasible, have I documented the limitations and explained how the results were validated?
  • Have I transparently documented AI’s role in my methods section for others to understand and replicate?
  • Am I balancing innovation with ethical responsibility and compliance?
  • When collecting or selecting a dataset for training, am I considering whether its composition might inadvertently exclude certain populations or introduce bias?
  • Am I complying with Virginia Tech’s Risk Classification Standard, disciplinary standards, and research sponsor and other contractual data security requirements?

Real-World Tips

  1. When using AI to generate code, analytic approaches, or research designs, create a method-verification log. Each time you implement or modify an AI-suggested step, record:
    • The AI tool used and date
    • The prompt or query (if applicable) you used to generate the AI response
    • Whether the output was verified or edited by a human
    • Who conducted the verification, and how
    This log not only strengthens reproducibility but also provides documentation of human oversight, which is essential for research integrity, intellectual contribution, accountability, and compliance reviews.
  2. When developing or sharing datasets that could be used in future AI training, include a simple README or dataset documentation file that describes key elements such as the dataset’s purpose, composition, data sources, collection methods, known limitations, and any potential biases. A README can accompany the dataset in your repository or project folder and should enable another researcher to understand how the data were created and how they should, or should not, be reused. Established frameworks such as Datasheets for Datasets and The Data Nutrition Project can be consulted for additional guidance.

Before incorporating AI into research design, confirm that the tool and its use align with local, state, and federal regulations, sponsor requirements, and Virginia Tech policies.  Review project contracts, data management plans, and data sharing agreements, and if uncertain, contact your sponsor’s program officer, the Office of Sponsored Programs (OSP), or the Privacy and Research Data Protection Program (prdp@vt.edu).

Software Development

Purpose and Context

AI technologies, including generative models, machine learning systems, and other algorithmic tools, can assist researchers by writing code, automating data processing, optimizing algorithms, developing predictive models, and building research tools. When used responsibly, they can enhance productivity, reproducibility, accessibility, and discovery. However, automating research processes also introduces ethical, legal, and security risks, including errors that can propagate quickly, hidden bias in AI-generated code or training data, copyright or licensing violations, and compliance gaps when automated tools handle restricted data. Ethical use of AI in software development and automation means maintaining human oversight, verification, and accountability for all code, models, and processes, ensuring they comply with university policies, data classifications, and sponsor requirements. 

Virginia Tech Responsible and Ethical AI Principle | Application | Tips and Considerations
Mission Alignment Use AI tools to assist in building or improving research software, scripts, or automated workflows in ways that advance Virginia Tech’s missions of discovery, learning, and service.
  • Apply AI responsibly to improve efficiency, transparency, and reproducibility in research computing.
  • Use AI-driven automation to expand accessibility or reduce human error in complex workflows.
  • Avoid automation that prioritizes convenience over data integrity or research quality.
  • Ensure AI-generated code, models, or outputs align with Virginia Tech’s Principles of Community by avoiding bias, exclusionary logic, or disrespectful content that could harm individuals or groups.
Innovation for Good Employ AI to innovate responsibly in code generation, automation, and prediction, balancing creativity with ethical foresight.
  • Use AI to prototype tools that enhance human capability or scholarly insight.
  • Use existing machine learning tools responsibly to support automation or analysis, and regularly assess tool performance, potential bias, and interpretability when applying them to your research.
  • Consider whether automated systems could unintentionally replace critical human judgment, amplify bias, or create inequities in access, decision-making, or impacts.
Human-Centered Benefit Ensure AI automation supports learning and mentorship.
  • Use AI-generated code or machine learning-driven processes as learning and mentoring tools for students and staff developers.
  • Encourage review and discussion of AI-generated code as part of team learning and peer mentoring.
  • When creating software or automation that relies on AI tools, ensure the resulting outputs and workflows remain understandable, maintainable, and usable by humans. Design these workflows to expand human capability rather than creating systems only the AI or a few experts can operate.
  • Avoid using AI in ways that prevent students and trainees from developing essential programming or analytical skills.
Responsible and Ethical Use Confirm that AI-generated software complies with licenses and security standards.
  • Use AI tools approved by Virginia Tech that align with the data classification level.
  • Attribute AI-assisted code generation transparently in documentation and publications.
  • Do not deploy unverified AI-generated code in systems, especially with restricted data.
Fairness and Transparency Maintain transparency in code authorship and model development, and disclose AI assistance in automation or algorithm design.
  • Document when and how AI designed, generated, optimized, or tested code.
  • When using AI tools for code generation, algorithm design, or workflow automation, review available documentation (e.g., known limitations, design assumptions, and potential bias) to understand how the tool may influence the logic or behavior of resulting software.
  • Review documentation provided by AI tools to understand how their design or training might shape automated decisions or outputs.
  • If you are developing or sharing data that might be used to train AI systems, document the dataset’s scope, sources, and known limitations. Include metadata and provenance that describe potential bias and appropriate contexts for reuse. Established frameworks such as Datasheets for Datasets and The Data Nutrition Project can be consulted for additional guidance.
Human Judgment and Accountability Humans remain responsible and accountable for all AI-generated code and models.
  • Review, test, and validate AI-generated code, machine learning outputs, and automated processes before implementation.
  • Maintain version control and clear records of code and model review and approval.
  • When using machine learning algorithms in automated workflows, maintain human oversight and verify results before integration into research conclusions.
  • Do not attribute authorship to AI or rely on unverified AI-generated results or machine learning-driven automation for critical research functions. When verification is not possible, researchers should acknowledge this constraint and apply appropriate safeguards, such as transparency about uncertainty, expert judgment, and alternative validation approaches, to maintain research integrity.
Data Security and Privacy Securely manage all code, data, and systems involved in automation, including model training and storage.
  • Use only Virginia Tech–approved AI and computing platforms for coding or automation that handle sensitive data.
  • Follow Virginia Tech Risk Classification Standard when processing data with automated scripts, trained models, or AI-generated workflows.
  • Never input or upload proprietary, confidential, or export-controlled data into publicly accessible code-generation tools.
AI Tools and Access at Virginia Tech

Questions to Consider

  • Have I verified that all AI- or machine learning-generated code, models, or automated processes meet cybersecurity and data protection requirements? 
  • Have I documented the role of AI or machine learning in code creation, model development, or workflow automation?
  • Does the research team retain appropriate expertise and oversight to evaluate and validate AI-generated outputs, even when the AI performs tasks beyond what a human could manually produce? 
  • Could automated or predictive systems unintentionally introduce bias or inequity into research results?

Real-World Tips 

Before integrating AI- or machine learning-generated code, models, or automated processes into your research, perform a code and compliance check: 

  1. Review the tool’s terms of service and licensing to ensure compatibility with university and sponsor policies. 
  2. Test AI- or machine learning-generated code and models in a controlled environment before deployment. 
  3. Confirm that the platform and data storage align with the risk level of your project and have been approved by Virginia Tech. 
  4. Document human verification steps and approvals in your version control or lab notebook. 

This practice protects research integrity, data security, and intellectual property while reinforcing accountability across collaborative teams.

Data Management

Purpose and Context

AI tools can assist researchers with nearly every stage of data management, from collection and cleaning to curation, analysis, and visualization. Used responsibly, AI can help improve reproducibility, enhance efficiency, identify patterns, and ensure quality control. However, using AI for data management also carries ethical and compliance responsibilities. Researchers must confirm that any AI tool or automated workflow complies with Virginia Tech data classification standards, privacy protections, and sponsor or regulatory requirements. Additionally, human oversight remains essential for verifying accuracy, ensuring fairness, and interpreting AI-generated insights responsibly. Ethical use of AI in data management and analysis means using these tools to support, not replace, human expertise and ethical judgment, maintaining transparency, reproducibility, and trust in research outcomes.

Virginia Tech Responsible and Ethical AI Principle | Application | Tips and Considerations
Mission Alignment Use AI to manage and analyze data in ways that advance Virginia Tech’s missions of discovery, learning, and service to the public good.
  • Apply AI to improve data accuracy, transparency, and reproducibility.
  • Use AI to strengthen research that addresses community needs and societal challenges.
  • Do not use data practices that prioritize convenience or automation over integrity, privacy, or inclusion.
Innovation for Good Apply AI to explore new approaches for data collection, processing, or visualization while balancing innovation with ethical awareness.
  • Use AI in ways that make data management efficient and reproducible while ensuring that resources and datasets are used fairly and responsibly.
  • Consider how automation or model design might generate false conclusions, amplify bias, or exclude underrepresented populations.
Human-Centered Benefit Use AI-assisted data management to enhance human understanding and engagement rather than automating interpretation or decision-making.
  • Use AI tools to support analytical thinking, not replace it.
  • Encourage students and research staff to learn data literacy skills, even when AI automates parts of the process.
  • Avoid overreliance on AI outputs without human review of assumptions, patterns, and limitations.
Responsible and Ethical Use Handle all data, including AI-assisted processing, in compliance with regulatory, sponsor, and institutional requirements.
  • Confirm that data processing tools and storage systems are approved for your data’s classification level.
  • Verify IRB, FERPA, and export control compliance before sharing or analyzing data with AI systems.
  • Document all data transformations performed by AI to ensure reproducibility and accountability.
  • Avoid uploading sensitive, identifiable, or proprietary data to unapproved AI platforms.
Fairness and Transparency Evaluate and disclose the role of AI in data analysis, addressing potential bias in datasets, training data, or algorithms.
  • Examine whether AI tools or models might overrepresent dominant data sources or underrepresent marginalized groups.
  • Document data provenance, preprocessing methods, and known limitations of AI analysis.
  • If creating or sharing datasets that might be used in AI or machine-learning applications, include metadata describing potential bias, representational gaps, and appropriate reuse conditions. Tools such as the Data Nutrition Project Label Maker can support this process.
  • Be transparent in publications or reports about where and how AI contributed to analysis or visualization.
Human Judgment and Accountability Researchers remain accountable for verifying AI-generated analyses, visualizations, and conclusions.
  • Independently validate AI-generated results through replication or secondary analysis.
  • Maintain clear documentation of human oversight, quality control, and review processes.
  • Do not rely solely on AI systems for analytical accuracy or discovery; humans must confirm findings and interpretations.
Data Security and Privacy Protect all data used in AI workflows in accordance with Virginia Tech’s data-handling policies and sponsor requirements.
  • Process data using Virginia Tech–approved tools matched to the data’s risk classification level.
  • Use ARC clusters or other university-managed platforms appropriate to the dataset (e.g., CUI, moderate risk, or high-risk data).
  • Do not upload regulated or high-risk data into public or non-Virginia Tech-approved AI tools.
  • Consult the Privacy and Research Data Protection Program (prdp@vt.edu) or the Office of Export and Secure Research Compliance (oesrc@vt.edu) if uncertain about approved systems or data classifications.
AI Tools and Access at Virginia Tech | Artificial Intelligence | Virginia Tech

Questions to Consider

  • Have I confirmed that the AI tools used for data collection, analysis, or visualization are approved for the data’s risk level and comply with privacy or sponsor requirements?
  • Have I validated AI-assisted data analyses or visualizations through independent verification?
  • Does my use of AI in data management enhance, rather than replace, human interpretation and skill development?
  • Have I documented how AI influenced data cleaning, transformation, or analysis for transparency and reproducibility?

Real-World Tips

Before processing or analyzing data with AI tools, perform a data authorization and verification check: 

  1. Confirm the data classification level using Virginia Tech’s Data Risk Classification Standard. 
  2. Verify that your AI platform or computing environment is approved by Virginia Tech for that classification. 
  3. Ensure that human review is built into every stage of data analysis, especially when AI identifies patterns or outliers. 
  4. Document the AI system used, version, and date of analysis in your research records to support transparency and reproducibility. 

These steps protect data privacy, ensure compliance, and reinforce Virginia Tech’s commitment to responsible innovation and research integrity.
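Step 4 of the checklist above can be kept as a lightweight, machine-readable provenance log alongside the dataset. The sketch below is one illustrative approach, not a Virginia Tech requirement; the field names and the `provenance_log.json` filename are hypothetical.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def record_ai_step(log_path, tool, version, task, data_classification):
    """Append one AI-assisted processing step to a JSON provenance log."""
    entry = {
        "date": datetime.now(timezone.utc).date().isoformat(),
        "tool": tool,                                 # AI system used
        "version": version,                           # model/tool version
        "task": task,                                 # what the tool did
        "data_classification": data_classification,   # risk level per VT standard
        "human_reviewed": False,                      # set True after human review
    }
    path = Path(log_path)
    log = json.loads(path.read_text()) if path.exists() else []
    log.append(entry)
    path.write_text(json.dumps(log, indent=2))
    return entry

# Example: document an AI-assisted outlier check on low-risk data
entry = record_ai_step(
    "provenance_log.json",
    tool="ExampleAI",
    version="2026-01",
    task="outlier detection on survey responses",
    data_classification="low risk",
)
```

A dated, append-only record of this kind makes it straightforward to answer the reproducibility questions above and to show that human review followed each AI-assisted step.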

Purpose and Context

AI tools can assist researchers in planning, drafting, editing, summarizing, translating, and formatting research materials. Used thoughtfully, AI can improve clarity, accessibility, and productivity in scholarly communication. However, using AI during the writing process raises ethical considerations involving authorship, accountability, attribution, confidentiality, tone, and integrity. Ethical use of AI in writing and editing means using these tools to support—rather than replace—human scholarship, creativity, and responsibility. Researchers must ensure that AI contributions are transparent, verifiable, and compliant with Virginia Tech’s academic and research integrity policies, including the Virginia Tech Policy on Misconduct in Research (13020), the Virginia Tech Undergraduate Honor Code, and the Graduate Honor System at Virginia Tech. This section covers responsible AI use in writing tasks such as text generation, proofreading and editing, summarizing, formulating conclusions, adjusting tone, translation, reformatting, and preparing press releases or outreach materials. 

Virginia Tech Responsible and Ethical AI Principle Application Tips and Considerations
Mission Alignment Use AI tools to enhance research communication in ways that advance Virginia Tech’s missions of discovery, learning, and service.
  • Use AI to improve clarity and accessibility in scholarly communication, ensuring your work remains accurate and contributes meaningfully to your field.
  • Review AI-generated outreach materials or summaries to ensure tone and inclusivity align with Virginia Tech’s Principles of Community.
  • Collaborate with University Relations when using AI to draft press releases or public communications.
  • Do not use AI to create or publish content that misrepresents research results or fails to meet university communication standards.
Innovation for Good Use AI to responsibly explore new ways to create, summarize, and share research findings.
  • Use AI to improve grammar, readability, or translation while retaining human authorship and intellectual control.
  • Apply AI to adapt tone and format for audiences such as community partners or policymakers.
  • Do not rely on AI for factual accuracy or citation generation; verify all results manually.
  • Avoid allowing automation to oversimplify complex or nuanced findings.
Human-Centered Benefit Use AI to enhance human creativity and communication, not replace scholarly expertise or skill development.
  • Retain full authorship of ideas, conclusions, and arguments; AI can assist with phrasing but should not entirely replace human reasoning.
  • Encourage students and trainees to build writing and communication skills rather than outsourcing writing to AI.
  • Use AI to improve accessibility, such as simplifying technical text for outreach audiences.
  • Review and edit AI-generated text to ensure it reflects your reasoning, disciplinary vocabulary, and writing style. Remove or rewrite any sections that sound generic or disconnected from your research interpretation.
Responsible and Ethical Use Ensure AI use in writing and editing complies with academic integrity, sponsor requirements, and data security policies.
  • Acknowledge AI assistance following the guidance in Using (and Citing) AI: Tips & Tools from Your Librarians.
  • Verify that AI use complies with sponsor, contractual, and institutional policies.
  • Never upload confidential, proprietary, or export-controlled content into tools that have not been approved by Virginia Tech to process restricted information.
Fairness and Transparency Disclose how AI contributed to writing, editing, translation, or summarization to maintain transparency and trust.
  • Follow the citation and acknowledgment examples provided by Virginia Tech Libraries for citing generative AI in academic writing.
  • Consider using the GAIDeT Declaration Generator Tool to generate a clear disclosure of AI use across the research lifecycle.
  • Include an acknowledgment in publications explaining AI’s role in text generation, summarization, or translation.
  • Do not minimize AI’s contribution; being transparent strengthens credibility and ethical integrity.
Human Judgment and Accountability Researchers remain accountable for all written work, regardless of AI assistance.
  • Review, fact-check, and edit all AI-generated text for accuracy, logic, and tone.
  • Ensure conclusions and interpretations are written by humans, supported by evidence, and grounded in human reasoning.
  • Confirm translations and summaries are accurate and retain the intended meaning and context.
  • Do not rely on AI-generated text for analytical reasoning, conclusions, or literature interpretation without checking and editing.
Data Security and Privacy Protect unpublished manuscripts, research data, and correspondence when using AI systems.
  • Use only Virginia Tech–approved AI tools and secure platforms for writing and editing sensitive materials.
  • Do not enter or upload proprietary, confidential, or fundable concepts as prompts in AI tools not approved by Virginia Tech. Once entered, that information may be stored on external servers, shared, or used to train future models, potentially compromising intellectual property protection and your research ideas.
  • Ensure translations and reformatting are performed in compliant environments when handling restricted or identifiable information.
  • Consult the Privacy and Research Data Protection Program (prdp@vt.edu) if uncertain about approved AI use for your research materials.

Questions to Consider

  • Have I clearly disclosed where and how AI contributed to my writing, summarization, or translation?
  • Have I verified every AI-assisted summary, reference, and translation for factual accuracy and context?
  • Does AI use in my writing align with Virginia Tech’s Principles of Community and standards of scholarly authorship?
  • Am I ensuring that AI complements my expertise instead of replacing critical writing and reasoning skills?
  • Does the AI-generated text reflect how I would explain it to a colleague in my field or to the target audience?
  • Does the AI-generated text sound like me and accurately reflect my work?

Real-World Tips

Before submitting a manuscript, outreach piece, or translation that involved AI assistance: 

  1. Document how AI was used (e.g., “ChatGPT was used to suggest grammar edits for the abstract”). 
  2. Disclose AI involvement using the format recommended by Virginia Tech Libraries’ AI Citation Guidance and ensure the disclosure also meets the specific requirements of the journal, publisher, or sponsor. 
  3. Verify every statement, citation, and translation for accuracy and meaning. 
  4. Retain ownership of scholarly authorship; AI can assist with structure or style, but you are responsible for the ideas and conclusions. 
  5. Protect both confidentiality and data integrity.  Do not use public AI systems for unpublished, restricted, or proprietary research materials. Ensure that datasets and findings remain secure until they are cleared for public release or publication.   

These steps preserve authorship integrity, comply with institutional and sponsor requirements, and model responsible AI use for students and collaborators.
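The documentation from steps 1 and 2 above can be assembled into a single acknowledgment string. The helper below is a hypothetical sketch; the sentence template and function name are illustrative only, and the final wording must always follow the target journal’s and Virginia Tech Libraries’ disclosure guidance.

```python
def format_ai_disclosure(uses):
    """Combine documented AI uses into one acknowledgment sentence.

    `uses` is a list of (tool, version, task) tuples recorded while
    writing. The template is illustrative, not a required journal
    format; check the venue's own disclosure rules before submitting.
    """
    clauses = [f"{tool} ({version}) was used to {task}" for tool, version, task in uses]
    return ("AI disclosure: " + "; ".join(clauses)
            + ". All outputs were reviewed and edited by the authors.")

# Example: two documented uses combined into one statement
print(format_ai_disclosure([
    ("ChatGPT", "GPT-5", "suggest grammar edits for the abstract"),
    ("DeepL", "2026", "check the translation of the plain-language summary"),
]))
```

Keeping the underlying (tool, version, task) records with the manuscript also makes it easy to regenerate the disclosure in whatever format a particular journal, publisher, or sponsor requires.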

Purpose and Context

AI can assist researchers in examining and documenting ethical considerations across the research lifecycle. When used responsibly, AI tools might help identify potential bias, organize risk summaries, assist with drafting compliance materials, and support data confidentiality monitoring. However, ethical evaluation and oversight, including determinations about whether research meets regulatory thresholds or requires IRB, IACUC, or IBC review, must remain human-led. 

These activities can strengthen ethical awareness and transparency when used appropriately, but researchers must maintain oversight, verify accuracy, and ensure that all ethical or regulatory decisions are made by the appropriate boards.

Virginia Tech Responsible and Ethical AI Principle Application Tips and Considerations
Mission Alignment Use AI in ways that strengthen ethical reflection and uphold Virginia Tech’s commitment to integrity, respect, and responsible discovery.
  • Use AI to help identify areas of ethical risk, bias, or confidentiality concern that require further human review.
  • Ensure AI-assisted ethical analysis aligns with the Principles of Community and policies on research integrity.
  • When preparing ethics or compliance materials, consult any discipline-specific AI standards issued by your professional society to ensure your practices align with field expectations.
  • Avoid using AI outputs as justification for ethical or regulatory decisions that have not been reviewed or approved by a qualified committee.
Innovation for Good Apply AI responsibly to enhance ethical awareness, bias detection, and compliance documentation.
  • Use AI to generate summaries of risk considerations, organize supporting documents, or help identify potential ethical concerns.
  • Treat AI outputs as organizational tools; they can flag areas for attention but cannot evaluate ethical sufficiency.
  • Avoid using AI to determine risk level or review classification. Only reviewers or committees trained in the applicable regulatory requirements can make those determinations.
Human-Centered Benefit Keep human ethical reasoning and accountability central when using AI for review or compliance support.
  • Use AI to assist with tasks such as bias detection or flagging sensitive data, while retaining human interpretation and approval.
  • Ensure all human subjects, animal, or biosafety protections are confirmed by qualified personnel.
  • Use AI for preparation and awareness, not compliance determinations.
  • Do not rely on AI systems to determine whether research meets regulatory requirements or qualifies for exemption.
  • AI cannot decide whether a study requires review or approval by the IRB, IACUC, or IBC. Researchers must consult the appropriate compliance office.
Responsible and Ethical Use Ensure all AI use in ethics-related activities complies with institutional, regulatory, and sponsor requirements.
  • Confirm that any AI-assisted review or documentation tools are approved for use with your project’s data classification level.
  • Verify the accuracy and completeness of all AI-generated content before including it in ethics or compliance submissions.
  • Do not input confidential protocols, reviewer comments, or proprietary materials into unapproved AI systems.
  • Record how AI was used for transparency and reproducibility in ethics documentation.
Human Judgment and Accountability Ensure that ethical determinations, regulatory classifications, and compliance decisions remain the responsibility of qualified human researchers and institutional review bodies.
  • Treat AI-assisted outputs (e.g., risk summaries, bias flags, draft compliance text) as advisory and subject to human review.
  • Ensure that all ethical determinations are reviewed and confirmed by appropriate institutional authorities.
  • Maintain documentation of human decisions and approvals to support accountability and auditability.
Fairness and Transparency Use AI to support fairness and disclosure in ethical analysis and documentation.
  • Use AI tools to identify potential bias or discrimination risks in study design, recruitment, or data interpretation.
  • Disclose any AI assistance in ethics documentation or risk assessments submitted for review.
  • Review all AI results manually; bias detection tools can misinterpret or exaggerate patterns if not validated by researchers.
Data Security and Privacy Protect all data and ethical review materials and consider how AI might assist with identifying or preventing confidentiality risks.
  • Use Virginia Tech-approved AI tools to assist with data confidentiality monitoring, such as detecting identifiable or sensitive information in datasets, text, or figures before public release.
  • Periodically confirm that AI tools and storage systems meet Virginia Tech’s data-handling and confidentiality requirements.
  • Document any AI-assisted confidentiality checks (e.g., dataset scans or content reviews) to support ethical transparency.
  • Never rely solely on AI to ensure data protection—researchers retain responsibility for verifying the content of datasets.
  • Do not upload confidential, restricted, or unpublished data into public AI systems.

Questions to Consider

  • Have I used AI only to support, not replace, my ethical reasoning and oversight responsibilities?
  • Did AI help me identify potential ethical issues or areas of bias that I then reviewed manually?
  • Have I verified that AI-assisted tools did not process confidential or identifiable data in unapproved environments?
  • Have I referred decisions about whether my research requires IRB, IACUC, or IBC review to the appropriate compliance office or review board?   

Real-World Tips

Do not use Copilot, ChatGPT, or other AI tools to decide which ethical reviews are required. Follow Virginia Tech’s compliance requirements and seek guidance from the appropriate review boards.

As of January 2026, federal research compliance regulations and requirements do not permit AI to determine whether research plans meet regulatory requirements.

Purpose and Context

AI tools can assist researchers in preparing materials for publication and communicating results to different audiences. When used responsibly, AI can help draft or format cover letters, generate plain-language summaries, check journal compliance requirements, or organize responses to reviewers. However, researchers must maintain authorship integrity, ensure factual and ethical accuracy, and never delegate human reasoning, interpretation, or scholarly responsibility to AI systems. 
 
AI use in publication support should always comply with Virginia Tech’s research integrity standards, publisher and sponsor disclosure requirements, and confidentiality obligations when handling peer reviews, revisions, or embargoed materials. Researchers are responsible for verifying all content and disclosing AI assistance transparently.

Virginia Tech Responsible and Ethical AI Principle  Application Tips and Considerations
Mission Alignment Use AI to support clear, accurate, and ethical dissemination of research findings that contribute to scholarship and the public good.
  • Use AI for grammar, structure, or formatting assistance in cover letters or submissions while maintaining your own scholarly authorship. 
  • When creating public summaries or outreach materials, review AI-generated text to ensure it accurately reflects the scholarship and aligns with Virginia Tech’s Principles of Community. 
  • Avoid using AI to generate exaggerated claims or language inconsistent with the actual findings.
Innovation for Good Use AI to expand the accessibility and efficiency of the publication process.
  • Use AI to check journal formatting, citation style, or metadata accuracy. 
  • Use AI to create accessible summaries, including plain-language or multilingual abstracts, for broader audiences. 
  • Verify that all AI-generated content is accurate and appropriate before submission or posting. 
  • Ensure AI use complies with publisher and sponsor disclosure rules.
Human-Centered Benefit Keep human authorship, expertise, and critical thinking at the center of all publication-related activities.
  • Write all conclusions, interpretations, and responses to reviewers yourself; AI can help with phrasing or organization but cannot replace human reasoning.
  • Maintain ownership of intellectual contributions and verify all AI-assisted outputs for accuracy. 
  • Do not allow AI to write or reword reviewer responses in ways that alter meaning or tone. 
  • Always disclose your role and AI’s role transparently in acknowledgments or cover letters if required.
Responsible and Ethical Use Ensure that AI use in publication preparation and peer communication complies with ethical, institutional, and publisher standards.
  • Follow journal or conference policies on AI use and disclosure. 
  • Use only Virginia Tech–approved tools to handle documents containing unpublished or confidential material. 
  • Verify that AI tools do not store or reuse text or metadata from manuscripts. 
  • Do not upload drafts, reviews, or editorial correspondence to public or non-Virginia Tech-approved AI systems. 
  • Do not use AI to conduct peer review or to summarize or critique manuscripts under review.
Human Judgment and Accountability Maintain human responsibility for the accuracy, integrity, authorship, and ethical compliance of all publication-related materials.
  • Review and approve all AI-assisted materials (e.g., cover letters, responses to reviewers, summaries) before submission.
  • Retain full accountability for claims, interpretations, tone, and scholarly conclusions.
  • Do not delegate decisions related to authorship, acknowledgments, conflict-of-interest statements, or disclosure requirements to AI tools.
Fairness and Transparency Promote fairness, inclusivity, and transparency in AI-assisted publication communication.
  • Disclose AI assistance in cover letters, acknowledgments, or author contribution statements as required by the journal or publisher. 
  • Use AI to improve accessibility (e.g., translation, formatting) while ensuring human review. 
  • Avoid language or translation tools that introduce bias or misrepresentation. 
  • Be transparent with collaborators and co-authors about how AI was used during manuscript preparation, including translation if applicable.
Data Security and Privacy Protect confidential, proprietary, or unpublished research materials when using AI systems for publication support.
  • Use only secure, Virginia Tech–approved AI platforms for editing or formatting unpublished manuscripts.
  • Avoid entering manuscript text, reviewer comments, or embargoed materials into public or unapproved AI tools. 

Questions to Consider

  • Have I verified that all AI-assisted materials (cover letters, summaries, etc.) accurately represent my work? 
  • Have I followed journal, sponsor, and institutional policies on AI disclosure and data protection?
  • Have I disclosed AI assistance when required, and reviewed all language for tone, accuracy, and fairness? 
  • Have I ensured that confidential or unpublished research materials were not entered into public or non-secure AI systems?
  • Have I retained human authorship, intellectual ownership, and final approval over all published content? 

Real-World Tips

AI can assist with publication tasks but cannot represent your research for you. Use AI to help format documents, draft cover letters, or check journal requirements, but always review and approve the final version yourself. Never upload reviewer comments, confidential correspondence, or unpublished work into public or unapproved AI systems. Follow publisher, sponsor, and Virginia Tech compliance requirements for disclosure and authorship integrity. 

Suchikova Y, Tsybuliak N, Teixeira da Silva JA, Nazarovets S. GAIDeT (Generative AI Delegation Taxonomy): A taxonomy for humans to delegate tasks to generative artificial intelligence in scientific research and publishing. Account Res. 2025 Aug 8:1-27. doi: 10.1080/08989621.2025.2544331

Responsible and Ethical AI Framework for Virginia Tech (v1.0)

Data Risk Classification Standard. VT Division of IT

Using (and Citing) AI: Tips & Tools from Your Librarians

Virginia Tech Policy 13020: Misconduct in Research

Virginia Tech Responsible and Ethical AI Principles

GAIDeT Declaration Generator: An open-source web tool for documenting and disclosing generative AI use throughout the research lifecycle (Suchikova et al., 2025).

Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for Datasets. arXiv.

The Data Nutrition Project:  An open-source initiative offering tools and templates for creating dataset “nutrition labels,” supporting responsible data stewardship, including bias detection, metadata quality, and dataset documentation. Retrieved from https://datanutrition.org on Dec. 17, 2025.

AI Tools and Access at Virginia Tech | Artificial Intelligence | Virginia Tech

Examples of Discipline-Specific Guidance

The following examples illustrate how professional organizations are establishing discipline-specific standards and ethical expectations for AI. Researchers should review the guidance relevant to their field to ensure their use of AI aligns with both Virginia Tech policies and their profession’s standards of practice.

Note: This list is illustrative, not exhaustive. Researchers are responsible for identifying and adhering to the AI-related standards and policies established by their own professional or disciplinary organizations.