Principles of AI use
Generative AI, such as ChatGPT, Bard, and Bing, refers to computer systems capable of creating content (text, images, and other media) in response to natural language inputs. With its growing presence, AI is poised to transform many academic and scientific disciplines. To engage with AI safely and effectively in global health education and practice, students first require proficiency in the subject area, must accept responsibility for verifying the accuracy of AI output, and should follow processes for documenting AI use, when appropriate and as instructed.
The use of AI by students in the IGHS education programs must also adhere to University of California and UCSF policies and procedures, including those on data security. Data classified as P3 or P4 cannot be used with AI systems not approved and monitored by UCSF. This applies to all MS Capstone and PhD Dissertation data that has not been made publicly available at the time of AI use. See the UCSF Data Classification Standard.
Proficiency: Learning is not simply memorization of facts; it is building flexible knowledge structures that can be called upon to solve problems and evaluate possible solutions. AI systems represent one source of possible solutions, but evaluating the value of an AI's proposed solution requires adequate competency in that domain. Furthermore, because current AI systems are based on predictive text (that is, on what has already been written), they are subject to replicating sexist, racist, and colonial theories and biases. Users should be well aware of these limitations and review all output for these underlying issues, or they risk perpetuating them themselves.
Verification: Students must take full responsibility for AI-generated materials as if they had produced them themselves. Facts must be true, and assertions must follow from those facts. Generative AI is well-known to output incorrect, misleading, or entirely fabricated information (‘hallucinations’). This includes creating and citing sources that do not exist to justify statements. This limitation is especially important in health-related education, where knowledge forms the basis for decisions that can impact patient or population health.
Documentation: All ideas that are not originally one's own have a source, and that source generally must be attributed. (See Classification Levels of Allowed Use.) As noted above, generative AI may invent sources. Documentation of AI use is always a best practice and may be required. When documentation is required, students are obligated to follow standard practices for documentation. (See Appendix.) Failure to document AI use when documentation is required will be considered academic misconduct and will be handled according to the program's policies and procedures.
Classification levels of allowed use of AI in assessments and deliverables
The following classification scheme will be used for all course assessments and deliverables required for course completion:
- Assessments: Demonstrations of knowledge or skill, whether proctored or unproctored, such as examinations, quizzes, final presentations, Capstones, and Dissertations.
- Deliverables: Written, oral, or audiovisual work, such as assignments, problem sets, etc.

Instructors may provide a blanket classification for all assessments or deliverables for a learning experience or provide separate classifications for individual assessments or deliverables.
If an assessment or deliverable does not have a classification provided, it is assumed to be classified AI-Disallowed. By submitting an assessment or deliverable for evaluation:
- Students assert that they have respected all specific requirements of the assigned work, in particular requirements for transparency and documentation of process, or have explained any deviations where this was not possible.
- When use of AI is allowed, students assert that the submitted work accurately reflects the facts and that they have verified those facts, especially any that originate from generative AI resources. Unverified facts and/or other issues in work produced with AI will be subject to deductions per standard grading practices.
- When use of AI is allowed and documentation is required, students assert that all sources that go beyond ‘common knowledge’ are suitably documented and verified. Common knowledge is what a knowledgeable reader can assess without requiring confirmation from a separate source.
OVERVIEW OF AI CLASSIFICATIONS
| Classification | Description | Example |
|---|---|---|
| AI-DISALLOWED | Any use is an academic integrity violation | Exam |
| AI-RESTRICTED | Restrictions on the types of AI resources or the aspects of the assessment allowed; documentation required | Journal club assignment |
| AI-DOCUMENTED | No restrictions on AI use, but all use must be documented | Assisting in reducing word count to meet assignment requirements |
| AI-UNREGULATED | No restrictions on use and no documentation required | Email communication |
AI-DISALLOWED
Generative AI tools cannot be used in this assessment or deliverable. In such an assessment or deliverable, students must not use artificial intelligence (AI) to generate any materials or content in relation to the task. Use of AI will be considered an academic integrity violation and will trigger the Policy on Student Misconduct in Academic Studies. Examples of assessments or deliverables that might be classified AI-Disallowed include, but are not limited to:
- Formal exams.
- Assessments and quizzes.
- Any assessment or deliverable involving data classified as P3 or P4 under UCSF policy (outside of Versa).
AI-RESTRICTED
Generative AI tools are restricted for this assessment or deliverable and require documentation. In such an assessment or deliverable, students are restricted in either the types of AI tools that may be used or the aspects of the assignment on which AI may be employed. All use of AI must be appropriately acknowledged. (See AI-Documented.) The nature of the restrictions should be specified by the instructor. Examples of assessments or deliverables that might be classified AI-Restricted include, but are not limited to:
- Journal article assignments where summarization is performed by AI, but assessment of strengths and limitations is generated by the student.
- Production of summaries of topics that provide a basis for further non-AI-assisted inquiry.
- Creating analytical plans for raw data and/or the interpretation of output.
AI-DOCUMENTED
Generative AI tools may be used in any manner for this assessment or deliverable but require documentation. In such an assessment or deliverable, any AI tools may be used on any aspect of the assignment, but all use of AI must be appropriately acknowledged. Examples of assessments or deliverables that might be classified AI-Documented include, but are not limited to:
- Assisting in the editing of prior written work for conciseness, language, and the like when the primary material has been originally written by the learner.
- Troubleshooting analytical code to help in the analysis of a data source. Note: this does not include asking AI to construct analytical plans or interpret statistical output.
- Assignments whose goal is to develop skills in using AI-based services.
AI-UNREGULATED
Generative AI tools are not restricted for this assessment or deliverable and documentation of use is not required. In such an assessment or deliverable, any AI tools may be used to assist in any way, and it is not necessary to document or attest to their use. Note that AI products are increasingly integrated into standard software packages (e.g., Microsoft) to provide grammar and spellchecking, and these capabilities will likely increase. Current versions of such products do not require citation.
Safe and compliant use of AI
This policy is designed to apply to assessments and deliverables used in the pedagogical process, not as part of direct patient care or human subjects research. Most commercially available AI systems are not compliant with HIPAA or FERPA protections, and entry of patient or student information (identified or de-identified) into such systems is a violation of UCSF policies and potentially a crime. Use of AI systems in patient care or research requires direct, positive confirmation from a research mentor or director that such use is allowed and that the system is authorized to work with such information. The use of AI technology for research overseen by UCSF must be documented and approved prior to use.
References
- Sentient Syllabus Project
- UNESCO ChatGPT and artificial intelligence in higher education: quick start guide
- Monash University: Policy and practice guidance around acceptable and responsible use of AI technologies
- UCSF Data Protection and Large Language Models (requires MyAccess login)
Appendix
Standards for acknowledging use of generative AI, when such use is allowed

When assessments or deliverables are classified as AI-Restricted or AI-Documented, documentation as to the manner of AI use (if used) must be provided. Documentation may take the form of primary source citations, summary statements, or both. Instructors should specify which type of documentation is expected for the assessment or deliverable.
Primary source citation
Assignments may require citation of primary sources. While AI content may include references to primary sources, the AI’s output is not reviewed by experts and the source references may not be correct. When primary source citations are specified, all factual statements in the work product that are not common knowledge must have the original source cited per standard citation styles.
Summary statement
For some assignments, inclusion of a summary statement noting the use of generative AI and the manner of its use may be sufficient. Summary statements should note the name and URL of the generative AI system, the use case, the prompt(s) used, and how the output was used or adapted:
I acknowledge the use of [insert AI system(s) and link] to [specific use of generative artificial intelligence]. The prompts used include [list of prompts]. The output from these prompts was used to [explain use].
Example:
I acknowledge the use of [1] ChatGPT to [2] generate materials for background research and self-study in the drafting of this assignment. I entered the following prompts on 4 January 2023:
- [3] Write a 50-word summary about the formation of Monash University. Write it in an academic style. Add references and quotations from Sir John Monash.
[4] The output from the generative artificial intelligence was adapted and modified for the final response.
Policy adapted with permission from UCSF School of Pharmacy: Policy on Use of Artificial Intelligence (AI) in Assessments and Deliverables