Introduction
As we align with the evolving landscape of education, Far Eastern University recognizes the importance of adopting innovative technologies such as Generative Artificial Intelligence into our academic programs. This guideline outlines the responsible and ethical use of AI in education, emphasizing our core values of Fortitude, Excellence, and Uprightness, and our commitment to student-centered learning. Furthermore, in line with our FEULOs, this guideline explains why AI-generated outputs are not acceptable student submissions. For the purposes of this guideline, Generative AI, Large Language Models, and General AI are defined as follows:
A) Generative Artificial Intelligence is a type of AI that generates new content, such as images, text, music, or video, based on learned patterns (OpenAI, 2023). It is unable to produce original work, as it relies on existing patterns to generate new content similar to what it was trained on.
B) Large Language Models (LLMs) are a specialized subclass of Generative AI. They excel at text generation and language tasks such as translation, text completion, and question answering (Anthropic, 2023). However, LLMs do not have the capacity to think.
C) General Artificial Intelligence does not yet exist; it is the aspiration of building an AI that can learn, understand, and reason in ways that replicate human thinking (OpenAI, 2023).
These definitions were drawn from Claude and ChatGPT, which are both LLMs. Following these definitions, it is clear that LLMs do not have the capacity to learn persuasive communication, intellectual curiosity, critical thinking, creative problem solving, professionalism, or responsible digital citizenship. Therefore, a student submission generated only through LLMs or Generative AI defeats the purpose of why and how assessment is done at FEU. Additionally, such an output shall be subject to FEU’s Policies on Academic Integrity.
LLMs and Generative AI are, however, well suited to activities that stimulate thinking: for example, creating ungraded formative assessments, generating discussion points, brainstorming topics and ideas, and devising metacognitive activities (e.g., having students reflect on an AI-generated text, or cross-reference information from an AI-generated text against peer-reviewed journal articles). In this respect, an LLM is treated much like an ordinary search engine such as Google or Bing. Faculty members are therefore encouraged to design class activities that use LLMs and Generative AI as tools to promote persuasive communication, intellectual curiosity, critical thinking, creative problem solving, professionalism, and responsible digital citizenship.
This guideline also rests on the idea that original, critical, and creative thinking is the basis of all academic and scholarly work; the same idea underlies the high value placed on academic integrity. Generative AI may be used as a tool to help generate ideas (similar to brainstorming), which must then be evaluated and processed by the user. It may also be used to assist in articulating points and in obtaining feedback. Faculty members need to be circumspect about their role in ensuring that students build and develop basic skills such as critical thinking and effective communication, which students may fail to do if they are imprudent in their use of AI.
- Faculty Awareness and Empowerment on AI: Faculty members should familiarize themselves with the capabilities and limitations of AI such as those listed in the definitions above. They are encouraged to participate in trainings and workshops that increase their familiarity and competence with AI technologies. This awareness will equip them to effectively guide students in the use of AI.
- Understanding of When AI is Best Used: Faculty are encouraged to reflect on and be thoroughly familiar with the learning outcomes for each of their classes. This may be the best way to determine whether, and to what extent, Generative AI should be used.
- Use of AI in Classroom Activities: Faculty members are encouraged to design activities that incorporate the use of AI to illustrate its usefulness and potential for enhancing learning. These activities should be designed in a way that promotes critical thinking and problem-solving skills. Faculty members are also encouraged to communicate, in person and through their CIBs, how and to what extent AI is to be used.
- Promoting Reflective Learning: Faculty members should encourage students to be reflective about the output generated by AI. This reflective approach will foster a deeper understanding of the knowledge produced by AI and its application in real-world contexts.
- Educating Students on AI Limitations: Faculty members are responsible for making students aware of the limitations of AI, including its tendency to “hallucinate,” that is, to fabricate citations, make incorrect calculations, and provide inaccurate data. Clearly communicating these limitations helps ensure that students understand what AI can and cannot do and do not become overly reliant on it.
- Guarding Against Indiscriminate Use of AI: Faculty members are expected to be vigilant about students’ use of AI. They should encourage students to use AI responsibly and ethically, fostering a culture of integrity and originality in their academic work.
- Authentic Assessment: Faculty members are encouraged to create authentic assessments that reflect what students have learned from reading materials, class discussions, and personal experiences. This will ensure that assessments are not merely AI-generated but are reflective of students’ understanding and learning.
- Knowing Student Abilities: Faculty should strive to understand their students’ skills and abilities. This knowledge will help them assess whether a student’s output might be AI-generated, especially if there is a sudden improvement in the organization and grammar of their submissions.
- Faculty Use of AI in the Creation of Assessments, Modules, Lessons, and Reports: Faculty members are encouraged to use AI to create assessments, learning modules, lessons, and reports. Just as they want their students to use AI prudently (with respect for data reliability, an awareness that the quality of the prompt determines the quality of the output, and an understanding of the need for checking and editing), so should they apply the same prudence in their own pedagogical use of AI.
- Assessment and Evaluation of Student Outputs: Teachers are professionals who embody academic integrity. Just as we expect our students to submit original work, teachers are expected not to rely on AI in assessing and evaluating students’ submissions. Using AI to assess and evaluate student outputs prevents teachers from monitoring students’ progress and determining their skill levels. Furthermore, the feedback that AI generates is often generic and impersonal, when feedback should be unique to each student. Lastly, teacher feedback should allow students to reflect on their learning, something AI cannot provide because it does not share our experiences as human persons.
- Faculty Use of AI in Research and Study: Faculty use of Generative AI must always be done with a firm commitment to academic integrity, the basic principles of which are:
- the gathering, reporting, and use of reliable data;
- the proper acknowledgement of the work of others;
- being truthful about one’s effort at producing new knowledge (research) or demonstrating skill and familiarity with received knowledge.
- Generative AI as an EdTech Tool: Courses in FEU are designed to stimulate thinking at various levels; a testament to this is our use of Bloom’s taxonomy in writing our intended learning outcomes. Each faculty member in FEU is assumed to be equipped with the skills to write SMART learning outcomes and to design activities of an appropriate difficulty level. Generative AI can make thinking and learning appear easy because it generates outputs very quickly; thinking, however, is not supposed to be easy.
- Thinking is effortful, uncertain, and slow. Daniel Willingham suggests that students are motivated by knowledge gaps and demotivated by knowledge chasms. A knowledge gap occurs when the distance between what students already know and what they are supposed to learn is not very wide, while a knowledge chasm occurs when there is no connection between students’ background knowledge and what they are supposed to learn. Students exert effort when they are presented with activities that help them bridge knowledge gaps. These activities require them to think laboriously. While we welcome the use of EdTech tools, we want our students to be the ones to close the gap between what they know and what they are supposed to know; by providing shortcuts, Generative AI and LLMs compromise student learning.
As such, faculty members are encouraged to design activities that will facilitate students’ learning and thinking while encouraging the responsible use of EdTech tools. Far Eastern University recognizes the potential benefits of AI in education, but we also believe in the importance of its ethical and responsible use. We believe that AI should be used as a tool to enhance learning and not replace the intellectual effort and critical thinking that are hallmarks of a quality education. Faculty members play a crucial role in ensuring that this balance is maintained.
Note: This guideline was created with the help of Generative AI (ChatGPT, GPT-4) and with input from sentientsyllabus.org. Seven of the 12 main points (1, 3-8) were generated with the help of Generative AI. Generative AI was not used for points 2 and 9-12, nor for the additional points related to the basis of the guideline; neither was it used in editing these guidelines.