Your Classroom Faculty Guide to AI

What is AI?

Artificial intelligence (AI) is the creation of computer systems that can perform tasks that typically require human intelligence. It involves algorithms and machine learning, enabling machines to think, learn, and solve complex problems. While AI offers benefits, there are ethical concerns about privacy, job displacement, and biases in decision-making.

What is generative AI?

Generative AI refers to a type of artificial intelligence designed to create new and original content, such as images, music, or text. It uses complex algorithms and models to generate these creations based on patterns and examples it has been trained on. Different types of generative AI are used to create different kinds of content: for example, transformer models, like GPT, are used to generate text, whereas diffusion models are used to produce images.

Generative AI can produce realistic and unique outputs that have never been seen before, making it a powerful tool for creative applications. However, it’s important to consider ethical implications and potential misuse of generative AI, as it can also be used to generate false content or manipulate information.

Best Practice use for AI in your Course

When it comes to implementing AI in the classroom, there are several best practices that can guide educators towards successful integration. Firstly, it is crucial to start with a clear pedagogical purpose. Identifying specific learning objectives or challenges that can be addressed through AI technology helps ensure that its implementation is purposeful and meaningful. This may involve leveraging AI tools to personalize instruction, provide real-time feedback, or facilitate collaborative learning experiences. By aligning AI initiatives with educational goals, educators can maximize its potential impact on student learning outcomes.

Secondly, a learner-centered approach is essential. AI should be seen as a tool to support and enhance teaching and learning, rather than replace human interaction. Educators should design AI-enabled activities that encourage student engagement, critical thinking, and creativity. This may involve incorporating AI into project-based learning, simulations, or problem-solving tasks. Creating opportunities for students to interact with AI technologies, understand their underlying principles, and reflect on their implications fosters digital literacy and prepares them for the AI-driven world they will face.

Furthermore, ongoing professional development is crucial for educators to effectively implement AI in the classroom. Faculty members should be provided with the necessary training and support to understand AI technologies, their applications, and their potential impact on teaching and learning. Collaborative spaces, such as professional learning communities or workshops, can serve as platforms for educators to share best practices, exchange ideas, and troubleshoot challenges related to AI implementation. By investing in professional development, institutions can empower educators to harness the full potential of AI in the classroom.

Successful implementation of AI in the classroom requires a clear pedagogical purpose, a learner-centered approach, and ongoing professional development for educators. By adhering to these best practices, educators can leverage AI technologies to create engaging and effective learning experiences that prepare students for the AI-driven world while maintaining the central role of human interaction and guidance in the learning process.

Six Best Practices for Approaching AI in the Classroom

Usage and Citation Guidelines

How to Guide Students on Using and Citing AI Sources

Using AI sources can be helpful for students to brainstorm, edit, translate, or generate ideas for their academic work. However, all users should also be aware of the limitations and ethical implications of using AI sources. Here are some tips and guidelines for using and citing AI sources in your students’ academic work:

Set clear expectations for your students on whether and how they can use AI sources in your assignments. You may want to allow it only for certain purposes or require specific disclosures or citations.

  • Discuss with your students how to check the facts! Text AI sources like ChatGPT may write responses without concern for factual accuracy. These tools do not disclose the origin of the information they provide and sometimes completely make up information or fake source citations. Encourage your students to use non-AI sources to verify accuracy and to cite those non-AI sources instead of the AI-generated content.
  • Require your students to declare their use of AI sources with an AI Use Disclosure (or another statement you direct, such as an appendix) that describes which tool they used and how they used it. Ask your students to include an AI Use Disclosure even if they only used AI tools for brainstorming or other preparation for an assignment.
  • Require your students to cite the specific AI-generated content they use or take ideas from (text, images, videos, audio, code, etc.). We recommend requesting both in-text citations and full citations in the References.

How to Cite AI Sources in APA Format

APA format uses an author-date system for in-text citations. Directly after a sentence or clause that draws on AI-generated content, and before the punctuation mark, place a parenthetical citation naming the company that created the AI and the year the content was generated.

For example:

When asked to describe the symbolism of the green light in The Great Gatsby, ChatGPT provided a summary about optimism, the unattainability of the American dream, greed, and covetousness (OpenAI, 2023).

For the full citation in the References list, use the following format:

Company name. (Year). Title of source [Description]. URL

For example:

OpenAI. (2023). Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald [Text generated by ChatGPT]. https://chat.openai.com/chat
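For instructors who want to produce these references consistently (for example, in a syllabus or an LMS page), the template maps directly onto a simple string formatter. This is only an illustrative sketch: the function name and field labels below are our own, not part of APA style.

```python
def apa_ai_citation(company: str, year: int, title: str,
                    description: str, url: str) -> str:
    """Assemble a full APA-style reference for AI-generated content
    following the template: Company name. (Year). Title [Description]. URL"""
    return f"{company}. ({year}). {title} [{description}]. {url}"

citation = apa_ai_citation(
    company="OpenAI",
    year=2023,
    title=("Describe the symbolism of the green light in the book "
           "The Great Gatsby by F. Scott Fitzgerald"),
    description="Text generated by ChatGPT",
    url="https://chat.openai.com/chat",
)
print(citation)
```

The same pattern extends to other tools: swap in the company, the prompt used as the title, and a short bracketed description of what was generated.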

For more information and examples on how to cite AI sources in APA format, see the APA Style guidance on citing ChatGPT and other generative AI tools.

How to Cite AI Sources in MLA Format

MLA format uses a template of core elements to cite sources. The core elements are:

  • Author
  • Title of source
  • Title of container
  • Other contributors
  • Version
  • Number
  • Publisher
  • Publication date
  • Location

To cite AI sources in MLA format, you should follow these guidelines:

  • Do not treat the AI tool as an author. This recommendation follows the policies developed by various publishers, including the MLA’s journal PMLA.
  • Describe what was generated by the AI tool in the Title of Source element. This may involve including information about the prompt or the input if you have not done so in the text.
  • Use the Title of Container element to name the AI tool (e.g., ChatGPT).
  • Name the version of the AI tool as specifically as possible in the Version element. For example, if the AI tool assigns a specific date to the version, use that date.
  • Name the company that made the tool in the Publisher element.
  • Give the date the content was generated in the Publication Date element.
  • Give the general URL for the tool in the Location element.

For example:

“Write a brief entry for a website about the use and citation of AI sources” prompt. ChatGPT, OpenAI, 3 Aug. 2023, https://chat.openai.com/chat.

For more information and examples on how to cite AI sources in MLA format, see the MLA Style Center’s guidance on citing generative AI.

Strategies for Reducing AI Vulnerability

As artificial intelligence advances, assignments and assessments may become vulnerable to the use of AI tools. Faculty should consider reasonable steps to reduce this vulnerability while recognizing that AI usage cannot be fully eliminated for most activities and assessments.

  1. Set clear policies on AI use: State explicitly whether, and how, AI tools may be used for each assignment.
  2. Emphasize unique human skills: Highlight critical thinking, creativity, and problem-solving, skills that AI cannot easily replicate. This underscores the value of human learning.
  3. Evaluate the learning process: Assess students’ learning journeys:
    1. by requiring artifacts such as lab notebooks, design outlines, or other process documentation that reveal their problem-solving, research, analysis, reflection, revision, collaboration, application, and creativity over time, not just the final product.
    2. through structured tasks and reflections that allow assessment of problem-solving, analysis, skill development, knowledge application, and the creative process, in addition to the final work product.
  4. Acknowledge the challenge: Encourage students to embrace challenges and view mistakes as opportunities to grow. This values resilience.
  5. Determine whether an essay is needed: Consider if assignments need to be in essay form or if other formats could achieve learning objectives.
  6. Focus on higher-order thinking: reward work that successfully applies knowledge, analyzes concepts, assesses theories, creates new ideas, and produces organized writing displaying deep understanding.
  7. Ensure technical accuracy and precision: Prompt students to show in-depth technical knowledge by accurately using terminology and concepts in written work, explaining technical details clearly, analyzing complex scenarios, making evidence-based recommendations, and completing applied skills assignments.
  8. Require specific references: Ask for references to current, relevant, and specific scholarly sources, including screenshots of key passages. This necessitates deep engagement with source texts.

By thoughtfully implementing these strategies, faculty can promote meaningful learning experiences while reducing overreliance on AI.

Leaning into AI

In the rapidly evolving landscape of higher education, faculty members can “lean into” AI to harness its potential and enhance various aspects of teaching, research, and administrative functions. By embracing AI technologies and integrating them into their practices, faculty can significantly benefit both themselves and their students.

One of the most prominent applications of AI in higher education is personalized learning. Faculty can leverage AI-powered learning platforms to analyze individual student data, identify learning patterns, and tailor instructional content to meet the unique needs of each learner. This personalized approach not only fosters better engagement and motivation but also improves learning outcomes.

Furthermore, AI can streamline time-consuming administrative tasks, allowing faculty members to focus more on teaching and research. Automated grading systems can assess assignments and exams, providing instant feedback to students while freeing up instructors from the burden of manual grading. Additionally, AI-powered tools can help manage course scheduling, student enrollment, and other administrative processes, optimizing efficiency and minimizing administrative overhead.

AI also plays a crucial role in research and knowledge discovery. Faculty can utilize AI algorithms to analyze vast amounts of data, identify trends, and generate insights that would otherwise be challenging to uncover manually. AI can assist in literature reviews, data analysis, and experimental design, leading to more rigorous and impactful research outcomes.

To fully embrace AI, faculty members need to stay updated on the latest advancements in AI technologies and their applications in education. Institutions can offer professional development programs and workshops to empower faculty with the necessary skills to incorporate AI tools effectively.

While AI undoubtedly presents immense opportunities, faculty should approach its implementation thoughtfully and ethically. Transparency, data privacy, and algorithmic bias must be carefully considered to ensure that AI is used responsibly and in the best interest of students and society as a whole. By “leaning into” AI with a proactive and mindful approach, higher education faculty can revolutionize the learning experience and lead the way in shaping the future of education.

AI Detection

Text-based generative AI detection works by using machine learning techniques to distinguish text produced by generative AI models from text written by humans. These detection models are trained on large datasets containing both human-written content and text generated by AI models. They learn to recognize patterns, language structures, and other characteristics typical of AI-generated text, enabling them to differentiate between the two. Typically, after analysis, the detector reports a score indicating how likely it is, or what percentage of the text appears, to be AI-generated.

False Negatives:  

Let’s consider a scenario where a student submits an academic paper and generative AI detection is applied to check for plagiarism or ghostwriting. The detection model analyzes the submitted paper and compares it against a vast database of academic publications and other sources.   

In the example below, the student attempts to conceal plagiarism by paraphrasing the AI source to more closely mimic human writing style, thereby circumventing the detection system. 

Original Source Text: “Climate change is a pressing global issue that requires urgent attention from governments and citizens alike.”  

Student’s Attempt (Disguised Plagiarism): “The concern of climate change is a critical matter that demands immediate action from both governments and the public.” 

When generated content has been paraphrased, the detection model may not recognize the similarity because of the linguistic variations introduced. In such cases, the generative AI detection model can produce an inaccurate, false-negative result.
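A toy calculation makes the problem concrete: comparing the two sentences above by shared vocabulary (Jaccard similarity of their word sets) shows how little survives even light paraphrasing. This is a sketch of one naive matching strategy, not how any particular commercial detector works.

```python
import string

def jaccard_similarity(a: str, b: str) -> float:
    """Fraction of unique words shared between two sentences (Jaccard index)."""
    words_a = {w.strip(string.punctuation) for w in a.lower().split()}
    words_b = {w.strip(string.punctuation) for w in b.lower().split()}
    return len(words_a & words_b) / len(words_a | words_b)

original = ("Climate change is a pressing global issue that requires "
            "urgent attention from governments and citizens alike.")
paraphrase = ("The concern of climate change is a critical matter that "
              "demands immediate action from both governments and the public.")

# Most of the overlap is function words ("is", "a", "that", "from", "and"),
# so a naive word-overlap check rates the pair as largely dissimilar.
print(f"Jaccard similarity: {jaccard_similarity(original, paraphrase):.2f}")
```

Even though a human reader sees the two sentences as near-identical in meaning, the surface overlap is low, which is exactly the gap a paraphrasing student exploits.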

False Positives:  

Conversely, AI detection systems can produce false positives. This frequently occurs, for example, with the precise, specialized terminology, statistical language, and structured writing style typical of academic papers, such as medical research. Because the system was trained on vast amounts of text data, including machine-generated content, it may mistakenly classify such a paper as machine-generated rather than human-authored. This misclassification is a false positive, flagging human work as potentially AI-generated or plagiarized.

Can I Rely on AI Detectors? 

While AI detection tools are improving, detecting AI-generated content remains difficult. As of July 2023, AI-detection programs yield a high rate of false positives, have been shown to be biased against non-native English speakers, and are primarily based on the older GPT-3 model, meaning they may be less likely to detect output from newer models.

AI writing detectors use different methods of detection but frequently rely on properties such as “perplexity” and “burstiness.” Perplexity measures the degree to which a text is different or surprising in comparison to an AI model’s training data. Burstiness occurs when certain words or phrases appear in close succession within a text. Due to a perceived lack of “perplexity” and “burstiness,” GPTZero (an AI detector) indicated that the US Constitution is “likely to be entirely written by AI” (Edwards, 2023). Notably, in July 2023, less than a week after the experiment Edwards (2023) described when testing the US Constitution against AI detectors, OpenAI quietly pulled its own AI detector from the market due to “a low rate of accuracy” (OpenAI, 2023). Moreover, even slight paraphrasing “can break a whole range of detectors, including ones using watermarking schemes” (Sadasivan, 2023).
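As a rough sketch of these two properties: perplexity can be approximated by how surprising each word is under some frequency model, and burstiness by how much that surprise varies from sentence to sentence. The tiny frequency table below is a stand-in for a real language model, so the numbers are purely illustrative of the idea, not of any actual detector.

```python
import math
from collections import Counter

def surprisals(text: str, freq: Counter) -> list[float]:
    """Per-word surprisal (-log2 probability) under a toy unigram model.
    Words unseen in the frequency table get a small floor probability."""
    total = sum(freq.values())
    return [-math.log2(freq.get(w, 0.5) / total) for w in text.lower().split()]

def mean_perplexity(text: str, freq: Counter) -> float:
    """Average surprisal: higher means the text is more 'surprising'."""
    s = surprisals(text, freq)
    return sum(s) / len(s)

def burstiness(sentences: list[str], freq: Counter) -> float:
    """Variance of mean surprisal across sentences: human writing tends to
    swing more from sentence to sentence than model output does."""
    means = [mean_perplexity(s, freq) for s in sentences]
    mu = sum(means) / len(means)
    return sum((m - mu) ** 2 for m in means) / len(means)

# Toy frequency table standing in for a language model's training data.
freq = Counter("the cat sat on the mat the dog ran in the park".split())

predictable = ["the cat sat on the mat", "the dog ran in the park"]
varied = ["the cat sat on the mat", "quantum entanglement defies intuition"]

# The mixed passage varies more across sentences, i.e. it is "burstier".
print(burstiness(predictable, freq) < burstiness(varied, freq))
```

Uniformly low perplexity and low burstiness is what led GPTZero to flag the US Constitution; formulaic but entirely human prose can look just as "machine-like" to this kind of measure.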

For the time being, at least, it is not advisable for instructors to rely on AI detectors, including Turnitin’s, as the sole means for reliably identifying AI-produced text. 

Assessment in the Age of AI 

With AI detection proving unreliable, it is difficult for instructors to know the degree to which their students are using AI in their work. The challenge, then, is to design authentic assessments that mitigate, or deliberately incorporate, student use of AI. At a minimum, this requires instructors to design assessments that focus on higher-order thinking skills and require students to demonstrate their understanding and application of knowledge in meaningful ways.