AI in Academia: Ethical Use and Practical Guidance

A general overview of AI, including different models, challenges, and tips for using it ethically.

What is Artificial Intelligence or AI?

Artificial intelligence is a branch of computer science focused on creating systems capable of performing tasks that typically require human intelligence.

We interact with AI daily:

  • Voice assistants such as Siri or Alexa
  • Product recommendations while shopping online
  • Navigation apps rerouting us around an accident on our route
  • Google Translate helping us find the bathroom while traveling abroad

In the last few years, however, the impact that new AI programs such as ChatGPT and Gemini have had on our world has increased dramatically. When asked how an AI chatbot works, Copilot, an AI tool from Microsoft, gave the following response:

Imagine the AI as a highly knowledgeable assistant who has read an extensive amount of text from books, articles, and websites. It learns how people communicate by analyzing all this information. When you type a message, the AI processes your words to understand your question or statement. It then uses its vast knowledge to generate a response that is relevant and coherent.

Think of it this way: you ask a question, and the AI searches through its extensive database to find the most appropriate answer. It’s like having a conversation with someone who is well-versed in many different subjects. The AI continually improves because developers update it and correct any errors.

So, when you interact with the AI, it leverages its comprehensive understanding to provide helpful and accurate responses. It’s akin to conversing with a knowledgeable friend who strives to give you the best possible answer.
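
To make the prompt-and-response exchange described above a little more concrete, here is a minimal Python sketch that sends a single question to a chat model and prints the reply. It is only an illustration: it assumes the openai Python package, an API key stored in the OPENAI_API_KEY environment variable, and an example model name, none of which are specified in this guide.

    # A minimal sketch of the prompt-and-response exchange described above.
    # Assumes the `openai` Python package is installed and the OPENAI_API_KEY
    # environment variable is set; the model name is only an example.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    # Send one question to the chat model, just like typing into a chatbot.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; substitute whatever is available
        messages=[
            {"role": "user", "content": "In one sentence, what is artificial intelligence?"}
        ],
    )

    # Print the model's generated reply.
    print(response.choices[0].message.content)

Under the hood, this is the same exchange as typing into a chatbot's web interface: your message is sent to the model, and the generated text comes back as the response.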

This guide covers the challenges and considerations of using AI, offers support for both students and faculty, and presents a variety of tools to help you in your day-to-day activities.