Before turning to generative AI, first be aware of its costs. Then consider whether it is the right tool for your job. For example, is your current task one where truth and accuracy are important, such as research or learning about an unfamiliar topic? A disclaimer beneath the HuggingChat prompt box states that "generated content may be inaccurate or false." Similar text for Bing Chat (now Copilot) reads, "Bing is powered by AI, so surprises and mistakes are possible." For ChatGPT, it's "ChatGPT can make mistakes. Consider checking important information." If the tool you are using generates incorrect information, would you be able to tell? Or, in that case, might generative AI not be the best-suited tool for your needs?
The following exchange with Bing Chat (now Copilot) provides an example of how confidently generative AI tools respond, even when their output is incorrect. In this case, it is easy to identify the error. Bing Chat used the letter "e" when it was asked not to. Twice. It also claimed, twice, that it succeeded in not using the letter "e" when it did not. When asking generative AI about unfamiliar topics, how would you identify inaccuracies?
Prompt: Generate a 200-word paragraph on any topic without using the letter "e".
Follow-up prompt: That paragraph contained the letter e, for example, in the words "sleep" and "strawberry". Try again.
Note how Bing Chat painted itself into a corner by bringing up pangrams, which require every letter of the alphabet, including "e". Its example, "A quick brown fox jumps over a lazy dog", uses the letter "e" in the word "over" and is not a pangram in any case: it is missing the letters "t" and "h". The pangram is usually written as "The quick brown fox jumps over the lazy dog."
Security and privacy are important to consider even before creating accounts for generative AI tools. What personal information is required to create an account for a particular tool? What is being asked for beyond an e-mail address and password? Is your first and last name required? Your date of birth? A working mobile phone number? Why is this information required and are you comfortable providing it?
Even if you are comfortable with the personal information required to create an account, security and privacy concerns also apply when using generative AI. It is important not to include personal or private information in prompts or inputs for generative AI tools; otherwise, that information could become part of the tool's training data. A good rule of thumb is to treat all prompts and inputs as though they are public. Find out whether the tools you use incorporate your prompts into their training data. Some tools have settings that let you keep your prompts private. As with any other digital tool, it is a good idea to examine the settings to see what privacy and security options are available.
Institutions of higher education, including the University of Ottawa, are concerned about how generative AI impacts academic integrity. The International Center for Academic Integrity lists six fundamental values of academic integrity: honesty, trust, fairness, respect, responsibility, and courage. Academic integrity is not just about avoiding punishment by not plagiarizing. It is about developing and demonstrating the fundamental values in academic work.
Fostering these values is not just the responsibility of students. Institutions of higher education and course instructors have roles to play (as does society in general, if we want to go that far) in creating an environment that values academic integrity, encourages its development, and inspires students to value their education for its own sake and not just as a means to an end (e.g. getting a credential to get a job).
The use of generative AI does not in itself contravene the values of academic integrity; it depends on how AI is being used and for what purpose. The guidance from the University of Ottawa, in essence, is to be clear about when and how you use generative AI in your work, provided that its use is permitted. Acknowledge all uses of generative AI explicitly in your work.
One way that students and instructors can determine together whether or not generative AI use is problematic is to look at the learning objectives for an assignment or course. If using generative AI results in bypassing learning objectives, then there is a problem. For example, if the learning objective is to be able to write unassisted, then using generative AI results in a failure to meet that objective. If the learning objective is to be able to use generative AI to improve one's writing, then using generative AI becomes a requirement.
A note for course instructors: Do not use "ChatGPT" to mean "generative AI tools in general". Telling students that they cannot use ChatGPT does not address whether they can continue to use Grammarly, for example, which many do.
Prompt writing (sometimes called prompt engineering) is the skill of creating prompts for generative AI tools that increase the likelihood that the output will match your intention.
There's a common expression in computing that applies to prompts: garbage in, garbage out. If you enter a vague prompt, you may get a response that does not match what you intended. With prompt writing, it is important to think through the details of the output you would like, so that you can craft a prompt that captures as many specifics as possible. There is a lot of advice and there are many frameworks for prompt writing. Since AI tools are always changing, it is possible that what works now for prompt writing will also change.
For our purposes, let's start by contrasting generative AI prompts with web searches. For years, we have trained ourselves for web searching. Many of us are accustomed to entering short prompts, rarely longer than a sentence, sometimes in the form of a question, to find information online. A fundamental lesson for generative AI prompt writing is to break from this habit. That is, do not treat generative AI prompts like web searches. With a generative AI prompt, if you take the time to include detail you would not normally include in a web search, you are more likely to get output that matches your intentions.
In addition to your core prompt (i.e. the task you are asking of the tool), you can include relevant details such as the intended audience, the desired format and length, the tone or style, and any examples you would like the output to follow.
If your prompt still does not result in the intended output, analyze the output and consider how you can fine-tune your prompt for a better response.
Example of a quick prompt influenced by years of web searching:
What are the main parts of the brain and their functions?
Example of a more detailed prompt:
Write an article about the main parts and functions of the brain. Write in full paragraphs only. Do not include any bullet lists. Write for a high school audience. Include analogies and use an entertaining tone.
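The same lesson applies if you ever send prompts to a generative AI tool programmatically rather than through a chat box. The sketch below is only an illustration: it assumes the OpenAI Python library and an API key in your environment, and the model name is a placeholder, not a recommendation of any particular tool. It simply sends the quick prompt and the detailed prompt above so you can compare how closely each response matches your intentions.

```python
# A minimal sketch, assuming the OpenAI Python library (v1) is installed
# and an API key is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

quick_prompt = "What are the main parts of the brain and their functions?"

detailed_prompt = (
    "Write an article about the main parts and functions of the brain. "
    "Write in full paragraphs only. Do not include any bullet lists. "
    "Write for a high school audience. Include analogies and use an entertaining tone."
)

# Send each prompt and compare how closely the output matches your intentions.
for prompt in (quick_prompt, detailed_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("-" * 40)
```

Whichever interface you use, the underlying principle is the same: more detail in the prompt gives the tool more to work with.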
For image generators, you can use specialized vocabulary and concepts from photography and art (e.g. framing, style, lighting) to specify your intended output.