I was surprised a few weeks back by a student’s “AI disclosure” that crossed a few red lines. That led me to have a chat with ALL my students, to explain that AIs are (a) not a good way to learn and (b) insulting to me if they expect me to spend time reading and commenting on “their” work.
I also collected some data, which gives an interesting (but not perfect!) view of how they are using AIs:
Here’s how I interpreted the results to them (“you”):
The categories marked in RED are a no-go. You should do this work on your own, as an essential part of learning.
For the YELLOW instances, the idea is caution. Yes, it’s possible to get some help from an AI on these tasks, but it’s also possible that the AI will give you wrong or biased information. Thus, it’s better to avoid AI or — at a minimum — double check everything the AI gives you AND do your own work (e.g., using Google Scholar or talking with other humans).
The only GREEN is “understanding concepts,” where I am interpreting the AI as a kind of tutor that can help answer your questions and — if you use it right — ASK YOU questions that focus your attention on what you need to learn. AI-as-tutor is a really promising use of this tech.
None of this is official, and some of it is still unsettled for me, but I wanted to give you this feedback to help you avoid unethical and/or prohibited behavior.
As we all know, it’s hard for anyone to know when students are using AIs, so we — you and me and all the other members of LUC’s academic community — need to understand why “the hard way” is the only way to learn.
As I said in class, there’s a big difference between the use of AIs in school (learning) vs work (getting shit done), especially when you realize that the only way to use AI wisely (giving good prompts) is AFTER you’ve learned enough about the topic.
If you’re on reddit, then check out the [190!] comments from r/professors on a thread I started.
My one-handed conclusion is that students and teachers need to talk about the ethics and proper use of AIs. What’s sad is that they will enable cheaters and “I just want the diploma” types… Maybe time to get rid of grades [PDF]?
Addendum: Paul Graham says that AIs are going to lead us to divide into “writes and write-nots,” which is concerning when you remember that one must write in order to think.
The AI service I like, Perplexity, provides citations (with clickable links) for everything it “says.” What’s more, the citations are in proper academic format, which I find extremely handy. These days, it seems that asking for a citation is threatening to the loudmouth who just cut and pasted something his brother saw in a chat room.
When my daughter was in grad school, several of her students cheated regularly. If caught, they acted offended rather than ashamed. Going into debt to “earn” a degree is awfully stupid. Did they think their eventual employer wouldn’t see within a few weeks that they don’t know anything about what they were hired to do? When they get fired, they’ll say it was because of racism, workplace hostility, restless leg syndrome…anything except lying.