Going straight to your 4th point, I agree that most people using AI like this do not have any negative intentions. My guess is that in the example case, the person was a bit out of their depth. They were confused by the IDer's comment, and since it contradicted whatever prior understanding they had, they also weren't sure if the comment was right. So they asked AI to explain the comment, possibly using some version of this (exaggerated to demonstrate the possible issue):
"I thought the answer was 'X', and almost everything else online agrees with me, but another person says the answer is Y. Their explanation is confusing, and I'm wondering if it might be wrong. Can you explain their comment to me? Are they definitely right, or is it possible the answer is actually 'X'?"
If the LLM doesn't understand the topic or know the correct answer (and it may not, because up-to-date ID info can be incredibly obscure), its explanation is likely to be useless and to have little basis in fact. It's highly susceptible to bias and to reinforcing whatever someone already thinks. It can do this even when you are actively trying to prevent it, and it has an extremely loose relationship with the truth. In real life, the interaction was probably more subtle, but the effect can be similar, since we rarely describe things from an entirely neutral perspective. If the observer wasn't convinced yet, they probably would have let the LLM know that. I am speaking from personal experience here: I have used AI quite heavily, in part to develop a more informed perspective on it.
The line between appropriate and inappropriate AI use is an absolutely massive topic about which many pages could be written, and I don't think it can be easily reduced to a small number of objective questions. I have come up with some general guidelines for myself that I think could also serve others well, but I haven't taken the time to write them out. I think AI is primarily useful in two types of situations. The first is helping you in areas where you have expertise and can easily recognize any errors in its responses. In these situations, it's helping you with a process of some sort (coding, handling very specific parts of the editing process while writing, or any of a large variety of things). It's not acting as a source of new information, and its responses are closely vetted and modified rather than used directly. When you do enough of this, you learn exactly how fallible it is and how confidently wrong it can be.
The second type of situation where it can be helpful is when you know very little about a subject. I've used it to explore topics in physics and philosophy that I have very little prior understanding of or education in. Sometimes I have an interesting thought, question, or idea that I'm sure has been extensively studied before, but I don't even have the terminology I need to find more information about it. AI can put a name to some of these ideas, which allows me to find things I would otherwise have struggled greatly to find. The key here is that I don't trust anything it says even a little bit, and I mean that literally. I treat everything it says as a guess; it just happens to be quite good at guessing. Any new knowledge or understanding it provides isn't real until I can validate or test it against some independent outside source or method. Other sources can have errors, but I think it's incredibly rare for them to contain the sort of blatant fabrications AI can produce, which would be seen as fraud or deception if they came from a human. I find AI least useful in areas where I have moderate understanding, because it takes longer to verify whatever it says than to figure things out myself.
I think there are also many more subtle and pernicious effects that can accompany AI use. It has several slippery slopes that can lead to accidental misuse, and one needs to continually assess one's relationship with it. Is it helping you to learn, or just giving you false confidence, reinforcing your views, and solidifying possibly incorrect beliefs? This risk is much greater than it seems. Most LLMs default to validating you and telling you what you want to hear, perhaps articulating what you could not and explaining why you are right even when you weren't totally sure. For a while, I gave custom instructions warning against this and trying to make my LLM more critical and skeptical. The result was that it challenged me more often, but I "won" every single debate or discussion I had with it. The ironic, counter-intuitive effect was that it strengthened my views more by challenging me than it would have by trying to validate me. LLMs have improved somewhat at avoiding excessive validation, and I think this can make the effect less obvious and more powerful. It can be dangerous for this reason, and I try to avoid discussing subjective things, beliefs, or interpersonal dynamics with it. If I ever do, I frame things in the third person so it can't tell what I actually think or believe (this is also a good idea for objective subjects).
I think everyone must ask themselves whether they are really using AI for learning, or whether that's a cover for a sort of intellectual forgery in which you misrepresent your level of understanding and fool even yourself in the process. Are you using it to help you better articulate your arguments, or is it subtly shaping your arguments and beliefs in ways you may not even be aware of, while weakening your ability to think for yourself and make your own decisions? Are the arguments even your own, or is it giving you what it thinks you want to hear before you have taken the time to think things through independently? These are subjective, personal questions that require honest introspection and usually don't have definitive answers. They are intended as a jumping-off point for reflecting on how and why one is using AI, rather than a way to assess whether one should be. Yet when I see obvious AI text shared verbatim, and someone is doing this repeatedly, I can't help but feel that this represents an extreme, and that the person has abdicated their position in the driver's seat. When AI is used properly, I think it should be difficult or impossible to tell that it was used at all.
Given that many people struggle even to vet the information they find online by more traditional means, I think a large fraction of society currently lacks the ability to use AI appropriately, and there are inadequate norms and guidance to help them with this. In sharing my own experience and thoughts on AI, and gently pushing back when I see what I believe to be misuse, I'm trying to contribute to the development of new norms that will increase our resiliency to misuse. I'm not opposed to using AI for editing comments and posts, but I fear a world in which AI performs both the writing and the reading: helping me decide what to say and how to say it, and helping you decide what to think and how to feel in response to the condensed summary it feeds you.
Note: I use "you" in a generalized sense throughout this post; it's not intended to refer to you, meriweather. I have no reason to suspect that you are using AI, or that your use is improper if so.