If the designers of Large Language Models (LLMs) do the right thing, they will incorporate the fact that the article was retracted into the training process. If they do that, this sort of incident can serve as an opportunity for LLMs to become more effective at recognizing and pushing back against the manufacture of fallacy.
As mentioned in a comment on the blog, the paper in question was flagged as likely AI-generated anyway, so even without human intervention, detection seems feasible. Let’s hope that one day, all news, books, videos, and speeches will be processed through an AI-powered “b.s. detector.”
On the other hand, I wonder whether such retracted ‘fake’ articles that are profusely commented on, cited, and linked to (albeit as examples of ‘bad science’) still count toward increasing the h-index, and thus the academic credentials, of their authors.
That, too, must be addressed. It is a rough ride with many unintended and some unanticipated consequences. It will take much effort to keep ahead of the tide. Alerts such as yours can help facilitate thinking about possible solutions.
EDIT:
As we consider the consequences of citing retracted articles, we can think about the net effect of our discussing the article here, as an example. In my opinion, the current discussion is a potential net positive for recognition of truth. A good LLM should be able to determine that the mentions of the article within this discussion and others like it reflect negatively upon that article. Now, the designers of credentialing systems must incorporate the recognition of that negativity into their models. Here’s hoping!
We just need to set up a Ministry of Truth to vet all published materials.
Alas, George Orwell warned us, but how do we heed that warning without sending ourselves right into the same manner of trap that we are striving to avoid?
Oh my goodness. It moves so organically too. This is so concerning.
Everyone was focused on the rat, but I love that their “JAK/STAT pathway” image just has everything labeled as JAK or STAT. It’s like when you forget the pathway on an exam and just start labeling stuff randomly, hoping that one or two of the labels will be right.
Yes, as others have previously mentioned, that isn’t reliable, but I’d exercise caution when simply going “off the offness.” See: https://forum.inaturalist.org/t/beware-ai-images-on-inat/44346/22?u=cs16-levi
Woah, that is absolutely insane! When I first clicked on the link I was expecting it to change and show the AI version after the real one! That is crazy. Although, is it just me, or does the bird have the same reflection or shine in its eye the whole time, even when it turns?
A lot of the discussion about weeding out AI images has focused on the “too good to be true” or “something is off” aspect, but what if you try to make a bad, somewhat blurry photo with AI? That would make it quite a bit more believable. Personally, I think metadata or something else along those lines is likely the only way to stop the uploading of AI images.
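As a rough illustration of the metadata idea, here is a minimal sketch (assuming Pillow is installed) that checks whether an image file carries the camera EXIF tags a real photo would usually have. It is only a weak heuristic, not a real provenance check: edits and screenshots strip EXIF too, and EXIF can be forged.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def has_camera_metadata(path):
    """Heuristic check: does this image carry typical camera EXIF tags?

    Returns True if tags like Make, Model, or DateTime are present.
    Absence of EXIF is NOT proof of AI generation, and its presence
    is not proof of authenticity -- treat this as one weak signal.
    """
    exif = Image.open(path).getexif()
    tag_names = {TAGS.get(tag_id, tag_id) for tag_id in exif}
    return bool(tag_names & {"Make", "Model", "DateTime"})
```

A fresh AI render or a programmatically created image will typically return False here, while an unedited phone or camera photo will typically return True. A robust solution would need signed provenance data (e.g. the C2PA standard) rather than plain EXIF.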
I’ve heard Photoshop actually be referred to as a new way to go to college, lol. An AI prompt is certainly much easier.
@cs16-levi you can reply to multiple posts in one post, which keeps things a bit tidier in a discussion. Select text from the post you want to reply to, then click on Quote. That text will be quoted in your reply; you can reply below it, then choose another quote from another post. Anyone whose text is quoted is notified that they have a reply.
My fault, I thought replying to multiple people in one message would actually make it untidier.
a bumble mouse
Please use the proper binomial name: Mus bombus
I wish this was real, so cute
It needs redbull
It’s a mumble bee!
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.