ChatGPT and Alaska's Native Palms

This is hilarious, I love seeing people mess up AI like this, haha. Like @jnstuart also pointed out, though, it is a bit disturbing as well, but I think this is a good example of why AI like ChatGPT really shouldn’t be relied on for information.

1 Like

With the new ability to send ChatGPT images, it can actually identify taxa fairly well and give reasonably accurate and useful information. For example, I sent it a Cardinal photo and asked for an identification:

ChatGPT: “The bird in the photo is a female Northern Cardinal (Cardinalis cardinalis). Key identifying features of the female Northern Cardinal include the following:

  • A predominantly brownish or grayish body with some warm reddish tinges on the wings, tail, and crest.
  • A distinctive red-orange beak.
  • A slight crest on the head, although less prominent than in males.
  • The facial area around the beak is slightly darker, forming a mask-like pattern.

Male Northern Cardinals are bright red overall, while females have a more subdued coloration but share the same general body shape and beak color. This species is common in North America, particularly in the eastern and central regions.”

I also tested it with an Ambystoma texanum larva photo, which it almost identified to the correct family, and it also stated that more information would be needed for a more precise ID.

ChatGPT: “The amphibian in the photo appears to be a larval stage of a salamander, commonly referred to as an “eft” or “newt” in its aquatic larval stage. The presence of external gills, which are the feathery structures on the sides of its head, and the overall body shape suggest it is a larval newt or salamander.

Based on the external gills and body characteristics, it is likely a larval stage of a species from the family Salamandridae (newts) or possibly from the family Ambystomatidae (mole salamanders). To narrow it down further to the exact species, additional details such as geographic location, habitat, and specific morphological features would be needed. However, the general appearance and external gills are strong indicators of it being a larval salamander or newt.”

Finally, I tested a freshwater microbivalve in the genus Euglesa, family Sphaeriidae, which it came really close to identifying.

IMG_9166

ChatGPT: “The bivalve in the photo appears to be a freshwater clam, likely from the family Sphaeriidae. The transparent and glossy appearance of the shell, as well as its small size, suggests it could be a species of the genus Sphaerium or Pisidium, commonly known as pea clams or fingernail clams.

These clams are typically found in freshwater environments and are characterized by their thin, delicate shells. Identifying the exact species can be challenging without additional information on size, habitat, and specific shell morphology, but the overall appearance is consistent with members of the Sphaeriidae family.”

For all of these examples I sent the image and said “Identify the species in this image” without giving further context.

2 Likes

Hot take :fire: :sunglasses: I will defend the AI :grimacing:

Ditto. I used ChatGPT to ID a caged parrot in the Philippines. All I did was upload a photo, and the AI did the rest. I didn’t even ask for an ID.

It’s like Wikipedia, or ANY source of information . . . it helps if you have a background in the subject, so that you can tell if the answer is more or less plausible. And you understand who programmed the AI (or who edited the Wikipedia article), and how they stand to benefit by doing that :money_mouth_face:

I agree with how the AI is programmed to react. Imagine meeting someone who is adamant about a certain position, which you know for a fact is wrong. Are you going to go to war with them, in order to show them the error of their ways? No. You are going to smile and nod, and politely “peace out” :v: and wish them a nice life.

I agree with some of this, but I think one key difference, and many of the problems with AI, come from the perceptions of its human users. If you talk to a person on the street about something, you will probably be sceptical of how correct they are. You would also probably not expect them to make up very plausible information (like scientific paper citations) and try to pass it off as true. If a human did this regularly (as AI does), we would call them a “pathological liar.”

Yet outputs from AI models are often treated authoritatively and uncritically by humans. Of course, some (maybe even most) of the blame for this lies at the feet of the human users, but I think it’s fair to assign some blame to the AI (or its programmers). The creators of various AIs definitely aim to present them as useful and authoritative. They could program in safeguards to make sure the AI doesn’t do things like give fake citations (it would be fairly straightforward, for instance, to check that any paper it suggests actually exists in Google Scholar or a similar index…). However, they have deliberately not taken steps to make it very clear to users that AI is quite often incorrect and/or just makes stuff up. Doing so would undermine their business model.
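The kind of citation safeguard described above can be sketched in a few lines. This is a minimal illustration, not anyone’s actual implementation: the `lookup` callable and the paper titles here are hypothetical stand-ins, and a real system would query a bibliographic index such as Google Scholar or Crossref instead of a local set.

```python
def verify_citations(citations, lookup):
    """Split suggested citations into verified and unverified lists.

    `lookup` is any callable that returns True when a title is found
    in some bibliographic index (Google Scholar, Crossref, etc.).
    """
    verified, unverified = [], []
    for title in citations:
        (verified if lookup(title) else unverified).append(title)
    return verified, unverified

# Demo with a stand-in "index"; both titles below are made up for
# illustration. A real check would call an external API here.
known_papers = {"A revision of the Sphaeriidae"}
ok, fake = verify_citations(
    ["A revision of the Sphaeriidae", "Palms of the Alaskan Tundra"],
    lookup=lambda title: title in known_papers,
)
```

The point is only that the filtering step itself is trivial; the hard part a real deployment faces is fuzzy title matching and index coverage, not the check.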

I also disagree that AI is very comparable to Wikipedia and many other traditional sources of information that are often considered relatively authoritative. For instance, Wikipedia is open and interrogable: there’s a change log, there are sources, and we can see which users made changes. None of that exists with most LLMs; they are effectively black boxes. And while Wikipedia isn’t perfect, it also isn’t inherently a money-making venture. Unlike the examples above, if a general user discovers an error in Wikipedia, they can fix it quite easily, which is not the case with an AI model. Bad actors can certainly add false information to Wikipedia, but it can be corrected quickly and fairly easily, and that type of misinformation isn’t at the scale of AI’s.

And while we may know the “programmers” behind AI in general, we certainly can’t assign an individual source to any piece of information or part of the AI; the best we can probably get is looking up the heads of the AI teams at certain companies. In fact, even the AI designers/programmers can’t tell you how an AI generated any specific piece of information. They understand the general process their AI uses, but it isn’t really possible to “source” the information provided by an AI in anything but general terms (i.e., saying what its training set was and what the structure of the LLM is).

In short, I’m not totally against AI; it’s a very useful tool for certain things. But I do think it is fundamentally different from other, more established sources of information in terms of

  • understanding by the public/general user,
  • sourcing,
  • generating novel false information (hallucinations), and
  • transparency
12 Likes

For me it’s a tool for many things, but it is not a source of information. It may point me to such though.

2 Likes

In the example above, I could blame the AI for not having reviewed its own response to my 1st question, as it did after my 2nd question.

Now, imagine that the AI does this extra check before giving the response to the 1st question. What would the response to the 1st question be then? Would the AI try again with another species in the family (and enter some kind of near-infinite loop, checking one species after another)? Or would it break the search loop and provide an acceptable answer such as “It seems there is no confirmed succulent species in the Fabaceae family, or if there is one, I don’t know it yet”?

(In fact, there is one, discovered in 2015, Oberholzeria etendekaensis, see this journal post).

In other words, why should the AI always “build” a positive answer to every question, instead of checking one or a few hypotheses and then concluding with an answer?
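The check-before-answering loop suggested above can be sketched as follows. This is purely illustrative: the candidate species names, the `verify` callable, and the try limit are all hypothetical, and no real model works this way out of the box. The point is just that a bounded verification loop can terminate with an honest “I don’t know” instead of a fabricated positive answer.

```python
def answer_with_check(candidates, verify, max_tries=5):
    """Check candidate answers one by one instead of asserting the first.

    `verify` is any callable that returns True if a candidate holds up
    against some source of facts; after `max_tries` failed candidates,
    admit uncertainty rather than looping forever or making one up.
    """
    for candidate in candidates[:max_tries]:
        if verify(candidate):
            return candidate
    return "I don't know of a confirmed example."

# Demo: neither fabricated candidate survives verification, so the loop
# ends with an admission of uncertainty instead of a made-up species.
confirmed = {"Oberholzeria etendekaensis"}
result = answer_with_check(
    ["Acacia aneura", "Senna artemisioides"],
    verify=lambda species: species in confirmed,
)
```

Notice that the loop only avoids hallucination if `verify` consults something external to the model; a model checking its own output can still confidently confirm its own invention.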

3 Likes

Try asking it about Irish mythology and modern Irish poetry–it makes up poem titles and whole collections of works.

In fact, ask it for a source for information on Alaskan palms. I bet it creates one. This is what students do not understand about relying on AI for papers.

There’s another alternative???

1 Like

Are you concerned that when AI eventually takes over, it will remember the people trying to tell it about Alaskan Palm Trees, and send a T-800?

I am in the camp that all these AI resources are tools. Used properly, in support of good review, they are very useful. Used improperly, we get the mess that people made of eBird with Merlin ID.

2 Likes

My very first try with ChatGPT, on 10th Feb. 2023:

It seems to ignore the word “only”, both in the question and answer!

4 Likes

ChatGPT is a generator of bullshit. I recently heard someone praise its value for legal work and I couldn’t help but laugh: yeah, when your whole field of work is based on generating needlessly convoluted text, a bullshit generator is exactly what you need. The guy seriously suggested that people use ChatGPT to analyze incoming documents and summarize them into practical points, then respond in practical points themselves and use ChatGPT to “expand” those into a reply. At no point did it occur to him how much easier life would be if people just exchanged the practical points between them, instead of the bullshit!

And this is how science is different: we actually care about facts, not grandiose formulations. My PhD student recently gave me an abstract to review for a conference that she plans to attend, and she admitted using ChatGPT to “help” her. I explained to her that what she wrote was nonsense against basic physics, to which she said “but ChatGPT claims…”, to which I had to sternly explain that I am NOT interested in arguing with ChatGPT!

4 Likes

I don’t share your view entirely, but this

really annoys me to no end. Like you, I have heard intelligent people with lots of expertise in their field quote ChatGPT opinions as if they were a substantial contribution to the discussion. I want to shout at them, ‘the day I care about your ChatGPT quotes will be the day I don’t need YOU anymore!’

(I think that, used properly, it can do a lot more than create bullshit, though.)

1 Like

Politicians and lawyers have been organic ChatGPT BS generators for longer than I’ve been alive. You don’t need modern computer tech to be an effective generator of that.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.