ChatGPT the Naturalist

It’s on the learning curve, and gracious when corrected!

I recently, successfully(?), learned to recognize butterflies in general in the old Lepidoptera pile, primarily using their antennae. I thought I’d try extending my antennae skills to any other applicable superfamilies. So when I see common names like “knot horn” and “curved horn”, maybe they’d be good places to start? I looked at Wikipedia earlier today re curved horn, aka Gelechioidea moths, and learned – cool! – that the feature actually comes from elsewhere on their heads.

Then I wondered if the internet’s new toy ChatGPT would know some tips? Here’s the interesting result.

If I hadn’t already known, I probably would have thought I struck out both times on adult ID. Meanwhile, if any human has magic ID tips for moth antennae or whatever, I welcome your advice. (Like, are there other ones besides Gelechioidea that have those mouthparts, or am I safe going with that feature to get to superfamily?)


this is more or less what i was hoping could be explored


I had another round on the topic and now the AI is 0/2. Relatable. ;)


First half ok-ish - second half some hallucinations:

What does 'opting out of community ID' mean?

In iNaturalist, “opting out of community ID” means choosing to not participate in the community identification process for your observations. Community ID is a feature in iNaturalist that allows other users to help identify the species in your observations through a collaborative process.
If you opt out of community ID, your observations will not be visible to other users for identification and you will be responsible for identifying the species in your observations on your own. To opt out of community ID, you can go to your observation settings and toggle the switch for “Allow community ID” to “off”.

I would prefer that to be true. Let me unsee global opted-outs – and they can go ahead with “only I can ID my obs, thanks.”

i wonder where ChatGPT learned this? or did it just infer this? i wish i had access to the new version of Bing to see what it says. it’s supposed to use a more advanced version of ChatGPT and will provide references to its sources.

I also wonder whether that answer was a real ‘hallucination’ (sounds like an oxymoron, doesn’t it?)
But now, as I posted ChatGPT’s (wrong) answer here, it might provide a self-reinforcing source for the program to actually confirm its ‘opinion’ for its next learning cycle.

Please, ChatGPT, mind the context :smirk:
(Should’ve just posted a screenshot of the answer…)

I bet iNaturalist’s early versions of Computer Vision had the same sorts of wrong answers (well, it still does, in some cases). I wonder how much better ChatGPT and its congeners will be after a decade of interaction with the public.

You can go back onto the conversation (sidebar at the chatgpt site) and correct it like I did, and maybe it will learn from its mistakes?

Might. Would be nice if ChatGPT got better. It would really help with IDing.

I vaguely remember it was trained prerelease on “the internet up to around 2016”, so it may need a taxonomy update on some things first.

Hm, I wonder: if we whispered our favorite ID tips to it, might another naturalist hear that knowledge back later, when they ask about the same taxon? Like, someone should ask ChatGPT separately about Gelechioidea sometime and report back. ;)


My very first question to ChatGPT was:

About the question:

It seems that ChatGPT has an issue with the concept of unicity.
Or it simply ignores some of the words, which is fatal if “logic” is a key aspect of the question.

About the answer:

The question was in principle very easy to “understand”, but very difficult to answer.
For the answer, I just expected “I don’t know”, instead of this explicit contradiction.

Does ChatGPT check the consistency of what it is about to respond?
Apparently, no.

Solution :

A possible answer to the question was the family Fabaceae,
which contains the species Oberholzeria etendekaensis:


It must have been designed by the same people who design web browsers. As often as not, as I scroll through irrelevant results, I see the annotation, “missing: [the most important word].”

And while we’re on the subject, I offer this archival Facebook post:

Computers are no better at understanding plainly asked questions than humans are. Here is my Google search string:

“append to list from df python”

And the first response returned is:

“Appending a list or series to a pandas DataFrame as a row?”

Which is the exact opposite of what I asked. Reframed in human speech, it is as if I asked, “How do I put a column from this table into a list?”, and the person proceeded to explain how to put a list into a column of that table.

Now don’t get me wrong – humans do this sort of thing to me with exasperating frequency. But I was hoping computers were better at taking me at my word.
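For the record, what that search was actually after – pulling a DataFrame column out into a plain Python list, not appending a list to the DataFrame – is a one-liner. A minimal sketch, assuming pandas and a made-up column name for illustration:

```python
import pandas as pd

# Hypothetical table, just for demonstration
df = pd.DataFrame({
    "species": ["Oberholzeria etendekaensis", "Pieris rapae"],
    "count": [1, 3],
})

# "append to list from df": take a column out of the table as a plain list
species_list = df["species"].tolist()
print(species_list)
# ['Oberholzeria etendekaensis', 'Pieris rapae']

# ...whereas the search result explained the reverse: putting a list INTO
# the table as a new row, e.g. with pd.concat or df.loc[len(df)] = [...]
```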


but we can insist – “must include” that word

it looks like the new Bing’s response omits the wrong part of the answer:

… but then if i try to get more information in chat mode, it gives me the garbage and references this thread as the source of that garbage:

Please, ChatGPT, mind the context

It can’t. It’s a parrot-bot. It’s not an intelligence and has no concept of facts, only that certain sequences of words seem to go together.


Update on the topic of potentially transferring ID corrections and tips across to other users: it sounds like that’s not currently doable. After a couple of false starts in asking about it, I got this:

Reportedly, it would need the other user to “provide the same correction”. So, not that useful, I would guess.

(BTW, during the conversation I couldn’t get it to acknowledge any use of the name “Sydney” – google that in recent AI news if curious ;) )

