You are not mistaken. Per iNat staff, iNat is meant to connect people with iNat first, and any scientific value is a “byproduct”:
I agree the CV needs improvement. I was just pointing out that it doesn’t completely ignore location.
You are not misguided about the purpose of iNaturalist. Keep in mind that the opinions of individual users on iNaturalist may not always be in line with the actual stated purposes of iNaturalist itself.
It is. Keep believing it. (But that doesn’t mean every user on iNaturalist or this forum is encouraging. Some are not.)
Because geography is not considered in determining what suggestions are made. The list of suggestions is based on perceived visual similarity. If what the algorithm considers most visually similar also happens to be seen nearby, then that label is added.
But the list is generated and ranked based only on what it thinks are the most similar looking.
You are absolutely right about the stated purpose of iNat. But if that is the sole purpose and the scientific data is a byproduct, there are always other “buts”… First: if communication with nature is the sole purpose, why does iNat proudly showcase every scientific paper based on iNat data? Second: why is it connected to GBIF? Third: do you think that qualified identifiers (researchers among them) spend an enormous amount of time identifying purely for the sake of nature communication? It is for the sake of data correctness and education. If the users are not particularly interested in either of these, the IDers get frustrated and some leave - who gains then? You can read more about which user responses and behaviours frustrate experts in the reply by Myelaphus in this thread: https://forum.inaturalist.org/t/roles-of-taxonomic-experts-on-inat/13363/3
And here is a new thread on the problems of iNat ‘byproduct’: https://forum.inaturalist.org/t/data-quality-of-observations-from-india-data-quality-rg-gbif-india/13388
However grumbly I am about AI suggestions, I do not think restricting them is a good idea. I myself use the AI to save time in cases where it offers a suggestion I already know for certain is correct, just to avoid typing. And limiting AI usage will not help very much. Instead, I would suggest that the usage rules/tutorials be written so that the most problematic issues discussed in these forum threads are strongly highlighted, and - most importantly - that new users go through an obligatory read of them. What I mean is: when a user registers for the first time, a banner blocks full use of the site until the user has read the rules/seen a tutorial. Certainly, many people will still just skip them, but some will read them.
YES! I find this happens all too often. iNat autofills the species ID based on my file label. Sometimes this leads to off-the-wall identifications. Granted, it is my responsibility to correct the ID, but when you’re uploading 50+ images, it is easy to overlook one that has autofilled incorrectly. Even when my image is labeled correctly, iNat matches only a portion of the file name. For instance, Large Lace-border (Scopula limboundata) is OFTEN auto-filled as Lace Border (Scopula ornata), a species found in Great Britain. I think ekmes makes a very good point that flagging an identification as “out of range” would be a great addition, perhaps even highlighted in red.
Thank you. I had also concluded that it treats visual similarity as the main factor, and I had never read before that it takes geography into account at all.
Because GBIF has decided to incorporate the data.
iNat can’t force that decision on them, nor does it in any way change what the iNat team or community see as the vision and role of the site.
The field “owners_identification_from_vision” will tell you if the observer’s ID used CV, and the field “vision” will tell you whether any particular ID used CV.
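For anyone who wants to check this programmatically, here is a minimal sketch of reading those two fields from an observation record. The payload excerpt below is a hypothetical example of the shape returned by the iNaturalist API (`GET https://api.inaturalist.org/v1/observations/<id>`); the usernames are made up for illustration.

```python
import json

# Hypothetical excerpt of one observation record from the iNat API.
# Only the two vision-related fields discussed above are shown.
sample = json.loads("""
{
  "owners_identification_from_vision": true,
  "identifications": [
    {"user": {"login": "observer1"}, "vision": true},
    {"user": {"login": "identifier2"}, "vision": false}
  ]
}
""")

# Did the observer's own ID come from the computer-vision suggestions?
observer_used_cv = sample.get("owners_identification_from_vision", False)

# Which individual IDs on the observation were picked from CV suggestions?
cv_ids = [i["user"]["login"]
          for i in sample["identifications"]
          if i.get("vision")]

print(observer_used_cv)  # True
print(cv_ids)            # ['observer1']
```

So a per-identification `vision` flag marks each ID that came from the suggestion list, while the observation-level flag summarizes the observer's own first ID.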
Are you saying the dependence on Research Grade data by GBIF-informed researchers was forged after iNat was created - rather than iNat being created to inform GBIF?
What I am saying is GBIF is an independent organization separate from iNaturalist and they make their own decisions about which datasets they will incorporate or ask/choose to use.
iNat was not founded for the purpose of populating GBIF.
I absolutely understand your frustration regarding users who are not particularly interested in data correctness and education. I think those users are the same people who post a photo of an insect to every online identification website they find and simply wait 'til someone (expert or otherwise) identifies it.
I believe those individuals are an entirely different type of user than most of the people erroneously identifying organisms. The site (iNat) does not require users to complete a tutorial before uploading images, thus, people excitedly begin sharing observations no doubt believing the magic of iNat has identified their photographs. Even worse…there is a population using the site as photo backup.
Personally, I wish to see who is identifying my images, so I click on the users’ profile picture. I feel more confident accepting their ID if I see a Curator tag, or…pertinent credentials. If any aspect of the process should be improved - to improve on the limitations of the AI, too - it should be the process that allows someone to be a trusted IDer. Perhaps some minimum number of Curator/expert-reviewed IDs that must be met before agreement by any new user is provided the credentials to be the final confirmation on an observation going to Research Grade? Sort of like the criteria (50 verifiable observations) required before a user may create a Place.
If - in many cases - only two individuals must agree on an ID for it to become Research Grade (assuming the location, etc. are also filled in)…couples, friends, student peers, untrained bioblitzers, etc. are probably constantly (unintentionally) misidentifying organisms and pushing them to Research Grade. I don’t think those individuals are intentionally being malicious, they are simply mutually misinformed and ignorant to the consequences of their misidentified images being considered Research Grade.
I do the same. Additionally, if the profile contains only the standard generated phrase “xxxx is a naturalist!”, I look into their identification page. If they ID only a certain group, that is a good sign. If they ID a certain group plus something more from their own region, that is good too. But if I see an IDer with species-level identifications spanning everything from plants to Diptera and every other organism group, I am cautious. If I see an IDer with comparatively few and diverse IDs, I ask about the ID, usually “Why is this species xx and not yy?”. Almost invariably I receive no answer, or the answer is “I checked the pictures”.
I am absolutely with you here. I think it would be a good idea to add a banner for Bioblitz, City Challenge, or school projects, where two important things would appear as a blazing message: 1. A short explanation of what Research Grade means and what its consequences are; 2. A reminder to mark cultivated plants and domestic/zoo animals as captive/cultivated.
I understood the AI was the hallmark feature that separated iNat from other systems. I think it’s a mistake to expect people to avoid using it. They won’t remember, and it’s too easy right now. Maybe add a button called “ID help” instead of having it autofill with suggestions. I realize this will mean many more unknowns, because folks just won’t bother at the time. But it’s a question of which is more important right now: data integrity or expansion of adoption. Then work on making the AI more location-specific, requesting a justification comment if the choice is rare for that area, and possibly referring the observation for additional ID if the ID is questionable.
Am I abusing the system? I’m mostly cataloguing moths in my area. Even if I’m certain of the ID, I don’t type it in. I let the system pick the name, because I then avoid typos, and I often don’t remember the scientific name anyway - but I know it when I see it. I guess, bottom line: does an ID entered this way differ from one typed in manually?
Interestingly, lately I entered a bunch of moth images I identified manually using various field guides and online sources. I was happy to discover that my manual IDs matched the system-generated IDs in almost all cases. This has led me to trust the system - not invariably, but most of the time.
I think it’s noted that the AI was used but it doesn’t really affect how the observation is listed. And yeah, the IDs for butterflies and moths, plus beetles and honeybees, have been very accurate in my experience. I personally try to catalogue other arthropods though and the AI doesn’t always hold up. Most flies I end up having to just list them as “diptera” since none of the AI suggestions match, or they match too well and I can’t tell them apart. Same with a lot of spiders, small crustaceans, fungi, etc.
Agreed. I also use the AI feature as a way not to have to type. I avoid switching back and forth from mouse to keyboard as much as possible. I also use it as a check on what I think the organism is. Does it agree or is it suggesting something that I should be considering? And I use the AI to give me a direction to investigate if I’m not sure where to start.
But I always (unless I’m absolutely sure of the ID, like… Northern Cardinal) let the suggestions pop up, right-click all the options that look feasible or interesting, and open them in new tabs. Then I go to each and take a look. Does it look similar to me? Then I often look at the map and see whether it’s widespread in my area, not found in my area at all, or maybe I’m on the fringe of its range. If I’m just looking to back up my hunch that I’m right, I might stop there, but if I’m truly seeking out new knowledge, I then go to the internet. I have my favorite local websites for in-depth identification help for wildflowers, dragonflies, and butterflies. I have field guides for my area of the US I might reference. When I feel confident that I have a good ID, I then hit the AI suggestion to autofill the field.
I do get laughable suggestions, I’m sure due to some feature of my photo. When I do, I might choose another photo of the organism to upload, or maybe crop a photo differently.
Using the AI is quite helpful for lots of reasons. But I don’t let it make my decisions for me.
I will add, I have taken to labeling all of my photos pending upload with plain numbers. They’re usually sorted into folders by organism and labeled 1a, 1b, 1c, etc., in the order I want them to appear in the slideshow. That prevents iNat filling in the ID field from my file name. After I upload them, I add more info to the file name. Even putting the name of the location in will cause issues.