Implement an iSpot-style reputation system

I don’t have an opinion on the overall proposal, just giving two cents to the value of “expert identifications.”

If Anthony Gill really is the leading expert on dottybacks, and he makes a mistake that others have identified, then ego aside, I would hope those others would be able to point out his mistake and convince him of it. But it is often true that leading experts see through things that completely fool people without their level of experience. In some taxa, color patterns are so variable that they may be one of the least reliable ways for an expert to identify a specimen, while many amateurs will form an opinion based on superficial color-pattern similarity, which can be an issue especially when a rare pattern in one species matches the most common pattern in another. They may not see the same subtleties in limb length, head scalation, or general gestalt that an expert sees.

The way that I’ve seen taxonomic identifications work, I agree that no democratic process among non-experts can really displace the discernment of the primary expert on a taxon. If the world’s leading expert on Parafimbros says that a specimen is Parafimbros lao, and another expert says it’s Parafimbros vietnam, then it doesn’t really matter whether 100 people with less experience agree with the second one. That has no bearing on the root issue: if the basic question is enough to confuse one of the two leading experts in the field, then it can just as easily confuse a whole bunch of other people too.

Of course, that only works when someone really is the “leading” expert on that particular taxon. There could be 10 other people listed as snake experts, even as SE Asia snake experts, but if they haven’t had experience with odd-scaled snakes, haven’t read the papers on Parafimbros, and don’t understand the ongoing taxonomic controversy, then their opinion really might not be as good as that of a competent, dedicated amateur who has read the right papers. So it would be important for “experts” to be honest and set true boundaries on their expertise.

Perhaps that would mean a tiered option where an expert can give his “research grade” ID, or a “best guess” ID for taxa where he does not have the same certainty. And iNaturalist could decide that, while no democratic process could “outrank” the designated expert on a taxon and make an observation research grade over his objections, perhaps the democratic process COULD keep the expert’s identification from becoming research grade itself if enough people with enough experience disagreed.


I’ll throw my 2 cents on the pile, I guess - forgive me for rehashing things that have already been covered in this and other threads. I am not an “expert” - whatever that would mean in this context. But I am reasonably confident in my ability to accurately identify, say, the 30-40 most common species of spiders found in North America given a clear enough photo, and I feel that is a reasonably valuable contribution to the site despite my lack of credentials.

Spiders, and arthropods in general (with some exceptions like Odonata and Lepidoptera), are popular observations but have some of the lowest percentages of identified observations, “Research Grade” or otherwise. There are a variety of reasons for this - they are difficult to photograph well, the taxonomy is confusing, many simply can’t be IDed beyond genus or even family from a photo, etc. - but beyond that, there are just too many observations posted for the handful of active identifiers to keep up with. So the suggestion that only experts get to suggest IDs says to me that the vast majority of these observations will ultimately go unidentified. That would have the opposite effect of the stated goal of the site, which is to get people interested in the natural world around them. There are pages and pages of observations where someone created an account, posted a photo of a cool-looking spider/beetle/whatever, saw no interaction with their observation, and eventually never came back to the site. I’m sure the iNat staff could (and probably have) run some queries to see whether the data bears this out: “Users whose first X observations were identified or commented on were Y% more likely to become active participants on the site.”
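That sort of retention comparison could be sketched roughly like this (the data layout and numbers here are purely hypothetical - I have no knowledge of iNat’s actual schema or figures):

```python
# Hypothetical sketch of the retention query suggested above.
# Field names and toy numbers are invented; iNat's real data will differ.
from dataclasses import dataclass

@dataclass
class User:
    early_obs_identified: bool  # were their first observations ID'd or commented on?
    still_active: bool          # did they keep using the site afterwards?

def retention_rate(users, got_feedback):
    """Fraction of users in the given feedback group who stayed active."""
    group = [u for u in users if u.early_obs_identified == got_feedback]
    if not group:
        return 0.0
    return sum(u.still_active for u in group) / len(group)

# Toy cohort, purely illustrative.
users = (
    [User(True, True)] * 60 + [User(True, False)] * 40 +
    [User(False, True)] * 25 + [User(False, False)] * 75
)
with_feedback = retention_rate(users, True)      # 0.60 in this toy data
without_feedback = retention_rate(users, False)  # 0.25 in this toy data
```

The real version would of course be a database query over actual account histories, but the comparison itself is this simple.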

And what about experts: what qualifies someone as an expert? Do you need a PhD? To have written X number of papers in certain publications? Which publications? Is there to be a community-based consensus on who is an expert? Do I need to upload my driver’s license to prove who I am? Can only experts approve other experts? Some of the most helpful “experts” in various taxa have no formal education in their area of expertise, or even in biology in general. One of my old high school friends is a highly regarded mycologist with no formal education in anything biology-related; he just decided he liked mushrooms 20 years ago and is now a respected taxonomist and a valuable contributor to this and other sites. Our resident expert on Salticidae does not have any degrees related to entomology/arachnology (as far as I know) but I think would be considered an expert for the purposes of this site - though I imagine some here might disagree.

I agree that inaccurate data getting pushed out to GBIF, etc. is indeed a problem, and I do think that a well-implemented reputation system (if such a thing is even possible) would help improve overall data quality, but it may have serious drawbacks. This has all been discussed ad nauseam, but based on past discussions I tend to be of the opinion that this sort of solution would likely end up causing more problems than it solves (I concede that what constitutes a “problem” is somewhat subjective). What if we end up with 75% more accurate IDs, but also 75% fewer IDs overall (since only certain people would be qualified to make them)? I imagine that would lead to less activity and ultimately be harmful to the mission of iNaturalist.

The primary goal of this site is to connect people with nature, not to ensure that every photo is reviewed by one of the few expert taxonomists in the world. The latter would probably involve trashing half the photos on the site for not being detailed enough (as BugGuide does), which would lead a large percentage of people to stop using the site, and so on. Maybe that sounds hyperbolic, I dunno. Everyone wants the site to produce accurate data, but it’s not possible for it to be anywhere near perfect in practice. Some people here clearly value data accuracy over everything else, which is fine, but again, that is not iNaturalist’s primary mission, and I think that’s been the gist of the feedback from iNat staff in past discussions.

I think that further refinement to the “Computer Vision” system would help a lot, but I think that is already happening or at least the developers are open to suggestions. If people agree that a certain taxon cannot reasonably be auto-identified to any particular level by the computer, that seems like it should be possible, if complicated, to implement. If we agree that a certain genus (for instance) cannot be identified to species by photo at all, I think it should be possible to allow for at least a warning when someone attempts to do so. I would love to be able to tell the AI to never suggest Ipsum loremii for North American observations because that species is endemic to New Zealand. These seem to me like easier problems to solve, from a design/programming perspective, than a complex reputation system that’s guaranteed to be contentious and probably drive some people away. Changing how we treat the data seems more practical than changing how we treat the users.
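A range-based guard on CV suggestions, for instance, could be as simple as a lookup table. Here is a minimal sketch, with invented species names, ranges, and scores (I have no knowledge of how the real suggestion pipeline works):

```python
# Hypothetical sketch of filtering CV suggestions by known species range.
# All species names, regions, and scores below are invented for illustration.

# Maps species name -> set of regions where the species is known to occur.
KNOWN_RANGES = {
    "Ipsum loremii": {"New Zealand"},
    "Dolor sitamet": {"North America", "Europe"},
}

def filter_suggestions(suggestions, observation_region):
    """Drop suggestions for species not known from the observation's region.

    `suggestions` is a list of (species_name, confidence_score) pairs.
    """
    kept = []
    for species, score in suggestions:
        regions = KNOWN_RANGES.get(species)
        # Keep species with no range data rather than silently hiding them.
        if regions is None or observation_region in regions:
            kept.append((species, score))
    return kept

suggestions = [("Ipsum loremii", 0.9), ("Dolor sitamet", 0.7)]
print(filter_suggestions(suggestions, "North America"))
# -> [('Dolor sitamet', 0.7)]
```

A softer variant could keep the out-of-range suggestion but attach a warning instead of dropping it, which matches the “at least a warning” idea above.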

Sorry for the wall of text but I’m relatively new here and missed my chance to participate in several previous discussions on this topic. :)


As I see it, there are really two primary issues that need to be addressed.

#1. Get more experts making IDs on iNat.

#2. Get more people to explain the basis for their ID - not just have a list of people saying “it’s this” or “it’s that”, but people saying, “You can tell it’s this and not that because of XYZ”.

If we managed to make #1 and #2 happen, then a tiered ranking for IDs might not even be very important, because there would be enough experts making IDs and enough people explaining IDs that the general ability of the community to identify things correctly would steadily rise.

Of course, the question is how to make #1 and #2 happen.

Any system which compromises people’s desire to participate should be an immediate non-starter. I’d take an expert ID over 10 non-expert IDs any day, but I wouldn’t take 1 expert’s participation over a single non-expert’s participation.


There was a thread a while back about this sort of thing. I’ve found in most cases, it just requires a simple question to the expert and they’re more than happy to explain. I usually make a few hundred IDs per day on average, so I don’t have the time to explain all of them. But if someone asks me why I provided that ID (which is actually fairly common, especially for the mollusc IDs I make), then I’m more than happy to give an explanation.

Basically, I think the onus for drawing out an explanation should not be on the identifier. The main reason for this is that in a lot of cases, what a non-expert would consider a tricky ID, an expert considers easy. So when the expert is making the ID, an explanation is not needed in their mind. So it should be up to the receiver of the ID to ask for one (in my opinion).


I would just note that, as a data user, I would not be blindly trusting GBIF data for reasons that extend way beyond inclusion of iNat data. There is likely not a single GBIF data source that is free of misidentifications and other data errors, and I’m virtually certain (admittedly without testing the hypothesis) that iNat is far from the worst among them.


Cannot agree more with this. If I’m a researcher downloading/using a dataset from ALA, GBIF, etc., I’m checking that data regardless of its source. Even if it’s from the most reputable sources out there, human error will always exist in data entry/collection.


I feel like the “onus” for an explanation naturally falls to the first person to disagree. Personally, if I come across an observation that I believe is wrong, I’ll post what I believe the correct identification to be, along with the specific reasons I identify it as that species and not as whatever it was originally identified as.

So it would be nice if we encouraged people who “disagree” to give a basis for their disagreement.

(That’s actually how I started participating on iNaturalist. I was looking for occurrence data on certain species in certain counties, but I kept coming across people misidentifying other things as that species. So I logged in to correct the IDs, but I figured my opinion wouldn’t mean anything with no observations. So I entered a nice range of observations of the target species and all other species in the genus from that area. Then I corrected the false observations, clearly pointing out the identifying characteristic that allowed the species ID and linking to other observations which displayed that characteristic well. Then I did it again with another set of misidentified pictures. And then they trapped me and had me setting up projects and all sorts of stuff.)


I 100% agree with you.

I should’ve specified: for my situation I meant if person X posts an observation → expert IDs it → person x blindly agrees without necessarily knowing how to ID it themselves → ob becomes RG with really only one person knowing what it was. In this case, I don’t think that expert necessarily should be forced to explain off the bat, but person X should be asking for clarification so they’re not blindly agreeing.


So basically I think we should encourage people to take the initiative/be more proactive and ask for an explanation if they’re unsure (rather than forcing experts to explain every one of their IDs)


Even museum collections have mis-IDed specimens and some ridiculously mis-attributed locations.

I recently got a few hundred museum records for a project I’m working on, and in that limited number of data points I ran into one specimen with the wrong coordinates (they didn’t match the described location), one specimen that was almost certainly a stowaway and did not accurately reflect a population, and two specimens that are clearly outside both the known range and established habitat preferences and thus will require further follow-up to figure out what is going on - most likely a mis-ID in both cases, but possibly a mislabeled collection location.


I totally agree; I was trying to address the people who seem to think that data accuracy should be our primary concern. If I were studying the spread of a certain species (for example), I would of course review the actual data (each observation) to ensure it met my standards before including it in my research. But I would certainly be glad that the data existed for me to access, even if I had to do some extra work to clean it up for my use.

Ultimately my feeling is that the overall value of the body of data being generated by iNaturalist, and citizen science projects in general, outweighs any downsides that come from potentially polluting systems like GBIF with erroneous data. So my suggestions/opinions in these discussions tend toward things that encourage more users to collect more data.


I should’ve specified: for my situation I meant if person X posts an observation → expert IDs it → person x blindly agrees without necessarily knowing how to ID it themselves → ob becomes RG with really only one person knowing what it was. In this case, I don’t think that expert necessarily should be forced to explain off the bat, but person X should be asking for clarification so they’re not blindly agreeing.

Yes, limiting the number of people who auto-agree without being able to independently verify is worthwhile.

We do try, but maybe could do more:
https://forum.inaturalist.org/t/identification-etiquette-on-inaturalist-wiki/1503


Yes I do. You know who most of them are? People who have dedicated years of their life and tens of thousands of hours to becoming skilled field naturalists. Despite having other responsibilities in life.

People who would effectively be told their input is of no value should such a system be implemented.

That doesn’t benefit iNaturalist, it diminishes it.


I feel like there might be a distinction to be made here for computer vision vs independent identifications. If someone’s made an independent determination of their taxon, and I blow in and ID it without explaining why, it’s pretty mystifying–there’s nothing actionable for them to take back to a key to find out where they went wrong. On the other hand, if the initial ID was made by CV, then the original identifier (probably) just pulled down the menu, squinted at the thumbnails, and said, “Yeah, that looks about right”. I’m not as good as I should be about leaving comments on disagreement, and I appreciate the moral reminder, but in the CV case, I feel much less reciprocal obligation since it’s not clear to me the original identifier put much thought into their ID! (That said, I am always happy to explain my IDs at greater length when someone asks.)


In my area of interest (botany of the eastern US), there have been a number of cases where old herbarium specimens have been re-annotated after the discovery of a new species, or significant range extensions have been discovered by re-determination of old specimens. People see what their key tells them to see! (See https://www.youtube.com/watch?v=SFApGT8cHcE&t=654s for a funny retelling of the incident in the field that kicked off one such range extension.)


In my experience, BugGuide is more effective for identifying obscure insects than iNaturalist, at least for the insect groups I look at. As far as I can tell, that’s simply because they have experts for each group who are able to identify those groups, whereas iNat does not have those experts. I’m not sure, though, whether those experts are missing because something about iNat repels them (which this proposal could potentially address) or just because nobody has asked them to join. Over time we do get more experts here. I agree that BugGuide is less user-friendly for the observer, though.


Is it possible to cast a vote against feature requests?


No, you can only comment against it.

I sometimes use BugGuide. (It is less user-friendly for a host of reasons, and I favor iNat as a result, but the mission is different - I get that.) I just want to point out that I do see some of the experts from BugGuide here now. I appreciate that they are willing to contribute to both sites despite the differences in approach.
