I have heard of Gerald, but what is this about the South African Julia Skimmer?
Please write to staff if you encounter inappropriate behaviour like that.
That’s a whole epistemological question. Still, while not saying that other forms of expertise are invalid or not useful, it might be useful to distinguish or tag those who have published (in an academic journal or similar formal space) on a particular species or genus. That sub-task itself seems fairly feasible, given that iNaturalist allows linking one’s profile on iNat to ORCID. From there it’s probably a matter of scraping article titles and checking for taxonomic names within. Even having a single publication on a topic generally implies a fair level of familiarity.
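The title-checking step described above could be sketched roughly like this. This is a minimal sketch assuming article titles have already been pulled from the publications linked to a user's ORCID profile; the binomial regex is a crude heuristic of my own and will produce false positives, so real use would check candidates against a taxonomy database.

```python
import re

# Crude heuristic: a Latin binomial looks like "Genus species" --
# a capitalized word followed by a lowercase word. This will catch
# false positives (people, places), so candidates should really be
# checked against a taxonomy database before tagging anyone.
BINOMIAL = re.compile(r"\b([A-Z][a-z]+)\s([a-z]{3,})\b")

def candidate_taxa(titles):
    """Return the set of binomial-like phrases found in article titles."""
    found = set()
    for title in titles:
        for genus, species in BINOMIAL.findall(title):
            found.add(f"{genus} {species}")
    return found

# Hypothetical example titles, not real publications.
titles = [
    "A revision of Ormyrus (Hymenoptera: Chalcidoidea) in Britain",
    "Host records for Torymus bedeguaris",
]
# Note that "Host records" also matches the pattern here,
# illustrating why the taxonomy check is needed.
print(candidate_taxa(titles))
```

The point is only that the matching itself is simple string work; the hard part would be curating the taxonomy list used to filter the candidates.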
I should add that I wouldn’t want to give those who are “experts” a veto, supervote, or anything like that. However, in any online forum where certain kinds of misinformation can be spread (e.g. false species IDs, false facts, etc…), it does help substantially to distinguish reputable actors in order to avoid the problem where the support for an idea or conclusion is artificially influenced by bot-like behaviours. Links to real-world proof of a person’s expertise or experience help to counter that kind of false crowd influence.
And perhaps it might be useful in other ways. As sbrushes notes, perhaps it could help the computer vision dataset. It could be possible, for instance, to weight users' confirmations differently in training the model, even if their votes are all equal in coming to an agreement on an ID.
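As a hedged sketch of how such weighting might enter model training: the role names and the 2.0 multiplier below are invented for illustration and are not actual iNaturalist policy; the idea is just that a per-example sample weight can differ from the one-person-one-vote rule used for the community ID.

```python
# Hypothetical identifier "reputation" weights. The 2.0 for a
# published specialist is an illustrative assumption only.
WEIGHTS = {"published_specialist": 2.0, "regular_user": 1.0}

def sample_weight(confirmations):
    """Sum of identifier weights for one observation's confirming IDs.

    Equal votes would still decide the community ID elsewhere;
    this number would only scale the example's weight in the
    training loss for the vision model.
    """
    return sum(WEIGHTS.get(role, 1.0) for role in confirmations)

# An observation confirmed by one specialist and two regular users
# carries more training weight than three regular-user confirmations.
w_specialist = sample_weight(["published_specialist", "regular_user", "regular_user"])
w_regular = sample_weight(["regular_user"] * 3)
print(w_specialist, w_regular)
```

Most training frameworks accept exactly this kind of per-example weight, so the change would sit in dataset preparation rather than in the voting logic users see.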
Ok, this doesn’t seem like it’s just going to die…
So, I want to qualify at least a few things I originally said…
My triggers for writing the original post have been mainly interactions with field “experts” offsite, but also field experts within the site struggling behind the scenes. I am basically just trying to advocate for them; ultimately, I think iNaturalist really needs their input, and I just want them to have the support and encouragement their experience warrants.
Yes, iNaturalist focuses on more common taxa. But observing common taxa isn’t what keeps me here, or any long-term users I imagine. Repetitively identifying common taxa certainly doesn’t encourage me to participate in identification more either. If iNaturalist is to remain coarse in its identification, or become coarser, then longer term I wonder how involved I will be.
If, as I imagine, iNaturalist is hoping to become less coarse and more accurate over time, isn’t it in everyone’s interests to accelerate and support that?
The potential difficulty in choosing a threshold for expertise should not sidetrack the central idea of valuing the existence and importance of expertise, which is part of what seemed to happen on the other thread. Valuing expertise also should not devalue the amazing work amateurs and experts in other fields do in sifting the coarser IDs into the right place, or to species where possible.
One of the many triggers for my original comment was an ID of a parasitic Chalcid wasp by Roger Burks here on iNaturalist. I tried to confirm this offsite within the UK. Only two records of this species exist within the UK. In addition, even the senior curators of parasitic wasps at the NHM were unable to confirm, as “we have no specialist for Chalcid wasps in the UK”. Even the parasitic wasp experts, people who devote their entire lives to studying parasitic wasps, are unable to confirm a species-level ID within this entire parasitic wasp superfamily for me.
So, I have a choice.
Do I blindly agree with his identification in order for it to reach GBIF?
Or do I do some amateur research to confirm?
If I blindly agree, then I am simply acknowledging his profile indicating he’s a researcher in the field.
I am essentially empowering him, just as @thomas_everest mentions others blindly empower his IDs. All I am saying is: why not formalise this to save time and energy, and encourage other experts to participate?
But let’s say I choose instead to try and research this field…
The insinuation in other posts talking about amateur input seems to be that I might be able to research this myself to confirm. I think perhaps(?) some of these arguments stem from a lack of familiarity with more complex taxa. For example, comparing bird identification with insect identification is simply not comparing like with like…
An issue central to the premise of this often-linked thread, incidentally,
where it’s noted:
“accuracy varies considerably by taxon, from 91% accurate in birds to 65% accurate in insects”
Let’s break down the species numbers for the UK:
Birds = 620
Insects = 27000
Relative to species totals, those stats start to look a little misleading…
And the comparison of the two in terms of the possibility of amateur input, radically different.
Breaking down some of the insect species further, we have
Wait, 9000 wasps?! Holy moly… so as an amateur the suggestion is I might be able to grapple with a group this complex? Even the 2500 distinctively marked and large Ichneumonids are notoriously difficult to identify. I can’t even find a figure for the number of Chalcids in the UK, but all the Chalcid wasps I come across seem to be a few mm long and black or dark coloured without distinctive markings.
But let’s say I really, really, really like a challenge.
How long will it take me to really have a degree of certainty to back up Roger’s IDs?
How much would I have to study?
Would I then be an expert at the end?
If so, isn’t that just further recognition of expertise and the need for it?
Maybe if we replace the term “expert” with “experience” this will be less contentious.
Etymologically they seem to stem from the same place - the Latin experiri, meaning to try.
Why not empower recognised experience?
My preferred choice is to just leave it alone, since getting data into GBIF or achieving Arbitrary Green Label aren’t my goals.
I don’t think experts should have more heavily weighted IDs. I’ve encountered a number of misidentifications made by someone who could be called an expert, but who also hasn’t been active in several years. I expect that if the individual were active they’d correct the IDs in a heartbeat (everyone makes honest mistakes/oversights), but since they are inactive, often their ID stands in the way of reaching RG. Now what if their ID counted 3X more than what others can give? Forget about ever reaching research grade on that observation! (I’m talking central African butterflies, where having more than 2 IDs on an observation posted 6yrs ago is nearly a miracle.)
What I would advocate for is users being able to “flag”(nominate) other users (not themselves) as experts, for review. Then curators would look to confirm the expertise, and ultimately apply an appropriate tag next to their username. The tag would simply alert other users to the specific expertise someone is confirmed to have.
I might be concerned that this approach may lead to individuals over-estimating their true level of expertise, whether innocently or to appear higher status in the community or to amass stats.
I won’t add much to this well trodden path. I like iNat, and don’t want it to become a research driven site. I maintain that it’s up to the researchers to confirm that any data they use is correct. I’d also like to add that many professional taxonomists are busy folks, who have their own jobs, and may not always be able to help as much as we might like.
(Sorry @cabintom - It wasn’t my intention to reply to you directly. Not sure how that happened)
Ok, I get that POV to some extent, if seen purely from the observers side…
My central issue though is about whether iNaturalist is attractive to those with expertise, and how we can encourage them to participate. Having an ID just sit in a “Needs ID” pool for eternity makes this significantly less attractive for experts, surely? If you add 10000 IDs and there’s nobody to agree with them, that must feel to most like pitching time into a void. Or if you have studied a group for decades and your wasp ID won’t go to species because the original poster thought it was a fly, it can take time, tagging in others, even to shift it to the correct family. This all takes a lot of community energy, which has to be weighed against the cost of empowering experts from the get-go. I think the potential gain would outweigh the cost.
I agree, though the RG term is problematic.
Another solution I wonder about to this topic is simply adding more and better terms, to add a finer grain to the data. On iRecord in UK for example, data-points are defined as something like “likely”, “plausible” or “certain” … I think.
Maybe “Arbitrary green label”, “Arbitrary yellow label”, “Arbitrary red label”?
Not really - when I’ve helped someone in my region learn what a plant is in their backyard, that’s the primary satisfaction for me.
I’ve addressed some of the other issues and potential fixes in this summary post: https://forum.inaturalist.org/t/recruiting-more-identifiers/2388/152, such as allowing people to filter and export observations based on their ID, not the observation’s current label.
Ok… I should qualify that with “for some experts”, perhaps. I think this is, again, connected to direct feedback I heard from the UK community when advocating for iNaturalist off-site.
The potential fixes link is super useful for understanding how concerns are already being addressed.
Thanks @bouteloua, will go over all that too…
I’d noticed the thread, but hadn’t come across your summary.
It’d be great to have these sorts of links in a forum FAQ or something, gathering all this stuff together (if this doesn’t already exist).
iNat is good for experts. Our local botanists work hard; they work in universities and have their own jobs to do, yet they’re not discouraged from doing ID work on iNat just because their IDs have the same weight as other users’.
And no, adding IDs is cool, even being the only one to make them. If that’s the issue and you’re a true expert, invite another expert to check observations on iNat; it would be cool to check the IDs made and get RG where possible. When I am IDing I also don’t think about getting RGs. The only observations I care more about are those with many wrong IDs, or those that people keep IDing wrong; that’s where help can be needed, and tagging someone else is a normal way to get it.
Re. making iNaturalist attractive to experts: Unfortunately, I think @thomaseverest was correct in his initial reply on this thread, that helping people learn more about the natural world is the goal of iNaturalist, while having clean and accurate data is not. Fundamentally, iNaturalist is built to be an outreach tool, not a biodiversity information system. It can be used as a biodiversity information system, and is frustrating precisely because it has incredible potential in this regard yet remains hamstrung in various ways because that is not the intended purpose.
Other sites that have tried this approach of favouring experts have found it is better at driving away users who feel marginalized than it is at pulling in more experts.
I usually tag the other identifiers with a note suggesting they take another look at this one, with good results.
I do the same, although I get a response only around 30-40% of the time.
I think there is a third option here that might improve usability for research without impairing the primary outreach function of iNaturalist. We have two user groups, basically the “citizen” and “science” halves of citizen science. These user groups have different interests and different needs; conflict is created when a single tool is intended to serve both user groups. The community IDs and research grade designations are intended to serve the citizens, so let’s just leave those as they are now. How can we add functionality that will be useful to the scientists?
I think any research use of iNaturalist data should follow this basic principle: rely on your own identifications, or those of experts you know to be reliable. This has been standard practice in specimen-based research for as long as there has been specimen-based research.

I think a lot of discussion on iNaturalist gets led astray by the term “research grade”. “Research grade” designation does not mean an observation is suitable for research. Proposals to make the designation conform to that expectation have been unambiguously rejected by the iNaturalist powers-that-be, and may not be feasible regardless, given how difficult it would be to automate the identification of expertise in any reliable way. So, there is not and will not be a marker that says “you can take this observation on faith without further investigation”.

We should just pretend that “research grade” doesn’t exist and try to provide researchers with a system that lets them efficiently verify or correct IDs for their usage. I think the path of least resistance is:
Let users specify that observations be displayed / organized according to the identifications provided by a particular user or users.
In its simplest implementation, nothing in the community IDs, taxonomic databases, etc., would need to change; the name on an observation would just have to point at “ID provided by user…” rather than “community ID”. There are already a couple of more or less equivalent toggles in the iNaturalist interface, related to choosing common vs. scientific names and opting out of community ID.

A more complete implementation would require a few additional steps: allow identifications to specify any name in iNaturalist’s taxonomic database, rather than only those marked as “accepted”; create community IDs by running the raw identifications through a taxonomic lookup function; and have the “show me identifications by user X” toggle switch to the raw IDs rather than the community IDs.

I am sure there are hurdles related to server load and other aspects of implementation at scale that I am not aware of, although these might be minimal if this remained a “niche” function for a small number of users. In terms of creating working code that would achieve this kind of functionality, though, there is nothing prohibitively difficult here. In an R context, it’s the kind of thing a mildly competent user who’s had a couple of months of self-taught poking around would not have trouble with, and I say that as someone mildly competent in R who has implemented parts of this functionality in a shiny app. I also know there is no inherent conflict with the citizen-oriented functionality that serves iNaturalist’s primary user base.
(By the way, I know that some of this functionality could be cobbled together in iNaturalist as-is, and have looked into this in the past enough that I have a general idea how to do it. I don’t think having each researcher try to create an ad hoc parallel ID / taxonomy database is a viable solution, though! That it can be done does not mean it is a good idea.)
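The display toggle proposed above can be sketched in a few lines. This is a minimal illustration with made-up field names and a hypothetical example taxon, not iNaturalist's actual data model: each observation carries its community ID plus the raw per-user identifications, and the viewer chooses whose ID to display.

```python
# Sketch of the "show me identifications by user X" toggle.
# Field names ("community_id", "identifications", etc.) are invented
# for illustration and do not mirror the real iNaturalist schema.

def displayed_name(observation, trusted_user=None):
    """Return the name to display for one observation.

    If a trusted user is specified and has an identification on this
    observation, show their ID; otherwise fall back to the community ID.
    """
    if trusted_user is not None:
        for ident in observation["identifications"]:
            if ident["user"] == trusted_user:
                return ident["taxon"]
    return observation["community_id"]

# Hypothetical observation: community ID is coarse, one identifier
# has offered a finer species-level ID.
obs = {
    "community_id": "Chalcidoidea",
    "identifications": [
        {"user": "specialist_user", "taxon": "Ormyrus nitidulus"},
        {"user": "casual_observer", "taxon": "Chalcidoidea"},
    ],
}
print(displayed_name(obs))                      # falls back to community ID
print(displayed_name(obs, "specialist_user"))   # shows the finer ID
```

Nothing about the community ID changes; the toggle only affects what a given viewer sees, which is why it would not conflict with the citizen-oriented defaults.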
While this is a well-discussed topic, I will add one more thought. I’m both an iNat user and a wetland ecologist who works for a conservation entity. I DO use iNat data for work, including wetland mapping and species ranges, and I do this by looking at all of the data other people submit. I find that it’s necessary to do so unless it’s someone I already have a working relationship with, and even then I want to see what they have added to the site anyway. But… here’s the kicker. I get sent lots of data, not just from iNat, from people ranging from genius expert botanists to volunteers with no botany knowledge who haven’t yet been trained. People send me wetland data all the time. I have to review all of that too. That’s just the nature of large datasets. If you want a heavily curated dataset with very low rates of error that meets your own standards, honestly, the only way I have been able to achieve this is by being a part of curating the data myself. Thus I have a huge work-in-progress Access database with hundreds of data points on thousands of wetlands.
What iNat offers is a shared, massive, easy, free, georeferenced field notebook. Anyone can add to it, but anyone can also disregard what others put in. It’s an amazing project. It’s great for things like field recon, getting a feeling for species range and habitat, and for curating my own data, especially field-recon-type data. It isn’t great for taking the data as-is, running statistical analysis on it and calling it a graduate thesis, or using it to set conservation policy without any additional analysis. And if you heavily weight the opinions of ‘experts’ and disenfranchise others, that will still be the case, though I think it will reduce site participation, so you’d have less data to sort through…