Just because an observation has attained the criteria for what is currently Research Grade does not mean it is ‘Verified’. There are plenty of observations with this label that are not correctly identified.
‘Shared with partners’ is not a great label, as users can choose to license their observations so that they cannot be shared, unless those are going to be carved out under a separate label of their own.
While I have no opinion one way or the other on what name is applied, if there is going to be a change, to me ‘Community Consensus’ is better than ‘Verified’.
unlinks captive/cultivated status from a data quality metric (“verifiable” / “casual”)
allows filtering to find only captive observations that still need ID
would allow AI to find/train on verified captive obs
doesn’t make captive obs grey and sad
removes reference to research or science as a quality metric for the data, “verified” being short for something like “community verified,” whether or not the community was correct…
continues showing wild observations by default, but as suggested perhaps the checkboxes could be “sticky” based on ones past usage
was clumsily scrambled together in MS Paint, sorry :)
(Emphasis mine.)
(Edit: there are a lot of things here which are outside the scope of this specific feature request topic. Please ignore them and stay focused!)
Let us be clear about what the label is trying to achieve. There are several things muddled up here.
Firstly there is “Identification Grade”. I would be happy for this to be called “research grade”. It means that the identification has achieved a certain level of reliability and can be considered true. iSpot had an excellent reputation system where research grade meant something. At present on iNaturalist it simply means two agreeing identifications. Hopefully when we have a reputation system it might come to mean something more: I very much like the concept of expert equivalents - the level of ID that one would get if the observation were given to a relevant professional at a leading taxonomic institute (toy sketch below). Identification Grade can vary from 0 to over 100 EE (if 100 experts agreed).
Then there is “Export Grade” (or “Data Sharing Grade” or “Quality Grade”) - that means it meets all the requirements for sharing with related institutions: namely the locality and date are accurate, the ID is research grade and it is correctly tagged wild or captive/planted (irrespective of it being one or the other), and any other requirements (e.g. photos copyright tagged, valid observer name, etc. - including things that iNat always ensures).
Then there are criteria that explain why an observation cannot be either Identification Grade (e.g. no photo or sound, photo of inadequate quality, ID features not visible, ID impossible from a photo at present, etc.) or Export Grade (uncertainties in date, locality, habitat, or wildness; research-level ID not reached; user profile inadequate; etc.).
These are three separate issues that must not be conflated.
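To illustrate the expert-equivalents idea from the first point, here is a toy sketch of how an EE score might be computed. This is purely illustrative, not an existing iNat feature, and the reputation weights are invented numbers:

```python
# Purely illustrative: "expert equivalents" (EE) as a reputation-weighted
# sum over agreeing identifiers, where a weight of 1.0 means "as reliable
# as one professional taxonomist". All weights below are made up.
def expert_equivalents(weights):
    return sum(weights)

# Two professionals plus three novices agreeing:
print(round(expert_equivalents([1.0, 1.0, 0.01, 0.01, 0.01]), 2))  # 2.03
```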
Note too that Wildness is currently assumed. That is strictly bad science. There should be three settings: wild, captive & unspecified (unknown). To assume that all observations not tagged captive are wild might be true in the majority of cases, but is not good recording, databasing, or science.
This also means that the range “casual - needs ID - research grade” is not strictly correct. It should really be:
Identification Grades: opted out - needs ID - research grade
Export Grades: casual - export grade
Wildness: wild - captive/planted - unspecified
What is verified? One item? Some items? Everything?
So if I label an observation as “captive” does that make it unverified?
If two people have posted the same ID, is the observation verified? (indeed, is even the ID verified?)
For identification simply call a spade a spade:
Opted out of identification - Needs identification - Identified
Don’t be obtuse with “casual” and “verified”, and confuse everyone with potential alternative interpretations and misconceptions!
Discussion of whether non-wild organisms should appear in the list of things that need ID or not is here. (Result: not by default, but a filter to deliberately include non-wild organisms will be added to the identification page.)
Note that we already have a three-way distinction for wild vs. non-wild in the data quality assessment (DQA): observations are not considered wild by default; they begin unspecified. It’s just that most people don’t actively mark things as wild unless there’s a disagreement, and having unspecified wild/non-wild status does not prevent an observation from reaching “Research Grade”. (Edit: looks like I’m confused too.)
The other factors Tony mentions are mixed together in the DQA, and together determine whether an observation is “Casual”, “Needs ID”, or “Research Grade”. By default, most factors are unspecified, and an observation is allowed to become “Research Grade” (or “Export Grade” in Tony’s system) as long as enough confirming identifications are made at the species level, a photo is included, and nobody explicitly marks anything else as disqualifying. Discussion of what should or should not disqualify an observation from reaching “Research Grade” should be done elsewhere. Here, we’re talking about whether there are better names than “Casual”, “Needs ID”, and “Research Grade”. (Better in the sense that they cause less confusion. There is definitely some confusion about what these things are.)
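As a toy sketch of that logic (not iNaturalist’s actual code, which works over the whole taxonomic tree, and with the 2/3 rule taken from this thread’s description):

```python
# Rough sketch of the current three-way grade as described above:
# evidence, enough confirming species-level IDs, and no explicit
# disqualifying DQA votes.
def quality_grade(has_media, ids_agreeing, ids_total, dqa_disqualified):
    if not has_media or dqa_disqualified:
        return "Casual"
    # "Research Grade" needs at least two IDs, more than 2/3 agreeing.
    if ids_total >= 2 and ids_agreeing / ids_total > 2 / 3:
        return "Research Grade"
    return "Needs ID"

print(quality_grade(True, 2, 2, False))  # Research Grade
print(quality_grade(True, 2, 3, False))  # Needs ID (exactly 2/3 is not more than 2/3)
print(quality_grade(True, 2, 2, True))   # Casual
```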
Not to derail too much, but since it is related to the current concept of what “research grade” means, the default assumption on each observation is actually wild, not unspecified. There isn’t a middle ground currently.
Organism is wild?
0 yes, 0 no = wild
0 yes, 1 no = non-wild
1 yes, 1 no = wild
1 yes, 2 no = non-wild
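In code form, that tally works out to something like this (a toy sketch of the rule shown above, not the site’s actual implementation):

```python
# "No" votes must outnumber "yes" votes for an observation to count as
# non-wild; ties (including no votes at all) default to wild.
def is_wild(yes_votes, no_votes):
    return no_votes <= yes_votes

for yes, no in [(0, 0), (0, 1), (1, 1), (1, 2)]:
    print(f"{yes} yes, {no} no ->", "wild" if is_wild(yes, no) else "non-wild")
```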
I do agree “Export Grade”/“Data Quality Grade” should be unlinked from identifications and wildness.
I agree that we need a more descriptive and accurate term than Research Grade.
And I agree with @cmcheatle and @tonyrebelo that “Verified” doesn’t work for me. Maybe it’s just my background, but to me that term implies blessing by an authority/expert. In some cases that may be true on iNat, but not in most.
Similarly the term “consensus” doesn’t work either. To me that term implies unanimous agreement (or nearly so), and 2/3 agreement among 3 IDs isn’t quite there.
It’s a tough nut to crack; there aren’t any really ideal options that come to mind (that don’t involve too many words). The “least misleading” one I’ve seen so far is “community identified.” (Maybe we could agree on something like a 5/6 threshold of at least 6 IDs, where that would change to “community consensus”?)
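For what it’s worth, that two-tier idea is simple to state in code. Everything here, thresholds and labels alike, is just a suggestion from this thread, not anything iNat does:

```python
# Hypothetical two-tier labelling: "Community Identified" above the 2/3
# agreement rule, upgraded to "Community Consensus" at 5/6 agreement
# among at least 6 IDs.
def id_label(agreeing, total):
    if total < 2 or agreeing / total <= 2 / 3:
        return "Needs ID"
    if total >= 6 and agreeing / total >= 5 / 6:
        return "Community Consensus"
    return "Community Identified"

print(id_label(3, 4))  # Community Identified (above 2/3, but fewer than 6 IDs)
print(id_label(5, 6))  # Community Consensus (5/6 of at least 6 IDs)
```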
If we could develop some kind of system to register and vet taxon experts among our users, then maybe I could accept “Verified” for observations with their IDs. But that might be an unacceptable erosion of the community paradigm of iNaturalist.
And I also agree with @tonyrebelo that it would be good to decouple ID quality (this thread) from other attributes of data quality/suitability.
Consensus is the process of reaching agreement by a group, and it doesn’t have to be unanimous. iNat defines a consensus ID as being the taxon at which we reached 2/3 agreement. If it helps, it could have a more specific description, like “Majority Consensus Reached”.
Agreed, I guess the point being that it is another imprecise term that will have different meanings to different people, unless they bother to look up iNaturalist’s specific definition of the term. 2/3 agreement is how we currently define “Research Grade” (as to ID quality, at least), but that term suffers from similar issues.
Options so far, and my attempt at summarizing the opinions expressed:
Research Grade - easiest to implement, implies too much
(Community) Verified - still implies too much
(Community/Majority) Consensus Reached - a bit vague
(Community) Identified - emphasis on the change that usually brings an observation to this state
Export Grade / Shared with Partners - emphasis on what this state allows
I’m leaning towards “Identified”. It still has a few problems (like implying the identification is correct and/or final), but it’s descriptive and has fewer problems than the other options. It’s also a sensible counterpart to “Needs ID”.
Bear in mind that, if we get a reputation system, we could have it on a sliding scale. And technically it does not have to be the same between groups: for instance, we could have one “level” for birds and another for “weevils”. So for any terminology we decide to use, it will be handy to keep it explicit enough that the meaning is clear, but flexible enough to incorporate future developments.
However, “Net Effective Summed Reputation Score” is very clumsy (for what we now call Community Consensus) …
and we also need to consider that RG refers to more than just identification confidence… it also brings in having evidence, reaching a fine enough taxon level, and many other DQA factors besides… so maybe other good terms could be “Quality Reviewed” or “Consensus Reviewed”, the first implying a higher level of quality to the obs, and the latter reflecting input from members of the community. When/if the sliding scale from a reputation system comes in, we could have “Quality Review Score” or “Consensus Review Score”.
I wish we would dispense with RG, or any other term, and just show the metric of agrees/disagrees. Let each user judge for him/herself what that implies.
Well, the current metric is that observations are moved off the “Needs ID” list when more than 2/3rds of species-level IDs are in agreement. I suppose this could be made to work if the “Needs ID” list had a filter that defaulted to more than 2/3rds but could be set to an arbitrary percentage. Then the observations above the percentage a user chose would be called… what?
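Something like this, perhaps. A sketch with invented parameter names, not a proposal for the actual implementation:

```python
# An observation stays on the "Needs ID" list while its species-level
# agreement is at or below a user-chosen threshold, defaulting to the
# current "more than 2/3" rule. Exact fractions avoid float surprises.
from fractions import Fraction

def needs_id(ids_agreeing, ids_total, threshold=Fraction(2, 3)):
    if ids_total == 0:
        return True
    return Fraction(ids_agreeing, ids_total) <= threshold

print(needs_id(2, 3))                  # True under the default 2/3 cutoff
print(needs_id(2, 3, Fraction(1, 2)))  # False for a laxer 50% cutoff
```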
I like the idea of not having to argue about what to call them, but I think the site does need a label for them.