- Designating experts
> Designated experts must provide documented evidence of their qualifications or professional experience.
This is justifiably the weak link in any reputation system that recognizes experts. Simply put, there is no good way to rank experts. Some amateurs with no credentials totally outrank leading experts with impressive publication lists. Field researchers and museum/herbarium technicians may have far superior knowledge to the taxonomists using their specimens. And some taxonomists publish only a single magnum opus, whereas others publish almost daily. Still, on average, field ecologists and museum/herbarium staff who regularly make identifications are the gold standard: you are unlikely to get a better and more trustworthy ID than by taking a specimen to your local reputable museum/herbarium and asking the relevant staff. If there were a way of estimating how close identifiers are to that gold standard, that would be top prize.

A reputation system based on activity on the site - on identifications made that are agreed or disagreed with, ideally weighted by the reputations of those agreeing or disagreeing - is a perfect way of ranking and advancing novices. And it is easily implementable: the real-time computational overhead is trivial (although far more complicated methods could clearly be designed, including AI and neural-network options).
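To make the idea concrete, here is a minimal sketch of the kind of update rule described above: an identification's weight is the sum of its supporters' reputations, and an identifier gains reputation when other users (weighted by their own reputations) agree. All function names and constants here are illustrative assumptions, not iSpot's actual code; the 500-vote cap is the novice ceiling mentioned later in this post.

```python
def id_score(supporter_reputations):
    """Total vote weight behind one identification:
    the sum of the reputations of everyone supporting it."""
    return sum(supporter_reputations)

def update_reputation(identifier_rep, agreer_reps, gain=0.1, cap=500):
    """Bump an identifier's reputation by a fraction of each agreeing
    user's reputation, so agreement from an expert (rep 1000) counts far
    more than agreement from a novice. Capped at the novice ceiling.
    The gain of 0.1 is an arbitrary illustrative choice."""
    for rep in agreer_reps:
        identifier_rep = min(cap, identifier_rep + gain * rep)
    return identifier_rep
```

For example, a novice at reputation 10 who is agreed with by a single expert (reputation 1000) jumps to 110, while the same agreement from another novice at 10 would add only 1. This is why, as argued below, a few externally designated experts are enough to "train" the whole system.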
The big problem we found on iSpot is that the reputation system never got established and never advanced - in fact it failed totally - without externally designated experts to "train" it. (In fact, for fish in South Africa, we had to choose our most frequent identifiers - those who seemed reasonably competent - and make them experts (at rank Knowledgeable) in order to get the system to work.)
The biggest problem was that the leading identifiers (including experts) tended not to get agreements, because few users were competent to agree with them. On iSpot this was never an issue: their single ID, carrying 1000 votes, guaranteed the equivalent of "Research Grade", so there was never an incentive to "push the observation to Research Grade" - it was already there. And this is the failing of entirely internal systems of reputation. With the experts unknown to the system, experts tend not to get any agreements because few people are qualified to agree with them. (Or alternatively, users game the system: "I will agree with any expert on any ID they make in their group, because I trust them, because the reputation system does not value them, and because the odds are that of all people on earth - let alone on iNaturalist - they are the most likely to have made the best ID possible, even if wrong!")
So, having established that an outside imposition of "Expert" status is needed, the question is how to implement it. There is no doubt that any designation of experts will be a burden on curators and staff, and will require some system of standards. iSpot southern Africa was a very small community, and it was trivial for the curator (there was only one) to do a Google Scholar search and check credentials. But I should point out that the taxonomic community in southern Africa is a fraction of that in California, let alone any European or American country.
But the system was greatly enhanced by other experts welcoming newcomers and alerting the curators when an expert visited, often providing CVs or links to CVs or publication lists, or just strong commendations. Within any group (apart from vertebrates) the community is usually quite well known and connected, as are the factions and frauds. On iSpot we set the bar low: a single refereed publication (but not a self-published monograph or journal) was good enough for Expert status and earned a pegged reputation of 1000 votes per ID. These experts "trained" the reputation system, as well as mentoring and training keen novices, who, in order to advance in reputation, had to learn how to make valid identifications, and so established a rapport with the local experts in their interest group. It worked surprisingly well. (Experts and reputation were badged on iSpot, making the identification of experts inescapable - but let us not muddy the waters about displaying reputations: that is another debate.)
iSpot also had a second category below Expert, that of "Knowledgeable". That earned only 500 votes - the maximum that a novice could attain, i.e. half an expert (but that need not be so, and could be adjusted up or down in an alternative scheme). Unlike experts, who often nominated themselves, Knowledgeable people were invariably nominated by the community as someone who was locally exceptionally adept in their group. Vetting these nominees was almost impossible, but we let the status through given two nominations.
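Pulling the numbers in this post together, the pegged weights can be sketched as follows. An Expert's ID carries 1000 votes and a Knowledgeable user's 500, while ordinary users carry their earned reputation capped at 500; the community ID is simply the proposal with the largest summed weight. The vote counts match those described above, but the function and taxon names are made up for this example.

```python
# Pegged weights for externally designated ranks (iSpot's values).
WEIGHTS = {"expert": 1000, "knowledgeable": 500}

def vote_weight(user):
    """user is a (rank, earned_reputation) pair. Designated ranks get a
    pegged weight; everyone else uses earned reputation, capped at the
    novice ceiling of 500."""
    rank, earned = user
    return WEIGHTS.get(rank, min(earned, 500))

def leading_id(votes):
    """votes maps each proposed taxon to the list of users supporting it.
    Returns the taxon with the greatest total vote weight."""
    totals = {taxon: sum(vote_weight(u) for u in users)
              for taxon, users in votes.items()}
    return max(totals, key=totals.get)
```

With these weights, a single expert ID (1000 votes) outweighs three well-established novices at 120, 300 and 450 votes (870 in total) - which is exactly why, as noted above, an expert's lone identification settled an observation without needing any agreements.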
I don't know if iNat need go here. iNat has sufficient users for a reputation system to discover and reward regular users who are knowledgeable, based on their contributions. The only issue I foresee is that someone really knowledgeable may have to make a few hundred new identifications in order to earn their reputation, instead of being awarded it outright.
As a curator on iNat, I would be more than willing to take on the minor extra curation of vetting and assigning experts for southern Africa - including the many European and American taxonomists who contribute immensely to our knowledge and understanding of southern African fauna, flora and fungi.
I applaud them: I wish iNat would too!