It’s never been put into a formal thread, or even a request, as the mods have suggested be done, so I am entering this to formalize and centralize it.
My creation of this topic is in no way an endorsement from me; I will not vote for it and will not support it.
Issue: observations are identified and promoted to research grade by individuals unqualified to do so, which undermines the scientific credibility of the site.
Proposed solution: implement an iSpot-style ‘reputation system’ whereby one’s reputation score dictates one’s influence on, or ability to add, an ID.
iSpot granted 1000 ‘points’ towards the ID of a record to designated experts.
Non-experts are granted 1 point.
Non-experts may be promoted to additional points only by making leading IDs that are validated by an expert. As an example, perhaps every 50 leading IDs a non-expert makes gains them 1 extra ID point.
Users who add their own observations with an ID that is later confirmed by an expert also count those as leading IDs towards gaining points.
Designated experts must provide documented evidence of their qualifications or professional experience.
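The proposed point scheme above can be sketched in a few lines. Everything here (class names, the `IDS_PER_BONUS_POINT` threshold, the promotion formula) is an illustrative assumption based on the description in this post, not anything iSpot or iNat actually implements:

```python
# Minimal sketch of the proposed iSpot-style point scheme.
# All names and thresholds are illustrative assumptions.

EXPERT_POINTS = 1000          # points per ID for a designated expert
NOVICE_POINTS = 1             # starting points for everyone else
IDS_PER_BONUS_POINT = 50      # validated leading IDs needed per extra point

class User:
    def __init__(self, is_expert=False):
        self.is_expert = is_expert
        self.validated_leading_ids = 0   # leading IDs later confirmed by an expert

    @property
    def id_points(self):
        """Weight this user's vote carries toward an identification."""
        if self.is_expert:
            return EXPERT_POINTS
        return NOVICE_POINTS + self.validated_leading_ids // IDS_PER_BONUS_POINT

expert = User(is_expert=True)
novice = User()
novice.validated_leading_ids = 120   # 120 leading IDs confirmed by experts

print(expert.id_points)   # 1000
print(novice.id_points)   # 1 + 120 // 50 = 3
```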
Discuss…
EDIT - please note I was not an iSpot user, any inaccuracy in my description is accidental and I will be glad to fix if pointed out.
I won’t get too far into it like last time, since everyone knows how I feel, so I’ll just say: if we ever have a reputation system, I personally think it should be based only on iNat activities and not external criteria (PhDs, popularity, number of papers, whether one is paid to identify things or not, etc.).
And I’ll say again that if it happens, I think it should have an opt-out feature. If by some stroke of luck I end up the top IDer on a taxon (it has happened before) and that feeds into whatever calculates my reputation, I want the ability to override the high reputation it would give me on that taxon.
1000 points: it’s just a ratio. It went from 1 to 1000, but it would make no difference if it were 1 to 10 or 1 to 5; the value is arbitrary.
On iSpot a newcomer’s ID did nothing but act as a placeholder until you had made at least 10 IDs that a non-newbie had agreed with. This would immediately lessen the workload of our IDers, as we wouldn’t have to undo the bad IDs of newbies who misuse the CV or who make joke IDs, etc.
iSpot failed us, and we left, but the problem was definitely not the reputation system.
I’ve experienced both platforms, and I could go on at length about why iNat is a thousandfold better than iSpot, so don’t get me wrong: I’m not knocking iNat. I’m just saying I’ve experienced both systems, and that iNat would benefit from some form of reputation system.
I am strongly in favor of a “reputation system” being implemented. No “opt outs”… or at least relegate those observations such that they don’t become research grade and don’t show on the map.
One big advantage to this, aside from the reduction in misidentifications, is the impetus this might create for users to become an “expert” in more taxa. I’m professionally an entomologist, but I hardly ever ID insects on here due to how many are misidentified and the overall low quality of the curation. If instead I was granted some sort of “expert badge” that allowed me to easily override bad IDs, I’d be far more inclined to contribute to these groups. I doubt I’m the only “expert” who feels this way.
So a novice vote is equal to the leading taxonomist on earth who has studied the group for 40 years?
The claim that a novice is equal to an expert is frankly just insulting!
If iNaturalist wants experts to contribute, if it wants accurate and trustworthy identifications, it will eventually have to credit experts with a little bit of knowledge.
Taxonomy isn’t a democracy. There is a right and a wrong answer to what a species is or isn’t. Allowing everyone an equal vote in this is a recipe for misidentifications. It’s egregious that this errant data then gets shuffled on to other databases. I’m all for novices contributing, but the current standard has the effect of diminishing the role true taxonomic experts should have here. For an example of a properly functioning biodiversity website, take a look at bugguide.
Likely IDs:
Whereas iSpot had a great reputation system, it did not have a very sophisticated “Likely ID” algorithm. Basically, an identification train could become “Likely” if it exceeded another identification train by a single vote. And the baseline was a single vote, so an observation could become “Research Grade” from even a total novice.
The iNat system of a two-thirds majority on more than two votes is a far better ratio. Even with a reputation system with voting levels, I would strongly recommend keeping the ratio of a 2/3 majority among more than 2 experienced identifiers (where “experienced” is some arbitrarily determined level of proficiency, which may even vary between groups based on the number of identifiers and their reputation profiles; NB: this might perhaps count votes rather than voters, i.e. a minimum number of votes rather than of voters).
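The consensus rule as described here (a greater-than-two-thirds majority among more than two votes) can be sketched as follows. This follows the poster’s description, not iNat’s exact production algorithm, and treats all votes as equal:

```python
# Sketch of the consensus rule described above: an ID wins only with
# a greater-than-two-thirds majority on more than two votes.
# This mirrors the description in the post, not iNat's real algorithm.
from collections import Counter
from fractions import Fraction

def community_id(votes):
    """votes: list of taxon names, one per identifier.
    Returns the winning taxon, or None if no qualified majority."""
    total = len(votes)
    if total <= 2:                     # "more than two votes" required
        return None
    taxon, count = Counter(votes).most_common(1)[0]
    if Fraction(count, total) > Fraction(2, 3):   # strict majority test
        return taxon
    return None

print(community_id(["Danaus plexippus"] * 3))                  # qualifies
print(community_id(["Danaus plexippus", "Danaus gilippus"]))   # too few votes
```

Note that using `Fraction` keeps the two-thirds comparison exact, so a 2-of-3 split (exactly 2/3) correctly fails the strict majority test.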
Bugguide. https://bugguide.net/help
In what way is this better than iNaturalist for identifications: I briefly looked around but could not figure out how the IDs were done, and how ID disputes were resolved.
So how is this better?
@joe_fish Can we get some clarity on what you mean? I’m just having trouble understanding why you would deny a user the option to remain a newbie in the reputation system. Maybe I know I’m just a good guesser and don’t really know squat about these taxa.
And if opting out of community ID is something you’re recommending, that would greatly alter the relationship that iNat has with some of its users. The observations set to opt out of community ID are already removed from most searches and cannot be research grade.
The more I think about it… opting in should be required for the reputation system if adopted. I don’t ever want to be mislabeled as an expert in one taxon because of correct IDs in another. I suppose I’d recommend opting in for any taxonomic group you’d like to be evaluated as an expert in. This is starting to feel unwieldy.
IDs on bugguide are made by curators. Poor-quality images are discarded (“frassed”, in bugguide’s parlance), though I believe the biogeographic and temporal data is saved. The end result is that the data contained on bugguide is enormously useful. It’s possible to make reasonably accurate IDs to species level just by skimming through the photographs and reading comments. The dialogue that accompanies an observation also tends to be far more informative, with experts weighing in on diagnostic characters that are or are not present. Every entomologist I know makes ample use of it. Compare that to iNat, where an ID is simply a bunch of “agrees” from users of questionable expertise.
[Edit: I should rephrase that. A user can ID things directly by uploading their observation to the relevant species-level page. But a curator can move it at their discretion. In this way, a single knowledgeable expert can maintain the accuracy of whatever taxon they have expertise in. This process is far more laborious on iNat.]
I may have misinterpreted what you meant by “opt out”. The current iteration, where a user refuses the community ID, is anathema to how a site like this should function. I’ve regularly seen misidentifications from opted-out users that were research grade or incorporated into the maps here.
With regards to a reputation system, no user should be able to opt out such that their observations are identified by some other metric. Now, if a user wants to opt out of becoming an expert… sure, why not.
I won’t ID then. It’s that simple. I’ll identify my own observations because they’re mine, but I’m not going to stress myself out trying to help identify others’ observations when my identifications are not really of any value. Frankly it’s embarrassing to be worth 1 point out of 1000 (if I understand this system). I’m not an expert, but I know a Monarch when I see one–I assume that’s helpful knowledge right now. Under such a system, it wouldn’t matter that I know what a Monarch is–it wouldn’t count for much. Currently, yes, I make mistakes, but I thought that was part of the beauty of iNaturalist–someone more knowledgeable corrects me; the ID is then accurate; I learn, and we all move on. I don’t worry about having lost points or value here–I just am more careful thereafter and more knowledgeable. If this system were in place when I joined, I would have never even tried an identification. Just my thoughts.
Bugguide is different. iNat isn’t supposed to be like bugguide. if people were ‘frassing’ my photos i doubt i’d have stuck around. It’s great that it exists, but I don’t think making iNat more like bugguide is a good idea.
I also want to point out that even if iNat wanted to create a reputation system, it’s a huge undertaking. And, a lot of people don’t want it, and despite some people not liking how iNat works, overall it’s been very successful. So… i hope no one reading this thinks we are about to do something like this and panics. It’s very unlikely any time soon.
Fine, like I said in my other post: turn it off completely, don’t let the amateurs screw up the data. Simply turn it into an academic site where the experts get their minions to go out and do their collecting for them.
I’d personally rather be told I am completely worthless than be told I am 1/1000th the value of someone else.
I assume the goals of this website are to #1: encourage more naturalists #2: provide accurate IDs
A reputation system has the potential to provide more accurate IDs. I don’t think anyone would argue that. But it should also have baked into its formula a feedback that improves one’s score with every correct identification, with the result being that anyone (even you @cmcheatle) can eventually become an expert if you prove you possess the requisite knowledge. I would think that would encourage more participation.
Designating experts: “designated experts must provide documented evidence of their qualifications or professional experience”
This is justifiably the weak link in a reputation system that recognizes experts. Simply put, there is no good way to rank experts. Some amateurs with no credentials totally outrank leading experts with impressive publication lists. Field researchers and museum/herbarium technicians may have far superior knowledge to the taxonomists using their specimens. And some taxonomists publish only a single magnum opus, whereas others publish almost daily. Still, on average, field ecologists and museum/herbarium staff who regularly make identifications are the gold standard: you are unlikely to get a better and more trustworthy ID than by going to your local reputable museum/herbarium with a specimen and asking the relevant staff. If there can be a way of estimating how close identifiers are to that gold standard, that would be top prize.

A reputation system based on activity on the site - on identifications made that are agreed to or disagreed with (and ideally weighted by the reputations of those agreeing or disagreeing) - is a perfect way of ranking and progressing novices. And it is easily implementable: the real-time computation overhead is trivial, although far more complicated methods, including AI and neural-network options, could clearly be designed.
The big problem that we found on iSpot is that the reputation system never established itself, never advanced - in fact it failed totally - without externally designated experts to “train” it. (In fact, for fish in South Africa, we had to choose our most frequent identifiers - who seemed reasonably competent - and make them experts (at rank Knowledgeable) in order to get the system to work.)
The biggest problem was that the leading identifiers (including experts) tended not to get agreements, because few users were competent to agree with them. However, this was never an issue, because their single ID (of 1000 votes) guaranteed “Research Grade”, so there was never an incentive to “push the observation to Research Grade” - it was already there. And this is the failing of entirely internal reputation systems: without knowing who the experts are, the system gives them no weight, and experts get no agreements because few people are qualified to agree with them. (Or, alternatively, users game the system: I will agree with any expert on any ID they make in their group, because I trust them, because the reputation system does not value them, and because of all people on earth (let alone on iNaturalist) they are the most likely to have made the best ID possible - even if wrong!)
So, having established that an outside imposition of “Expert” status is needed, the question is: how to implement it? There is no doubt that any designation of experts will be a burden on curators and staff, and will require some system of standards. iSpot southern Africa was a very small community, and it was trivial for the curator (there was only one) to do a Google Scholar search and check credentials. But I should point out that the taxonomic community in southern Africa is a fraction of that in California, let alone any European or American country.
But the system was greatly enhanced by other experts welcoming newcomers and alerting the curators when an expert visited, often providing CVs or links to CVs or publication lists, or just strong commendations. Within any group (apart from vertebrates) the community is usually quite well known and connected, as are the factions and frauds. On iSpot we set the bar low: a single refereed publication (but not a self-published monograph or journal) was good enough for expert status and earned a pegged reputation of 1000 votes per ID. These experts “trained” the reputation system, as well as mentoring and training keen novices, who, in order to advance in reputation, had to learn how to make valid identifications, and so established a rapport with the local experts in their interest group. It worked surprisingly well. (Experts and reputation were badged on iSpot, making the identification of experts inescapable - but let us not muddy the waters about displaying reputations; that is another debate.)
iSpot also had a second category below Expert, that of “Knowledgeable”. It earned only 500 votes - the maximum that a novice could attain, i.e. half an expert (but that need not be so, and it could be adjusted up or down in an alternative scheme). Unlike experts, who often nominated themselves, knowledgeable people were invariably nominated by the community as someone who was locally exceptionally adept in their group. Checking up on these was almost impossible, but we let it through on two nominations.
I don’t know if iNat needs to go here. iNat has sufficient users for a reputation system to discover and reward regular users who are knowledgeable based on their contributions. The only issue I foresee is that someone really knowledgeable may have to make a few hundred new identifications in order to earn their reputation, instead of being awarded it outright.
As a curator on iNat I would be more than willing to take on the minor extra curation of vetting and assigning experts for southern Africa - including those many European and American taxonomists that contribute immensely to our knowing and understanding our southern African fauna, flora and fungi.
I applaud them: I wish iNat would too!
Are you really contributing to this debate? A reputation system immediately allows users to advance, to improve to become an expert.
You are introducing a ridiculously simple system of experts = 1000 votes and everyone else = 1 vote. That is not a decent reputation system. A well-designed system will start novices at 1 vote but allow them to progress, by making original identifications that are agreed to by others, up to the level of almost an expert (at some arbitrary cut-off, probably between 500 and 999 votes). Properly constructed, the reputation earned for any ID will be proportional to the sum of the reputations of those who agree with it.
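That progression rule - reputation earned in proportion to the summed reputation of those who agree, capped just below expert level - can be sketched as below. The earning factor and the cap value are purely illustrative assumptions, not figures from iSpot:

```python
# Sketch of the progression rule described above: a leading ID earns
# reputation proportional to the summed reputation of those who agree,
# capped just below expert level. EARN_FACTOR and NOVICE_CAP are
# illustrative assumptions, not real iSpot parameters.

EXPERT_REP = 1000
NOVICE_CAP = 999          # novices can approach, but never reach, expert weight
EARN_FACTOR = 0.01        # fraction of agreeing reputation credited per ID

def credit_leading_id(author_rep, agreeing_reps):
    """Return the author's new reputation after a leading ID is agreed with."""
    earned = EARN_FACTOR * sum(agreeing_reps)
    return min(author_rep + earned, NOVICE_CAP)

rep = 1.0
rep = credit_leading_id(rep, [EXPERT_REP])   # one expert agrees: +10
rep = credit_leading_id(rep, [1, 1, 1])      # three novices agree: +0.03
print(round(rep, 2))   # 11.03
```

This captures the gaming-resistance point made below: agreements from low-reputation users are worth little, so farming easy IDs among novices earns reputation very slowly compared with IDs confirmed by experts.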
Sure you can game the system. It is easy: all you have to do is post hundreds of observations and correctly identify them first and then get the high reputation users to agree with you. But you will discover that few experts or regular identifiers will agree with 200 House Sparrow IDs, and they will focus their efforts on rarer and more exciting identifications, which means that these observations will be agreed to by lower reputation users, requiring - in a well designed system - a lot more observations to earn a reputation.
And if anyone is willing to go looking for rare species to game the system: then that is exactly what any good reputation system is about. Learning and rewarding those who contribute and are willing to learn more.
An “academic site where the experts get their minions to go out and do their collecting for them” is simply absurd.
A reputation system will not create minions, but will create experts. It will allow interested identifiers to hone their skills and benefit from the input of the world’s experts on a one-on-one basis, personal and up front. What better way to unite the professional and aspiring naturalists of the world into one exciting and competent family?
Do you really have no naturalist heroes? Would you not like to interact with and learn from them on iNaturalist? A reputation system enhances the experience of working on citizen science sites. Ask anyone from iSpot: I don’t think you will get a single nay vote on this issue!