It would also be helpful to filter by net agreements. I would like to find all the observations where the observer agreed to a later ID, and the net agreements are less than three.
Perhaps a popup might suffice.
A notice, shown when you click to confirm an identification made by others, that says something like:
"Are you really convinced, based on what you personally know, that the identification you agree with is the correct one?
If you are not convinced, it is better to wait for someone else who can independently confirm the identification."
Then two buttons: "Yes" and "No".
That would double the effort for every identification. I'd be very anti-that, as it's what I spend most of my time doing.
As I have said before, I think the system is OK. A good identifier will take time before confirming an ID. I suspect that the number of "just agree" IDs is limited. Also, if a person wants to use iNat data for research, it should be up to them to check the veracity of the IDs. I take my time, but still make mistakes - helps us learn.
If it was only on your own observations, it would not be a significant burden, and it would likely cut down on the practice of improperly agreeing with the first thing suggested.
Agreed. I'm really not a fan of trying to influence the behavior of a subset of users by making the system more difficult to work with for all users.
I like it the way it is now. "Research Grade" means some level of accuracy is likely. We could raise that level, but in the process, we'd exclude some observations from the new level. It seems to me that the marginal gain in accuracy would not justify the loss of research level observations.
Encouraging users not to automatically "pile on" an identification would help. Where more accuracy is required, I believe it's already possible to search for research grade observations with a certain number of confirming IDs.
I agree.
Just because an observation has not attained "research grade" does not invalidate its use in research. There will always be some obs at research grade that don't belong there.
Give the researcher some credit. It is their choice what to include.
I see no need for change, as there will always be some who don't do as they should, whether accidentally or on purpose.
What you are describing is a cognitive bias known as the frequency illusion:
after noticing something for the first time, there is a tendency to notice it more often, leading someone to believe that it has a high frequency of occurrence.
Before proposing a fix, you need to quantify the problem. As I see it, your alleged problem has two factors:
- How often, out of all RG observations, RG has been achieved solely through the OP agreeing with the first person to ID to species.
- How often, out of all of those where (1) has happened, the consensus ID has been incorrect.
These are things which can be measured.
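For what it's worth, the first of these rates would be straightforward to compute if per-observation identification histories were available. A minimal sketch, assuming a hypothetical record layout (these field names are illustrative, not an actual iNaturalist API schema):

```python
# Sketch: measure (1) how often RG was reached solely by the observer
# agreeing with the first species-level ID, and (2) among those, how
# often the consensus was later overturned. All fields are hypothetical.

def quantify(observations):
    """observations: list of dicts, each with an ordered 'ids' history.

    Each ID record: {'user': str, 'taxon': str, 'is_observer': bool}.
    'overturned': True if the consensus taxon later changed.
    """
    rg_by_op_agree = 0
    overturned = 0
    for obs in observations:
        ids = obs["ids"]
        if len(ids) < 2:
            continue
        first, second = ids[0], ids[1]
        # (1) RG reached solely by the observer agreeing with the
        # first person to suggest a species-level ID.
        if (not first["is_observer"] and second["is_observer"]
                and first["taxon"] == second["taxon"]):
            rg_by_op_agree += 1
            # (2) of those, how often the consensus was later wrong.
            if obs.get("overturned", False):
                overturned += 1
    total = len(observations)
    rate1 = rg_by_op_agree / total if total else 0.0
    rate2 = overturned / rg_by_op_agree if rg_by_op_agree else 0.0
    return rate1, rate2

# Toy data: one OP-agree observation that was later overturned,
# one observation confirmed by two non-observers.
obs = [
    {"ids": [{"user": "a", "taxon": "X", "is_observer": False},
             {"user": "op", "taxon": "X", "is_observer": True}],
     "overturned": True},
    {"ids": [{"user": "a", "taxon": "X", "is_observer": False},
             {"user": "b", "taxon": "X", "is_observer": False}]},
]
print(quantify(obs))  # -> (0.5, 1.0)
```

The second rate is the hard one in practice, since "overturned" requires knowing the eventual correct ID, which is exactly the data we don't have.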
Again, yes, it is "too dramatic step", particularly when you have not quantified the scale of the alleged problem.
Again, you have failed to quantify the problem. You have no basis to say "a lot of bad data" without … you know … actual data.
First empirically quantify the scale to demonstrate that it is a problem worth acting upon.
I disagree. Your way appears designed to lead to inaction on problems.
However, discouraging users from adding extra agreeing IDs will reduce the number of high-confidence IDs. I think we SHOULD add extra, confirming IDs if we can be bothered, and this will allow data users to select observations with relatively high confidence in their ID.
It would help if the withdraw button was visible:
https://forum.inaturalist.org/t/make-the-withdraw-function-visible-as-a-button-on-the-observation-block-with-a-connected-tool-tip/14659
Please provide evidence that the "problem" exists; that this is a substantive issue. That is what I am asking. It is not impossible, nor is it unreasonable.
Energy and effort on the part of those who run the site, as well as of its volunteers, have a cost and are finite. Blindly asking others to do something by alleging a "problem" without evidence risks diverting those energies from more important or more productive matters.
Each approach demands effort, but measuring the scale of an issue carries far less risk and requires far less effort than changing how the site works without any evidence.
Problem is, we almost never know why they are concurring. They may have hit the agree button as a "like" or "thank you" to the identifier, or may have selected a Computer Vision suggestion because they didn't know a better option to use. Or, they may have done either of those things because they actually knew what the organism was, and it was a convenient short-cut for adding their own ID.
I think the better solution is to provide a "like" or "thumbs-up" option on IDs and comments, and to rename the Agree button to something that more clearly describes what "Agree" is supposed to mean.
Exactly. There are plenty of "research grade" scientific specimens in herbaria and museums that are not (yet) correctly identified. They still have great value for research - arguably more value for the very reason that they may be difficult to identify because they don't fit existing taxonomic hypotheses very well.
In that sense, every observation on iNaturalist is "research grade" if it contains any discernible evidence at all of what the organism was, and where and when it was seen. On iNaturalist, the "proxy" for sufficient evidence has been whether more than 2/3 of the identifiers can agree on what it is. But lack of such agreement doesn't mean that there is a lack of sufficient or valuable evidence.
While it is possible, I think it is unreasonable to expect it at this stage of the conversation, and unnecessary.
What is happening here is just spitballing - someone throws an idea in and it gathers feedback from people with the same or different perspectives, supported by opinions and a bit of anecdata. The amount of energy required to do this is minimal. It's an iterative, incremental way of making sure that not too much effort is put into a change before the change is deemed worthwhile - "good"* ideas gather momentum, "bad"* ideas wither on the vine.
When (well, if) further investigation is deemed worthwhile, then sure, go do a proper problem assessment, sizing, impact analysis, cost-benefit assessment, etc.
*Subjective, ultimately. I suppose "popular" and "unpopular" would be more appropriate.
Careful there! You're using a classic debating technique to stall a conversation.
You're under the mistaken impression that one needs to provide double-blind, peer-reviewed references in order to have a conversation.
You can have 2 people blindly agree as easily as one, so the change imo would create more problems than it would solve. Not all species/regions have the same number of available identifiers.
Besides, no one says you need to use research grade; you can use the number of identifications for your work. If you require 4, then use only records with 4 - your choice.
No, this is about someone making a specific claim with a complete absence of evidence:
(Moreover, I have offered a plausible cause for the impression that @paulexcoff has noted.)
Such claims, made without evidence, should be dismissed:
"What can be asserted without evidence can also be dismissed without evidence." It implies that the burden of proof regarding the truthfulness of a claim lies with the one who makes the claim; if this burden is not met, then the claim is unfounded, and its opponents need not argue further in order to dismiss it.
https://en.wikipedia.org/wiki/Hitchens's_razor
Maybe a problem does exist, but if one does, one should not jump to the next step (as @paulexcoff did), asking, "what must be done about it?" Rather, the next question should be, "Can we quantify the alleged problem? And if so, how?"
I've done plenty of IDs (roughly as many as Paul) and I'm not seeing a problem as substantial as he seems to be claiming. Hence the need to quantify it. Otherwise the effort (1) wastes time and (2) sets perfection above "good enough", despite the 50% increase in effort that his suggestion implies.
In the scenario where identifiers A and B agree and user C comes along to disagree, it currently requires at least 1 more person to agree with A or 3 more people to agree with C to reach RG again. If the bar was increased to 3, does the math for maverick adjust accordingly? That's asking a lot when identifiers are outnumbered 10:1.
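The arithmetic here follows from the greater-than-2/3 agreement threshold. A minimal sketch of just that threshold check (the real community ID algorithm also walks the taxonomic tree and handles ancestor-level disagreements, which this ignores):

```python
# Simplified view of the >2/3 agreement rule for a community ID:
# a taxon is the community taxon only if strictly more than two
# thirds of the identifications support it.

def reaches_consensus(agree: int, disagree: int) -> bool:
    total = agree + disagree
    return total > 0 and agree / total > 2 / 3

print(reaches_consensus(2, 0))  # True:  2/2 > 2/3
print(reaches_consensus(2, 1))  # False: 2/3 is not strictly > 2/3
print(reaches_consensus(3, 1))  # True:  one more agreement restores it
```

This shows why a single dissent from user C knocks an observation with two agreeing IDs out of RG, and why one more agreement with A is enough to restore it.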
I think your point has been made.
The community can still have a conversation around the OP's hypothesis, even if it hasn't been tested yet. Anyone who feels that it is a waste of time and effort is not obligated to participate.
Another use case for having complete and accurate identification histories be easily accessible! :-)
I agree that this kind of analysis is desirable, but expecting someone who thinks they have noticed a problem to engage in this kind of analysis before saying anything is absurd. The first part, here, should be relatively easy for someone with some scripting experience, if we had access to complete identification histories, which we don't. The second part would be exceedingly difficult, though not impossible.