Should an observation require 3 IDs to reach "research grade" when the observer is just agreeing with a suggestion?

It would also be helpful to filter by net agreements. I would like to find all the observations where the observer agreed to a later ID, and the net agreements are less than three.

4 Likes

Perhaps a popup might suffice.

A notice, when you click confirming an identification made by others, that says something like:

"Are you really convinced, based on what you personally know, that the identification you agree with is the correct one?

If you are not convinced, it is better to wait for someone else who, independently, can confirm the identification "

Then two buttons: "Yes" and "No".

3 Likes

That would double the effort for every identification. I'd be very anti-that, as it's what I spend most of my time doing.

8 Likes

As I have said before, I think the system is OK. A good identifier will take time before confirming an ID. I suspect that the number of 'just agree' IDs is limited. Also, if a person wants to use iNat data for research, it should be up to them to check the veracity of the IDs. I take my time, but still make mistakes - it helps us learn.

8 Likes

If it were only on your own observations, it would not be a significant burden, and it would likely cut down on the behavior of improperly agreeing with the first thing that is suggested.

3 Likes

Agreed. I'm really not a fan of trying to influence the behavior of a subset of users by making the system more difficult to work with for all users.

6 Likes

I like it the way it is now. "Research Grade" means some level of accuracy is likely. We could raise that level, but in the process, we'd exclude some observations from the new level. It seems to me that the marginal gain in accuracy would not justify the loss of research level observations.

Encouraging users not to automatically "pile on" an identification would help. In cases where more accuracy is required, I believe it's already possible to search for research grade observations with a certain number of confirming IDs.

7 Likes

I agree.
Just because an observation has not attained "research grade" does not invalidate its use in research. There will always be some observations at research grade that don't belong there.
Give the researcher some credit. It is their choice what to include.
I see no need for change, as there will always be some who don't do as they should, whether accidentally or on purpose.

2 Likes

What you are describing is a cognitive bias known as the frequency illusion:

after noticing something for the first time, there is a tendency to notice it more often, leading someone to believe that it has a high frequency of occurrence.

Before proposing a fix, you need to quantify the problem. As I see it, your alleged problem has two factors:

  1. How often, out of all RG observations, RG has been achieved solely through the OP agreeing with the first person to ID to species.
  2. How often, out of all observations where (1) has happened, the consensus ID has been incorrect.

These are things which can be measured.

Again, yes, it is "too dramatic step", particularly when you have not quantified the scale of the alleged problem.

Again, you have failed to quantify the problem. You have no basis to say "a lot of bad data" without… you know… actual data.

First empirically quantify the scale to demonstrate that it is a problem worth acting upon.

3 Likes

I disagree. Your way appears designed to lead to inaction on problems.

However, discouraging users from adding extra agreeing IDs will reduce the number of high-confidence IDs. I think we SHOULD add extra, confirming IDs if we can be bothered, and this will allow data users to select observations with relatively high confidence in their ID.

1 Like

It would help if the withdraw button were visible:
https://forum.inaturalist.org/t/make-the-withdraw-function-visible-as-a-button-on-the-observation-block-with-a-connected-tool-tip/14659

3 Likes

Please provide evidence that the "problem" exists; that this is a substantive issue. That is what I am asking. It is not impossible, nor is it unreasonable.

Energy and effort on the part of those who run the site, as well as of its human volunteers, have a cost and are finite. Blindly asking others to do something by alleging a "problem" without evidence risks diverting those energies from more important or more productive matters.

Each approach demands effort, but measuring the scale of an issue carries far less risk and requires far less effort than changing how the site works without any evidence.

4 Likes

Problem is, we almost never know why they are concurring. They may have hit the agree button as a "like" or "thank you" to the identifier, or may have selected a Computer Vision suggestion because they didn't know a better option to use. Or, they may have done either of those things because they actually knew what the organism was, and it was a convenient shortcut for adding their own ID.

I think the better solution is to provide a "like" or "thumbs-up" option on IDs and comments, and to rename the Agree button to something that more clearly describes what "Agree" is supposed to mean.

Exactly. There are plenty of "research grade" scientific specimens in herbaria and museums that are not (yet) correctly identified. They still have great value for research - arguably more value, for the very reason that they may be difficult to identify because they don't fit existing taxonomic hypotheses very well.

In that sense, every observation on iNaturalist is "research grade" if it contains any discernible evidence at all of what the organism was, and where and when it was seen. On iNaturalist, the "proxy" for sufficient evidence has been whether more than 2/3 of the identifiers can agree on what it is. But lack of such agreement doesn't mean that there is a lack of sufficient or valuable evidence.

6 Likes

While it is possible I think it is unreasonable to expect it at this stage of the conversation, and unnecessary.

What is happening here is just spitballing - someone throws an idea in and it gathers feedback from people with the same or different perspectives, supported by opinions and a bit of anecdata. The amount of energy required to do this is minimal. It's an iterative, incremental way of making sure that not too much effort is put into a change before the change is deemed worthwhile - "good"* ideas gather momentum, "bad"* ideas wither on the vine.

When (well, if) further investigation is deemed worthwhile, then sure, go do a proper problem assessment, sizing, impact analysis, cost benefit assessment etc etc.

*Subjective, ultimately. I suppose "popular" and "unpopular" would be more appropriate.

5 Likes

Careful there! You're using a classic debating technique to stall a conversation.

You're under the mistaken impression that one needs to provide double-blind, peer-reviewed references in order to have a conversation.

8 Likes

You can have two people blindly agree as easily as one, so the change, imo, would create more problems than it would solve. Not all species/regions have the same number of available identifiers.

Besides, no one says you need to use research grade; you can use the number of identifications for your work. If you require 4, then use only records with 4 - your choice.

4 Likes

No, this is about someone making a specific claim with a complete absence of evidence:

(Moreover, I have offered a plausible cause for the impression that @paulexcoff has noted.)

Such claims, made without evidence, should be dismissed:

"What can be asserted without evidence can also be dismissed without evidence." It implies that the burden of proof regarding the truthfulness of a claim lies with the one who makes the claim; if this burden is not met, then the claim is unfounded, and its opponents need not argue further in order to dismiss it.
https://en.wikipedia.org/wiki/Hitchens%27s_razor

Maybe a problem does exist, but if one does, one should not jump to the next step (as @paulexcoff did), asking, "What must be done about it?" Rather, the next question should be, "Can we quantify the alleged problem? And if so, how?"

I've done plenty of IDs (roughly as many as Paul) and I'm not seeing the substantial problem he seems to be claiming. Hence my call to quantify it. Otherwise the effort (1) wastes time and effort and (2) sets perfection above good enough, despite the 50% increase in effort which he suggests.

1 Like

In the scenario where identifiers A and B agree and user C comes along to disagree, it currently requires at least 1 more person to agree with A, or 3 more people to agree with C, to reach RG again. If the bar were raised to 3, would the math for mavericks adjust accordingly? That's asking a lot when identifiers are outnumbered 10:1.
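The supermajority arithmetic in this scenario can be sketched as follows. This is a minimal sketch, assuming the simplified rule that a taxon becomes the community ID only when strictly more than 2/3 of the identifications support it; `is_community_taxon` is a hypothetical helper, and the real iNaturalist algorithm is more involved (it also walks the taxonomic tree and weighs disagreements at ancestor ranks).

```python
def is_community_taxon(agree: int, disagree: int) -> bool:
    """Simplified rule: the taxon is the community ID only when
    strictly more than 2/3 of the identifications support it.
    (Hypothetical helper; the real algorithm is more complex.)"""
    total = agree + disagree
    return total > 0 and agree / total > 2 / 3

# A and B agree, C disagrees: exactly 2/3 support, so the
# strict supermajority fails and RG is lost.
print(is_community_taxon(2, 1))  # False

# One more person agreeing with A and B gives 3/4 > 2/3,
# restoring the supermajority.
print(is_community_taxon(3, 1))  # True
```

Under this simplified rule, each dissenting ID forces more than two additional agreements to restore the supermajority, which is the asymmetry the post above is pointing at.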

2 Likes

I think your point has been made.

The community can still have a conversation around the OP's hypothesis, even if it hasn't been tested yet. Anyone who feels that it is a waste of time and effort is not obligated to participate.

8 Likes

Another use case for having complete and accurate identification histories be easily accessible! :-)

I agree that this kind of analysis is desirable, but expecting someone who thinks they have noticed a problem to engage in this kind of analysis before saying anything is absurd. The first part, here, should be relatively easily doable for someone with some scripting experience, if we had access to complete identification histories - which we don't. The second part would be exceedingly difficult, though not impossible.

4 Likes