I came across a user profile that said this:
"Account to post images from the______project studying non-lethal identification of bees in collaboration with the USFWS. All images are of bees (and sometimes wasps) that have been chilled on ice, and were subsequently released. The bees typically sunned themselves, cleaned their antennae and wings, then flew off after their photo shoot.
We are presently testing the utility of iNaturalist to identify bees from images. The same bees will be identified by molecular techniques, expert taxonomists and other machine learning software. (Thus initial identifications will be the top hit suggested by iNaturalist's machine learning algorithm)."
This user is taking the top suggestion, even if in many cases it is blatantly wrong (as in not even the right Family). The profile suggests that they would know what the proper IDs are, but they are purposely adding the wrong IDs. Which means it takes three of us to switch them over.
Is this flaggable? I'm a little annoyed that this guy is needlessly adding more work to the load.
If you think it might warrant a flag, just flag it and let curators discuss it there; no action is taken automatically when you flag something, and we aren't supposed to discuss specific situations on the forum.
I'd view this as potentially annoying but not a problem. There seems to be a good reason for what he's doing. It would be good if the person would go back and fix the IDs that are wrong, to the extent he knows them. Might be good to chat with him about it, in a non-confrontational way: you're glad he's doing the test and are wondering if he's planning to correct any wrong IDs after the data is gathered.
Many people take the top iNat suggestion when uploading images, even if they know it's wrong. If iNat tried to suspend everyone who did that, they'd never get any new users. It's not clear from that paragraph that they actually know the correct IDs. Plenty of people involved with collecting insects can't actually identify them (I was involved with a project collecting bees, and all I could tell you at the time was that they weren't Honeybees).
(!) Intentionally adding false IDs or DQA votes. We expect you to submit information that you believe is an accurate assessment of the evidence provided, and not intentionally false. It's ok to make an incorrect identification or accidentally add an incorrect date or something, but it's not ok to intentionally add an incorrect identification or add an intentionally false vote to the Data Quality Assessment.
So I would say create a flag. At least so curators can discuss it.
True, but the point is about repeated behavior and knowingly selecting IDs that are false. If an account is going to pick the top CV suggestion no matter what, at some point they will be choosing things they know are wrong.
An observer relying on CV may be unreliable, but that is not a violation. However, IDs that are intentionally wrong for any reason should be hidden per the curator guide, and an observer using the CV knowing that the CV is wrong is being intentionally incorrect. I don't know how you prove that they knew the CV was wrong in any particular observation, but it does sound like they intend knowingly incorrect IDs as part of this study, so I think a curator should message them. I can do it if you DM me who it is.
Is this account adding IDs to others' observations, or only its own? It is suspendable to add CV-based identifications to others' observations that are blatantly wrong, as a pattern of behavior.
If this account is only adding IDs to its own observations, it still may be suspendable if the account does not behave like a human (does not respond to corrections, comments, messages, etc.). I would reach out to staff (help@inaturalist.org) if this seems to be the case.
The only guidelines I know of allowing for suspension based on IDs are intentionally wrong IDs or automated IDs
Failing to respond to comments is not in and of itself enough to conclude that an account is suspendable for not being human; it would have to be combined with bot-like behavior.
**(!) Machine generated observations, identifications and comments.** We do not allow machines to generate and post content on iNat with no human oversight curating each piece of content, and any account suspected of doing so is subject to suspension and the removal of the content. Read more about what constitutes machine generated content here.
Good Form
[…]
Add accurate content and take community feedback into account. Any account that adds content we believe decreases the accuracy of iNaturalist data may be suspended, particularly if that account behaves like a machine, e.g. adds a lot of content very quickly and does not respond to comments and messages.
Examples of prohibited behavior (non-exhaustive):
[…]
Machine generated identifications would include identifications generated from machine learning algorithms or a generic data source with no human moderation/oversight.
So I think @rynxs is technically correct that posting CV IDs with no human oversight is specifically enumerated as suspendable. As he also said, we generally don't enforce that at all for the original observer. Not enforcing that rule on the original observer is basically necessary for e.g. Seek uploads, where a human is barely in the loop from the observer-ID perspective anyway. We generally also assume good faith and give warnings prior to suspension for IDers on observations by others, given that it's often hard to prove they are just clicking CV IDs with no oversight. However, if they openly admit they are doing that on observations by others, and won't stop when asked, then I guess that would be suspendable. If they admit they are doing it on their own observations, then I think maybe curators should talk to them to discuss whether their use of the site really aligns with their goals and the goals of the site.
IDs without human oversight are absolutely suspendable. I thought rynxs was saying that just using CV on others' obs and being wrong a lot was suspendable, and I am not sure any guideline actually says that.
Absolutely, admitted auto IDs and refusal to stop would be suspendable
Accounts can be suspended for repeatedly using CV-based IDs on others' observations that are wildly inaccurate. There was an account with a large number of IDs that was obviously using some kind of image recognition to suggest them (either iNat's own or something else), and after repeated warnings to stop and many users reporting this behavior, they were suspended by staff.
It is suspendable to act like a bot, in addition to actually being one. I generally recommend leaving this distinction up to staff, however (as in the example mentioned above). The lack of human response criterion is one component for determining if an account breaks the guidelines quoted by @wildskyflower.
Sounds like the person doesn't really understand iNat, or that the ID suggestions are suggestions, not definitive IDs.
This sounds like a misuse of the platform.
If they wanted to do this in a responsible way, they should instead use the CV Demo for each image and record the CV suggestion in their own separate research database (including info about which CV build it is), rather than taking the approach they are.
That's my opinion as an iNat user and someone who manages research and data for conservation work.
Yes, I agree that that is concerning. They say "we are presently testing the utility of iNaturalist", not "testing the accuracy of iNaturalist's automated suggestions". This then leads to "iNaturalist makes lots of ID mistakes".
Thanks everyone for weighing in. It sounds like I could consider flagging to be an option here.
Considering the expertise claimed in their bio, and what I read about them online, I have to assume that they have some expertise on bees to at least not get the wrong family when making IDs.
I sent them a message last night. If they ignore it and continue to act that way, I'll pass it to your court.
Honestly, this wouldn't bother me nearly as much if they were putting out some honest IDs. If they were wasting identifier time with their observations but assisting with others', I would still find it silly, since the experiment is pointless, but hell, if you're contributing, then you do you. Regrettably, this account has all of zero IDs for others.
It does seem to demonstrate a rather shocking ignorance of the site. Especially since they appear to be doing an experiment.
I was wondering about this as well. They're wasting our time on IDs, and they don't even have that many observations, so it isn't going to have any effect on the suggestions.
In fact, I just think people should freely correct the IDs. If they want to record what the CV suggestion was, they should be doing that on their own spreadsheet. We can't allow them to maintain incorrect records in iNat's database simply for the sake of their experiment.
That also might make it more obvious to them what iNat is and how it works, what "iNat's ID" actually is, and that we do in fact rather care about the accuracy of the records here.
I am a little surprised that the user has apparently never heard of morphologically monotonous taxa. They say "Thus initial identifications will be the top hit suggested by iNaturalist's machine learning algorithm", but they are uploading things like Lasioglossum. So even if they uploaded enough Sweat Bees to affect the suggestions, all that would mean is that it would suggest that for all Sweat Bees, and we'd still have stuff to clean up, which they aren't helping with.
Likewise, I struggle to see how their 47 Honey Bees are affecting the suggestions versus the 400K we currently have.
Oh, we are. There's a few of us keeping it under control. What's going to happen, though, is that in say 6 months, he's going to go through and smugly put some ID that he got from his expert, with no explanation. If he's using CO1 barcodes, I'm curious how many will be wrong anyway.
It does sound from what has been said that they misunderstand iNaturalist, but perhaps through your efforts they will learn more than they expected. Without understanding iNat, their experiment will be of no, or even negative, value; but if they understand it, their experiment could be very interesting.