As I am sure most people reading this are aware by now, this year’s CNC event has caused a lot of disruption and frustration for many iNat users and curators. In the interest of being proactive and solution-oriented, I thought I’d put together a post compiling the specific issues and the potential actions that could be taken to alleviate them in the future.
Please feel free to comment with your own issues and suggestions, and I will incorporate them into this post.
The Issues
1. Overwhelming numbers of falsified observations, including:
- Users uploading photos that are not their own (this is the biggest issue);
- Uploading old photos with the date/location falsified so they become eligible for the project;
- Uploading the same photo dozens of times with a different ID each time to increase their species “count”;
- Marking cultivated organisms as wild to prevent them from becoming casual-grade.
2. Large amounts of “sockpuppet” activity - for those who may not know the term, this is when a user creates a secondary account to agree with themselves. Usually it involves adding IDs, but sometimes it is used for Data Quality votes (for instance, a user gets upset that someone marked their houseplant “cultivated” and made it casual, so they create three more accounts just to mark it “wild” again).
3. Retaliatory harassment by new CNC users against those who flag their content or ask them to abide by iNat rules. Thankfully, there were only a handful of instances of this, although it was quite unpleasant for those targeted. But since it was mostly done via DQA votes, which can’t yet be tracked, there are probably other targets who have not yet noticed the vandalism to their observations.
4. So. Many. Misidentifications. Bad IDs are a fact of life on iNat, but in some areas upwards of 90% of the observations appear to be misidentified. Generally, this seems to stem from extreme over-reliance on the CV and accepting the first suggestion that appears - even if that means identifying a soda can as a great blue whale. Worse, the observer’s friends and classmates often immediately confirm the bad ID and push it to research grade.
5. No disincentives for cheating at the Challenge. The ‘winning’ project this year has very few legitimate observations, but since the project settings allow casual observations, all of the cultivated, incorrectly located, and copyright-flagged observations still count toward the total. This means a malicious user loses no progress by being suspended or flagged - their old observations still count, and they can simply make a new account and continue the same behavior.
6. Few CNC organizers have meaningfully helped with the workload generated by any of the above points.
Since the beginning of CNC this year, iNat users have contributed around 8,500 copyright flags. In fact, I did 3,600 of those myself (and I have the wrist strain to prove it!). This, unfortunately, is only the very tip of the iceberg - many times that number of violations are still up.
Many of us flaggers have felt the responses by CNC organizers in recent threads have been overly dismissive of this issue - and I have come to believe it may be because they simply do not understand the effort involved in what we are doing.
The top 20 CNC projects this year have generated a total of 1,222,099 observations, or 38% of the overall total. I checked the managers of all of those projects (77 accounts in total) and came up with this:
- The admin of one of the largest and most problematic projects (over 900 members) has been inactive since the day CNC started.
- 7 out of 20 projects do not have a single admin or manager who has ever submitted a flag of any kind in the entire history of their account. Do they even know how to do it?
- Only 3 out of 20 projects have had an admin or manager submit even a single relevant CNC-related flag this year. The winner is Monterrey, Mx, with 98 copyright flags, followed by Sao Paulo with 2, and South Florida in third place with a single flag.
- Another 3 projects have at least one admin who has submitted a CNC-related flag, but not for a flaggable issue (2 observations flagged for being drawings, one flagged for no organism, one flagged for being cultivated, and one seemingly normal observation flagged ‘inappropriate’ - the flagger did not respond when tagged and asked why).
- In 4 out of 20 projects, no project manager has submitted a single ID to any observation in their project.
- All told, managers of the top 20 projects have submitted 30,761 IDs for 1,222,099 observations - roughly one ID for every 40 observations.
Potential actions by CNC organizers
- CNC coordinators should take a MUCH more active role in curating this event if it is to recur in a form that involves iNat. It is unconscionable to create this much extra work for the rest of us without sharing in the burden. I do not mind contributing, but this feels more like exploitation.
- Casual observations should not be allowed. Any regional projects that include casual observations should be excluded from the overall challenge. This would allow falsified observations to be removed from the challenge and reduce the incentive for adding them. This one is very important to me personally - I despise seeing people get rewarded for cheating, especially when it has happened multiple years in a row in the same region, and each year they get applauded for “winning” and cheat more the next time around.
- Participants need clearer instructions about the purpose of the event and what constitutes an appropriate observation, with the emphasis shifted away from competition and toward accuracy.
- New or inexperienced users should not lead projects. It is impossible for someone to guide new users toward appropriate iNat behavior if they have no experience themselves. At a bare minimum, a project leader should have spent a decent amount of time as an active user of the site, be familiar with the rules, have demonstrated the ability to communicate with other users when contacted on the site, and know how the flag system works.
Potential actions by iNat staff
I have submitted a feature suggestion for having copyright flags trigger automatic warnings and (eventually) account suspensions. You can find it here. I think this, or something like it, would be a very valuable tool to have during an event like CNC.
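To make the idea concrete, here is a rough sketch of the kind of automatic escalation I have in mind. To be clear, this is purely illustrative - the thresholds, names, and data structures below are invented for the example and do not reflect how iNat’s flag system actually works internally.

```python
# Illustrative sketch only - NOT iNat's actual implementation.
# The thresholds and the Account structure are assumptions made up for this example.

from dataclasses import dataclass

WARN_AT = 3      # assumed: warn the user after this many upheld copyright flags
SUSPEND_AT = 10  # assumed: suspend the account after this many upheld copyright flags

@dataclass
class Account:
    username: str
    upheld_copyright_flags: int = 0
    warned: bool = False
    suspended: bool = False

def record_upheld_copyright_flag(account: Account) -> str:
    """Record one upheld copyright flag and return the action that would be taken."""
    account.upheld_copyright_flags += 1
    if account.upheld_copyright_flags >= SUSPEND_AT and not account.suspended:
        account.suspended = True
        return "suspend"
    if account.upheld_copyright_flags >= WARN_AT and not account.warned:
        account.warned = True
        return "warn"
    return "no action"
```

The point is simply that once a copyright flag has been reviewed and upheld, the warning and eventual suspension would happen automatically, instead of depending on a curator noticing that the same account has racked up dozens of flags.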