I agree it's not grounds for deletion per se, and suspension should be reserved for egregious cases or for users who keep going after being properly warned that they will be suspended if they don't stop. But it would be nice if there were a way to bulk-flag observations (or the pictures within them) as copyright infringement, either for an entire account or via checkboxes, for when someone has uploaded hundreds of copyright-infringing photos and few or none of their own.
I've had similar situations where the organism was only identifiable because, purely by accident, the different observers had captured different features through their differing angle or focus choices. I know duplicates can be annoying in other cases, but I don't think they're one of the major limiting factors for any use case of the site, nor among the more time-consuming issues to handle.
That would perhaps be true if the primary purpose of iNat were to generate high-quality data, but since it is to encourage people to observe nature, I think we have to accept that the mess generated by non-voluntary users (e.g. students iNatting because they were asked to) is an unavoidable side product.
Yes, I meant more like either/or: do whatever you gotta do, whether that's marking it as captive or just reviewing it, or simply carrying on if you don't know what to do.
I admit I have sometimes thought the same, but this seems a little too drastic and not easily feasible, as nothing can stop people from organizing events related to iNat.
We could consider raising the requirements for creating projects. Of course, this is no guarantee that the mess won't happen, but it could be an attempt to contain the issue. Then again, educators or teachers could simply push students to use iNat without creating a project or tying the activity to a particular event. At least in my country it is rather common to see new users all posting observations (more or less all of the same species, often with wrong IDs) from a school garden or an urban park.
I would add: for event-related projects, let's consider introducing this recommendation/rule: creators and administrators must take responsibility for what is uploaded and make every effort to fix things once the event is over. Sometimes (or often?) they simply disappear after the event ends.
Moreover, let's translate the iNat instructions on these points and, if possible, also create a visual guide. In my country almost everyone studies or has studied English, but few really understand it or are willing to read and work through a page written in English.
I don't consider this a huge problem. I saw about a dozen observations from India: a guy holding a handful of Abrus precatorius peas, and all his friends uploaded a (slightly different) picture of it. I just hope he didn't eat them. This happens when a school "explodes" and releases a bunch of students. They obviously all find the same specimens (beautiful flowers, strangely shaped fungi, mushrooms, etc.) and upload them. I don't know whether the data flow to GBIF (or other "customers") counts those things multiple times, but I guess students won't often find rare species near their schools anyway.
I agree with you that blurry, unidentifiable photos are not much use to anyone (except perhaps for the individual observer) – but this is true regardless of whether they are photos taken by one person or by several people in a group.
iNat data is collected unsystematically and idiosyncratically; it does not and cannot represent frequency data. There is an open feature request to allow users to share observations, but it is not clear how or whether this is even feasible to implement.
I don’t see that multiple people observing the same organism as part of a group is any more problematic than multiple people observing the same organism on different occasions (say, because word got around that there was a cool plant at a certain location, or a vagrant bird, etc.), or one person observing the same organism repeatedly (e.g. regular visitors at a home bird feeder). If anything, records from the same location and time are probably easier for researchers to identify as representing duplicates when they analyze their data set before use.
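To make that last point concrete, here is a minimal sketch of how a researcher might flag such probable duplicates in an exported dataset. The column names are my assumptions about a typical iNat CSV export, not a guaranteed schema; adjust them to your own export.

```python
# Sketch: flag records of the same taxon at (nearly) the same place and
# day as likely duplicates. Column names are assumptions, not the
# actual iNaturalist export schema.
import pandas as pd

df = pd.read_csv("observations.csv")  # hypothetical export file

# Round coordinates to ~1 km and bucket observations by calendar day.
df["lat_bin"] = df["latitude"].round(2)
df["lon_bin"] = df["longitude"].round(2)
df["day"] = pd.to_datetime(df["observed_on"]).dt.date

key = ["scientific_name", "lat_bin", "lon_bin", "day"]
df["likely_duplicate"] = df.duplicated(subset=key, keep="first")

print(df["likely_duplicate"].sum(), "records flagged as probable duplicates")
```

This keeps the first record of each group and flags the rest, which is exactly the pre-analysis cleanup step described above.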
The main problem I see with these group observations is that they often make more work for IDers – because the photos are scattered across different observations by different users, it isn’t possible to quickly and easily compare the features in the various photos that might allow for ID if they were in one place. But the biggest issue I’ve encountered is that sometimes these groups will incorrectly identify the organism in question and then the observers will confirm each other’s IDs, which means that IDers are left with dozens of observations that are difficult or impossible to correct, particularly if the organism in question happens to be in a taxon where there are not many specialists on iNat.
So it comes back to a need for better onboarding of new users and more awareness from instructors or organizers of bioblitzes that most new users need guidance and shouldn’t simply be set loose to observe anything in sight.
The new help site has been translated into multiple languages and covers many topics of relevance to new users. I would like to see it more prominently linked on the iNat home page, as it is currently not very easy to find unless one knows where to look for it.
So there is a planet that the select few, however many, will go to that's livable when Mother Nature takes over Earth again (fire, flood, ice, whichever the case is), and when Earth is livable for creatures again, they come back and start over?
I think the key is to make sure educators using iNat know how to use it properly. I've had my high school students make observations for outdoor projects, and as their teacher, I follow their accounts while we're doing the activity and personally review all their observations. That means marking captive/cultivated when needed, reporting copyright infringement if it happens, marking uploads with multiple subjects, talking in person to kids who are uploading unidentifiable photos, etc. I can't completely control what they upload, but I can at least give them a bad grade if they're deliberately misusing the platform, which ideally encourages them to use some discretion. And I figure it's my job to review their observations if I'm the one telling them to upload them.

When I see 100 uploads of planted trees around a school marked "wild" and sitting under "unknown" for a week, it makes me wonder whether the educator knew what they were getting into when they asked their kids to make iNat observations. Use of iNat by educators should be encouraged, but educators should also be encouraged to get to know the platform before asking their kids to use it.
I feel like a big part of the problem is the implicit push for numbers in the City Nature Challenge: get more observations, more species, more observers. It has to be more than the other competing cities, more than last year. Why? What's the purpose of all this more?
I would love to see a pivot towards emphasising quality over quantity in these events. I coordinate a City Nature Challenge project, and we get somewhere on the order of a thousand observations per event. I can just about manage to review most of them and correct observations not marked as cultivated, or with egregious misidentifications. But many projects run to tens of thousands of observations, and there is just no way any one person can keep on top of that. These numbers are explicitly encouraged through leaderboards, superlative descriptions of the "biggest event so far", and so on. It's very hard to resist the pull of "we need bigger numbers than last year".
The solution needs to be some careful consideration of how we can incentivise more connection with nature and more quality observing, rather than fixating on the numbers of observations and observers. I see some encouraging signs of that: the monthly iNat updates focus on interesting discoveries, and the CNC compiles noteworthy observations, which, if emphasised, might help us resist the pull of the more.
I have a suggestion. How about when users start an account, they need to choose between being a casual user or a research-data user? Users picking the latter could be asked to read a page on how to make good observations, and maybe even pass a five-question multiple-choice test. This way, data can automatically be flagged as casual for the casual users who aren't engaged enough to learn how to make research-worthy observations. They can just use it to ID plants, take part in a school project, or whatever else their purpose is. There could then be a toggle in the settings to move into research-generating mode in case they change their mind, as in the sketch below.
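A minimal sketch of how such an account-mode flag might behave, assuming a quiz-gated switch. Every name here (AccountMode, default_quality_grade, etc.) is hypothetical; none of it comes from the actual iNaturalist codebase or API.

```python
# Hypothetical account-mode flag for the proposal above; all names are
# illustrative, not part of the real iNaturalist schema.
from dataclasses import dataclass
from enum import Enum


class AccountMode(Enum):
    CASUAL = "casual"
    RESEARCH = "research"


@dataclass
class User:
    username: str
    # Chosen at signup; a settings toggle could change it later.
    mode: AccountMode = AccountMode.CASUAL
    passed_onboarding_quiz: bool = False

    def can_switch_to_research(self) -> bool:
        # The proposal: gate research mode behind a short how-to quiz.
        return self.passed_onboarding_quiz


def default_quality_grade(user: User) -> str:
    """Observations from casual-mode accounts start as 'casual' instead of
    'needs_id', so they never enter the research-grade pipeline unless the
    user opts in."""
    return "needs_id" if user.mode is AccountMode.RESEARCH else "casual"


# Usage example
student = User("student42")
print(default_quality_grade(student))   # casual
student.passed_onboarding_quiz = True
if student.can_switch_to_research():
    student.mode = AccountMode.RESEARCH
print(default_quality_grade(student))   # needs_id
```

The design choice is that the default is the low-stakes path: school projects and plant-ID users generate casual records automatically, and only a deliberate, quiz-gated opt-in routes data toward research grade.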
Send them automatic warnings from the site and suspend their accounts if they keep it up. Force them to get in touch with moderators to get their privileges back.
Repeat offenders? Ban 'em.
It would be useful if users could share an observation, so if a group comes across a rattlesnake on a hike, not everyone needs their own image of that one snake to make it count on their list.
Also, if there are multiple photos from multiple users, they could be pooled as supporting that one observation, with copyright remaining with the person who uploaded each photo (as in eBird).
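A rough sketch of what such a shared-observation data model might look like. The classes and field names are illustrative assumptions, not the actual iNat schema; the point is that the record counts once while per-photo copyright stays with each uploader.

```python
# Illustrative shared-observation model with pooled photos; the schema
# is an assumption, not the actual iNaturalist data model.
from dataclasses import dataclass, field


@dataclass
class Photo:
    url: str
    uploader: str            # copyright stays with this user, as in eBird
    license: str = "CC-BY-NC"


@dataclass
class SharedObservation:
    taxon: str
    observers: list[str] = field(default_factory=list)  # everyone on the hike
    photos: list[Photo] = field(default_factory=list)   # pooled evidence

    def add_photo(self, photo: Photo) -> None:
        # Pooling a photo also credits its uploader as an observer, so the
        # record counts once but appears on everyone's life list.
        self.photos.append(photo)
        if photo.uploader not in self.observers:
            self.observers.append(photo.uploader)


# Usage: one rattlesnake, one record, three people's photos
snake = SharedObservation(taxon="Crotalus atrox")
for user in ("alice", "bob", "carol"):
    snake.add_photo(Photo(url=f"https://example.org/{user}.jpg", uploader=user))
print(len(snake.photos), snake.observers)  # 3 ['alice', 'bob', 'carol']
```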