Seeing threads about people gaming the system to reach a high number of identifications…
Seeing firsthand, newcomers adding and agreeing to incorrect IDs without realising the impact on the dataset…
Seeing the mistakes amateur identifiers make even when well-intentioned (myself included!)
Hearing experts’ reluctance to use the iNat GBIF data or participate here due to larger data quality issues…
I’ve been wondering about how this could be bettered…
It makes zero sense to me that, for example:
An entomologist of global standing, a specialist in a particular family that nobody else can even begin to ID without access to a museum collection or decades of research, should have to argue and debate their ID input with any Tom, Dick or Harry who downloads the app, starts taking pictures and is convinced they’ve found an X, Y or Z.
I think the existing dynamic:
- discourages more experts from joining
- puts off those who already pitch in their time so kindly
- costs significant broader community energy
- limits the level of accuracy the AI can reach
Personally, I’d be in favour of simply empowering experts and disempowering newcomers.
e.g. something like: one expert ID = RG; three newcomer IDs = RG.
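To make the proposal concrete, here is a minimal sketch of how such a weighted Research Grade rule might be computed. This is purely illustrative: the function name, the `is_expert` flag, and the thresholds are my assumptions, not iNaturalist’s actual algorithm (which currently just requires a >2/3 community agreement among at least two IDs).

```python
# Hypothetical weighted-ID rule: one expert ID, or three newcomer IDs,
# is enough for Research Grade. All names/thresholds are illustrative.
from collections import Counter

EXPERT_WEIGHT = 3   # one expert ID counts as much as three newcomer IDs
RG_THRESHOLD = 3    # total weight needed for Research Grade

def is_research_grade(ids):
    """ids: list of (taxon, is_expert) tuples, one per identification."""
    weight = Counter()
    for taxon, is_expert in ids:
        # IDs only reinforce each other when they name the same taxon.
        weight[taxon] += EXPERT_WEIGHT if is_expert else 1
    return any(w >= RG_THRESHOLD for w in weight.values())

# A single expert ID is enough...
print(is_research_grade([("Bombus affinis", True)]))       # True
# ...while newcomers need three agreeing IDs.
print(is_research_grade([("Bombus affinis", False)] * 2))  # False
print(is_research_grade([("Bombus affinis", False)] * 3))  # True
```

One design question such a scheme leaves open is what happens when an expert and several newcomers disagree; here the taxa simply accumulate weight separately, so a dissenting expert does not subtract from the newcomers’ tally.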
I’m sure similar ideas have been floated for a long time here though… but couldn’t dig out this exact point. Can anyone explain to me what stands in the way of this sort of empowerment / what are the arguments against this by the community?
For me, all the wonderful elements iNaturalist has to offer (an attractive and addictive UI and UX, helping people learn more about the natural world, a friendly and welcoming community, an open-source and community-focussed ethos) are in no way mutually exclusive with a degree of empowerment for experts and a push towards a cleaner, better dataset.
Users are overwhelmingly willing to ask questions about IDs and withdraw/improve upon correction. Data quality issues are definitely problematic, but that misses the point of iNat: it fundamentally exists to teach people about nature, not to inform research. The latter may also occur, but that is not the main goal.
Anyone discouraged/put off by this is missing that the point is to teach people, not to get the correct ID. (I definitely need to be reminded of this all the time.)
Not if the broader community is learning to ID so that the experts don’t have to do all the work. And who would vet “experts”? That would take an enormous amount of time from the staff that they don’t have.
Maybe. Assuming there are experts whose IDs are currently not Research Grade. This is definitely the case sometimes, but I would guess that the real reason experts avoid iNat is because it takes so much time wading through common species to find the rare ones. This also misses the point that iNat is to teach people about nature, not find rare species.
And the big question remains: what is an expert? A museum or university employee? Someone with X number of publications? I know lots of amateurs who don’t fit the “expert” category but are essential IDers.
It is not possible to teach correctly about biodiversity, using iNaturalist or any other tool, without many incorrect IDs along the way. It is not possible to teach anything well without mistakes.
Crikey, man. No. That would be insane. Empowering the “experts” and disempowering the rest? That’s just absolutely insane. It’s bad enough that a large portion of “experts” on here already tyrannise the “non-experts”; actually giving them permission to do it would be the demise of iNat, as newcomers would be disrespected, it would become an unpleasant place to learn and contribute, and then nobody would want to be here.
Just look at the politics happening right now: a complete partisan divide that is tearing North America apart. That’s what happens when one side is treated worse than the other.
Also, here in Ontario, Canada, eh. The ministry folks, the so-called experts, have the worst fish identification skills I’ve personally ever seen. It is incomprehensible and unfathomable how terrible they are at it, which is why very few fish in Ontario get ID’ed on iNat. These folks are the so-called fish experts for Ontario on iNat. They rule by tyranny: what they say goes, and they will attack in swarms, in buddy systems, anybody who disagrees with them.
That’s why I don’t post fish on here anymore.
To actually condone and encourage this kind of behavior would be both inappropriate and counterproductive to people’s interest in citizen science, wildlife, etc.; it would turn them away and make them lose interest, when in fact we should be encouraging everybody to get involved and to care about the environment and wildlife.
Personally, I’m not a fan of devaluing newcomer IDs, but I would suggest boosting expert IDs. Several times, I’ve observed something and given an incorrect ID that someone else also agrees with. A few months or years later, someone with actual expertise might give a differing species ID and properly explain why, which I’ll usually concur with, but by that point the original observer who agreed with me has been long inactive, so the observation is stuck in uncertain limbo. The only hope of the new species ID reaching Research Grade (aside from the first agreer logging on after their hiatus) is someone else stumbling upon it on their own and agreeing. Also, putting more weight on expert IDs might avoid more disasters like Gerald or the South African Julia Skimmer.
Don’t stop posting fish! With the state of the environment today, fish and amphibians have the most to lose among vertebrates, especially in places like Canada where they are adapted to cooler temperatures that might not exist soon. Thus, it’s imperative that more fish be observed so researchers can track and understand more about them.
I’ve generally been against this, mostly because it impacts accessibility for the experts, as they have to go through a process (in addition to actually signing up) to validate their expertise.
However, how about something along the lines of an “I work with this taxon and am a recognised expert” check box, where people can signal their own expertise. This can then be reflected in the IDs and comments they post. Make it available at a certain (more fine) taxonomic level only.
My reasoning is that while experts shouldn’t generally be given more power, it is sometimes hard to recognise the real experts. They often have fewer interactions with iNat, so they aren’t on the leaderboards, and are more likely to make dissenting IDs (because they know something the rest of us don’t). I think this is often the reason they end up in long debates with people who know a lot less (which I have seen several times myself). An honour-system feature with a visual marker would help with this, and I hope that it would be used and self-policed appropriately (i.e., people don’t use the checkbox unless they meet the criteria, and if they do, they are politely and firmly educated on its use by those who are entitled to use it).
I think this is generally what the profile can do. However, I often suspect that because of my profile description, users quickly agree with what I’ve suggested. Then if/when I decide that I was incorrect (not terribly uncommon) there’s a dissenting ID that wouldn’t have been there otherwise. No way to know this for sure—maybe they just agree with any ID—but I’ve tried to make my profile seem less authoritative while still being helpful.
I’m referring to the idea that you’d need three newcomer IDs before ever reaching Research Grade; that seems like too much. I think experts can gain more heavily weighted IDs without having to do stuff like that.
But on the topic of experts, I think it would be best to have a database of expert users and their areas of expertise who can be contacted in a pinch to help with IDs. I have a bunch of observations that really need to be reviewed for accuracy (mainly the many beardtongues I saw in Colorado) but I’m not sure who to message about those.
There’s a pretty inter-connected network in a variety of fields, and if you just start messaging people they probably know someone else if they can’t help. Usually you’ll find someone eventually. I’ve both benefited from and aided in this process.
Funny you mention that. I’ve seen it often, and it’s why I don’t think you need to weight experts’ IDs higher. Expert IDs are often followed (rightly or wrongly) by a bunch of agreeing IDs.
Whether that would be a bug or a feature of an “expert” tag, I don’t know. What I do know is that many people don’t look at the profile information, and a lot of experts don’t fill it out - especially if they’re only occasional users. A taxon-specific expert flag would give you a visual indicator on the observation page, which to me is much more likely to be noticed.
I think that you can do both. I might suggest developing a drop-down menu that allows the identifier to enter their own “expert level”. I may be a newcomer with a lifelong obsession with amphibians, or a botanist with 20 years’ experience who has never worked in or studied North American plants, or maybe I dabble and want to increase my identification skills. No need for anyone to “vet” credentials, unless a need arises.
I am new to iNaturalist. I want experts and researchers to use what I am fortunate to witness daily, hopefully for the betterment of the planet, as well as to help others connect and gain interest in the wonders of nature all around us.
Thanks @bouteloua !
That is what I was looking for.
I highly recommend that anyone who is thinking of responding to this thread wade through the 76 replies on the other thread before contributing. All of my responses to everything everyone else has said in response to me are largely laid out there by @joe_fish.
I’m not sure there’s much need to replicate all of the points on that discussion here.
( sorry @tiwane for starting this ! )
This was partly a precursor to another suggestion - which I will raise on a separate thread I think for clarity - but somewhat relates to the comment of @mtank about acknowledging expertise more visibly.
I just think, the best would be to teach people …and to get the correct ID ;)
This seems to be a running debate, especially as the site gets larger.
It comes down to a question that we don’t seem to be clear on: is the point of the website to build a scientific database of organisms, or is it to encourage people to go out and observe and interact with nature?
One of the things to keep in mind is that even though some people on here are going on expeditions into national parks with good photography equipment to get pictures of rare and elusive species, probably 90% of observations are people in their backyards or nearby parks, taking photographs of common plants, birds and insects. Most of the peer IDing is for non-controversial, common organisms. You don’t need any formal biological training to be able to identify Queen Anne’s Lace or a Steller’s Jay.
The reason we need experts is that sometimes there are organisms that are rare, have less evidence, or are in a location where their presence might be controversial. Those cases do come up.
The best way to deal with that is probably to flag certain species (pumas would be a great example, not every paw print is a puma) as needing more and better expert evidence to be Research Grade. We could also flag it so species outside of their expected range would need better confirmation.
The problem is, people use this site for taking pictures of wolverines in Glacier National Park…and they use it to take pictures of dandelions in their backyard. So which users should IDing be oriented towards?
Worth comparing this discussion with the early debate between Wikipedia and Citizendium. The latter (aiming to produce more reliable content vetted by certified experts) eventually died, while Wikipedia (without ever implementing a formal reputation system based on external authority) ended up attracting tens of thousands of domain experts and improving its information quality over time.
I’ve only been on this forum for three weeks but I’ve crossed this debate in several topics.
Yes, the story of (and don’t click on this if you have bandwidth issues) Gerald speaks to an issue with the quality of the high-level data from iNat. That’s not really a big issue, because generating high-level data is not iNat’s function. Focussing on the quality of high-level data could, and probably would, compromise the objective of engaging people in the natural world and helping them learn about it, which is iNat’s raison d’être.
Having said that, I am puzzled by the sweeping declarations being made about data. The raw data are not conclusions about the taxonomic category of the observations, the raw data are the observations themselves and they are a treasure, misidentifications and all. Quality control on external data is a fact of life and anybody who complains because an open source, free to all compendium of natural history records has some serious booboos embedded in it is really being an ungrateful churl.
The model in which high priests of taxonomy pronounce and the iNat parishioners are grateful for their wisdom is a wetware parallel to the AI software that reinforces the distance between ordinary folks and actual natural science. The beauty of the Observation/ID process is that it is a conversation and when used well it allows people to see not just what they are seeing but why that’s what it is. Forget about reputation systems. A simple innovation that would do wonders for iNat would be requiring at least 50 characters of explanation with any ID that contradicts the original observation or the community taxon.
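A rough sketch of the rule proposed above (a dissenting ID must carry at least 50 characters of explanation) could be as simple as a submission-time check. Everything here is hypothetical: the function name, the arguments, and the idea that "dissenting" means contradicting the current community taxon are my reading of the suggestion, not an existing iNat feature.

```python
# Hypothetical check: an ID that contradicts the community taxon must
# include at least 50 characters of explanation. Names are illustrative.
MIN_EXPLANATION_CHARS = 50

def dissent_allowed(new_taxon, community_taxon, explanation):
    """Return True if the ID may be submitted under the proposed rule."""
    if new_taxon == community_taxon:
        return True  # agreeing IDs need no justification
    return len(explanation.strip()) >= MIN_EXPLANATION_CHARS

# Agreeing with the community taxon: always fine.
print(dissent_allowed("Danaus plexippus", "Danaus plexippus", ""))  # True
# Dissenting with a substantive explanation: allowed.
print(dissent_allowed("Limenitis archippus", "Danaus plexippus",
                      "Postmedian black line across the hindwing "
                      "distinguishes the Viceroy from the Monarch."))  # True
# Dissenting with no explanation: blocked.
print(dissent_allowed("Limenitis archippus", "Danaus plexippus", ""))  # False
```

A character minimum is of course easy to game ("I am sure about this, trust me, really really sure"), so the value of the rule is less in enforcement than in nudging identifiers towards the conversational, explanatory style described here.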
I joined iNaturalist to learn about bumblebees (it was a Covid lockdown activity). My first 5 observations were examples of varying quality of 5 bumble bee species. I was prepared to be told that one photo didn’t permit ID to species but figured the other 4 were OK. It’s been over 3 weeks and not one has been IDed. I’m not bent out of shape about it, but school kids doing a pollinator project might not be happy and they could hardly be blamed for IDing each others’ observations when nothing else was happening, even if young Jeremiah’s rusty-patched bumblebee is actually an out-of-focus hummingbird moth. Reducing the pool of people who can get an observation over the hump to Research Grade is going to make the backlog for some taxa worse, not better.
I’m not against finding ways to clean up data. I am against undermining iNat’s enormously valuable reason for being to make it more convenient for academics.
Although I find the concept of empowering experts to be tempting, I do agree that implementing it would undermine iNaturalist and what it stands for. There are certainly moments where I find myself quite irritated about being overruled regarding an erroneous identification, but I try to remember that the records on this site (at least those regarding orchids) seem to be relatively accurate. As much as I’d like to have my IDs be more strongly weighted, I think that the voting system on this site is truly invaluable and should not be tampered with. After all, iNat isn’t merely a database—it’s a community.
I agree with what you said regarding contradictory IDs, although I think the real issue lies in users carelessly agreeing to identifications on a large scale. How it irks me to find an incorrectly identified observation with three IDs, and to post my ID (with an explanation) to no avail. Perhaps users who are deemed ‘experts’ might have enough weight to bring an observation with three IDs down to genus level, although creating such privileges would certainly be a slippery slope. Such a feature would be extremely helpful for me, as I find so many erroneous community Dactylorhiza IDs that I’ve actually started logging them. Part of this is, unfortunately, due to careless identifiers.
As far as meeting the requirements for research grade is concerned, I agree completely with what you said. If any two users agree on an ID, then let the observation reach RG status. Period. Aside from what I mentioned regarding the community ID, I think that the system should remain as it is now. Other thoughts I’ve had are making a system which allows any user to mark an observation as needing additional review, and/or adding a filter for observations that are identified contrary to the suggestions of the AI. Although iNat’s suggestions aren’t particularly accurate, they’re good enough to detect potentially obvious errors that are just lost in a sea of otherwise correct identifications. I’m sure that many of these results would be composed of unusual forms or varieties, but with a smaller pool it would be easier to pick out those that are truly incorrect. Above all else, it would save a great deal of time searching through records.