“I see a lot of people complaining about how bad IDs ruin the dataset for their research, but for most species it is not difficult to sort through all the observations and review them.”
This is the right answer. If for some reason you need the identifications in a taxon to be correct, just go through them and check them yourself. Learn the keyboard shortcuts in the Identify tool; if you only need a few species, it won’t actually take that long.
“Regarding the gamification and reward incentive: has it been suggested that maybe top IDers be based in part on their improving IDs, so that simple agreement doesn’t rocket you to the top of the list?”
We should not be discouraging agreeing with Research Grade observations. We need MORE people agreeing with them, not fewer. If only two people review an observation, both must be perfectly accurate. If ten review it, a few incorrect IDs are not a big deal.
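The intuition that more reviewers makes a few mistakes harmless can be sanity-checked with a quick calculation. This is only an illustration, assuming (hypothetically) that reviewers vote independently and each is right 90% of the time; real identifier accuracy varies widely by taxon.

```python
from math import comb

def majority_wrong(n_reviewers: int, p_correct: float) -> float:
    """Probability that a strict majority of independent reviewers is wrong."""
    need = n_reviewers // 2 + 1  # wrong votes needed to form a wrong majority
    return sum(
        comb(n_reviewers, k) * (1 - p_correct) ** k * p_correct ** (n_reviewers - k)
        for k in range(need, n_reviewers + 1)
    )

# With 90%-accurate reviewers, a wrong majority becomes far less likely
# as more people weigh in:
print(majority_wrong(3, 0.9))  # a couple of reviewers
print(majority_wrong(9, 0.9))  # many reviewers: orders of magnitude smaller
```

Under these assumptions, going from three reviewers to nine cuts the chance of a wrong consensus by more than an order of magnitude, which is the point: redundancy in agreement is protective, not wasteful.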
“The problem is that it’s impossible to accurately spot incorrect IDs. If it was possible to do that, incorrect IDs would be eliminated altogether”
While it is impossible for a computer to know for certain which IDs are incorrect, I do think there is a place for computers flagging likely errors. By combining an observer’s posting history, an identifier’s past rate of incorrect identifications, and the photo-identification algorithm’s output, it should be possible to estimate the probability that each identification is wrong. Observations could then be sorted so that the ones most likely to need review appear first. This will take some time to develop, but by the time we reach the one-billion-observation mark, something like this really will be needed.
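The triage idea above could be sketched as a simple scoring function. This is a hypothetical illustration, not anything iNaturalist actually implements: the field names, the signals (identifier error rate, vision-model agreement, number of supporting IDs), and the way they are combined are all assumptions chosen to show the sorting mechanic.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    obs_id: int
    identifier_error_rate: float  # hypothetical: fraction of this user's past IDs later overturned
    cv_agreement: float           # hypothetical: 0-1, how strongly the vision model supports the current ID
    n_supporting_ids: int         # how many other users have agreed so far

def misid_score(obs: Observation) -> float:
    """Heuristic score for how likely the current ID is wrong.

    A shakier identifier track record and weaker vision-model agreement
    raise the score; each additional supporting ID discounts it.
    """
    base = obs.identifier_error_rate * (1.0 - obs.cv_agreement)
    return base / (1 + obs.n_supporting_ids)

def review_queue(observations: list[Observation]) -> list[Observation]:
    """Sort so the observations most likely to be misidentified come first."""
    return sorted(observations, key=misid_score, reverse=True)
```

A reviewer would then work from the top of `review_queue(...)` down, spending effort where it is statistically most useful instead of paging through observations in upload order.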