Self-rating system for audio and visual observations

Platform(s), such as mobile, website, API, other: all

URLs (aka web addresses) of any pages, if relevant: eBird media rating guidelines: https://support.ebird.org/en/support/solutions/articles/48001064392-rating-media

Description of need:
Observations on iNat vary widely in quality, so identifiers have to wade through many low-quality observations to find ones that can be identified more easily. However, it is impossible to filter out low-quality observations wholesale, due to many factors on the observers’ end, especially for hobbyists and newcomers, and some low-quality observations can still potentially be IDed.

Feature request details:
Some kind of rating system that lets observers rate their own observations for quality, so that high-quality observations can be found and IDed more easily. Perhaps something similar to eBird’s five-star rating system, with a general guideline for how to rate observations. It would rely somewhat on good faith, and malicious actors could flood the system, but I think that could be overcome with good flagging and moderation.

For what it is worth, it is unclear if, or at least how well, the eBird system actually works, in part because not nearly enough people are rating photos (and even fewer for audio), in part because people do not read the guidelines, and in part because a small number of very determined people absolutely do game the system.

This is constantly discussed in the eBird Facebook discussion group, and a lot of people are very, very annoyed by it. Because there are not that many people rating, it’s relatively easy for a handful of people to get their own photos to the top of the ranking, even if their photos are not that good.

And even when no one is acting in bad faith, people still rate on aesthetics, or vibe, or whatever seems obviously correct to them. (I suspect this is because rating systems are so ubiquitous that people assume they already know what to do.) This happens even when the standards are entirely objective and obvious. For instance, there are many photos rated 5 stars which have watermarks, which ought to mean they get a maximum rating of 4 even if the watermark is unobtrusive.

In other words: Do you really want to open this can of worms? Really??? (Well, okay then. It’s iNaturalist, some people like worms.)

11 Likes

First, you can “fave” your own observations, but that only makes them come up higher in a search sorted by faves.

If a system like this is implemented (and I feel like this has been discussed before), I’d say let identifiers do the ranking and not whoever made the observation. I have definitely seen unidentified observations where I think, “If I knew the plants of this area, I’d know exactly what this is. Someone can surely identify this one.” I usually just mark those as having fruit or flowers or whatever applies so identifiers who search for observations with those will find it.

3 Likes

I suppose that’s true, but I really do wish there were some way to at least mark an image as having a blurry and/or obscured subject, to make it a little more convenient to sort through the data. Or even the other way around: marking an observation if you believe the organism has enough identifiable features to be IDed by someone familiar with it. Right now it’s impossible to avoid pictures where the subject is just a blurry spot of colour, and survivorship bias means many of them accumulate when sorting by Needs ID.

Imagine if we could label and remove from Needs ID:

  • duplicates
  • multiple images which need to be combined
  • too blurry for most of us to ID
  • habitat

Then identifiers who wished to could pick through their chosen batch of those, or concentrate on where we can hope to make a difference. For my current Cape Peninsula targets, that means geomodel anomalies and broad IDs above family.

3 Likes

I personally would not use a self-rating system, and I think it would have little value, because so often we don’t know what is needed to ID any specific organism. For evidence, see https://forum.inaturalist.org/t/whats-the-worst-pic-you-uploaded-to-inat/40286 and note the proportion of those pics that have been IDed.

3 Likes

Certainly, I never claimed that low quality makes an observation impossible to ID. That’s why I’m not suggesting such observations be removed, but rather that there be a basic way to sort the iNat database so that higher-quality observations, which tend to be easier to ID, especially for beginners, can be found more easily. As it stands, especially for small or fast-moving animals like insects, identifiers have to sort through many blurry or low-quality images to find ones of high enough quality that they feel confident IDing them. This is just a suggestion that could help streamline that process.

1 Like

I wonder if the quality of a photo uploaded to iNat and its identifiability are even related. Obviously, given a bad photo and a good photo of the same subject, the good photo will be more easily identifiable. But what I mean is that, at least for insects, some of the tiniest subjects attract the attention of extremely dedicated photographers, who post hundreds of gorgeous 5-star pictures of 1-mm beetles that simply aren’t likely to be identifiable due to the inherent difficulties in identifying tiny bugs (many require dissection, etc.).

On the other hand, the “big showy” bugs, which attract the attention of less-than-dedicated photographers, often get posted in the form of blurry 1-star images of the wrong side of the bug pictured through a screen door with a smudge on the lens, but are still perfectly identifiable because of how distinctive and unique the bug is. So oddly, I don’t think I’d sort photos by quality as an identifier, because I’d expect these two factors to cancel each other out: the best-quality macro photos disproportionately feature “leave it at genus, needs dissection” bugs, and the worst-quality blurry cell phone pics disproportionately favor “wow, that might be a terrible picture, but I can tell it’s a luna moth anyway” bugs. Someone trying to get their cell phone to focus on something from 10 feet away probably isn’t aiming the camera at a Nepticulid, and someone with a $15,000 photography setup probably isn’t bothering to post their 500th luna moth photo.

So in the end I suspect the “identifiability” of high and low quality photo sets posted to iNat may be surprisingly similar. It would be interesting to figure out a way to test this hypothesis, as it is just based on my anecdotal experience identifying. But usually the set I’m “wading through” and the set I find that I can identify do not differ in their average photo quality.

4 Likes

I’m going to close this request. It’s open to abuse, it would be really difficult to design a workable rating system, and it’s probably not worth the time it would take to set up and then moderate.

1 Like