I have several observations that were sent to casual with this DQA, “Evidence related to a single subject”. Apart from the fact that this DQA is being used incorrectly (I already contacted the user and think he just misunderstood the DQA), I think it is a bug that he was able to tick those boxes at all, no? Whenever I run into an observation with just a single photo, this DQA is inactive for me.
Yes, they are active on my observations since yesterday, and today those observations and others were newly DQAed. I do check my casuals every once in a while and am pretty sure I have done so several times already this year.
Also, isn’t that specific DQA newer than this update last year? EDIT: Oh, in fact it seems to be one month older. Wow, time flies! Anyhow, those DQA selections are still very recent.
It’s possible to vote in the DQA in both the Android app and the new iNat app for iPhones, and there isn’t any graying out there. Do you know what platform this person is using? I bet they’re using the mobile app.
Yes, virtually any photo in the wild will have more than one species represented, so it’s my understanding that that DQA is intended to be applied when there is a confusing co-occurrence (e.g., co-dominance) of multiple species within an image, or of multiple species across multiple photos, i.e. something that would confuse a potential Identifier or iNat’s CV.
That was before we got the (new) DQA, and for a while, as iNatters settled into using it as intended, there were many circular arguments. Unless it is a posed studio shot with a blank background, any photo of nature contains multiple species; even a monoculture crop is not only ONE species.
It isn’t really about confusing potential identifiers or the CV, since as you note, most photos will depict more than one species; identifiers and the CV simply have to deal with situations where the photo has multiple plausible focal organisms.
It’s about situations where the focal organism, or signs thereof, is not present in all the media items, which leads to mislabeled photos and questions about which photo should be ID’d. Mislabeled photos may lead to bad training data for the CV, but I see the mislabeling itself as the primary issue rather than the training data: photos can easily be viewed or shared outside the context of the observation, and that is quite problematic when the ID attached to the image is one a naive viewer might think plausible (a bee mimic identified as a bee, rather than, say, a giraffe labelled as a rose).