which to me is wrong, because there are only 3 cumulative IDs at the Complex level, plus a non-disagreeing neutral comment at a higher rank – but if iNat wants to call it 4 agreements, then OK.
We see that at the Complex level we have 3 agreements: great.
But at the Genus level we suddenly get a disagreement, without any additional IDs – i.e. 3 cumulative IDs and 1 disagreement. Where is this disagreement??
At the Family level we get our cumulative 4 IDs, and the faulty disagreement count vanishes.
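To make the counts concrete, here is a minimal Python sketch of how column 1 (cumulative IDs) appears to behave in this example. The taxonomy path and the helper are my own placeholders, not iNat's actual code; the point is just that 3 fine IDs plus 1 broad, non-disagreeing ID should give 3/3/4 cumulative IDs at the three ranks, with zero disagreements anywhere:

```python
# Hypothetical sketch of the Algorithm Summary's cumulative-ID column.
# Placeholder taxonomy: each taxon maps to its ancestor path (self included).
TAXONOMY = {
    "Complex": ["Family", "Genus", "Complex"],
    "Genus":   ["Family", "Genus"],
    "Family":  ["Family"],
}

def cumulative_ids(ids, row_taxon):
    """Column 1: count IDs of row_taxon or any of its descendants."""
    return sum(1 for t in ids if row_taxon in TAXONOMY[t])

# 3 IDs at the Complex + 1 broader, non-disagreeing ID at Family.
ids = ["Complex", "Complex", "Complex", "Family"]

for row in ["Complex", "Genus", "Family"]:
    print(row, cumulative_ids(ids, row))
# Complex 3, Genus 3, Family 4 – and nothing here is a disagreement,
# which is why the "1 disagreement" shown at Genus looks like a bug.
```

Under this reading, the Family-level ID simply isn't cumulative toward Complex or Genus, but it never counts as a disagreement against them either.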
I suspect it is in fact related to recent changes in the community taxon calculation – however, not this most recent change but the previous one.
My guess is that whatever changes were made also involved redefining “disagreements” so that in some cases broader IDs are displayed as though they were disagreements. It is not related to whether the DQA is used or not – I checked a few of my observations that had a mixture of broader and finer IDs (but no disagreements), and it seems like under certain circumstances some of the IDs start being counted in column 3 instead of in column 1.
It is also very very unintuitive – I really don’t understand what triggers this, but I assume it has something to do with community ID and observation ID being different.
But if they are a disagreement, they should apply to the leaf taxon as well, not just the intermediate ones …
And in the two examples above, the community and observation IDs at all the levels are the same.
Either the algorithm arithmetic is wrong, or the display is wrong. I see this so often that I ignore the discrepancy. It says 4 out of 4, but those 4 IDs don’t all exist.
I didn’t say they were disagreements or that they should be called this. I also noted that I don’t completely understand what is going on.
I said that for some reason iNat seems to be labeling these IDs as disagreements under certain circumstances – I think it is basically any ID that is broader than the taxon in that row.
I have a vague recollection that the calculation used to display things slightly differently.
It’s probably not a very helpful data point for debugging, but I’ve seen similar Algorithm Summaries for a lot of observations recently. The pattern appears to be that the observation has an accurate score > 0.67 at the CID rank, but lower, inaccurate scores (often below 0.67) for some higher ranks.
I have tried unsuccessfully to figure out the logic for how these are being calculated. In simple terms, the score for a higher rank should never be lower than the score for one of its descendants. And until recently, that was always the case.
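The monotonicity expectation can be sketched in a few lines. This assumes a score of the form cumulative / (cumulative + disagreement + ancestor disagreement), which is my reading of the published description, not the actual implementation; the numbers are illustrative:

```python
from fractions import Fraction

def score(cumulative, disagreement, ancestor_disagreement):
    # Hedged guess at the score formula:
    # cumulative / (cumulative + disagreement + ancestor disagreement)
    return Fraction(cumulative, cumulative + disagreement + ancestor_disagreement)

# At a descendant rank: 3 agreeing IDs, no disagreements -> score 1.
child = score(3, 0, 0)

# At an ancestor rank the cumulative count can only grow (it includes
# every ID counted for the child plus any broader IDs), so with no new
# disagreements its score should be at least the child's.
parent = score(4, 0, 0)
assert parent >= child
```

If an ancestor rank instead shows a lower score than its descendant with no new disagreeing IDs, either extra IDs are leaking into the disagreement terms or the display is pulling the wrong counts.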
Thanks for reporting this. I found two bugs in the Display of the Community Taxon (not the actual behind the scenes calculation) and took a first pass at a fix that looks promising - we’ll aim to deploy early next week.