It’s probably not a very helpful data point for debugging, but I’ve seen similar Algorithm Summaries for a lot of observations recently. The pattern appears to be that the observation has an accurate score > 0.67 at the CID rank, but lower, inaccurate scores (often below 0.67) at some higher ranks.
I have tried, unsuccessfully, to figure out the logic for how these scores are being calculated. In simple terms, it seems that the score for a higher rank should never be lower than the score for one of its descendants. And until recently, that was always the case.
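To illustrate what I mean, here is a minimal sketch (with made-up names and scores, not the site's actual data or algorithm) of the monotonic property I'd expect: walking up the tree from the CID rank, the score should never drop.

```python
# child -> parent links for a toy taxonomy (rank increases toward the root)
parents = {
    "species_A": "genus_X",
    "genus_X": "family_Y",
    "family_Y": "order_Z",
}

# hypothetical per-rank scores like the ones shown in an Algorithm Summary
scores = {
    "species_A": 0.71,  # accurate score at the CID rank
    "genus_X": 0.55,    # lower score at a higher rank -- the surprising part
    "family_Y": 0.80,
    "order_Z": 0.92,
}

def violations(scores, parents):
    """Return (child, parent) pairs where the parent's score drops below the child's."""
    return [
        (child, parent)
        for child, parent in parents.items()
        if scores[parent] < scores[child]
    ]

print(violations(scores, parents))  # -> [('species_A', 'genus_X')]
```

Until recently a check like this would have come back empty on every summary I looked at; now it flags pairs like the genus above.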