yes, that appears to be exactly it. you can get a better idea of what’s happening under the hood if you look at the browser’s network monitor. the snapshot below shows the page with Anoles in the filter and URL but observations of other taxa:
on the right half of the snapshot is the network monitor, which shows all the requests the page made. i’ve drawn arrows pointing at 2 lines: the first represents the request that pulls all observations, and the second represents the request that pulls just anole observations. the bars on the right side of each of those lines show how long each request took to complete. you’ll see that the first (all) request took quite a while to complete, and the second (anole) request started and finished while the first one was still going.
as each request completes, the page renders its response. since the anole request completed first, it rendered first, and then the all-observations request completed a little later and replaced the anole observations.
probably the thing to do is to either force cancellation of the first request when a subsequent request is initiated, or else let it complete but discard its results instead of rendering them. a less elegant alternative is to look into why it takes so long to return results for all observations and try to tighten that up. if there isn’t such a large discrepancy between the time it takes to return all results and just anole results, you’re less likely to see the all-results request completing after the anole one.
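to illustrate the “discard stale results” idea, here’s a minimal sketch using a request sequence number. the names (`loadObservations`, `fetchPage`, `render`) are illustrative placeholders, not the page’s actual code:

```typescript
// each call claims the next sequence number; when its response comes
// back, it only renders if no newer request has started in the meantime.
let latestRequest = 0;

async function loadObservations(
  fetchPage: () => Promise<string[]>,  // stand-in for the API call
  render: (rows: string[]) => void,    // stand-in for the page render
): Promise<void> {
  const myId = ++latestRequest;        // claim the newest sequence number
  const rows = await fetchPage();
  if (myId !== latestRequest) return;  // superseded by a newer request: drop
  render(rows);
}
```

for true cancellation (rather than just discarding the response), the same structure could abort the in-flight `fetch` via an `AbortController` signal, which also frees the connection earlier.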
probably not directly related, but while i was looking at this screen, i just let it load with all observations, and it appears that it makes 3 requests to the get observations API endpoint (see lines with arrows):
the first request is for unreviewed observation details:
the second is for a count of reviewed observations:
the third is for a count of reviewed+unreviewed observations:
i didn’t try to dig into the code, but just looking at the page, i don’t understand why the second and third requests are necessary or where those counts would be used on that page. it might seem like a minor thing, but look at how much time that 3rd request in particular took to complete. if the request isn’t necessary, maybe you can save the server the extra work of processing it?
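as a sketch of what i mean: if the response to the first request already carries the unreviewed total (e.g. a `total_results` field alongside the page of results) and the second carries the reviewed total, the combined count could be computed client-side instead of asking the server a third time. the field and function names here are assumptions, not the page’s actual code:

```typescript
// assumed shape of a count-bearing response; the actual API may differ.
interface CountedResponse {
  total_results: number;
}

function combinedTotal(
  unreviewed: CountedResponse,
  reviewed: CountedResponse,
): number {
  // reviewed and unreviewed partition all observations,
  // so their sum is the reviewed+unreviewed total
  return unreviewed.total_results + reviewed.total_results;
}
```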