Please fill out the following sections to the best of your ability, it will help us investigate bugs if we have this information at the outset. Screenshots are especially helpful, so please provide those if you can.
Platform (Android, iOS, Website): Website
Browser, if a website issue (Firefox, Chrome, etc.): Chrome
When I searched for observations using both the “captive” and the “verifiable” tags, which ought to be mutually exclusive, the search returned a list of 1,100+ observations. When I clicked through to look at the individual observations, it appeared that they had all been voted captive in the DQA, but for some reason that vote did not result in the observation becoming casual, despite no counter-vote having been cast. When I added a second downvote, the observation became casual, and it remained so even after I removed my vote.
That is strange that there are Research Grade observations in those results. I was able to find some that do have a downvote for captive, but are still RG. Not all are though.
Captive and verifiable aren’t mutually exclusive though. Verifiable is a property of the data in the observation (has media, date, location) but captive is based on a judgment about the content. So it’s possible to have a verifiable, captive observation.
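To illustrate the distinction, here is a minimal sketch of the two properties as predicates. This is a simplified, hypothetical model, not iNaturalist's actual schema or grading code:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    # Hypothetical, simplified model of an observation.
    has_media: bool
    has_date: bool
    has_location: bool
    captive_votes_yes: int = 0  # "not wild" votes in the DQA
    captive_votes_no: int = 0

def is_verifiable(obs: Observation) -> bool:
    # Verifiable depends only on the data attached to the observation.
    return obs.has_media and obs.has_date and obs.has_location

def is_captive(obs: Observation) -> bool:
    # Captive is a community judgment: here, more "yes" than "no" votes.
    return obs.captive_votes_yes > obs.captive_votes_no

def quality_grade(obs: Observation) -> str:
    # A captive observation should become casual even when its data
    # would otherwise make it verifiable -- which is why "captive" and
    # "verifiable" should never both be true in search results.
    if not is_verifiable(obs) or is_captive(obs):
        return "casual"
    return "needs_id"

obs = Observation(has_media=True, has_date=True, has_location=True,
                  captive_votes_yes=1)
# The data is verifiable, but the captive vote drives the grade to "casual".
```

So an observation can satisfy the data requirements for verifiability while still being captive; the grade computation, not the search, is what should reconcile the two.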
Has this bug been looked into? It was brought to my attention today that 47 observations I recently marked as captive have remained at research grade. I have left them alone in case that is useful for the debugging process.
Reproduction conditions will be useful for getting to the bottom of this. Can you describe your typical identifying/marking workflow? Device, browser, steps.
@esummerbell, I see that several observations you identified (and presumably marked as not wild) a few weeks ago are in this category. Can you also describe your id/marking workflow?
If there are commonalities to how @bpagnier and @esummerbell are working, that would be helpful.
If anyone else comes up with replication steps, please chime in.
I have a MacBook Pro and use the Safari browser. In Identify, I start with the first observation, click on it to expand it, and work my way through all the observations using the right arrow key. On each observation I generally have to zoom in on the picture using the touchpad to look it over, then I use the keyboard shortcuts: “a” to agree, “x” to mark captive, or, if it is another organism, “i” to add an identification. I have my computer set up with a program that automatically expands species names I frequently use from abbreviations I type into the line; I then arrow down to the species name it pulls up and select it. Depending on the photo quality and the features shown in the photos, I might spend only a few seconds identifying each observation.

At the end, a message pops up saying I have gone through all the observations on the page, and I click “mark all as reviewed.” (Sometimes a message pops up at this point saying a record failed to save, but by then it is impossible to tell which observation in the list it is referring to. It still seems to mark all of the observations as reviewed regardless of whether the record saved successfully.) I wait for the loading icon to complete, then refresh the page to bring in more unreviewed observations.
Wanted to confirm that this is still happening (now at 1,115 observations that are ‘marked’ captive but don’t show up as such, the latest being yesterday).
I had a similar problem where the IDs on an organism didn’t match with the community taxon (this was for older records; I made a flag for it on the taxon page, subfamily Thalictroideae). A curator noted it was because of indexing issues, but that there was no fix except to manually correct them, and one way to do that is to check and then uncheck either option under “Based on the evidence, can the Community Taxon still be confirmed or improved?” in the DQA. That strategy works in these cases as well. Don’t know if that’s helpful or not for debugging in this case.
I have reassessed the quality grade of these observations, and I don’t think there are any more observations that are both captive and verifiable at this moment. There is clearly still an underlying bug that allows observations to get into this state, as some of the affected ones were very recent. It would be great if folks could keep an eye out for this when marking observations as captive so we can track down steps to reproduce the problem. We can also monitor for observations that enter this state going forward and try to replicate the problem by looking at what actions were taken on them, applying the same actions in the same order, and checking whether the quality grade fails to update.
I was curious and searched, and I found one currently in both states, which seems to have received its second ID mere minutes ago. The ID and the not-wild vote seem to have come from the same person.
I came across this bug last year. I was able to reset the index by marking ‘organism is wild’ as ‘No’ and then removing my ‘No’. That meant I wasn’t changing the wild status but Research Grade became Casual (seen at the top of the observation screen). I undid my ‘No’ since I wasn’t actually trying to add to the wild/casual count. At that time I fixed all the Cactaceae and Canadian plants as those are my areas of interest.
There are some new ones: https://inaturalist.ca/observations?captive=true&place_id=any
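For anyone who wants to monitor for new cases, the conflicting state can be queried directly. Here is a small sketch that builds the corresponding request URL against iNaturalist's public API (the `captive` and `verifiable` parameters are the same ones the website search uses; `per_page` is just a paging knob):

```python
from urllib.parse import urlencode

API_BASE = "https://api.inaturalist.org/v1/observations"

def conflicting_state_url(per_page: int = 30) -> str:
    """Build a query for observations that are simultaneously captive
    and verifiable -- a state that should not exist, so any results
    indicate the bug discussed in this thread."""
    params = {
        "captive": "true",
        "verifiable": "true",
        "per_page": per_page,
    }
    return f"{API_BASE}?{urlencode(params)}"

# To actually check, fetch the URL, e.g. with the requests library:
#   import requests
#   total = requests.get(conflicting_state_url()).json()["total_results"]
#   # total should be 0 once the fix holds
```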
After assessing the observations that ended up as both captive and verifiable after fixing the existing cases, it was clear there was some race condition that allowed them to get into this state. Essentially there was a possibility of two actions happening at the same time, with one resetting the quality grade without knowledge of the other event happening.
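The lost update described above can be sketched generically. This is purely an illustration of the pattern, not iNaturalist's actual code: one handler records the captive vote, another recomputes the cached quality grade from a stale read, and whichever write lands last wins:

```python
# Illustrative only: votes live in one place, the cached grade in another.
db = {"captive_votes": 0, "quality_grade": "research"}

def grade_from(votes: int) -> str:
    # Hypothetical rule: any captive vote makes the observation casual.
    return "casual" if votes > 0 else "research"

# Handler A (user marks captive) and handler B (something else triggers
# a grade recalculation) both read the vote count at about the same time.
votes_seen_by_b = db["captive_votes"]                   # stale read: 0

db["captive_votes"] += 1                                # A records the vote
db["quality_grade"] = grade_from(db["captive_votes"])   # A: grade -> "casual"

# B finishes last and writes a grade computed from its stale read,
# resetting the quality grade without knowledge of the new vote.
db["quality_grade"] = grade_from(votes_seen_by_b)       # back to "research"

print(db)  # {'captive_votes': 1, 'quality_grade': 'research'}
```

The end state matches what was observed in this thread: the captive vote is recorded, but the quality grade still reads research grade until something forces a recalculation.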
We released a potential fix for one case of this problem today. So far no observations have gotten into this state again, but we’ll keep an eye on it as there may have been multiple ways this state could have been achieved. Thanks again for pointing this out and for your patience while we work on a solution.
I just stumbled on this issue myself and found my way here. I am currently marking any captive observations of Sphaeralcea ambigua I can find in the Southern California area, and I keep finding a few observations that are already marked as captive under the Verifiable search results; the ones I have marked as captive seem to be coming up as well. When I go into the search filters and actively check “Captive” in addition to the already-checked “Verifiable” filter, all of these observations disappear from the results.
So it seems the search excludes these observations when explicitly told to filter on Captive status, but when only Verifiable is checked, it pulls them in without consulting their Captive status at all.
One observation that is coming up in the search result (https://www.inaturalist.org/observations/218521248) I have even marked redundantly as a “yes” under “Captive/cultivated”, and a “Captive” under “Is it wild?”
Thanks for the example. It appears you marked this observation using Observation Fields. Observation Fields are user-generated custom fields that can be attached to observations; they do not inform the quality grade at all. The only iNaturalist-supported way of marking these attributes is the Data Quality Assessment section lower down on the page.
I would recommend not using those observation fields, and instead using the Data Quality Assessment to indicate whether or not an observation is wild. Using those observation fields is likely to lead to confusion, as it appears to have done in this case.
So to be clear, these observations, even though they have user-generated observation fields indicating so, are not captive according to the platform. They will be if someone adds a corresponding Data Quality Assessment. Until then, everything appears to be functioning as expected, and this isn’t evidence of any bugs related to this thread’s topic.