How to filter for observations with captive/wild disagreements?

Hi all,

I’m trying to find a way to systematically review observations that have disagreements on captive/cultivated vs. wild status.

I know there are ways to filter for observations with taxon disagreements, or to filter for casual vs. verifiable observations.

However, I haven’t been able to find a way to query observations where (1) one user has marked an observation as captive/cultivated and (2) another user has marked it as wild.

Is there a search URL parameter for this, or an API field that flags captive/wild disagreements? Alternatively, does anyone know of a workaround (e.g., filtering on DQA votes)?

Right now, the only way I can find these is manually, which isn’t scalable — I’m hoping to do this more systematically.

Thanks for any help you can provide!

why are you interested in these scenarios?

if this is the only filter you want to apply across all observations, currently there’s not a good way for regular users to do this. it might be possible to apply a client-side filter to find when this occurs in a smaller set of observations returned from the server though.
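The client-side filter idea above can be sketched in a few lines of Python. This is a rough illustration, not the notebook's actual code: it assumes the iNaturalist API v1 `/observations` response shape, where each observation carries a `quality_metrics` list of votes like `{"metric": "wild", "agree": true}`. The function and variable names here are my own.

```python
# Client-side filter sketch: fetch a page of observations from the
# iNaturalist API v1 and keep only those with conflicting votes on
# the "wild" quality metric. Assumes the v1 response shape, where
# each observation has a "quality_metrics" list of vote dicts.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.inaturalist.org/v1/observations"

def has_wild_disagreement(obs):
    """True if the observation has at least one thumbs-up AND at
    least one thumbs-down vote on the 'wild' quality metric."""
    votes = [m["agree"] for m in obs.get("quality_metrics", [])
             if m.get("metric") == "wild"]
    return True in votes and False in votes

def fetch_page(params):
    """Fetch one page of observations (network call)."""
    query = urllib.parse.urlencode(params)
    with urllib.request.urlopen(f"{API_URL}?{query}") as resp:
        return json.load(resp)["results"]

# usage (requires network; substitute your own project id):
# page = fetch_page({"project_id": "some-project",
#                    "per_page": 200, "verifiable": "any"})
# conflicted = [o["id"] for o in page if has_wild_disagreement(o)]
```

The filtering logic is a pure function, so it can be tested offline against hand-built observation dicts before pointing it at a real project.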

I’ve got a few observations where there’s some disagreement on whether the organism counts as wild, so if someone wants to specifically search for those sorts of things, it might be good.

https://www.inaturalist.org/observations/319235189

https://www.inaturalist.org/observations/240381249

https://www.inaturalist.org/observations/218263718

Hi Pisum - because I have students in a course I’m teaching who are dog-piling IDs on obviously captive organisms. I was doing lots of ID for the course, which is tracked via a project that filters out casual observations. I was only identifying the observations that need ID, but some of these clearly casual observations were slipping under my radar. I have made it clear to students that they will earn negative points for dog-piling, but I could see this being an issue in the future.

At the moment I am reviewing every submission by students but it is taking especially long.

so if i’m understanding correctly, your project excludes casual observations, but you’re concerned that some of your students’ observations that should be casual (because they are not wild) are making it into the project anyway, because the students are collectively overriding the votes of others who say the observations are not wild?

if that’s the case, you should be able to do the client-side filtering that I talked about. you won’t be able to do a blanket filter, since each case probably needs someone actually looking at the individual observations to judge whether the votes are legitimate, but at least you’ll be able to scan through to find observations to focus on.

the way i would do it is this:

  1. open my Jupyter notebook that will get observations via iNaturalist’s API: https://jumear.github.io/stirpy/lab?path=iNat_APIv1_get_observations.ipynb. it’s designed to work in the browser, so you don’t need to set up anything special to run it.
  2. in the notebook, search for the line that begins with req_params_string and set it to req_params_string = 'project_id=fau-ecology-in-action-spring-2026' (i’m assuming that project is the one you’re interested in.)
  3. search for the line that begins with obs = await get_obs and set get_all_pages=True
  4. since you have more than 10,000 observations in your project, search for the line that begins with get_more_obs and set get_more_obs = True
  5. now start back at the top of the notebook, and run each block in order (shift + enter or click the button that looks like a play button), until you finish running the last block within the “Write Data to CSV” section.
  6. look in the navigation pane on the left of the page, and you should see a new “observations.csv” file. double-click to view its contents in the browser, or right-click and download to view it on your own machine with your own CSV viewer.
  7. look for the columns dqa_wild_score and quality_metrics. these will allow you to find observations that you want to look at in more detail. by default, dqa_wild_score treats each thumbs up as 1 and each thumbs down as -1, and sums the votes together. so a score existing here means there were votes, and then you can look at the detail of the votes in the quality_metrics column to see exactly where you had votes for and against wild.

now, the score in step 7 above may not be ideal if you know there are cases where folks are piling on to override downvotes. so an alternative presentation of that score can be obtained by doing one extra step after step 1 but before step 5 above:

  • in the line that begins with {'label': 'dqa_wild_score', modify this to include 'score_type': 'ref_vs_total' in the params definition, like so: {'label': 'dqa_wild_score', 'ref': 'quality_metrics', 'function': 'filter_score', 'params': {'filter': [{'ref': 'metric', 'value': 'wild'}], 'score_ref': 'agree', 'score_type': 'ref_vs_total'}},

this basically will show thumbs up votes vs all votes, which may make it easier to see when there are disagreements.

also:

  1. if you want to eliminate, add, or modify the information returned for each observation by the notebook, you can go to the parse_fields definition, and comment out, uncomment / add, or modify the lines that you want to change.
  2. if you want to filter out the observations that don’t have any votes for wild (neither up nor down), you can either:
    a. filter the results at the point of CSV creation by finding the line that starts with data_to_csv and setting it to data_to_csv([o for o in obs if o['dqa_wild_score'] is not None],'observations.csv'), or
    b. add post_parse_filter_function=(lambda x: x['dqa_wild_score'] is not None) to the parameters when calling the get_obs() function. (there are a couple of lines where this would need to be done.)
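the two score presentations above can be sketched like this (the field names mirror the notebook’s, but the implementation here is my own guess at the logic, not the notebook’s actual code):

```python
# Sketch of the two dqa_wild_score presentations: the default
# signed sum (+1 per thumbs up, -1 per thumbs down), and the
# 'ref_vs_total' form (thumbs up vs all votes). Both take the
# quality_metrics list assumed to hold dicts like
# {"metric": "wild", "agree": true}.

def wild_score_sum(quality_metrics):
    """Default score: +1 per thumbs up, -1 per thumbs down on the
    'wild' metric; None when there are no wild votes at all."""
    votes = [m["agree"] for m in quality_metrics
             if m.get("metric") == "wild"]
    if not votes:
        return None
    return sum(1 if v else -1 for v in votes)

def wild_score_ref_vs_total(quality_metrics):
    """'ref_vs_total' presentation: thumbs-up count vs total vote
    count, e.g. '1/3' makes a 1-up / 2-down split easy to spot."""
    votes = [m["agree"] for m in quality_metrics
             if m.get("metric") == "wild"]
    if not votes:
        return None
    return f"{sum(votes)}/{len(votes)}"
```

the second form makes pile-ons visible: a `3/4` score still shows that someone voted not wild, whereas the plain sum of `2` hides it.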

Is your project FAU Ecology in Action - Spring 2026?

There is a fails_dqa_wild=true parameter that will return observations that have only captive votes, or more captive votes than wild votes.

No observations for fails_dqa_wild=true for FAU Ecology in Action. https://www.inaturalist.org/observations?fails_dqa_wild=true&project_id=268796&verifiable=any

Do you know R or Python? In order to get observations that have more wild votes than captive votes, you would need to use R or Python.
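The site parameter covers the "more captive votes than wild" side; the inverse has to be computed client-side. A minimal sketch of that vote-counting step, assuming the API v1 quality_metrics shape (`{"metric": "wild", "agree": true/false}`) and with function names of my own choosing:

```python
# Count net wild votes per observation; the inverse of the
# fails_dqa_wild=true filter ("more wild votes than captive")
# is then just a positive net score.

def net_wild_votes(obs):
    """Thumbs-up minus thumbs-down on the 'wild' quality metric."""
    return sum(1 if m["agree"] else -1
               for m in obs.get("quality_metrics", [])
               if m.get("metric") == "wild")

def more_wild_than_captive(obs):
    """Rough inverse of fails_dqa_wild=true: net wild votes > 0."""
    return net_wild_votes(obs) > 0
```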

no such observations will ever be found in a project that excludes casual observations.

just to clarify, R or Python can be used to get data from the API, but they are not the only languages or tools that can do this.

There are 55 observations in iNat that are needs_id or research with fails_dqa_wild=true.

https://www.inaturalist.org/observations?quality_grade=research,needs_id&fails_dqa_wild=true&verifiable=any

The OP is a biology professor. R and Python are two languages commonly used in academic biology, which is why I asked about those languages.

these are all going to be cases of bad data. if you make any update on any of these observations, they should turn into casual observations as long as the wild vote consensus is thumbs down.

i updated all the research grade ones, and they all fell into the right buckets. two that were marked as not wild became casual, and one that had no votes for wild either way is no longer picked up by fails_dqa_wild=true.

OP asked for API flags to filter observations based on DQA votes. fails_dqa_wild=true is an API flag that can be used. The next step after using existing API flags is custom code.

the goal is to find conflicting thumbs up and thumbs down votes. using this parameter doesn’t really go towards that goal.

fails_dqa_wild=true gives a partial solution because it returns the observations with more captive votes than wild votes. To get a full solution, custom code is needed.

I didn’t provide a custom-code solution in my first response, because I don’t know the OP’s coding experience. The solution can range from the OP filling in the project id and running code that someone else wrote, to providing pseudocode and having the OP use their coding experience to write it in their preferred language.

fails_dqa_wild=true returns observations with more captive / not wild votes than wild votes.
