Data users— what are your use cases and requests for exporting data?


Yes, please


I have created a project to encompass multiple properties (15) in a park system. I would like to be able to export all data for the project in a single export while also retaining the place information. I created and named all the places individually and then included them all in my project. However, if I export all project data there is no option to include a column in the .csv with my designated place names. None of the current “Geo” options will provide this information. Converting the latitude and longitude data into places would be difficult because the designated places have irregular shapes and are all very close together within a single city.

You should be able to tie coordinates to polygons using a GIS application like QGIS or ArcGIS. Alternatively, you can use the iNat API to get the encompassing place ids for each observation and match them against a list of ids for your places. Or you can do a CSV export for each of your 15 places, manually add a place id column to each file based on the place it was exported for, and then merge them together.
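The merge step in that last suggestion can be sketched in a few lines of Python. The place names and columns below are invented for the demo, and each real export would be an actual CSV file on disk rather than an in-memory string:

```python
import csv
import io

def merge_place_exports(exports):
    """Merge several iNat CSV exports into one list of rows,
    tagging each row with the place it was exported for.

    `exports` maps a place name (or id) to an open file-like
    object containing that place's CSV export."""
    merged = []
    for place, handle in exports.items():
        for row in csv.DictReader(handle):
            row["place_name"] = place  # the manually added column
            merged.append(row)
    return merged

# Tiny demo with in-memory "files" standing in for real exports:
park_a = io.StringIO("id,species_guess\n1,Danaus plexippus\n")
park_b = io.StringIO("id,species_guess\n2,Actias luna\n")
rows = merge_place_exports({"Park A": park_a, "Park B": park_b})
```

From there you could write `rows` back out with `csv.DictWriter`, place column included.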


@carrieseltzer - are there any updates that you can provide on this thread? Most critically, I am interested in the ability to download annotations. Thanks


Right now it’s possible to access annotation information via the API. I know it’s not the easiest solution, but the JSON data about each observation does include annotations.

For example, here’s a search that returns data on all of your monarch observations from Ontario with life stage annotations:

The easiest place to find the annotation terms is in this post How to use iNaturalist’s Search URLs - wiki:
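For a sense of what that JSON contains once fetched, here is a sketch of pulling labels out of an observation record. The `annotations` array with `controlled_attribute_id` / `controlled_value_id` fields matches what the v1 API returns, but the id-to-name tables below are illustrative stubs; look up the real mappings via the API's controlled terms:

```python
def annotation_labels(observation, attribute_names, value_names):
    """Turn the raw annotations array on an API observation record
    into human-readable (term, value) pairs, given lookup tables
    mapping the controlled ids to names."""
    labels = []
    for ann in observation.get("annotations", []):
        term = attribute_names.get(ann["controlled_attribute_id"], "?")
        value = value_names.get(ann["controlled_value_id"], "?")
        labels.append((term, value))
    return labels

# Stub id-to-name tables and a trimmed observation record:
ATTRS = {1: "Life Stage"}
VALUES = {2: "Adult"}
obs = {"id": 12345,
       "annotations": [{"controlled_attribute_id": 1,
                        "controlled_value_id": 2}]}
```

Observations with no annotations simply yield an empty list, so this can run over a whole export unchanged.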


From the API: “Please note that we throttle API usage to a max of 100 requests per minute, though we ask that you try to keep it to 60 requests per minute or lower, and to keep under 10,000 requests per day. If we notice usage that has serious impact on our performance we may institute blocks without notification.”

@dkaposi David and I have the same problem, but using his as an example:

His traditional project, Moths of Ontario, now has 154,199 observations that are annotated. If 200 is the max per page on the API GET obs endpoint, that would be 770 calls over 15 days.

I have an even larger number of obs to deal with if I’m to use any of iNat for the North American Moth Photographers Group mapping as I delineated on August 15, 2019 in this thread.

Perhaps iNat could make exceptions for “special use” cases to allow more extensive use of the API, or, ideally, fast track an update to the old Export Observations.

I think you’re misinterpreting the rate limit. The 770 API calls (fetching 200 obs each) can be done at 60 per minute and therefore completed in just 13 minutes.
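To sanity-check that arithmetic, here is a small Python sketch that only plans the download and makes no API calls; the 200-per-page cap and 60-per-minute rate come from the posts above:

```python
import math

def plan_requests(total_obs, per_page=200, per_minute=60):
    """Number of page requests needed to fetch total_obs observations,
    and roughly how long that takes at a polite request rate.
    A real downloader would sleep between calls, e.g.
    time.sleep(60 / per_minute), to stay under the limit."""
    pages = math.ceil(total_obs / per_page)
    minutes = pages / per_minute
    return pages, minutes
```

Rounding the project above to 154,000 observations gives 770 pages, or just under 13 minutes at 60 requests per minute, comfortably inside the 10,000-requests-per-day cap.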


From the API: “keep under 10,000 requests per day…”

Is the above limit no longer true? Can the entire 770 calls for the 155,000 obs be made in one day without worrying about per day limits?

That would be helpful.


Requests do not equal observations, so 770 API calls (=770 requests) is well under the 10,000 request per day limit.


Got it! Thank you.


If you’re going to be making multiple requests, I would think it’s easier just to do something like this:

The female and male sets in the above example could be pulled back as ids, and then matched by id against the “all” set in two extra columns: a match would indicate male or female, and the lack of either match would indicate no male or female annotation.

For folks who know SQL, this might be expressed as:

   SELECT A.ID,
          CASE WHEN M.ID IS NOT NULL THEN 'male' END AS male_annotation,
          CASE WHEN F.ID IS NOT NULL THEN 'female' END AS female_annotation
   FROM all_obs A
   LEFT JOIN male_obs M
      ON A.ID = M.ID
   LEFT JOIN female_obs F
      ON A.ID = F.ID;

(This assumes ID is unique within each set; all_obs, male_obs, and female_obs stand for the three pulled-back sets.)
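For folks who would rather do the match in Python than SQL, a minimal equivalent of that two-column join, with the set names invented for the example:

```python
def tag_sex(all_ids, male_ids, female_ids):
    """For each observation id in the full set, record two flags:
    whether it appears in the male set and whether it appears in
    the female set. (False, False) means no sex annotation."""
    male_ids, female_ids = set(male_ids), set(female_ids)
    return {obs_id: (obs_id in male_ids, obs_id in female_ids)
            for obs_id in all_ids}
```

The set lookups make this linear in the number of observations, so it scales fine to six-figure downloads.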


Manually adding observation fields to the query has already been suggested several times. I am chiming in with that, and would like the opposite option as well: to delete observation fields from the query.
I once uploaded an observation to a project where a lot of (very specific) fields were required. I will never use those fields again, but they now take up a considerable portion of the CSV download form, sitting there forever, cluttering the space, and making it harder to find and tick other useful fields.


It would be useful to add taxon-associated data to a species list. I am thinking specifically of threat status. A significant use case is the ability to query the occurrence of any threatened species in a particular area and to download that data for reports, management plans, consent monitoring, etc. Perhaps this is already possible, but I’ve not found a way.


Hi Carrie - thanks for this, I appreciate the links and example - but I am still holding out hope for a refreshed download format that includes annotations.



I’ll second this. There are definitely multiple search terms that work in Explore/Identify that don’t work in the export tool, e.g. multiple place_ids or without_taxon_id.


It would also be great if the export tool could include the URLs for more photos (and at original resolution). This would be super useful in supporting UK users to log data on our recording hub, iRecord.

I made an interactive notebook to solve it for now, but long term it would be great to have a built-in option for this. The notebook might also be of use to some folks on this thread too - see more here.
It has some code to grab annotations and, for example, obs with IDs only by certain users, as @nathantaylor suggested, but it might also offer a handy starting point for anyone wanting other bespoke data that the API enables but the website does not.
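On the photo-URL point: iNat photo URLs typically end with a size name (square, thumb, small, medium, large, original) just before the file extension, so a larger variant can often be reached by swapping that token. This is an observed URL convention rather than a documented API guarantee; the helper below just rewrites the string:

```python
import re

SIZES = ("square", "thumb", "small", "medium", "large", "original")

def resize_photo_url(url, size="original"):
    """Swap the size token in an iNat-style photo URL,
    e.g. .../photos/1234/medium.jpg -> .../photos/1234/original.jpg"""
    if size not in SIZES:
        raise ValueError(f"unknown size: {size}")
    return re.sub(r"/(%s)\.(\w+)$" % "|".join(SIZES),
                  r"/%s.\2" % size, url)
```

Whether the larger file actually exists at that URL still has to be checked request by request.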

Also, this is likely a bit much to ask, but in an ideal world, the ability to export as a UK map reference, as I have in the notebook, would solve the other major issue for UK users and iRecord.


I’m not certain of this, but I don’t think the site keeps copies of the original resolution photos.

I’m not sure it would fit with their desire not to be used as a photo backup service, and if they do have higher-resolution photos where users uploaded them, why incur the storage costs (both financial and technical) of keeping them but not displaying them?