API calls always return no more than 30 records, even when more are requested

The call https://api.inaturalist.org/v1/taxa/autocomplete?q=canis&per_page=106 results in the following output:


And then I cannot access the remaining pages. It even does the same when I use the “Try it now!” feature on the API page.

Am I doing something wrong?


I just tried my first API call saved as JSON for use in Get & Transform in Excel. All worked beautifully until I saw only 30 records instead of 962. Obviously there's something in the "page" and "per_page" parameters that I'm doing wrong, but I don't know how to fix it either.

Do I need an API Token?

@cmcheatle @kueda (Crusher of Dreams, although I still prefer Ken Nietzsche) You know about birthin’ APIs. Can you help us out, please?

Sorry, you're getting outside my area. I do know that if you increase the number in an API call loaded as a browser session, it works.
Thus https://api.inaturalist.org/v1/observations?user_id=409010&per_page=50&order=desc&order_by=created_at#

will get you 50 records (validated by switching to chart view). I've never tried to do an extract of the data through the API, though.

Thanks for replying.

When I filled in all the API selections I needed and hit "Try It Out", of the three choices I used the URL one, which has the same syntax as your browser example. Unfortunately, I don't know what Chart View is or how to switch to it.

I'll keep messing with the page and per_page fields to see what might work.

Maybe I should have read the implementation notes first.
" Given zero to many of following parameters, returns observations matching the search criteria. The large size of the observations index prevents us from supporting the page parameter when retrieving records from large result sets. If you need to retrieve large numbers of records, use the per_page and id_above or id_below parameters instead."
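The id_above approach those notes describe can be sketched as a loop (a minimal Python sketch; `fetch_page` here is a hypothetical stand-in for whatever HTTP client you use to call the API, not part of the API itself):

```python
def fetch_all_by_id(fetch_page, per_page=200):
    """Page through a large result set using id_above instead of page.

    fetch_page(params) is a hypothetical callable expected to return a
    list of result dicts, each with an "id" key, sorted by id ascending
    (order_by=id&order=asc), as the implementation notes suggest.
    """
    results = []
    id_above = None
    while True:
        params = {"per_page": per_page, "order_by": "id", "order": "asc"}
        if id_above is not None:
            params["id_above"] = id_above
        batch = fetch_page(params)
        if not batch:
            break  # no more records past the last id seen
        results.extend(batch)
        id_above = batch[-1]["id"]  # next request starts after this id
    return results
```

The point of keying on the last id rather than a page number is that the request stays cheap for the server no matter how deep into the result set you are.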

At least in Chrome (I don't know about other browsers), when you load a JSON document into a browser view, once it finishes loading there should be a 'Chart' link in the top right that gives a graphical visualization of the data.


Ah, thanks. Appears that Firefox doesn’t support that functionality. May have to switch to Chrome.

you may be trying to use the API for something it wasn’t intended to do. the /taxa/autocomplete endpoint is intended to return a small list of taxa for an autocomplete feature. i can’t think of a reason to ever return more than 30 records for an autocomplete result list. the iNaturalist staff probably thought the same thing, and so they capped it at 30 to prevent anyone from consuming resources unnecessarily.

that said, some of the other endpoints have higher caps (in line with expected use cases). for example, the /taxa endpoint is capped at 500. so you could use that to get an entire list of 106 canis records in one request.
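As a concrete sketch of what that request might look like (assuming /taxa accepts the same q and per_page parameters as the autocomplete example above, which I believe it does):

```python
from urllib.parse import urlencode

# Build a /taxa request asking for all 106 canis matches in one call;
# /taxa is described above as capped at 500 per page, so 106 fits.
base = "https://api.inaturalist.org/v1/taxa"
params = {"q": "canis", "per_page": 106}
url = f"{base}?{urlencode(params)}"
print(url)
```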

see above. which endpoint are you using? i think the highest per_page limit available to some of the endpoints is 500 records. so if you’re trying to get more than 500 records, you’ll have to make multiple calls.

for GET requests, you generally don’t need an API token unless you’re trying to pull back a huge result set or unless you want certain data that is available only to a specific user (ex. obscured locations).


Thank you, @pisum. This is all pretty new to me but I suspect I’ll use the API a lot, not to keep connected, but to use for a saved JSON file for Excel as the current CSV download isn’t useful. This first foray was a bit of a test using the API/v1.

Endpoint was GET Identifications: Valid observation records regardless of research grade, one identifier, one taxon at genus level but returning only species, locations USA and Canada. ~960 total records. Here’s the URL: https://api.inaturalist.org/v1/identifications?rank=species&user_id=333026&current=true&place_id=1%2C6712&taxon_id=119861&page=1&per_page=200&order=desc&order_by=created_at

It wouldn't take 500 per page, but I just got it to take 200. I'll have to think about how I'd split the results into multiple calls, or maybe see if an API token would get me the total records.

It’s too bad that one can’t specify fields as the CSV can. Really only need about 10% of what’s in the records.

i don’t think the API token gives you a higher per_page cap. it just allows you to return higher pages. (i forget what the page limit is for unauthenticated calls, but it’s documented somewhere.)

just use page=2 and so on… (and probably order asc)… if you’re using Excel, you could write a macro that will pull back results from incrementally higher pages (until it reaches a set with no records) and then combines the results. just note the page limit for unauthenticated calls, and also note that the API throttles you (refuses requests) if you make too many within a short period of time.
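That page-incrementing loop could look something like this (a sketch in Python rather than an Excel macro; `fetch_page` is again a hypothetical stand-in for your HTTP call):

```python
import time

def fetch_all_pages(fetch_page, per_page=200, delay=1.0, max_pages=50):
    """Pull incrementally higher pages until a page comes back empty.

    fetch_page(page, per_page) is a hypothetical callable returning the
    records for one page. delay spaces requests out so the API's
    throttling isn't triggered, and max_pages is a safety cap, since
    unauthenticated calls have a documented page limit.
    """
    results = []
    for page in range(1, max_pages + 1):
        batch = fetch_page(page, per_page)
        if not batch:
            break  # an empty page means we've run out of records
        results.extend(batch)
        time.sleep(delay)
    return results
```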


Great. Thanks so much for your expertise @pisum.

And I just discovered that the latest versions of Excel allow one to append additional queries to the first so no macro needed. Instead I can spend time removing 100 unneeded columns…


hmmm… i’m not sure what’s involved in removing unneeded columns, but it may or may not be easy to get, say, 5 result sets with all the columns, and then just do a SQL query to combine all the result sets, specifying just the columns you want:

SELECT [column 1], [column 2], [...]
FROM [table 1]
UNION ALL
SELECT [column 1], [column 2], [...]
FROM [table 2]
UNION ALL
SELECT [column 1], [column 2], [...]
FROM [table 3]
ORDER BY [column 1], [column 2], [...] [<-- ORDER BY is optional]

Thanks. Yes, I'd thought of SQL, but I'd still have to first look at each column (field) to determine what it is (and ponder iNat mysteries like why a taxon_id is, or should be, different from an iconic_taxon_id). And at least several fields are "exploding", having multiple selection values. For this exercise it's easier to just delete the unwanted columns.