Export has been queued for the last five hours (13,000 observations with many fields chosen)

Platform (Android, iOS, Website): Website

App version number, if a mobile app issue (shown under Settings or About): n/a

Browser, if a website issue (Firefox, Chrome, etc): Chrome

URLs (aka web addresses) of any relevant observations or pages: https://www.inaturalist.org/observations?place_id=any&subview=table&taxon_id=83736

Description of problem (please provide a set of steps we can use to replicate the issue, and make as many as you need.):

Step 1
Open https://www.inaturalist.org/observations?place_id=any&subview=table&taxon_id=83736

Step 2
For the above URL, I wanted the maximum field data, so I added many fields from the filters.

Step 3
Created the export; it said it was in progress.

Step 4
However, it has still been in progress for 5 hours!

Why?


I think the answer to “why” is that it has 13,000 observations with many fields chosen. Especially if your internet is spotty, 5 hours is not unreasonable.


did you refresh the page? when you first create an export, there will be a little pop-up that sticks around for maybe a minute. if the export hasn’t completed within that time, you’ll be presented with an option to either get an e-mail when the export is done or to check the status yourself. if you choose to check the status yourself, you need to refresh that page occasionally to get the latest status.

if the status is up-to-date, then “Queued” would suggest that your job was stuck behind other jobs that others launched prior to yours. i just launched a small export job myself, and it completed just fine. so i assume that if your job was stuck behind other jobs, it’s no longer stuck.

if your job is still showing “Queued” however, it’s possible that something went wrong behind the scenes. in such a case, write a note here to indicate that the problem still exists, and then also contact the staff through the helpdesk at help.inaturalist.org, referencing this post. (they’ll probably need to clear the job before you’ll be able to launch any additional jobs.)


5 hours for 13,000 definitely seems like something is wrong, in my experience; I've completed quite a lot of downloads of 100,000-200,000 records at once, including additional columns, and had them finish in 1.5-2 hours.


That will vary enormously depending on the connection from your location to the iNat servers.

Where I am, downloading that many observations would take a day or two and would probably time out and have to be restarted several times before it finally worked.


The download has not even been processed; there is a Queued date but no Started or Finished date. So the issue is with the iNat server rather than the user's connection. I've been trying to download only 5,000 observations for the last 3 hours and have had the same problem.


i just exported a csv with 1 record. i don’t know exactly how iNat prioritizes jobs in its processing queue, but i wouldn’t have expected it to prioritize my export over any other jobs.

try refreshing your export page. does the page still show that your job is queued?

Yes, I have tried refreshing multiple times.
Four days ago, I performed the same query, which was queued for only half an hour and completed in 13 minutes.

hmmm… i’m surprised that iNat wouldn’t process jobs first-in-first-out, but if you’re only trying to export a few thousand records, you can always do that via the API instead.
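for example, here's a minimal sketch in Python (using the requests library, not the notebook linked below) of what paging through the API looks like. the taxon_id is the one from your URL, the CSV columns are just illustrative, and as far as i know, simple page-based paging only reaches the first 10,000 or so results:

```python
# a minimal sketch: page through the iNaturalist API and write a small CSV.
# the taxon_id comes from the URL in the original post; the columns chosen
# here are just illustrative.
import csv
import time

import requests

API_URL = "https://api.inaturalist.org/v1/observations"

def fetch_observations(taxon_id, per_page=200, max_pages=50):
    """Collect raw observation records by paging through /v1/observations."""
    observations = []
    page = 1
    while page <= max_pages:
        resp = requests.get(API_URL, params={
            "taxon_id": taxon_id,
            "per_page": per_page,  # 200 is the API's maximum page size
            "page": page,          # simple paging stops working past ~10,000 results
        })
        resp.raise_for_status()
        results = resp.json()["results"]
        if not results:
            break
        observations.extend(results)
        page += 1
        time.sleep(1)  # be polite to the API between requests
    return observations

def write_csv(observations, path="observations.csv"):
    """Write a few illustrative columns; add whatever fields you need."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "observed_on", "taxon", "place_guess"])
        for obs in observations:
            taxon = (obs.get("taxon") or {}).get("name", "")
            writer.writerow([obs["id"], obs.get("observed_on"), taxon,
                             obs.get("place_guess")])

obs = fetch_observations(taxon_id=83736)
write_csv(obs)
print(f"exported {len(obs)} observations")
```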

there’s a link to a Jupyter Notebook (using a variant of Python) that can help you accomplish this here: https://forum.inaturalist.org/t/whats-the-best-way-to-share-python-code-nowadays/48554

@kildor has a page that provides another way to export stuff: https://kildor.name/inat/download-observations

i’m sure there are other tools that can help with this, too, but those are the first ones i thought of off the top of my head.


Thanks! Perhaps the servers are overloaded?
Is it also possible to download obscured observations with these alternative solutions?
I'm trying to download the observations from a project that I administer.

it should be possible with the Jupyter Notebook. you have to get a JSON Web Token. search in the notebook for jwt =, and follow the instructions in the comments above. then in the section that defines which columns you want to get, uncomment the private location fields so that these will be included in the results. when you make the main call, make sure you set parameters get_all_page=True and use_authorization=True.
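for reference, here's what that authorization looks like outside of the notebook, as a minimal sketch in Python with the requests library. it assumes you've already copied a JSON Web Token from https://www.inaturalist.org/users/api_token while logged in, and the project slug is just a placeholder for your own project:

```python
# a minimal sketch: pass a JSON Web Token in the Authorization header so the
# API will include private coordinates you're allowed to see. the project
# slug below is a placeholder; substitute your own project's id or slug.
import requests

JWT = "paste-your-token-here"  # from https://www.inaturalist.org/users/api_token

resp = requests.get(
    "https://api.inaturalist.org/v1/observations",
    params={"project_id": "your-project-slug", "per_page": 200},
    headers={"Authorization": JWT},
)
resp.raise_for_status()
for obs in resp.json()["results"]:
    # private_location is only populated when your token grants access to it
    print(obs["id"], obs.get("private_location"))
```

the notebook wraps this up for you (paging, column selection, and so on), but it's the same mechanism under the hood.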


Thanks again! I'll remember this for next time; it appears the export went through!
Queued 16:27, Started 20:13, Finished 20:26

That makes it seem like the queue was just long: your export spent almost four hours queued but only 13 minutes running (a similar run time to when you ran it earlier). So maybe there were just a lot of requests?


Yes, it seems the server was quite busy.