I recently realized that if iNat disappeared overnight, I would entirely lose my life list of invertebrates (and face a lot of confusion with vertebrates, as my separate lists of them are not up to date).
An obvious solution is the CSV download. However, this still has the downside of losing everything that has not yet been IDed. Sure, with iNat gone I could no longer hope to slowly get IDs for everything, but as I learn about new groups of animals I sometimes go back to my observations myself - or a new, similar platform could appear. The problem is that while I still have all the photos I uploaded to iNat stored on my computer, it would be a massive effort to collect them and establish their locations again (some have geolocation metadata, but many don’t).
Luckily, the CSV download also provides URLs for the photos - unfortunately, it seems to include only one photo per observation, and I haven’t found any way to change that, but that’s still vastly better than nothing. With a simple bash script, I am now slowly downloading all the images - renaming “medium” to “original” in the URL gives me the larger versions, and when I name each file after its observation ID, it is easy to link with the CSV file.
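For what it’s worth, the steps above can be sketched in a few Python helpers instead of bash. The CSV column names (“id”, “image_url”) are guesses at the export’s header, and the blanket “medium”→“original” replace assumes the size token only appears once in the URL - check both against your own export before trusting this:

```python
import csv
import os
import urllib.request

def original_url(medium_url):
    # swap the "medium" size token for "original"; replace() would also hit
    # any other occurrence of the word, so verify your URLs look as expected
    return medium_url.replace("medium", "original")

def local_name(observation_id, url):
    # name the file after the observation ID so it links back to the CSV row
    ext = os.path.splitext(url)[1].split("?")[0] or ".jpg"
    return f"{observation_id}{ext}"

def download_all(csv_path, out_dir="photos"):
    # column names "id" and "image_url" are assumptions about the export header
    os.makedirs(out_dir, exist_ok=True)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            target = os.path.join(out_dir, local_name(row["id"], row["image_url"]))
            if not os.path.exists(target):  # future runs only fetch new photos
                urllib.request.urlretrieve(original_url(row["image_url"]), target)
```

The existence check also gives you the incremental behaviour mentioned below: a re-run skips everything already on disk and only fetches new observations.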
This is a bit of a waste of bandwidth, as I already have all of them, but I simply haven’t found any way to link the existing images to the observations - the upload process discards the filename. In principle, I could automatically tag each photo with a unique string before upload, and because the upload preserves tags, this could be used for linking; but it could have unwanted side effects when managing the photos locally, as the proliferation of different tags could crash some software.
Is there any more straightforward method that I am simply missing? The download works nicely (it should be done in about four days, and any future update will be much faster, downloading only new observations), but it still feels a bit contrived.
The filename is maintained along with all of the original metadata on the photo’s page (it’s hidden on photos that aren’t yours). You could modify your script to go to that page instead of the .jpg and then scrape the filename. I don’t think there’s an API endpoint for the photo metadata.
Interesting! This would be a bit more work, because it requires reading the webpage, but it would be feasible. The main upside is that I could then just find the original files and have them in full resolution (the downside is that it requires thinking, as opposed to brute force :)).
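A rough sketch of that scrape, split so the HTML-matching part is testable on its own. The `/photos/<id>` URL and the regex are both guesses at the page structure - inspect the HTML of one of your own photo pages and adjust the pattern before relying on it:

```python
import re
import urllib.request

# hypothetical pattern: grab the first thing that looks like an image filename
FILENAME_RE = re.compile(r"([\w\- ]+\.(?:jpe?g|png|tiff?))", re.IGNORECASE)

def extract_filename(html):
    # return the first filename-looking string in the page, or None
    m = FILENAME_RE.search(html)
    return m.group(1) if m else None

def photo_page_filename(photo_id):
    # fetch the photo page (URL scheme assumed) and scrape the original filename
    url = f"https://www.inaturalist.org/photos/{photo_id}"
    with urllib.request.urlopen(url) as resp:
        return extract_filename(resp.read().decode("utf-8", errors="replace"))
```

Once you have the original filename, you can match it against the files on disk instead of re-downloading anything.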
for what it’s worth, since a couple of years ago, iNat photos that have been licensed (not all rights reserved) are stored over in the AWS Open Data repository. i don’t know for sure, but i suspect that even if the iNaturalist organization completely collapsed tomorrow, Amazon would still keep the AWS Open Dataset up and available for everyone for at least some period of time, if not indefinitely.
so i would guess that all the iNat observations that made it to GBIF (since they are licensed) would still have the underlying image files available. if GBIF were to go down, too, then you still have the AWS Open Dataset’s monthly metadata files. so you could still get basic data associated with the photos – observer, date, location, taxon.
if Amazon / AWS were to disappear… well, then half the internet would be non-functional. so there would be bigger problems than missing iNat observations / photos in that case.
if you want to download your photos, that’s probably fine, but i think realistically, the main scenario that this kind of backup would save you from is if you (or someone who got access to your account) deleted all your observations (or deleted your iNat account altogether) and iNat staff could not undo this for some reason. or if you wanted to put your data into a time capsule to be opened many decades from now, then i guess this would be another scenario where it might make sense to download everything.
In a way, although it was unintentional, iNaturalist has become a backup of many of my photos. I have a personal rule not to add observations unless I’ve labeled the photo in some way that lets me easily search for it on my hard drive. For example:
Perhaps I’m an exception, but I try to label every photo I intend to keep with a basic name and date. I don’t use a mobile phone to take photos, and my tiny point-and-shoot camera is GPS-enabled, so all photos are automatically geotagged. If I make an ID mistake, or the taxonomy of an organism changes, or an expert IDs an observation to a finer level, it’s easy for me to search for the photo and rename it.
My personal rule means I sometimes wait weeks or months to find the time to label the photos so I can upload an observation, but I’ve found that I don’t have to worry about a website disappearing. Plus if my hard drive and my backup ever crashed, at least some of it would remain accessible on iNaturalist.
Donate, donate, donate so that doesn’t happen. I have most of my photos backed up on external hard drives in folders labeled by location and date. It would be a pain to wade through them, though. OTOH, I am more forward-looking, and my photos and observation skills keep getting better. Unless a species I’ve uploaded goes extinct, I’ll keep looking. And fortunately, as a birder, I have full-size backups at the Macaulay Library for most bird photos worth keeping. It’s a shame we don’t have more permanent repositories for other taxa.
I was thinking the same thing. I’ve started to upload all of my old historical records to iNaturalist or eBird (or both), and when I’m done, I’ll slowly destroy my old field notes, especially the ones that were just scraps of paper or local park checklists. Then I wondered: what would happen if these sites closed down? For that reason I still keep a paper life list of birds, mammals, and herptiles, which is easy to add to, as you don’t often add LIFERS. But I otherwise don’t track observations of species other than on iNaturalist. If it were to go, then having my observations in a box in my office wouldn’t help anyone. My kids will discard them when I’m gone. Not only will no one care that I saw Species X on May 17, 1974 at Newman Sound, Terra Nova National Park, Newfoundland, but the record also has no value without a public way, like iNaturalist, to share it. So if iNaturalist goes, then so goes my record keeping of recurring species.
Re: field notes and collection books. You may find that a state or university herbarium, insect collection, or library would be willing to take your field notes and collection books, and maybe photos, as historic records. They all have very limited resources, so this is most likely in the state where most of the records are from. This has been done with the collection books associated with the birds and mammals I prepared as research specimens; most of them are at the museum that, decades later, took the books.
So, simply put, if we had a method to re-download all of our images from our iNat accounts with the needed information somehow linked to each image, it would act as an iNat backup and solve our concerns.
(iNat often detects scientific names in the filenames of images and adds them as the ID automatically, and likewise with dates and GPS data - so just like that)
I’d say the needed information would be like:
・location and location description
・ID (preferably with some higher taxonomic names)
・preferably something that indicates that a series of photos are from one observation (e.g. 100a.jpg, 100b.jpg)
Furthermore, if this became possible, iNat users could access their observations OFFLINE! It would be extremely useful.
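A minimal sketch of that naming scheme - the obs-ID-plus-letter format and the taxon suffix are just the suggestion above, not anything iNat itself provides:

```python
import string

def observation_filenames(observation_id, photo_count, taxon=None):
    """Build names like '100a.jpg', '100b.jpg', optionally with the taxon
    appended, so all photos from one observation sort together on disk."""
    names = []
    for i in range(photo_count):
        suffix = string.ascii_lowercase[i]  # assumes fewer than 27 photos
        stem = f"{observation_id}{suffix}"
        if taxon:
            stem += f"_{taxon.replace(' ', '_')}"  # keep names filesystem-safe
        names.append(stem + ".jpg")
    return names
```

With names like these, an ordinary file browser already gives you the offline grouping by observation; location and ID could additionally be embedded in the image metadata.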
I can already access my observations offline if I keep them, because I label them with ID, location, and date. I do NOT want to keep them all or download them all from iNaturalist, because that would take more storage than I have or want to get, given that I also store large quantities of other files.
you’ll probably never get iNat staff working on something like this (unless iNat is near insolvency) because they don’t want folks using the system as a photo backup service.
however, it’s possible to code something to do what you’re describing. i’ve already written something that can give you the cURL commands to download photos with filenames prefixed with obs id, seq, date, taxon id, and taxon name. getting location, location description, and observation description would probably require code to actually download the photo for you and then add photo metadata. although it’s not super difficult to do this with some EXIF modules out there, i’m not going to do it because i don’t think it’s really worth the time.
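as an illustration of the kind of command generator described here (the field names, their order, and the filename scheme are all just illustrative, not the actual script):

```python
def curl_command(obs_id, seq, date, taxon_id, taxon_name, photo_url):
    # build a filename prefixed with obs id, seq, date, taxon id and taxon name,
    # then emit a cURL line that saves the photo under that name
    safe_taxon = taxon_name.replace(" ", "_")
    filename = f"{obs_id}_{seq}_{date}_{taxon_id}_{safe_taxon}.jpg"
    return f'curl -o "{filename}" "{photo_url}"'
```

the point of prefixing everything into the filename is that the basic data survives even if the CSV is lost; anything beyond that (location, descriptions) would have to go into the photo metadata itself.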
(you can read my take on backup above. i don’t think there’s much point to downloading photos for backup purposes, except in a few limited cases.)