I’m taking a certified naturalist course in California, and I’m on an Android phone. The students are encouraged (well, okay, required) to submit iNaturalist observations. We’re doing it in a remote area. Other students seem to be getting a weak cell phone signal; I’m getting none. That probably depends on the mobile phone carrier.
The other students tell me that they are getting quick and accurate identifications, presumably from iNaturalist’s AI identification engine.
I submitted about a dozen observations, with two or three photos for each observation, all plants. My photos and other information didn’t upload for an hour or so, until I was back in WiFi range.
In every case, I got “Unknown. Needs ID.” In two cases, a human iNaturalist user eventually made an identification, several hours later.
I should add that I have used iNaturalist on a few occasions in the past year or so, when I was in range of a cell phone tower. On those occasions, I got immediate and likely accurate IDs. I don’t know a lot about iNaturalist. At the time I assumed the ID came from an AI.
I’m trying to figure out why my fellow students are getting immediate IDs and I am not. I understand I have to wait until there is some way for my photos to upload. I thought I would get IDs of some kind shortly afterwards.
I can think of several explanations, one of which is ignorance or unrealistic expectations on my part. Someone here probably knows the answer. Thanks in advance.
If you start the upload without internet connectivity, the app will queue the upload with no ID unless you entered one manually.
To get an AI (or, as we like to call it, computer vision/CV) ID, you can go to the observation page and press the “Suggest an identification” button. That should load the CV suggestions, and then you can select the ID that makes the most sense.
The CV suggestions are not automatically applied. Even when you have fast internet connectivity, you still need to select from a list of the top 10 CV suggestions, although most people just choose the top ID, and that’s OK.
The tutorial “Adding an Observation on a Mobile Device” might help, but it seems not to include how to get the CV ID.
I am hopeful. However, I can’t find the “Suggest an identification” button. Please advise. Is it on the app, the website, or both?
Both - here is how it looks on Android:
I make most of my observations either without an internet connection, or (more often) when I don’t want to use the mobile connection, or simply when I want to work fast and capture many subjects. In those cases I just store the photos in my ordinary Android photo gallery and upload them later when I have enough time at home, sometimes days after taking them. You can then determine the species with a book or internet resources, or just upload the observation with a broad identification like “birds”, “vascular plants”, “fungi”, or similar.
Sometimes I load the photos into the app for an observation but then get distracted while searching for the correct determination. If the app gets closed, it starts uploading the observation with an empty ID. In that case I either stop the upload, enter an ID, and resume uploading, or I let it upload and quickly add at least a basic ID afterwards.
It is important to always have some basic ID; do not leave observations without any identification at all.
Another good tip: add a location to your observation first, then check what the CV has to say about it. Adding the location first often narrows the field to a few choices (assuming good photos, of course), which you can then work through by process of elimination.
Okay, I found it. Thanks everybody! The part I was missing was that I have to click on the “Edit” button. That reveals “What did you see?” Then it all falls into place.
I tried that. As far as I can tell, the app doesn’t get the location from the photo metadata. I can manually enter the location approximately, but it seems preferable to enter the precise location, in case others want to look for the same thing.
Is there some way to get the app to read the location metadata from the photo?
The default is for the app* to indeed pull location metadata from the photos, so perhaps your location sharing settings are globally locked down on your phone? Nowadays there’s usually a phone settings page where you can set which apps are allowed to access location data; I’d check that.
* At least that’s my experience using the iPhone app, which from what I gather is generally inferior to the Android one in all respects.
You don’t have to edit an existing observation to add an ID. In the Android app, there will be three icons/tabs underneath the photo: an “i” symbol, two overlapping speech bubbles, and a star. If you click on the speech bubbles, you can see any existing IDs and conversations. At the bottom of this there will be a box to suggest an ID.
You can suggest an ID or change your existing one at any time. If you don’t like any of the computer suggestions, you can type in one of your own and iNat will look up the taxon.
Are you sure your camera app actually stores the location metadata? Often it must be enabled. I never had to enable anything more for the iNaturalist app.
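For anyone curious what “location metadata in the photo” actually looks like under the hood: EXIF stores GPS coordinates as degrees/minutes/seconds rationals plus a hemisphere reference tag (‘N’/‘S’, ‘E’/‘W’), and an app has to convert those to the signed decimal degrees you see on a map. A minimal sketch of that conversion (the function name and coordinates are just illustrative, not anything from iNaturalist’s code):

```python
# Illustrative sketch: EXIF GPS tags store each coordinate as three
# numbers (degrees, minutes, seconds) plus a hemisphere reference.
# An app converts these to signed decimal degrees for display on a map.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds to signed decimal degrees."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern latitudes and western longitudes are negative.
    return -decimal if ref in ("S", "W") else decimal

# Example: a photo tagged near Sacramento, CA
lat = dms_to_decimal(38, 34, 54.0, "N")
lon = dms_to_decimal(121, 29, 40.0, "W")
print(round(lat, 5), round(lon, 5))  # prints: 38.58167 -121.49444
```

If the camera app never wrote those GPS tags in the first place (location disabled for the camera), there is simply nothing for iNaturalist to read, which is why checking the camera’s own location setting matters.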
One other option is to use the Seek app (which is built by iNaturalist). It can ID stuff while completely offline. However, it’s not integrated with iNat, so it won’t upload your photos. So the workflow would be:
- Open Seek app to ID something.
- Take a picture with Seek.
- Open your phone’s camera app to take a “real” picture (at least on my iPhone, photos taken by Seek are low-res and not geotagged).
- When you’re in cellular or wifi range, upload the real photos while reviewing Seek for the IDs.
I see many observations created by Seek in the ID queue.
Huh, you’re right; I don’t know why I never noticed that before, and it’s geotagged, too. I just tried it in airplane mode, and it queues offline observations for uploading later. That should save OP a few steps.
So…I have a couple of caveats about Seek as a way to make iNat observations while offline. This is based on my experiences as an IDer; I have not personally used Seek.
Note: I am not discouraging anyone from using Seek if they are using it for the specific features it offers (e.g., privacy, game elements). But if you plan to share the observation with iNat, I suspect that most of the time it makes more sense to make the observation with iNat in the first place, even in the situation outlined by the original poster.
For difficult taxa (e.g. bees), observations made via Seek tend to have a couple of specific problems with a much greater frequency than observations made with iNat.
First, multiple observations of the same organism. From what I understand, this results from the fact that Seek only lets you submit one photo per ID attempt, so a user might photograph the organism multiple times to see whether it produces a better suggestion. Each of these subsequently gets sent to iNat individually, leaving the IDers with half a dozen observations, each with one photo of the same bee taken from a different angle. These photos really need to be in one observation, not just to prevent duplicate records, but because the different angles (say, one showing the head and one showing the tail) are needed for ID.
Second, the ID suggestions from Seek are often egregiously wrong for difficult taxa. iNat’s algorithm has trouble, too, but its suggestions are less bad on the whole. Because Seek is meant to be used offline, it cannot take geography into account. So it suggests North American or East Asian species for observations made in Europe. It also uses an older version of the Computer Vision model, so it has fewer species to choose from.