I was curious how people felt about the new app in general compared to the old one.
Here are my thoughts:
Positives:
- The new interface looks nice
- Offline algorithm can be helpful, though not always
- Notifications work much better
- Easier to take multiple photos
- I got it to not save all my iNat photos to the phone (it currently gives me a nag screen, but that’s meant to be removed soon)
- It seems to get better GPS accuracy. It won’t post the observation until it gets a good reading, or at least it warns of bad accuracy. With the other app I’ve taken observations at 10,000-meter accuracy and such, which I don’t want to do.
- Upload process seems to work better. It doesn’t keep turning auto-upload back on, which I never want. Instead I can choose right off the bat whether to post immediately or not.
Negatives:
- It’s slower. More clicks to make an observation, and they lag a little. There’s no way to make an observation without running the algorithm, and there are times I don’t want to.
- Less offline functionality. There’s no way to put in a species name when offline, and in general the app doesn’t seem to work that well without cell service. When there is only a little bit of cell service, it lags a lot.
Overall: I like the new app and think it has promise, but despite all the upsides, I find the slower observation flow and the lack of offline functionality to outweigh them. So for now I still mostly use the old app. Of course, it’s in development and will likely improve a lot over time. But for me, it hasn’t yet passed the point where I like it better than the old one.
I was liking it more than the old app until a few weeks ago, but have since stopped using it to make observations:
Like the observation-creation process, because we can mass-import photos (using Next or from the photo gallery) and then combine them into separate observations.
Like being able to use the dynamic taxonomy ladder to choose a rung higher up for the ID of our observation.
Like the idea of the AI camera to scan for quick IDs without making an observation, though it’s slow to load for us (old phone).
Like the dot ranking of AI confidence in the suggested IDs.
Don’t like the audio observation bug that causes saved audio in a not-yet-uploaded observation to erase itself after a day (we lost audio of elk bugling during the rut and mountain corvids calling during hikes). If you’re out of cell service for long enough, you lose your audio!
Still not used to where the buttons are. For some reason, we want to choose the second CV suggestion over the first, because we are used to the first suggestion being a higher-level taxon on the old app (we’re pretty sure this is in…) and the second being the suggested species.
Cannot yet get used to the loss of being able to tentatively ID when out of cell range. Disrupts our preferred workflow and probably caused us to lose all that audio because we couldn’t ID and upload in the field.
Can’t get used to taking pics in the app. The camera focuses too far, too fast, and we can’t figure out how to add some photos to a new obs while rejecting others. And when photographing animals we need to move quickly, so we just use the phone’s camera.
Without notifications from our ID work, we still have to use the website. And with a backlog of un-uploaded audio observations that we’re still hoping can be recovered, we’ve stopped using iNat Next for creating observations for the time being. We occasionally still use the AI camera.
If you find iNat Next slower than the legacy iPhone app, can you please be specific about what parts are slower for you? Are you experiencing a lag after tapping on a particular element, or is there a lag when you land on a particular screen before the content you want appears? It’s a bit hard for us to address complaints like “it’s slow” without these kinds of details.
I tried to preserve both a “no suggestions” flow and the placeholder and I lost both battles, but FWIW if you turn on “All observation options” in settings, you can start a “no evidence” observation and enter the camera from there to skip the suggestions. In lieu of the placeholder I add a name to the observation notes, but it’s definitely a less efficient way to make a quick observation.
For me, it’s two things. First, there are more things to click through, which slows me down. This may improve as my workflow adapts to the new app and when the update arrives that removes the ‘frog nag screen’ about saving photos, but I don’t think it is possible to make an observation as quickly as in the old app. Maybe no one else cares about speed the way I do, so it may not be relevant, but that’s how I’ve felt about the new app.
Yeah, that will be a major issue for me and will probably cause me to use iNat significantly less if my only option is an app with no placeholder. But again, I’m not sure if others care or not, and I know I use the app ‘weirdly’, so maybe the feedback is valuable and maybe not.
This is useful to know, thank you. I think the end result will just be me adding a bunch of observations without any ID at all if I’m not allowed to make a placeholder. I think others may do the same, and it may result in a bigger burden for identifiers. But I could be wrong! And there could be other solutions, like making it easy to swipe left and right through observations in the app before I upload, after I get back to a place with connectivity.
Basically in my case, speed (even a few seconds per observation) and offline functionality are very important. I’m interested to hear the views of other app users here.
I just spent some time in the yard trying out the Next in-app camera and the AI camera. See my notes on this observation of Bush Croton. Maybe I’m missing something, but on my iPhone 14 (iOS 17.6.1), I see no editing tools available for photos taken via either of the above sources, not even the simplest like rotating or cropping a photo. That makes them almost non-starters for use in the field since field photos very frequently have to be cropped and/or rotated to be suitable for upload (at least to my personal standards). Otherwise, I’d be uploading many images sideways/upside-down or a small subject in a large field.
I tried a workaround: taking an image with the in-app camera, then going to Photos, editing (rotating) the image, then returning to the app. However, the edited version does not show up, only the original unedited (unrotated) version. Is that a bug or intended behavior?
UPDATE: Part of the above issue arose because I had the iPhone locked in Portrait Orientation. Unlocking the orientation allowed either camera to eventually acquire a properly oriented photo, but the response was slow (a few seconds, not instantaneous as with the native camera). That slow re-orientation of the camera will be a drawback for living/moving subjects. The lack of simple editing tools is still a concern, however.
Here is my list, excluding stuff you already mentioned:
Positives:
bulk upload (this is so much nicer!)
CV confidence scores
creating an audio observation is handled far better, and I can now actually listen to the audio with headphones, which somehow never worked in the old app
being able to see profiles
the more web-like explore page including better filter options
checklist feature in explore which quickly shows you which species in your area you have observed
Negatives:
the way adding a non-CV ID works
top CV recommendations never being left at genus level or broader
explore showing “species” by default instead of observations
notifications not being marked as read immediately after you click on them
Additional Wishes:
push notifications so I can see iNat activity on the lock-screen
identify mode
ability to add and see annotations
filter box on explore being made a bit more compact so you have access to more filters without scrolling so much
uploads not being interrupted and reset all the way back to 0 when leaving the app
making the username on the “Your Observations” page a button that opens your profile page
Overall I think the new app is a huge improvement and I have been using it basically exclusively unless muscle memory has caused me to open the old app.
This made me think of something else. It’s true it doesn’t work right without cell service, but even with cell service, if the connection is slow, having to search online for taxa slows the workflow down a lot. Basically, if the CV works the app is better than the old one, but if it doesn’t work it’s much worse. It seems like it would be easy to add/fix the functionality, but I detect some form of user control here: there’s this perception that if we don’t have the placeholder option we will have fewer unknown observations. In my opinion there will be more, not fewer, because people won’t add IDs at all. There may already be some indication of which is the case based on user data, but the testing pool may not include that many newbies and casual users.
If the current status quo is maintained, I’d love to see some ‘fast mode’ on the app you can unlock, or at least some way to keep access to a deprecated version of the old app.
I have used the Next app primarily to submit photos from my camera roll. I find it easier to take photos with the native Camera app, especially as my workflow for invertebrates is often to take 10 or 20 images and then select and crop the best ones for upload. I didn’t use the old app much for making observations directly, and I’m unlikely to use Next much for that either, although I am likely to use it a lot more to upload photos rather than transfer them to the computer for web upload.
I may not be a typical user, but here are a few thoughts, some of which I have already shared via the feedback option within the app. There’s some overlap with previous contributors.
Things I like:
The ease of adding up to 20 images at a time, and organising multiple photos into observations
The little dots indicating how good the CV/AI suggestions are
Filter options for exploring observations
Notifications are visually clean and nicely laid out. I don’t miss seeing the user icons
I like that the icons for Research Grade etc seem to subtly de-emphasize reaching Research Grade. This could help to reduce some undesirable behaviours
Going to grid view, an improvement is the size of the images (compared to the grid in Explore on the old app)
In Observation view, the image size is improved
For audio recording, I like the helpful hints that appear under “?”
Factors that reduce my inclination to use the app directly for making observations:
It launches into the AI camera, or a set of options which require a second tap on a cluttered screen. I’d prefer to be able to launch directly into the standard camera with no AI suggestions
Having taken one photo with the AI camera, there is no evident way to add a second one. I prefer to take multiple photos of an organism (the standard camera is better in that regard and allows taking multiple photos)
It’s more complex than it was before to delete selected images from an observation, and I cannot figure out how to reorder them, which was easy in the old app. I frequently want to put as the first photo a photo other than the first one I took
Suggestions for improvements:
The @ symbol in front of usernames seems unnecessary clutter and could be removed
The notification messages are longer and less informative. Instead of “bob suggested an ID: Polistes versicolor” we now have “bob added an identification to an observation by you”. I would prefer to see the identification rather than know whether it’s my observation or someone else’s
In the old app, it was clear when any new notifications had been loaded, and you could pull down and release to refresh. This process is more opaque in the Next app, and you have to just wait for a while to see if there are new notifications
In my observations, the “Welcome back, user” text seems superfluous and takes up extra space. Remove?
Both list and grid displays are too cluttered. In list view, the pin and clock icons are superfluous, as it’s clear that the text here refers to location and date/time. I’d suggest removing the icons.
Still in list view, I have my preferences set to display scientific name and two languages for common name. I’d suggest that’s too many in this view, but I want to keep that option for the site. You could display as much as fits on two lines (one for scientific, one for common) but not allow the names to flow onto a third line. Or allow a different preference for the app than that which is set for the site
Still in list view, the icons for Needs ID, etc. are a bit too large and complex, I think, and you can’t click on them for an explanation. I’d suggest making them less wide, removing the + and the tick mark, and leaving only the one or two people. That would free up more space for taxon names. And there needs to be some way for new users to understand them, perhaps through onboarding
In grid view, there’s too much information layered on top of the images (number of photos, IDs, comments, various names). Better to remove most of it and provide just the name in the single preferred language. The rest can be accessed by clicking through to the observation
In Observation view, I prefer the username and date/time above the image as in the old app. It helps to separate this info from that of identifiers
Still in Observation View, again suggest to remove the @ from in front of usernames
I agree with the person who mentioned that it’s hard to distinguish the community ID from individual IDs. When in Observation mode I’d be inclined to prefer seeing the Details tab, with the map, by default, rather than the Activity tab
In Other Data for an observation, it would be nice to be able to copy the link rather than just open it, but that may be a phone issue rather than an app issue
In Settings, rather than choose between AI camera and all options, I’d prefer a dropdown menu from which I could choose any of the options as my default, including the “all options” switcher
In Explore mode, there’s too much clutter in the grid. Leave the information for list mode, and present just the photos and at most the preferred taxon name in grid mode
If the AI camera can work offline, why can’t we have a dictionary of taxon names so we can manually add IDs offline too? If such a dictionary would be too heavy, perhaps it could be downloaded as regional packs, like eBird does, based on country checklists. I agree with others that not being able to add precise IDs when offline is a big negative
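To make the regional-pack idea concrete, here’s a minimal sketch of offline name lookup over a hypothetical downloaded checklist file. The file name and JSON schema are invented for illustration, not a real iNat format, and a real app would more likely ship a SQLite database, but the principle is the same:

```python
import json
from bisect import bisect_left

# Hypothetical regional pack: a JSON list of {"id": ..., "name": ...}
# records exported from a country or state checklist. The file name and
# schema are invented for illustration.
with open("taxa_pack_US-MT.json") as f:
    taxa = json.load(f)

# A sorted (lowercased name, taxon id) list supports fast prefix search
# via binary search. No network is needed once the pack is downloaded.
index = sorted((t["name"].lower(), t["id"]) for t in taxa)
names = [name for name, _ in index]

def suggest(prefix, limit=10):
    """Return up to `limit` (name, id) pairs matching a typed prefix."""
    p = prefix.lower()
    i = bisect_left(names, p)
    matches = []
    while i < len(names) and names[i].startswith(p) and len(matches) < limit:
        matches.append(index[i])
        i += 1
    return matches

print(suggest("cerv"))  # could match "cervus canadensis" if it's in the pack
```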
Sometimes I pull out Seek if I just want to see what the CV says without uploading, and often I have to turn on airplane mode because the app runs much faster that way. When cell service is poor, it sometimes just hangs indefinitely after you snap a picture outside of airplane mode.
At least in Montana, most places worth observing don’t have any cell service.
I like the new app and prefer it for most of my activities.
Overall, in my view it is faster in the upload features, because I can now upload up to 20 photos and group them if I have more than one photo for one observation (which is more or less the rule for my observations).
Synchronizing the observations is much faster than in the old app.
However, there are a number of flaws that force me back to the old version from time to time.
I enjoy the Explore page feature with species and others’ observations, and the AI camera is super nice to use in the field. However, I do wish the CV confidence grading were color-coded instead of grey and green: red, yellow, and green along with the dots. Another thing I wish would be added on all platforms is reasoning for why the CV suggests the taxa it suggests.
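For what it’s worth, the red/yellow/green part of that wish amounts to a simple threshold mapping over the confidence score. A minimal sketch, with cutoff values that are purely invented since the app’s actual dot thresholds aren’t documented here:

```python
def confidence_color(score: float) -> str:
    """Map a CV confidence score in [0, 1] to a traffic-light color.

    The cutoffs below are invented for illustration; iNat Next's real
    dot thresholds may differ.
    """
    if score >= 0.8:
        return "green"
    if score >= 0.5:
        return "yellow"
    return "red"

for s in (0.92, 0.61, 0.33):
    print(s, confidence_color(s))
```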
The best feature is loading pictures directly from iOS Camera/Photos. And the computer vision confidence indicators are very nice.
I’m not crazy about the busy visuals with text over the photos.
If you want to react to notifications about IDs you made for others (e.g., withdraw an incorrect ID), you still need to do so on the website, which is inexplicable to me.
Losing the ability to add observations to projects is a big disappointment, maybe even a deal breaker.
This isn’t a detailed analysis because I’m not putting very much energy into iNat now as I have other stuff I’m finding more absorbing. I imagine I’ll come back to it eventually.
The CV suggests particular taxa for an image because it thinks the image corresponds most closely with images labelled as these taxa in its training set. It can’t provide any more advanced reasons because it doesn’t actually know anything about biology or about the morphological traits of the organisms it is suggesting.
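Roughly speaking, the model just emits a score per candidate taxon and the app ranks them. Here’s a toy sketch of that ranking step; the taxa and numbers are made up:

```python
import numpy as np

# Made-up scores standing in for a vision model's output over candidate
# taxa; a real model scores tens of thousands of taxa at once.
taxa = ["Vanessa cardui", "Vanessa atalanta", "Nymphalis antiopa"]
logits = np.array([3.1, 1.4, 0.2])

# Softmax turns raw scores into the relative confidences a UI can show.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for name, p in sorted(zip(taxa, probs), key=lambda x: -x[1]):
    print(f"{name}: {p:.0%}")
```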
Using it on Android, but figured I can still share my experience here. The way I use the iNat app: I take pictures with both my phone and camera. The camera uploads to Google Photos, so the pictures get sorted into the photo app along with the cellphone pictures. I then select pictures for my observation (often 10-20, from both devices) and share them with iNat.
This seems to work well with the new app, except iNat Next wants to create 20 separate observations. I would much prefer it to default to a single observation (or at least a setting to do so), or alternatively a “select all” button.
You can definitely withdraw your ID in iNat Next. Tap on the three dots on your ID and you’ll see that option. I’ve done it multiple times. If it doesn’t work then there’s a bug in the app.
As we’ve said, not all functionality is ready yet. Functionality for Projects will be coming.
This is true, but there have been some experiments done where the model can show you what part of the photo it’s relying on. We tried it out a bit and it was pretty cool seeing it highlight part of a Vanessa butterfly’s wing, for example. Making the model less of an impenetrable “black box” is something that would be great to do. But it would require a good amount of design work and I don’t know how feasible it is at scale.
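For anyone curious, one simple technique in that family is occlusion saliency: cover part of the image and measure how much the score drops. The sketch below uses a stand-in scoring function rather than the real CV model, purely to show the mechanics; it is not the experiment described above:

```python
import numpy as np

# Stand-in "model": scores an image by the brightness of a fixed region.
# A real experiment would query the actual CV model's confidence instead.
def score(img: np.ndarray) -> float:
    return float(img[:8, :8].mean())

rng = np.random.default_rng(0)
img = rng.random((16, 16))
base = score(img)

# Occlusion saliency: blank out one patch at a time and record how much
# the score drops. Large drops mark regions the model relies on (e.g.
# the patterned part of a butterfly's wing).
heat = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        patched = img.copy()
        patched[i * 4:(i + 1) * 4, j * 4:(j + 1) * 4] = 0.0
        heat[i, j] = base - score(patched)

print(np.round(heat, 3))  # only cells overlapping the scored region are nonzero
```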