iNat Interest in UX Research?

i’m struggling to imagine some sort of result along these lines that iNat staff would realistically work on any time soon.

i think where you’re trying to go with this is to work out a way for the CV to suggest to the user the sorts of image templates that would increase the probability that someone would be able to identify an organism. just for example, in Seek, if you encounter an organism that its CV can’t get to species, Seek’s stock suggestion, i think, is to get closer to the subject. but what if it could somehow suggest to the user to try specific viewpoints by showing them sample images that have a high probability of being identified successfully?

something like this might make sense to try if you had unlimited resources, but could iNat staff realistically do anything with such an idea?

6 Likes

This is an interesting observation, and it suggests that maybe I should ask a different question: what role do different efforts and investments play in our ultimate goal? If that goal is more widespread, good-quality ID by humans, that’s a different problem.

I am also involved in efforts to improve ID. I work on the fungal family Cortinariaceae, give many talks, have collaborated on (and led) the description of four new species, and am working on regional keys. I really like your connection to the outcome “productivity of identifiers” and will think about this more.

That said, the huge opportunity in fungal ID is coming through massively increased sequencing. Currently, the ID flow does not surface to me that an observation has been sequenced, and determination based on a sequence is a different workflow that we don’t enable. If we are talking about improved quality, surely this data point should have more weight?

Anyway, thanks for the thoughtful response.

4 Likes

Actually, I am really agnostic about the solution. As a researcher, I routinely explore an area to see what the issues are and what problems might emerge. This conversation has already surfaced a lot of ideas and is valuable to me. The question of “could we ship something you identify” would be totally up to the dev team, and I would not consider this a waste of time if we decide not to. Having done this for a long time, I find you always learn something if you define the problem right (which you are all helping me to do).

Note: I will try the Seek app to see what you are talking about :)

Successful solutions could enable:

  • good IDers to identify more, by improving incentives, priority, or sense of purpose, so the CV has more signal
  • general IDers to ID with greater confidence
  • more or finer training data on species concepts, so the CV gets better faster

Btw, does anyone know what the CV model is trained against? What is a successful outcome for the model / how does the model know it is correct?

2 Likes

I don’t know if it’s useful to you, but I am a mycology (fungi) focused identifier with “medium” level expertise, so I might represent an average fungi-focused person on the site. One thing I wish I had is an annotation on all rust fungi (order Pucciniales) prompting the labeling of the host plant. I was thinking the other day about what kinds of always-present annotations (like some of the ones for plants and animals) would help us: freshly emerged vs. old specimen (to help with seasonality data), or “Part observed” with options like “Spores”, “Fungal Fruiting Body”, and “Wood Staining” (as in Chlorociboria sp.), where you can click more than one (for observations containing pictures of the fungal fruiting body plus microscopy of the spores, for example).

The BIGGEST prompt for people posting a picture of a fungus for the first time (as adjudicated by the computer ID thinking it’s a fungus) would be to add more than one picture, from more than one angle!! Especially of the “fertile surface” (tubes, pores, gills, ridges, etc.) of a specimen.

Having a terms glossary with illustrations could be nice too, if it could be linked from all fungi taxa pages. Having the word for something not only helps with communication; it helps with seeing it and choosing to photograph with it in mind in the first place, which helps good IDs get done.

I have also had a lot of success telling people about the Data Quality Assessment section at the bottom of each observation, and how we (as mycology identifiers) can use the “No, it’s as good as it can be” option to clean up identifications. There are plenty of cases where, really, you aren’t getting better than genus without sequencing or microscopy. So put the ID at the genus level, mark “No, it’s as good as it can be”, and it can still reach Research Grade at that rank. It also helps that species complexes are getting their own labels on iNaturalist as ID options for us.

As to the CV model, I don’t know, but I’m guessing it’s trained against all the iNaturalist data itself, including all the times we have identified something as species A but then flipped it to species B. That’s how it makes the “Similar Species” tab. I have seen it happen that when someone goes through and corrects a lot of observations, all of a sudden the Similar Species tabs for those two taxa reflect that event and list each other as “most misidentified as”.
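
For what it’s worth, the counting behind a “most misidentified as” table is simple to sketch. Here’s a toy illustration in Python (the records and their layout are invented for illustration; this is not iNat’s actual schema or pipeline):

```python
from collections import Counter

# Invented records of ID corrections: (taxon originally ID'd as, taxon it was
# corrected to). iNat's real data model is richer; this only shows the tallying.
corrections = [
    ("Amanita muscaria", "Amanita parcivolvata"),
    ("Amanita muscaria", "Amanita parcivolvata"),
    ("Amanita muscaria", "Amanita flavoconia"),
    ("Amanita parcivolvata", "Amanita muscaria"),
]

# Tally how often each taxon was mistaken for each other taxon.
misid_counts = Counter(corrections)
for (first_id, corrected_to), n in misid_counts.most_common():
    print(f"{first_id} corrected to {corrected_to}: {n} observations")
```

A pair of taxa that rack up corrections in both directions would then list each other under “most misidentified as”, which matches the flip you describe seeing.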

2 Likes

i see what looks to me like a lot of business buzzwords, and i still have trouble seeing where you’re trying to go with all of this.

if you’re just looking for general information on how people use the system in relation to fungi and computer vision, and how computer vision works, there’s already plenty of that kind of discussion in the forum. some of it goes relatively deep. you could read for hours, maybe days.

and people could spend hours or days rehashing a lot of that or bringing up new things. but for what purpose? how will whatever you’re proposing to do clarify things and, more importantly, spur action any better than efforts that have come before?

4 Likes

I think it is kind of hard for most of the people on this forum to understand because your posts are really buzzword/jargon-heavy. Are you offering to do actual development work of some kind, perhaps creating a third-party tool, or offering to tell the existing limited staff of developers that they should do even more things? If it is the second, there is already a backlog of really obvious “pain points” that need to be fixed *cough* notifications *cough* (is this the correct sense of the term?).

When you talk about “sequencing workflow”, is that some kind of tech jargon I am unfamiliar with, or are you literally talking about DNA sequencing, like the handful of observations with a DNA barcode copy-pasted into an observation field? If it is the latter, just a bit over 0.01% of observations have that, and all but ~800 of those are already ID’d to at least genus, so I would expect that flipping to the “Annotations” tab in Identify is not an unreasonable barrier to surfacing such relatively niche content (is there literally any identifier who could just look at those by eye and glean anything useful from them anyway?).

6 Likes

@sulcatus: I might be mistaken, but I believe that iNat’s CV model is content-free with regard to knowledge of morphological structures and taxa. In other words, I believe it is rolling its own set of identification criteria strictly from the photos and associated metadata. So I don’t think it would be possible to have a separate UI for just fungi.

That said, I’ve seen enough posts on the parlous state of iNat fungi IDs to believe that some grand reset is probably in order. It sounds (to this non-fun-guy) like many people assume more knowledge than they have when it comes to fungi IDs.

SO, perhaps there could be some kind of link that dynamically appears whenever the community ID is in fungi, and that link would take the user to a Fungi Guide that would supply general ID principles to follow and pitfalls to avoid?

In other words, if we’re pitching this at human IDers, as opposed to changing the CV model itself, perhaps we could provide well-intentioned humans with enough knowledge to know when they don’t know.

5 Likes

This is similar to an existing feature request here:
https://forum.inaturalist.org/t/expand-the-similar-species-tab-into-an-editable-identification-guide/13890/23

The CV model is trained against the dataset of observations ID’d by iNaturalist users. It uses a single model for all of the nodes it is trained on. Its “successful outcome” is predicting the node the observation was ID’d as on iNaturalist; that node is usually a species, but can be as high as genus or family in taxa where there aren’t many species-level IDs. There are lots of blog and forum posts about it; here are a couple:
https://www.inaturalist.org/posts/59122-new-vision-model-training-started
https://forum.inaturalist.org/t/computer-vision-update-july-2021/24728
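
To make the training objective concrete, here is a minimal sketch of that setup (my own toy illustration in PyTorch, not iNat’s actual code): one classifier over all taxonomy nodes, trained with cross-entropy, so “success” is simply predicting the node the community ID’d the observation as.

```python
import torch
import torch.nn as nn

# Toy stand-ins: real inputs are photos, and the label space is thousands of
# taxonomy nodes (species, or genus/family where species-level data is thin).
NUM_NODES = 5  # hypothetical node count
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, NUM_NODES),  # one logit per taxonomy node
)

images = torch.randn(8, 3, 32, 32)          # a batch of fake "photos"
labels = torch.randint(0, NUM_NODES, (8,))  # the community-ID'd node for each

loss_fn = nn.CrossEntropyLoss()  # penalizes failing to predict the ID'd node
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"cross-entropy loss: {loss.item():.3f}")
```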

Related to what you were saying, I have said before that a really cool feature I hope someone builds someday is a version of the CV model that can somehow give us some indication of what features it is using. I think it is pretty clear that in some cases the CV has figured out a set of reliable ID features that field guides do not know about or include (i.e., it can sometimes correctly, and with high confidence, ID photos that contain none of the features the field guides describe). The iNat team almost certainly does not have the bandwidth to build this feature right now, so it would have to be a third-party app of some kind.
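
If someone did build that as a third-party tool, one standard, model-agnostic starting point is occlusion sensitivity: gray out one patch of the photo at a time and measure how much the model’s confidence in its ID drops. A minimal sketch (my own, assuming any PyTorch image classifier; nothing iNat ships):

```python
import torch

def occlusion_map(model, image, target_class, patch=8):
    """Confidence drop when each patch x patch region of the image is grayed out."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
        _, h, w = image.shape  # expects a (C, H, W) tensor
        rows, cols = (h + patch - 1) // patch, (w + patch - 1) // patch
        heat = torch.zeros(rows, cols)
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                occluded = image.clone()
                occluded[:, i:i + patch, j:j + patch] = 0.5  # gray square
                p = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
                heat[i // patch, j // patch] = base - p  # big drop = important region
    return heat
```

Patches whose removal tanks the confidence are the ones the model is leaning on; if the hot spots consistently landed on the background rather than the organism, that would be very informative in itself.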

If you want a workflow for identifying specimens via sequences, the Barcode of Life Database is one.

Very few observations on iNat are going to have DNA sequences associated with them, because

  1. While the technology is at the point that someone with time and money to burn could conceivably be DNA barcoding samples from home, it’s still a lot of work and is most labor-intensive at small scales.

  2. There are ethical and legal issues with collecting samples/specimens for DNA barcoding that are not present for taking photos of organisms, even if you are temporarily capturing them to take the photos.

  3. If people frequently can’t get close enough to an organism to take a good photo, they’re usually not going to be able to collect a tissue sample. This may be less of an issue with plants and fungi, but they also have cell walls and a lot of secondary chemicals that make DNA extractions more difficult. Plant DNA extractions often use chloroform and other chemicals that need special ventilation systems to use safely.

I’m a biology professor who does DNA barcoding and has a fair amount of freedom in which organisms I barcode, but the overlap between the individual organisms that I have observed on iNat and that I have DNA barcoded is 0%. Because I study caterpillars and some people on iNat rear them, I’ve just started offering to DNA barcode a few interesting ones that died before the adults emerged or could not be IDed from the adults. But that’s a very different level of effort than trying to routinely DNA barcode everything interesting that I see on a hike.

3 Likes

That’s because it may be using arbitrary elements of the background rather than features of the organism itself. If, say, all the photos were of labelled museum specimens, the CV could literally make use of coffee stains on the labels if that gave the statistically best results. Its “vision” really is that blind. It has no more comprehension of the subject of a photo than an OCR program has of the plot of a novel it’s scanning. The credit for the IDs is entirely due to the human identifiers who provided the dataset.

3 Likes

I disagree to an extent.

First: It certainly can use background features, but background features can be a perfectly legitimate ID feature. For example, it can learn that plant species X is only found in sand, species Y is only found in sparsely vegetated soil, and species Z is found in loose gravel, and that would be a perfectly legitimate thing for a human IDer to use as well. Just two days ago I found a species where a field guide said that the most reliable way to ID to species was to ID the plant to genus, ID nearby associated species, and see which list of known associated species was a better match.
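
That associated-species method is easy to make mechanical, incidentally. A toy sketch (the species lists here are invented, not from any real guide): score each candidate by how much its known-associate list overlaps with what you saw nearby.

```python
# Invented known-associate lists for two hypothetical candidate species.
known_associates = {
    "species X": {"pinus", "vaccinium", "gaultheria"},
    "species Y": {"quercus", "carya", "cornus"},
}

observed_nearby = {"quercus", "cornus", "acer"}

# Crude overlap score: how many known associates were actually seen nearby.
for species, associates in known_associates.items():
    print(species, "shares", len(associates & observed_nearby), "associates")
# species Y wins with 2 shared associates: a crude version of the guide's method.
```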

Second: The CV can learn things about habit, flower shape, leaf orientation, etc., that are difficult to describe precisely in words with the available vocabulary, or that do not survive pressing in museum specimens, and consequently do not get described well in keys. Expert IDers often learn these kinds of features through experience and actually use them all the time. I have absolutely found pairs of taxa where the best ID feature to distinguish them is not in the key I learned the taxa from. I am also certainly not using “key” features when I ID species flying by at 55 mph through the passenger window of a car. Because the CV isn’t learning from a key, it can learn the features that real experts actually use, not just the features that are easy to describe in words.

Third: In some cases it can learn real, statistically accurate heuristics that would be very tedious to compute by hand. Hypothetically, it could learn that fish species X has on average 300 ± 50 scales, while fish species Y has on average 500 ± 50. Human IDers see patterns like this too, but might just describe them as “species X is usually not that big” or something. Because the actual pattern is quantitative rather than qualitative, this is the kind of feature you could reasonably expect a computer to be better at learning than a human.
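
As a toy version of that hypothetical fish example (the numbers are made up, purely to show the arithmetic): if scale counts are modeled as normal distributions, Bayes’ rule turns an observed count into a posterior probability for each species.

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Hypothetical species from the example: X ~ 300 ± 50 scales, Y ~ 500 ± 50.
count = 380
like_x = normal_pdf(count, 300, 50)
like_y = normal_pdf(count, 500, 50)

# With equal priors, the posterior is just the normalized likelihoods.
p_x = like_x / (like_x + like_y)
print(f"P(species X | {count} scales) = {p_x:.2f}")  # ~0.83 for these numbers
```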

Sure, features like minute statistical differences aren’t good enough for high confidence on their own, but most of the time in hard taxa no single feature is good enough for a high-confidence ID on its own. This is where diversity in the training set becomes key. A more diverse training set forces the CV to start learning the difficult features, which is what you want. 10 pictures each in 2 different taxa will never be enough to force the CV to learn difficult rules. With 1000 pictures from 10,000,000 different observers across 100,000 taxa, the CV is for sure going to have to learn some difficult rules to get to 80-90% accuracy, not just dumb simple ones.

Of course there is no dispute that the credit is to the human IDers who provided the dataset; the CV is just codifying, and perhaps in some cases expanding on, their knowledge.

4 Likes

Please do not forget to credit the observers who provide their photographs (without which there would be nothing to identify) and the curators who manage the taxonomy (without which most identifications would be to tags like “bug” or “bush” or “danger noodle” and thus impossible to export to and merge with GBIF and other archives). Our vision dataset is the product of our whole community working together.

9 Likes

Oh yes, 1000%! Can’t have good IDs without good pictures! Can’t get those super narrow-range endemics into the dataset unless someone goes there! And part of the value of the dataset is its sheer size. Out of curiosity, have you ever estimated how many taxa there are for which iNat has more observations than all herbaria/museum specimens combined?

2 Likes

3 posts were split to a new topic: iNat species numbers vs museum/herbaria

Claims like this may reveal far more about human psychology than about the real capabilities of systems like the CV. Humans have a hair-trigger when it comes to ascribing agency to inanimate objects; the most cursory survey of religious rituals and folklore traditions makes this abundantly clear. A modern manifestation of this is what might be called Cute Robot Mythology™ - to which the CV is clearly not immune.

A simple analogy can illustrate this. Imagine the familiar case of someone losing an earring, and then days later hearing a characteristic rattle inside the hose of their vacuum cleaner whilst they tidy their bedroom. Almost instantly, a human will make all the right inferences, and draw the most likely conclusions about what just happened. Now, would it be correct to claim that the vacuum cleaner (VC) has similar capabilities and somehow “knows” that it’s just found the earring? Does it perhaps possess some inscrutable robotic intuitions about earrings that are unknowable to humans? The VC seems very good at finding certain items that the dumb humans keep losing, so surely there must be something in it? At the very least, it seems very natural to thank the vacuum cleaner in some way: perhaps giving it a pat like a faithful old retriever.

The CV is playing exactly the same role here as the VC. A vacuum cleaner is a crude winnowing device. Humans have deliberately designed it to suck up a limited subset of things that are typically found in a specific range of environments. It has no feature-detection capabilities whatsoever, nor does it have any capacity to develop such capabilities - and it doesn’t need them, because that isn’t what it’s designed to do; they simply have no relevance to its human-assigned role. All we want it to do is reduce a large space of possibilities to a much more manageable one that humans can deal with. Thus, once the earring is in the bag, it becomes much easier for us to find.

In another post, it was suggested we should broaden the net when giving credit for CV identifications. But they only really scratched the surface. The evolution of eyes took hundreds of millions of years; the inference engines in our brains took several million years; and the multidimensional storehouse of human culture took tens of thousands of years. All of that biological and cultural inheritance is brought to bear whenever humans contribute to an identification. The notion that the capabilities of computer programs are in any way comparable is pure mythology (and/or marketing hype). If programs like the CV (or ChatGPT et al) occasionally appear to offer convincing simulations, that’s only because they operate within the very limited domains that are allotted to them by humans. Beyond that, it’s all just wish-fulfillment fantasy.

For almost two decades people used to say that chess engines were amazing at beating humans purely through stalwart defense, but lacked human intuition on offense. Some people still say it, but it hasn’t been true since 2017. Now Stockfish is a terrifying chess god that massively outperforms the best grandmasters in all but a tiny and ever-shrinking set of highly artificial positions deliberately concocted to confuse it.

In the same way, you can of course still identify ways in which systems like CV and ChatGPT underperform. And they aren’t unified into a full general intelligence. But to say that the capabilities aren’t comparable in any way and never will be is not a realistic assessment of the present situation as projected into the near future. In a way, it is underestimating the humans working on improving these systems.

1 Like

Here are links to guides on photographing fungi for ID.

https://fundis.org/get-started/photograph
https://www.inaturalist.org/posts/3531-documenting-mushrooms
https://plantpath.ifas.ufl.edu/misc/media/fungi-submission.pdf
Enjoy!

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.