Research Collaboration: Method for supporting non-experts to label in 'unpopular' taxa

Thanks everyone for all these really helpful suggestions and ideas! I’m very aware of the hubris of machine learning.

Hopefully the post came across as wanting to learn more rather than “I can solve all your problems :D”. Anyway, to reply to some of the issues raised:

how would you recruit non-experts in sufficient numbers to make a difference? - lynnharper

Fair point, though I’m not expecting to “solve everything”. The question of numbers is presumably one that Zooniverse projects also need to address.

Lumbricidae…often requiring a lot of pictures of hard-to-get areas like the underside of the clitellum - zee_z

Thanks for the suggestion! On earthworms, this project seems quite relevant to what you describe. You might want to ask them to contribute their collected and IDed images to iNaturalist?

US and Canada fly group - erikamitchell

This sounds really interesting - can a complete novice still join? It might be good to join anyway, to get more insight into the process. Thanks for the suggestion!

Responding to hanly
Thanks for your list, and for taking the time to help with this. Replying to a few points:

start with…taking class Insecta down to order.

This is a great suggestion and sounds like a really good example: (1) it doesn’t require extra, invisible information (e.g. microscopy); (2) it’s probably quite a well-described problem; (3) there is a clear need.

A lot of the biggest labeling needs are in the regions that lack guides, experts, and machine vision suggestions.

Good point. The approach I’m thinking of will slot into the space between “super simple/already done” and “no guides or experts”.

Starting general is a good way to get someone to feel confident enough to start trying to ID at finer levels like genus and species.

One aspect I’ve not really thought enough about is supporting the new participant’s future interest/learning.

I do think that creating learning modules for more challenging taxa could be very useful…but I wonder how that would scale up without significant expert input for each module

It might be that, for example, zee_z needs to collate a dataset of well-labelled images of earthworms (say 300 of each species) and might be willing to advise on how to move the text/advice from the key for that family into the supporting material on the system. So there would be some expert help involved, but hopefully the return makes it worth their time!

It could be an intellectually interesting exercise on learning, but even for a well-defined 100 species group with no other issues, how long would it take to train a naïve identifier?

The idea is that individuals only learn (at least to start) a tiny task as part of the whole. For example, combining other individuals’ labelling with the output of the computer vision system indicates that a photo shows one of four species; the system’s model of each individual tells it that Emma is good at distinguishing between these (she has done well at this in the past and/or has been ‘trained’ on it). A valid criticism might be that this sounds boring for individuals, but (a) people do far more boring labelling on Zooniverse! :) and (b) one thing that can put people off doing IDs is the feeling that it’s far, far too hard. If they just have to decide whether the wings of a bumblebee are dark or not (to, e.g., separate a Red-tailed Bumblebee from a Red-tailed Cuckoo Bee), they might quite like that the task is within their capabilities.

The idea is that the probabilistic model and the reinforcement learning stage figure out who it is best to show each image to, so the “logic” above somewhat anthropomorphises how the algorithm decides who sees an image.
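To make this slightly more concrete, here is a minimal sketch of the routing step in Python. The Beta-Bernoulli skill model and the epsilon-greedy choice are simplifications of mine for illustration only (the actual probabilistic model and RL stage would be more sophisticated), and all the names and tasks are made up:

```python
import random
from collections import defaultdict

# Per-(user, task) skill estimates, updated from feedback on items
# whose true ID was already known. Uniform Beta(1, 1) prior.
skill = defaultdict(lambda: {"alpha": 1.0, "beta": 1.0})

def expected_accuracy(user, task):
    """Posterior mean of this user's accuracy on this micro-task."""
    s = skill[(user, task)]
    return s["alpha"] / (s["alpha"] + s["beta"])

def update_skill(user, task, correct):
    """Record one piece of feedback (right/wrong on a known item)."""
    skill[(user, task)]["alpha" if correct else "beta"] += 1.0

def route_image(task, users, epsilon=0.1):
    """Choose who should see an image that earlier labels plus the CV
    output have already narrowed to this micro-task (e.g. 'dark vs.
    light wings'). Epsilon-greedy: usually pick the strongest
    identifier, occasionally explore so newcomers build a record."""
    if random.random() < epsilon:
        return random.choice(users)
    return max(users, key=lambda u: expected_accuracy(u, task))

# Emma has done well on this task before, so she usually gets the image.
update_skill("emma", "dark-vs-light wings", correct=True)
update_skill("emma", "dark-vs-light wings", correct=True)
print(route_image("dark-vs-light wings", ["emma", "newcomer"]))  # usually 'emma'
```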

Whether any of this actually works needs a bit more discussion and advice, and eventually testing! But that’s the idea.

I’ll send you a direct message if that’s OK, hanly!

Responding to cthawley

How do you define “non-experts”?

I mean people like myself, who basically have zero capabilities! By “expert” I was thinking of someone who can give roughly the most accurate ID that can be given for a given set. For “non-expert” I’m imagining the typical visitor to Zooniverse, for example!

Can you confirm if labeling = IDing?

Yes. Although a single attempt by a participant doesn’t necessarily lead to a definite label.
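To illustrate what “doesn’t necessarily lead to a definite label” might mean in practice, here is a minimal sketch of skill-weighted aggregation; the weights, the species names, and the 0.95 threshold are all illustrative assumptions of mine, and a real system might use a proper Bayesian model (e.g. Dawid-Skene) instead:

```python
from collections import defaultdict

def aggregate(attempts, threshold=0.95):
    """Combine participants' attempts into one label, or None if the
    evidence is still too thin. `attempts` is a list of
    (label, weight) pairs, where weight is the system's current
    estimate of that participant's accuracy on this task."""
    scores = defaultdict(float)
    for label, weight in attempts:
        scores[label] += weight
    best_label, best_score = max(scores.items(), key=lambda kv: kv[1])
    confidence = best_score / sum(scores.values())
    # Below the threshold the image simply goes back into the queue
    # for more attempts rather than receiving a definite label.
    return best_label if confidence >= threshold else None

print(aggregate([("Bombus lapidarius", 0.9),
                 ("Bombus rupestris", 0.6)]))    # None: still too uncertain
print(aggregate([("Bombus lapidarius", 0.9),
                 ("Bombus lapidarius", 0.8),
                 ("Bombus rupestris", 0.05)]))   # Bombus lapidarius
```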

you’d be looking for taxa that have lots of observations, and for which photo IDs are possible, but these observations go unIDed because of a lack of qualified IDers (which may be due to a lack of general interest though also likely to a lack of availability of quality ID materials/guides).

Really well said. This gets right at whether there is actually a real problem here.

For the data underlying that graph, I don’t think the key issue is necessarily the lack of IDers, but lack of observers - many species are rare or located in areas with few observers, so these are not necessarily “unpopular” in a common usage of the word.

I had imagined that collecting photos of things wasn’t the bottleneck, so this is a really crucial insight! Sorry for my naivety.

they are nearly impossible to ID to species with photo evidence. This might be the case for many fungi for instance - there are a good amount of fungi observations on iNat, but because taxonomy for fungi is so undetermined, and photo ID is difficult, these aren’t really “unpopular”.

This is actually the issue I assumed would be the main one, not the first two.

I wonder how much each of the three reasons above restricts adding lots more Research Grade IDs? I.e. if we imagined that there were 1M photos of every species on Earth (removing the “observations” issue) and that there were unlimited experts, how many taxa/species would then remain unpopulated on iNaturalist? I guess I need to learn about these three problems to get some insight into whether my approach is actually of any use!

From bugbaer
Really useful - thanks. I’m feeling substantial imposter syndrome, given how little knowledge I’ve got. My only efforts have been learning bumblebees so I can take part in the Bumblebee Conservation Trust’s BeeWalk transects, and learning bird song (for my own interest).

Thanks! That’s not my profile – I’ve only IDed about 3 things on my current profile, though…
I’ll send you a direct message, bugbaer, if that’s OK [this reply is too long].

Responding to sedgequeen

Then, they need follow-up! They need to know when they’re right. Are you or those working with you ready to check each of the first few dozen identifications for each of your participants?

The optimistic idea here is that participants are given some info (e.g. from the key) and are asked to ID something we already know the ID of, and then get feedback immediately and automatically. As they improve (assuming they do), the ‘model’ of their ability keeps track of this and can start to judge when they might begin providing useful guesses. The tool should also know where there are ‘gaps’ in its pool of users, so it knows where we need to do more teaching, etc.
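As a sketch of the ‘gaps’ idea, assuming a simple tally of feedback rounds (the thresholds and names below are invented for illustration):

```python
def find_gaps(skill_table, tasks, min_qualified=3,
              acc_threshold=0.8, min_trials=10):
    """Flag micro-tasks where too few participants have a good,
    well-evidenced track record, i.e. where more teaching is needed.
    `skill_table` maps (user, task) -> (n_correct, n_attempts),
    tallied from immediate feedback on items with known IDs."""
    gaps = []
    for task in tasks:
        qualified = sum(
            1
            for (user, t), (correct, attempts) in skill_table.items()
            if t == task
            and attempts >= min_trials
            and correct / attempts >= acc_threshold
        )
        if qualified < min_qualified:
            gaps.append(task)
    return gaps

skill_table = {
    ("emma", "dark-vs-light wings"): (18, 20),
    ("sam", "dark-vs-light wings"): (9, 15),
    ("sam", "clitellum position"): (4, 12),
}
# Both tasks are flagged: neither has three accurate, experienced users yet.
print(find_gaps(skill_table, ["dark-vs-light wings", "clitellum position"]))
```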

In limited experiments this seems to work - but we really don’t know if it’s a good idea in reality!

Thanks again for the help.

Re lothlin’s point

Your example is a good one – something that can probably be taught to complete lay-people quite easily… maybe the computer vision (CV) system can get it down to those two species and just needs to train up a few participants to separate them… once a handful of training examples are labelled, it might be that the re-trained CV system becomes really good at it and we don’t need any more human help with that distinction… etc… but I don’t know if that would work! It’s just the idea!
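Very roughly, the loop I have in mind looks something like this; all four components below are stand-ins I’ve invented, so this is only a sketch of the shape of the idea:

```python
def human_in_the_loop(images, cv_confident, ask_participant, retrain_cv,
                      enough=50):
    """Humans resolve only the distinctions the CV cannot, and their
    labels become CV training data; eventually the re-trained CV may
    handle the distinction without human help."""
    new_labels = []
    for image in images:
        label = cv_confident(image)          # None if the CV is unsure
        if label is None:
            label = ask_participant(image)   # routed micro-task
            new_labels.append((image, label))
        if len(new_labels) >= enough:
            retrain_cv(new_labels)           # may remove the human step
            new_labels = []

# Hypothetical demo: the CV is unsure about everything, one participant
# labels each image, and the CV retrains after every two new labels.
human_in_the_loop(
    images=["img1", "img2", "img3", "img4"],
    cv_confident=lambda img: None,
    ask_participant=lambda img: "Bombus lapidarius",
    retrain_cv=lambda labels: print("retraining on", len(labels), "labels"),
    enough=2,
)
```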

Re sbrobeson

there’s practically no incentive to identify a common and readily identifiable organism like American Tuliptree

Yeah: I guess the idea is to pick an area that has images but not enough experts to help, and see if we can, by structuring things well enough, get non-experts to help label things.

I don’t see much non-experts can do there when the experts don’t even really know yet.

Agreed - I’m assuming that, at a minimum, it needs to be possible in principle to ID the image!

Responding to spiphany

You say you will be using a custom platform for the training, but if the purpose is to train participants to be able to help ID observations on iNat specifically…

Ah, no - my purpose is to see whether, by combining some novel approaches, we can get good-quality ID labels from a ‘crowd’ of novices for problems that individually they probably couldn’t solve.

what is the background of the people currently involved in the project? Machine learning? Or do you have people with training in biology/taxonomy as well?

The main purpose of the post was to reach out to those involved in crowd-sourced labelling for biology/taxonomy. I’ve had a few meetings with colleagues in the biosciences about this; I’m also speaking to people in the School of Education (as that feels like a very relevant angle!).

it is essential to also learn the relevant morphology and scientific terminology, at least to a certain degree. This requires a very different, structured training approach than mere image recognition.

That’s an important insight, and maybe indicates that the idea is a terrible one! Would this be true if someone is just distinguishing between e.g. three species though?

Thanks to everyone! This reply is already too long. Thanks also to schoenitz, t_e_d, tallastro, and others.

I’ll have a go at IDing more things on iNaturalist as a thank you to all of you for your help!

It’s been really useful to get these insights. We’ll discuss this with our collaborators in bioscience. Thanks again!!


This is an interesting graph that provides important info. That said, I would also give much importance to the quality of the identifications. I don’t know whether it is a widespread issue, or what its consequences are for the overall quality of observations, but among the top 100 identifiers there are some users who could be called “top confirmers”: they confirm more or less every identification, right or wrong. Another “issue” is that many identifications are actually “just” confirmations of the first suggestion provided by the computer vision. Again, right or wrong.

This suggests that people are commonly attracted to certain taxa, while others, whether easy to find or not, seem to go unobserved by many.

Another “issue” is that in most cases users upload just one photo for each observation, which can make it unidentifiable. This is likely because they took just one photo, whereas taking more photos of the same organism showing more details (this applies in particular to plants) could encourage other users to provide an identification.

Apart from these considerations, I think your idea is good.
I also think that, among plants, users could give more attention to Poaceae and other Poales. Some of them are easily found in urban habitats and are often abundant. In many cases they are not difficult to identify, but they can be difficult to photograph because they are “thin” and, especially if photographed from above, the photo easily turns out blurry.

Feel free to ask me for a further discussion on this topic.


I could use someone to help me with Cyamidae. They are severely under-observed, so there aren’t many observations of them, but I’m often away from iNat for a while and I like to get them IDed as quickly as possible. I can train anyone interested. The same goes for oaks from Northern California.

Thanks for the suggestion! On earthworms, this project seems quite relevant to what you describe. You might want to ask them to contribute their collected and IDed images to iNaturalist?

It might be that, for example, zee_z needs to collate a dataset of well-labelled images of earthworms (say 300 of each species) and might be willing to advise on how to move the text/advice from the key for that family into the supporting material on the system. So there would be some expert help involved, but hopefully the return makes it worth their time!

Interesting ideas; I’ll look into them!

I will say, my message’s intent wasn’t to advance a lot of the current earthworm observations to correct species-level IDs. While that would still be important, the complexity of earthworm taxonomy and all the details you’d need in an observation to get them to species level make it near impossible. What’s necessary to identify them ranges from full-body measurements and head-to-clitellum measurements to pictures of the ventral and dorsal sides of the clitellum and macro pictures of the animal’s pores.

What I was proposing before is combatting misidentified animals which are stuck in a “loop”. Two great examples are Lithobius forficatus (Brown Centipede) and Lumbricus terrestris (Common Earthworm). Both of these are pretty difficult to ID to species, but when you look at Lithobiomorpha and Lumbricidae on iNat, those two have thousands of observations over the other species. I’m not sure how this started, but I’m assuming it’s still progressing because iNat continually suggests those species, and people assume what they’ve seen must be the most common species in their area. The sheer quantity of “wrongly assumed” IDs makes it impossible for one person to clean up. Having a team of novices who understand the complexity of these IDs and know what’s necessary to differentiate them would make it feasible to sift through observations on a large scale and regress the ones where species-level IDs are impossible.


Hi @bugbaer, would it be possible to share any of the material you use to teach students how to identify insects? I think a lot of folks would benefit from this. Thank you!


I think that the general process you propose for training is very doable and would be a good way to learn to ID.

I’ve done undergraduate student training in the past with iNat by:

  1. Writing up a set of coding criteria

  2. Coding a training dataset (with experts, either me, or me + others)

  3. Checking the coding criteria for any issues that arose when experts coded the training dataset and modifying coding criteria as needed

  4. Training students on the coding criteria

  5. Having the students code the training dataset themselves (without looking at the “expert” labels)

  6. Having students check their work against the training dataset labels

  7. Students discussing any remaining issues with the experts (e.g., where they don’t understand the reason for their coding difference with the training dataset)

  8. Correcting any issues with the training set (the students always find a couple of labels the experts got wrong!)

  9. Giving students entirely new data to work with

  10. Students work in groups of at least two to code new data

  11. Students check their data with each other and resolve any differences that they can

  12. Students bring any remaining “difficult” cases to the experts for final resolution for a “final” dataset

This process seems to work well, but it is quite labor- and time-intensive. Part of this is that students are generally coding lots of different aspects of observations (not an ID, but behavioral or habitat data). Part of it is that the goal is to teach students about the process of data generation: to give them the experience of learning how to do the process, design a way to code data, etc. The end goal isn’t just the data.
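Steps 5-7 are easy to picture as a tiny script; this is only an illustration of the check against the training dataset, with invented observation IDs and behavioral codes:

```python
def review_against_training(expert_labels, student_labels):
    """Compare one student's codes against the expert-coded training
    dataset (step 5), score them (step 6), and collect the
    disagreements to bring to the experts (step 7)."""
    disagreements = []
    n_correct = 0
    for obs_id, expert in expert_labels.items():
        student = student_labels.get(obs_id)
        if student == expert:
            n_correct += 1
        else:
            disagreements.append((obs_id, expert, student))
    return n_correct / len(expert_labels), disagreements

expert = {101: "foraging", 102: "basking", 103: "foraging"}
student = {101: "foraging", 102: "foraging", 103: "foraging"}
accuracy, issues = review_against_training(expert, student)
print(accuracy)  # 0.666...
print(issues)    # [(102, 'basking', 'foraging')]: discuss with the experts
```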

I think that students would benefit a ton from the immediate feedback and that this would really help speed learning, but it isn’t possible to have the “experts” on call at all times, so I think the potential for a training program in the general vein that you propose is high.


In this case, “the material” was literally a PowerPoint slide with pictures and distinguishing characters for the 5 most common orders we were finding in the samples. The rest of the teaching was done by reinforcement/correction as the students started looking at samples.

If people really want that, I’m happy to share it, but there are plenty of online resources that show the differences among insect orders.

It might be simpler to list experts willing to be contacted to identify observations. For example, I posted several lichen observations in one day that weren’t identified. On the page was a list of people who were top identifiers for lichens. The list could be of experts willing to be contacted to check my observations. Ideally, I could pick a person familiar with my area or country.


In context, I took it to mean people who do not feel qualified. The ones who – as the OP said – “don’t know where to even begin.”

I don’t know if I find the terminology here helpful. As I see it, anybody can identify anything, given sufficient resources and some experience.
The problem with iNaturalist is that for many groups “sufficient resources” firstly requires more than a simple photograph, although personally I believe that with the huge data resource of photographs, more taxonomists will focus on visible characters, and some currently intractable groups may become identifiable in the future (the classic is a plant taxonomist with a microscope starting keys with ovules and placentation, when there are many other useful features that don’t require dissection and are easily visible, even if not quite as efficient at distinguishing taxa).

But basically, there are regions and taxa for which almost all species are known, described, and in field guides. Technically, there should be no barrier to getting these identified on iNaturalist beyond the sheer volume of observations; one just needs more people helping with identifications, a little bit of training, and some practice.
The problem is those taxa where, to make an ID, one really needs access to herbarium and museum specimens, and where monographs and revisions are highly technical, often outdated, and lodged in obscure libraries.
Most field guides in species-rich areas only scratch the surface, covering between 1 in 5 and 1 in 100 of the species actually described. These tend to be either the most common or the most obvious species. Anyone using only these resources will misidentify rarer and obscurer species as common species, and anyone verifying these observations is likely to come to exactly the same incorrect conclusion. Even the AI CV makes exactly the same mistake: one can only identify what one knows about; “unknown taxa do not exist”. Unfortunately, “unknown” includes poorly known and poorly resourced taxa and areas, irrespective of the validity or comprehensiveness of the latest literature. Labelling these as “unpopular” is simply misleading and wrong.
The earthworm example above is great. Almost all our earthworms in Cape Town are identified as either the “Common” Earthworm (which might be present as an invasive, but is not a recorded invasive on our checklists of the almost 30 invasive earthworm species in southern Africa) or one of the cultivated compost worms: https://www.inaturalist.org/observations?place_id=123155&taxon_id=333586&verifiable=any&view=species.
However, we supposedly have 11 earthworms endemic to the Cape Peninsula (https://www.inaturalist.org/check_lists/4377790-Peninsula-Endemic-Invertebrates), but finding out where these occur is nigh impossible, and the keys and diagnostic illustrations are totally inaccessible. We don’t even have a clue as to our more widespread indigenous species, or which species are likely to be most common, let alone how to identify them. It is highly unlikely that any identification of our earthworms will be possible until a specialist revises the group and publishes guides, keys, and distributions. That is not likely to happen in the foreseeable future.
Unfortunately, the same is true for much of the tropics and many of the species-rich areas of the world. It is not a matter of finding the resources or specialists to assist with identification: neither exists, and identification of observations will remain a pipe dream until specialists (experts) are trained and paid to tackle these groups.
But the observations will roll in. And without any guidance as to which features are critical for finer identifications, these contributions may well be of dubious value…


You can “duh” if you wish, but I think you have missed my point. For sure, correct IDs don’t need correction, but how are the non-experts going to decide which of the computer IDs are correct?

I don’t think so. I’m pretty sure we agree on the proper function of #6. And I think the original also has the same intent. The language of the rule is just a bit ambiguous. Not that I’m entirely sure how to wordsmith it.

One class of errors that non-experts can address is egregious machine vision errors. Some of these can be handled with correct higher-level IDs, which makes them a good goal for non-experts to tackle.

I’ve done a few of these myself. I don’t know much about millipedes or earthworms, but I do know that earthworms don’t have legs, so pushing the ID up to Millipedes is a step in the right direction.


Having gone back and reread the initial post, I see an important gap that needs to be addressed. As has been stated on the forums several times, the number of us using the forums is but a small fraction of the number of iNaturalist users. So, any effort to recruit additional identifiers is going to have to reach users who are not on these forums; and the difficulty is that they may not be aware of projects or journals, either. What can we think of as suitable outreach steps?


Sorry for not being clear. I was writing fast and wasn’t trying to make an exhaustive or authoritative list of rules. Just dumping some thoughts.

I was thinking along the lines of the last comment by @tallastro: there are plenty of bad CV identifications that could be corrected, even if just to a higher level rather than a new species-level ID. So it’s more about being able to identify very bad IDs.

I don’t see an issue with someone using the CV for all of their observations, even if they don’t know whether the suggestions are correct. It’s probably the most efficient way to get to a good community ID for 99% of observations. The only drawback is that if it is really wrong, the observation can get stuck at some obscure taxonomic level that no one is searching to identify.

I disagree about this being the most efficient way to get a good community ID.

The CV is … pretty bad at many hymenopteran groups. A lot of the time it doesn’t even suggest the right family, much less the right genus.

I don’t have a problem with people (judiciously) using the CV to get an idea of possibilities and narrow down the ID. But for taxa where the CV is unreliable, I would strongly encourage people to think about the suggestions and choose a higher level ID if they don’t know enough to assess what might be correct.

Users uncritically accepting the first or second ID suggested by the CV creates a lot of extra work for IDers. It is typically easier to refine a broad ID than to fix a wrong one. Incorrect IDs also have a tendency to be self-reinforcing (one observation with a particular ID makes that ID seem more plausible for the next person looking for an ID, and so on).


I have wondered whether the apps are helping or not by providing suggestions when there’s no confident match in the computer vision. I’ve been observing mosses, fungi, and lichens recently - organisms that I’m not as confident IDing myself - and I tend to put the CV ID down only if it says “We’re pretty sure”. In cases where it provides some suggestions but doesn’t say it is sure, I will look through them and learn about the possibilities and options, but usually leave the ID at a higher level. However, the app doesn’t actually make that very easy. Choosing a suggested species-level ID is easy, but if I want to be cautious and choose a higher-level ID, I usually have to type the intended higher-level ID in manually. That is sometimes not easy for people to do, because it requires knowledge of the taxonomy to know which higher-level IDs are reasonable to enter. The app could help by offering some reasonable higher-level IDs.


That’s a good point. For most of the things I observe, I have at least a general idea what I saw (bug/beetle/spider etc.), but there are exceptions, and putting the same broad undifferentiated label on everything (“moss”) when you know you saw multiple different species certainly isn’t very satisfying for the observer. Nor is using iconic taxa such as “insects” the most effective way to get observations seen by experts.

I will often compare the taxonomic tree of the suggestions to see if they all share some common higher taxon, but this isn’t particularly convenient to check either on the app or the website.

I do think that the CV model would be more useful if it would incorporate more suggestions for mid-level taxa, rather than focusing primarily on genus/species. (Sometimes it will suggest a tribe or family or whatever, but I have never understood exactly why the algorithm decides to do this in some cases and not in others.)
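To make the “common higher taxon” check concrete, here is a minimal sketch; it assumes each suggestion comes with its lineage ordered from root to tip (iNat taxa do carry their ancestry, though the representation here is invented):

```python
def shared_ancestor(suggestions):
    """Given each CV suggestion as its taxonomic lineage ordered
    root-to-tip, return the deepest taxon they all share -- the kind
    of cautious higher-level ID discussed above."""
    common = None
    for lineage in suggestions:
        if common is None:
            common = list(lineage)
        else:
            # Truncate at the first level where the lineages diverge.
            n = 0
            for a, b in zip(common, lineage):
                if a != b:
                    break
                n += 1
            common = common[:n]
    return common[-1] if common else None

print(shared_ancestor([
    ["Insecta", "Hymenoptera", "Apidae", "Bombus"],
    ["Insecta", "Hymenoptera", "Apidae", "Apis"],
    ["Insecta", "Hymenoptera", "Vespidae", "Vespula"],
]))  # -> 'Hymenoptera'
```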


For flowering plants, in general it is possible to create annotated photo guides or training materials that allow a relative novice to distinguish a small group of similar species (assuming this is possible at all from photos). A guide of this sort can avoid specialist terminology, but given the work involved it tends to make more sense to explain the terminology instead:

“Each group of flowers (rhipidium) is enclosed within a pair of specialized bracts (called spathes). In Species unum the spathes (marked with arrows) are about the same length (subequal), whereas in Species duo the outer spathe (red arrow) is about twice the length of the inner spathe (white arrow). Other differences include…”

It seems to me that creating this training material may be the big challenge. Most existing material does assume extensive familiarity with the scientific terminology. It’s also quite rare that there is one publication that describes and contrasts all the similar species (e.g. a recent monograph for the genus, or even a key to the species in a particular area). It’s great when people take on the work of synthesizing the existing literature to create guides (e.g. the fantastic Fly Guide).

That leaves me with two questions:

  1. Am I right to think that the “auto-managed bite-sized ID” approach that @msmith proposes will require the same investment of time to create training materials as is currently needed to enable non-expert iNaturalist users to provide high-quality IDs? Basically, do both approaches need an expert to create some type of photo guide first of all?
  2. Assuming a photo guide is available, are there reasons to think that @msmith’s proposed approach could be more productive than the current iNat interface?

My hunch is that people using either approach will need the same type of training material, although for the bite-sized approach it could perhaps have a narrower scope, as the goal is to distinguish a smaller number of species. Insofar as the work to create these photo guides is a limiting factor, it may not help much to adjust the identification process.

What I do think has big potential to improve the volume and quality of identifications is the integration of photo guide material into the identification interface. This is something that iNat users have suggested for a long time, but (understandably) it seems always to have been too complex to implement. It does seem that @msmith plans to integrate guide materials within the identification process, and I feel that has real potential to increase ID volume and quality.

Lastly… I would prefer that the project doesn’t create a separate large database of (semi-)identified observations. I understand that it may need to be a separate system to allow the researchers to test various ID workflows, but I’m concerned that the project could generate tens of thousands of IDs that will forever remain unconnected to any iNat observations. If I were a participant, I’d rather my IDs were contributed back to iNat in some way. Could the project manage the ID process outside iNat, but then let users choose to export their IDs to the production iNat system (via API)?
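As a rough sketch of what that opt-in export might look like (the endpoint and payload shape reflect my reading of the iNat API v1 docs and would need verifying; the token and IDs are placeholders):

```python
import requests

API = "https://api.inaturalist.org/v1"

def export_identification(token, observation_id, taxon_id, note=""):
    """Push one externally-made ID back to the production iNat system.
    Sketch only: the endpoint and payload follow my reading of the
    iNat API v1 docs and should be double-checked. `token` is the
    participant's own API token, so the ID is attributed to them."""
    resp = requests.post(
        f"{API}/identifications",
        json={"identification": {
            "observation_id": observation_id,
            "taxon_id": taxon_id,
            "body": note,
        }},
        # Auth header shape per the docs; some setups expect "Bearer <token>".
        headers={"Authorization": token},
    )
    resp.raise_for_status()
    return resp.json()

# Hypothetical usage: IDs agreed in the external system, exported on the
# participant's behalf only after they opt in (placeholder IDs below).
# export_identification(user_token, observation_id=12345, taxon_id=67890)
```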


In my experience, the CV is very good at identifying conspicuous, distinct, frequently observed species, particularly North American species. It really does get a lot of common wildflowers right. It even gets some distinctive grasses right. In other words, it gets right many of the species people are likely to try to ID. I’m impressed by how many of the species in my area it gets right. Overall, I think that using the CV suggestions is a good try if you don’t know.

That said, for small, inconspicuous species, especially those in Third World countries, and for those in complexes of very similar species, the CV is poor, sometimes astonishingly bad.


OK, so what do you envision as the application for this training? I.e., once your novices have been trained, what data sets would they then be using their new skills to label? Who and/or what sorts of projects would benefit from having access to this crowdsourcing potential?

Since you say it wouldn’t necessarily be intended for iNat users specifically, would the purpose be something like that sketched by @cthawley above – as a streamlined way for scientists to train research assistants for a particular project? The idea being that iNat’s body of verified observations would serve as the basis for training, and once the training model has been developed, it could be supplied with a new photo set and adapted with little work to a different group of organisms?

I think in most cases this would require some initial input from the scientists about identification traits rather than expecting users to intuitively figure out the differences.

As an example, take oil beetles (Meloe). There’s a good, concise overview of British species and their differences here: http://johnwalters.co.uk/research/oil-beetles.php

These are large, distinctive beetles which tend to be somewhat “underlabelled” compared to many other beetle groups. There are only a handful of species in the UK and continental Europe and I suspect, given proper guidance and feedback about identification traits and a suitable set of reliably verified photos, it would be reasonably easy to train a novice to identify the local species.

However, if merely given a set of photos without being told what to look for, I suspect most people would end up frustrated and confused, because they would try to use an obvious trait, like color, to distinguish M. violacea and M. proscarabaeus and would find it difficult to understand why this doesn’t reliably work. Or they might think that the differences in the shape of the antennae mean different species, when in fact this is a sex-based trait.

Meloe also undergo substantial changes in their appearance during the course of their adult life – they expand to at least twice their original length through eating. A novice who has learned to recognize them in their engorged state would (quite understandably) be likely to assume that a freshly emerged adult is a completely different organism altogether. Without an explanation about why the appearance is different, they may resist accepting what the computer is telling them (“the computer must be wrong”).

These factors aren’t unique to Meloe – lots of organisms have more than one form (subadult/non-breeding/breeding plumage in birds, sexual dimorphism, etc.).

Another issue that complicates identification is that the required traits aren’t necessarily always visible in photos because the person taking the photos has to know what to photograph. A lot of the observations on iNat are less than ideal for this purpose, and I suspect that a lot of the observations that are identified may not be correctly identified (see: lack of IDers). So any iNat photos used for training people would probably need to be verified first.

I am not suggesting that the idea won’t work, but these are some factors I see that are important to consider from the outset.
