What is evidence?

hmm. well, i don’t see how getting a third opinion by a ‘random person’ is worse than just having the one id by a different ‘random person’. And to me if checklists and quizzes were added when trying to do IDs, i’d probably do a lot fewer IDs. That sounds annoying. Maybe the occasional informational popup “did you know Toxicodendron rydbergii also occurs in this area? It is difficult to distinguish from Toxicodendron radicans if you only have leaves” type thing with a ‘don’t show me this again’ checkbox. For observers as well as IDers.

The bottom line is, there are some taxa with high error rates, but overall the error rate of RG is pretty low at least in the areas and taxa i look at. The only real way to help now is getting more people to look. Maybe the system could also show people things they have had success with. In the long term i know the inat devs are looking at an intrinsic reputation system to weight IDs and that might help a lot with this too.


the difference is that in the latter situation, that rando was going to make the ID anyway. i wouldn’t want to invite a rando to make an id on a research grade, potentially messing up the community ID. (based on what i see, once there’s a consensus, it’s usually only people who have more experience who continue to add IDs.)

anyway, i’ve strayed too far from the original topic at this point. apologies to the discussion originator.


similar to how others often use resources like google street view, i’ll reference other observations with better photos as evidence for observations with poor-quality photos, if i can logically tie them together. for example, this observation of a pileated woodpecker (https://www.inaturalist.org/observations/21242065) with a grainy photo taken on a phone from a long distance probably was never going to achieve a research grade on its own, but i bumped it up to research grade based on another observation that was made roughly around the same time in the same location.


I’m pretty conservative about what counts as “evidence” personally - to me, unless it is verifiable by an outsider looking at the post, it shouldn’t be counted.

I’ve been on field trips with highly-educated expert botanists who mis-identified a common plant in a moment of inattention - it happens to the best of us. However, if they were to post that and people just agreed based on their reputation, that would be a significant error in the data.

And illustrations, unless the artist is a trained botanical illustrator, will be iffy at best. They may miss key features, be unconsciously biased, or simply lack the skill to record it in any identifiable manner.


i think we’re saying the same thing here at the end of the day. the algorithm that decides when to do a “traffic stop” would definitely need to be tuned properly to avoid disincentivizing IDs, and how much work you ask an identifier to do during a “traffic stop” – whether just clicking ok to an extra pop-up or something more challenging – would definitely need to be tuned as well to avoid disincentivizing IDs.


This is an interesting thread. I just wanted to push back against the view that observations without evidence should be discouraged from iNat because they are unlikely to be of use for research. I disagree, completely, and I say that with my university researcher hat on.

To do research with biodiversity data, it is ideal to estimate the probability that an observation ID is correct. With museum and herbarium collection data, it is usually assumed that mis-identifications are unlikely and IDs can be assumed to be correct. That’s not the case with iNat data, whether or not an observation is “research grade”. (We probably shouldn’t be ignoring the mis-identification probability in collections either.)

The probability of an iNat ID being correct can be estimated using a combination of the proportion of times particular observers and particular identifiers have misidentified a particular taxon. Some taxa are also inherently easier to ID than others, which can be quantified with iNat data.

If there’s a “research grade” observation with a photo, it could still have the wrong ID. That probability could be calculated from the frequency with which this species has been misidentified in other iNat observations, and the track record of the observer and identifiers at identifying this species.

If there’s no photo with an observation, the ID can still have a high probability of being correct. That’s the case when the observer has an excellent track record identifying that species on iNat, especially when that species is infrequently mis-identified generally. These probabilities can be estimated with iNat data.

In other words, for both “research grade” observations and observations without evidence, the probability that the ID is correct can be estimated using iNat data. In both cases, observations with a higher probability of being correctly identified are more useful.
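To make this concrete, the kind of estimate described above can be sketched as a toy calculation. Everything here is an illustrative assumption: the smoothing prior, the equal weighting of the two signals, and the sample numbers are all invented for the sketch, and iNat publishes no such formula.

```python
# Toy sketch: estimate the probability an ID is correct from
# (a) how often the taxon is correctly identified site-wide and
# (b) the observer's own track record with that taxon.
# All parameters are illustrative assumptions, not an iNat algorithm.

def id_correct_probability(taxon_correct, taxon_total,
                           observer_correct, observer_total,
                           prior_correct=0.9, prior_weight=5):
    """Smoothed estimate blending two track records."""
    # Smoothed per-taxon accuracy (how hard the taxon is in general).
    taxon_rate = (taxon_correct + prior_correct * prior_weight) / \
                 (taxon_total + prior_weight)
    # Smoothed per-observer accuracy on this taxon.
    obs_rate = (observer_correct + prior_correct * prior_weight) / \
               (observer_total + prior_weight)
    # Simple average of the two signals; a real model might weight by
    # sample size or use a hierarchical estimate instead.
    return (taxon_rate + obs_rate) / 2

# An observer who has been right 48 of 50 times on a taxon that is
# misidentified fairly often site-wide:
p = id_correct_probability(taxon_correct=700, taxon_total=1000,
                           observer_correct=48, observer_total=50)
print(round(p, 2))  # roughly 0.83
```

The point of the sketch is only that both inputs are measurable from iNat data, so a photoless record from a proven observer can score higher than a photographed record of a frequently confused taxon.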

We shouldn’t be thinking of each iNat observation sitting alone and independent. For research, it’s just as important to know the context of the demonstrated accuracy of the observers and identifiers.

My view is that we should be encouraging everyone to get outside and make lots of observations of wild species to document the natural world and how it’s changing. That’s with and without photos.

If a particular species is “your thing”, and you have demonstrated on iNat that you are very good at identifying that species, it is counterproductive for us to say that iNat only wants your observations with good photos. I’m worried by the thought that we encourage these users to keep all of their other observations on a spreadsheet on their computer because we tell them that they’re of no use to others. That would be a massive lost opportunity to better document nature.

(If our objection to observations without evidence is just that they clutter the site and get in the way, then that’s a UI issue, not a justification to discourage these observations. iNat is already very good at keeping these observations out of view from default searches.)


I agree! Well said!


Not just well said, but a very good explanation of the “wisdom of the crowd” aspect at play with iNat. I’ve always considered that even if the data isn’t 100% accurate, the general picture that the data creates is very useful, if only in identifying areas worth more rigorous scrutiny. I think because we come from a position of dealing with the data on a piece-by-piece basis, we forget what statistical analysis can do with “big data”.


i do add a fair bit of photoless data, for instance i am slowly adding some old plot data for surveys i did years ago in the Santa Monica Mountains. I figure people can judge for themselves because there’s plenty to go by in terms of my other observations and comments on the site… and anyhow the data has already been used, it was just never put in GBIF. I have other side projects and things too. Data is data; building the community and connecting with nature is important, but so is data. We all know we are in a horrible biodiversity crisis and data is one of the most important things that may allow us to mitigate it.


(Apparently I’ve had a reply in draft mode for most of a month - yikes! I’ll preface that the following comes from the perspective of entomology, where sometimes the number of species in a genus exceeds the number of total species an individual may know about.)

I think the main question is whether the evidence presented represents a legitimate observation by the submitter, not just what types of evidence are permissible. There’s also the question of how specific an ID the evidence can actually support. For consideration:

Does a drawing of an arthropod with 8 legs, wings, and apparent feathers represent a species record? Almost certainly not. It directly contradicts any existing species.

Does a drawing that looks like an amoeba represent an observation of an animal? I’d say probably not. For species records, this really doesn’t provide support for having seen anything, even though it may have been intended to represent an observation.

Does a drawing with at least a vague semblance to a described organism represent a species record? I’d say probably, though it may be a fair point to ask if this represents a sketch of a wild-observed species as opposed to more general artwork. It may not necessarily support a species-level ID, though.

As someone quite familiar with paper wasps (incidentally, one of the taxa with rather high rates of error), does my description of a wasp represent a valid species record? I’d agree with Charlie and the other Jon that it probably should be considered as such. Heck, I’m often asking questions about traits to supplement observations, and it’s that description that validates a species. I have at least one observation where I managed to see the wasp up close rather nicely (to the point of a usable description) but wasn’t able to get a decent photo as I had to chase after it when it flew away. Now, at the same time, I do often get a bit perturbed when scientific authors make claims of observing some particular species with no form of (adequate) description or photography (how was it separated from cryptic species B?), so I think there should be some weight to validate that particular species beyond reputation.

Would DNA sampling info validate an observation? If the DNA came from an observed, wild specimen, I would probably say that this is the single best evidence possible, provided there’s decent barcoding for its group. This goes down to what actually defines the physical differences we may observe between species.

Does an observation lacking anything but species ID, location, and time represent a species record? I think this really has to depend on the identifier’s knowledge of the species in addition to the range (whether the species is already known to occur in that area). I’ll offer a counterpoint to Jon’s thoughts in noting instances of professional misidentification of museum specimens.

I think there’s also something to be said about quality of evidence. If an observation’s photo (and other evidence and observer background, for that matter) lacks sufficient detail such that it can’t be proven right or wrong, then there may be a need to go case-by-case as to whether it’s appropriate evidence. Cryptic species immediately come to mind (considering Jon’s notes on the probability of a particular ID to be correct, these are groups with very low rates of accuracy where the user base tends to be unaware of all but a single species). Then, of course, there are species that are painfully difficult to misidentify.


I used to be on the records committee for the Iowa Ornithologists’ Union. We evaluated observations of rare birds. A problem: one guy knew his birds so well he could ALWAYS write a convincing description. But one day when I was with him, I swear he didn’t actually see species A, which he described well.

I think iNaturalist shouldn’t try to be all things. Stick to evidence we can evaluate directly – photos, recordings, sonograms. Let other organizations deal with written descriptions.


A post was split to a new topic: Encouraging a sense of scale in photos

I came across this conversation and I’m curious what people think: https://www.inaturalist.org/observations/35150745
Apparently the virus is necessarily present, but there’s no direct evidence of it. In my observation the lady beetle would not be behaving the way it was without the activity of the virus, so I think that’s direct evidence of the virus. But if every observation of the wasp species is duplicated for the virus, is that even useful information?

I’m guessing there are a number of species of bacteria and viruses that are present in every human on the planet. Theoretically you could make a duplicate observation for Escherichia coli for every mammal and bird observation. Is this different from that in any way? Would that be a valid observation?


If the subject organism (a virus in this case) is causing some diagnostic behavior, change in appearance, etc. in another organism and that is exhibited in the photo, then I think it’s okay. However, I think the evidence would be best classified as “sign” and not the actual organism.

I’ve seen photos of dead prairie dogs that reportedly died due to sylvatic plague from Yersinia pestis. Possibly that might work as an iNat record of Yersinia, although the evidence would have to be supported by lab testing of the carcass to confirm cause of death. But in reality there’s no way for an independent reviewer to see something in that photo that says Yersinia.

I don’t think photographing a multicellular organism that happens to carry some bacterium or virus (but doesn’t show evidence of carrying it in the photo) would count as an observation of that microorganism.


Nice to see this discussion, especially coming in light of my own upcoming project to “encourage” people to post old observations.

So apart from detailed field notes / sketches with descriptions etc what do people think about

  1. ebird checklists used as the reference for a bird observation
  2. Hand written checklists made on pre ebird trips (again for a single organism)

As an ebird reviewer I know that many “species” are automatically accepted due to filters setup by experts, on the other hand many species are not accepted / not confirmed for the same reason. While some don’t even appear in the public domain (marked not for public).

But ebird is largely based on honesty tempered with “reviewers” cross checking both species level regional filters, and a user trust system based either on history and / or personal or professional interactions with the observer.

Personally, I think I would not use an ebird checklist reference to make a bird observation in inaturalist but do think it may be a feasible idea.

Very good point and a very relevant one in the world of birding


to add more reasons why i would, personally, not use ebird to make an observation on inaturalist:

  1. ebird records are adequate for me - to know that the bird has been recorded for posterity and it need not be duplicated on inaturalist just for the sake of species building etc
  2. I have a camera and sound recorder so would ideally want to share something interesting

To re-iterate I would not be averse to “helping” others identify a species by ebird checklist and description if they would like to do so.

Based on this, it is probably a reasonable assumption that there is some additional number of eBird checklist taxa that were also wrong, but happened to pass the expert filters and so were not caught. That seems like an additional good reason not to use an eBird checklist as the sole evidence for an iNat observation.

That is true,
some of them do not meet the standards set and are marked “not for public” and this data does not make it into the public domain.

Usually this is people who are either new to ebirding and start putting their entire life list on one checklist, or the usual group of people for whom showing off what they have seen, even by fakery, is the norm.

My point was not the entire checklist but certain species that are “rare / unusual” etc - these normally get flagged by the filters and mostly are then vetted for reliability and accuracy.

While ebird is a massive platform, by and large the reviewers work at regional levels and in turn know many people through various interactions, or can see their history of birding.

Of course the ebird checklists are not based on “documentary” evidence - it would be fairly impossible for ebird to function if that were the case. In fact birding would collapse if digital evidence for each bird was sought (not to mention a lot of bird research as well).


Nothing would collapse really; having a recorder with you working all the time would be enough to get all the sounds you heard, and a camera could capture everything else, silent but seen. I see tons of checklists with very doubtful birds seen, but there’s no photo, so it can’t be proved or disproved, or there is a photo and it’s of another species, or it’s an American species seen in the middle of Siberia with no photo and the description “common”, so it’s pretty safe to assume a big part of checklists have wrong data and shouldn’t be used at all. And we see the same on iNat; user ids are far from being right all the time.
