What is evidence?

Photos and sound files of the organism itself, or sign of that organism, like tracks/scat/nests/galls, are considered evidence on iNaturalist. What about other things, like:

  • an illustration
  • an attached description or comment by the observer describing a sound or feature not depicted in the photo
  • spectrogram of a bat call
  • a link to or observation field with DNA sampling info
  • ?

Whether evidence is provided determines whether the observation will be Casual grade or Needs ID/Research Grade. Some kinds, like illustrations, are formally considered evidence in the current site guidelines, as long as they depict the actual organism. Others may be up to interpretation…

10 Likes

Personally I count illustrations (obviously, since I occasionally add them), descriptions with features not depicted (though without media there's no real way to make them RG), and also the spectrogram (if I had a clue how to ID one). I think there are grey areas; for instance, posting a picture of a forest and saying 'I heard a chickadee' would be pretty iffy for getting research grade for a chickadee (though if in the northeastern US, there is probably no forest without one anyway).

I personally don't count DNA, because I think there'd need to be a mechanism beyond iNat crowdsourcing to ID those. I could definitely see a point some day where DNA is formally accepted on iNat and tracked in parallel, but there is just so much we don't know, and so much the average user doesn't know, that I personally wouldn't add RG based on DNA yet. Maybe some day.

Others may have other guidelines… this is a good discussion to have. Thanks for starting this thread!

3 Likes

I would come down strongly on the side of being quite conservative as to what counts as evidence. As a general rule, we should presume honesty when considering evidence, but we should be very cautious about making any presumptions as to human error or the lack thereof.

So:

  • spectrogram of a bat call

is fine, no different from an audio recording.
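(A spectrogram is just a deterministic rendering of the recording, which is part of why it seems equivalent to the audio itself as evidence. For anyone curious, a minimal sketch of producing one in Python, assuming scipy and matplotlib are installed and "bat_call.wav" is a hypothetical recording:)

```python
# Minimal sketch: render a spectrogram from a recording.
# "bat_call.wav" is a hypothetical file name used for illustration.
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

rate, samples = wavfile.read("bat_call.wav")   # sample rate (Hz) and raw samples
if samples.ndim > 1:                           # mix stereo down to mono if needed
    samples = samples.mean(axis=1)

freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)

plt.pcolormesh(times, freqs / 1000, power, shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (kHz)")
plt.title("Spectrogram of bat_call.wav")
plt.savefig("bat_call_spectrogram.png")
```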

  • a link to or observation field with DNA sampling info

is probably fine, provided we consider the possibility that the association between the observation and the DNA is mistaken (i.e. how likely is it that contamination or human error means the DNA tested is not actually from the specimen in question).

  • an illustration

can maybe be fine. But human fallibility needs to be accounted for very carefully. Presume honesty, but don't presume that illustrations are accurate in every detail. It's very easy to draw what you think you saw, or what you think you should have seen, rather than what you actually saw. Illustrations should not really be used for difficult identifications, especially where shape/structure features are important.

  • an attached description or comment by the observer describing a sound or feature not depicted in the photo

We need to be very careful here. After all, if something isn't identifiable from the photo, but we give research grade from the description, how is that any different from giving research grade to an excellent description with no photo at all? I don't think anyone here would advocate for that?

Again, presuming honesty seems reasonable. But don't presume that someone isn't fallible. If something is very straightforward and objective (e.g. "wintergreen scent to twigs", "measured diameter at breast height of 72 cm"), that seems reasonable to use as evidence. But typically things are less so (e.g. "identified by call", "appeared to be about a foot tall"). And at that point I don't think it's really evidence; it's just affirming that somebody else's identification process is probably right.

8 Likes

Hmm… I wouldn't be so sure. If combined with some sort of implicit reputation system or something like that, I might advocate for that? Not strongly, but… I don't see it as a huge problem. Lots of us get paid to do just that.

In terms of illustrations, I think a nice approach if someone is posting ones that don't seem sufficient for ID… would be to say something like 'Wow, it is really neat that you are doing these botanical sketches! (or whatever) Can you add a photo of the plant along with the sketch?' That is something I try to do now if I add a sketch, too. And most people have their phone with them most of the time anyway.

1 Like

Just to throw the cat among the pigeons, if someone has correctly identified hundreds of house sparrow observations on iNat (first ID on their or others' observations), and never misidentified a house sparrow, I'd regard that as good evidence that they were right if they added an observation of a house sparrow with no photo or illustration or audio or description.

I'd certainly regard that observation as "research grade" in the sense that it would be worth including in a researcher's dataset. I'm sure some researchers are already doing this kind of processing with iNat data. Some kind of in-built reputation system on iNat could help to encourage this.

After all, if we want to map all the world's biodiversity and how it's changing, we shouldn't be requiring/encouraging people to photograph/record every observation they make of species they have demonstrated they can identify with certainty.
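(To make the reputation idea concrete, here is a purely hypothetical sketch of a per-user, per-taxon accuracy tally. Nothing like this exists on iNat; the function names and thresholds are invented for illustration only.)

```python
# Hypothetical sketch of a per-user, per-taxon reputation tally.
# Nothing like this exists on iNat today; names and thresholds are invented.
from collections import defaultdict

def tally_ids(id_history):
    """id_history: iterable of (user, taxon, was_correct) tuples, where
    was_correct means the ID matched the eventual community taxon."""
    counts = defaultdict(lambda: [0, 0])   # (user, taxon) -> [correct, total]
    for user, taxon, was_correct in id_history:
        counts[(user, taxon)][1] += 1
        if was_correct:
            counts[(user, taxon)][0] += 1
    return counts

def trusted(counts, user, taxon, min_ids=100, min_accuracy=0.99):
    """Would we treat this user's evidence-free record of this taxon as usable?"""
    correct, total = counts.get((user, taxon), (0, 0))
    return total >= min_ids and correct / total >= min_accuracy
```

Under a tally like that, the house sparrow example above would clear any reasonable bar: hundreds of correct first IDs and no misses.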

9 Likes

For context: https://forum.inaturalist.org/t/is-there-an-observation-field-in-use-to-flag-observations-with-drawings-sketches/464/3

My main concern is about consistency: 1) in messaging, 2) in policy/protocol, 3) in enforcement/application.

As evidenced (pun partially intended) by the many embarrassing questions I ask here, I am someone who has been on the site fewer than two years. I am trying to be helpful and a valuable community member as I learn more, but I struggle with not being a professionally trained research scientist (hard science; social doesn't count here, I think) nor a computer-savvy person who can understand half of what you very experienced folks discuss here. I imagine that if I'm having these questions, other people might be as well, and asking them will possibly benefit the community even if I annoy folks along the way.

Evidence is a driver of everything on iNat, as there would be no data without some evidence, however it may be defined. I think, then, that all efforts should be made to engage an audience and active user base, in keeping with the site's stated mission. That includes removing or clarifying conflicting information (or making it really clear that there is a lot of subjectivity in the appearance of conflicting information), and a clearer path to becoming a more integral part of the community (which can be defined more broadly elsewhere), in a way that perhaps minimizes some of what I see as mistakes that could be mitigated with the aforementioned improved messaging. E.g., I feel awful when I learn that something I thought was helpful may instead have been disseminating incorrect information with unknown consequences.

Yes, essentially. I've been told that because of a blurry photo (regardless of evidence that I can make an ID, provision of supporting information, field marks, etc.) something wasn't going to be able to be IDed past a certain point. If I could instead have drawn a picture of what I saw and that could become research grade, then that is incredibly confusing. I am neither in favor of nor opposed to using illustrations; I just think that consistency is something that would keep me coming back, and probably others enjoy it too.

I'll end by saying that my overall involvement and comments here and elsewhere are driven primarily by an overwhelming horror at what has happened to global biodiversity, by a desire to proactively contribute to the work of others through my interaction with the natural world and data collection, to protect what we have left and mitigate as much future disaster as possible, and to do so with a sense of urgency I feel is generally lacking among our species. Thanks.

3 Likes

Consistency definitely is a good thing! More standardized protocols are a good thing! I think many of us driven to collect mass amounts of data share some common brain characteristics, such that having things laid out clearly and standardized reduces issues. Or maybe that's all humans.

I'm here for much the same reasons as you, which is part of why I place so much emphasis on data, especially spatial data. Data with inaccurate or imprecise locations are pretty useless for documenting fine-scale biodiversity loss, for instance. This is why I want to keep iNat focused on this important task, which is a subset of, but by no means encompassed by, 'connecting people with nature'. We are here for different reasons, and we want to make sure the site reasonably meets all our needs. And to me, that's the largest amount of accurate data we can get. No matter how ugly, if it can be verified or if we can assess trust of the user, I'd rather the data be here (mapped correctly, of course :) )

2 Likes

Good discussion. I offer an example:

Background: I work for a land conservancy and manage 200+ preserves and reserves. I utilize iNat to gather and database species occurrence records on our lands and invite volunteers, members, and the public to join us in this effort.

One of our great volunteers is a retired state wildlife biologist. He often inventories our lands for us and he uses iNat. He does not always capture a photo, and I've let him know that his records without photos are still valuable to me and to our database/records of a property. Sometimes I want to agree with an observation of his without a photo because I know his expertise (sometimes I have). Here is a sample of one of his casual obs: https://www.inaturalist.org/observations/11432633

Again, the data is helpful to me regardless of its casual status, but I could verify his observation based on my knowledge of his expertise, and then maybe it would be valuable to the community. I am not necessarily saying I would opt for this as approved evidence, but I wanted to add an example to the discussion.

Also, my two cents: just like in any other repository of data, any outliers or anything pointing to something new (e.g. a range extension) should be verified or supported with a second data sample. This means to me that taking a more liberal approach to what is verifiable would be okay.
-Derek

4 Likes

Personally, I think that evidence is required on iNat. I know lots of people posting evidence-free observations, some of whom are perfectly trustworthy, but quite honestly I would prefer that sort of observation on some checklist or atlas website rather than iNaturalist.
I have no problems with sonograms or sketches (and none at all with sketches such as these), but exclusively verbal descriptions of morphology or call, I cannot evaluate.
DNA is theoretical at this stage, but I predict it will become more prevalent in the future (e.g. identifying all the amphibians or fish or may- and caddisflies from a drop of pond water).
Obviously, the latter is beyond human possibility, but I guess I am saying that if an AI can make an ID, then I am happy. And conversely, if an AI cannot make an ID, then I am not happy.
The bottom line is that we need limits, and as a community we should agree and decide on the limits. I don't know how a discussion will achieve this on its own (and I cannot imagine it being put to "a vote"). More likely, user pressure (what we post) and feedback (the IDs and validations received) will strongly guide this and shape what is acceptable on the site.

What would be nice to get some feedback on is: who is using this data, and for what? Is anyone prepared to use unverifiable data, or are those posting it the deluded exclusive users of their own data (perhaps better managed in an Excel spreadsheet)?
If there is a demand for such data, then perhaps we should encourage it - or alternatively refer it to a site that manages it better?

{An aside:
But how much data is "unverifiable"?
https://www.inaturalist.org/observations?place_id=any&verifiable=false&view=observers
(Frankly, I was at first perplexed by this result, until I realized that it was not just that there was no ID evidence, but also no date and location "evidence" - how many of these users realize that they don't have dates, localities or other features, rendering their data "unverifiable"?) - Perhaps this needs a new thread?}

There simply isn't any other place to store that data that works nearly as well. There are so many reasons - couldn't get a photo for whatever reason, had old data you wanted included, want to build out and expand range maps, a bioblitz from pre-iNaturalist with a species list but no photos, I could go on and on. But… no one is really asking whether or not it should be allowed; that's already how the site is set up, so if you don't like them, I suggest you use the very easy-to-use filters to just turn them off.

And yes, I use the unverified data… for tracking range of species, seasonality, change over time, and all kinds of other spatial ecology stuff. Granted, a lot of the unverified data I use is my own data or from a source I know and understand. Often it's vegetation plot data from pre-iNaturalist, or from sources I have permission to use but didn't take photos of. Or, while doing vegetation plots, I don't always have time to photograph everything, or else it's pouring rain or there's some other issue. I could see a filtering system where one could ignore or vote down an unverified data source in some cases. But in the end they are what they are - someone saying they saw something somewhere, and it's your choice what you do with that. Banning others from seeing or using the data isn't a good option, but if you don't want to see it, you can turn it off on the range maps and in your filters, and then it won't bother you.
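(For anyone pulling data programmatically, the same turning-off works through the public iNaturalist API. A minimal sketch, where the taxon_id and place_id values are placeholders and I'm assuming the standard verifiable / quality_grade parameters:)

```python
# Minimal sketch: pull only verifiable (or only Research Grade) observations
# from the public iNaturalist API. taxon_id/place_id values are placeholders.
import requests

API = "https://api.inaturalist.org/v1/observations"

params = {
    "taxon_id": 12345,        # placeholder taxon
    "place_id": 6789,         # placeholder place
    "verifiable": "true",     # drop casual records lacking media/date/location
    # "quality_grade": "research",  # stricter: Research Grade only
    "per_page": 200,
    "page": 1,
}

resp = requests.get(API, params=params, timeout=30)
resp.raise_for_status()
for obs in resp.json()["results"]:
    print(obs["id"], obs["quality_grade"], obs.get("taxon", {}).get("name"))
```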

1 Like

I take your point.
I would never stop someone from using a hammer to put in screws.
But is it the best tool for the job? Might there not be a better place, or a better way of doing it?

E.g., one of our contributors to the BioGaps project has gone and created 60 5×5 km places and put their data in as checklists (to my horror, into the default checklist instead of a specific checklist, but it is now done). It is certainly a valid use of iNaturalist as a tool. But is it a good way?
https://www.inaturalist.org/projects/biogaps-transects
But at least it has saved us the clutter of a few thousand unverifiable observations (although more than half of the data are backed up by herbarium specimens).

1 Like

It's not the perfect tool for the job, but it's the best tool for the job by far. At least for my purposes. But it is what it is. I guess our best option is lots of filters and map layers.

1 Like

Evidence also means:

  • A photo taken from an aeroplane, identified as a daisy herb.
  • An almost-silhouette of a tree taken at twilight, identified to subspecies level.
  • A photo of a lizard under a rock with a tail sticking out.
  • Any photo of an organism that cannot separate it from other possibilities.

It's simple and uncomplicated to agree with something when you can't be proven wrong. Photos that don't display the relevant identifiable features, such as fruit or hairs on a plant, make it easy to identify because the identifier can't be proven wrong; it's a guessed identity, but these are the observations that mostly make it to research grade. Whereas an observation of a plant showing fruit, buds, bark, leaves, form, and hairs is never (or very rarely) identified, as it takes effort to key that species out. There is more reward in clicking 'agree' than in actually keying a species out; it's more satisfying.

My version of evidence is to provide utmost certainty that no other possibility remains. This is not the same as iNaturalist's interpretation, as they have their own path. (I also posted a feature request to include metadata for additional features such as flowers, but there was no community interest whatsoever.)

There is no filter that can distinguish quality observations displaying identifiable features from those listed above, such as the aeroplane photo. That would require displaying metadata for each photo, such as 'leaves' or a separate photo of 'flower', while allowing users to search for 'additional evidence of organism' in the filter options. All data has value, but it is the way that information is filtered that determines the researcher's version of quality. So unless a researcher can filter out the observations that don't meet their expectations, the current results iNat provides will fail to have a reputation for accuracy.
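(The closest existing approximation I'm aware of is observation-level annotations, which is still not the per-photo metadata described above, but it shows roughly what such a filter query could look like. A rough sketch against the public API; the term labels are assumptions and the taxon_id is a placeholder, so the IDs are looked up at runtime rather than hard-coded:)

```python
# Rough sketch: filter observations by an annotation (controlled term),
# e.g. plants annotated as flowering. Term labels below are assumptions;
# numeric IDs are looked up from the controlled_terms endpoint at runtime.
import requests

terms = requests.get("https://api.inaturalist.org/v1/controlled_terms", timeout=30).json()

def term_ids(term_label, value_label):
    for term in terms["results"]:
        if term["label"] == term_label:
            for value in term["values"]:
                if value["label"] == value_label:
                    return term["id"], value["id"]
    raise ValueError("term/value not found")

term_id, value_id = term_ids("Plant Phenology", "Flowering")

resp = requests.get(
    "https://api.inaturalist.org/v1/observations",
    params={"taxon_id": 12345,            # placeholder taxon
            "term_id": term_id,
            "term_value_id": value_id,
            "per_page": 50},
    timeout=30,
)
print(resp.json()["total_results"], "observations annotated as flowering")
```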

Therefore 'evidence' has multiple interpretations depending on the user's expectations.

3 Likes

This has been a really interesting thread. I think it's inevitable that at times we have to accept as evidence descriptive information that simply can't be photographed or audio-recorded. For example, the only clear way (but a very reliable one) to separate the aster Symphyotrichum laeve (very common) from S. oolentangiense (very rare) for much of the year is that the leaves of the latter are scabrous to the touch. This is not a trait that photographs well (or at all). I don't think many here in this thread would find it controversial that noting that in the description counts as useful evidence.

In concept, though, I feel there's not a lot separating that from someone making an illustration in the field of a plant that's right in front of them. Sure, drawing is a subjective process, but if the illustrator understands which traits are diagnostic and is careful to represent them, then someone else who is also familiar with the traits of that plant would be able to confirm the ID. I don't see why considering such an observation "research grade" wouldn't be as valid or more so than a lot of the observations I've ID'd based on the "gestalt" of a plant depicted by a poor photograph.

That said, I'm not sure I'd feel any different about the evidence quality of a detailed verbal description with no image at all. If it describes the diagnostic traits, it's confirmable. The only reason I can imagine to treat that differently would be the question "without a photograph, how do we know they're not lying about having seen anything at all?", which is a question that could as easily be leveled at an illustration. Then again, you can lie about the provenance of a photograph. I can't think of many hypothetical scenarios where this is the only form of observation you'd be able to make, though. Maybe you're in the field with a sound recorder and a camera but no notebook, and your camera dies, but then you find a plant you'd like to observe? I don't know. It does seem a pretty niche consideration, but the question's in this thread.

Poor illustrations and poor descriptions are obviously not worth much as evidence, but often neither are poor photographs.

4 Likes

It's been discussed, along with the reputation system, that one could make some evidence-free observations research grade if the user's reputation warranted it. For instance, I collect plant data for work; it's 'research grade' because I am a botanist, but when I add plot data here without photos it's 'casual'. Which doesn't bother me, but one can see the contradiction. However, the reputation system is another issue, so I'll leave it at that.

2 Likes

I'm feeling very conflicted about this whole thread. How much weight should be given to descriptions that the observer writes but doesn't document with photos/sounds/etc.? This is particularly difficult with species that have look-alikes. For example, there's a study that argues that long-tailed weasels and short-tailed weasels can't be told apart based on tail length if the individual has an intermediate-length tail. So when someone posts a photo that doesn't even show a tail and then says they're sure it's a long-tailed, what am I supposed to do? Even field biologists with calipers can't always be sure.

1 Like

Try asking them, and if they don't respond you can consider knocking it back to genus?

The point is, they do respond: they say they're sure it's long-tailed, but I'm saying even a field biologist would often have trouble. If I would be skeptical of an ID made with calipers, shouldn't I be much more skeptical of an ID made by sight alone? Is the evidence "I saw it and it had a long tail" sufficient? Should it be?

1 Like

I dunno. Maybe they are weasel experts and you just can't tell. At that point I'd probably personally just not verify it, but not vote against it either. But there are lots of grey areas. In my experience (with plants), most people either say 'oh, I didn't consider that other species' and adjust accordingly, or give further evidence not captured by the photo that lets me verify the ID.

If the species truly can't be told apart at all without a skeleton and a microscope or whatever, iNat is probably never going to be a good place to track them anyway, so I think that's just going to have to be taken into account, rather than forcing people to genus. But that's just me.

My version of evidence is to provide utmost certainty that no other possibility remains.

I agree with that. I was recently asked to identify a moth that I was unfamiliar with. I did so, and posted the reasoning behind my decision. It's possible that I could be wrong, as the photo had an equivocal image of a key feature, but I did document that as well. I think most of us do our best to properly identify 'unknowns', and add to the database.
Ian

1 Like