Curiosity about the Observation Streaks calculation

It seems that if I submit a new observation every day, those consecutive observations count as a streak in my personal stats, but if I make a new observation every day without uploading them daily, those observations aren’t counted as an “observation streak”, even though I have one logged for each calendar day in the period. So, are “observation streaks” really “submit/upload streaks”?

i.e., if I make an observation every day for 50 days and upload them in real time, I have a 50-day streak. But if I make an observation every day for 50 more days and only upload weekly, I end up with seven separate streaks, even though every calendar day has a new observation accounted for…

It’s really not important beyond my curiosity about how the platform works. I have found myself in the habit of making at least one new observation daily, and that is accurately reflected on the “calendar”, but my stats show breaks where I perhaps didn’t upload until the next day.

Thanks for the clarification! :)



As far as I can tell, streaks count observations by the day they were observed, not the day they were submitted. I tested this by generating my stats for 2019 (I joined iNat in 2020), and it shows that I have 11 streaks. That seems like fairly solid evidence to me!
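To illustrate the difference being discussed (a hedged sketch, not iNat’s actual code — the function and dates are made up): a streak is just a maximal run of consecutive calendar dates, so the result depends entirely on which date you feed in.

```python
from datetime import date, timedelta

def count_streaks(days):
    """Lengths of maximal runs of consecutive calendar dates."""
    days = sorted(set(days))
    runs, run = [], 1
    for prev, cur in zip(days, days[1:]):
        if cur - prev == timedelta(days=1):
            run += 1
        else:
            runs.append(run)
            run = 1
    if days:
        runs.append(run)
    return runs

# Observed daily for 14 days, but uploaded in two weekly batches:
observed = [date(2021, 11, 1) + timedelta(days=i) for i in range(14)]
uploaded = [date(2021, 11, 7), date(2021, 11, 14)]

print(count_streaks(observed))  # [14] -- one unbroken run by observed date
print(count_streaks(uploaded))  # [1, 1] -- two one-day runs by upload date
```

Counting by observed date gives the single unbroken run the original poster expects; counting by upload date would give the broken-up streaks they described.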

Thanks… I appreciate the response.

ps. I saw your stuff-in-webs project… what a neat idea! I have a recent observation that may fit:


Thanks! The project is not for observations of spiders, so that observation could be added if you duplicate it and ID the trapped insect (dragonfly?) instead.

So weird. I thought maybe when the new Year in Review was available, the observation streak data would be updated/accurate. I even regenerated my personal stats, with no change. I have at least one daily observation since January 1, 2020 (709 days), but that isn’t reflected as a continuous streak. I guess I’m doing it wrong. LOL.

the year in review streaks are calculated based on observed day (not submit day), as far as i can tell, but they count only verifiable observations:

An observation streak is a period of at least five days when someone got outside and recorded new, verifiable observations every single day. Here we’re showing the longest streaks that began this year or were in progress when these stats were generated (for individual users we’re also including streaks that ended this year).

so while you may have had observations every day since 1 Jan 2020, not every day since then has included a verifiable observation. see
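under that definition, a single day with only non-verifiable observations is enough to split a run. a rough sketch of the rule as quoted (minimum length 5, verifiable days only — hypothetical code, not iNat’s actual implementation):

```python
from datetime import date, timedelta

def yir_streaks(observations, min_len=5):
    """(start, end, length) for runs of >= min_len consecutive calendar days
    that each contain at least one verifiable observation."""
    days = sorted({d for d, verifiable in observations if verifiable})
    streaks, start = [], None
    for i, d in enumerate(days):
        if start is None:
            start = d
        # close the run when we hit the last day or a gap of more than one day
        if i + 1 == len(days) or days[i + 1] - d > timedelta(days=1):
            length = (d - start).days + 1
            if length >= min_len:
                streaks.append((start, d, length))
            start = None
    return streaks

# Ten consecutive days of observing, but day 5 has only a casual observation:
obs = [(date(2020, 1, 1) + timedelta(days=i), i != 4) for i in range(10)]
print(yir_streaks(obs))  # [(date(2020, 1, 6), date(2020, 1, 10), 5)]
```

here the first four verifiable days don’t count (run shorter than five), and the casual-only day breaks what would otherwise be a ten-day streak.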

shifting slightly off-topic for a second, one thing that looks a little odd to me is that when you hover over each of the little bars representing a streak on the YIR page, you’ll see the date range and number of days. the number of days looks fine, but the date range seems to be a day too early. for example, your shortest streak is noted as 9 days from 7 Nov 2021 to 15 Nov 2021, but as far as i can tell the actual range for these verifiable observations is 8 Nov 2021 to 16 Nov 2021. maybe @carrieseltzer or @tiwane can explain the discrepancy?

finally, if you want to see your streaks, including non-verifiable observations, you could use this 3rd party streak finder here:

To account for time zone differences and such, Ken-ichi said he thinks he’s defining a streak as ending a day before stats are generated, “e.g. if we’re on either side of the dateline, we might both be on a streak even though we have both made an observation on the most recent day.”

Wow. That is really cool… and your explanation makes sense to me. I suppose I didn’t/don’t know what makes an observation “verifiable”. I really appreciate you explaining what was going on. Thank you for that!


i can understand generating stats only up to a particular day based on some standard calendar / clock so that folks in Australia don’t get a day’s advantage over folks in the Americas when creating streaks, but shifting the entire range (both starting and ending date) of a particular streak doesn’t seem like a solution to that kind of problem. i think something else is going on.

if you look at the bottom of the page on any given observation in the website, you’ll see some Data Quality Assessment metrics, like so:

if you click on the information icon next to the section header, you’ll see an explanation of research grade, needs ID, and casual. verifiable is basically either research grade or needs ID. or looked at another way, verifiable is anything that’s not casual.

here’s the relevant text when you click on the icon:

The data quality assessment is a summary of an observation’s accuracy. All observations start as “casual” grade, and become “needs ID” when

  • the observation has a date
  • the observation is georeferenced (i.e. has lat/lon coordinates)
  • the observation has photos or sounds
  • the observation isn’t of a human

Observations become “research grade” when

  • the iNat community agrees on species-level ID or lower, i.e. when more than 2/3 of identifiers agree on a taxon (if the community has voted that the Community Taxon can no longer be improved, this reverts to subfamily-level ID or lower)

Observations will revert to “casual” if the above conditions aren’t met, or if the community agrees that

  • the location doesn’t look accurate (e.g. monkeys in the middle of the ocean, hippos in office buildings, etc.)
  • the organism isn’t wild/naturalized (e.g. captive or cultivated by humans or intelligent space aliens)
  • the observation doesn’t present evidence of an organism, e.g. images of landscapes, water features, rocks, etc.
  • the observation doesn’t present recent (~100 years) evidence of the organism (e.g. no fossils, but tracks, scat, and dead leaves are OK)
  • the observation no longer needs an ID and the community ID is above family
  • the observer has opted out of the community ID and the community ID taxon is not an ancestor or descendant of the taxon associated with the observer’s ID
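
the quoted rules roughly translate to a decision procedure like this (a sketch with hypothetical field names, not iNat’s actual data model):

```python
def quality_grade(obs):
    """Sketch of the grading rules quoted above; field names are hypothetical."""
    # "Needs ID" prerequisites: date, coordinates, media, and not a human.
    verifiable = (obs.get("has_date") and obs.get("has_coords")
                  and obs.get("has_media") and not obs.get("is_human"))
    # Any reversion condition (captive, no evidence, bad location, ...)
    # pushes the observation back to casual.
    if not verifiable or obs.get("flags"):
        return "casual"
    # Research grade: >2/3 of identifiers agree on a species-level (or lower) ID.
    if obs.get("agreement", 0.0) > 2 / 3 and obs.get("species_level_id"):
        return "research grade"
    return "needs ID"

example = {"has_date": True, "has_coords": True, "has_media": True,
           "agreement": 0.8, "species_level_id": True}
print(quality_grade(example))                            # research grade
print(quality_grade({**example, "flags": {"captive"}}))  # casual
```

seen this way, “verifiable” is just anything that doesn’t end up in the “casual” branch, which matches the earlier point that verifiable = research grade or needs ID.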

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.