Please fill out the following sections to the best of your ability; it will help us investigate bugs if we have this information at the outset. Screenshots are especially helpful, so please provide those if you can.
Platform (Android, iOS, Website):
App version number, if a mobile app issue (shown under Settings or About):
Browser, if a website issue (Firefox, Chrome, etc.):
URLs (aka web addresses) of any relevant observations or pages:
Screenshots of what you are seeing (instructions for taking a screenshot on computers and mobile devices: https://www.take-a-screenshot.org/):
Description of problem (please provide a set of steps we can use to replicate the issue, adding as many steps as you need):
Step 1: macOS Monterey 12.5.1
Step 2: Safari
Step 3: I uploaded a photo of a spider that was obviously a spider (no plants or anything else in the shot). The suggested IDs that came up were all of flowers.
The computer vision can be of questionable use for some critters. I find it’s pretty bad for NZ spiders, although it generally gets them into the correct order at least. There are a lot of factors that affect CV accuracy, including how many of a given critter it has been trained on and how many humans have provided (hopefully correct) IDs for the same critters. Long story short, don’t trust the CV.
There may also be an issue with whether you are limiting the suggestions to nearby taxa. If you have the CV set to include only local suggestions and there are not many observations nearby, it may suggest strange things. You can try allowing suggestions from all locations, but then you may get some less obviously wrong suggestions (like spiders that aren’t present on your continent, etc.)
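To picture why the “nearby only” setting can go wrong, here is a toy sketch of re-ranking suggestions by local observation counts. This is not iNaturalist’s actual algorithm; the function name, taxa, scores, and counts are all invented for illustration:

```python
# Toy sketch: combine raw visual scores with local observation counts.
# All names and numbers are made up; not iNaturalist's real algorithm.

def rerank(suggestions, local_counts, require_nearby=True):
    """suggestions: list of (taxon, visual_score);
    local_counts: dict mapping taxon -> number of nearby observations."""
    if require_nearby:
        # Keep only taxa with local records. With sparse local data this
        # can discard the correct taxon and surface odd leftovers.
        suggestions = [(t, s) for t, s in suggestions if local_counts.get(t, 0) > 0]
    return sorted(suggestions, key=lambda ts: ts[1], reverse=True)

suggestions = [("NZ jumping spider", 62.0), ("Japanese Snowbell", 38.0)]
local_counts = {"Japanese Snowbell": 5}  # few spider observations nearby

print(rerank(suggestions, local_counts))                        # snowbell only
print(rerank(suggestions, local_counts, require_nearby=False))  # spider ranks first
```

The point of the sketch: with few spider observations recorded nearby, the correct taxon can be filtered out entirely, leaving only the strange leftovers to show as suggestions.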
The CV will struggle with uncropped photos, so if the spider doesn’t fill most of the frame, try cropping it so that it does and see if the CV gives better suggestions.
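If you want to crop before uploading without a photo editor, something like the following works, assuming the Pillow library is installed; the image size, box coordinates, and filenames are placeholders you would replace with your own:

```python
from PIL import Image

# Stand-in 4000x3000 "photo"; in a real case you'd use Image.open("spider.jpg").
photo = Image.new("RGB", (4000, 3000))

# Crop so the subject fills the frame; box is (left, upper, right, lower)
# in pixels. These coordinates are illustrative only.
box = (1500, 1000, 2500, 2000)
closeup = photo.crop(box)
closeup.save("spider_cropped.jpg")  # upload this version instead
print(closeup.size)  # the cropped image is 1000x1000
```

The idea is simply to hand the CV a frame dominated by the spider, rather than making it guess which object in the photo it should be classifying.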
Yeah, that happens sometimes, though this particular example is pretty far off. Remember, it’s a computer: it doesn’t know anything. It doesn’t know that a spider and a flower are different; it can only try to match what’s in the picture with what it has been trained on.
Currently the taxon is stuck at State of Matter because the observer has identified it as a Japanese Snowbell for some reason (perhaps she doesn’t realize she can specify “spider” on her own?), but I guess this does allow the full set of suggestions from the observation page to be seen without being limited to just spiders.
Interestingly, from the observation detail page, the visual score for Japanese Snowbell is 38.04850340769848, which is (to me) surprisingly high. It seems like the computer vision must be zeroing in on the spider’s abdomen as the object to be identified and then trying to classify the abdomen. (If you think about the abdomen in isolation, it does sort of look like a snowbell fruit or another kind of fruit.)
Yep, I think you’re right, @psium. This is fascinating; I haven’t seen this before. Thanks for the example, @nehall. I’ll definitely be looking at this.
If I crop out the abdomen, I get spider suggestions.
A lot of my mushrooms get auto-ID’d as “bluegills” (fish). I fix them, of course, and just roll my eyes; I’m not sure why it does that (typically with boletes). When I don’t know the species, my filename is usually just “mushroom (number) - (view, i.e. side, top, stain, etc.)”, which I replace once I ID it, so even if it just pulled the file name, you’d think it would go to Fungi… it does that for all except the ones it is sure are lil fish, haha.