@pisum In my problem (described below, after my post was moved here) I am getting exactly the same list of suggestions as you got for your blank image. So it looks like CV is receiving a blank image instead of the real one? What I don’t understand is why it does this for the microscope images but simply gives up on other pictures. The photos do come from a different camera, so that may be related.
My laptop is 100% AMD, so I guess that eliminates the nVidia theory.
The most important takeaway is that it’s a Firefox issue - it works fine in Edge, so I can just use that; all browsers are the same anyway.
On several observations from the microscope, CV gave me nonsensical recommendations - and it’s exactly the same list for each photo. The same list also comes back for photos that it previously made sensible guesses for.
I tested CV on simple photos of birds (which it previously had no issues with) and it tells me “We’re not confident enough to make a recommendation.” I haven’t found any photo for which it gives a sensible guess, so it’s completely useless for me now.
that’s interesting. if that’s the case, then maybe you should send one of the microscope images to iNat staff. maybe it is something strange about how firefox handles certain images after all.
or if it’s failing for all photos now, you could check for a setting in Firefox related to canvas fingerprinting. if you turn on the feature to resist canvas fingerprinting, that should cause these sorts of problems on any image loaded in that upload page (since that page sends stuff to be evaluated by the CV after some manipulation using the canvas element in the browser).
Adding iNaturalist as an exception to “advanced tracking protection” solves the problem. Not sure what Firefox is trying to protect me against without me even knowing about it, but that’s the current trend in everything. Thanks, this is now solved for me.
my understanding is that the way your machine renders a particular image or text on a hidden canvas element is unique to your particular hardware and software setup. so folks who want to track you can tell your browser to generate such an image and turn it into a hash (fingerprint). each time this occurs, the same hash will be generated (unless you change your hardware / software), allowing it to be used to track you across the internet (regardless of whether you use private/incognito mode), separate from tracking cookies and other standard trackers.
fingerprint protection blurs the canvas, injects noise, or otherwise tweaks the canvas to prevent the resulting hash / fingerprint from being unique / consistent, thereby preventing tracking.
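the two posts above can be sketched in a few lines of code. this is just an illustrative Python model, not actual browser code: `render_canvas` stands in for the browser drawing to a hidden canvas (the real output varies with GPU, drivers, and fonts), and `add_noise` stands in for the anti-fingerprinting tweak. the names are invented for the sketch; the point is only that a deterministic render hashes to the same fingerprint every time, while injected noise changes the hash on every read.

```python
import hashlib
import random

def render_canvas() -> bytes:
    # Stand-in for rendering text/shapes to a hidden canvas element.
    # On a real machine the exact pixel bytes depend on the hardware
    # and software stack, which is what makes the hash identifying.
    return bytes(range(256)) * 4  # deterministic "pixels" for this machine

def fingerprint(pixels: bytes) -> str:
    # Hash the raw pixel buffer: identical rendering -> identical hash.
    return hashlib.sha256(pixels).hexdigest()

def add_noise(pixels: bytes) -> bytes:
    # Stand-in for fingerprint protection: flip the low bit of a few
    # random pixels so each read of the canvas yields a different buffer.
    data = bytearray(pixels)
    for i in random.sample(range(len(data)), k=16):
        data[i] ^= 0x01
    return bytes(data)

stable = fingerprint(render_canvas())
# without protection, the tracker sees the same hash on every visit
assert stable == fingerprint(render_canvas())
# with noise injected, the hash is no longer stable, so it can't track you
assert fingerprint(add_noise(render_canvas())) != stable
```

the side effect, as this thread shows, is that any page doing legitimate canvas work (like iNaturalist’s upload page preparing images for the CV) also gets perturbed pixels.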
Wow, I can’t even imagine how this works. It’s interesting - I used to be a lot into programming 25 years ago and knew all sorts of hacks, but since then the world has completely changed and most of my understanding is now entirely useless!