Do observers’ hands, which often appear in the photographs, help the computer vision (as a scale object, for example)?
It’s important to remember that computer vision is not trained on the organism itself, but on photos of the organism which have been submitted to iNaturalist. So if photos of a species on iNat often have hands in them, the computer vision model will have been trained on photos that include hands. The model doesn’t take scale into account, just pixels. But definitely feel free to use your hands for scale or to hold a plant steady, etc.
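To illustrate the “just pixels, no scale” point: image classifiers typically resize every photo to one fixed input size before inference, so absolute real-world size never reaches the model. This is a minimal sketch, assuming a naive nearest-neighbor resize and a hypothetical 299×299 input size; it is not iNaturalist’s actual pipeline.

```python
import numpy as np

def preprocess(photo: np.ndarray, size: int = 299) -> np.ndarray:
    """Naive nearest-neighbor resize to the fixed square input a
    classifier expects (299x299 is a common but assumed choice)."""
    h, w = photo.shape[:2]
    rows = np.arange(size) * h // size  # which source row feeds each output row
    cols = np.arange(size) * w // size  # which source column feeds each output column
    return photo[rows][:, cols]

# The "same" uniformly colored subject filling a large frame and a
# small frame produces different-sized arrays...
close_up = np.full((1200, 1200, 3), 100, dtype=np.uint8)
cropped = np.full((300, 300, 3), 100, dtype=np.uint8)

# ...but after preprocessing the model sees identical pixel grids:
a, b = preprocess(close_up), preprocess(cropped)
assert a.shape == b.shape == (299, 299, 3)
assert np.array_equal(a, b)  # real-world scale information is gone
```

So a hand in the frame only matters to the model as a visual pattern of pixels, not as a ruler.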
Interestingly, when we were just starting to play with the first computer vision model, Homo sapiens was not one of the species in the model. Therefore, photos of humans generally got IDed as Sceloporus occidentalis, which was at the time our most observed species, and many photos of it included hands holding the lizards. Since that was the most visually similar taxon, the model suggested it.
I’ve seen observations of a yellow flower’s petals held between an observer’s fingers, making the whole thing look like a butterfly or a whimsical flower, yet the computer vision still IDed the flower correctly.
Hands are one way to provide scale for users to confirm IDs.
Tiwane, that is an interesting effect that hands had on CV!
I should say it’s just a hypothesis, but quite likely the cause. Another interesting example happened when I used some Ramalina menziesii for a live demonstration at the California Academy of Sciences one night. It was rolled up into a ball and computer vision kept suggesting birds, because it looked like a bird nest. When I straightened it out, computer vision got it right!
Only problem I see with hands in obs is that I would never be able to get away with a crime.
Any authority will have lots of images of my fingerprints, palm prints, etc.
With the number of my obs with my hands in them, I’d better stay on the right side of the law.
Aves… jailbird :D
I sometimes use my hands as a focusing tool for things like spiders in webs,
and of course I also occasionally use my hands to hold up flowers.
It’s really just to make the picture “good enough for iNat”.
So do you think the computer vision has learned to use the images of hands as a reference for color and size, thereby making correct IDs faster (working more efficiently)?
I’m not sure, to be honest. I feel that if I make a better photo by using my hand as a focusing tool, it is more likely to reach Research Grade. If there are more RG photos of a particular species for the computer vision to train on, then I think it can make more reliable suggestions. That’s my thought; I’m not too sure how machine learning works.
I would err on the side of taking a photo which another human can identify, and a lot of the time holding a flower is the easiest way to do that, especially with a smartphone camera. Don’t worry too much about how it will affect the machine learning.
thank you all.
While I understand why some people do it, I personally almost never use my hands.
I don’t do it, for multiple reasons:
- It can affect the subject (it’s even forbidden in some areas for some species)
- Most of the time it is not required to help identification
- It is not aesthetic
- It might affect AI results
Most of the time, a photographer can move themselves instead of moving the subject.
There are some identifications that are not possible without using hands or a small piece of wood, but for me that is the exception, not the rule. That said, I understand that everyone sees this from their own point of view, and that my opinion is probably not the majority’s.
Personally, I find it challenging to get good points of view, and that’s something that gives me pleasure.
As for holding flowers: using a small piece of wood to set up the photo is sometimes enough to get a good angle without a hand in the picture. It’s not that hard.
@valentinhamon Welcome to the forum! Thank you for adding your point of view!
What would be your method to photograph a bush with a mass of foliage, each leaf identical and about 2–3 cm, totally obscuring the stems and trunk, with no buds, flowers, or fruit, and no clues as to relative size?
It’s usually breezy when I photograph flowers, so I am forced to hold them steady. If my hand is out of the picture, the flower will probably still be blurred in the photo.
What I find is that if the plant of interest is green and the background foliage is in shades of green or dark green, even the best camera offers no chance of crisp contrast. The flesh color of one’s hand is the best background.
I found that a white background washes out important details on flowers and insects. Cardboard is surprisingly good; perhaps because it is flesh colored too?
I noticed with my pinned specimens that when photographed against a white background the contrast is horrible, but against a grey background they almost always contrast well. Viewed with the eye it is the other way around: the white background and its contrast make things easier for the eye to perceive, while the grey somehow obscures the detail. I’ve always found that quite curious…
It might be caused by the difference between pigment colors and illumination colors, at least if the photographed samples are viewed on a screen.
That could be it… I’ll try experimenting with printed images!