Computer Vision bias toward animals?

Sebastian, you make an important and related point regarding observations with multiple images. Does anyone know if/how CV incorporates or assesses the 2nd, 3rd, etc., images in a set?

Particularly for plants, I often upload multiple images with close-ups of the flower, leaves, habit, etc., but I always try to lead with a close-up of the flower, since that is often the most useful element from a taxonomic standpoint. CV doesn’t go through botanical keys, but since it trains on the details of images, I want to emphasize those aspects of a plant that are likely to be most diagnostic. In @earthknight’s example, my inclination would be to have the close-up flower image first in line.

As a recent example, here’s a plant I documented in New Mexico on a vacation visit:
https://www.inaturalist.org/observations/66949773
I ordered the images to have the first two images show the flower, then the leaves, then the overall habit of the plant.


"due to the steepness of the cliffs that means that it’s impossible to see into the interior in most areas. Therefore observations are biased to the periphery of the island."

Aside: Would drones work to do surveys in your preserve?

Not really. The langurs are often difficult to spot and would be even more difficult to spot via drone unless you were flying it very close to them, and to do that you need to already have them in sight.

As well, the animals react badly to drones. We carefully tried some drone work when filming some documentaries and they really don’t like the drones at all.

Using drones for animal observation and tracking is one of those things that has to be handled with a lot of care. The research done so far on the effects of drones on wildlife indicates that they have a much greater negative impact than previously thought, but it depends a lot on what type of drone you’re using, how it’s being used, and for what purpose.

In my experience, the best drones to use for wildlife observations are fixed-wing drones that fly high and cover a large area. These have been successfully used in orangutan conservation to count nests and for large animal surveys in Africa.

Fixed-wing drones have the advantage of being much quieter than either quad-style drones or aircraft, and the fact that they don’t linger in an area means that their impact is somewhat lower as well. They can also cover larger areas than quads, so they are really good for mapping.

Wich and Koh have a good book, Conservation Drones: Mapping and Monitoring Biodiversity, that goes into some of the various uses, issues, benefits, and roles that drones can play in conservation.

In our area, the best use of drones would be mapping areas we can’t access, monitoring seasonal vegetation changes (or mapping areas burned by fires), and tracking urban expansion.

Here are some photos of what the landscape here looks like to give an idea of the situation:


UNBELIEVABLE…

Jeepers! Cat Ba is ~stunning~ :star_struck:

" When using computer vision, only the first image is assessed." https://inaturalist.nz/pages/help

My understanding is that computer vision doesn’t see them at all, which is why your first image should be the best representative image of the species. (In training, the CV uses photos from all positions in an observation, but when analysing an upload it only looks at the image you lead with.)
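As a rough illustration of that split (a minimal sketch in Python with hypothetical data structures and function names, not iNaturalist’s actual code):

```python
# Hypothetical sketch, not iNaturalist's real pipeline.
# Training can draw on every photo attached to an observation,
# but suggestion time only looks at the photo you lead with.

def build_training_set(observations):
    """Collect (photo, taxon) pairs from all photos of each observation."""
    samples = []
    for obs in observations:
        for photo in obs["photos"]:            # every photo can contribute to training
            samples.append((photo, obs["taxon_id"]))
    return samples

def suggest_taxa(observation, model):
    """Return ranked suggestions using only the first (lead) photo."""
    lead_photo = observation["photos"][0]      # 2nd, 3rd, ... images are ignored here
    return model.classify(lead_photo)
```

So reordering your photos so the most diagnostic shot comes first is the only way to influence what the suggester actually sees.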

It certainly has its moments. Right now it’s overcast and chilly though, so landscape photos would be a bit of a disappointment today.

Yes, it has a bias; it is biased toward what has already been identified. It is answering the question with:

“Well, the most popular choices for pictures like this are the following species.”

The ranking is about trying to quantify what it means to be “pictures like this”, and then surfacing the most common IDs given for those pictures.
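Purely as a way to picture that framing (the real model is a trained neural-network classifier, not a literal nearest-neighbour vote, and the names below are made up):

```python
from collections import Counter

def rank_suggestions(query_embedding, labeled_examples, k=50):
    """Toy 'popularity contest': find the k most similar identified photos
    and rank taxa by how often they appear among those neighbours."""
    def similarity(a, b):
        return sum(x * y for x, y in zip(a, b))   # simple dot-product stand-in

    # labeled_examples: list of (embedding, taxon_id) from already-identified photos
    neighbours = sorted(labeled_examples,
                        key=lambda ex: similarity(query_embedding, ex[0]),
                        reverse=True)[:k]
    votes = Counter(taxon for _, taxon in neighbours)
    return votes.most_common()                     # most popular IDs for "pictures like this"
```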

Thus it is a popularity contest.

-Paul

I’m guessing the CV doesn’t start with “some type of flower” and “some type of bee”, nor with a single score for the entire photo; I believe it starts by trying to work out the subject and goes from there. I know that is a typical first step in image processing (consider facial recognition).

Maybe it tries a few subjects, but since it is learning from examples, the subject usually ends up being whatever blob dominates the scene.
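A hedged sketch of that detect-then-classify idea (the detector and classifier here are assumed helpers, not anything iNaturalist has confirmed it uses):

```python
# Hypothetical detect-then-classify pipeline.

def identify(image, detector, classifier):
    """Crop to the most prominent subject in the frame, then classify the crop."""
    region = detector.detect_dominant_region(image)   # usually the biggest / most central blob
    if region is None:
        return classifier.classify(image)             # fall back to scoring the whole scene
    return classifier.classify(image.crop(region))
```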

But I think it will be a very long time before any such algorithm can work out that a photographer really wanted to ID some little black spots (mites) on a mushroom (fungal fruiting body), and not the mushroom itself, when the photographer didn’t have a macro lens to take a detailed enough photo.

The ant-on-leaves example may be a case where the CV didn’t even have a good subject and had to compare on something more general, like overall picture likeness (contradicting my thought that it doesn’t score the overall scene), so another scene with a similar forest floor gets a high similarity score even if it has no ants. Oops.
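To see how two ant-free forest-floor photos could still score as “similar” if the whole scene is compared, here is a small cosine-similarity sketch over made-up whole-image feature vectors:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two whole-image feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Two scenes dominated by leaf litter embed almost identically,
# whether or not either one actually contains an ant.
leaf_litter_with_ant = [0.90, 0.80, 0.10, 0.05]
leaf_litter_no_ant   = [0.88, 0.82, 0.12, 0.02]
print(cosine_similarity(leaf_litter_with_ant, leaf_litter_no_ant))  # close to 1.0
```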
