I am developing an application that uses iNat observations to send users to locations where flowers with their preferred colors grow. Flower color detection is part of the pipeline and it is challenging: I do not want just any dominant color in the image, but the dominant color of the flower itself. Here I explain a bit about why this is challenging: https://medium.com/@neda.jabbari/bouquetfy-part-1-find-the-dominant-color-in-the-image-8e27a3e4cb88
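For context, this is roughly what the naive whole-image approach looks like in R; it is only a sketch (it assumes the jpeg package and a placeholder file name, and it clusters every pixel), so background greens and browns can easily win over the flower, which is exactly the problem described in the post above.

#### Naive dominant-colour sketch: k-means over ALL pixels of an RGB JPEG
#### ("flower.jpg" is a placeholder; this finds the image's dominant
#### colour, not necessarily the flower's)
library(jpeg)
img <- readJPEG("flower.jpg")                    # height x width x 3 array
pixels <- data.frame(
  r = as.vector(img[, , 1]),
  g = as.vector(img[, , 2]),
  b = as.vector(img[, , 3])
)
clusters <- kmeans(pixels, centers = 5)          # 5 colour clusters
biggest  <- which.max(table(clusters$cluster))   # most common cluster
dominant <- clusters$centers[biggest, ]
rgb(dominant[1], dominant[2], dominant[3])       # hex code of that colour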
I’d love to see this as part of a search-by-key-features feature, where I could specify characteristics of a key and have iNat combine that with its AI data (and geographical info!!) to help ID unknown specimens. I would see it as complementary to the community ID approach. It’s a feature I really like from other sites, but iNat has so much more data to draw from. Very labor-intensive to implement, but perhaps the key characteristics could be cadged from someone else. It’s not plagiarism, it’s standardization!
This seems really cool! It could help with the identification of certain flowers: even though iNaturalist’s recognition AI is pretty good, it sometimes has trouble differentiating flowers that look similar but have noticeably (though not drastically) different colors, and photos where the flower is not the main focus. If you ever need help annotating images, I am always willing to help! Also, since most users set their photo licences to “Attribution-Noncommercial”, it would be perfectly fine to use the vast majority of them for your dataset. In the past I created a project that collects images of flowers, if you want to use it. There is a function that allows you to export datasets from projects, but I don’t 100% know how it works.
Something tells me that machine learning might be the easiest way to do something like this. You create a dataset of images tagged with “[colour] flower” (which also includes some images of non-flowers), and feed these into an ML service to generate a model.
That way you don’t need to work out how to isolate the flower in the image, as the system trains itself to do it.
I believe there are a number of repositories of labelled images out there that you could use as a starting point.
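In case it helps, here’s a very rough sketch of what that could look like with the keras package in R. This is all an assumption on my part: the folder layout flower_dataset/train/<label>/ (with labels like “red_flower”, “yellow_flower”, “not_flower”) is hypothetical, and the exact calls depend on your keras/TensorFlow version.

#### Rough image-classifier sketch with the keras R package
#### (hypothetical folder layout: flower_dataset/train/<label>/)
library(keras)

train_gen <- flow_images_from_directory(
  "flower_dataset/train",
  generator   = image_data_generator(rescale = 1/255),
  target_size = c(150, 150),
  batch_size  = 32,
  class_mode  = "categorical"
)

model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(150, 150, 3)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = train_gen$num_classes, activation = "softmax")

model %>% compile(
  optimizer = "rmsprop",
  loss      = "categorical_crossentropy",
  metrics   = "accuracy"
)

#### Train; the fitted model can then be used to predict
#### "[colour] flower" vs "not a flower" for new photos
model %>% fit(train_gen, epochs = 5)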
Yes, adding color as an additional feature for finding an ID can be tricky, but it can also make identification faster. For example, there is no such thing as a blue gerbera as far as I know…
To be honest, I’m not 100% sure, but there’s a way to export observations by going to the Observations tab and clicking “Export Observations”. You will be given a checklist of options. Uncheck all of them except for image URL. I guess after that you can use an automated URL image grabber.
If you want some R code to download iNat images from URLs:
install.packages("rinat")
library(rinat)
#### Set this to the directory where you want the photos to go
setwd("folder_where_you_want_to_download_photos")
#### Change this query to get the images you want
#### (supply at least one of query / taxon_name / taxon_id)
inat_results <- get_inat_obs(query = NULL, taxon_name = NULL, taxon_id = NULL,
  quality = NULL, geo = NULL, year = NULL, month = NULL, day = NULL,
  bounds = NULL, maxresults = 100, meta = FALSE)
#### This code downloads the images
#### (if your rinat version names the columns image_url / id instead,
#### adjust the two lines below accordingly)
for (i in 1:nrow(inat_results)) {
  file_source <- inat_results$Image.url[i]
  file_name <- paste(inat_results$Id[i], ".jpg", sep = "")
  download.file(file_source, file_name, mode = "wb")
}
You just have to change the folder you want to download the images to and change the observation query. As it’s set right now, it saves each image as [observation ID].jpg
Just be careful with the number of images you try to download at once; this is an easy way to max out the space on your computer. I only use this to download directly to an external drive.
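For what it’s worth, here’s how the placeholders above might be filled in; the taxon name, result count, and external-drive path are just examples you would swap for your own query.

#### Hypothetical example: 50 research-grade sunflower observations,
#### saved straight to an external drive instead of the main disk
setwd("E:/flower_photos")        # example external-drive folder
inat_results <- get_inat_obs(taxon_name = "Helianthus annuus",
                             quality = "research", maxresults = 50)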