Automatically add a spectrogram view to observations with sounds

Maybe have an “Evidence of Organism” toggle for everything that’s not a photo of the animal itself, and then exclude that from CV learning?

5 Likes

I don’t have experience processing sound but I’m wondering if bat recording observations are rare enough relative to other kinds of sound that it could default to eBird-like settings and then have a button to show the full spectrogram? (analogous to the zoom/brighten buttons for images) I’m not sure if insect sounds are more similar to birds or bats.
I guess one potential issue with that is if it means you have to save two images for each recording.

2 Likes

@kueda: My opinion based on somewhat limited knowledge of ML and computer vision is that you already have that problem with “garbage photos” - ones that are misidentified, ones that have insufficient resolution or clarity, ones where the specimen is very small, ones where multiple species are depicted. Yet the algorithms handle it. Suggestion: if a particular image comes back with very low likelihood of being the depicted species, have it autoflagged as “identified species not visible” so that the image is not used, or is used properly, as part of a training set. Then we could include hostplants, spectrograms, weather conditions - parts of the field notes that are desperately needed.

Imagine a world where the spectrogram could be used by AI for bird ID, instead of what happens now – one posts a sound recording, and within a couple of years someone else finds it.

2 Likes

Not to beat a dead horse, but I thought the AI response to my sparrow song was funny:
[screenshot of the computer vision suggestions]

Like it or not the AI is picking up on spectrograms, and as of now bats beat birds in the spectrogram-to-photograph ratio. Perhaps “spectrogram” could be established as a “pseudo taxon” that the AI learns to recognize, and then never has to suggest.

8 Likes

my experience is that birds are identified fairly quickly on iNaturalist, if they can be identified easily, even if the only evidence is audio.

BirdNET – The easiest way to identify birds by sound. (cornell.edu)

there are a couple of videos on that page that give some basic explanation of how they do their thing. when asked if their algorithm could be adapted for animals other than birds, the answer is “maybe… other animals are using other frequency ranges than birds, and it gets more challenging for insects who are using higher frequencies, and it gets more challenging for bats… you need more specialized equipment for that [rodents, bats]… you can’t use your phone for that…”. apparently, you could theoretically leverage their open source code or even hook into their API to develop your own apps.
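
just to illustrate the idea, hooking into a recognition API from your own app might look something like the sketch below. to be clear, the endpoint URL, parameters, and response shape here are invented placeholders for illustration, not BirdNET’s actual interface:

```typescript
// Hypothetical sketch only: the URL, form fields, and response shape are
// placeholders, not BirdNET's real API. Adapt to the actual documentation.
interface Detection {
  species: string;     // e.g. "Melospiza melodia"
  confidence: number;  // 0..1
  startSec: number;    // where in the clip the call was detected
  endSec: number;
}

async function identifyClip(wav: Blob, lat: number, lng: number): Promise<Detection[]> {
  const form = new FormData();
  form.append("audio", wav, "clip.wav");
  form.append("lat", String(lat)); // location helps narrow the candidate species list
  form.append("lng", String(lng));
  const res = await fetch("https://example.org/birdnet/analyze", {
    method: "POST",
    body: form,
  });
  if (!res.ok) throw new Error(`analysis failed: ${res.status}`);
  return (await res.json()) as Detection[];
}
```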

to me, it seems like it would be a lot of work for relatively little benefit to develop something specifically for birds in iNat, considering other things already exist. if you’re going to develop something that can cover any organism, then that might be interesting, though probably exponentially more challenging.

also, at least right now, the number of observations with sound in iNaturalist is not very large – so there’s not necessarily a lot of data to train on. currently, there are only a little over 144,000 observations with sounds (mostly birds), representing just under 6,000 species. but if you look at how many of these species have more than 100 observations, you’re sitting at around 270 species, and if you limit that to just research grade, you’re down to around 240 species.
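
if you want to check these numbers yourself, the public iNaturalist API can give you the counts. a small sketch (as far as i know, `sounds=true`, `quality_grade=research`, and `per_page=0` are real query options on the v1 API, but treat the details as my assumptions):

```typescript
// Sketch: count iNaturalist observations with sounds via the public v1 API.
// per_page=0 asks for counts only, no records (drop it if the API rejects it).
async function countSoundObservations(researchGradeOnly = false): Promise<number> {
  const url = new URL("https://api.inaturalist.org/v1/observations");
  url.searchParams.set("sounds", "true");
  url.searchParams.set("per_page", "0");
  if (researchGradeOnly) url.searchParams.set("quality_grade", "research");
  const res = await fetch(url);
  const data = await res.json();
  return data.total_results as number;
}

// Number of distinct taxa with sound observations, via species_counts.
async function countSoundSpecies(): Promise<number> {
  const res = await fetch(
    "https://api.inaturalist.org/v1/observations/species_counts?sounds=true&per_page=0"
  );
  return (await res.json()).total_results as number;
}
```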

also, some of the relevant data that gets submitted isn’t even audio: spectrograms uploaded as observation photos are technically images, not sound files…

1 Like

I like this idea a lot. Having spectrograms attached is really useful, and if displaying them from the .wav file isn’t possible, this seems like a reasonable solution.

3 Likes

I don’t know if the Tadarida Toolbox makes spectrograms, but it can tell you which species a sound contains. I thought it was used for online websites in the UK and France.

https://openresearchsoftware.metajnl.com/articles/10.5334/jors.154/

https://github.com/YvesBas/Tadarida-C/commit/145d84f7fc57581733a8bef335ea7dfebaf9b9e3

While this is interesting, I don’t have an enormous amount of trust in automatic sound classifiers (background: I spend about 8 hours a day during the non-field season manually vetting the outputs of the SonoBat classifier and it’s correct some of the time, correct but not confident some of the time, and plain wrong some of the time).

Is your suggestion that iNat implement code for doing “computer vision” on .wav files? That would be interesting, but people confirming IDs would still need to download the .wav file, look at it in a sonogram/spectrogram viewer, and then come back to iNat to add their ID. I often end up doing this when a .wav file is uploaded anyway, but sometimes that step isn’t necessary, and when it isn’t, you save several minutes of steps and can therefore make more IDs. There’s a massive backlog of potentially identifiable acoustic bat observations on iNat, and very few people who have both the expertise and the time to work on them. Making the process faster and easier could help with that.

3 Likes

There are only a few species of mammals, crickets, and birds to distinguish there, so the hit ratio was high.

If iNaturalist does make support for spectrogram viewing maybe the spectrograms could be used to train the normal computer vision model instead of making a whole new computer vision model for sound.

Audio is for your ears.

Light for your eyes.

1 Like

I don’t think it would work very well. Most sounds in observations are very poor quality, have lots of noise, and are very long (not to mention the variation in the frequency ranges of interest). If you generate a spectrogram automatically, you won’t be able to pick out the important bits in most observations. Why not, as the observer or identifier, just edit the audio and create the spectrogram yourself (that’s what I do), then upload it as an image (or download the sound if you are identifying)? Audacity is a free and very easy program to use for this.

2 Likes

The Merlin app is remarkably good at ID from a cruddy wave file – better, in fact, than by photo ID. The reason, I think, is that it picks out specific frequencies in a 1D FFT and maps them to a relatively small set of signatures. Thus, a lot of the noise is irrelevant.
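
To illustrate the idea (this is just a sketch of frequency peak-picking, not Merlin’s actual pipeline): take one window of samples, compute a magnitude spectrum, and keep only the strongest bins as a compact signature, discarding broadband noise. A naive DFT keeps the sketch self-contained:

```typescript
// Illustrative only: a naive O(n^2) DFT over one window of samples,
// keeping the k strongest frequency bins as a crude "signature".
// Real systems use an FFT and far more sophisticated features.
function topFrequencies(
  samples: Float32Array, // one windowed chunk of audio
  sampleRate: number,
  k = 5
): { hz: number; magnitude: number }[] {
  const n = samples.length;
  const bins: { hz: number; magnitude: number }[] = [];
  for (let bin = 1; bin < n / 2; bin++) { // skip DC, stop at Nyquist
    let re = 0, im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (-2 * Math.PI * bin * t) / n;
      re += samples[t] * Math.cos(angle);
      im += samples[t] * Math.sin(angle);
    }
    bins.push({ hz: (bin * sampleRate) / n, magnitude: Math.hypot(re, im) });
  }
  // keep only the k loudest bins; everything else is treated as noise
  return bins.sort((a, b) => b.magnitude - a.magnitude).slice(0, k);
}
```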

1 Like

Indeed. An audio recording is useful for the visually impaired, and a spectrogram is useful for a deaf person. An observation with a photo, a recording, and a spectrogram is of use to most people.

1 Like

i don’t think you should crop the spectrogram exactly. i think the best thing to do is something like what Audacity does: it displays a default spectrogram frequency range up to ~20kHz (since human hearing is generally described as 20Hz to 20kHz) and allows the user to “zoom” in and out on that range, up to the frequency range captured in the file.

so in my mind, the ideal interface would allow the user to specify the min and max frequency (with a default of 0 to 20000 Hz) and the type of scaling (i.e. linear, logarithmic, etc.). this kind of interface would be dynamic, showing you the detailed spectrogram for a given window of audio, and that window would move as the audio played.

if you need to save default snapshots of the spectrograms so that you can deliver visual previews of the sounds easily, then i think the best thing is to have a handful of standard configurations, similar to how you have a handful of standard configurations for delivering photos. these preview spectrogram images could all be the same height and width – say 150px by 50px. your standard configurations could be something like the following (a rough code sketch follows the list):

  • human hearing range (20Hz to 20kHz), logarithmic scale
  • human hearing range (20Hz to 20kHz), linear scale
  • full range of audio file, logarithmic scale
  • full range of audio file, linear scale
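
as a rough sketch of what those standard configurations might look like in code (the names and the 150x50 size here are just the examples from this post, nothing official; log scaling assumes minHz > 0):

```typescript
// Sketch: map a frequency to a pixel row in a fixed-size preview image,
// with linear or logarithmic scaling.
type Scale = "linear" | "logarithmic";

interface PreviewConfig {
  minHz: number;  // e.g. 20 for the human hearing range
  maxHz: number;  // e.g. 20000, or the file's Nyquist frequency
  scale: Scale;
  width: number;  // e.g. 150
  height: number; // e.g. 50
}

// Row 0 is the top of the image (highest frequency).
function rowForFrequency(hz: number, cfg: PreviewConfig): number {
  let fraction: number;
  if (cfg.scale === "linear") {
    fraction = (hz - cfg.minHz) / (cfg.maxHz - cfg.minHz);
  } else {
    // log scaling devotes more rows to low frequencies
    fraction =
      (Math.log(hz) - Math.log(cfg.minHz)) /
      (Math.log(cfg.maxHz) - Math.log(cfg.minHz));
  }
  const clamped = Math.min(1, Math.max(0, fraction));
  return Math.round((1 - clamped) * (cfg.height - 1));
}

const standardConfigs: PreviewConfig[] = [
  { minHz: 20, maxHz: 20000, scale: "logarithmic", width: 150, height: 50 },
  { minHz: 20, maxHz: 20000, scale: "linear", width: 150, height: 50 },
];
```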

then if the user needs to see more detail, they could go to the observation detail page, where the player would dynamically provide more detail, as described above.

unfortunately, it doesn’t look like there are many easily found modules that would generate dynamic spectrograms for a given window of time. so in the absence of something like that, i don’t know if it makes sense, given limited resources, to approach the interface the way i’m thinking.

this looks interesting, but it seems to be no longer in development: https://github.com/miguelmota/spectrogram

this also looks interesting, but it generates spectrograms only from the microphone: https://github.com/borismus/spectrogram

2 Likes

That seems to be the easiest way for now – have a category, or could we use “tracks and signs”? I really don’t see how it differs from scat, etc.

1 Like

It is possible to generate and show spectrograms directly using Web Audio API.

https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API

Perhaps creating a module that reads the audio data and shows a spectrogram interface on the fly would be interesting. Also, providing interactive elements to change the frequency range and zoom would make visualization of audio files much more useful than static spectrogram images.
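
A minimal sketch of that idea, using real Web Audio API calls (AudioContext, AnalyserNode, getByteFrequencyData); the canvas element and the scrolling scheme are my assumptions:

```typescript
// Minimal scrolling-spectrogram sketch using the Web Audio API's AnalyserNode.
// Assumes a <canvas id="spectrogram"> exists on the page.
async function playWithSpectrogram(audioUrl: string): Promise<void> {
  const ctx = new AudioContext();
  const buffer = await ctx.decodeAudioData(
    await (await fetch(audioUrl)).arrayBuffer()
  );
  const source = ctx.createBufferSource();
  source.buffer = buffer;

  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048; // yields 1024 frequency bins
  source.connect(analyser);
  analyser.connect(ctx.destination);

  const canvas = document.getElementById("spectrogram") as HTMLCanvasElement;
  const g = canvas.getContext("2d")!;
  const bins = new Uint8Array(analyser.frequencyBinCount);

  let done = false;
  source.onended = () => { done = true; };

  function drawColumn() {
    if (done) return;
    analyser.getByteFrequencyData(bins);
    // scroll the existing image one pixel left, then draw the new column
    g.drawImage(canvas, -1, 0);
    for (let y = 0; y < canvas.height; y++) {
      // low frequencies at the bottom of the canvas, high at the top
      const bin = Math.floor(((canvas.height - 1 - y) / canvas.height) * bins.length);
      const v = bins[bin]; // 0..255 magnitude, drawn as grayscale
      g.fillStyle = `rgb(${v},${v},${v})`;
      g.fillRect(canvas.width - 1, y, 1, 1);
    }
    requestAnimationFrame(drawColumn);
  }

  source.start();
  drawColumn();
}
```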

I have a somewhat clunky Greasyfork script I wrote that generates spectrograms on iNat. It’s ugly, the controls are a bit hard to use, and the spectrograms take as much time to generate as the audio is long (e.g. a 1 minute audio clip would take 1 minute to generate a spectrogram) but it’s something.

This is using a library I found called spectrogram.js but I’m definitely going to try to put together a better script from scratch at some point!

https://greasyfork.org/en/scripts/482904-inaturalist-spectrogram

2 Likes