Automatically add a spectrogram view to recorded audio like eBird. It makes for much more accurate bird identifications. There are vocal characteristics visible in spectrograms that are difficult to pick out by ear.
Thanks for posting, @jeffhines! I couldn’t agree more that iNat would really benefit from spectrogram support. There is another thread where we discussed audio support in general. I believe the first step is to make it easier to make field recordings and upload them from the app. I would love it if there were an evolution toward a native iNat recorder with a clipping function and spectrogram generation.
This would make it so that audio observations would start contributing to and benefiting from image recognition identification. The user @mpgranch is currently experimenting with that on iNat.
Great to hear that it’s being worked towards, thanks for letting me know!
In the meantime, my setup for sounds is to use VLC to convert a camera movie file to MP3, and Raven Lite to edit the file and produce a picture of the sound.
For those submitting audio files, it would be useful to edit them to get rid of junk, like people talking, other wildlife calls, and other human-made noises. An experienced person may even try to eliminate the high or low part of the sound (e.g. eliminate low-pitched hums from hydro lines). If iNaturalist doesn’t allow editing of the audio file, users will still need third-party programs like Raven Lite to do it. If an audio file is not edited when necessary, it will be harder for the AI to learn and ID the sound.
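The low-pitched hum removal mentioned above amounts to high-pass filtering. Here is a minimal, purely illustrative sketch of the idea in Python; real editors like Raven Lite or Audacity use far better filters, and the cutoff, sample rate, and signals below are made up for the demo:

```python
# Single-pole RC high-pass filter: attenuates content below cutoff_hz.
# This is only a sketch of the principle behind removing low-frequency
# hum (e.g. mains hum from hydro lines), not a production filter.
import math

def highpass(samples, sample_rate, cutoff_hz):
    """Return a high-pass-filtered copy of a list of float samples."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        # y[i] = alpha * (y[i-1] + x[i] - x[i-1])
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# Demo: a 60 Hz "hum" is strongly attenuated by a 300 Hz cutoff,
# while a 2 kHz bird-like tone passes through mostly unchanged.
rate = 8000
hum = [math.sin(2 * math.pi * 60 * t / rate) for t in range(rate)]
tone = [math.sin(2 * math.pi * 2000 * t / rate) for t in range(rate)]
filtered_hum = highpass(hum, rate, 300)
filtered_tone = highpass(tone, rate, 300)
```

A single-pole filter like this rolls off gently; for real hum removal you would reach for an editor’s filter or a notch at the mains frequency, but the principle is the same.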
Audio people have traditionally set a high bar for technical quality, but iNaturalist is about documentation. I have just been using the very basic Voice Memos app that is native on iOS and records as MP4. It is easy to use and easy to edit out the garbage and blanks (no sophisticated filters). It displays a spectrogram, but I don’t know how to export that. It also records the date (but not the time of day), but it does not export date or location, so I have to make a note or trust memory. I also don’t know how to go directly from the app to iNaturalist; I email the file to myself and submit it via the web interface, but that’s no more trouble than submitting photos from a camera. Can anyone tell me how to export a spectrogram from this app? Sorry if this is stretching the topic, but this seems like the right audience.
I would love this feature as well. eBird has a pretty nice automatic spectrogram upon audio upload. I think xeno-canto does too.
I have uploaded spectrograms as images along with sound files for a few of my observations on iNaturalist. I was just playing around though. I used Audacity (which is free) to make the spectrograms and took a screen capture to save the images.
A couple of examples:
iNat may be able to partner with the Macaulay Library (at Cornell, with eBird) to house and process sound files.
That aside, they have some good how-to documents. I seem to remember reading somewhere in them that unmodified files are best: you can clip for length, but no high-pass or low-pass filters and such.
Is explicit partnering with e.g. BirdNET a possible solution?
Strongly supportive of this feature request, which I expect will make it much more appealing for iNat users to contribute audio recordings. At the moment all my bird song files only go to eBird.
This is one of several things I’d love to do with audio, mostly because it might stop people from uploading spectrograms as observation photos, which drives me right up the wall. I’m not committing to actually doing this any time soon, but I did spend a bit of time today learning about spectrograms and exploring what’s possible, and I’m not feeling great about what I found. I can make an OK, sort-of-eBird-like spectrogram of a WAV like this quail I recorded using
sox quail.wav -n spectrogram -mlar -o quail-spectrogram.png
That’s not so bad. I can use something like

ffprobe -v quiet -print_format json -show_streams quail.wav

to get metadata like the sample rate (44,100 Hz) and duration (20.093968 s) to infer that the height is 22,050 Hz and the width is 20.093968 s, so I could theoretically annotate it correctly. I could even do what eBird does and crop it so we only show 0-10 kHz.
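Turning that metadata into axis labels is straightforward. A sketch, where the JSON string is a trimmed, hypothetical example of `ffprobe -print_format json -show_streams` output (the field names match what ffprobe actually emits):

```python
# Derive spectrogram axis extents from ffprobe's JSON:
# the y axis tops out at the Nyquist frequency (sample_rate / 2)
# and the x axis spans the stream's full duration.
import json

# Trimmed, hypothetical ffprobe output for the quail recording above.
ffprobe_output = """
{"streams": [{"codec_type": "audio",
              "sample_rate": "44100",
              "duration": "20.093968"}]}
"""

def spectrogram_axes(probe_json):
    """Return (max_freq_hz, duration_s) for the first audio stream."""
    stream = next(s for s in json.loads(probe_json)["streams"]
                  if s["codec_type"] == "audio")
    # ffprobe reports these fields as strings, so convert them.
    return int(stream["sample_rate"]) / 2, float(stream["duration"])

nyquist, duration = spectrogram_axes(ffprobe_output)
# nyquist is 22,050 Hz and duration ~20.09 s, matching the numbers above
```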
That kind of approach will probably work fine for things that people can actually hear and I’m sure we could make some kind of player like the one eBird has.
However, bats complicate this. For example, taking the same approach with this bat recording with a sample rate of 384,000 Hz, I get this (I’ve left in the axes etc. that sox adds here):
That presents two problems:
- There’s clearly no data above ~65 kHz, but how can I know that programmatically so I could automatically crop it to a range that has data? I’m not seeing that in the ffprobe output, but maybe I’m missing something? Or is there a better command-line tool for this?
- Given the spectrogram provided with that observation, it seems like relevant, identifiable sound happens in ~10 millisecond spans. To really see that properly, I need to make the x scale something like 1000 px / second or more, yielding some extremely large images, way larger than warranted for sounds humans can perceive.
I could probably write something custom to deal with 1 (though it would probably be less performant), or maybe there’s another command-line solution. 2 seems much trickier. How are we supposed to know what the x scale should be?
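On problem 1, one rough programmatic approach: compute a magnitude spectrum and take the highest frequency bin whose energy clears some fraction of the peak, then crop to that. The sketch below uses a naive DFT on a synthetic two-tone signal so it stands alone; a real version would FFT windows of the actual WAV (e.g. with numpy/scipy) and take the max over windows. The signal, threshold, and function are illustrative assumptions, not tested tooling:

```python
# Answer "is there any data above X Hz?" by scanning a magnitude
# spectrum for the highest bin whose energy is at least some fraction
# of the peak. Naive DFT for self-containedness; use an FFT for real files.
import math

def top_frequency(samples, sample_rate, threshold=0.05):
    """Highest frequency (Hz) whose magnitude is >= threshold * peak."""
    n = len(samples)
    mags = []
    for k in range(n // 2 + 1):  # bins from DC up to Nyquist
        re = sum(s * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    peak = max(mags)
    top_bin = max(k for k, m in enumerate(mags) if m >= threshold * peak)
    return top_bin * sample_rate / n

# Synthetic "bat-like" clip sampled at 384 kHz, with energy at
# 24 kHz and 60 kHz (frequencies chosen to land exactly on DFT bins).
rate, n = 384_000, 256
sig = [math.sin(2 * math.pi * 24_000 * i / rate) +
       0.5 * math.sin(2 * math.pi * 60_000 * i / rate) for i in range(n)]
# top_frequency(sig, rate) reports 60,000 Hz: nothing above that bin.
```

Applied to the bat recording above, this kind of check would presumably report energy only up to ~65 kHz, which could drive an automatic y-axis crop.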
And then there are observations like this one that include a spectrogram (boo), a raw WAV recording (though not at the sample rate claimed), and an altered version of the WAV file meant to be hearable by humans. Even if we had taxon-specific info about an appropriate x scale and a relevant frequency range to crop to, one of these files would get a messed-up spectrogram, or we’d just show the spectrogram as is, with lots of empty space and/or a non-diagnostic x scale. Allowing the user to choose these things just seems way too fussy to contemplate just to facilitate ultrasonic, really fast vocalizations like bats make (are there other organisms that do things like this that people record?).
Anyway, that’s all a long way of saying I tried some stuff and while I’m not writing this off, I did learn that it’s complicated. Haven’t even looked into what this looks like for non-WAV files.
Ken-ichi, I don’t understand why an uploaded spectrogram is such an annoyance for you…?
Thank you for doing all that research!
Yes, I’ve done this on a number of occasions. What’s the objection?
eta: I’ve done even worse. I used a smaller segment from a longer sound file as the example for the spectrogram to get around the (2) x axis problem.
Observation photos are intended and assumed to be photographic evidence for the recent presence of an organism, i.e. they should communicate what you saw in the field. Not spectrograms, not habitat shots, not pictures of the sky to show what the weather was like, not photos of photos, just actual photos that show someone what you saw, and hopefully look like what others might see when seeing similar evidence for the recent presence of the same taxon.

We make that assumption when showing observations photos on the taxon page, when training our computer vision system, when sharing data with partners like GBIF, etc., and all those non-organism shots break that assumption and cause us to use and share inaccurate information (we claim something is a photo of an organism when it’s actually a spectrogram). If at some point we support some way to categorize observation photos or support some other form of ancillary photographic material to be attached to an obs, then that stuff would be ok, but at present we don’t. I realize tracks & signs screw that up and I admit my tolerance for them is a lot higher than it is for spectrograms, but I think that’s b/c they at least show something unique about the organism that helps others learn to recognize it in person (“but what about microscopy” etc etc).

Spectrograms are great evidence and really interesting (as are habitat shots, microscopy, most of the other kinds of images that people upload as obs photos), but if we’re not going to distinguish them from photos of organisms then I don’t think people should upload them. Maybe post them elsewhere and embed them in the description or a comment or something.
I’m not sure how to square that attitude with the, well, mantra of ‘connecting people with nature’, and any scientific data being a welcome side benefit. You have expressed this yourself.
So far, my attitude towards observations (others as well as mine) has been that it’s OK to show evidence of any kind. There has been discussion of drawings as you know. If this creates problems for the AI, then the AI should be improved. (Not knowing too much about the technical side of that, my hunch is that it should be easy for the AI to distinguish a spectrogram from a photo.)
But if this is the official position we’ll just have to deal. I won’t go as far as telling others not to upload spectrograms, but I can set my own spectrogram-as-picture observations as ‘no evidence.’ Is this the preferred course of action?
Okay. I don’t understand the aversion to some waves though. Photos are representations of an organism in light waves, and spectrograms are light-wave representations of sound waves. All the waves describe an organism whether we hear them, see them, feel them or are completely unaware of them because human senses can’t detect them. They vibrate nonetheless, and it’s all the same ‘stuff’ :-)
I didn’t say it was an official position. It is my opinion. If we on staff thought this was a serious problem we would build tools to support or suppress these kinds of images, or at least include some kind of statement of policy in the FAQ or the Curator’s Guide. I would personally prefer that people not post these kinds of images as observation photos, but currently we don’t have an official position on them.
This convo has gone a bit off the rails. If anyone has input on better ways to programmatically make spectrograms that accommodate all (or at least most) use cases on iNat, including bats, I’m all ears (yuk yuk).
Thanks for clarifying. It’s too easy to take your word for policy. I am happy to see you’re looking into this.
Back on the original topic, it looks like you’d be all set for audible sound files (max. 44.1 kHz sample rate). You could defer handling of higher sample rates until the tools become available. Zoomable/scrollable axes might be nice, if that is an option.
Any progress on this? I’ve recently upped my field recording game a little. The iNat app now integrates mostly seamlessly with my audio recording app (RecForge), so most repeating calls can be reliably captured. I’ve also been playing with BirdNET, mentioned above, with satisfying results.
I felt the pain of @kueda when I first saw spectrograms, but learned to appreciate the hack. In the absence of audio recognition, spectrograms can train the image recognition… IF everyone used the same spectrogram protocol, IF it didn’t de-train normal images, IF it didn’t cause confusion and distress in a community unfamiliar with them, etc.
I’d love to see the progression continue toward more native support of audio observations and unified spectrogrammetry. Search the forum and you’ll see the interest in audio is there. Any movement since the last post here?
I haven’t put any time into solving these problems since I described them, and was sort of hoping folks with more experience processing sound on the command line could suggest a way or tool to do the cropping for 1, and a way to define a single spectrogram format (size, px/second on the x axis, kHz range on the y axis, etc.) for all sounds for 2.
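For 2, one starting point might be a simple rule keyed to the file’s sample rate. The cutoffs and scales below are placeholders I made up to show the shape of such a rule, not a proposal of actual values: audible-range recordings get an eBird-like 0-10 kHz view at a modest x scale, while ultrasonic recordings (bats) get the full frequency range and a much finer x scale so millisecond-span calls stay visible:

```python
# Pick spectrogram display parameters from the sample rate alone.
# All numbers here are made-up placeholders for illustration.
def spectrogram_params(sample_rate_hz):
    """Return (y_max_hz, px_per_second) for a given sample rate."""
    nyquist = sample_rate_hz / 2
    if nyquist <= 24_000:
        # Typical 44.1/48 kHz audible recording: crop like eBird does.
        return min(nyquist, 10_000), 100
    # Ultrasonic recording: show everything, zoom the x axis way in.
    return nyquist, 1000

# The quail (44.1 kHz) would render as 0-10 kHz at 100 px/s;
# the bat (384 kHz) as 0-192 kHz at 1000 px/s.
```

A per-taxon table could later override these defaults, but even a crude rule like this would keep the output deterministic without asking uploaders to choose anything.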
Thanks, @kueda, I know you guys have plenty to work on :) I also hope someone can help out.
Not sure where to recruit from within the iNat community, but maybe there would be some willingness to collaborate from the folks at the Cornell Lab of Ornithology or the Macaulay Library? Maybe Wildlife Acoustics, a for-profit company, would be willing to help out, since it could allow their bat detector products to integrate with iNat? Just a few places to try.