It isn’t a native Lightroom function, but I use the excellent Search Replace Transfer plugin by John Beardsworth to batch copy the altitude field to the caption field which maps to the Notes field in iNaturalist. I also use this plugin for a whole lot more “twiddling” with metadata and would be lost without it. Can’t recommend it highly enough.
Do the iNat devs have a framework to help decide when something should be an annotation or geodata input instead of an observation field? Because a proliferation of observation fields for the same data point is the sort of thing that should spark that conversation. The number of observation fields shows that observers are interested in logging it, but anyone who wants to consume it has to figure out what all the possible fields are, query them and then normalize the data. It would be more efficient to enforce the normalization at input time. (Whether the devs have higher priorities is a separate, but worthwhile, debate.)
This discussion about when something shouldn’t be relegated to observation fields is relevant to more than elevation data. Depth has the same problem, but without the workaround of mapping GPS coordinates to DEM files. It also applies to my longstanding pain point of simply trying to find in-situ observations of marine species. There’s no first class way to filter out beach finds and fishing catches. (I looked at so many empty shells trying to find whelk photos last week…) It would be trivial to tag these observations at upload time if there was an agreed upon way to do so.
When using elevation data for research, I would think the first thing to check would be the accuracy of the lat/long. Less than 30% of observations indicate an accuracy of 10 meters or less, and many observations do not indicate accuracy at all. Google uses an "auxiliary sphere"; this will get you close most anywhere.
There are many different coordinate systems, so you may have to convert the data. Some coordinate systems are designed to give you more accuracy for a specific area (like a continent or state) but do not project well on other parts of the earth.
For an example, go to: https://www.esri.com/arcgis-blog/products/arcgis-pro/mapping/gcs_vs_pcs/
and scroll down a short way to the photo under the title “Where: Geographic Coordinate Systems”.
I think the best we can do is give the most precise location possible with the best accuracy possible, and let the researcher decide how to use it.
If you put your coordinates in Google Earth it will show you the elevation on the lower right.
Living in an area where altitude may vary by 500-1000m within 200m of horizontal distance, and where species may be confined to zones within this range, one appreciates the significance and importance of altitude data.
However, using DEMs to calculate altitude is fraught with problems of location accuracy. When a 100m latitude-longitude uncertainty can translate into a 1000m altitude difference on the DEM, it means that computed altitude is most useless where it is most important.
Even the worst GPS altitude data is far more accurate, rarely being out by more than 100m, except on steep cliffs and overhangs, probably because the GPS integrates recent data as it determines position.
Having observation fields for altitude is great, but they are laborious to fill in. An automated way of getting data from the EXIF into iNat would be useful, as would a filter for extracting records above, say, 1000m, 2000m or 3000m, or below 100m.
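To illustrate the EXIF half of that: GPS altitude lives in the image's GPS IFD as a GPSAltitude rational plus a GPSAltitudeRef byte (0 = above sea level, 1 = below), which are standard EXIF tags. A minimal sketch of the conversion in Python (how you read the tags out of a file, e.g. with Pillow, is left aside):

```python
def exif_altitude_m(gps_altitude, gps_altitude_ref=0):
    """Convert EXIF GPSAltitude (a numerator/denominator rational) and
    GPSAltitudeRef (0 = above sea level, 1 = below) to signed metres."""
    numerator, denominator = gps_altitude
    metres = numerator / denominator
    return -metres if gps_altitude_ref == 1 else metres

# A camera that stored altitude as the rational 12345/10, above sea level:
print(exif_altitude_m((12345, 10)))   # 1234.5
# Below sea level, ref byte = 1:
print(exif_altitude_m((855, 10), 1))  # -85.5
```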
Even given the inaccuracies of GPS altitude data, it is extremely useful in weeding out (and even locating) inaccurate localities - especially for higher altitude observations which are usually where rarer, more localized species occur.
Exactly the problem!
That is an accurate statement of the problem, not a comforting statement about how to deal with this characteristic of observations.
That almost sounds like a feature request waiting to happen. Here on the Forums, when you start a new thread, a popup tells you if there are other threads that the algorithm thinks are similar. A similar popup when creating observation fields could encourage people to use existing ones if they are relevant.
I worked through the process for adding elevation in a database downloaded from iNaturalist. In case this is useful to others, this is my process. Note that it requires a subscription to Esri ArcGIS.
iNaturalist does not record elevation of observations, so it is necessary to intersect observation points with a Digital Elevation Model in GIS:
- Export observations from iNaturalist and process in Excel.
- Add observations to ArcGIS Pro and display using their XY coordinates.
- Connect to elevation in the Esri AGOL Living Atlas: Ground Surface Elevation - 30m (image service / raster DEM).
- Use the geoprocessing tool Extract Multi Values to Points (note that you need a Spatial Analyst license for this step) to add elevation.
- Copy the resulting data back to Excel.
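For anyone without an ArcGIS licence, the heart of the Extract Multi Values to Points step is just a nearest-cell lookup against the DEM raster. A toy sketch in plain Python, assuming a north-up grid with a uniform cell size (the 3×3 "DEM" and its origin here are made-up values):

```python
def sample_dem(dem, x_origin, y_origin, cell_size, points):
    """Nearest-cell elevation lookup on a north-up DEM.
    dem: rows of elevations, row 0 along the northern edge;
    (x_origin, y_origin) is the top-left corner of the grid."""
    elevations = []
    for x, y in points:
        col = int((x - x_origin) / cell_size)
        row = int((y_origin - y) / cell_size)  # y decreases down the rows
        elevations.append(dem[row][col])
    return elevations

# Toy 3x3 grid of 1-degree cells with its top-left corner at (-91, 31):
dem = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(sample_dem(dem, -91.0, 31.0, 1.0, [(-90.5, 30.5), (-88.5, 28.5)]))  # [10, 90]
```

A real workflow would also interpolate between cells and honour the raster's nodata value, but the indexing above is the core of the intersection.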
since your data is based on USGS’s North America elevation dataset, if you’re just trying to get this data into a tabular format in Excel, it might be easier to get it using the USGS’s Elevation API.
this explains how to get results from that API in Excel: https://forum.inaturalist.org/t/an-optimized-workflow-to-determine-the-altitude-of-an-observation/17465/6; however, the API has changed a bit since that post, and you would now use these formulas:
- in C1:
=WEBSERVICE("https://epqs.nationalmap.gov/v1/xml?y="&A1&"&x="&B1&"&units=Feet&")
- in D1:
=MID($C1,FIND("<value>",$C1)+7,FIND("</value>",$C1)-FIND("<value>",$C1)-7)
here’s an example of what a request to that API would return: https://epqs.nationalmap.gov/v1/xml?y=30&x=-90&units=Feet
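if you’d rather script it than run it in a spreadsheet, the same `<value>` extraction the MID/FIND formula performs can be sketched in Python (`elevation_ft` needs network access, so only the parsing is demonstrated below):

```python
import re
import urllib.request

# the same v1 XML endpoint the spreadsheet formula calls
EPQS_URL = "https://epqs.nationalmap.gov/v1/xml?y={lat}&x={lon}&units=Feet"

def extract_value(xml_text):
    """pull the elevation out of the EPQS XML response -- the same
    <value>...</value> extraction the MID/FIND formula performs."""
    match = re.search(r"<value>(.*?)</value>", xml_text)
    return float(match.group(1)) if match else None

def elevation_ft(lat, lon):
    """query the EPQS service for a single point (network required)."""
    with urllib.request.urlopen(EPQS_URL.format(lat=lat, lon=lon)) as resp:
        return extract_value(resp.read().decode())

# parsing works on a canned response without hitting the network:
print(extract_value("<result><value>1.720559</value></result>"))  # 1.720559
```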
Thank you! That is very helpful.
Also, I forgot to mention that it is important for this analysis to remove records with low geospatial accuracy and any records that have obscured locations.
you can do this in the URL filters when you download / extract your observations:
- removing obscured and private observations:
&geoprivacy=open&taxon_geoprivacy=open
- keeping only observations with accuracy below x meters:
acc_below=x
or acc_below_or_unknown=x
, depending on how you want to treat observations without accuracy recorded.
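putting those together, here’s one way to build such a query URL in Python against the iNaturalist API observations endpoint (the taxon_id below is just a placeholder):

```python
from urllib.parse import urlencode

BASE = "https://api.inaturalist.org/v1/observations"

def elevation_ready_query(taxon_id, max_accuracy_m, include_unknown_accuracy=False):
    """build an observation query that drops obscured/private locations and
    anything with horizontal accuracy worse than max_accuracy_m."""
    params = {
        "taxon_id": taxon_id,
        "geoprivacy": "open",
        "taxon_geoprivacy": "open",
    }
    # choose how observations without a recorded accuracy are treated
    acc_key = "acc_below_or_unknown" if include_unknown_accuracy else "acc_below"
    params[acc_key] = max_accuracy_m
    return BASE + "?" + urlencode(params)

print(elevation_ready_query(12345, 30))
```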
This is all great! Thanks @conorflynn and @pisum for these procedures.
Now imagine if iNat made a similar automatic lookup to a DEM when an observation was created or its location edited and stored the resulting calculated elevation and estimated accuracy. The HUGE difference would be that this elevation info would now be available to any identifier looking to search for, say, Sisyrinchium observations in central Mexico above 4000 m (3 candidate species out of about 46).
I’m glad that Conor’s problem was solved, and this approach would work for anyone looking to study a particular scientific question in depth, but the broader benefit for iNat identifiers can only happen when estimated and/or reported elevation data is searchable on the platform itself.
i tend to take the view that elevation from a DEM is not the same as altitude recorded by a user. even if iNaturalist were to record these in the system, i personally would keep these separate. so that means that filtering would not necessarily be a simple filter on one field.
in other words, i think if you’re going to be filtering on a field like this, it should happen outside of the system anyway (because it’s going to be complicated to do anyway).
note that i would have said the same thing about positional accuracy in the system, if i had been involved in the original design of the data model for that. right now, it’s a single field that captures all sorts of values recorded using all sorts of different methods. and because you don’t really know how that value was captured in most cases, to me, it makes that value sort of meaningless in many cases.
I agree 100%. Earlier in this thread I suggested that calculated and recorded elevation data should always be separate fields (possibly with some logic to allow a search to return records with either one).
But this change doesn’t seem massively complex (says the guy who has written no code for 10+ years):
- Add maybe four fields (two each for reported and estimated elevation)
- Write some code to take reported elevation data and accuracy from EXIF in the first image where present (because that’s the way it works for location data)
- Invoke the above code when a new observation is created with a photo
- (Optional) Update the various UIs to allow this reported elevation to be edited
- Write some code to retrieve estimated elevation from a DEM API and store it in fields not editable by the user
- Invoke the above code when a new observation is saved (with a sufficiently precise location) and when the location is edited
- Update the observation and identify search functionality to support URL parameters for reported and estimated elevation
- Run a slow background process to call the DEM API and add elevation data to all of the existing 200 million observations with sufficient location accuracy. (At 7 updates per second, this would take about a year to complete.)
- (Optional) Update the search functionality to support a combined either/or elevation search
- (Optional) Update the Identify and Explore UIs to expose the elevation search capability
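The "about a year" figure for the backfill step is easy to sanity-check:

```python
observations = 200_000_000
per_second = 7
days = observations / per_second / 86_400  # 86,400 seconds per day
print(round(days))  # 331 -- i.e. roughly a year, as estimated
```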
I can see that the UI changes in tasks 4 and 10 would need careful planning. I’m aware that choosing the DEM might be a challenge: it needs to be free, reliable, fairly accurate and ideally global. But iNat could implement this in stages if the criteria were only met for parts of the world.
To me, elevation data seem very similar in concept to state/province/region/county names, which are calculated and stored in the observation based on similar lookups.
just to clarify, i wasn’t really talking about the challenge of sourcing the DEM information. i was assuming that that challenge would have already been solved and was talking more about actually filtering against data.
some folks might want to filter solely against the DEM elevations.
some folks might want to filter solely against the user altitudes.
some folks might want to filter using a combination of the two, prioritizing user altitudes.
then you throw in challenges like what to do if elevation data doesn’t exist in either case. do you keep that data or drop it? or do you offer the user the option to do one or the other?
for stuff like this, i tend to think that it’s not worth it to implement lots of functionality to handle lots of different options. my view is that folks who need to do this need to just take on the complexity on their own.
if you were to implement all this in the system, though, i would think it would need to be done in a simplified way like they did for accuracy, which i guess is fine if you view the world from a better-than-nothing perspective, but which from my perspective is sort of a waste of effort and resources.
One could say the same thing about the lat-long coordinates themselves. They can be recorded by the various devices used to take photographs, or on a separate device, or mapped manually on the various base maps, or guessed at 40 years hence, or…? To me, having the positional accuracy field actually helps to mitigate the wide variation in how horizontal coordinates are captured (at least when observers know about and use that field as intended). I’d think the same considerations would apply to capturing the vertical coordinate.