Way to Search for observations faved by specific users?

In the tutorials for making an advanced search in iNat, I can’t find a way to look for observations favourited by specific users. Is there a way to do this? I’d mainly like to see stats on my faves that are a bit more in-depth than what’s on the faves page. Thanks in advance!

It’s in a format like this: www.inaturalist.org/faves/(username), so for mine it would be www.inaturalist.org/faves/minhkhang, but I rarely fave stuff, so don’t ask me why it’s so short :grinning:


By the way, welcome to the forum!

yeah i know that but i want to see it in explore mode so i can see things like the list of species and how many obs. of each, and be able to see a map of all the faves instead of just one page at a time.

Then idk

You can see them on a map from that view (kind of top left). But yes, I would like to be able to view favourites in something more like explore view, where you could filter them in various ways.


if you want to generate various stats on a set of faved observations, you might want to put them into a traditional project.

the only other way i can think of to work with that kind of set would involve at least a little bit of coding. the general process would be:

  • scrape the ids of the observations from the faves page
  • get observation details based on that list of ids (probably using the iNat API; see the sketch after this list)
  • analyze your extracted set of observation details as you like
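
to give a sense of step 2, here’s a rough Python sketch that pulls observation details in batches using the iNat API’s /v1/observations endpoint. the function name and the ids in the example are just placeholders:

    # rough sketch: fetch observation details for a list of faved observation ids
    # via the iNat API (api.inaturalist.org/v1/observations). ids below are placeholders.
    import requests

    def get_observations(obs_ids, per_request=200):
        """fetch observation records in batches of up to 200 (the API's per-page limit)."""
        results = []
        for i in range(0, len(obs_ids), per_request):
            batch = obs_ids[i:i + per_request]
            resp = requests.get(
                "https://api.inaturalist.org/v1/observations",
                params={
                    "id": ",".join(str(x) for x in batch),
                    "per_page": per_request,
                },
            )
            resp.raise_for_status()
            results.extend(resp.json()["results"])
        return results

    # example: print the taxon name for each observation in a (placeholder) list of ids
    for obs in get_observations([12345678, 23456789]):
        taxon = obs.get("taxon") or {}
        print(obs["id"], taxon.get("name"))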

yeah, but the map only shows one page at a time, so you can’t see every fave on the map, which is kind of annoying

i thought about doing a project but i just feel like it would be a little bit annoying for everyone involved if i went about doing it that way.

I doubt anyone will notice your project unless you make an announcement about it, so don’t worry about annoying people.


i was thinking about this again (in the context of https://forum.inaturalist.org/t/how-to-use-inaturalists-search-urls-wiki-part-2-of-2/18792/52), and i think there’s a variant of this where you could scrape the ids of observations from the faves page and use that list as the basis for an &id=[list] filter in the Explore page. i’m not sure what the maximum number of ids you can include in the &id= filter parameter is, but i just tested with a list of 751 observation ids, and it returned 751 observations in the Explore page. whatever that maximum is will be the limiting factor for this approach.
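
just to illustrate, building that kind of Explore URL from a list of ids is basically a string join (the ids below are placeholders; the practical limit is how long a URL the browser and server will accept):

    # rough sketch: turn a list of faved observation ids into an Explore URL.
    # the ids below are placeholders; very long lists may need to be split
    # once the URL gets longer than the browser/server will accept.
    obs_ids = [12345678, 23456789, 34567890]
    url = "https://www.inaturalist.org/observations?id=" + ",".join(str(x) for x in obs_ids)
    print(url)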

if you have Windows 11 (and maybe it works in 10, too?), i think the easiest way to scrape a web page is to use the (Microsoft) Power Automate app. it should already be on your system, but you have to initialize it and associate it with your Microsoft account. it’ll also download extensions for Firefox, Chrome, and Edge, and you’ll want to enable the extension in the browser you plan to use for web scraping. from there:

  1. open Power Automate
  2. create a New Flow. (Name it whatever you like.)
  3. add a new Action: Browser Automation > Launch new Firefox / Chrome / Edge. (pick the action for your preferred browser.) in the Initial URL field, input the URL for the favorites page of a given user (ex. https://www.inaturalist.org/faves/kueda). leave all the other fields unchanged, and save.
  4. now you’ll have to create a data extraction action. this is a little tricky because the faves page can have a somewhat inconsistent structure. so i’ll make it easy for you. just copy the code snippet below and paste it into the “Main” flow tab in your flow edit window in Power Automate:
    WebAutomation.ExtractData.ExtractListUsingPagingFromAllPagesInExcel BrowserInstance: Browser Control: $'''html > body > div:eq(0) > div:eq(1) > div > div:eq(1) > div > div:eq(1) > div''' ExtractionParameters: {[$'''div:eq(1) > div > a:eq(0)''', $'''Href''', $'''%''%'''] } PagerCssSelector: $'''div[id=\"wrapper\"] > div > div:eq(1) > div > div:eq(2) > a[class=\"next_page\"], html > body > div:eq(0) > div:eq(1) > div > div:eq(1) > div > div:eq(2) > a[class=\"next_page\"]''' ExcelInstance=> ExcelInstance
    (that should create a new “Extract data from web page” action which will iterate through each of the observations on the favorites page, then repeat for each next page, extracting the URL of each of the observations and outputting the list into an Excel spreadsheet.)
  5. if you created everything correctly, you should have 2 actions defined in your flow, similar to the screenshot below:
  6. at this point, you can save, and then run the flow. a successful run should open up a new browser window, cycle through all the faves for a user, and then display the list of observations in a new Excel spreadsheet. here’s a sample of the output:

note that you can modify the flow as you like. for example, instead of opening up the results in Excel, you could edit the Extract action to output the data into a variable. from there, you have many options.

just for example, instead of using the above code to create the second action, you could create some actions that parse the list and copy a comma-separated list of fave observation ids to your clipboard:

WebAutomation.ExtractData.ExtractListUsingPagingFromAllPages BrowserInstance: Browser Control: $'''html > body > div:eq(0) > div:eq(1) > div > div:eq(1) > div > div:eq(1) > div''' ExtractionParameters: {[$'''div:eq(1) > div > a:eq(0)''', $'''Href''', $'''%''%'''] } PagerCssSelector: $'''div[id=\"wrapper\"] > div > div:eq(1) > div > div:eq(2) > a[class=\"next_page\"], html > body > div:eq(0) > div:eq(1) > div > div:eq(1) > div > div:eq(2) > a[class=\"next_page\"]''' ExtractedData=> DataFromWebPage
Variables.CreateNewList List=> listOutput
LOOP FOREACH CurrentItem IN DataFromWebPage
    Text.GetSubtext.GetSubtextFrom Text: CurrentItem CharacterPosition: 41 Subtext=> Subtext
    Variables.AddItemToList Item: Subtext List: listOutput NewList=> listOutput
END
Text.JoinText.JoinWithCustomDelimiter List: listOutput CustomDelimiter: $''',''' Result=> JoinedText
Clipboard.SetText Text: JoinedText

(if you have questions, feel free to ask.)

also, here’s a good video that covers basic web scraping using Power Automate for desktop: https://www.youtube.com/watch?v=DgBZiBIgh3w.
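
and if you’re not on Windows (or you just prefer a script), here’s a rough Python sketch of the same scraping idea. it assumes the faves page links to observations as /observations/<id> and pages via a ?page= parameter, so you may need to adjust it if the page structure changes:

    # rough sketch: scrape the observation ids from a user's faves page with Python.
    # assumes /observations/<id> links on the page and a ?page= parameter for paging;
    # the username below is just the example user from earlier in this thread.
    import re
    import requests

    def get_faved_observation_ids(username, max_pages=100):
        ids = []
        for page in range(1, max_pages + 1):
            resp = requests.get(
                f"https://www.inaturalist.org/faves/{username}",
                params={"page": page},
            )
            resp.raise_for_status()
            page_ids = re.findall(r"/observations/(\d+)", resp.text)
            if not page_ids:
                break  # no observation links found, so assume we're past the last page
            ids.extend(page_ids)
        # drop duplicates while keeping order (the same observation can be linked more than once)
        return list(dict.fromkeys(ids))

    print(",".join(get_faved_observation_ids("kueda")))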
