Great, thanks for making these changes!
One of the questions has an incorrect picture: the image is of a caracara, but it's labeled as a Scissor-tailed Flycatcher.
@brennafarrell The site pulls research-grade observations directly from iNat, so this can happen when some of them are inaccurate. Judging by the username, the one you’re referring to came from this observation. Oddly enough, this user has another observation with the same photo, but that one is labeled as a caracara.
This is from an observation of a Scissor-tailed Flycatcher harassing a Caracara. In that particular photo the Flycatcher is almost on the Caracara’s back, in the middle of dive-bombing it. It’s just bad luck that you happened to get a difficult photo.
Oops, I didn’t even read the observation description. Thanks for the clarification, Jeremy!
Love the “more chances” feature! “Get back and try again”, great stuff!
Ok, so this is awesome! Thanks for sharing! Is there any way that it could be tailored to pull from a particular project, rather than all photos from a state or region? I’m wondering whether this could be used for quizzing students for a class, but based on a small subset of species that they’re expected to know.
Welcome to the forum! I hope your request can be done, because I would be interested in that too.
@bug_girl @petervanzandt Definitely! I had thought about that before but didn’t think people would be that interested; I’m glad to have been proven wrong. I’ll build that in and let you know once it’s ready.
Ok, that was impressive - thanks! Unfortunately, I’ve tried this a few times and the results are inconsistent in that it works with some projects and not with others. This isn’t a problem for me, but I don’t understand why it doesn’t always work. I’m able to do this with a small class project of mine (BSCbot20), but not with another project (Moths of Alabama). I’m a member of both of these projects, and if anything I would’ve thought that the class project (BSCbot20) wouldn’t work because it’s so small. For the Moths of Alabama project, (and for other large public projects, like national moth week 2018: Alabama), I get to the “your quiz is ready” screen, but after I select “start”, I get a screen that says, “We couldn’t find any taxa to build this quiz with”. Any guesses on why this is glitching?
Here’s a related question: For BSCbot20 (currently only has 25 observations), the quiz only had 4 questions, while for the Flora of North Alabama project (>57,000 observations), the quiz generates 10 questions. Is the number of questions in the quiz based on the size of the project? This seems logical, but I just wanted to check.
Thanks so much for tailoring this for projects!
This could be really useful, I think, if you keep developing it.
As you / others say, I would love to see more niche taxa.
A quiz just on hoverflies, for example, would be a really helpful learning / reminding tool.
Also, the option to make it more difficult would be great,
e.g. by adding more photos to choose from and decreasing the chances of random guesswork.
…this could also just be different levels within the game as you continue.
I wonder if it could also work in reverse, to flag up mistaken identities within iNat…
or to further gamify the existing ID system and encourage more people to take part in identification.
It could also feed into an informal badge system within iNat to show user knowledge…
A multiplayer game would also be great; it would be cool to pitch your skills against others…
Loads of potential!
Great stuff, look forward to seeing how this is developed :)
Hi @petervanzandt , thanks for the feedback. I took a look and fixed the inconsistent project taxa results.
I just tried it with ‘Moths of Alabama’ and it should be good now, but please try it and let me know!
To answer your other questions:
Any guesses on why this is glitching?
As of right now, ‘Your quiz is ready’ always appears whenever a quiz is built without error. It would be better to show a different message when no taxa were found and the “quiz” that was built is actually empty (0 questions); that way the user would never be prompted to click ‘start’ in the first place, and would instead be asked to try another query. Thanks for pointing that out. I will make that change.
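In other words, the fix is just a guard on the question count before choosing which screen to show. A minimal sketch (the function and message strings here are illustrative, not the app’s actual code):

```python
def post_build_message(questions):
    """Pick the screen shown after a quiz build attempt.

    If the build produced zero questions, prompt the user to try
    another query instead of offering a 'start' button.
    """
    if not questions:
        return "We couldn't find any taxa to build this quiz with. Try another query."
    return "Your quiz is ready"
```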
Is the number of questions in the quiz based on the size of the project?
Yes. Based on the taxa found by the query, the quiz builds up to 10 questions, with 4 taxa per question.
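A rough sketch of how that assembly could work, assuming the taxa are split into disjoint groups of four (one correct answer plus three distractors per question). This is just my reading of the behavior described above, not necessarily the real implementation:

```python
import random

def build_quiz(taxa, max_questions=10, taxa_per_question=4):
    """Assemble up to max_questions questions from a pool of taxa.

    Each question uses taxa_per_question distinct taxa: one is the
    correct answer, the rest serve as distractors. Small pools
    naturally yield fewer questions.
    """
    pool = list(taxa)
    random.shuffle(pool)
    # Split the shuffled pool into disjoint groups; each full group
    # becomes one question.
    groups = [pool[i:i + taxa_per_question]
              for i in range(0, len(pool), taxa_per_question)]
    questions = []
    for group in groups[:max_questions]:
        if len(group) < taxa_per_question:
            break  # not enough taxa left for a full question
        answer = random.choice(group)
        questions.append({"answer": answer, "choices": group})
    return questions
```

Under this scheme a project with 50+ research-grade taxa would always fill all 10 questions, while a tiny project caps out much earlier.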
I took a look at BSCbot20: 23 species have been observed, but only 14 of them have research-grade observations, which the quiz is set up to filter for.
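For context, that filter can be as simple as keeping only observations whose quality grade is “research” before counting species (`quality_grade` is a real field on iNat observation records; the helper below and its simplified dict shape are my own sketch):

```python
def research_grade_species(observations):
    """Return the set of species backed by at least one
    research-grade observation.

    Each observation is assumed to be a dict with 'species' and
    'quality_grade' keys (simplified from the iNaturalist API's
    observation JSON).
    """
    return {obs["species"]
            for obs in observations
            if obs["quality_grade"] == "research"}
```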
Fantastic! It works like a charm now. Thanks so much!
@sbushes I appreciate it - that means a lot. I’d love to implement the additional options you mentioned. I’d need to figure out how to do it cleanly, because I don’t want to clutter the screen with so many options that users get overwhelmed. Maybe a separate ‘advanced options’ screen would do the trick. I’ll think of something. Thanks!
As for flagging mistaken identities: I love that idea. On a related note, a professor I showed this to recently noticed that some of the plant species were correctly identified but hadn’t been labeled as ‘cultivated’. I’m sure the ability to flag those is something he would appreciate.
All of these are awesome suggestions :)
This is amazing, I love it!
welcome to the forum :)