How do iNatters use AI?

It’s usually the new small companies that adopt new technology, not the old large ones.

Well, for years I ghostwrote texts for money on basically any subject, whether I had a real clue or not, and that opened my eyes early on (it must have been around 2006-ish) to being very cautious about almost any information you read online (and a lot offline as well, actually). This line of paid work is now almost dead, with AI taking over the task… but it’s not as if the same task wasn’t being done before, with similarly untrustworthy results.

4 Likes

Very true.
Especially if those companies want to churn out dozens of mediocre-to-bad web-only books, songs, videos, websites, or customer platforms per day, with the same business model as traditional spam e-mail.
Or if the business case is to create content solely to generate the threshold number of clicks on Facebook that Meta will then pay out for.
All perfect material for back-feeding AI with its own slop!
But once standards drop low enough, I agree it will try to bite its way into more serious business, provided it still functions the way we know it today.
Did I mention the movie “Idiocracy” in a previous post?
Watch and enjoy!
“The Simpsons” was the past oracle.
My bet is that “Idiocracy” is the new one!

We disagree almost completely, but I appreciate the response. :)

I have fundamental ethical problems with LLMs, but IMO the worst are AI summaries. Search engines crawl the web, learn from copyrighted content, and then give you an AI summary designed to keep you on their platform so they can show you more advertising - all while starving the hardworking humans who created that content of the visitors who fund them.

I refuse to use them under any circumstances. On the other hand, we’ve been using the same fundamental technology in basic machine-learning models for decades to produce useful outputs. It’s not so much the technology, but the way it’s being trained and exploited for LLMs and generative AI that I have a problem with.

19 Likes

I disagree. Many times I ask for the summary to corroborate the ID I receive from the CV. The summaries list their sources, and I can read the firsthand sources if I want. The summary helps me understand what the CV is suggesting. The CV is informed (the way I understand it) by the IDers here, which is fine… but it means the experts here are feeding data into what the CV spits out. If anything… if I want corroboration… I should ask another engine.

My most recent use of an AI summary to understand what the CV might be looking at:
“diagnostic traits of tantalus sphinx”

I’m sure it’s easy to ID if you already know something about it, which I don’t. I could use the wiki for the taxon, but I can’t ask it follow-ups about its sources.

I… don’t. I didn’t need all the “AI” stuff before and I don’t need it now. I have books on the majority of my main subject areas, and I have custom filters set up in uBlock to block the AI summaries from showing up in my search results (works about 80% of the time). To be fair, it has its uses and can definitely be used well, but I prefer to just not go near it - it’s too easy to fall into the trap of letting a predictive model (which is all that AI really is) do all the “thinking” for you.
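For anyone curious, here is a minimal sketch of what such uBlock Origin filters can look like. The selectors below are illustrative guesses, not the exact rules I use; search engines rename these page elements constantly, which is exactly why it only works about 80% of the time:

```
! Hypothetical cosmetic (element-hiding) filters for AI summary blocks.
! Lines starting with "!" are comments; "##" marks a cosmetic rule;
! :has() and :has-text() are uBlock procedural operators. Real selectors
! should be found with the extension's element picker and will go stale
! whenever the search engine changes its markup.
google.*##div:has(h1:has-text(AI Overview))
www.bing.com##.b_ans:has(h2:has-text(Copilot))
```

In practice you don’t hand-write these: you open uBlock’s element picker on the summary block, let it suggest a selector, and save that to “My filters”.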

6 Likes

I avoid using LLMs as they’re wildly unreliable and essentially just fancy text predictors, prone to making serious errors and making things up.

Directed AI that’s focused on a specific purpose (e.g. iNat’s ‘computer vision’, tools like the suite of Topaz plugins for Photoshop, ones that assist in identifying astronomical objects, etc.) I’m perfectly fine with and use.

11 Likes

That’s so interesting - it’s really hard for me to understand where these negative takes are coming from. I love the AI summaries and find them generally reliable and aligned with responses from things like the CV app; at least, every time I check, they seem to corroborate what the CV comes up with.

What’s more, you can ask follow-up questions based on the response.

I doubt that companies like Google still train their AI on the texts you wrote; I really don’t think they’d factor in at all. In fact, my understanding is that LLMs are smart enough to prioritize more recent data, datasets, and studies. If there are currently people polluting the more modern datasets, the problem is with those people, not the technology.

Which bit do you disagree with? Your statements after this aren’t very clear.

3 Likes

Eh, I don’t actively seek it out. I’ve found that the information in the AI-generated summary in searches can be misleading or untrue.

I once looked up whether owning a hognose snake in my state was legal, and the AI summary said “not legal.” Yet in the slightly darker letters right by the “read more” button, the source said, “you cannot own/possess venomous snakes that aren’t native to (my state) except hognoses.” So the AI seems to have no reading comprehension, and I don’t trust it to give me correct information.

And I’m against the use of generative AI for its “art” and for programs like Character AI. I used to use CAI, and I can tell you that it’s a total energy drain and time suck, it’s addictive (like all the little buttons on our phones), and the bots are just stupid.

I’m mostly against it because I am an artist, I surround myself with other artists on social media, and none of us like the AI-generated slop that people are pushing - especially when it puts artists out of work.

Now, I get that it can be beneficial, but we’re pushing its use in the wrong ways (like AI-generated images). It needs to be regulated and developed WAY more. I don’t want AI summarizing my searches unless it can properly convey the search results and NOT tell me that I can’t own a snake that’s legal to own in my state.

Kind of turned into a bit of a ramble, but whatever.

Have a wonderful day/night!! Make sure to hydrate and eat something today!!

5 Likes

I use AI taxon suggestions on iNat. That’s basically it.

I ignore and avoid using the AI-generated search summaries. If I remember to, I’ll include “-ai” in my searches, so it doesn’t waste time/energy generating that. So far I have found zero use for it.
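For example, borrowing a query from earlier in this thread: `diagnostic traits of tantalus sphinx -ai`. As I understand it, the summary gets skipped because `-ai` counts as a search operator; some people instead append `&udm=14` to a Google results URL to force the plain “Web” view, though that parameter is undocumented and could change.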

I have family that uses that, to my chagrin. I consider it to be untrustworthy because it’s frequently inaccurate.

I guess there’s also the “AI” setting on our laundry machines but I’m pretty sure it’s just a fancy name for some sensors, since it’s not even connected to the internet.

2 Likes

I sometimes test the output of LLMs built into browsers (e.g. search assist) before I look at what actual humans say about a topic. Very often the LLM is factually incorrect in some way - which is why I said “sometimes” - and this is especially apparent in areas where I’m confident enough in my prior knowledge of a subject to tell when it hallucinates.

LLMs and large context models (LCMs) are terrible for most of my job/work, and I’m very afraid of forgetting to cite them, so I don’t use them unless absolutely required. I can’t use them to rephrase my words, because I have to reread, revise, and proofread my own writing anyway. I can’t use them to write scripts, because of all the negations and dramatic pauses they incorporate by default; they don’t respond well to prompts asking them to sound like a younger or “less professional” person wrote the text. And I can’t use LCMs to process my data, because I’m very afraid they will do it wrong or leak private data, and for my purposes the processing is doable without machine learning… I process the data in order to create machine learning models, so I know that if I used machine learning in preprocessing for anything other than a synthetic data generation model, I’d be doing it wrong.

The jobs I am applying for (seasonal internships involving data science, quality assurance, etc.) will likely require LLMs and LCMs of some sort, but this is probably just for drafting reports and making the language of my non-confidential emails sound similar to the language used in “work culture.” I’m not very good at writing like a tech worker, but I will have to get good at it somehow, and since my future coworkers will all be using it, I might as well run a small, local model to corporate-ify messages if I don’t have a coworker on hand who can vibe-check my writing.

Additionally, Reddit pushing Reddit Answers on the increasingly specific subreddits I frequent is annoying. I do not want to use it, and since the mobile UI now directs users to Reddit Answers instead of the main subreddit page, I am searching for a less-slopified ask-and-answer platform with a similar structure (Quora became dead to me years ago, so that’s not an option). I used Answers once for something related to my school and it was completely incorrect. Never again.

I have not used machine learning to generate non-text content in the past several years, and I never will. It seems useless when creating vector graphics is so easy, and most of my work involves charts and diagrams that practically no free-to-the-user model can replicate without hallucination.

The best use of “artificial intelligence” I’ve found for science is translation. I communicate with non-English-speaking scientists on occasion over the Internet, and I use Google Translate. In high school, when I was very competent in Spanish, I volunteered to correct the grammar of the model’s English-to-Spanish and Spanish-to-English translations in hopes of being helpful, but it seems it was all for naught now that user demand is so high. Learning languages the normal way would objectively be better for communicating with others when there’s no common language, but not a lot of people are interested in wasting their time with Duolingo or some similar “AI-first” company.

We should probably poll iNatters about this.

1 Like

That is not what I wrote or meant.

Just saying: before AI, texts on the internet were not necessarily any more genuine, as they might have been written by paid actors… everything from reviews, forum posts like these, blog posts, how-tos, and product descriptions to even seemingly scientific texts.

edited for language

4 Likes

Back to the original question: maybe the most scientific value I see in today’s everyday interaction with AI is that I have started rethinking my understanding of the term “intelligence”. Humans automatically believe that an entity with language skills is intelligent. This psychological effect has been known for many decades; there is even a name for it, which I have forgotten thanks to my human limitations :joy:.
This effect, in the past of merely theoretical interest, has suddenly gained extreme relevance in everyday life. If I am not alone, and the AI scenario is making many people rethink what intelligence actually is, then it is already of value. Extending the concept to human-human interaction (e.g. my bosses, who manage many scientists), it just confirms that not everything we believe to be intelligent really is.
Language, compliance, loyalty to the employer, and the ability to predict what people want to hear are not, on their own, sufficient to drive a business intelligently.
Society will have to find other outlets for those skill sets, or simply skip them, because they can obviously already be automated today. Maybe future managers will need more empathic and psychological skills - if the employer still sees a need for managers at all?
Downside: if I now need to spend more time validating information for myself and for/from others, AI makes my work less efficient.

I believe the only point where we can possibly agree is that a disruptive socio-economic change is underway, and that it’s coming VERY fast.

6 Likes

The ‘negative’ takes on LLMs are based on their actual, real-world performance. That they’re massively error-prone and subject to hallucinations is well known and well documented.

As is the fact that they have an adverse effect on memory, reasoning, writing, and more.

Summaries of something like a website or a document are generally less prone to error, since the LLM is limited to what’s in that document; but when LLMs are used as a search engine, which has become extremely common behavior, they often fail badly.

Trusting an LLM to provide accurate information is a guaranteed way to wind up with wildly incorrect information, sometimes lethally so, and general use overall has a negative impact on a number of aspects of cognition.

15 Likes

I think it really depends on how you approach it and what kind of prompts are used. You’re right that telling AI to “design X for me” is often going to yield poor results. But telling AI your own rough ideas for a design and asking “what are the pros and cons of X?” or “please critique X” or “help me workshop this idea” can very often provide valuable assistance and insights that you might not otherwise have had.

3 Likes

That’s a really good point - I hadn’t thought of using them to critique. I feel like it’s really difficult to understand how simple a flow needs to be without actually participating in the process alongside those you are designing the flow for. Careful observation of the people in the process works too… but until AI has even more vision (and mobility)… that would be hard.