Has there been any attempt yet to use recent developments in AI, specifically text-to-image generation, to create “phantom” pictures for the species not pictured? It would be very interesting to have, particularly for the older literature on tropical moth species. Perhaps this would require training the AI on a data set with pictures, so that it could generate the “missing” pictures.
i suspect it would be fine (better than nothing) to use some sort of image generation AI to produce images of organisms that have been described in text but not in photos for some reason, with the following caveats:
you would definitely want to label such photos as AI-generated
you would need someone to select and vouch for the accuracy of any photos generated this way. it would be sort of like creating a police line-up photo based on a witness description. you still need the witness to confirm that the resulting photo actually looks like the suspect.
Andreas, as you well know, the descriptions of many moths in the early literature often leave much to be desired. The written descriptions are sometimes minimal, with subjective accounts of wing patterns and archaic color names. I can’t imagine even the most erudite AI being able to generate a suitable (i.e. somewhat realistic) image of an un-illustrated moth from such text. Maybe I’m selling AI (or its current state of affairs) short.
I have actually gone through the process of manually trying to sketch an un-illustrated moth from a published description. In one case I’m thinking of (a neotropical species of the Acentropine genus Petrophila), I had the advantage of knowing the expected wing shape and the context of the markings which were only briefly described in the original 19th-century publication. Nonetheless, the effort gave me some general sense of what the species hypothetically looked like.
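The archaic color vocabulary is at least partly mechanizable: a glossary could normalize an old description before it is handed to a text-to-image model. A toy sketch follows; the glossary meanings reflect common usage in old lepidopterological literature, but the function itself and the idea of piping its output to an image model are hypothetical:

```python
# Hypothetical normalizer: replaces archaic color terms found in old
# moth descriptions with modern equivalents, so that a text-to-image
# prompt built from the description is less ambiguous.
ARCHAIC_COLORS = {
    "fuscous": "dark brownish grey",
    "ochreous": "yellowish brown",
    "ferruginous": "rust red",
    "testaceous": "brick red",
    "fulvous": "tawny orange-brown",
}

def normalize_description(text):
    """Return the description with archaic color terms replaced."""
    for old, new in ARCHAIC_COLORS.items():
        text = text.replace(old, new)
    return text

prompt = normalize_description("forewing fuscous with ferruginous markings")
# -> "forewing dark brownish grey with rust red markings"
```

Of course, as the posts above note, a mechanical translation of the color terms does nothing for the deeper problem that the overall pattern description is subjective and minimal; an expert would still need to judge whether any resulting image is plausible.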
When police generate an image from a witness description, do they use a human artist, or is it computer-generated?
The advantage either way is that the witness has SEEN the original.
there are many ways sketches can be generated these days. my understanding is that most are still done by a human artist, but electronically, using tools that offer different levels of automation: for example, letting you select from a range of pre-defined hairstyles, or tweak the size, shape, and color of the eyes on the fly.
there are efforts to use AI image generators to create sketches, too, but they are controversial because they generally present entire candidate faces, drawn from some training set, as a starting point, rather than narrowing down the individual features first and then putting those elements together. (the issue is that people’s memories are malleable and can be swayed by the images they see, so a witness may latch onto whatever the AI generates first, even if it’s not really a great match for what they actually saw.)
there have been other controversial technology-assisted sketch generation methods, such as DNA phenotyping, which tries to predict what a person will look like from their genes. (it’s theoretically possible to use DNA to predict the shape of a person’s face, their hair color, skin color, height, and even what their voice will sound like.)