Artist uses AI to fake 70s science fiction pulp covers – artwork and titles (cdm.link)
149 points by glitcher on Feb 7, 2022 | 88 comments


It's pretty easy to dismiss this stuff, but I've found it very fun to play around with, and it has certainly allowed me to create imagery that far exceeds my skill.

Twitter seems to have an endless stream of examples with the aiart and generativeart hashtags. Also I really like Katherine Crowson's twitter because she builds Colab notebooks that n00bs like me can use to tinker:

https://twitter.com/RiversHaveWings

https://github.com/crowsonkb

Some examples I shared a while back using Katherine's VQGAN+CLIP notebook: https://imgur.com/a/1R5UZQb Prompt was 'mad max dieselpunk alien spaceship landed in a desert', and then I upscaled the images. These are cherry-picked from a set of about 3x this size.

One thing I find pretty remarkable is how well it handles shadows and depth of field. These are all kind of muddy tones, but if you let it run away with color you'll get some really brilliant results.


I was just looking at some of her stuff yesterday, very cool. I'm even more of a noob though, since I'm not sure how to get this code (similar to other notebooks I've played with) into Colab to test out. If there's an obvious first/next step I should be taking, could anyone point me in that direction?


I think you need a Google account to access free Colab runtimes. If you have one, click here - https://colab.research.google.com/drive/1QBsaDAZv8np29FPbvjf...

Scroll down to prompts and put in your prompt. I just did:

    prompts = ['fire ants attacking a school bus cinematic 35mm']
(re 'cinematic 35mm', you'll find that there are little bumper phrases, like 'in the style of <artist>' or 'trending on artstation' that sort of thematically guide the output)

Click on 'Runtime->Run all' and let the notebook execute its initialization steps and then start to generate the image. By default it goes 1000 iterations and spits out what it's working with every 100 iterations. Once you get one you like, you can update the prompt, then re-run that cell and the one below it to generate something new.
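For reference, the cell you're editing usually looks something like this - a sketch based on typical VQGAN+CLIP notebooks, so the exact variable names (width, height, max_iterations, etc.) are assumptions and vary between notebook versions:

    # Typical parameter cell in a VQGAN+CLIP notebook (names vary by version).
    prompts = ['fire ants attacking a school bus cinematic 35mm']
    width = 480            # output width in pixels
    height = 480           # output height in pixels
    max_iterations = 1000  # total optimization steps
    display_freq = 100     # show an intermediate image every N steps
    seed = -1              # -1 picks a new random seed each run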

I pay $9/mo to get access to better GPUs (usually a P100, sometimes better); the free accounts tend to get slower ones, so it might take 30-60 minutes per image.


Oh yes I pay for Colab - didn't see this linked from the other page, or rather wasn't sure where to find it in the various notebooks. This is exactly what I was looking for - setting up to do a run now. Thanks very much.


No problem. This is an older notebook. Poke around, Katherine has posted more recent ones and lots of people have added bells and whistles to make them easier to use. Have fun!


Wow, thanks for this!! It’s really amazing and I had no idea how to get started.


Do I need to pay for it to work? I'm running the Colab and I get memory issues

"CUDA out of memory."


It should work with all of the GPUs that Colab has, I believe. Try clicking Runtime->Restart and/or Restart and Run to see if that helps.


Try reducing the pixel dimensions.
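In these notebooks that usually means editing the size settings in the parameter cell - a sketch, assuming the common variable names; yours may differ:

    # Smaller canvas = less VRAM. Halving both dimensions roughly
    # quarters memory use, which usually gets past "CUDA out of memory"
    # on the free-tier GPUs.
    width = 256
    height = 256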


There are several coders on Patreon you can support who create customized Colab notebooks.

Check out my other post for those that use a very customized version of ruDALL-e


Do you have recommendations for creators to follow or Colab notebooks to use?


I love using ru-DALLe to create artwork. I do several three-hour runs a day. It's a "kind" of hobby in that it takes effort to collect the source materials and it takes effort to adjust the models to my liking.

Rather than calling these the final products, these are more like something I would take to an artist to produce some commissioned work in finer detail with their personal touch.

https://imgur.com/a/zeCPnyA https://imgur.com/a/w2ifLVm https://imgur.com/a/gVlzlUG https://imgur.com/a/LOLq2ad https://imgur.com/a/QHyyrtY https://imgur.com/a/HFozuFe https://imgur.com/a/eR6PTBR https://imgur.com/a/L7uKC8d https://imgur.com/a/kmOIQiW https://imgur.com/a/GWGt56i

I have tons of these each day... though if you notice any monotony, it's because several of these examples use the same sources with different settings.


I love it. Unlike many machine learning examples this one does not highlight how good the model is, but highlights how poor the source material often is. Many of these are completely believable covers.


There was nothing poor about the source material.

This actually highlights how this kind of AI simply smushes sources up randomly without any sense of content or meaning.

It has a look - more horror than SF - but that's because distorted smushing with vague hints of distorted body parts is a look in itself. (See also artists like Francis Bacon.)

Neither AI nor developers aimed for this look deliberately. It just happens to be what falls out of the process. (See also Wombo, which has a similar aesthetic.)

What it doesn't do is evoke the imaginative narratives of the originals, which were consciously crafted to suggest a mood, setting, or context. Compare with (say) Bruce Pennington or Chris Foss and it's easy to see what's missing.


Thanks for mentioning some original artists. One thing I am realizing here is that the original artists who worked hard to create the source material for these algorithms are often totally overlooked. "I threw a bunch of book covers into an algorithm and look at all this cool output!" Sure, but who are the people who created all that source material? Can we credit their names as we build off of their work? We can and should credit the algorithm developers too, and the person who put all of it together. But I scanned through TFA and I don't see the names of any original artists or mention of the source material, and that seems like quite an unfortunate oversight.


Indeed. I have a soft spot for 1960s-1970s SF paperbacks and always end up with a few when I visit used book stores. These look uncannily like the covers of those.


Typical AI-generated art: it only looks like the real thing if you squint real hard or don't pay attention.

Very low value in my opinion, pretty much garbage in fact.


Go back a century and a half… people said exactly the same about the Impressionists. Then afterwards about the Modernists. And so on, and so forth.

Many artists went nowhere back then; others used their experiments to create masterpieces. It is always relative, and it baffles me how some people only think about the masterpieces and big names, without looking at the whole timeline.

Besides, generative and AI art are so damn cool :) It is something closer to us developers, but still not trivial (not everyone knows how to implement it).


All of the AI art I see has a sort of melted look.


I agree. It's a mish-mash of melting shapes; nothing resembles anything.

I'm really confused why someone would call this "perfectly fake". Have those people seen the real covers? The covers were meant to catch your eye, to evoke some kind of emotion. Compared to them, most of these examples evoke only repulsion in me.


I have the same reaction. They're like what you'd get if an alien with only a description of human activities, but no real knowledge of behavior or society, tried to create a painting. Everything is strangely squished, there's no clear dividing line between one thing and another, and nothing is a functional symbol, just an impression of a symbol. "Humans like yellow; let's put a yellow shape in there!"


This technology has been developing very fast; pictures look a lot more realistic than they used to, and you can guide the process with real images along the way. (Zoetrope is a good notebook for this.)

The main issue is that it's hard to generate good-looking faces without a specialized model (because we're so picky about humans), and the companies creating models like DALL-E don't want to teach them how to generate faces for fear of bias.


GAN art is really a one trick pony.

I loved it at first and got quite into it but I have seen enough of these images now that the novelty has worn off and they are just kind of shit.

Physical painting inspired by GAN art might be cool, but I don't have the talent for that. There is also something lost, for sure, when I know I could make a hundred of these album covers in maybe 20 minutes of tinkering around.


They should put a human in an fMRI scanner and place this in the feedback loop of the GAN. When the human feels repulsion, the fMRI scanner picks this up and sends negative feedback to the network.


The human would most likely quickly be desensitized and only output feelings of self hatred and suicide. But to be sure we should start a multibillion dollar startup that generates jobs for millions of starving third worlders.


With some craft it's possible to produce better work. Here are two of mine, for example:

https://hypertele.fi/0416e43936606257

https://hypertele.fi/a78fd2489d1b8df8


There are many ways to avoid that. One way is to use an AI to perform "style extraction", and then apply it to existing images.

Here's a presentation for Papers We Love San Diego on a paper describing such a technique:

https://www.youtube.com/watch?v=KFORVMT8AWs

I would argue that the proper way to do this while respecting other creators is to create your own original source images, train the model on those, and then apply the style to some other original image of your own.
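For the curious, the classic version of this idea is Gatys-style neural style transfer - I'm not claiming that's the exact technique in the linked talk. A minimal PyTorch sketch, with 'content.jpg' and 'style.jpg' as placeholder paths for your own images:

    # Minimal Gatys-style neural style transfer sketch (PyTorch).
    # 'content.jpg' and 'style.jpg' are placeholder paths for your own images.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from torchvision.utils import save_image
    from PIL import Image

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    vgg = models.vgg19(pretrained=True).features.to(device).eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    # (Real implementations also normalize with ImageNet stats; omitted for brevity.)
    load = transforms.Compose([transforms.Resize(256),
                               transforms.CenterCrop(256),
                               transforms.ToTensor()])

    def image_tensor(path):
        return load(Image.open(path).convert('RGB')).unsqueeze(0).to(device)

    def features(x, layers=(0, 5, 10, 19, 28)):  # conv1_1 .. conv5_1
        out = []
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layers:
                out.append(x)
        return out

    def gram(f):
        # The Gram matrix of the feature maps captures "style":
        # which features co-occur, regardless of where they appear.
        b, c, h, w = f.shape
        f = f.view(c, h * w)
        return f @ f.t() / (c * h * w)

    content = image_tensor('content.jpg')
    style = image_tensor('style.jpg')
    target = content.clone().requires_grad_(True)

    style_grams = [gram(f) for f in features(style)]
    content_feats = features(content)

    opt = torch.optim.Adam([target], lr=0.02)
    for step in range(300):
        opt.zero_grad()
        t_feats = features(target)
        content_loss = F.mse_loss(t_feats[-1], content_feats[-1])
        style_loss = sum(F.mse_loss(gram(t), g)
                         for t, g in zip(t_feats, style_grams))
        (content_loss + 1e5 * style_loss).backward()
        opt.step()
        target.data.clamp_(0, 1)  # keep pixel values valid

    save_image(target, 'result.jpg')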


Because that's what it actually is, after all - you take some pictures, melt them down, and cast new ones.


I love how the images include the wear and tear of old paperbacks giving these covers an authentic look... but I think the titles need some slight photoshopping to reflect those scratches :)


This artist has captured so many of the aesthetic qualities of these classic cover images. One that is missing, I noticed, is the quality of sparking my imagination, or the evocation of a world. There's nothing in those images that connects with me on that kind of level.

Which makes me wonder if this is a problem with many/most of the AI images one sees. I think so?


Neural nets can paint whatever you like, but you have to choose the right training material, model, prompt, etc.


This might have been useful to me in early-eighties high school. I had a class in which I had to write book reports, once per week I think, and there was a sort of template for how they had to be written, with analysis of character motivations, etc.

It's been a long time, but I found something about the critical methodology used just extremely insulting to any piece of literature it was applied to - reductive and harmful to actually appreciating anything - so despite reading a minimum of 4 books a week, I decided to write fake book reports. Crappy science fiction and fantasy was generally the genre.

I only remember one fake book I wrote a review of - The Hinterlands of Horlon - a crappy Tolkien rip-off with a quest and everything, written by James R. Canton, which was also a role-playing character name I had for some occult spy game I was in at the time. Fake cover art would have been a nice topper to the fraud.


I find this kind of art kinda annoying to look at, because it takes like 3 or 4 seconds until you realize that you are looking at nonsense. The initial feeling fools me.


>The initial feeling fools me.

I'm the exact same way, and I think this says something about how the methods used to generate these images relate in some way to how our minds process them.


Seems similar to the incoherent material in doodling, role playing, or dreaming. Humans do a lot of that.


I believe that generative models are changing how content and art are created. Inspiration and judgement are becoming more important than skill: in music, a good artist used to need years of theory and practice to become skilled at playing an instrument and creating harmonies. In the future, the best artists will be the ones who can best explore the input space of the AI - who can produce the most inspired prompts and judge best which results are good and how they can be changed.

The same can be said for painting (we are already seeing it now) and for writing.


I feel like this technique would be much more successful for the body horror genre.


The next step, of course, is for AI to do all the reading - as is actually already the case today: bots that scan blogs/websites and post artificial comments, together with a drop in reading among humans never seen before and widespread functional illiteracy among the newer generations.

https://www.theguardian.com/education/2020/feb/29/children-r...


My son put up a similar AI portal for his mother (who is a writer and artist). It takes a title and creates the art. For instance, I entered "Ford F150 by Frida Kahlo" and got a pastiche of red truck parts, faces and the letter 'F' followed by three or four quasi-symbols. All in a saturated style similar to Kahlo's paintings.

I'm guessing the faces are there because she did so much portraiture.


These covers aren't great, but if you could design a process that could produce great ones, then write books to fulfill the best ones, you wouldn't be that far off from the editorial process of a lot of pulp publishers that wrote books based on good covers.

AI generated covers and titles used as writing prompts would be very fun. They would have to be far better than these, though.


Soooo...

There's this? https://tvtropes.org/pmwiki/storygen.php

And that is a wonderful way to get ideas. :D


I notice the only cover with an author popular enough to have their name bigger than the title of the book is One Telling Ten Thousand by Isabella V. Rivera, which does seem to go against the whole boys' club feel of science fiction of that time. I think it would have been more likely a gender-neutral name like Andre Norton's - maybe Kim Rivera or V. Rivera.


It reminded me of Octavia E. Butler, who is such a big-font author.


Right - big-font authors are big-font because they are popular, so Isabella was chosen as a popular author's name, but most of the big-font authors had male (or unisex) names; Octavia Butler was from a later generation than these books are supposed to be from.


"Bizarre pastiche" describes the training material as well as the output. Well played!


There's a part of me that actually wants to write a short story called "Green Glass is the Color of the Wind" now, since I think that's a pretty funny title.


"The moon is just sunlight" was my favorite. I'd read that!


What we need now is a second AI to generate book review blurbs to put on the back cover.

"Wondrous...thought-provoking...panda." - The NY Times


How long until GPT-3 can do that?


That honestly might be more appropriate. A machine learning AI drawing a cover and a machine learning AI writing a book.

The singularity might come sooner than I thought.


I haven't tried this yet, but I'm sure you could coax GPT-3 into describing images as part of an illustrated book, then feed those into one of these generators.

Welp, I just tried, and: https://imgur.com/x9bwRq2


Wow, this is one of the most impressive demos of GPT-3 I've seen.


What site is this?


OpenAI's GPT-3 playground. This was using the davinci model in ‘Q and A’ mode with default settings. I’m sure somebody with more experience could do a better job getting it to play along.
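If you'd rather script this than click around the playground, the same prompt works through the API. A rough sketch using the openai Python package - the stop sequence and sampling settings are my guesses at the Q&A preset, not verified defaults:

    # Sketch: drive the same Q&A-style prompt from Python instead of
    # the playground. Settings approximate the Q&A preset; adjust to taste.
    import openai

    openai.api_key = 'YOUR_API_KEY'  # placeholder

    prompt = (
        "I am a children's book author and I will write an illustrated "
        "short story for you. ...\n\n"  # the full setup text from the comment below
        "Q: How does the book start?\nA:"
    )

    resp = openai.Completion.create(
        engine='davinci',
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
        stop=['\nQ:'],  # stop before the model starts the next question itself
    )
    print(resp.choices[0].text.strip())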


I think I'm going to be staying up late tonight lol:

---Prompt----

I am a children's book author and I will write an illustrated short story for you. I will start with the title, then the first page, then a description of an illustration in square brackets like so: [describe illustration here]. Then I will provide two more pages and illustrations. The subject of the book is a puppy that gets lost and then finds its way home.

Q: How does the book start?

---Answer---- (I just keep clicking Generate after it inserts the next 'Q:')

A: The book starts with the puppy waking up from a nap and noticing that its mom and dad are gone. It starts to get scared and starts to cry.

Q: A: The puppy keeps crying until it falls asleep. [The puppy is shown sleeping in a bed with a teddy bear next to it. There is a window in the background with light shining in, and a door is visible at the side of the bed.]

Q: A: The next time the puppy wakes up, it's dark outside and it's scared again. [The puppy is shown in a different room this time, with a nightlight on and a few toys scattered around. It's looking out a window at the stars.]

Q: A: The puppy spends the whole day looking for its mom and dad, but it can't find them. [The puppy is shown walking through a park, then a city street, then a forest. It's looking tired and sad.]

Q: A: The puppy finds a home in the forest and falls asleep. [The puppy is shown curled up in a bed of leaves. It's raining outside and the window is foggy.]

Q: A: The next day, the puppy wakes up and realizes that it's not in its own home. It starts to cry again. [The puppy is shown looking out the window of the home it found. It's raining and the window is foggy.]

Q: A: The puppy sees a light in the distance and starts to run towards it. [The puppy is shown running towards a house. It's still raining and the ground is wet.]

Q: A: The puppy reaches the house and starts to cry. A woman comes outside and picks it up. [The woman is shown picking up the puppy and hugging it. The puppy is crying and the woman has a sad expression.]

Q: A: The woman takes the puppy inside and starts to cry. [The woman is shown crying on the couch. The puppy is sitting on the floor next to her, looking sad.]

Q: A: The woman calls the police and they come to take the puppy back to its home. [The woman is shown talking on the phone. The puppy is sitting on the floor next to her, looking sad.]

Q: A: The police take the puppy back to its home and it's happy to be back. [The puppy is shown being reunited with its mom and dad. They're all happy and the puppy is wagging its tail.]

Q: A: The end.


They can be the same AI; cross-modal training is working very well.


I assume it can already. The real test is whether it can write a story that people would enjoy reading (and not just as a novelty).


Does anyone know why the covers were usually like this in the 70s? They are so characteristic, and replicated across so many books, that I bet there is some backstory.

EDIT: wrong decade there


An artist who delivers work on time and is easy to work with is an artist who gets asked to work on your next book cover. Especially if they are also an artist who regularly delivers something eye-catching that makes the book more likely to jump off the shelves into someone's hands. Start looking into the art credits and you will see recurring names.

There are also the choices made by the art directors at the publishers to consider; they were aware of current trends in SF/F book covers, as well as other genres, and would select artists based on how well they worked with those trends. Or against them. Consider for instance Richard Powers (http://www.richardmpowers.com/1970s) who did a ton of psychedelic SF cover work in the 60s, and less work in the 70s as things swung to a more realistic look.

The history of printing and art materials also plays into this. Full color printing was cheap enough for full color covers to be normal; compare 1970s SF covers to SF covers from the 1920s or 1930s, when color separations had to be done by hand, and when acrylic paints were not yet available!


> There are also the choices made by the art directors at the publishers to consider; they were aware of current trends in SF/F book covers, as well as other genres, and would select artists based on how well they worked with those trends. Or against them. Consider for instance Richard Powers (http://www.richardmpowers.com/1970s) who did a ton of psychedelic SF cover work in the 60s, and less work in the 70s as things swung to a more realistic look.

Powers' work definitely evokes that particular era, but (to me at least) it seems a bit... dour, I suppose, and a poor fit for lighter-hearted works from then (Laumer's Retief, Harrison's Stainless Steel Rat novels). Definitely a case where ADs were conforming to a trend, although the later examples in those series aren't as jarring.



These are great and very surreal. I have always been a fan of the old sci-fi covers. These however look like something an artist might envision during a fever dream.


The thing with AI-generated images is that often a lot of the patterns are repeated elsewhere in the image, which to my eye always looks odd and feels “machiney”.


This is not 'perfectly faked' or even good.


Something I'm noticing is these eye-like structures popping up in just about every AI-generated image. I call it reverse AI pareidolia.


It doesn't seem to be coherent in what it wants. As you go down the image, things change into different things.


Apparently orange circles were big in the 70s (or at least in the training dataset)


My headcanon is that everything was brown and orange in the ‘70s because it won’t show nicotine stains as easily.


White plastic tends to turn yellow from sunlight too.


"One Telling Ten Thousand (by Isabella T. Rivera)"

Ten thousand what?

Man, I hate it when that happens...


Well, what's the 'One'? The implication is 'people'.


Oh, you're right, that way it makes sense. I parsed it as "One telling" (with "telling" as the noun, as in "one sitting").


Huh. (wince) Wow, that ambiguity whooshed right by without me noticing. (pulls out a fork and eats some humble pie)


So, who's the real artist here? The AI, or the guy who programmed the AI?


In photography it's neither the camera nor the camera manufacturer that's considered the artist; I think the same logic would apply here.


It seems that the original artists who created the source works feeding the model did not even enter your consideration. :(

Looking at these images I am all but certain that there are going to be identifiable portions of the originals in the AI-generated images. They are derivative works. Erasing the original artist's role in creating these images is immoral.


A human brain can only paint what it has observed or derivatives of observation. How is digital remixing any different?


If these images were painted by hand, they would also be derivative.

But technologists insist that since one step in the chain of creation was algorithmic, the technologist gets all the credit and the original artist gets none.


I don't buy that there's anything special about "technologists" here. For example, if I were to check the credits of the original Pirates of the Caribbean movie, where is the credit to Keith Richards?

That's what Depp is doing, right? Captain Jack Sparrow? It's a Keith Richards impersonation, but although Richards gets a part in one of the later sequels, partly off the back of that, I don't see a credit for it in the first movie.

This is just how culture works. The pollination shades over from straight plagiarism, through homage and satire, until it's just unconscious assumptions and then it slips out of view entirely and even the creator has no idea it was there. Long before you disappear into the unconscious you're into a space where what you thought you were saying isn't what your audience understood at all. There are Warhammer 40K fans who don't realise that the Ultramarines are a joke. Yes they're blue, why is everybody laughing?


We have a court system which adjudicates whether works are considered derivative under copyright law. If there are recognizable subsections of the originals in the generated images, good luck with that court case — and good luck convincing the court of public opinion that you aren't taking credit for somebody else's work.


Yes!


> The Past is Fictional

It’s interesting to think about how we tend to grade the “smarts” of machine intelligence by comparing them to what we humans believe to be the truth. Even if they were trying to teach us something, how would we ever know?


I liked the title "The Past is Fictional"



Wow, looks super bad... except the fonts. It's just pixel vomit.


There’s something deeply unsettling to me about art like this, as if one of the hidden layers is “colonoscopy”


Haha, true - mixed with partially damaged DivX movies.


One of the more refined and well executed AI art projects I've seen! The subject matter has a nice eerie connection.



