Description
Nine algorithmically-generated anime-style artworks created from a single Stable Diffusion prompt.png
Demonstration of the uniformity of art styles among algorithmically-generated artworks created from the exact same text prompt using the wd-v1-2-full-ema.ckpt AI diffusion model (https://huggingface.co/hakurei/waifu-diffusion), a fine-tuned adaptation of the Stable Diffusion V1-4 AI model (https://github.com/CompVis/stable-diffusion) that is specifically conditioned on high-quality anime images. When the text prompt is kept the same, there is, subjectively at least, a definite visual similarity in motifs, painting styles, human anatomical proportions, and lighting effects among the AI-generated outputs.
As an example of how the visual style of AI-generated artwork does differ when the text prompt is changed, compare with the following image, which was generated from a different prompt:
Procedure/Methodology
All artworks were created using a single NVIDIA RTX 3090. The front-end used for the entire generation process was the Stable Diffusion web UI created by AUTOMATIC1111.
A batch of nine 512×512 images was generated with txt2img using the following prompts:
Prompt: tone mapped, ambient lighting, highly detailed, digital painting, artstation, concept art, sharp focus, art style of makoto shinkai and akihiko yoshida and hidari and wlop, fantasy clothes, cute face, beautiful face, (full body), a portrait of a young japanese dryad girl nude (intricate), ((sexy))
Negative prompt: By Antonio Mora, Clive Barker, Egon Schiele, Ernst Ludwig Kirchner, By Francis Bacon, Frida Kahlo, Giuseppe Arcimboldo, Jean-Michel Basquiat, John Lasseter, John Wilhelm, Junji Ito, Kazuo Umezu, Laurie Lipton, Naoto Hattori, Otto Dix, ((((mutated hands and fingers))))
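The parentheses in the prompts above are the web UI's emphasis syntax: each layer of parentheses multiplies the attention weight of the enclosed term by 1.1, in both the positive and negative prompt. A minimal sketch of that multiplier (the 1.1 base is the web UI's default):

```python
def paren_weight(depth: int, base: float = 1.1) -> float:
    """Effective attention multiplier for a prompt term nested in
    `depth` layers of parentheses (AUTOMATIC1111 web UI syntax)."""
    return base ** depth

# "(intricate)"                         -> 1 level, weight ~1.1
# "((sexy))"                            -> 2 levels, weight ~1.21
# "((((mutated hands and fingers))))"   -> 4 levels, weight ~1.46 (negative prompt)
```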
Settings: Steps: 50, Sampler: Euler a, CFG scale: 11
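For reproducibility, the same txt2img settings can also be submitted programmatically through the web UI's built-in API (the /sdapi/v1/txt2img endpoint, available when the UI is launched with --api). The sketch below uses that API's field names; the prompts are abbreviated (use the full text above), and the seed is a placeholder (-1 lets the backend pick a random seed):

```python
# Sketch of the generation settings as a payload for the AUTOMATIC1111
# web UI API (/sdapi/v1/txt2img). Prompts abbreviated; seed is a placeholder.
payload = {
    "prompt": "tone mapped, ambient lighting, highly detailed, ...",   # full prompt as above
    "negative_prompt": "By Antonio Mora, Clive Barker, ...",           # full negative prompt as above
    "steps": 50,
    "sampler_name": "Euler a",
    "cfg_scale": 11,
    "width": 512,
    "height": 512,
    "batch_size": 9,   # nine images generated in one batch
    "seed": -1,
}

# To submit against a locally running web UI:
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
# images = r.json()["images"]  # list of base64-encoded PNGs
```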
Then, two passes of the SD upscale script using "Real-ESRGAN 4x plus anime 6B" were run within img2img. The first pass used a tile overlap of 64, denoising strength of 0.3, 50 sampling steps with Euler a, and a CFG scale of 7. The second pass used a tile overlap of 128, denoising strength of 0.1, 10 sampling steps with Euler a, and a CFG scale of 7.
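The SD upscale script splits the image into overlapping tiles, runs img2img on each tile, and stitches the results, with "Real-ESRGAN 4x plus anime 6B" providing the enlargement. The two passes above can be summarized as parameter sets; assuming the script's common 2× scale per pass, the 512×512 originals end up at 2048×2048:

```python
# The two SD-upscale passes as parameter sets, matching the settings above.
passes = [
    {"tile_overlap": 64,  "denoising_strength": 0.3, "steps": 50, "sampler": "Euler a", "cfg_scale": 7},
    {"tile_overlap": 128, "denoising_strength": 0.1, "steps": 10, "sampler": "Euler a", "cfg_scale": 7},
]

size = 512
for p in passes:
    size *= 2  # assumed 2x upscale per pass
print(size)  # 2048
```

Note the second pass uses a much lower denoising strength (0.1) so it mostly sharpens the stitched result rather than re-imagining tile contents.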
As the creator of the output images, I release this image under the license displayed within the template below.
Stable Diffusion AI model
The Stable Diffusion AI model is released under the CreativeML OpenRAIL-M License, which "does not impose any restrictions on reuse, distribution, commercialization, adaptation" as long as the model is not intentionally used to cause harm to individuals, for instance to deliberately mislead or deceive. As stipulated by the license, the authors of the AI model claim no rights over any image outputs generated.
wd-v1-2-full-ema.ckpt adaptation of Stable Diffusion
The wd-v1-2-full-ema.ckpt adaptation of Stable Diffusion, being a derivative work of the original Stable Diffusion V1-4 model, is released under the CreativeML OpenRAIL-M License.
Dataset used to train the neural network for wd-v1-2-full-ema.ckpt
The dataset used for fine-tuning the wd-v1-2-full-ema.ckpt AI diffusion model consists of a random sample of 56,000 images from Danbooru with an aesthetic score greater than 6.0. Artworks generated by wd-v1-2-full-ema.ckpt are created algorithmically by the model's neural network as a result of learning from this dataset; the algorithm does not reuse preexisting images from the dataset to create new images. Generated artworks therefore cannot be considered derivative works of components of the original dataset, and any coincidental resemblance to a particular artist's drawing style falls under de minimis. While an artist can claim copyright over individual works, they cannot claim copyright over an artistic drawing or painting style. In simpler terms, Vincent van Gogh could claim copyright to The Starry Night, but he could not claim copyright to someone else's picture of a T-34 tank painted with brushstrokes similar to those of The Starry Night.
Licensing
I, the copyright holder of this work, hereby publish it under the following licenses:
to share – to copy, distribute and transmit the work
to remix – to adapt the work
Under the following conditions:
attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is available at http://www.gnu.org/copyleft/fdl.html.
You may select the license of your choice.