AI generative art is a genre of art, typically visual, that relies on collaboration between a human and an autonomous system. An "autonomous system" here refers to an artificial intelligence program, algorithm, or model that can carry out difficult tasks without the direct assistance of a programmer.
Images produced by AI algorithms are increasingly making their way into the public consciousness, from the weird visual juxtapositions produced by Dall-E Mini to the NFT market. Indeed, Midjourney and DALL-E 2 are two significant projects in this field that deserve analysis.
Of course, the news has also reached Twitter, where Charles Hoskinson, among others, commented on it.
AI generative art: Early experiments and features
Now that we understand what generative art is, it’s important to emphasize one of its underlying principles: randomness. This is a fundamental property of generative art.
Depending on the type of software, an autonomous system can produce unique results that differ each time a generation command is executed, or return a variable number of results depending on user input.
The first experiments in generative art were those of Harold Cohen and his AARON program, dating back to the 1960s. Cohen initially used stand-alone software to create abstract artwork inspired by pop art screenprints. Cohen’s work is currently on display at the Tate Gallery in London. Another characteristic of generative art is the repetition of patterns or abstract elements provided by programmers and implemented in software code.
Additionally, the development of increasingly complex neural networks dealing with text-image associations has enabled the development of generative models capable of producing more realistic and accurate images. The most famous example of this category of generative art is Dall-E.
Dall-E is a multimodal neural network based on the GPT-3 deep learning model by OpenAI. OpenAI is the same company that recently developed ChatGPT, the chatbot launched in November 2022, which was trained with supervised learning and optimized with reinforcement learning techniques.
Going back to Dall-E, the system can generate images from textual descriptions called ‘prompts’, based on a dataset of text-image pairs. The first version of Dall-E, released in January 2021, remained accessible only to a few experts in the field, yet it represented a true revolution for this type of generative model, surpassing even the innovation of GPT-3.
Also worth noting is that the accuracy of the results produced by Dall-E owes much to another OpenAI solution: CLIP (Contrastive Language–Image Pre-training).
CLIP is a neural network for image classification and ranking, trained on text-image associations such as captions found on the Internet. Thanks to CLIP’s re-ranking, which reduced the number of results suggested to the user per prompt to 32, Dall-E delivered satisfactory images most of the time.
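The re-ranking step described above can be sketched as follows: generate many candidates, score each against the prompt, and keep only the best 32. This is a minimal illustration in Python; the scores here are made-up placeholders, not output from the real CLIP model, and `rerank` is a hypothetical helper name.

```python
# Toy illustration of CLIP-style re-ranking: given many generated
# candidates with a (hypothetical) text-image similarity score,
# keep only the 32 best matches to show to the user.

def rerank(candidates, top_k=32):
    """Return the top_k candidates sorted by descending score.

    `candidates` is a list of (image_id, score) pairs; in the real
    pipeline the score would come from CLIP, here it is invented.
    """
    ranked = sorted(candidates, key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

# 512 mock candidates with arbitrary scores standing in for CLIP output.
mock_candidates = [(f"img_{i}", (i * 37) % 101 / 100) for i in range(512)]
best = rerank(mock_candidates)
print(len(best))  # 32 images survive the cut
```

The point of the design is that generation and selection are separate steps: the generator can produce many rough candidates, and the scoring model filters them down to the ones that best match the text.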
Midjourney: Design, human infrastructure, artificial intelligence
As expected, Midjourney is an important project within the new AI generative art landscape. Specifically, Midjourney is an independent laboratory that explores new ways of thinking and expands human imagination.
Usage is simple:
First, you’ll need to create an account on Discord, the platform that hosts the various communities in which Midjourney participates. Within the application there are various chat rooms, in which you may or may not actively take part in discussions.
It’s important to point out that to use the artificial intelligence for the first time, you need to visit the “beginner” channel, where 25 free renders are available. A render is equivalent to generating four different variants from the same text input, so the 25 renders correspond to 25 processing jobs performed by the Midjourney bot. To generate an image, you interact with the Midjourney bot via a text message called a “prompt,” which describes the image the user has in mind using keywords.
You can include as much information as you like, but it’s crucial to separate the keywords with commas. Once rendering is complete, the bot presents four variants for the user to select from; you can then indicate your preference among the images and, if you’d like, have four further variants generated from the one you chose.
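The comma-separated prompt format above can be illustrated with a small helper. `build_prompt` is a hypothetical name for this sketch, not part of Midjourney itself; the bot only ever sees the final text string.

```python
# Minimal sketch of assembling a Midjourney-style prompt from keywords.
# The interface expects a single text message with keywords separated
# by commas; this helper just cleans and joins them.

def build_prompt(keywords):
    """Join keyword phrases into a single comma-separated prompt."""
    # Strip stray whitespace and drop empty entries before joining.
    cleaned = [kw.strip() for kw in keywords if kw.strip()]
    return ", ".join(cleaned)

prompt = build_prompt(["a lighthouse at dusk", "oil painting", " warm colors "])
print(prompt)  # a lighthouse at dusk, oil painting, warm colors
```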
DALL-E 2: New AI system for artwork
Besides Midjourney, DALL-E 2 is also a new AI system that can create realistic images and artwork from natural language descriptions. Additionally, DALL-E 2 can combine concepts, attributes, and styles.
The strength of the new AI system also lies in its ability to expand images beyond the original canvas, creating new, rich compositions. Plus, it can make realistic changes to existing images from natural language captions, adding and removing elements while accounting for shadows, reflections, and textures.
DALL-E 2’s capabilities also include taking an image and creating multiple variations inspired by the original. DALL-E 2 learned the relationship between images and the text used to describe them.
It uses a process called “diffusion,” which starts with a random pattern of dots and gradually alters that pattern towards an image as it recognizes specific aspects of that image. Compared with the original DALL-E, launched by OpenAI in January 2021, the more recent DALL-E 2 produces images that are more realistic and accurate, at four times the resolution.
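The diffusion idea described above, starting from noise and repeatedly nudging the pattern toward a coherent image, can be sketched as a toy loop. This is a deliberately simplified illustration with plain numbers standing in for pixels: a real diffusion model predicts each denoising step with a trained neural network, whereas here the target is known and the step is a simple linear blend.

```python
import random

# Toy sketch of the diffusion idea: begin with random noise and
# repeatedly move the pattern a small step toward a target "image".
# In a real model a neural denoiser predicts each step; here the
# target is known in advance, so the step is a linear blend.

def toy_diffusion(target, steps=200, rate=0.05, seed=0):
    rng = random.Random(seed)
    # Start from a purely random pattern of "pixel" values in [0, 1].
    pattern = [rng.random() for _ in target]
    for _ in range(steps):
        # Nudge every value a small fraction of the way to the target.
        pattern = [p + rate * (t - p) for p, t in zip(pattern, target)]
    return pattern

target_image = [0.0, 0.25, 0.5, 0.75, 1.0]  # stand-in for pixel data
result = toy_diffusion(target_image)
# After many small steps the initial noise has converged near the target.
print(all(abs(r - t) < 0.01 for r, t in zip(result, target_image)))  # True
```

The takeaway is the shape of the process, not the math: many small corrections turn an unstructured random pattern into a structured result.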
Initially developed as a research project, DALL-E 2 is currently accessible in beta form. Limiting the system’s ability to produce violent, hateful, or adult images is one safety mitigation that OpenAI has developed and is constantly improving. Another is a phased deployment informed by what is learned along the way.