A dream tool and a threat to artists: DALL·E 2

The best creative tool ever created, or the end of the whole creative industry?

An astronaut riding a horse? A penguin dunking a basketball? A cat flying in the sky? Can you quickly draw a picture of whatever comes into your mind?

You’re probably recalling an image you’ve seen somewhere before, or perhaps something conjured up by your imagination. But even for us humans, it’s not easy to turn an idea into a picture.

However, there is software that makes this possible. It’s called DALL·E 2, and it was recently released by OpenAI.

If you use this software and type in “an astronaut riding a horse”, you will actually get a picture of an astronaut riding a horse. The same goes if you type “a penguin dunking a basketball”: you will get a picture of a penguin dunking a basketball.

The original DALL·E program was developed in 2021 by OpenAI, a company that does AI research and development. The software can create unique images from any given text. Pretty amazing, right?
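To make the idea concrete, here is a minimal sketch of what such a text-to-image request can look like in Python, using OpenAI’s image endpoint roughly as it was exposed around DALL·E 2’s launch; the API key is a placeholder, and newer versions of the library expose this call differently.

```python
# Minimal sketch of a text-to-image request (legacy openai Python library).
# The API key is a placeholder; newer SDK versions use a different interface.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: substitute your own key

response = openai.Image.create(
    prompt="an astronaut riding a horse",  # the text description to illustrate
    n=1,                                   # number of images to generate
    size="1024x1024",                      # DALL·E 2's largest output size
)

print(response["data"][0]["url"])  # URL of the generated image
```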

Although that first version was certainly impressive, the pictures DALL·E 1 generated were often blurry or inaccurate, and it took a long time to produce them. DALL·E 2 is a powerful new version of the program that performs at a much higher level, thanks to significant changes OpenAI has made.

The major differences between the second model and the first are a significant increase in image resolution, lower latency (the time it takes to create an image), and a more advanced algorithm for generating images. There are also a few new features.

DALL·E is a highly helpful assistant that extends what a human can typically do, but its effectiveness largely depends on the user’s imagination. An artist, or anyone with a creative streak, can produce some really interesting pieces with it.

The public is gradually being given access to OpenAI’s second-generation system, DALL·E 2.

Limitations of DALL·E 2

At this point, DALL·E 2 is far from a flawless model. For instance, poor data labeling can produce the wrong image, much like a person who has learned an incorrect word. And when it receives a prompt it hasn’t seen before, it will guess, producing results similar to what it saw during training. It will be fascinating to watch DALL·E grow over time and to see how its knowledge is put to use in new contexts.

The fight against stereotypes and human input

As with every other promising technology on the internet, it doesn’t take long for people to find unethical uses for it. Add to that AI’s well-known history of picking up offensive behavior from internet users.

When it comes to a system that uses artificial intelligence to create visual images, it’s obvious that the software can be abused in a variety of ways. Propaganda, fake news, and manipulated images of famous people are just a few examples.

Because of this, the OpenAI team behind DALL·E has put a safety protocol in place for all images on the platform. This safety policy works in three stages.

The first stage filters out data that contains major violations, including violence, sexual content, and images the team considers inappropriate. These images are removed from the platform. The second stage is a filter that looks for subtler material that is harder to detect, such as political content or propaganda of some form. Finally, DALL·E currently requires a human to review each image it creates, but this manual review isn’t going to scale in the long run.
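OpenAI hasn’t published the internals of this pipeline, but the three-stage idea can be sketched very roughly like this; the keyword lists, function name, and return values below are purely illustrative assumptions, not OpenAI’s actual system.

```python
# Purely illustrative sketch of staged content filtering; NOT OpenAI's real
# pipeline, just a rough picture of the three-stage idea described above.

MAJOR_VIOLATIONS = {"violence", "gore"}    # stage 1: illustrative keywords only
SUBTLE_FLAGS = {"election", "propaganda"}  # stage 2: illustrative keywords only


def review_prompt(prompt: str) -> str:
    """Return 'blocked', 'needs_human_review', or 'allowed' for a prompt."""
    words = set(prompt.lower().split())
    if words & MAJOR_VIOLATIONS:
        return "blocked"                 # stage 1: removed outright
    if words & SUBTLE_FLAGS:
        return "needs_human_review"      # stage 2: harder-to-detect content
    # Stage 3: in the early rollout, even "allowed" images still went
    # through manual human review before being shown to users.
    return "allowed"


print(review_prompt("a penguin dunking a basketball"))  # -> allowed
```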

Even with these guidelines in place, it’s clear the team is thinking carefully about this product’s wider release. They have outlined the risks and limitations of DALL·E, along with the variety of problems they might encounter.

Another issue is that the images DALL·E 2 creates can be somewhat biased or stereotypical: the prompt “wedding” returns mostly Western weddings, for example, and “lawyer” mostly shows older white men.

A member of the manual testing team has mentioned that eight out of eight attempts to generate images from prompts like “a man sitting in a prison cell” or “a photo of an angry man” returned images of men of color.

Read more about this: https://www.wired.com/story/dall-e-2-ai-text-image-bias-social-media/

These are not new problems; Google has been dealing with them for years. Image generation often reproduces the prejudices present in society.

Future of DALL·E

Now that the technology is available and working impressively well, what comes next for the DALL·E 2 team? The program is being released gradually through a waitlist, and there are no firm plans to make it available to the general public just yet.

By releasing the product slowly, the OpenAI team can monitor its growth, refine its safety protocols, and ready DALL·E for the millions of users who may soon be typing in prompts of their own.

Danushka Deshan
Danushka is a junior software developer at Circlebook Software. He writes about technology, gadgets, and coding in his spare time.
