Female in sunglasses made of crystals and imagination

Midjourney v4. AI in 2023 – artificial intelligence for generating graphics. Guide, configuration, prompts, etc.

Midjourney AI is one of the most popular and best artificial intelligences for generating graphics, and it is also significantly more attractive in price than DALL-E 2. It offers, among other things, an unlimited package of generated graphics and a free trial package. In version v4, complex commands and facial generation have been greatly improved. Each graphic in this article is labeled with the prompt used to create it. Additionally, I have written an article continuing this topic – about creating photorealistic graphics in Midjourney.

Update: Midjourney v5 is now available and has many advantages, but for creating graphics rather than “photographs”, many people prefer the v4 style. I included my comments on v5 in the article.


Table of contents

  • Introduction to Midjourney
  • Midjourney v1, v2, v3 and v4
  • Midjourney v4 wants to be pretty at all costs
  • No watermarks or other markings
  • Midjourney prices
  • How to get access to Midjourney AI?
  • Interface
  • How to have a free, private Midjourney channel
  • Settings for generated graphics
  • Performance of Midjourney v4
  • Multiple upscaling of the same image
  • Variations/Remixes
  • Aspect ratios of frames
  • Commands determining the appearance of graphics
    • Full body
    • Command weights
    • Number of objects
    • Styles
    • Lighting
    • Atmospheric conditions
  • Modification of existing graphics and photos
  • Mixing different graphics
  • The biggest problem of all A.I.
  • Resolution of generated works
  • Summary

Don’t miss out on additional materials!

I am preparing an article about upscaling graphics with A.I., about using Midjourney to modify your photographs, and a short tutorial video on the first steps in Midjourney.

There is also a continuation of this article coming up that will explore the topic of photorealism in Midjourney. Additionally, I will show live demonstrations on how I use A.I. and prepare a list of the most useful tools based on artificial intelligence. Leave your email below and I’ll let you know about it.

In total, there are several hundred graphics in this publication and the topic is only briefly touched upon as it’s material for a book, not an article. That’s why I consider it to be part of a series that will be continued both in written and filmed form.

In this post, I will show you which commands work when the results are very generic and when they start to be more “creative”, as well as how to configure this tool to get the best results.

Representation of sadness as a spiritual animal

Midjourney v1, v2, v3 and v4

At the start, it’s worth showing the progress that has been made across the four versions of Midjourney. I used the same command – so simple that even the oldest algorithms can understand it – and generated an image based on the same “seed”. So if I execute this command a million times, I will get the same result a million times, not a different one each time as would normally be expected (in v4 the differences have been completely eliminated; earlier they were minor). As a result, we are looking at exactly the same image interpreted differently by the various versions of Midjourney. It’s a bit like taking a photo at exactly the same moment from the same location with four different cameras in order to compare them later. Midjourney gives us four options with every prompt, so I’ll show them all:

As we can see, comparing DALL-E 2 to v3 was impossible in any meaningful way, but v4 has made such great progress that I decided to create this article.

Below you can see the result in v5. Surprisingly, although v5 is almost always better than v4 for photorealistic graphics, here that is not necessarily so.

The parameter --seed will sometimes come back in my commands when I want to show what difference a specific phrase makes.
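For example, a full command with a fixed seed could look like this (the seed value here is arbitrary, chosen purely for illustration):

/imagine prompt: photorealistic visualisation of a soul --seed 777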

Photorealistic visualisation of a soul

If you want to check the seed of a generated graphic, hover over it and click the add-reaction icon, i.e., insert an emoji. Some emojis have special functions in Midjourney; right now we are interested in the envelope.

After clicking it, the graphic will be sent to us in a private message along with the seed:

Midjourney v4 wants to be pretty at all costs

After generating thousands of graphics, I have no doubt that Midjourney v4 tries very hard to create beautiful images, which is something I wouldn’t say about other A.I. systems. As a result, even when using a very short command in Midjourney, you get an aesthetically pleasing image, unlike DALL-E 2 where only a more precise prompt yields such results.

With DALL-E, I tried to specify everything – the framing, color scheme, lens type, detailed appearance of characters, etc. With MJ, however, even if I don’t, it will still likely produce something visually appealing. This makes Midjourney very enjoyable and easy to use, even if you’re not entirely sure what you expect.

Unfortunately, this A.I.’s “creativity” suffers greatly as a result (even in v5); the results are much more generic, less varied and simply “the same old thing”. The parameter --chaos 100 entered at the end of the command causes the four graphic proposals we receive to be more varied (a smaller number means less variety), but still – the more Midjourney illustrations we see, the more repetitive they seem and the stronger our awareness of the differences in comparison to DALL-E 2.
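For illustration, such a command could look like this (the description reuses a prompt from elsewhere in this article):

/imagine prompt: representation of sadness as a spiritual animal --chaos 100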

Since there is not that much pressure to create long prompts, as everything looks nice anyway, I am posting here some images based mainly on short commands. The chapter “Commands specifying the appearance of graphics” discusses how to complicate them in order to obtain more precise results.

Photorealistic 24yo fashion model, abstract clothes, editorial photo

No watermarks or other markings

Apparently recruiters are already receiving a multitude of portfolios generated with AI, allegedly presenting works from 3ds Max, Blender, Maya or even just sketches. Graphics from Midjourney are not marked in any way and have no watermarks like those from DALL-E 2 – and even if they did, removing them would take a matter of seconds. Of course, Midjourney’s faux-rendered works do get caught, as they are quite distinctive; once you have seen thousands of generated graphics, you start seeing the patterns too, but more on that later. Progress has also been made in this area, and the similarities are much smaller than they were several months ago.

How will such A.I. affect the job market, what it will be used for and what it is already being used for – I won’t go into detail here, because I wrote about all of this in a related article: DALL-E 2: artificial intelligence creating art, graphics and “photos”, which I recommend opening in a separate tab.

Midjourney prices

Midjourney offers a free trial with 25 graphics (no longer available), and for $10 a month you can buy 200 images. For $30/month there is an unlimited package, which I highly recommend trying out at first to test everything. The paid packages come with licenses for commercial use of the graphics, and 200 is quite a lot if you know what you want and have experience using A.I., but it’s nothing to start with. My Midjourney graphics numbered in the thousands within the first few weeks, and I probably generated no fewer in DALL-E 2. There’s also the highest package, for $60, for those who need to generate enormous numbers of graphics quickly.

How to get access to Midjourney AI?

Just go to their website, sign up for the beta and add their server to your Discord (you don’t need to install an application, there is also a web version of Discord).

Interface

The fact that there is no one responsible for design in the Midjourney team is obvious at first glance. As I mentioned before, everything is based on Discord where we enter commands and receive 4 low-quality images, and then we can choose which ones to enlarge or what variations we would like to see. In other words: the alpha version in full form.

Fortunately, despite everything, Midjourney is easy to use (and I say this as a person who doesn’t like using Discord) because most of the time it’s just a matter of typing in what we want to see and clicking a button.

So, after accepting the terms and conditions, you enter the command /imagine on one of the Midjourney channels (the channels are in a column on the left, e.g. #newbies-96, though in the next chapter I describe a better way than this official one), and then you describe the graphic you want to achieve (the captions under the graphics in this article are exactly the prompts that were entered).
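A complete command can therefore look like this (using one of the prompts that appears later in this article):

/imagine prompt: three nuns going to church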

The standard way of using Midjourney; in the next chapter I recommend a better option

Once you start typing, suggestions will appear anyway, so from the second time on you will most likely be able to enter just a forward slash and press Enter, because auto-completion will do the rest. But I’ll write more about commands later.

The first time, you may need to click on the command suggestion that appears as soon as you start typing /imagine.

Usually, I enter many prompts one after another, or copy a prompt and paste it five times to have a large selection. After confirming 13 commands, the next one cannot be executed until something finishes generating. Later, I browse through the proposed graphics, choose those I want to upscale and download them to my disk. When I accumulate a lot of them, however, instead of saving them from Discord I do it from the Midjourney user profile on the web: https://www.midjourney.com/app. Clicking the three dots on an image and selecting the save option works much faster for me in the browser than on Discord, which requires confirming the folder to save into and is slow to display the dialog box.

The most convenient option, though, is to select multiple images and save them all at once. And if you enter a prompt incorrectly or forget something in it (in my case, it’s often the proportions), the job can be canceled: on Discord, click the three dots next to the message > Apps > Cancel Job.

It has already been announced that Midjourney will operate without Discord, as an independent website, but for now it is what it is.

How to have a free, private Midjourney channel

Normally, on the Midjourney server, there are many channels where everyone enters commands, and everyone can see the results. As a result, in addition to our own graphics, hundreds of others scroll through the timeline as well, which is simply distracting. Only by purchasing an option for $60 do we get a private channel.

Therefore, instead of using those public channels, I recommend sending prompts via chat – that is, simply as private messages to the Midjourney bot – or creating your own server and adding the Midjourney bot to it. It takes a few seconds, costs nothing extra, and gives us a channel containing exclusively our own graphics, which is much clearer and more enjoyable to use.

Notifications about newly generated images then concern only our own jobs, not those of the other hundreds of thousands of people, which makes using the platform even more pleasant. Alternatively, you can search for your username in the search bar, but the previous solutions have more advantages. The images will still be publicly visible on your profile page; only with the most expensive package does the /private command hide everything, although the Midjourney team still has access to them.

Settings for generated graphics

Before starting to generate graphics, it’s worth checking the /settings command. After typing it in, we can click on how our Midjourney should work:

MJ version – for photorealistic results I always recommend the latest ‘stable’ version; for other images you can try both v4 and v5, and in very special cases the older versions.
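If I remember correctly, the version can also be forced for a single prompt with the --v parameter, without touching the global settings, e.g.:

/imagine prompt: person with guitar by Banksy --v 4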

MJ Test and MJ Test Photo can theoretically be better and give completely different results than usual – and, most importantly, much more varied ones. Test Photo gives graphics resembling photos, often with an analog atmosphere, but it works so poorly for me that I treat it only as a curiosity. Even the framing is different – frames are often strangely cut, a bit as in DALL-E 2, which likes to cut just above the eye, etc.

MJ Test (not Photo) works the same way, but the results are more like paintings. The results in these modes are much more varied than usual and look promising, as they give hope for another very big improvement when errors are eliminated. However, in their current state, they are too underdeveloped for me to bother with.

🌈Niji Mode gives results more in the style of anime. Even the characters are Asian by default.

Since 🌈Niji mode is used for drawing content, I don’t think it makes sense to ask for photorealism, but of course you can try:

The results still seem stylized, but much more 3D than standard.

Base quality and High quality produce images of the same resolution, but with High quality more time is spent ‘thinking’ about the graphic (so in Fast mode it counts against us as if we had generated two graphics). This results in more detail. To be honest, though, I don’t see a significant difference between them – where Base quality failed me, High quality did not improve the situation, and in other cases Base quality was enough.
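As far as I know, quality can also be set per prompt with the --q parameter (where, if I’m not mistaken, --q 2 corresponds to High quality):

/imagine prompt: photorealistic 24yo fashion model, editorial photo --q 2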

Style can only be set to medium in the v4 version of MJ. Although it can still be regulated somewhat with a command even in v4 and the beta versions, it is much less flexible than in v3, so I wouldn’t bother with it unless it is changed in the future. In simplified terms, the stronger the ‘style’, the harder Midjourney tries to beautify the graphic, paying less attention to the content of the prompt.

Regular upscale is an option that gives the best results when enlarging graphics, but does not achieve as high a resolution as Beta upscale.

Beta upscale enlarges much more than Regular (2048×2048 px), but it spoils the graphics so much that they are not usable, and I have no idea why some people use it and even recommend it in tutorials. Perhaps they just made those tutorials quickly, paying attention only to the number next to the resolution rather than the actual quality – but an upscaler should be compared at full size; after all, enlarging is what it is for. On a thumbnail such graphics sometimes look OK, but at normal size they are only fit for the trash.

There is also a problem with the overdone color palette, as if someone just starting out with graphics didn’t quite understand the saturation and contrast sliders. On many graphics it doesn’t look bad, but it’s worst with people, whose skin then looks very unnatural.

Since v5, Beta upscale is no longer used.

If you want a lot of graphics, using Regular upscale and enlarging it with an external tool gives incomparably better results and is currently the only reasonable solution (I use Topaz Gigapixel AI and Photo AI, I will show how they perform later).

I skipped Light upscale because it’s a simplified version that I have no use for, but if you want to limit detail (e.g. in sketches, illustrations, etc.), it’s one way to do it. Another popular way is a normal upscale with the parameter --stop 80, where the number can vary and indicates at what percentage the enlarging process should stop.
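In practice it is just another parameter at the end of the command, e.g. (an illustrative prompt of my own):

/imagine prompt: pencil sketch of a dancer --stop 80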

Public mode must be enabled if you do not have the $60 package, but for convenience, it does not matter when using Midjourney in the way I showed. However, for privacy, it does matter, because only disabling the Public mode or using the /private command (also requiring the $60 package) hides the graphics.

Remix mode allows you to make different variations of graphics with a slightly modified command, so simply clicking on, for example, V1 will bring up a window with a prompt to edit. The icons for the variant appear regardless of whether the Remix mode is active, so you don’t have to mess around with checking the seed of the graphic and generating another one based on it.

Fast mode generates graphics in a priority mode, which is faster. In the cheapest package, everything is generated this way. However, in the unlimited package, we have limited server time in this mode – 15 or 30 hours. It can be generalized that generating one graphic or upscaling takes 1 minute of GPU time. Making variants is much less resource-intensive; about 200 need to be made to consume 1 hour. You can also generate without any limits in Relax mode (but not in the cheapest package) and if you don’t generate a lot of graphics, usually this mode is enough.
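For the record, you can also switch between these modes at any time by typing one of the commands below into the chat:

/fast
/relax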

Performance of Midjourney v4

The speed of an A.I. is very subjective, and a minute of GPU time does not necessarily mean we will get the graphic after a minute. For a Disco Diffusion user, Midjourney will feel lightning fast, but for someone used to DALL-E 2 it may be slow or extremely slow, depending on server load.

While in the OpenAI solution the time to receive a result is always very similar, in Midjourney I am unable to pin it down. After about a minute the image is generated, but before that happens one often has to wait for their turn – in Fast mode turns come around faster than in Relaxed mode. It is never just a matter of 5-10 seconds as in DALL-E 2; it always takes significantly longer. Sometimes I generate images one after another; other times I wait so long on Midjourney that I lose my momentum for writing an article. Nonetheless, regular users are unlikely to generate thousands of graphics per week like me and should be satisfied with the platform – especially since I have the impression that my speed is throttled when generating many graphics consecutively.

Midjourney cannot distinguish between Jordan models, so after longer tests, I gave up and stopped identifying their numbering.

Photorealistic new mortal kombat character, light is his superpower, darkness

Multiple upscaling of the same image

When four graphics proposals are created, it is possible to upscale only one of them multiple times, always receiving a slightly different result (in MJ versions v1-v4). In the process of scaling up, Midjourney adds details and sometimes even quite large elements that can differ. This works similarly to making variants without changing the prompt.

At first glance it looks the same, but upon closer inspection almost everything is different – from the hairstyle, through the device on the head, the makeup, the spots on the face, the clothing and the metal around the neck, to the flames in the background. Apart from the pair above, every other image in this post is a separate generation.

Variations/Remixes

As I mentioned earlier, apart from upscaling graphics (U1, U2, U3, U4), it is possible to create its variant (V1, V2, V3, V4). Then a window appears in which you can specify the command more precisely. However, in striving for photorealism, variants did not work too well for me because there were clearly more deformations on them than on graphics generated from scratch. I also have the impression that small modifications work better than adding long sentences.


As can be seen, adding the phrase “ultra realistic photo” to a prompt does not necessarily mean that we will get a photorealistic result.

Aspect ratios of frames

The proportions of generated graphics can be defined, and instead of being square, they can have a horizontal 3:2 or vertical 2:3 aspect ratio. This way we will get more pixels on the longer edge, while the shorter one will have 1024 pixels.

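In practice it is the --ar parameter at the end of the prompt, e.g.:

/imagine prompt: photorealistic new mortal kombat character --ar 2:3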

Commands determining the appearance of graphics

Full body

In Midjourney, if the framing is not specified, a portrait will often be produced – the graphics from the previous paragraph are a great example: I have created hundreds of similar ones and received only portraits. If for some reason that doesn’t happen, the word “portrait” should force a portrait shot. Adding the phrase “full body” alone does not always work for me when generating entire characters; I have a feeling that “full body view” more often results in an entire figure being generated, but the poses are often mannequin-like.

Adding “dancing” will introduce less rigid poses.
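Putting the above together, a prompt along these lines (purely illustrative, based on an earlier caption) usually gives me a whole figure in a less rigid pose:

photorealistic 24yo fashion model, abstract clothes, full body view, dancing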

Many people add “wide angle” to their “full body” shots, but as every photographer knows, the same frame shot with a wide-angle lens will come out completely different than with a narrower one, because it changes the distance from the model and therefore the perspective (I often shoot models in the studio with a 135mm lens, as for portraits, and use less than 50mm only in very specific situations). Additionally, Midjourney mainly applies wide angle to graphics with non-studio backgrounds.

Sometimes, however, Midjourney stubbornly crops the frames, so you have to add that unfortunate wide angle to the prompt; it often fits well with outdoor scenes, although even then many frames get cut off. This is characteristic of many artistic A.I.s.

Specifying focal lengths (35 mm, 85 mm, etc.) in Midjourney, unlike DALL-E 2, does not produce desired changes, so I do not enter them. Besides, generally, whole characters come out poorly – they are often distorted and reveal the biggest flaws of the current generation of A.I., which I have devoted a separate chapter to. I have to generate an excess of graphics to choose those without obvious problems.

Command weights

The case of the letters does not matter, and commas change almost nothing – they are mainly there to make the prompt easier to read. You can, however, assign weights to fragments of a command, i.e., determine how important something is.

The sunny landscape will include people and kites, but the people are three times more important, so the frame will focus on them. Most likely, they will be larger and in a closer shot, all due to the numbers after the double colon.

people::3 kites::1

On the other hand, if I typed:

people::9 kites::3

…the effect will be the same as with the previous prompt, because what matters is the ratio of one number to another. The absolute size doesn’t matter, and 3 to 1 is the same as 9 to 3. However, if there are many other items in the prompt without numbers after colons, it does become significant, because all undefined things have a weight of ::1. The difference between 1 and 2 is already quite big, so I advise not to exaggerate with the numbers.

The command with inverse values will prioritize kites:

However, the weight can also be negative, for example people::-0.5 (fractions are also allowed). This is useful when something is not in the prompt but keeps appearing in the image. You can also use the --no parameter, so that Midjourney tries to remove it from the frame completely (though sometimes something may still remain).
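For example, if people keep sneaking into the frame even though the prompt doesn’t mention them:

sunny landscape with kites --no people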

Number of objects

Specifying the number of things in the frame works quite well. Instead of asking for nuns going to church, it’s worth specifying how many.

Three nuns going to church

Styles

Artists’ style

Styles can be defined by specific artists, e.g. “by Banksy”:

Person with guitar by Banksy

This has a huge impact on the final effect, as it determines many things. Depending on the artist, this can be color scheme, framing, choice of character appearance, contrast, exposure, color grading, dynamics and elements that will be present in the frame etc. Styles can be combined:

Person with guitar by Banksy and Pablo Picasso

We will eventually be flooded with legal cases that will determine whether such practices remain possible, but currently they are.

General styles

The opposite of photorealism is stylized graphics, and here the method described earlier and on the next page – typing in an artist’s name – works best. Terms such as digital art, sketch, pixel art, low poly, pen, noir, etc. also work, though they are much more general. These almost always work, while words like graffiti, cel shading and many others may not be interpreted correctly by Midjourney.

Almost everything can determine the style

Photos taken using analog equipment look different than those taken with the latest technology, and even more so with polaroids. The style and appearance of characters in 1990 are different from those in the ’70s or now. Rooms and people’s dress will look different depending on the country, etc.

All such matters significantly affect the appearance of generated graphics, so it is worth specifying them if you have a specific result in mind.

I discovered that you can also create graphics styled after specific games. As a fanatic of the titles below, I can confirm without hesitation that what Midjourney has produced fits the game style perfectly:

Styling to game screenshots is also possible:

As I’ve already written – almost everything can be defined as a style. Ideas on how to use this in the game design process are already multiplying in my head. Finally, here’s a different example:

Lighting

Apart from the type of light which I’ll describe in a moment, its color can be specified. The most general terms: cold and warm – will probably be the most popular.


Now it’s time for types of lighting. A lot depends on the type of graphic and what’s in it, but these particular lighting options are quite universal and rarely fail. Note that if we use “lightning” instead of “light”, Midjourney may understand that we want a lightning bolt in the graphic rather than light in a certain atmosphere – this has happened to me often enough that I only write “lightning” when I specifically mean a lightning bolt.

Soft light is a very general term meaning soft lighting without harsh shadows. This doesn’t necessarily mean low contrast or a lack of blacks (although sometimes it does), just that shadows will fall off gradually, as on a cloudy day, rather than abruptly, as in direct sunlight.

Cinematic light is a universal prompt when we are not quite sure what we want, as it can give very different results.

Volumetric light is lighting visible in the atmosphere: light comes from a source and, thanks to fog, dust or other particles in the air, is visible not only on objects but also in the air itself. For Midjourney portraits it may not be as relevant, but in outdoor locations it works well.
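These lighting terms also combine naturally with each other and with the weather conditions described later, e.g. (an illustrative prompt):

foggy forest at sunrise, volumetric light, god rays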

Sunrise light:

Sunset light:

Golden hour is actually the same as before – sunrise or sunset. Golden hour is considered a perfect moment for photography.

God rays are a specific volumetric effect – visible rays of light.

Neon lights are often highly reflective on the floor and work well in cities or for photos of people, with a specified color such as neon blue and red lights, and/or as a rim light known as neon rim light. I often use this type of lighting for cyberpunk and robotic sci-fi atmospheres. However, it can also create an interesting effect even in a cabin in the woods.

Colorful light:

Dark light is simply dim, subdued, nighttime lighting:

Atmospheric conditions

A description of the weather has a strong impact on the graphics, as it determines both the lighting and scenery.

Fog

Storm

Snow

Sunny

Modification of existing graphics and photos

You can paste a link to an image or photograph and enter a prompt. This way something completely new can be created, but still related in some way to the original. The original is then treated as inspiration.

It can be seen that the distinctive style of the upper part of the clothing is preserved. Usually, about 20% is based on the image and 80% on the prompt. It can also be a remake, like a photoshopped image:

Editing graphics is a topic for a separate article, and I have one planned. Note, however, that the image needs to be on the internet, not just on your computer’s hard drive. The easiest way is to drag your photo into the Midjourney window, which uploads it to the server and makes it available online. Then you can right-click it, copy the link and paste it into another prompt.
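The command then looks something like this – the link below is only a placeholder for wherever your uploaded image ends up:

/imagine prompt: https://example.com/my-photo.jpg cyberpunk portrait, neon rim light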

Mixing different graphics

It is possible to add links in Midjourney, e.g. to two graphics, so they will be merged into something completely new. These can be your own photos/graphics or, as in the example below, two graphics generated in Midjourney…

…combined into something completely new. I inserted links to both of these images in Midjourney and confirmed without adding any text. The result was:

Usually, photos will only be the beginning of a prompt, and the text will clarify what the intended result should be.
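Syntax-wise, it is simply links at the beginning of the prompt (placeholder links below), optionally followed by the clarifying text:

/imagine prompt: https://example.com/image-1.png https://example.com/image-2.png in the style of a watercolor painting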

Depending on the graphics and commands, the style, colors, objects, etc. will be inherited from them. So it’s not just about combining two objects, the possibilities are much greater.

The biggest problem of all A.I.

By the way, you can notice the Achilles’ heel, or rather the hand, of all current A.I.

Generally, I don’t recommend relying on hands generated by artificial intelligence, let alone counting fingers.

Graphic generated based on a photo from my photoshoot, with the inscription haute couture --ar 2:3.

Teeth are also problematic, and often crooked; in close-up portraits they usually look good, but worse in wider shots (there can be too many of them, they can be of different widths, etc.) – and this is the case with all the deformations. Fortunately, it’s like with Tom Cruise’s teeth – as long as no one tells you about it, you can watch his movies for 40 years without knowing. Once you find out that instead of two front teeth he has one, you won’t be able to think about anything else when the camera zooms in on him. You’re welcome.

Beautiful smile, wide angle
Female in sunglasses made of crystals and imagination, a lot of light Shards flying in the air

The problem with hands and teeth is much smaller in v5.

Midjourney also struggles with text placed inside graphics. It doesn’t make sense, characters are swapped around, and sometimes they aren’t even symbols from the alphabet.

I also have the impression that changing the aspect ratio of the frame to vertical or horizontal is not ideal – problems with generating the correct shape of the irises are more frequent than in square frames.

Resolution of generated works

Just like in the article about DALL-E 2, I have posted here the original graphics straight from the A.I., without any corrections in Photoshop and at standard size, i.e. at the very low resolution of 1024×1024 pixels (or a longer side of 1536 px in the case of vertical and horizontal frames). However, with Gigapixel AI I can greatly improve their quality. Here is an example of a crop of a frame after upscaling to 4K:

Handsom vampire female after a feast

I have a post in preparation with lots of examples before and after gigapixel scaling.

Photorealistic cybernetic angel of darkness came down to earth to destroy all humanity. Darkness, light, destruction, shards, photo

Summary

Compared to previous versions, v4 is a milestone. The graphics look beautiful, they are very aesthetic and incredibly easy to create: just type in anything and a nice picture will come out. In combination with the unlimited package, I think it is a much better tool to start with than DALL-E 2 or Stable Diffusion, as long as you can put up with using it through Discord.

However, the results (even in v5) are formulaic, and often when I see a graphic on the internet I know it comes from Midjourney even before I read the description. But even in this respect there has been huge progress between v3 and v4. Graphics from DALL-E 2 are harder to identify; they are more diverse and creative, and can look almost like the work of a senior artist rather than a junior. DALL-E 2 understands more precise commands, which allows for tremendous control, but it is more difficult for beginners because it does not inherently strive to make graphics as beautiful as possible. In the end, the quality-to-price ratio of Midjourney is great, and I highly recommend trying it.

BTW.

Remember that this article is only part of the whole series and the best is yet to come. Leave your email, and I will put you on my list. The next articles will be about creating photorealistic graphics in Midjourney, super-quality upscaling, and using Midjourney to work with photos.


Update: Photorealism

In this article I only touched on the issue of obtaining graphics that look like photos, but I have created a continuation dedicated solely to this topic: check it out here.

In the next article, I will describe how to create such graphics.
