Visual effects and their importance in the field of visual media creation

The paper presents visual effects and their importance in the creation of visual media and the film industry. After defining the field and the term visual effects, the reader is introduced to the techniques and approaches used to create visual effects, i.e., computer-generated imagery, 3D computer graphics, motion capture, matchmoving, chroma key, rotoscoping, matte painting, and digital compositing. This is followed by a presentation of the history of visual effects from its beginnings to the digital age, taking in the most successful examples of film production, such as Terminator, Toy Story, The Matrix, and Star Wars. As the most representative example, the paper includes a more detailed description of the techniques, methods, and approaches used in the Lord of the Rings film trilogy, focusing on the creation of the visual appearance of the Gollum character, his movement and facial expressions, the creation of crowds with autonomous agents, and the introduction of digital doubles. The review concludes with an overview of trends for the future of the field.


Introduction
Let us take you on a journey into the world of visual effects (VFX). How do we define visual effects? What is the difference between visual and special effects? And what do we actually use them for?
For an easier understanding of this particular topic, we will present the techniques most commonly used to create so-called "digital magic". After that, we will focus on the history of visual effects, which we will then build upon with a presentation of some examples of good practice. Those examples will be based on just one film trilogy, as it contains some key elements for a quality presentation of several different techniques and innovations in the field. Finally, we will briefly summarize some trends in VFX that are currently present and will probably emerge in the future.

Visual effects
According to the established model of filmmaking, visual effects are primarily used to make the storytelling more believable. However, visual effects have often been treated as a less important area of film art and have, therefore, also been misinterpreted on several occasions. The reason for this misinterpretation is that visual effects are too often associated with tasteless spectacles, visually oversaturated hits, or fantasy adventures aimed primarily at young audiences. In some cases, they even seem to be more important than the storytelling itself. With digital advancement and the creation of films such as Avatar (directed by James Cameron, 2009), the traditional view of film production is changing. Such films place the visual effects at the very core of storytelling, as practically the entire visual image of the film is created digitally. However, such examples are merely indicators of the endless possibilities of visual creation. Furthermore, while it is true that visual effects can create a spectacle, they more often operate on a much more covert and less spectacular level (Prince, 2007).

Hand in hand - definition, differentiation, and connection between visual and special effects
To comprehensively address the field of visual effects in film, we must first clarify what visual effects actually are: a subcategory of special effects (SFX or FX for short). And although the two areas are by no means mutually exclusive (in most cases, they are even interconnected and complementary), they should by no means be equated (UKEssays, 2017).
Visual and special effects are often misunderstood to be the same thing. Such misunderstanding can occur due to the constant interconnection of the two fields in film art. In the past, that is, before the development of digital technologies, the areas related to the creative process were very interconnected, as visual effects, as well as special effects, were often created with recording equipment on the film set. However, unlike special effects, which are made mainly with the use of various stage aids, visual effects have always been created with the help of optical processing, the use of special photographic lenses, or the creation of optical illusions. In the digital age, the process of making visual effects has moved completely to postproduction, while special effects are still being made during the actual live-action filming (Britannica, 2012).
At this point, the obvious question arises: what is the real difference between visual and special effects?
Visual effects include visual material made or adapted for use in a visual medium that cannot be created or recorded live. The art of creating visual effects in the digital age is usually performed in postproduction, that is, after the basic capture of a recording. Visual effects are created using various techniques and added to the recordings with the help of computer programs. With technological development, digital equipment for the production and incorporation of visual effects into recorded material is becoming more and more accessible, so it has become a standard part of filmmakers' equipment. Unlike visual effects, special effects cover all the effects that can be created during the actual recording of visual material, which is why they are also known as practical effects. The most typical examples of special effects are the recreation of rain and fire, the use of firearms, road races with various vehicles, and, after all, the use of good old dynamite to simulate explosions. Visual and special effects have always gone hand in hand, but thanks to the progress made in the field of VFX over time, mainly with the introduction of digital processing, the making of special effects has been able to evolve as well (Okun & Zwerman, 2010).

The usefulness of digital magic
There are three main reasons for using visual effects in film art. The first is that certain scenes from the script simply cannot be recorded live in any way. An example would be the astronauts flying around the Moon in Apollo 13 (directed by Ron Howard, 1995). Another reason for using visual effects in a film is that a certain scene could be filmed live but would endanger someone's life. An example of this would be the many execution scenes in various films where, obviously, none of the actors is actually being killed. The third reason for using visual effects is primarily of a financial nature. In some cases, it is more practical and financially advantageous to recreate a specific space with the help of digital techniques than to shoot the scenes at the actual location. Examples of this kind of usage can be found in pretty much every modern production that contains scenes of a larger landscape area (Okun & Zwerman, 2010).

Techniques, techniques, and techniques again
Before diving deep into the world of visual effects history and development, and for a better understanding of the field, we will present some of the techniques that are most useful and, therefore, most commonly used for the creation of VFX.

Computer-Generated Imagery (CGI)
Computer-generated graphics is probably one of the most commonly used techniques for creating visual effects nowadays. The term is used for all the elements that are created and integrated into real shots with the help of a computer, whether that means only background elements, such as a landscape, or movie characters that establish some kind of interaction with other parts of the scene. These elements can be two-dimensional or three-dimensional, but the term CGI is mainly used for the latter. Moreover, although a computer can do it all, in most cases, rather than resorting to this technique in its entirety, filmmakers use it only to augment the recorded material - for example, adding an audience to an actually empty stadium. The advantage of this particular technique is that everything created with it can be quite easily corrected and improved (Izmofx, 2019; StoryboardThat, 2020).

3D graphics
There are a few more techniques in the category of computer-generated graphics that definitely deserve an explanation. One of them, which represents a whole extra field, is 3D modeling and animation. Simply explained, it is used to digitally create an object that does not really exist, or to recreate a scene that would be too difficult to build in reality (MasterClass, 2021).
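To make the idea of a digital 3D model more concrete, the following minimal sketch shows how such a model is typically represented in software: a mesh is simply a list of vertices (points in space) plus faces (indices into that list), from which quantities such as face normals, needed for shading, can be derived. The names and the triangle data here are illustrative, not taken from any specific modeling package.

```python
def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def face_normal(vertices, face):
    """Un-normalized normal of a triangular face, used for shading."""
    p0, p1, p2 = (vertices[i] for i in face)
    edge1 = tuple(p1[k] - p0[k] for k in range(3))
    edge2 = tuple(p2[k] - p0[k] for k in range(3))
    return cross(edge1, edge2)

# A one-triangle "mesh" lying in the XY plane:
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2)]
```

Real modeling tools build on exactly this representation, adding subdivision, texturing, and rigging on top of the raw vertex and face data.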

Motion capture
Another commonly used technique that can be included in the field of computer-generated graphics is motion capture. This technique is mentioned here mainly because the captured motion is usually transferred to a computer-generated (in most cases 3D) model. A special suit with tracking points is used to capture the movement and facial expressions of a real person, which are then converted into the movement of a computer model. Thus, with the help of digital processing, an actor's visual appearance can be partly or even completely changed (MasterClass, 2021; RedShark News, 2017).
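The transfer step described above, often called retargeting, can be sketched in a few lines: rotations recorded from the actor's tracked joints are copied, frame by frame, onto the matching joints of the digital character. This is a deliberately simplified illustration; all joint names and the one-axis rotation values are invented for the example.

```python
def retarget(captured_frames, joint_map):
    """Transfer per-joint rotations from the actor's skeleton to the
    character rig, using a mapping between joint names."""
    character_frames = []
    for frame in captured_frames:
        character_frames.append({
            joint_map[joint]: rotation
            for joint, rotation in frame.items()
            if joint in joint_map
        })
    return character_frames

# One captured frame: joint name -> rotation in degrees (one axis only)
mocap = [{"actor_elbow_L": 42.0, "actor_knee_R": 10.5}]
mapping = {"actor_elbow_L": "char_elbow_L", "actor_knee_R": "char_knee_R"}
```

In production pipelines each rotation is a full 3D transform and the skeletons rarely match one-to-one, but the core idea, a per-joint mapping applied per frame, is the same.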

Matchmoving
Another technique we can mention is matchmoving. This technique is used to place digitally generated elements into images of real space. It determines the position and movement of the camera in the image, which enables the correct integration of a 3D model into a certain space. The space in the image is marked with dots, based on which the computer calculates the movements. However, it can happen that, due to the ambiguity of the space, the points and their movements need to be determined manually, meaning each dot must be set by hand for each frame of the recording. The outcome of successful matchmoving is that the integrated model does not move along with the digitally created camera but stays locked to its location relative to the captured space. Simply put, a virtual camera follows the movements of the real camera (MassiveVFX Studios, 2020; Pluralsight, 2014).
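A toy version of the idea can illustrate this: the tracked dots shift between frames, and from their common displacement we recover the camera motion, which the virtual camera then mirrors so that inserted elements stay locked to the scene. Real matchmoving solvers recover a full 3D camera pose; this sketch, with invented coordinates, handles only 2D translation, for which the least-squares solution is simply the mean displacement.

```python
def estimate_camera_shift(points_prev, points_curr):
    """Least-squares translation between two sets of tracked 2D points
    (for pure translation this reduces to the mean displacement)."""
    n = len(points_prev)
    dx = sum(c[0] - p[0] for p, c in zip(points_prev, points_curr)) / n
    dy = sum(c[1] - p[1] for p, c in zip(points_prev, points_curr)) / n
    return (dx, dy)

frame1 = [(10.0, 20.0), (30.0, 40.0), (50.0, 10.0)]
# Every tracked dot moved 5 px right and 2 px down -> so did the camera view.
frame2 = [(15.0, 22.0), (35.0, 42.0), (55.0, 12.0)]
```

A virtual camera driven by these per-frame shifts would reproduce the real camera's motion, which is exactly what keeps a composited 3D element anchored to the filmed space.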

Chroma Key
Chroma key, more commonly known as green screen (which, by the way, can also be blue, or pretty much any colour you want it to be, as long as it is different from the scene elements), is used for the digital replacement of the scene background in postproduction. A colour range that the program ignores during visualization is determined, which means that it can be digitally deleted or replaced with a different background. Nowadays, chroma key is widely used, one example being television weather forecasting (Izmofx, 2019; MasterClass, 2021; MassiveVFX Studios, 2020).
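At its simplest, keying is a per-pixel decision: pixels whose colour falls inside the keyed (here green) range are replaced with the corresponding background pixel. The threshold and the tiny three-pixel "images" below are illustrative only; production keyers work in other colour spaces and produce soft, partial mattes rather than a hard yes/no choice.

```python
def chroma_key(foreground, background, threshold=100):
    """Replace strongly green RGB pixels with the background image."""
    out = []
    for fg_px, bg_px in zip(foreground, background):
        r, g, b = fg_px
        # A pixel is keyed out when green clearly dominates.
        is_green = g > threshold and g > r and g > b
        out.append(bg_px if is_green else fg_px)
    return out

fg = [(200, 30, 40), (10, 220, 15), (10, 220, 15)]  # actor, screen, screen
bg = [(0, 0, 255), (0, 0, 255), (0, 0, 255)]        # new sky behind the actor
```

The actor pixel survives while the two green-screen pixels are swapped for the new background, which is the whole trick behind the weather-forecast example above.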

Rotoscopy
The idea of rotoscopy is similar to that of the chroma key, with one major exception: there is no green or blue background. Thus, it is necessary to create a mask over the object we want to keep on screen. That mask then follows the object's movement with the help of computer-generated points or through the manual work of the mask editor. Everything around this mask is digitally replaceable (MassiveVFX Studios, 2020; Animation Boss, 2019; Future Learn, 2018).
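The difference from keying can be shown in one line of logic: instead of testing a pixel's colour, a hand-drawn (or tracked) mask says which pixels belong to the object, and everything outside the mask can be swapped for new material. The pixel values and mask below are invented for illustration.

```python
def apply_roto_mask(frame, mask, replacement):
    """Keep pixels where the mask is 1, replace the rest."""
    return [px if m == 1 else rp
            for px, m, rp in zip(frame, mask, replacement)]

frame = [(90, 90, 90), (200, 180, 160), (90, 90, 90)]  # object in the middle
mask = [0, 1, 0]                                        # drawn by the artist
new_bg = [(0, 0, 0), (0, 0, 0), (0, 0, 0)]
```

Because the mask must follow the object through every frame, rotoscoping is far more labour-intensive than chroma keying, which is why it is reserved for shots where no coloured screen was available.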

Matte Painting
Using the matte painting technique, we can create a background drawn on a two-dimensional surface (using either 2D or 3D techniques) that gives us a sense of a realistic environment. The background can stand physically on the set during the shooting itself, or it can be created digitally and then implemented into the recording (Izmofx, 2019; StoryboardThat, 2020).

Digital Compositing
Digital compositing means using all or just some of the previously mentioned techniques to assemble different parts of shots or images into a single, visually uniform scene. Various software packages allow us to replace parts of one clip with parts of another clip (replacing parts of the same clip is also possible). This technique is also known and used in other fields of digital art, such as graphic design or photography (where it is known as photomontage) (Izmofx, 2019; MasterClass, 2021).
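The mathematical core of layer-based compositing is the standard "over" operation: a foreground layer is blended onto a background using the foreground's alpha (coverage) value, pixel by pixel. The sketch below shows that per-pixel formula in isolation; the colour values are illustrative.

```python
def over(fg, fg_alpha, bg):
    """Composite one RGB pixel over another: out = a*fg + (1 - a)*bg."""
    return tuple(round(fg_alpha * f + (1.0 - fg_alpha) * b)
                 for f, b in zip(fg, bg))

# A half-transparent white element over a black plate gives mid grey:
pixel = over((255, 255, 255), 0.5, (0, 0, 0))
```

Stacking many layers means applying this operation repeatedly from the bottom layer up, which is exactly what node- and layer-based compositing programs do under the hood.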

Old school effects
Have you ever wondered how (or perhaps where) it all began and how the field of visual effects has evolved to the level we know today? Well, this chapter holds the answers to your questions.
At the beginning of the development of film art, special effects were strongly intertwined with visual effects. The two concepts only began to differ with the introduction of digital technologies and the proliferation of computer graphics. Various optical tricks that today are considered visual effects were used for different purposes in films at that time, but back then, they were still referred to as special effects. Therefore, at the beginning of this chapter, when we talk about special effects, we are also talking about visual effects, as both terms meant the same thing at that point in film history.
The year was 1895 when a gentleman named Alfred Clarke used a certain trick that, at least to today's spectator, might seem rather simple. While filming the beheading scene in his short film The Execution of Mary, Queen of Scots (directed by Alfred Clarke, 1895), at the moment when the executioner raised his axe, Clarke stopped filming and ordered the entire cast to hold still. Meanwhile, the actor who portrayed Mary was ordered to withdraw from the stage so that a doll could be placed in his place. Clarke then continued filming, and the executioner was able to cut off the doll's head. This so-called stop trick went down in history as the first film special effect (UKEssays, 2017; Gradišnik & Parkinson, 2000; Nuts Computer Graphics, 2018).
Around the same time, that being the end of the 19th century, a Frenchman named Georges Méliès, otherwise the owner of the Robert-Houdin Theater in Paris, where magic performances were staged from 1888 until the outbreak of World War I, accidentally discovered the same stop-trick effect. While filming on the streets of Paris, his film roll got stuck in the camera, and when he later developed it, he noticed that at one point the events in the footage changed instantly. He later found that he had accidentally filmed one shot over the other; thus, a particular visual effect was made. Méliès then began to uncover the possibilities of such tricks and continued to make films. He made almost a thousand short films between 1896 and 1914, but unfortunately, only 140 are preserved to this day. In making his films, he managed to develop various filming and processing techniques, such as fading and darkening, superimposing and dissolving scenes, as well as hand-painting the film. Through the development of these techniques, Méliès blurred the boundaries of how reality can be portrayed in films. That is well illustrated by his probably best-known work, A Trip to the Moon (Le voyage dans la lune, directed by Georges Méliès, 1902), which includes some extremely surreal images. Because of the use of special effects in this particular film, which includes a combination of real shots and animation, as well as background painting, Méliès is considered a pioneer of the science-fiction genre in film. He is also known as one of the first filmmakers to use storyboards to help him plan and create his films (UKEssays, 2017; Gradišnik & Parkinson, 2000; Nuts Computer Graphics, 2018).
Let us move a little closer to modern times, to a period nearer the middle of the 20th century. King Kong (directed by Merian C. Cooper and Ernest B. Schoedsack, 1933) is considered one of the pioneering productions in special and visual effects. The leading man of the special effects team was Willis O'Brien, who thought of combining separately filmed scenes in postproduction. With that, he paved the way for the technique known as background projection, which later became the film industry standard (Britannica, 2010).
In Citizen Kane (directed by Orson Welles, 1941), many scenes were also adapted through postproduction footage processing, but due to their extremely precise production, they were barely noticeable when the film was released. The man responsible for creating those subtle optical effects was Linwood Dunn, and the impressive background painting was done by the hands of Mario Larrinaga. Both had previously worked on visual effects in the production of the previously mentioned King Kong. They did a remarkable job of combining different shots and images, mainly with the use of the optical printer, a device used for combining different shots of film, which means combining all scenes that contain any kind of special effects. The technique used to combine all these elements into a stylistically uniform image, however, is known as optical compositing (FX Making of, 2014a).
Although films at that time were still mostly shot on black-and-white film stock, color films also began to gain popularity. The problem was that they were slightly more demanding in terms of visual effects, which had the greatest impact on the complexity of the background painting technique. The first major use of color background painting can be seen in the film Gone with the Wind (directed by Victor Fleming, 1939) (Nuts Computer Graphics, 2018; FX Making of, 2014b).
The main problem was the recording of background projections, as they were not bright enough for the cameras to capture them well. That was solved by the introduction of a new projection system, presented in 1940 by the Paramount studio. The technique of combined background painting, which merges scenes shot in different locations, also had to be adapted to color film. The mentioned technique was first used in The Thief of Baghdad (directed by Ludwig Berger, Michael Powell, and Tim Whelan, 1940), where a blue-colored screen was used as the background on set, and the background environment seen in the film was then added later in postproduction. The Thief of Baghdad won an Oscar for special effects, one reason being the introduction of the technique in question (Nuts Computer Graphics, 2018; FX Making of, 2014b).
Despite progress and adjustments, the processing of color film still posed a significant financial burden for both filmmakers and production studios, so the use of black-and-white film stock still prevailed over the use of color stock (Nuts Computer Graphics, 2018; FX Making of, 2014b).
The line between special and visual effects became more apparent in the 1960s, when the first Oscars were awarded specifically for visual effects. Two films were nominated: Cleopatra (directed by Joseph L. Mankiewicz, 1963) and The Birds (directed by Alfred Hitchcock, 1963). The winner was Cleopatra, where mainly two techniques were used: glass painting and background painting. In The Birds, the technique of the so-called yellow background was used, where sodium vapor lights are used to achieve the desired effect, in this case, the flapping of the birds' wings. Another film masterpiece of this era that deserves a mention is 2001: A Space Odyssey (directed by Stanley Kubrick, 1968), which was also awarded an Oscar for visual effects. Although the visual effects in the film were made with techniques that were already known and used before, the scene worth mentioning is the one with the stargate, where we see a traveling stream of colored beams. The effect was created with the slit-scan device, an invention of Douglas Trumbull, which can capture two seemingly infinite planes. Additional effects for the scene were created using different color filters on aerial landscape photographs and with recordings of various chemical reactions (Heavy, 2010; Special FX, 2010; The 2001 Archive, 1999).
The field of special and visual effects has constantly evolved over the years, making the effects more and more refined. The biggest technological change in the field occurred with the introduction of computers for the purpose of processing images. Computers not only made it easier for filmmakers to work by digitally editing film material, but they also made it possible to create digital 3D models that could be convincingly implemented into real shots (Gradišnik & Parkinson, 2000).
That being said, in 1972, Ed Catmull and Fred Parke took an important step in the field of computer graphics: they created the first computer-generated three-dimensional model. The hand model was made from a cast of a real human hand, which was then marked with polygon points. A wireframe computer 3D model was then created, which was, with the help of completion and shading, transformed into a fairly realistic three-dimensional model of a human hand. Thus, the two men paved the way for the implementation of digitally created elements in film production (Knific Košir, 2018).
It was not long before a small revolution in visual effects happened, the reason being one of the first sci-fi saga masterpieces of popular film culture. The year was 1977 when Star Wars: Episode IV - A New Hope (directed by George Lucas, 1977) shook the public with breathtaking visual effects. John Dykstra, the visual effects supervisor, designed a completely new computer-controlled moving camera system for the film, called the Dykstraflex, which allowed Lucas to take accurate repetitions of shots with various elements, which were then assembled in postproduction (Thrillist, 2016). Lucas used the same recording method in the sequel, Star Wars: Episode V - The Empire Strikes Back (directed by Irvin Kershner, 1980), which contains stop-motion animation of AT-AT walker models placed in the snowy landscape where the battle takes place. In addition to improving existing techniques, Lucas's effects company Industrial Light & Magic (ILM) developed many filming and processing techniques that helped create the Star Wars saga. We can mention the use of VistaVision cameras for shots that were later used as scene backgrounds, or for shots that were used to replace the blue screen. VistaVision cameras were an older type of camera that shot on 35 mm film, but the film ran through the camera horizontally instead of vertically, allowing the director to capture larger areas with miniatures and other elements, which subsequently meant better scene composition. And that is one of the reasons why, while watching the film, a spectator gets the feeling that the Death Star is actually floating in space, although most of the Death Star scenes were really filmed in the studio's parking lot (Thrillist, 2016).
Another film worth mentioning is one from the Star Trek franchise, Star Trek II: The Wrath of Khan (directed by Nicholas Meyer, 1982), one of the first films to make massive use of computer graphics to create visual effects. A real star database was used for the creation of a digital simulation of star fields. In addition, the film shows the first fully computer-generated scenes, one of them being the opening scene of a star system flight simulation and the other, probably the more famous one, being a demonstration of the Genesis device's effect on an abandoned planet, which also introduced the use of the computer-generated particle system technique.

Digital age
With the beginning of the widespread use of computer-generated visual effects in the film industry, quite a few films entered the annals with a significant impact on the development of the industry. In the 1990s, the film industry graced us with real treats. Visual effects became more and more realistic, as seen in movies like Terminator 2: Judgment Day (directed by James Cameron, 1991), for which the team created the liquid metal for the T-1000 robot. For the movie Jurassic Park (directed by Steven Spielberg, 1993), they created a realistic T. rex using a real model and a computer. And that is not all. In 1995, the first fully computer-generated feature film was released, the 3D animated film Toy Story (directed by John Lasseter, 1995). At the end of the millennium, the public was driven mad by The Matrix (directed by The Wachowskis, 1999), using a combination of slow motion, camera rotation, and digital projectiles that just a year earlier, in Blade (directed by Stephen Norrington, 1998), had seemed almost absurd. The new millennium then gave us a slew of visual masterpieces, beginning with The Lord of the Rings trilogy (directed by Peter Jackson, 2001-2003). The production of the trilogy developed a good portion of the software that is still used in computer graphics today, and furthermore, some scenes were created entirely digitally (WatchCulture, 2014).
All this brings us back to Star Wars. Everything that had happened in the field of visual effects since the last film in the saga was important for the sequel. For the fourth film in the saga, Star Wars: Episode I - The Phantom Menace (directed by George Lucas, 1999), one of the first fully digitally created characters was produced. Jar Jar Binks represents a milestone in motion capture technology, computer animation, and rendering, despite the endless derision of the character's immaturity that continues to this day from fans of the saga. Jar Jar Binks, however, is not the only innovation in The Phantom Menace. For the racing scenes with floating race cars, it was necessary to depict a vast desert territory that ILM could not fully create with a computer alone (Thrillist, 2016). Thus, they first photographed miniature versions of rock formations from different angles and then projected them onto simple geometric shells of the same formations in postprocessing. This method allowed the camera more room to move, and a high degree of persuasiveness was achieved this way as well. It is hard to believe that the fifth film in the saga, Star Wars: Episode II - Attack of the Clones (directed by George Lucas, 2002), was only the third film in history to be shot entirely with digital cameras. With Star Wars, ILM paved the way for the development of visual effects. The company not only developed technologies to create more convincing effects but also changed the way of working in this field. Visual effects were thus no longer just part of a film but began to be seen as a field in its own right, requiring a coordinated set of different trades to function successfully (Thrillist, 2016).
The last twenty years have offered us a number of films with visual effects, most notably Avatar (directed by James Cameron, 2009), which was created almost entirely using computer graphics. For some scenes, only the actors' movements were captured; everything else was created entirely digitally. Also, most of the movies with Marvel heroes, which have become very popular in the last decade, are known for a large number of visual effects, as this way of working has become faster and cheaper, and ultimately safer, with the development of computer graphics (WatchCulture, 2014).

Examples of good practice
Here are some examples of good practice. And no, it is no coincidence that almost all the examples shown come from the Lord of the Rings film trilogy. These films, after all, represent a turning point in digital graphics and certainly set a milestone in showing what could be achieved with them in terms of visual effects.
The Lord of the Rings film trilogy, based on the book trilogy of the same name by J. R. R. Tolkien, became a cultural phenomenon shortly after the release of the first film in 2001. The reason for this was not only the masterfully told story on screen but also the visual presentation as such. Groundbreaking software was created in the field of visual effects to meet the needs of the film, allowing for a visual spectacle that few could have imagined at the beginning of the third millennium. Below, we present some computer-generated characters from the trilogy that were important not only to the advancement of visual effects creation but also to the advancement of computer graphics in general. It should also be mentioned that Weta Digital mainly used the computer programs Maya and ZBrush for its needs in creating the characters and scenes presented below (Aitken et al., 2004).

Gollum
You may have liked him, you may have hated him, but he certainly did not leave you indifferent. Everyone familiar with the Lord of the Rings film saga knows Gollum, who is probably the most visually stunning character in the film trilogy. And he is undoubtedly the character with whom the studio Weta Digital changed the way computer-generated effects are used in film. Gollum represents a cinematic milestone because the creators of the visual effects had to use a computer to create a character that the audience could perceive as real. And, of course, they succeeded (Animation World Network, 2004; Vulture, 2018).
By the time the first film in the Lord of the Rings trilogy hit theaters, computer-generated characters had been featured in many films, but none of them were on the level of Gollum. The high level of detail with which he was portrayed was meant to make the audience actually see him as one of the actors, and not just another scary monster, even twenty years after the film's release. Several systems within computer programs were improved or even created for the needs of Gollum's depiction, including the object surface shader system now widely used in 3D programs, and with it the visualization of materials that scatter light below the surface and between surfaces (subsurface scattering), which even won the film crew an Oscar for technical achievement. However, let us take things in order, starting with the movement (Aitken et al., 2004; Animation World Network, 2004; Vulture, 2018).
Gollum's movement was created using the motion capture technique, which is characterized by the fact that the movement of a character created in the computer follows the movements of a real person. In the first part of the trilogy, The Lord of the Rings: The Fellowship of the Ring (directed by Peter Jackson, 2001), the movement of the body was captured using a special suit, while the facial expressions of the 3D model were made entirely by hand by animators who followed actual footage of the actor's facial expressions (Aitken et al., 2004; Animation World Network, 2004; Vulture, 2018).
Later, with the help of new technologies, it became possible to capture facial expressions as well, but it was still necessary to adjust them manually. The computer model of Gollum used in the first film was actually a scan of a puppet model that was then edited in computer programs. However, for the purposes of the second film, The Lord of the Rings: The Two Towers (directed by Peter Jackson, 2002), they found that the scanning technique had too many shortcomings. Therefore, they recreated a computer model that met the requirements for the movement of the character. They also improved the model's facial expressions, since Gollum appears several times in this film, and the story also requires a more sophisticated emotional expression of the character. For this purpose, wrinkles and pores were painted onto the character, and the wrinkle planes of the facial skin, especially on the forehead and around the mouth and eyes, were adjusted (Aitken et al., 2004; Animation World Network, 2004; Vulture, 2018).
The successful portrayal of Gollum's movement in the second film of the trilogy prompted the creators to completely revise the control rig of the Gollum computer model for the third film, The Lord of the Rings: The Return of the King (directed by Peter Jackson, 2003). The subdivision surface technique became the standard for building 3D models, allowing greater control over the construction, texturing, and movement of a computer character's body, something the creators took advantage of. One of the reasons Gollum looks so authentic is the structure of the computer model's anatomy. The skeleton that allows the animation of the movements is actually modeled after real human bones. Not only was the skin, or the surface of the model, attached to the fabricated skeleton, which was common until then, but the muscles were as well, taking into account the anatomy of the body, since the figure of Gollum has extremely thin skin through which its shape can be seen (Aitken et al., 2004; Animation World Network, 2004; Vulture, 2018).
The extraordinary attention to detail in the visual representation of Gollum did not stop there.The material used for the skin was created completely from scratch.The use of semi-transparent materials for the passage of light made the skin on the model appear slightly translucent, allowing anatomical details such as veins to be seen on the model.Various scratches and bruises were also added to Gollum's skin, as well as a bloody nose and a bump on the head.One of the more difficult areas of creating Gollum was the hair, mainly because it is very sparse, meaning that any flaws in its structure and animation would be noticed sooner.Until then, the simulations of hair dynamics within the programs were quite limited, so it was necessary to find a separate solution for practically every scene.Additional problems were caused by the scenes where Gollum is standing on the edge of a cliff, as a wind simulation had to be added.However, the creators were able to overcome this obstacle as well by splitting the individual hairs into smaller segments that functioned more autonomously, thus preventing them from intertwining.All the little details, including the nail dirt, that the creators had in mind when creating Gollum allowed the audience to see him as a flesh and blood character, one of the actors, even though he was created entirely digitally.Gollum, however, deserves respect not only from the point of view of a successful portrayal of a character that can be seen by an ordinary viewer, he deserves it mainly because his revival on the big screen required the work of a great number of film geniuses, both in the field of 3D computer modeling and animation and in the field of visual effects (Aitken et al., 2004;Animation World Network, 2004;Vulture, 2018).
But the story of visual effects in the Lord of the Rings trilogy does not end with Gollum. Other digitally created characters also appear in the films, and bringing them to the screen required a great deal of ingenuity on the part of the creators.

Crowds
To display the battles with a large number of participants, Weta Digital developed its own software: a crowd animation program called Massive.
The program animates autonomous agents (i.e., the individual building blocks of the crowd) step by step using various aspects of artificial life: it assigns each agent an artificial brain capable of basic logical reasoning. The agents are also given a basic ability to see and hear so that they can respond to their immediate environment and to each other. In this way, scenes quickly create the sense that each agent is acting individually and that patterns of behavior are not repeated, allowing for a higher degree of realism. The creation of a new agent starts with the creation of a so-called motion tree. This tree determines the movement options available to the agent at any given time; a typical combat agent has 200 different actions available. The motion trees are then gradually divided into individual actions, and the motion for these is mostly obtained by capturing the motion of real people; where this is not possible, classical animation with keyframes is used. In addition to capturing motion for a single agent, secondary dynamics, i.e., hair and cloth motion, are also obtained. The variability of the terrain is also taken into account, determined by three components: Level or Plane, Maximum Depth, and Maximum Height. The motion capture recordings are first digitally processed and then inserted into the Massive program according to the motion tree. Here, the motion of the joints in the limbs is converted into inverse kinematics, which is used to animate the movement. The latter allows agents to vary in size, adapt to the terrain, and coordinate motion within the same scene. Agents are composed of basic geometric components, and they are also assigned variations in their size and component design. All this information is then written to a file for each creature, which contains all possible variations in geometry, textures, and color shades of a given agent. Movement in an individual scene is then determined with the help of the agent's brain, which uses basic logic and the senses to decide for itself how it will react to its environment. An extension of this system are dual agents, used for horses and riders. Despite the interdependent actions of such agents, each of them is still autonomous. The rider's brain determines the horse's position as well as the effect of the horse's movement on the rider's movement, but the rider has no influence on the horse when moving on it, for example, when shooting an arrow in combat. Using the captured movements of the horses, the creators built a library of 200 to 300 different movements. For some scenes, they also used multi-body agents, in which a single brain controlled a group of 25 soldiers. And another interesting fact: in the last part of the trilogy, 400 scenes were created with the help of the Massive program, in which over 200,000 agents acted (Aitken et al., 2004).
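The core idea of an agent brain choosing actions from a motion tree can be sketched very simply. The following is a heavily reduced, hypothetical illustration in the spirit of Massive, not its actual architecture: the motion tree, action names, and decision rules are our own inventions.

```python
import random

class Agent:
    """A minimal autonomous crowd agent: a tiny 'brain' of rules picks the
    next action, but only transitions allowed by the motion tree are taken."""

    # Hypothetical, much-reduced motion tree: current state -> allowed next actions.
    # (A real Massive combat agent had around 200 actions available.)
    MOTION_TREE = {
        "idle":        ["walk", "look_around"],
        "look_around": ["idle"],
        "walk":        ["run", "idle"],
        "run":         ["attack", "walk"],
        "attack":      ["run"],
    }

    def __init__(self, name, rng=None):
        self.name = name
        self.state = "idle"
        self.rng = rng or random.Random()

    def perceive(self, enemies_nearby):
        """Basic 'senses': propose a desired action given the surroundings."""
        if enemies_nearby and self.state == "run":
            return "attack"
        if enemies_nearby:
            # Escalate toward running if the tree allows it from here.
            return "run" if "run" in self.MOTION_TREE[self.state] else "walk"
        # Nothing nearby: wander among the currently allowed actions.
        return self.rng.choice(self.MOTION_TREE[self.state])

    def step(self, enemies_nearby=0):
        desired = self.perceive(enemies_nearby)
        if desired in self.MOTION_TREE[self.state]:
            self.state = desired
        return self.state
```

Because each agent carries its own state, senses, and (optionally seeded) randomness, a crowd of such agents naturally avoids visibly repeated behavior, which is the effect the paragraph above describes.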

Digital duplicates
Because of the difficulty of recreating the action as envisioned by Peter Jackson, the director of the trilogy, a few scenes had to be created entirely digitally. This required digital duplicates of the actors, which were built with the help of mappings and photographic references. The digital models had to be improved during the making of the films, as some scenes also required close-ups in which much more detail could be seen. Digital scenes were made in many layers, as they were very demanding due to everything happening in them. The creators recorded or created several different layers separately, i.e., the scenery, the main characters, and the characters in the background, which they then combined into one scene with the help of digital compositing (Aitken et al., 2004).
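Combining separately produced layers into one frame rests on the standard Porter-Duff "over" operator. The sketch below shows it for single premultiplied RGBA pixels; it is a generic illustration of layer compositing, not the specific software used on the trilogy.

```python
def over(fg, bg):
    """Porter-Duff 'over': composite a premultiplied RGBA foreground pixel
    onto a background pixel. The foreground's alpha determines how much
    of the background still shows through."""
    fr, fgreen, fb, fa = fg
    br, bgreen, bb, ba = bg
    inv = 1.0 - fa  # fraction of the background that remains visible
    return (fr + br * inv, fgreen + bgreen * inv, fb + bb * inv, fa + ba * inv)

def composite(layers):
    """Stack layers back to front (e.g., scenery, background characters,
    main characters), each composited 'over' the accumulated result."""
    result = (0.0, 0.0, 0.0, 0.0)  # start from a fully transparent frame
    for layer in layers:
        result = over(layer, result)
    return result
```

Because "over" is applied pixel by pixel, each layer can be filmed or rendered on its own and corrected independently before the final assembly, which is exactly what made the layered approach practical for such demanding scenes.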

Conclusions
What can we expect in the next few years? In the following, we briefly summarize some trends that we are already confronted with in the field of visual effects and that we probably will not be able to escape in the future, nor would we want to.
There has never been a better time to be a visual effects connoisseur and creator. In the last decade, the number of superhero movies, science fiction movies, and other films that by their nature require a large amount of visual effects has increased, and with it the demand for people who can create such visual treats. There is also a growing demand for visual effects in advertising, educational content, and computer games. Technology is constantly improving, so we can expect new systems and programs to give us an even better visual experience. One example is the digital recreation of actors, including the resurrection of those who have already died, which usually sparks debate about the quality of such attempts. Probably the most famous examples in recent years are the digitally recreated Carrie Fisher as the young Leia Organa and the late actor Peter Cushing, resurrected as Governor Tarkin, in Rogue One: A Star Wars Story (directed by Gareth Edwards, 2016). Of course, the whole thing is still very controversial, as the question of exploiting the identity of deceased actors also arises at this point (3D-Ace, 2018; VFXV, 2017).
Despite the increased use and improved appearance of digital visual effects, there is still a belief that they simply do not look the same as practical effects, i.e., physically created gadgets, models, and mock-ups. That is probably true. And that is why the use of practical effects is returning wherever possible, while the opportunity remains to enhance them digitally in postproduction. So practical effects remain relevant (3D-Ace, 2018; VFXV, 2017).
Virtual reality simulations already incorporate many visual effects, and current trends suggest that their use will only increase as the technology develops and becomes cheaper. The logical consequence is that more and more workers will find themselves creating effects for virtual or augmented reality.