FMP DESTRUCTION: CRITICAL EVALUATION
The Final Major Project presented an exciting opportunity for me to explore new challenges and learn about various topics. The project was a creative endeavor that allowed me to expand my knowledge and skills in the field of special effects, which was an area of great interest to me. I was able to engage in a range of activities that helped me evaluate my knowledge in this field and gain a deeper understanding of the technical aspects involved. Though I faced various difficulties, this experience has been instrumental in helping me chart out my future career path, and I am now more convinced than ever that I want to pursue a career in FX.
Throughout my project, my primary objective was to gain a deeper understanding of FX concepts, specifically VEX, RBD, and Debris. VEX, in particular, was the key concept that I focused on learning. As one of the most crucial concepts in Houdini, it provided a unique perspective while working. I encountered some challenges during my self-learning process, as VEX is an extremely broad and complex topic. However, with the help of various online platforms such as SideFX, YouTube, and other websites, I was able to gradually build my knowledge and skills related to VEX.
At the outset, I embarked on a journey to familiarize myself with the rudimentary aspects of VEX, the programming language used in my project. Given my lack of background in coding, this proved to be a formidable undertaking. However, I persevered and documented every instance of VEX I employed throughout the project on my blog, providing a detailed explanation for each one.
Our study of Rigid Body Dynamics involved exploring various techniques for holding fractured geometries together. However, I encountered a significant challenge when dealing with Glue constraints, which are crucial for this purpose. Despite seeking guidance from our professors, I struggled to find a solution to this problem. Ultimately, I had to rebuild the entire system using a different geometry, which allowed me to gain valuable insight into a new technique for destroying geometry using forces. Although it was a difficult and frustrating experience, as it cost me a lot of time, I am grateful for the opportunity to learn and grow from it.
The forces I utilized in this project are metaballs. A metaball, as the name suggests, is a ball-shaped volume to which forces such as pull and push can be applied, disrupting the points and vertices it covers. This technique was really helpful for my project, as it broke the geometry exactly where needed.
As I delved deeper into the realm of debris and pyrotechnics, I found myself captivated by the intricate details and complexity of the subject matter. Although the task was challenging, it provided me with a unique opportunity to learn the intricacies of VEX code, as explained in the accompanying blog. Through trial and error, I was able to achieve the desired effect, bringing to life a stunning display of pyrotechnic artistry. It was an experience that not only enhanced my technical skills but also deepened my appreciation for the art form.
The most time-consuming part of this project was the file caching and rendering of the entire simulation. Caching the various contents, such as the fractured geometry, its destruction, the debris and its source, and the pyro and its source, cost me a lot of time and memory. It took me more than 3 days just to cache these elements. The same goes for rendering, which was the most crucial of all: each frame took around 22-25 minutes to render (250 frames in total).
Despite the various difficulties I faced during this project, it was a great experience for learning new techniques, and it will help me a lot when working in the VFX industry in the future.
FMP: DESTRUCTION FINAL OUTPUT
FMP: DESTRUCTION: TEXTURING, RENDERING
Now for the final step, i.e., texturing and rendering.
Have you ever noticed that every surface has its own texture? It’s fascinating to think about how the texture you choose and the way you use it can completely change how a space feels. For instance, smooth and shiny textures reflect more light, making a room feel cooler. But if you opt for soft, raised textures instead, they’ll absorb more light and give a cozy, warm vibe to the same space. It’s amazing how much difference such a small detail can make!
When creating the texture for the building, I opted to use the Principled Shader, as it offers a wide range of options for customization. To create a concrete texture that looks as realistic as possible, I applied a combination of roughness and height maps.
The roughness map plays a crucial role in ensuring that the concrete surface does not appear shiny. The roughness map controls how sharply the surface reflects light, making it possible to mimic the matte finish of real concrete.
In addition, I utilized the height map to add a sense of depth and dimension to the texture. By creating displacement, the height map allowed me to simulate the bumps and imperfections that are present on actual concrete surfaces.
By using these techniques, I was able to generate a highly detailed and lifelike texture that accurately represents the appearance of concrete.


In order to enhance the realism of the scene, I employed a second texture for the shadows of the entire simulation on the base surface. It was a critical step as without it, the objects in the scene would have appeared to be floating, thus reducing the visual appeal. To achieve this effect, I created a shadow matte that was applied to a grid which calculated the shadows of all the objects and their contents, resulting in a highly detailed and authentic shadowed appearance. By employing this technique, the scene was transformed into a more lifelike and immersive experience.

Rendering is a critical step in the process of creating lifelike animations that closely resemble reality. Its aim is to produce high-quality output that mimics the natural movement and texture of real-life objects. The process converts 2D or 3D scenes into a series of individual pixel-based frames or a video clip, with the end goal of producing visual graphics that closely resemble real-life objects. In short, rendering is a fundamental process in computer animation that plays a crucial role in producing animations that are as close to reality as possible.
We utilized Mantra as the rendering engine to create the scene. In order to obtain renders of each component, we set up a total of 5 separate render passes. This approach was essential, as it allowed complete control over each individual element and any necessary changes or adjustments during the compositing process. By having separate renders of every component, we can easily composite them together and create a final product that meets a high standard of quality.
We have utilized a render farm to process our output on multiple systems, rather than solely on our own. This system has proven to be highly beneficial by reducing the rendering load on our current system.

To achieve a higher quality output with Mantra’s rendering system, it is crucial to make some important adjustments. Firstly, increasing the pixel samples is necessary as it allows for a more defined and detailed output. Additionally, we can tweak various components such as SSS quality, volume quality, and min and max ray samples to obtain the best possible results.
It is important to note that physically based rendering and the addition of motion blur can greatly enhance the realism of the output. By implementing these techniques, we can create a more lifelike and immersive visual experience. These adjustments may seem minor, but they have a significant impact on the overall quality of the rendered output.

To achieve individual renders, there is a crucial step that must be followed: the use of the “Force Objects” parameter. This renders only the assigned object, allowing you to achieve precise results. However, to achieve the best outcome, it is essential to also force the other objects as mattes, so that they mask your primary object. For instance, when rendering your primary object ‘A’, if another object ‘B’ overlaps it, the matte creates a hollow part, masking the portion of ‘A’ that is not visible. It is therefore necessary to repeat this step for all components to ensure that each object is properly masked and matted. By following these steps, you can achieve a high-quality individual render of each object.








Once we have imported all the necessary files into Nuke, we proceed to call each element individually and merge them according to our specific requirements. This process involves meticulous attention to detail, as we carefully analyze each element’s position, size, and visual appearance to create a seamless and realistic final result. The outcome of this process will be showcased in the upcoming blog post, where you will see the final composite we have created.
FMP: DESTRUCTION: PYRO FX
Pyro is one of the most important elements of RBD destruction; no destruction is complete without smoke. Pyro is a volume element in Houdini which represents smoke. The most crucial part of creating such pyro is a set of points (later rasterized into volumes) carrying temperature and density attributes. Hence, the first step in creating a pyro simulation is creating its smoke source.
To create this source, we start with the same source as the debris, since smoke is created from the places where things break. We then delete all points whose velocity is less than 0.01, as we do not want smoke emitting from places where it is not required (see the sketch below). After this step, we replicate the points, as we need a larger surface rather than just a few points.
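A minimal sketch of that culling step in a point wrangle, assuming the standard velocity attribute v@v (the 0.01 threshold is taken from the description above):
// remove points that are barely moving so they do not emit smoke
if (length(v@v) < 0.01)
    removepoint(0, @ptnum);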
After this step, we add density and temperature; the VEX used for this step was:
f@density = 2;
f@temperature = 2;
where f denotes a float value, the type these attributes need to be. With the help of this VEX, the density and temperature values are each set to 2. Later, we add some noise to the temperature to create a swirly motion (a sketch of this is shown below).
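A minimal sketch of how such noise could be layered onto the temperature in a point wrangle (the freq and amp sliders are hypothetical channel names):
// break up the uniform temperature with noise to get a swirly motion
f@temperature += noise(v@P * chf("freq")) * chf("amp");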
Then, we rasterize the points to create the volume, hence the volume rasterize node. Turning the temperature, density, and velocity attributes into volumes creates our source for the pyro.

Then we come to the DOP network, where the main calculation happens: the Pyro Solver. Similar to the POP Solver, the Pyro Solver is the element which handles all the calculations in the network for creating smoke and fire.
We add a few nodes to create the smoke. These are the nodes that I have used alongside the Pyro Solver:
Gas Wind: Helped me add a wind direction and noise to the simulation.
Gas Resize Fluid Dynamic: This node controls the bounding region for the smoke, helping the solver calculate the smoke and its fields only within a given space. We can let this box track the smoke automatically or set a max bound, beyond which the solver won't calculate anything.
Volume Source: This is the node which brings in the pyro source we created earlier.
Smoke Object: The Smoke Object defines the resolution of the simulation. The lower this value, the better the result we get; for this instance, I have used a value of 0.005. The pyro simulation is extremely heavy, so we have to ensure that the system does not crash at simulation time.
Ground Plane: This node creates an artificial ground which acts as a collision barrier.
Static Object: Similar to the Ground Plane, the Static Object is used for collisions. The difference is that we can assign any geometry with a volume as a collider with this node, and that geometry can be static or deforming. For this instance, I have used the RBD geometry and its volume so that it collides with the smoke and we get a much more natural result.

In the Pyro Solver, the settings we have to look out for are:
Dissipation: this controls how fast the smoke disappears; the higher the value, the faster the smoke fades away.
Disturbance and Turbulence: These are specifically used to create noisy fields for the pyro. They are the best way to break the mushroom effect that we typically get in pyro, disturbing the smooth motion at various block sizes and resulting in a more natural, broken-up feel.
Sharpening: This helps in getting a sharper result in the smoke. With it, we get a much more natural and crisp effect.


For the pyro wind, I have also animated the wind direction, since in a natural setting outdoor smoke always has some wind turbulence.


FMP: DESTRUCTION FX: DEBRIS
Scattered, damaged, or chipped-off material from concrete is called debris. Even though it is a secondary element in a rigid body simulation, it has a major effect on the visuals. The process of creating debris is as follows:
In order to create realistic debris, it is crucial to have an accurate understanding of its source. To achieve this, we need to utilize a fractured geometry that can provide us with the necessary data to compute the velocity and generate particles. As previously mentioned, we have created a fractured geometry that allows us to simulate and observe the motion of debris.
Once we have gathered the required data, we remove any superfluous components from the geometry to optimize the file size. Then, by utilizing the Trail node, we can calculate the velocity required to create points and generate particles that accurately represent the debris. Since we have called in our fractured geometry, it comes with 2 groups: inside_building and outside_building.
When simulating the behavior of objects in a 3D environment, it is essential to follow a naming convention to avoid confusion when dealing with multiple fractured geometries. These geometries can otherwise create problems during the rigid body simulation, causing inaccuracies and glitches.
To create debris, we add a debris source after the fractured geometries are defined. This Debris Source node plays a crucial role in the whole process: it computes the velocity and creates the points needed for debris creation. This helps generate debris points based on the computed velocity, which is necessary for simulating the realistic behavior of debris.

To generate particles for the debris in Houdini, we must first import the source points into the DOP network; this is the critical setup for particle creation. The source points are brought in by the POP Source node and are then multiplied as needed using the POP Replicate node. The POP Drag and POP Force nodes are responsible for controlling the particles' movement and can be adjusted as required for the project.
The POP Solver node is where all the primary simulation takes place. It is the core of the simulation, and without it, none of the network will work. The POP Solver takes in information from the surrounding nodes and calculates the simulation, including factors like bounce, drag, weight, and density. It is responsible for ensuring the particles move realistically and behave as expected.
Once we have the particles moving as we want them, we bring them back out using the DOP Import node, and then we can move on to the rest of the simulation. This step is crucial, as it allows us to carry out the remainder of the simulation while still maintaining the particles' realistic movement and behavior.

The process which I used for making pieces for debris is as follows:
With the help of the RBD Material Fracture node, I fractured the geometry into various pieces. Then I used the For-Each node, which applies whatever operations we specify to each and every component it carries. The commands I gave were as follows:
-ch(“px”), i.e. the channel for pivot position ‘X’ (and likewise ‘Y’ and ‘Z’), as we have to drive the position and pivot so that each piece moves to the center of the viewport. This is done specifically to move the pivot of all the pieces to their center of mass; if the points have one pivot and the pieces another, we get some weird artifacts.
The second expression I used is $CEX/$CEY/$CEZ, to center the pivot for all the pieces (see the sketch below).
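One common way these two expressions sit together on a Transform node inside the For-Each loop (a sketch; px/py/pz are the pivot channel names of a standard Transform SOP, so this exact layout is an assumption):
Pivot Translate: $CEX  $CEY  $CEZ
Translate: -ch("px")  -ch("py")  -ch("pz")
With the pivot sitting at each piece's centroid and the translate negating it, every piece ends up centered at the origin with its pivot at its center of mass.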


Then I fed these pieces into the node named attributewrangle2 (info below).
After the DOP Import node, as there were a few too many points, I deleted random points with the help of the VEX code mentioned below. This VEX code illustrates an ‘if’ statement. An if statement usually has 2 parts, ‘if’ and ‘then’: if (…this happens)
then (….this should be the result);
With that concept in mind, the code states that if a random value (rand) derived from the point number (ptnum) is bigger than the channel (chf) named “deletepoints”, then remove (removepoint) that point, deleting points at random.
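A minimal sketch of that wrangle, assuming the ratio slider is a float channel named deletepoints as described above:
// randomly drop points whenever the per-point random value exceeds the slider
if (rand(@ptnum) > chf("deletepoints"))
    removepoint(0, @ptnum);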

After this step, we tell Houdini to assign random pieces from a newly fractured box, which act as shards resembling debris, onto the points. To carry out this function, I have used the VEX code mentioned below (please refer to the random ID match).
1) float rand = rand(@ptnum);
float is the term used for fractional or floating-point numbers. This gives every point its own random floating-point value, so no two points share the same number.
2) int npts = npoints(1) + 1;
npts is the number of points (here, fractured pieces) coming in from the second input. The points we have start from 0, but there is no piece numbered 0, so we have to shift the numbers given to the points to avoid errors while instancing the pieces onto the points.
3) i@id = (int)fit01(rand, 0, npts);
This sets the id of each individual point. Here I have used the fit01 function, which remaps a given number to a different range. For example, we can take a value in the range 1-100 and remap it to 1-10; it works the same way, just with different numbers.
For this instance, we use it to remap each point's random number onto the range of fractured piece numbers available.
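Put together, the wrangle reads roughly as follows (a sketch of the same logic; the variable is renamed to r here to avoid clashing with the rand() function):
// give each point its own random 0-1 value
float r = rand(@ptnum);
// count of fractured pieces wired into the second input, shifted by one
int npts = npoints(1) + 1;
// remap the random value to a piece id used for instancing
i@id = (int)fit01(r, 0, npts);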

After this step, I copy-stamped all the pieces onto the points, and we have our debris.


To ensure a successful rigid body simulation, it is crucial to begin with a clean and error-free body. As my project is centered around destruction, I meticulously designed a basic building structure in Houdini to yield the maximum output from the simulations. I employed various tools in Houdini, such as Bevel, Boolean, Extrude, Copy nodes, attribute editors, and more, to create the building. This enabled me to craft a highly detailed and precise model that can withstand the complex simulations that will be conducted. The building structure is designed to be robust and sturdy, which makes it an ideal candidate for simulating various types of destruction scenarios.


To begin with RBD, the first step is to Fracture the geometry and create glue constraints so that the geometry sticks together. This is the workflow of creating RBD in Houdini:
To create points for the destruction, I used the IsoOffset node, which gives me a volume inside which I can scatter points. After we connect these points to a Voronoi Fracture node, we get the desired fracture. There are various techniques by which we can create such fractures, such as:
- The Boolean technique, in which we copy cutting planes onto the created points and the Boolean operation cuts the geometry to create the fracture.
- RBD Material Fracture: In this technique, the node itself creates the points, the fracture, and all the constraints required for the RBD simulation. This is the most user-friendly and easy-to-learn technique, but the reason I didn't use it is that it is too straightforward, which would limit how many new concepts I could learn.
- Voronoi Fracture: This technique requires scattered points, as mentioned before, and creates the fracture according to the points provided. This was one of my reasons for using it, as it gave me much more control over the fractured pieces.
After this step, I had to assign active and inactive parts to this fractured geometry. This is a crucial step in RBD simulations, as it saves the simulation from calculating pieces that are not required. Hence, by grouping the points, I marked the few parts of the building that I do not wish to destroy as inactive (see the sketch below).
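A minimal sketch of how this could be done in a point wrangle, assuming the pieces to keep intact are in a point group named keep (a hypothetical group name):
// points in the "keep" group stay inactive; everything else simulates
if (inpointgroup(0, "keep", @ptnum))
    i@active = 0;
else
    i@active = 1;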
After this step, we have to assemble the fractured geometry; this packs each piece into a packed primitive, reducing the burden on the system. After this, we get our desired RBD fracture geometry.
Carrying forward to the next part: glue constraints. As the name suggests, these help glue the fractured geometry together, keeping the pieces attached rather than letting them fall apart. Glue constraints can be broken by various factors such as external forces, secondary geometry, static or deforming objects, etc. For this project we have gone with forces, which we will see in another chapter.
These glue constraints are generally created with the help of the fractured geometry, as we require the points on the edges to carry out the process. After deleting all unnecessary attributes such as color, name, etc., we add a node called Connect Adjacent Pieces, which, as the name suggests, connects neighboring pieces together. After this, in an Attribute Wrangle, I added VEX by which the constraint gets named and assigned its type. The VEX was as follows:
s@constraint_name = “Glue”;
s@constraint_type = “all”;
where s stands for ‘string’ and @ binds the value to a geometry attribute.
After this step, I added clusters (variations) to the constraints, by which I was able to have internal as well as external strengths for these glue constraints. The VEX utilised for this instance was:
i@external = @strength == ch(“threshold”);
where i stands for ‘integer’ and ch(“threshold”) reads the value from a channel parameter created on the wrangle node.
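Put together, the constraint wrangle reads roughly as follows (a sketch based on the snippets above; the threshold slider is assumed to be a channel created on the wrangle itself):
// name the constraints and make them glue constraints between all pieces
s@constraint_name = "Glue";
s@constraint_type = "all";
// mark constraints whose strength matches the threshold slider as external
i@external = @strength == ch("threshold");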




FMP: DESTRUCTION FX: VEX
My decision to choose this particular topic stems from my eagerness to learn all the different aspects of the FX field. During my personal project in the third term, I was introduced to the fascinating “Pyro Technique” in Houdini, which was an enriching experience. Building on that, I am now eager to delve deeper into the world of FX. As someone who is just starting out in the FX industry, this project will be instrumental in enhancing my knowledge about various areas such as Rigid body destruction, particles, debris, and most importantly, VEX.
VEX is a high-performance expression language that is extensively used in Houdini, a 3D animation software. It is a powerful coding language that enables artists to harness the software’s vast capabilities and work more efficiently. However, mastering VEX and coding can be a daunting task as it involves learning a complex set of rules and syntax.
VEX can be considered a fundamental part of the Houdini language, as it allows artists to create complex and dynamic visual effects, simulations, and procedural modeling. With its vast range of features and functions, VEX can be used to manipulate geometry, create custom shaders, and even write custom tools.
Taking on the challenge of learning VEX from scratch can be both exciting and rewarding. It requires patience, dedication, and a willingness to experiment and explore new ideas. As I embark on this journey, I look forward to discovering the potential of this powerful language and pushing the boundaries of what is possible in Houdini.
VEX evaluation is typically very efficient, giving performance close to compiled C/C++ code. VEX is not an alternative to scripting, but rather a smaller, more efficient general-purpose language for writing shaders and custom nodes.
One can say that VEX is loosely based on the C language, though it also takes ideas from C++.
We will learn more about VEX and the commands as we go on with the project.
FINAL MAJOR PROJECT: THE DESTRUCTION FX


