Here is the final piece of my Final Major Project. Enjoy!

For the colour correction there were some shots that required a lot of work and some brutal crunching down of the colours, and others that needed merely minor changes. To keep the video consistent I decided to pick one colour and stick to it, and I thought blue would work because it is an extremely unnatural form of light; the intriguing thing about that is that I'm trying to make something unnatural feel natural.

The macro shots needed to be severely crunched down to bring out their textures and to shift their colours dramatically towards blue, as most of them were heavily weighted towards the red channel. This was due to the lighting not being a neutral colour, but it was easily rectified in colour correction. Harry was a challenge to colour correct because I needed to overexpose the white room shots to hide the small details on the walls and make the room look endless. That meant I couldn't severely crunch down any highlights, which became a problem because Harry's face had gone quite red from the heat of the lighting. However, by finding a balance between the greens and the blues and curving the mids to highs, I managed to reach a reasonable colour correction without compromising the look of the white background.

For the animation shots the colour correction was extremely subtle; the only thing I changed was the RGB channel, just to crunch down some of the overexposed edges from the renders.

Overall I think I managed to keep the colour correction quite consistent throughout the piece. Judging from people's reactions to the consistency of the video in terms of colour and how the shots work together, there has been no negativity so far.

The editing was in fact one of the most straightforward parts of finalising the project. I had already made an extremely rough edit before I started the heavy post-production, which meant that for the whole time I was working on post I had the flow of the edit in mind.

I wanted the shots to be rather punchy and choppy, with a fast tempo to keep up with the song. I needed to find the balance between the audience being able to identify what's going on and getting nice quick cuts between shots. Overall I wanted the edit to have moments of building up and then descending into a train wreck of high-tempo clips.

The final macro shoot involved using ferrofluid, as on the specialist project, to represent the liquid spheres. However, this time I took a much different approach to capturing its movement: last time I was more interested in its spiky patterns, whereas this time I was more interested in its smooth, fluid surface movements. The shots complemented the rest of the footage extremely well and made for great cutaways, while giving viewers that slight question of whether they are CGI or real shots.

This time I decided to go all out and use a large amount of ferrofluid to capture those surface ripples. I simply used a normal white plate and some magnets and got to work manipulating some currents into it. It was shot at a medium ISO of around 400 on the macro lens once again.

The design of Harry's shots was obviously something I found to be a challenge: getting his movements right, making him live within the scene, making the fake backdrop react to his movements and truly making him feel lost in the world of “Synapse”.

The 3D Landscapes

Using camera data orientated in Boujou, I managed, with a lot of effort, to place Harry in these worlds with 3D depth. Most of the effort went into constructing the scenes from images and camera depth of field, while also using After Effects' 3D light system and its shadow mapping. The gravel was bump mapped by combining a black-and-white pre-composed layer of itself with CC Glass and its bump map feature, giving it fake definition and shadow depth. The lens flares are composited in 3D space as well, so they follow the 3D light patterns.

The lightning or “Synapse” strikes

The synapse strikes represent the inner workings of the human mind and its neurochemical synapses. They were done using Trapcode Particular and 3D Stroke, as on Eye of the Storm. I used null objects to draw the multiple paths the lightning would follow, making sure Particular had no velocity on its emitters so that as they followed the null path they would leave the lightning trail behind them. The colour evolves over life as well and has a soft glow to give it that light effect. The big strike at the end of the composition is the 3D Stroke: I copied the null's keyframes into 3D Stroke's mask path and then animated its evolution, colour, opacity and taper settings. Also, when the strike hits I usually composited a lens flare on the impact point just to emphasise its brightness.
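
As a rough illustration of that setup (the layer name below is an assumption for the example, not something from the project), the Particular emitter's Position XY can be expression-linked to the animated null, so the zero-velocity particles simply trace out the null's path as the lightning trail:

    // Applied to Particular's emitter Position XY property.
    // "Strike Null 01" is a hypothetical null animated along the strike path.
    thisComp.layer("Strike Null 01").transform.position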

Liquid Spheres

These liquid spheres were not done in Cinema 4D; they were actually done in After Effects. Using CC Sphere, I mapped them with the same sort of obsidian texture I used within Cinema 4D, then added a Turbulent Displace to give them that "globby" effect, and finally parented them to Trapcode Particular to animate their physics.
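
As a small sketch of how that constant surface churn could be driven (my own setup was animated by hand; the expression route below is just an assumed alternative), Turbulent Displace's Evolution property can simply be tied to time:

    // Applied to Turbulent Displace > Evolution on the CC Sphere layer.
    // 60 degrees per second is an illustrative rate, not a project value.
    time * 60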

Optical flares

These were added using Video Copilot's Optical Flares plugin, which lets you create and design your own flares for the piece. Using it really set the mood within the shots and made it seem like Harry is in another world. When I designed the flares I wanted them to be energetic and quite apparent when they came onto camera, but I also wanted them to flow with the camera and not distract the audience too much from the more physical objects within the shot.

Dust

The floating dust was merely a subtle addition, put in to make the shots feel quite spacey and to give them more 3D depth; this becomes emphasised when I turn on the depth of field settings on the cameras.

Camera movement without Boujou

These shots were tricky because I had to animate some of them freehand. To match the shaky camera Harry was initially filmed on, I usually apply a wiggle expression to the camera's orientation and tweak it until it fits the shot's movement. Combined with depth of field from the camera, this makes the scene work and feel more organic, but it still required keyframing at certain points to match Harry's movement and the shot's overall orientation.
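
For reference, a minimal version of that wiggle setup might look like the snippet below; the frequency and amplitude values are only illustrative starting points that get tweaked per shot:

    // Applied to the comp camera's Orientation property.
    // wiggle(frequency in Hz, amplitude in degrees) - the values here are per-shot guesses.
    wiggle(1.5, 2)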

Making Harry Float

This was a tough one, but not unachievable. For his body I bent his spine using CC Bend to make sure it didn't look so flat that it suggested a table was there, and I also added a slight Turbulent Displace to the fabric on his back, using a mask, so that it again didn't look too flat. I then keyframed the movement and the rate at which he was rising, and the rest of the shot spoke for itself. In terms of placement I didn't want him directly in the centre of the frame, as I wanted it to still feel quite organic rather than looking symmetrical.
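
The hover itself was hand-keyframed, but purely as a hedged alternative sketch, a slow sine drift added on top of the layer's position can rough in the same gentle bob; the amplitude and rate below are invented for illustration:

    // Added to the Harry layer's Position, on top of any existing keyframes (via value).
    // 8 px amplitude and 0.4 Hz are illustrative values only.
    amp = 8;
    freq = 0.4;
    value + [0, amp * Math.sin(freq * time * 2 * Math.PI)]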

Overall, a lot of Harry's shots were built around him. I wanted him to be just as much a part of the animations as they were a part of him, so I felt they needed to interact in movement. Harry himself was extremely energetic on the days of filming and I needed the animations to keep up with him, and I believe I managed to succeed in doing that.

Another crucial aspect of the project was, of course, keying the green out of the shots. This started off as an easy task, but once I got towards some of the more difficult shots, with quite a bit of green spill on Harry's edges, I decided to use Adobe's Dynamic Link so that I would work solely on the rough cuts I would be using in the final edit, rather than keying out a lot of footage that I might not even use in the end. More or less, the green itself was absolutely fine; a clean matte green worked beautifully when it came to its removal, but the edges of Harry became a concern.

Crunching down the white and black channels was the biggest priority when keying him out of the background. Unfortunately, in some of the shots there was so much spill from the green screen itself that I decided to go even further and consider rotoscoping wherever I wasn't happy with the results.

After Effects now has its own Roto Brush facility, which makes the process of rotoscoping a lot faster. It predicts where the subject is, much like the magic wand in Photoshop, and rotos the specific object out. Unfortunately you have to make it learn as it goes, which means still going through every single frame and correcting it. Once the subject had been completely rotoscoped I refined its matte and added motion blur to its edges to make Harry's movements look a lot more realistic. A problem a lot of people forget about with rotoscoping, or even keying, is that keys and rotos can strip out the sampling information between frames on a moving subject, sometimes making the result come out looking too clean, which then makes the scene look very fake. The green screen itself also helped Roto Brush figure out which parts I was planning to key out, which really saved me time; however, this was still one of the most time-consuming parts of the project, amounting to almost five solid days.

For most of the shots I had planned on getting to know the software Boujou and its matchmoving facilities. The software itself worked out brilliantly, but I came across a fatal error with many of my shots: they had no tracking markers! This was the point where I nearly lost my head with the project and called it a day. However, I realised that a lot of the wide-angle shots could be salvaged because the objects outside the green screen had been captured.

So the first step of the process was to track the scene. Boujou itself has an auto tracker, but sometimes it needs some guidance to help it solve the camera orientation, so I would usually run it at least twice on a shot; in between passes I would mark particular trackers that stayed on track and then run the Boujou track again. I would also have to draw an animated mask around Harry to make sure the markers wouldn't track him, otherwise the camera solve would be completely pointless as he would confuse the 3D orientation. Afterwards I let it camera solve to figure out the 3D camera orientation, which I could then export to LightWave and from there to Cinema 4D. At this point I came to the conclusion that I should export the 3D data to After Effects instead; on the grounds of timescale and sticking to what I know, I believed I could still create consistent CGI shots within After Effects that matched the macro shots and animations.

Although I was happy with the Cinema 4D footage in itself, it still needed the sprinkles on top to really add some punch to its appearance, so I decided to look into Cinema 4D's workflow into After Effects. First I needed to add a compositing tag to the particular objects I wanted to control in the scene from After Effects. That meant adding compositing tags to the lighting and setting them to buffer 1. This separates them from the rest of the composition and renders out the 3D data with the footage.

This gives me complete control over the lighting within the scene and lets me track optical flares to the lights without having to animate them with keyframes; however, the flares still have to be designed and their evolution values animated. The camera from Cinema 4D also gets brought over to After Effects, which becomes extremely useful for creating realistic movement within the dust particles in Trapcode Particular.
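
One hedged way to do that kind of keyframe-free tracking, assuming a 3D null sits at (or is parented to) the imported light, is a layer-space expression on the 2D flare layer's position; the null name here is hypothetical:

    // Applied to the 2D flare layer's Position.
    // "Flare Null" is a hypothetical 3D null placed at the imported C4D light.
    n = thisComp.layer("Flare Null");
    n.toComp(n.anchorPoint)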

I wanted the Cinema 4D visuals to match what I had planned for the green screen shots of Harry, so the flares and dust kept the visuals looking consistent across the piece. The dust was created in Trapcode Particular; I wanted it to be very light and to behave like glowing dust to give the shots some added depth. I also set the camera's aperture up to 100 pixels to give the particles that bit more emphasis on the Z depth.

One of the most crucial yet tedious parts of the project was rendering the animations. I underestimated how long most of the shots would take to render and more or less almost overshot the deadline on particular shots. One of the biggest killers on the render was PyroCluster, which was creating the volumetric lighting in the shots; even on PyroCluster's lowest settings I was looking at day-long render times per shot. After speaking to my tutor Liam, he kindly tracked down a licence housed in the Graphic Design department at university, used for specialist tutorial purposes. The system was a quad-core i7, which would seriously help it not to bottleneck when rendering the frames. So Richard Hurst, who is in Graphic Design, kindly let me use this system; however, we came across a snag.

At first the system kept shutting itself down after a period of time because of the automatic shut-down preferences put in place by the ICT department at university. Rich managed to arrange special permissions for me so that the system would not shut itself down, but for some unknown reason this didn't stop it shutting down after half an hour. So I tried quite an old trick: open up Photoshop, make a change to a file and don't save it. The system then attempts to close Photoshop before Cinema 4D, and the shut-down process stalls at Photoshop's save prompt, leaving Cinema 4D to run all night.

After this problem was solved I came across another one. Rich had started to become concerned about the temperature of the Mac Pro, as it had become so hot we couldn't even touch it. After talking to ICT we decided to shut it down for the night and restart it early in the morning. Stopping the render halfway through wasn't a problem, as I had luckily thought ahead and rendered the files out as PNG sequences.

When the Easter bank holiday came around I had to shut down the render on their system completely, which left me with my home system running all night, but again there was another problem. I found that the CPU had started overheating to temperatures of 95 degrees, so I borrowed a desk fan from Phil and took the shell off my computer to keep it cool. It then stayed at around 65 to 70 degrees, which was a relief.

For the render settings it was 1080p at 25 fps, output to a PNG sequence.

An important aspect of good animation is, of course, making sure the camera angles, movement and positioning are up to standard. I wanted the camera to flow with the animations and to keep them consistent with the rest of the footage in the project. I also wanted the camera to take a macro perspective on things so that the viewer still loses their perception of scale whilst watching the video. This was one of the techniques used previously on the specialist project, but it was interesting to see it come to life in Cinema 4D for this video.

The way to get such smooth camera movement was almost the same technique I applied to the attractor objects within the composition: I parented the camera to a spline and keyframed its position along the path, setting the spline's intermediate points to uniform to keep the shot nice and smooth. On a lot of the shots, however, I used the target camera facility instead of the normal camera. The target camera has a directional null that acts as a permanent focus point, which means I can also attach that to a spline and animate it panning whilst tracking.

In most of the shots I wasn't afraid to get up close to the objects, but that meant being a bit more careful about particular textures, as of course the closer you are, the less detail there is on a single object. By putting most of the textures at a high resolution, typically 3,000 by 3,000, I was able to keep that macro aspect in the shots. However, this meant a serious increase in loading times, especially for the liquid spheres, because of the constant displacement on their textures.
