Thursday, October 20, 2011

Tagline

Project complete! It's been a while since my last post, which was all about the concepts for our projection mapping/augmented reality project. It's been a long road, but I am overall happy with the result, which you can see in a video below:



Above is our video, stitched together, where you can see all three parts. In the actual space, though, there are three separate videos playing on three separate walls, like this:



The program we used to project the videos is called VPT. It can play multiple videos at the same time, and can skew and stretch each one to fit surfaces that aren't flat.
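That skew-and-stretch mapping is usually done as a "corner pin": you drag the four corners of the video onto the four corners of the wall, and the software solves for a projective (homography) transform between them. As a rough sketch of the math involved (not VPT's actual code; the function names and test coordinates here are made up for illustration), assuming pure numpy:

```python
import numpy as np

def corner_pin_homography(src, dst):
    """Solve for the 3x3 projective transform that maps the four
    src corners exactly onto the four dst corners.
    8 unknowns (h33 is fixed at 1), 2 equations per corner pair."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, point):
    """Map one (x, y) point through the homography
    (divide by w to undo the projective scale)."""
    x, y = point
    u, v, w = H @ np.array([x, y, 1.0])
    return (u / w, v / w)

# Map a unit-square video frame onto a skewed wall quad.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 12), (90, 5), (95, 80), (8, 70)]
H = corner_pin_homography(src, dst)
print(warp_point(H, (0, 0)))  # the corner lands on (10, 12)
```

Once you have `H`, warping the whole frame is just mapping every pixel through it, which is why even rough tools can fit video onto an angled wall.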



This is the interface for VPT. The program worked for our project, but only barely. It was hard to control, confusing, and very fickle about which videos it would play. Some codecs worked better than others, and any video of significant quality dropped the frame rate below 10 fps. At first we ran VPT on Mike's computer, but we discovered that a faster machine could handle higher-quality videos, so we ended up using Oscar's newly built (and much more powerful) desktop.


Getting to this point, however, was a long road. Every project involves some trial and error, but Tagline had more than usual: almost every facet of the project was redone at least once. For example, we wanted the silhouettes of ourselves to look like this:


To get this effect, we recorded ourselves in front of a green screen, intending to key out the background later. The first attempt didn't turn out well.



Not the best. Obviously, we needed to record again. Glen helped us light the screen properly, which made a huge difference: the silhouettes turned out much cleaner, as you can see in our final videos.





As you can see, it's much better. Instead of simply recording ourselves in front of the screen, we used the physical lighting to cast us as silhouettes and recorded those, which made keying much easier.
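The keying itself comes down to classifying each pixel as backdrop or subject. A minimal sketch of the idea, assuming a numpy RGB frame (the threshold and function name are illustrative, not our actual After Effects settings):

```python
import numpy as np

def green_key_mask(frame, dominance=1.3):
    """Return an alpha mask for an RGB frame: 0.0 where the pixel is
    strongly green (backdrop, to be keyed out), 1.0 elsewhere (subject).
    `dominance` is how much green must exceed red and blue to count as
    backdrop -- an illustrative value, not a tuned production one."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    background = (g > dominance * r) & (g > dominance * b)
    return np.where(background, 0.0, 1.0)

# Tiny 1x2 "frame": one green-screen pixel, one dark silhouette pixel.
frame = np.array([[[40, 200, 50], [20, 15, 10]]], dtype=float)
print(green_key_mask(frame))  # [[0. 1.]]
```

A dark silhouette against an evenly lit screen separates cleanly under a simple test like this, which is why the relit footage keyed so much more easily than our first attempt.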

Another part of the project that went through many iterations was the wall-break. Near the end of our video, the "guardian" of the space breaks through the wall and attacks Oscar. Our first attempt lacked depth and failed to depict the wall breaking even remotely realistically.



We decided to entirely revamp this, and, as you can see in our final video, it turned out much better.



Above is a frame from the final render of the wall breaking. There is more depth, better lighting, and it is overall more realistic and immersive.

The monster we used in our project went through several iterations as well. Here is the original concept sketch, done by Mike.



Below is a rough draft we showed on Thursday, before the project was due.



We decided to change the color of the monster, as it didn't fit in the environment as well as we'd hoped. We considered changing the electricity, but decided to keep it, as it was visually appealing and added a certain dynamic energy to the overall composition. Below is the final monster.



We were happy with how the monster turned out. The lighting and texturing were greatly improved on the final monster, and I think that it worked well.

Overall, I think that this project turned out well. I was pleased with the way our narrative and video fit with our space. The urban theme and adventurous feel were what we were aiming for from the beginning, and I think we succeeded. When people came to our space and experienced the project, they were entertained and had a fun time "spraying" the monster. One person said that he felt like he was "in a Disneyland ride," which was exactly the reaction we wanted. By only playing the video at certain times, limiting the audience each time, and providing props, we built anticipation and made the project feel more like a ride or event, something immersive you could experience with friends.

Having our group "perform" the piece reinforced the concept of "augmented reality": we started in the real environment, talking and interacting with the crowd, and then moved onto the screen, carrying our conversation into the projected space. Including the audience in the event helped too, and most people would pretend to spray the walls and monster when the voice onscreen instructed them to.

If we were to extend this project further into the future, I think that we would work more with the projection techniques. VPT wasn't an ideal program to use, and the quality of the videos, and therefore immersion, suffered. Also, I would want to flesh out the "monster" part of the project more, maybe including an extended fight scene or more audience interaction.

As for what each team member did: Oscar modeled, textured, and animated the monster, and created the wall break. Mike was in charge of the VPT side of things, and also researched and helped with all other aspects of the project, including the wall break, the spray from the can, and the graffiti on the wall. I was in charge of compositing everything together in After Effects, which consisted of keying the green-screen videos, making smoke, creating the background, timing, audio, and rendering. It was a great group effort, and we all worked hard to achieve our goal.

I am pleased with the success of this project, and I hope to work more with projection mapping and augmented reality in the future. It's a great way to make projects on a grander scale, both physically and in the amount of exposure you get. Most people are very receptive to projects like this, which makes them perfect for introducing and showcasing visualization techniques to the public.
