Saturday, December 10, 2011
We've finally come to the end of our final project for VIST 405. It's been a long six weeks with an incalculable number of problems, but I have to say I'm happy with what we ended up with. Before I say any more about the project, watch our documentation video. I think it explains what our project was and what it turned out to be better than typing it all out again would.
So there you have it. A good summary of what our project was. Here is the entire 8-minute video of what we projected on each wall of the cube, in case you're interested. It's at an unusual resolution because it's four "screens" wide.
Cool. I think it's better to let these images speak for themselves. I've previously explained my parts of the video, which are the intro and the interlude. I think that the whole video came together well in the end.
As I mentioned earlier, we had many problems with our project. The cube itself caused most of these. We were originally going for an 8ft cube. We successfully built and projected on this cube, as evident in the video shown here:
We liked the bigger cube, but unfortunately it was susceptible to weather, ambient light, and various other conditions. Because of that, we decided to build a smaller cube, approximately 3.5ft, to show at the Viz Show. We had previously built a small cube of around 2ft to test projections while the large cube was still in progress. Experiencing the videos inside this smaller cube was different from the large one, but still immersive. It was a more personal experience, since only one person at a time could be in the cube.
Displaying Reflections at the Viz Show was an interesting experience. A surprisingly large number of people showed up to view the installation, including many professors, and everyone seemed to enjoy it. Several people told me that watching the visuals from outside the cube was very engaging, even though that's not how the installation was meant to be experienced. A few people also seemed very emotionally affected by the piece, leaving the room with tears in their eyes. That, I think, means we were successful in our efforts.
An unfortunate mishap occurred while I was showing the piece, though. A girl ran into the cube, which was suspended from the ceiling with only fishing wire. The wire broke, and I had to shoddily patch it back up. The mapping was a little off from then on, but I don't think anyone noticed or cared much.
In the end, Reflections took a lot out of me. It was much more work than I expected, mostly because of the complications with the cube. Setting up the installation each time took a lot of time and effort, which soon grew tiring. If I were to make it again, I'd make it a more permanent piece, building the cube from more solid materials and using higher-quality screens. I was happy with the final result, though, in spite of all the problems. It was one of the few projects I've worked on that people seemed to really enjoy and connect with. For that reason, I'm glad I spent all the time I did on Reflections. It was a learning experience in many ways, and I'm glad we had something to show after all was said and done.
Thursday, November 17, 2011
Update 11/17/11
It's been a while since my last post. Sorry about that.
I've been working on my sections of the song. I think it's best that I just show you.
Above is a test render of part of the interlude I'm working on. The song is about love, loss, and life, so I used pictures and videos of my family to give it a timeless, ethereal feel.
Above is a second test render, this time of the intro. I wanted to open the song with intense, somewhat psychedelic imagery, so I used flashing colors, stars, and faux 3D. It fades to white at the end to transition into Mike's part of the song.
In other news, we're having lots of problems projecting our images onto the cube. We haven't found a computer or program that can drive two video outputs plus a monitor for setting things up. We're discussing it today, but if we can't figure it out soon we may have to change how we project our images, which would mean reworking some of what I already have. We'll see.
Tuesday, November 8, 2011
Update 11/8/11
Since the last post, we split up the song into three parts so that all of us could work on it. I am working on the intro and interlude to the song. Below is a first test of some ideas for the intro.
These are just some ideas floating around. The screen is split into four parts. From left to right, the different areas are: right side of the cube, front, left, back. Should be cool.
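Just to make that layout concrete, here's a quick sketch (Python driving ffmpeg, which isn't necessarily what we'll end up using) of how a render that's four screens across could be sliced into one clip per wall. The per-screen resolution and filenames are placeholders, not our actual settings.

```python
# Hypothetical helper for slicing the single four-screens-wide render into one
# clip per cube wall. Assumes ffmpeg is installed; the per-screen resolution
# and filenames are placeholders, not our actual settings.
import subprocess

WALL_ORDER = ["right", "front", "left", "back"]  # left to right in the render
SCREEN_W, SCREEN_H = 1024, 768                   # assumed size of one "screen"

def split_walls(source="intro_4wide.mov"):
    for i, wall in enumerate(WALL_ORDER):
        crop = f"crop={SCREEN_W}:{SCREEN_H}:{i * SCREEN_W}:0"  # w:h:x:y
        subprocess.run(
            ["ffmpeg", "-y", "-i", source, "-filter:v", crop, f"{wall}.mov"],
            check=True,
        )

if __name__ == "__main__":
    split_walls()
```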
Tuesday, November 1, 2011
Final Project
It's time for the final project! For the last five(ish) weeks of the semester, I will be working with Mike and Autumn on an immersive motion graphics projection project. Let me explain further. We will be building a person-sized cube structure and projecting onto its surfaces from the outside. The imagery will be set to a song: "Age of Adz" by Sufjan Stevens. Here is that song:
The song is about how losing a loved one feels like the end of the world, but it is only the end of your world. The world seems the same to others around you, and this distinction is what fuels the story of the space. You will be physically stepping into a place where you and your emotions are separate from the rest of the world. Only those who are inside can really feel what it's like.
Above are concept photos of what our installation might look like in the space. We will be projecting on all four screens.
As for the visual style, we have some references. We're still working out the details, but here are a few videos that are roughly similar to what we're going for: All of the Lights, Enter the Void, A Clockwork Orange, Love & Theft.
More later. Stay tuned!
Tuesday, October 25, 2011
Pictoplasma
We were asked to present about a festival, conference, or competition of our choosing in order to determine which one we'd like to enter our final project into at the end of the semester. I chose Pictoplasma.
Pictoplasma is the world's leading festival celebrating contemporary character design. It started as an annual festival in Berlin, but has since expanded to yearly events in Paris and New York as well, along with a "tour" that travels to various cities across Europe.
The festivals consist of one week full of screenings, artist presentations, workshops, installations, and crazy parties. All types of media are accepted, including animation, narration, music visuals, experimental work, and motion graphics. The only real requirement is a focus on character. Here is a video with work that has been previously shown at Pictoplasma:
Neat, huh? The good thing about this festival is that it accepts a wide range of entries, and you get several chances to get your work into one of the events. Another great aspect of entering Pictoplasma is that there is no submission fee. The deadline for the 2012 festivals is February 1st, 2012. The Berlin festival takes place in April (I was there!), the NYC festival in November, and Paris in December.
More information and pretty pictures can be found here.
Thursday, October 20, 2011
Tagline
Project complete! It's been a while since my last post, which was all about the concepts for our projection mapping/augmented reality project. It's been a long road, but I am overall happy with the result, which you can see in a video below:
Above is our video, stitched together, where you can see all three parts. In the actual space, though, there are three separate videos playing on three separate walls, like this:
The program we used to project the videos is called VPT. It can be used to play multiple videos at the same time, and can skew and stretch the videos to fit on surfaces that aren't flat.
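For anyone curious what that skewing actually involves, it's essentially a corner-pin (perspective) warp. VPT handles this internally; the rough illustration below uses OpenCV and NumPy instead, neither of which we used in the project, and the filenames are made up.

```python
# Illustration only: the corner-pin (perspective) warp that mapping tools like
# VPT perform. We did not write this for the project; it assumes OpenCV (cv2)
# and NumPy are installed, and the filenames are made up.
import cv2
import numpy as np

def corner_pin(frame, dst_corners, out_size=(1024, 768)):
    """Warp a frame so its corners land on dst_corners (TL, TR, BR, BL)."""
    h, w = frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(dst_corners)
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, matrix, out_size)

frame = cv2.imread("test_frame.png")  # hypothetical still from one wall video
if frame is not None:
    # Nudge the corners to match a slightly keystoned wall.
    warped = corner_pin(frame, [(40, 20), (1000, 0), (1024, 768), (0, 740)])
    cv2.imwrite("test_frame_warped.png", warped)
```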
This is the interface for VPT. The program worked for our project, but only barely. It was hard to control, confusing, and very fickle about which videos it would play. Some video codecs worked better than others, and if we tried to play a video at any significant quality, the frame rate would drop below 10 fps. At first we ran VPT on Mike's computer, but we discovered that a faster machine could handle higher-quality videos, so we ended up using Oscar's newly built (and much more powerful) desktop.
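We mostly dealt with the frame-rate problem by switching computers, but re-encoding everything to an intra-frame codec like Motion JPEG (cheap to decode, at the cost of bigger files) is the kind of thing that might also have helped. A rough sketch, assuming ffmpeg is installed and with made-up filenames:

```python
# Speculative fix we didn't fully explore: re-encode the wall videos to Motion
# JPEG, an intra-frame codec that's cheap to decode (at the cost of file size).
# Assumes ffmpeg is installed; filenames are made up.
import subprocess

def reencode_for_playback(src, dst, quality=3):
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-c:v", "mjpeg", "-q:v", str(quality),  # lower q = higher quality
         "-an",                                  # drop audio to lighten decode
         dst],
        check=True,
    )

reencode_for_playback("wall_front.mov", "wall_front_mjpeg.mov")
```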
It was a long road to get to this point, however. There are always elements of trial and error in a project, but with Tagline we experienced more than usual. Almost every facet of the project was redone more than once. For example, we wanted the silhouettes of ourselves to look like this:
To get this effect, we recorded ourselves in front of a green screen, with the intent of keying out the background later. The first attempt didn't turn out too well.
Not the best. Obviously, we needed to record again. We got Glen to help us with the lighting of the screen, which was great. The silhouettes turned out much cleaner, as you can see in our final videos.
As you can see, it's much better. Instead of just recording in front of a screen, we used the physical lighting to make a silhouette and recorded that. It was much easier to key that way.
Another part of the project that went through many iterations was the wall-break. Near the end of our video, the "guardian" of the space breaks through the wall and attacks Oscar. Our first attempt lacked depth and didn't come close to realistically depicting the wall breaking.
We decided to entirely revamp this, and, as you can see in our final video, it turned out much better.
Above is a frame from the final render of the wall breaking. There is more depth, better lighting, and it is overall more realistic and immersive.
The monster we used in our project went through several iterations as well. Here is the original concept sketch, done by Mike.
Below is a rough draft we showed on Thursday, before the project was due.
We decided to change the color of the monster, as it didn't fit in the environment as well as we'd hoped. We considered changing the electricity, but decided to keep it, as it was visually appealing and added a certain dynamic energy to the overall composition. Below is the final monster.
We were happy with how the monster turned out. The lighting and texturing were greatly improved on the final monster, and I think that it worked well.
Overall, I think that this project turned out well. I was pleased with the way that our narrative and video fit with our space. The urban theme and adventurous feel was what we were going for from the beginning, and I think that we succeeded in our goal. When people went to our space and experienced the project, they were entertained and had a fun time "spraying" the monster. One person said that he felt like he was "in a Disneyland ride," which was exactly what we were going for. By only playing the video at certain times, limiting the audience each time, and providing props, we built anticipation for the project and made it feel more like a ride or event, something immersive that you could experience with friends.
The fact that our group "performed" the piece helped with the concept of "augmented reality." We started in the real environment, talking and interacting with the crowd, and then moved onto the screen, carrying our conversation into the projected space. Including the audience in the event also helped, and most people would pretend to spray the walls and monster when the voice onscreen instructed them to.
If we were to extend this project further into the future, I think that we would work more with the projection techniques. VPT wasn't an ideal program to use, and the quality of the videos, and therefore immersion, suffered. Also, I would want to flesh out the "monster" part of the project more, maybe including an extended fight scene or more audience interaction.
As for what each team member did: Oscar modeled, textured, and animated the monster, as well as creating the wall break. Mike was in charge of the VPT side of things, as well as researching and helping with all other aspects of the project, including the wall break, the spray from the can, and the graffiti on the wall. I was in charge of compositing everything together in After Effects, which consisted of keying the blue-screen videos, making smoke, creating the background, timing, audio, and rendering. It was a great group effort, and we all worked hard to achieve our goal.
I am pleased with the success of this project, and I hope to work more with projection mapping and augmented reality in the future. It's a great way to make projects on a grander scale, both physically and in the amount of exposure you get. Most people are very receptive to projects like this, which makes them perfect for introducing and showcasing visualization techniques to the public.
Tuesday, October 4, 2011
Project 2 Research
I started researching different styles for our projection mapping project. Right now, our story is that there are a few kids spray-painting a wall, and as they are doing so the "guardian" of the wall, a giant monster, breaks through and starts causing chaos and just generally wrecking the place. The feel is sort of like a Disney interactive adventure ride, with the viewer taking the place of another mischievous kid.
Our space is in the emergency exit stairwell of Langford, which is generally abandoned, gritty, littered, and a perfect place for our story, which was inspired by this sort of urban, underground feel.
The projector would sit on top of the stairs (sort of where the viewpoint is in the picture above), and project onto the three walls ahead. The viewing space is on the second floor landing.
Above is a view of the landing where the people would stand.
The space also extends up four stories, giving it a vast, empty, and abandoned feeling. The large space also works well for sound, as it echoes.
I researched the general look and feel of the project, as well as the kids, who will be shadows or silhouettes. The lighting of the project will be low-key, with highlights and high contrast a priority. Below are some examples.
The style we're going for is evident in those pictures. It's a film-noir-meets-graffiti kind of feel. Here are a few videos that show the silhouette/shadow style.
The reference for the latter is around 18 seconds in.
Anyway, we have the basic concepts and look down. We just need to finish out the narrative, get the timing down, and start working.
Thursday, September 29, 2011
Project 2 Exercise
Here's the video for the exercise over projection we're doing prior to project 2.
We created a dark, brooding, and somewhat mysterious mood. We had six different videos on the six panels of the object, all playing at once. They fade in and out somewhat arbitrarily, as we were not able to figure out how to sync the different videos in VPT.
Sorry about the bad video quality. The only decent camera we had was on Mike's phone. You can still get the idea.
Saturday, September 24, 2011
Project 1
Jarrod and I finished our project this week. It turned out better than we expected. We were having a lot of trouble earlier in the week with drawing multiple shapes, but it seems like we fixed it. We made the jump from 2D to 3D, and the images looked much better; the depth granted by the 3D shapes greatly increased the immersion.
That is a sample image from our program. The black keys drew lines (planes), and the white keys drew the circles (spheres). We programmed in the use of the modulator wheel to control the rotation of the lines. It made drawing more interesting, and you could create new shapes out of just the rotation.
Before we presented, we let some classmates test it.
Jarrod and I observed that most people would carefully play around with the drawing tools before trying to play an actual song or melody on the keyboard. After they got the hang of drawing, they had far fewer reservations about playing around with the setup.
It was interesting to see how everyone reacted to the drawing controls. People seemed to like playing with the extremes of the piece; for example, they would press keys very softly, or play a lot of keys very quickly, in order to test what the program was capable of.
Overall, I learned a lot from this project. I had to learn a new programming language in Max 5, which was an interesting experience. It's a powerful program, but it has its drawbacks: the node-based system can be finicky, and the flow of a program confusing. It all worked out in the end, though, and I think our final project was a success.
Final Presentation
Tuesday, September 13, 2011
Update 9/13/11
We can draw! Pressing the black keys now draws a circle, and pressing a white key draws a line. The color of the shapes is determined at random via a patch that generates random numbers when sent a bang. The color is different each time you draw a shape.
Drawing multiple shapes is still something we're working on, and it's proving a little more difficult than anticipated. OpenGL in Max is somewhat difficult to work with for this kind of thing. In our programming class a couple of years ago we made a program for drawing 2D shapes, using linked lists to draw an unlimited number of them. Linked lists are weird and finicky, and may be beyond the scope of this project (neither Jarrod nor I is a programmer), so we may need to find another solution.
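To keep the logic straight while we fight with Max, here's the whole idea written out in Python. This isn't code we're running anywhere (the real version lives in the Max patch), but it shows the black-key/white-key split, the random colors, and why a plain list of shapes is all we really need instead of a linked list.

```python
# A plain-Python sketch of the drawing logic -- the real version is a Max
# patch, so treat this as pseudocode with a working syntax.
import random

BLACK_PITCH_CLASSES = {1, 3, 6, 8, 10}  # C#, D#, F#, G#, A# in any octave

def is_black_key(midi_note):
    return midi_note % 12 in BLACK_PITCH_CLASSES

shapes = []  # a plain list does what we were trying to do with linked lists

def on_note(midi_note, velocity):
    """Called on each note-on: decide what to draw and remember it."""
    shapes.append({
        "kind": "circle" if is_black_key(midi_note) else "line",
        "color": (random.random(), random.random(), random.random()),
        "note": midi_note,
        "velocity": velocity,
    })

def redraw():
    # Every frame, redraw everything accumulated so far.
    for shape in shapes:
        print(f"draw {shape['kind']} in color {shape['color']}")

on_note(61, 90)  # C#4 -> circle
on_note(60, 70)  # C4  -> line
redraw()
```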
Anyway, that's all for now. We're still working on our patch in Max, cleaning up the object layout and streamlining routine tasks like generating random numbers and recognizing MIDI input.
Wednesday, September 7, 2011
Update 9/7/11
Tonight, Jarrod and I spent some time hammering out the details of our visuals, as well as exploring our options with Max MSP.
We decided to adopt a style similar to Kandinsky's "synesthesia" phase. Examples:
The black keys on the piano will draw circles, while the white keys will draw lines. We will use the "velocity" of the key press (how hard you press) to determine the opacity of the shapes. Size will be based on how long the key is pressed down. Other factors could be changed as well, using the modulator or pitch bend functions of the keyboard, but these are yet to be determined.
In order to maintain some consistency in the placement of the shapes, the keys will be placed in groups of three, with each group pertaining to a part of a 4x4 grid. For example, the keys which are the lowest in tone will be in the bottom-left gridspace, and the highest will be in the top-right. This way, the user can retain a degree of control over the placement of the shapes.
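Written out as plain functions, the rules look like the sketch below. Again, this is just a sketch: the real mapping will be built from Max objects, and the lowest note of our keyboard is a guess.

```python
# Sketch of the mapping rules as plain functions. The bottom note of our
# keyboard is a guess (36); 48 keys in groups of three cover the 4x4 grid.
LOWEST_NOTE = 36
GRID_COLS, GRID_ROWS = 4, 4
KEYS_PER_CELL = 3

def opacity_from_velocity(velocity):
    # MIDI velocity is 0-127; a harder press gives a more opaque shape.
    return velocity / 127.0

def size_from_duration(seconds, max_seconds=2.0):
    # A longer hold gives a bigger shape, capped so it can't fill the screen.
    return min(seconds, max_seconds) / max_seconds

def grid_cell(midi_note):
    """Map a note to (col, row): lowest keys bottom-left, highest top-right."""
    group = (midi_note - LOWEST_NOTE) // KEYS_PER_CELL
    group = max(0, min(group, GRID_COLS * GRID_ROWS - 1))
    return group % GRID_COLS, group // GRID_COLS  # row 0 is the bottom

print(grid_cell(36))  # (0, 0), bottom-left
print(grid_cell(83))  # (3, 3), top-right
```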
After determining these graphical characteristics, we started building a program in Max to detect MIDI input. We made one that could determine which key was being pressed and the velocity of that press. Changes from the modulator and pitch-bend controls could also be monitored. By the end, the program could determine whether a specific set of notes was being played, and flip a switch if it was. Progress!
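For comparison, the same "flip a switch when a particular set of notes is held" idea looks something like this in Python with the mido library. We're doing it with Max objects, so this is purely illustrative, and the trigger chord here is just an example.

```python
# The same "flip a switch when a specific set of notes is held" idea, sketched
# with the mido library instead of Max objects. Purely illustrative.
import mido

TRIGGER_CHORD = {60, 64, 67}  # C major triad, just as an example
held_notes = set()
switch_on = False

with mido.open_input() as port:  # opens the default MIDI input device
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            held_notes.add(msg.note)
        elif msg.type in ("note_off", "note_on"):  # note_on w/ velocity 0 = off
            held_notes.discard(msg.note)
        if TRIGGER_CHORD <= held_notes and not switch_on:
            switch_on = True
            print("chord detected -- switch on")
```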
Monday, September 5, 2011
VIST 405, Project 1, Studio Exercise 1
Ad Libitum is an interactive installation which explores the boundaries between auditory and visual beauty. No instrument is as vital to musical composition as the piano, which, in our piece, is the tool used to explore the environment. The tension that arises from the pairing of audio and visuals is highlighted; a delicate sonata may produce a clumsy or uninspired image, while a seemingly random series of notes could be beautiful. The notion that music can be art is rarely challenged, but the belief that auditory art is inherently paired with something of visual significance is an idea to be contemplated.
The installation itself will be relatively simple, consisting of only a musical keyboard interface and a projection screen. As a user enters the space, he will be confronted with an absence of auditory and visual stimulation. It will be as if he entered a void. Once the user interacts with the keyboard, the environment will be revealed as colored images directly related to his musical input appear on screen. The environment itself is constantly in flux as it responds to the user's input.
An additional layer of exploration is created by providing the user a pair of sound-proof headphones which eliminate the sound of music being played, allowing him to concentrate solely on the art.
Our hope in creating Ad Libitum is to encourage users to think outside their traditional definitions of visual and auditory art and how the two relate to one another. This "synesthetic" experience allows users to experience their world in a new way and take away a new understanding of the relationship between visual art and music.
Thursday, September 1, 2011
INTRODUCTION
Well, I suppose this is the first post in a long series of posts I will be making for our studio course, VIST 405. I'll be updating the blog with ideas, concepts, and documentation for the three projects we will be completing over the course of the next 15 weeks. Cool!