I played around with the footage so that the image of Christy playing matched the song in the background better.
I talked with my friend about how she got started playing piano, and how she composes music.
I realize that the B-roll isn’t that great, and that it’s all just filler at this point. I’m going to get in touch with my friend and see if we can’t get some better footage this weekend.
Yorgo Alexopoulos’ No Feeling is Final seems to use a mix of purely graphic elements and still images in order to create a video that seems almost to play with the concept of indexicality. Throughout the video, Alexopoulos makes use of several panels, each one displaying either a whole picture or part of a picture/graphic which is then animated in some way.
With the still images specifically, Alexopoulos uses the paneling to great effect: moving a cohesive still image through several panels to create a pseudo-parallax effect, using silhouettes and lines to create continuity between panels, and using minor animations across the screen to give us the sense that we're looking at simple footage of something scrolling by in the distance. Throughout much of the video, these still images – something we'd normally conceive of as indexical to what was photographed – are juxtaposed with minimalist graphics (such as a diagonal line or a triangle) in a way that seems to emphasize the realism of the photo.
I think that the way this video plays with indexicality touches on something that Manovich mentions in his essay: as soon as an image is rendered digital – when it becomes just a series of easily manipulable pixels – it loses its indexicality. This is on full display in Alexopoulos' video, especially as it progresses: the photos begin to be warped, displayed multiple times on-screen or spread across several panels at once, and edited to change their color or reduce them to silhouettes. The graphics and animations go on to warp the images further as they animate across the panels or are superimposed onto the still images to create something new.
Running with this theme of challenging indexicality, I think that if I were to do my own hybrid cinema, I'd want to take a similar approach to indexicality. I feel like it isn't quite correct to say that just because an image or movie is digital, it is no longer an index of what's on-screen, as Manovich seems to suggest. If I see a picture of a mountain on my computer that was taken with a digital camera, then in my opinion, I'm still looking at the index of that mountain. Even making edits to the picture doesn't change my mind much on this; if the picture is edited to be a slightly different color, or some other such thing, then I'm still looking at the mountain – after all, if we look at a mountain once during the day and once at night, we're still looking at a mountain, even if what we're seeing is different. (Heavy edits are a different can of worms that I don't want to get into.)
It really depends on how the show pulls off the social media portion. I feel like there are a number of shows, both on TV and online, that try to reach some level of user/audience participation by setting up the story in a "choose your own adventure" style, where the outcome of the show depends on what the audience wants – for example, kids' TV shows that ask the audience to decide which ending they want to see, or an online video series (such as the "A Date with Markiplier" videos on YouTube) that asks users to decide which path to take as the video progresses.
I don't particularly like this way of interacting with the audience, because it isn't organic – it's like those choices in video games that seem just fine but end up going in a direction you never expected (ProZD has a funny vine parodying this). You want to influence the story in some way, and even though it seems like you might be, in reality you're not making any impact whatsoever – everything you're watching was already planned, and you're just deciding which video to watch.
SKAM, according to the article, seems to do a little better than this, with videos and social media updating in real time according to the story, and with some slight interactions between the characters' social media and the audience's. From the sound of it, though, the interaction between the show and its users never got much more sophisticated than a simple follow-back or two. This level of interaction, compared to the previous examples, is nice, but if I had the time and resources, I would want to push things a little further. I'd want a series that takes as much advantage of social media and mobile technology as possible, where there is more genuine interaction with the characters of the show (a full conversation with a character on a chat app, rather than a simple follow-back), which would then go on to influence the story in real time – at least a little bit, anyway.
And I realize while writing this that there’s a game that achieves a vibe that is very similar to what I’m trying to get at here, the biggest difference being that none of it is real: Simulacra. It’s a horror/mystery game that has the user going through various social media, emails, and texts of a missing woman, trying to find out what happened to her. The game is constantly having you chat with different characters using the missing woman’s social media, with “real-time” (in the game world, anyways) audio, video, and text updates across all the different mediums that you use in the game. (Both Markiplier and Jacksepticeye have full playthroughs if you’re interested in looking at the game). I think that if a web series could achieve this level of interactivity, it would prove to be a very interesting experience.
If I were to film a 5-minute story on the Eagle Creek fire and its impact on the community, I think I'd try to interview people who were affected by the fires in some way, like having to evacuate their homes for a time, and ask what that experience was like. I think I'd want to focus on how the fires would have impacted families emotionally, how they affected relationships, etc. Visual evidence might include the people/families trying to go through their lives while they're living somewhere else (for example, if they're staying in a motel, show them trying to manage their morning routine in their rooms, or if they're living with relatives or close friends, show what it's like having to suddenly deal with more people every day), or if their house was caught in a fire (I'm not really familiar with what happened in the Eagle Creek fires), then show them sifting through the remains of the house. Most notably, I'd want to use footage that shows families interacting with each other, like two relatives arguing over something about living situations, or a couple of people trying to comfort each other in the aftermath of the fire.
Run Lola Run often uses techniques like graphic matches, match-on-action, motivated POV, and empty frames to fulfill a variety of needs. The latter three techniques are used to ensure there are no jarring cuts between scenes when a character (usually Lola) is moving. Match-on-action and graphic matches are used a few times to help bridge the start and end of the film, and to transition viewers from the film's present to flashbacks.
I feel like the main way this movie features a digital aesthetic is the way it plays with time. Rombes specifically mentions in his chapter "Time, Memory" that, while users cannot yet "alter the content within a frame" through the DVD, we can still play with the film in a general sense: deciding which scenes to watch when, what order to watch them in, whether to watch forward or in reverse, or even whether to start or finish a film at all. Run Lola Run can be seen playing with this to an extent through its multiple endings, as well as the montages for the different characters whose lives Lola affects. Just like how we the audience can decide at any time to stop watching a movie we don't like, or to skip scenes until we're at a more exciting part (something I find my dad doing more often than not with his movies), Lola plays with time, continually changing itself in sometimes minute, sometimes major ways with each run until we come to a desirable ending.
This is also one way that Run Lola Run differs from most conventional films – while it isn't necessarily cyclical (I don't want to use that word here because I don't think the movie loops upon itself), Lola repeats itself until a desirable ending is reached, whereas most Hollywood movies just chug along in a linear fashion until their inevitable end. Within each of these slightly changing narratives, however, I think the story plays out just like a Hollywood movie, using things like parallel action between Lola and Manni, as well as the expansion and compression of time (expansion when Lola is waiting to see if she won the game of roulette; compression during all the running scenes, when Manni is entering the store to steal money, and especially during all the death scenes) to create tension.
A lot of the best animated gifs that I see tend to be ones that loop seamlessly, or are very close to doing so. They seem to provide both a sense of time moving forward smoothly (real life tends not to stutter as a motion or action repeats or transitions to something else), as well as a sense of time standing still (that tea in the first gif would be overflowing by the time you got to this part of the post).
Seamless loops (or loops that are very close to seamless) present a very similar situation to the one McCloud talks about in his comic, where our experience with photos and pictures tells us that each panel/picture is a single instant, while several other factors – such as simulated movement (movement lines in comics, the animation in the gifs) or speech bubbles – indicate that the image is in fact taking place over several instants. The main difference here is that instead of being shown a still frame with faked motion, we're being shown a moving image – something which mimics real life more easily, resulting in an uncanny valley effect. We see things moving, but we don't really see things changing the way they should be. The cup never fills, Hasselhoff never gets where he's going, and the sun never finishes rising.
This brings up another point that McCloud talks about, though – time in comics is regulated by how fast – or slow – the reader goes through the panels. I've read comics before where a page features an animated panel instead of a normal static picture, and because I was too focused on reading the comic and progressing the story, it never struck me as strange or unsettling that the two characters never reached a new destination by the time I got to the next panel, no matter how long I looked at the gif.
All that being said, I feel like gifs don't have a place in comics if they're just there to put in motion what we can already signal with things like motion lines – using a gif of two people walking, instead of just implying movement through previously developed strategies, seems unnecessary, and can lead to that uncanny valley effect and distract from the rest of the comic. If people found a way to use gifs to strengthen what we already have instead of replacing it, then maybe they could work.
Though I originally intended to be "original" with what the event would be, I still ended up going with murder. I really wish I had more appropriate lighting (bright yellow/orange lights don't lend themselves very well to an oppressive, sad feeling), but all in all, I think I got what I was going for.
Framing Exercise – Monty Python and the Holy Grail:
Medium Close up
Medium Close up
Medium Close up
I think this scene, in regards to framing, is… pretty generic. Fight scenes are usually either medium or long shots so all the action is on display, while dialogue is done mostly in close-ups. The camera angle is usually kept pretty even with the characters' faces as well, except for the last couple of shots here. The last two medium close-ups that I show here – taken just after the Black Knight had his arm cut off – have some interesting framing in regards to the Black Knight and King Arthur. Dialogue between the two at this point tells us that each character thinks he is better than the other, and the camera angles seem to reflect these statements as well. When the camera is positioned behind the Black Knight, it's angled so it's looking up at him, and so that Arthur looks significantly smaller than the Black Knight, clearly putting the latter character in a position of power. When the camera switches sides, however, we're no longer looking up at the Black Knight, and Arthur looks much bigger as well.
Another interesting thing I found about this clip is that earlier in the scene (not shown here), the Black Knight is shown fighting another person, and that fight sequence uses a number of more varied shots, quickly switching between long, medium, and close-up shots, and between eye-level and ground-level camera angles. The fight between Arthur and the Black Knight, however, stays mostly in medium and long shots, with few changes in camera angle. I wonder if this is meant for comedic effect (we just got done watching this amazing fight scene, and all of a sudden the Black Knight seems almost incompetent against Arthur), especially as the Black Knight gets weaker and weaker as the fight progresses.
I feel that several iPhone movies are trying to break traditions in what they're capturing and how they're capturing it. Atkinson states in the article, for example, that STARVECROW attempts to capture subjects in "improvised footage," similar to how most people use devices like iPhones: as in-the-moment ways to record things. Tangerine seems like an interesting example of how filmmakers can blend traditional filming with more iPhone-esque filming – for example, the shot reverse-shot scene of the two women in the diner versus the shaky filming of the fight scenes, or the shot of a woman being filmed through a car window.
For some reason I'm having a hard time explaining it, but I think one automatism of digital video – footage shot on mobile phones in particular – is the kind of hasty, shaky movement, and footage of things only half-caught on camera: a quick way to record something you think is worth recording. A lot of vines come to mind with this sort of thing, Fresh Avocado being one of the first I think of. Another automatism is that these "movies" aren't read as being filmed from a 3rd-person source/perspective, but from a 1st-person point of view. Many people who film using phones aren't setting up scenes to look like a movie; they're trying to capture whatever they're seeing from their perspective before it stops. (Kylee Henke's vine "It's the simple things in life" is an example of the 1st-person perspective, though the setup for the vine is deliberate rather than impromptu.)
I feel like digital film these days wants to be a 1st-person, in-your-face sort of experience, but is currently still struggling to break free of the more traditional, out of body, 3rd-person experience that we’ve mostly been seeing. One of the best examples of this struggle might be during one of the first action scenes in Captain America: Civil War. In it, a small team of Avengers are sent to deal with some sort of terrorist organization, and a couple of heroes are sent to deal with an impromptu bomb threat in a crowded market (or something to that effect). Up until this point, the camera has been almost completely 3rd-person – we’re outsiders looking in on the Avengers doing their job, when suddenly, both the camera and us as viewers are thrown into 1st-person as the camera quickly blurs through the crowd, acting like a person chasing Black Widow through a crowd as she tries to fight off a few bad guys.
While I understand their usage of a more 1st-person perspective for those shots (we wouldn't get quite the same sense of action or thrills if we were kept outside the action), I also think there might be a better option, because as it is now, the camera work for that scene almost implies that the audience is a part of this action, despite the fact that we're doing nothing but watching Black Widow do all the fighting.
Background in Video
I don't have much background in live-action video, but I have at least a little experience in animation, and I know how to do very minor editing (like, I can make cuts and add text and that's about it) in both Adobe Premiere and After Effects.
I own an iPhone 5.
Aims for this Class
I intend on being a professional animator, and since (by my understanding) a lot of animation has a base in film and cinema, I wanted to learn those terms, compositions, and techniques to help my animations look and feel better… I guess.
Major and Interests
I’m a DTC major, and my super power is animation! Professional interests include 2D and 3D animation, 3D modeling, and game development.
Interesting Video Approach
Hmm… I don’t watch a lot of movies these days, so I can’t really think of anything from there, but an interesting video I’ve found on YouTube is the recording of “High Seas Hi-Jinx” from Cuphead:
The video starts out with a quick pan across a handful of scores from the game, before giving us a shot of the drums (which you can hardly hear), then the rest of the band. Audio-wise, the video starts out exactly how you wouldn’t expect a professional jazz band to sound: there are next to no drums, bass, or piano present, and the rest of the music just sounds empty when it’s played. A few shots of each section playing are shown, before the video cuts to in-game footage, and the entire song – with everything put together and mixed into its final version – begins playing over what we’re seeing.