VIDEO: Friday – Dispatches from Margate

The final video piece produced during a week-long artist residency in Margate.

Building on the ideas explored in the previous piece, I decided to evolve the square framing and slide-like presentation to include simple shots of shapes and colours I found around Margate. However, this time I worked with a centre division in the frame, juxtaposing two different shots.

When I was out shooting I did have this in mind, so I concentrated mainly on rectangular and square shapes, lining shots up in a way that would work with this format. The edit was more a proof of concept; it'll be interesting to see how I can expand on it and use it within something more substantial.

Shot on a Sony A7s.

Previous pieces:

Tuesday (blog post)

Wednesday (blog post)

Thursday (blog post)


Video: Playing with the Panasonic GH4

I decided to buy myself a new camera over Christmas and after much deliberation (including watching countless video reviews) I bought myself a Panasonic GH4. It’s a small mirrorless Micro Four Thirds camera that comes crammed with some incredible features, such as 4K internal recording and a 96fps shooting mode in 1080p! Oh, and it’s a pretty cheap piece of kit for what it does.

I took a few walks out with my new camera and cut together a few bits of test footage – take a look below (I’m still getting to grips with the camera, so do excuse the shoddiness).

Both were shot with the Panasonic Lumix 12-35mm f2.8 – it’s a fantastically sharp little lens with image stabilisation built in. The footage was cut together in FCPX and graded with the help of Impulz LUTs.

The camera isn’t perfect, but I’m fantastically happy with it – the 4K footage is beautifully detailed and can be downscaled in the edit for a 1080p export, providing some useful cropping options as well. Here’s an arty picture of me sitting on the ground in Regent’s Park, with my GH4:
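To make the cropping point concrete, here’s a tiny sketch of the arithmetic. The resolutions are the standard UHD and Full HD frame sizes; the headroom figure is just simple division, not a claim about any particular camera or editing app.

```python
# How much you can "punch in" on a UHD (4K) frame while still
# filling a 1080p timeline with native, non-upscaled pixels.

UHD = (3840, 2160)      # standard UHD frame
FULL_HD = (1920, 1080)  # standard Full HD frame

def crop_headroom(source, target):
    """Maximum zoom factor before the crop drops below the
    target resolution and would need upscaling."""
    return min(source[0] / target[0], source[1] / target[1])

print(crop_headroom(UHD, FULL_HD))  # -> 2.0
```

In other words, a 1080p delivery from UHD footage leaves roughly a 2x digital punch-in available before any upscaling is needed – handy for reframing shots in the edit.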


I’ll likely post more test footage over the coming months.

(Re)constructing Reality

Okay, so here are a few interesting videos relating to the field of photography (or digital image processing, to be more precise) that I’ve come across over the last couple of months. Some of these have been around for a while, but I thought as a collection they were sufficiently interesting to post up here. What I find interesting about them is the way they deconstruct or alter how we relate to reality: slowing down time to observe imperceptible movements, or reinterpreting images to reveal seemingly hidden information.

A trillion frames per second

The first is a video from researchers at MIT who’ve developed a trillion-frame-per-second (fps) camera. That’s correct – a trillion. You’re probably used to watching video in the region of 30 fps, which is fast enough to trick your mind into perceiving motion between frames.

However, this camera is capable of capturing light as it travels from point A to point B. Although it doesn’t seem able to capture the movement of individual photons, it does appear to capture individual pulses of light as they move across the frame or scatter as they interact with certain materials.

Interestingly, it’s dubbed by its creators as the world’s ‘slowest fastest camera’ – despite being able to capture the speed of light, it can only record data in two dimensions, and only one of these is spatial (the other is time).

So in order to record enough data to obtain a multidimensional movie, it must record the scene multiple times from slightly different angles, and this takes time (up to an hour, apparently). Anyway, the video below elaborates on this and features some of the incredible footage captured by the camera.

As the camera requires multiple takes to obtain enough data, it seems that its applications are somewhat limited. However, its ability to capture light as it scatters across a scene is certainly valuable in the analysis of different materials, and could even be used for what its designers describe as ‘ultrasound with light’. Read more here and here.

The camera never lies?

The first time I saw this I was pretty stunned. This video outlines a new and simple method of realistically inserting objects into an image after it has been taken (in post-production, essentially). This is done without the user having to perform complex measurements of perspective or lighting – instead, with minimal annotation, the user can place objects into an image and the system will work out the lighting conditions to which they should conform. The result, as you will see, is incredible – the inserted objects appear as if they existed in the original scene.

What’s more, the researchers also found that subjects were unable to tell the difference between real images and images generated by their system. It looks so good it’s almost a little disturbing.

You sort of have to see it to believe it:

You can read more about it here, or their research paper here.

Reconstructing reality?

The last two videos are also pretty smart, describing processes by which poor-quality images can be reinterpreted or reconstructed using the information within them.

If you can get past the rather dry voiceover, the first involves the reinterpretation of data within an image, allowing one to:

“Decide later if it stays a photo, becomes a video or turns into a lightfield so you can digitally refocus”

The final is one you are likely to have come across, and it details an extraordinary feature in the upcoming release of Adobe’s Photoshop series (CS6). It’s an image deblurring feature which seems to work remarkably well, able to pluck lost detail from what seems like nowhere:

I definitely ran out of steam towards the end of this post.

Thanks.