Saturday, February 16, 2013

Remixing

I watched this video some time ago, but if you haven't seen it you should watch it - and the whole series too...


Everything is a Remix Part 1 from Kirby Ferguson on Vimeo.

Friday, April 20, 2012

How I remember it

There’s an app coming soon that allows you to remove things from your photographs. Irritating things like passers-by or cars that obscure you - or even the background you were hoping for. Fittingly it’s called Remove, and it works quite simply: whenever you think you’re taking one picture it takes a few, then identifies the bits that moved (and can therefore be removed).
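I don't know how Remove actually does it, but the classic trick for "keep the bits that didn't move" is a per-pixel median across the burst - a minimal sketch (assuming the frames are already aligned):

```python
import numpy as np

def remove_transients(frames):
    """Given a list of aligned frames (H x W x 3 uint8 arrays) shot in
    quick succession, return an image with moving objects removed.
    A pixel covered by a passer-by in one frame keeps its background
    value in most of the others, so the per-pixel median across the
    stack votes the intruder out."""
    stack = np.stack(frames, axis=0).astype(np.float32)
    return np.median(stack, axis=0).astype(np.uint8)
```

A real app would first have to align the burst (hand shake) and take enough frames that the background wins the median vote at every pixel.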



Now that sounds simple enough, and quite a promising service – even one that phone and camera manufacturers might include as standard once we get over storage issues and the like. But it got me thinking about memories. I’m lucky enough to be old enough that my entire life hasn’t been captured on camera, and a good proportion of what has hasn’t made it onto the web (yet). I don’t really remember much before the age of about 7, and most of what I do remember of that, and probably the next 5 years, is largely connected to or based on photos and stories recounted to me.

What does this mean for those younger than me, both now and in the future? At first glance they’re in a better position, with a greater degree of certainty. The building blocks of our memories are now stored for us on the web; on Flickr, Instagram, YouTube and the like. However, as Jonah recounted in Wired last year, memories are reconsolidated - they aren’t static but are recreated at the point of recall.

We all know what Photoshop does for supermodels, and in film I’ve seen what can be done, having worked with a great company that does product placement. They digitally embed brands into pre-recorded film and TV content. They can (and do) switch the model and brand of TV or PC in your favourite soap, or insert a can of a certain brand of cola in front of the presenter throughout that talent show you watch but shouldn’t. They can’t do this in real time yet, but it’s coming soon enough. It’s also not hard to imagine how one could use technology a bit like Photosynth to adapt photographs in a similar way, using location to find other photos and matching up the content to reshape, replace and remove items in the shot, either dynamically or in batches.

As brands seek to build deeper and more meaningful relationships with consumers, and are willing to pay a premium for doing so, it’s not hard to see where these trends and technologies could take us. Since someone like Facebook can just buy the rights to my photos on Instagram, how long before they are adapted and reshuffled to reconsolidate my memory, or others’ perceptions of me, my taste and my preferences? How different is this, really, from a sophisticated superstitial, or from Project Glass and augmented reality?

Perhaps, as the guy in Eternal Sunshine says, these adaptations will be “on a par with a night of heavy drinking. Nothing you'll miss”, and probably we’ll be willing to suffer the intrusion if the benefit is significant enough. But I think I prefer Huxley’s notion – “that every man's memory is his private literature”.



Monday, February 27, 2012

Pinteresting formats

I've been using Pinterest for a while now, and at first I treated it largely as an outgoing channel. I generally don't use it much at work, just on the way to and fro - and hence on my mobile. But recently I've started consuming a bit more, browsing the lists of my friends, and of the masses.

Now I have a lot of apps - RSS readers, social apps and all the usual suspects - and one thing strikes home about this one: the scrolling. It's the one aspect of the experience that seems new - well, not new, but embraced in the app. It's a pain if you have a poor connection (and to be honest the app crashes quite regularly for me), but when it works it works nicely, and because it's just pictures it's a lovely way to browse. I guess it works like that because it's just big pictures that people knew would be scaled down...

So I was thinking that this kind of stream - reasonably low importance and, in the main, high visual recognition - could suit using accelerometer tilt as an interface. I think it would speed things up a little, particularly when you explore the All category in the evening! I don't know if this is because I'm in the UK, but I find that browsing All in the morning yields much more interesting stuff - perhaps it's to do with traditional news-reading patterns, and of course the fact that I'm not that interested in floral arrangements, pictures of cats, or fingernails (in that way it reminds me of Groupon). I also suspect we'll see more really long graphics designed to be consumed downwards in the Pinterest world, vying for some viral fame...
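For what it's worth, the tilt idea is only a mapping from tilt angle to scroll velocity - here's a back-of-the-envelope sketch, with every threshold made up: a dead zone so hand tremor doesn't scroll, then an ease-in curve up to a top speed.

```python
import math

DEAD_ZONE_DEG = 5.0   # ignore small hand tremor
MAX_TILT_DEG = 45.0   # tilt at which scrolling tops out
MAX_SPEED = 2000.0    # pixels per second at full tilt

def tilt_to_scroll_speed(tilt_deg):
    """Map forward/back device tilt (degrees) to a scroll velocity.
    Inside the dead zone the stream stays still; beyond it, speed
    ramps up on an ease-in (quadratic) curve to MAX_SPEED, keeping
    the sign of the tilt as the scroll direction."""
    if abs(tilt_deg) < DEAD_ZONE_DEG:
        return 0.0
    span = MAX_TILT_DEG - DEAD_ZONE_DEG
    t = min((abs(tilt_deg) - DEAD_ZONE_DEG) / span, 1.0)
    return math.copysign(t * t * MAX_SPEED, tilt_deg)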

Monday, February 20, 2012

A ramble about quantified selflessness

It doesn't sound very charitable, but in this age of chuggers and £10 direct debits people often feel they want something in return for their charitable support... I'm sure it's a combination of doubt, suspicion and more besides, but it's still a major issue for charities.

In the past the focus was often on feedback, and on providing some means of equating your ongoing commitment to the cause. We created episodic sites that built links with communities, offered the opportunity to communicate with them, and provided a much richer look at the situations and how they were improved with aid. With a group of other charities we looked into ways of tracking a donation: how far and wide it travelled, the good it really did, the people it reached. We talked about metadata, RFID platforms, technology and all that jazz. But it all fell at the feet of practical considerations, and although it was a worthwhile goal and there was a lot of support, there were too many challenges to move forward.

Roll on a few years and we have banks helping us by rounding pennies up into pounds, gadgets like Striiv enabling us to donate when we achieve our fitness goals, and little checkboxes on restaurant bills and planes to add a little something for others. We even have platforms like theMutual that encourage donations by providing incentives like exclusive deals. So there are plenty of good ideas around that make it easier to do a little good. But as we approach a world where the most accurate record of our contributions becomes our Facebook timeline, and where our obsession with creating and gathering data seems to grow and grow, it seems that people are - and increasingly can be - satisfied by a little social-selflessness status update now and again.
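As an aside, the pennies-into-pounds mechanic is pleasingly tiny - assuming amounts in whole pence, the donation is just the gap to the next pound:

```python
def round_up_donation(amount_pence):
    """Return the donation (in pence) that tops a purchase up to the
    next whole pound. An exact pound amount donates nothing."""
    return (-amount_pence) % 100
```

So a £3.50 coffee quietly gives 50p; a £4.00 one gives nothing.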

Often it seems that to get the real mass market thinking about doing something, they need to be able to really see what's in it for them. Perhaps there is opportunity in this area of what I find myself (slightly squirmingly) calling quantified selflessness: feeding back to a passive individual the good that they are doing, and then allowing them to share it as they see fit.

Friday, February 10, 2012

Colour me

I have been mulling over an idea for a while now, since watching something on TV about perception. I think it was Horizon or similar, but it was broadly looking at how the brain works and how we perceive the world around us. Part of this revolved around colours. Colour perception, it seems, is closely linked to language - to the extent that, apparently, we can only easily perceive colours for which we have a word. This was illustrated through an African tribe whose language contains only five colour words, one of which covers most of what we would call blues and greens. When shown a selection of coloured shapes and asked to pick the odd one out, they struggled to spot the one that was (in our language, and to my brain) distinctly a blue amongst greens.

The conclusion was basically that our perception of the world is affected by our knowledge of language - that people with few discernible colour words might see the world differently. This struck me particularly because my sons are colourblind so I already know that they perceive the world differently, but I guess I hadn't considered that language would be such a key factor.

Anyway, it got me thinking that it makes sense to try to help people get as good a grasp of the language of colour as possible. I'd obviously really like my kids to have as rich an experience of the world as they can - and the same goes for all those other people who haven't encountered, learnt or experienced a wide variety of colours.

So a project, perhaps, to teach people more colours. A simple dynamic interface, I thought, that draws colours with distinct names from a source (like Wikipedia) and presents the name, a description (perhaps to aid those who have difficulty with some colours) and then a selection of photographs giving contexts in which the colour appears. Context, apparently, is the other element that's important in aiding perception and, one would imagine, recall. It could use something like the ideelabs API, I would think.
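The core of it really is that small - here's a toy "colour of the day" picker, using matplotlib's built-in table of CSS4 colour names as a stand-in for a scraped Wikipedia list (the photo-context part would bolt on afterwards):

```python
# Toy sketch: pick one named colour per calendar day, deterministically,
# from matplotlib's CSS4 colour-name table (a stand-in source).
import datetime
from matplotlib.colors import CSS4_COLORS, to_rgb

def colour_of_the_day(day=None):
    """Return (name, hex string, rgb tuple in 0..1) for today's colour.
    The same date always yields the same colour, so everyone learning
    along sees the same word."""
    day = day or datetime.date.today()
    names = sorted(CSS4_COLORS)
    name = names[day.toordinal() % len(names)]
    hexval = CSS4_COLORS[name]
    return name, hexval, to_rgb(hexval)
```

From there it's a swatch, the name in big letters, and a handful of photos fetched by colour.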

Not fully formed, but given some time or a helping hand, an achievable little project.