Dr Memo Akten

Goldsmiths

iGGi Alum

Real-time, interactive, multi-modal media synthesis and continuous control using generative deep models for enhancing artistic expression


This research investigates how the latest developments in deep learning can be used to create intelligent systems that enhance artistic expression. These are systems that learn – both offline and online – and that people interact with and gesturally ‘conduct’ to expressively produce and manipulate text, images and sounds.

The desired relationship between human and machine is analogous to that between an art director and graphic designer, or a film director and video editor – i.e. a visionary communicates their vision to a ‘doer’, who produces the output under the direction of the visionary while shaping it with their own vision and skills. Crucially, the desired human–machine relationship here also draws inspiration from that between a pianist and piano, or a conductor and orchestra – i.e. again a visionary communicates their vision to a system which produces the output, but this communication is real-time, continuous and expressive; it is an immediate response to everything that has been produced so far, creating a closed feedback loop.

The key area that the research tackles is as follows: given a large corpus of example data (e.g. thousands or millions of examples), we can train a generative deep model. That model will hopefully contain some kind of ‘knowledge’ about the data and its underlying structure. The questions are: i) How can we investigate what the model has learnt? ii) How can we do this interactively and in real-time, and expressively explore the knowledge that the model contains? iii) How can we use this to steer the model to produce not just anything that resembles the training data, but what *we* want it to produce, *when* we want it to produce it – again in real-time and through expressive, continuous interaction and control?
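One common way to explore what a trained generative model has learnt, and to steer it with a continuous control parameter, is to interpolate between points in its latent space and decode along the path. The sketch below illustrates the idea with spherical linear interpolation (slerp), which is often preferred over linear interpolation for Gaussian latent spaces; the decoder here is a toy stand-in (a hypothetical linear projection), not any specific trained model.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.

    t=0 returns z0, t=1 returns z1; intermediate t values sweep along
    the great-circle arc between them.
    """
    z0 = np.asarray(z0, dtype=float)
    z1 = np.asarray(z1, dtype=float)
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Vectors are (nearly) parallel; fall back to linear interpolation.
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Hypothetical stand-in for a trained generative model's decoder:
# maps a 16-dimensional latent vector to a 64-dimensional 'output'.
rng = np.random.default_rng(0)
decoder_weights = rng.normal(size=(16, 64))

def decode(z):
    return np.tanh(z @ decoder_weights)

# Sweep a single control parameter t to 'conduct' the model continuously
# between two learned points; in an interactive system t would come from
# a live gestural input rather than a fixed sweep.
z_a = rng.normal(size=16)
z_b = rng.normal(size=16)
frames = [decode(slerp(z_a, z_b, t)) for t in np.linspace(0.0, 1.0, 5)]
```

In an interactive setting the same loop would run per frame, with `t` (or a higher-dimensional control vector) driven by sensor or gesture input, closing the feedback loop described above.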


Memo Akten is an artist and researcher from Istanbul, Turkey. His work explores the collisions between nature, science, technology, ethics, ritual, tradition and religion. He studies and works with complex systems, behaviour, algorithms and software; and collaborates across many disciplines spanning video, sound, light, dance, software, online works, installations and performances.

Akten received the Prix Ars Electronica Golden Nica in 2013 for his collaboration with Quayola, ‘Forms’. Exhibitions and performances include the Grand Palais, Paris; Victoria & Albert Museum, London; Royal Opera House, London; Garage Center for Contemporary Culture, Moscow; La Gaîté lyrique, Paris; Holon Design Museum, Israel and the EYE Film Institute, Amsterdam.

