The acceleration of technological progress has been the central feature of this century. Predictions hold that the pattern of exponential growth technology follows (Moore's Law being the most prominent example) will lead to change comparable to the rise of human life on Earth, or change so rapid and profound that it represents a rupture in the fabric of human history. The precise cause of this change is the imminent creation, through technology, of entities with greater-than-human intelligence; this hypothetical event is what is meant by the "Technological Singularity". Once technological progress becomes so rapid and the growth of artificial intelligence so great, the future after the singularity becomes qualitatively different and far harder to predict. Utopian predictions offer a gradual ascent with enormous benefits to mankind; these contrast starkly with darker visions of the singularity in which rapidly self-improving superhuman intelligences create paradigm shifts that break down humans' ability to model the future thereafter, leading to the end of the human era.
With respect to this subject, our project's primary concern is to simulate conceptual modeling techniques implemented in the design, learning, and production processes of an artificially intelligent digital signal processor (AI-DSP). Here the simulation is realized through sound generation, together with the abstraction that an AI-DSP could be regarded as an extension of its creator and would be designed in that spirit. Although sensitive to and concerned with the implications of the singularity in general, we see great potential that leads us to positive, even utopian, thoughts of a human-equivalent machine designed to generate sound autonomously for the benefit of mankind.
Musically, the topic was implemented in several ways. The composition is divided into three parts: Design, Test, and Launch. Design begins with pure sine waves that generate different interference patterns; thereafter a more complicated structure develops, consisting of voice, feedback, and bit errors. Test opens with processed voice recordings played in the form of a canon; different tones and words simulate sounds already present in our surroundings. Bit by bit the machine takes over the actions of the musicians and manipulates the voice recordings by means of a logically defined structure. Launch is based on Christian Wolff's techniques for structured improvisation: digital noise processed with different filter settings is played according to a call-and-response scheme. Lastly, sounds are generated with spatial movement according to a score.
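As a rough illustration of how the opening material of Design can be produced, the sketch below sums two slightly detuned sine waves so that their interference yields audible beating, then injects random bit errors into the quantized samples. This is a minimal Python sketch, not the patch used in the performance; the frequencies (440/443 Hz), duration, sample rate, and error probability are illustrative assumptions of ours.

```python
# Minimal sketch (not the performance patch): sine-wave interference
# plus simulated bit errors, as in the Design section.
import numpy as np

SR = 44100                                  # sample rate in Hz (assumed)
t = np.arange(SR * 4) / SR                  # four seconds of sample times

# Summing 440 Hz and 443 Hz produces an amplitude beat at 3 Hz.
mix = 0.5 * np.sin(2 * np.pi * 440.0 * t) \
    + 0.5 * np.sin(2 * np.pi * 443.0 * t)

# Quantize to 16-bit PCM.
pcm = (mix * 32767).round().astype(np.int16)

# Flip one random bit in roughly 0.1% of the samples, a crude stand-in
# for the piece's "bit errors" (error rate is an assumption).
rng = np.random.default_rng(0)
hit = rng.random(pcm.size) < 0.001          # which samples get corrupted
bit = rng.integers(0, 16, size=pcm.size)    # which bit to flip in each
view = pcm.view(np.uint16)                  # reinterpret for safe bit math
view[hit] ^= (1 << bit[hit]).astype(np.uint16)
```

The resulting buffer could then be written to disk (for example with scipy.io.wavfile.write) or fed into further processing such as the feedback and voice layers described above.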
How to Survive in the Post-Human Era was performed live in an 8-channel version and premiered at the Festival for Applied Acoustics, Köln, Germany, on 11 June 2011. Ephraim Wegner: laptop & MIDI controller; Cem Güney: laptop & MIDI controller.