gif-3 is a live archive/annotator/working tool for experiments in real-time composition and emergence in meanderings. slow-evolving sounds in a process that is itself slow-evolving, a system for remembering and remembering to remember:

you can't will spontaneity. but you can introduce it with a pair of scissors.
– william burroughs, the third mind

structure and process


pursuing a cyclic process of experimenting, capturing, listening again, and preparing, the work embraces a spirit of meandering – in a physical sense, of slowly changing environments, stopping at the places in-between, mediating in real-time the experience with the immediate soundscape – and recording. to experience recordings afterwards is to filter them through the capturing device, the listening device, and the listening situation of a later time and place.



recording in-betweens, capturing perspectives of a process

rehearing recordings mediated in feedback echo/looping systems that require active feeding

Jonah - radar delay - subtracting (in a loop) a delayed version of the signal from itself
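One reading of the radar-delay note is a recursive comb structure: in a loop, the delayed running output is subtracted from the incoming signal. This is a sketch of that interpretation, not a documented patch; the `feedback` amount and the exact routing are assumptions.

```python
def radar_delay(x, delay_samples, feedback=0.7):
    """Subtract a delayed copy of the running output from the input, in a loop.

    A sketch of one reading of the 'radar delay' idea: a recursive comb
    filter with inverted (subtractive) feedback. The feedback coefficient
    and routing are assumptions, not the actual patch.
    """
    y = [0.0] * len(x)
    for n in range(len(x)):
        # before delay_samples have elapsed there is nothing to feed back
        delayed = y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = x[n] - feedback * delayed
    return y
```

Fed an impulse, this produces echoes of alternating sign that decay by the feedback factor, which gives the characteristic hollow, phase-inverted repeat.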




Figure: concept map linking feedback, improvisation, free play, performance ecosystem, recursion, real-time composition, meandering, strategisch-geplante Spontaneität (strategically planned spontaneity), field recording, listening, walking, macroform from microform, responding, attention, ephemerality, delay, ambience, soft fascinations, place language, buffer, archive, self-reference, minimalism, isomorphism, and acoustic ecology

Emergence in audibly-structured soundscape

(diagram: a cycle of audiosystem → imprint in space ⇄ perceiver-performer → audiosystem, with the audiosystem and the imprint in space both feeding the field recorder)

Figure: Emergence in place-language improvisation

Understanding emergent sound structures as macroforms emerging from interactions of the microforms.

(diagram: space ⇄ perceiver-performer, both feeding the input of an input → process → output chain, the real-time audio processor, whose output returns to the space)

Figure: Emergent macroform from microform interaction

A real-time composition, to let "the musical (macro-level) structure emerge from sound itself and its internal organization (micro-level)." 1

Resonating with selections from Analysing Audible Ecosystems and Emergent Sound Structures in Di Scipio's Music (Renaud Meric, Makis Solomos)1:

While composing with an ecosystemic approach, the composer creates an audio system that interacts with the environment (i.e. space). This space, in which and from which music emerges, is also the listener’s space. Thus what emerges is the result of a confrontation between the listener’s cognitive system and the audio system used in the musical work. The emergent sound is difficult to define: its general outline is unpredictable and unstable; it is dependent on a dynamic musical space, which is constructed by active listening and an active audio system simultaneously.

focusing on the ephemeral moment in which music emerges in the interaction between the listener and the product of the audio system inside a specific space.

in reality, we don’t listen to sound but to its own “imprint” (empreinte), in the sense of the word developed by Georges Didi-Huberman (2008).

in his own music, Di Scipio opted for complex dynamic systems: “Chaos and the dynamics of complex systems, as accessible with iterated numerical processes, represented for me a way to compose small sonic units such that a higher-level sonority would manifest itself in the process” (Di Scipio in Anderson, 2005)

In one of his first articles (Di Scipio, 1994), he elaborated a “theory of sonological emergence”, whereby form (macroform) is viewed as “a process of timbre formation” (Di Scipio, 1994: 205)

The idea of emergent sound structures is related to the elaboration of a sub-symbolic theory. In the “theory of sonological emergence”, the emergence of a higher level should happen through grains and samples, neither of which are symbols, as they are located on a low level (cf. Di Scipio, 1994: 207). With composed interactions (cf. infra), Di Scipio puts the interaction at the signal level: all the information exchanges have a sonic nature (cf. Di Scipio, 2003: 272). We can draw a parallel between this strategy and the model of emergence in cognitive science. To the question “What is cognition?” the “computationalist” model answers “Data processing: the manipulation of symbols from rules” (Varela, 1996: 42), while the emergence model answers “The emergence of global states in a network of simple components” (Varela, 1996: 77). Regarding music, the issue at stake here is as follows: if we want the higher level (the macroform) to appear as an emergence and not as an independent construction, we have to work only at the lower level, abandoning the intermediate level, which is the level of symbols.

According to emergence theory, the emergence of sound structures is possible because of the fact that the composer develops systems (in the sense of cybernetics) close to living systems, which are characterized by their capacity for auto-organization
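As a toy illustration of working only at the lower level, a single iterated numerical process can drive per-grain parameters, so that any larger-scale contour is a by-product of the map's dynamics rather than of a mid-level symbolic score. This is a sketch in the spirit of the "iterated numerical processes" quote above, not Di Scipio's actual patch; the logistic map, its parameter `r`, and the grain count are assumptions.

```python
def logistic_grains(r=3.9, x0=0.5, n_grains=256):
    """Drive per-grain amplitudes with an iterated logistic map.

    An illustration (not Di Scipio's actual method) of composing at the
    micro level only: each grain's amplitude comes from one iteration of
    x -> r * x * (1 - x); the macroform is whatever structure the map's
    dynamics happen to produce, with no intermediate symbolic level.
    """
    amps = []
    x = x0
    for _ in range(n_grains):
        x = r * x * (1.0 - x)  # one micro-level step of the chaotic map
        amps.append(x)
    return amps
```

With r in the chaotic regime the sequence of amplitudes is bounded but never settles, so the emergent envelope is unpredictable and unstable, echoing the description of emergent sound above.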

Whilst not directly applicable (many of the recordings of free play reference past melodies played and familiar intervals), it speaks to an attitude of openness and responsiveness in dialogue with the current situation and other players, through which the musical moment emerges.


inspiration from tim shaw

resonating while listening to tim shaw on listening and field recording

Tim Shaw, on indeterminacy and uncertainty in field recording and soundwalks, finds environmental sound much richer to practice with, responding to specific sites/themes outside of the studio. Listening is the key practice of the work.

Field recording not as documentation of one place carried to another, but as a live, performative act. Traditionally it might be understood as: go somewhere with mics, sit, press record, bring it back to the studio, edit/layer... but the performative aspects of walking to the site and setting up, these gestures, aren't visible. There's a dislocation between making and presentation. How to fold them together?

Recording, composing, improvising with the soundscapes that move through.

Normally, recordings are deleted at the end of the walks. Presentation and process are flattened together, with all mistakes, like handling noise, left in.

Headphones allow dynamics to come in: is it sounding in the world outside, or is it processed through? Starting with omnidirectional mics, then taking recordings using contact/hydrophone/electromagnetic mics.

The system is malleable and shifting, like the soundscape, always shifting and indeterminate.

He always tries to carry a recorder, deciding to start when something catches interest, finding resonant spaces, or attaching a contact mic. He starts by walking and seeking with the ear and eye.

How long to let the recorder roll is a question of composition in itself.

Not about archiving or possession, but about the process; the making and the act of it are often more interesting than the recording itself. Using this process to learn the space: how does it react? So it's expansive: what are the possibilities, in the most holistic way?

Not going into a space with too many ideas, being open to the unpredictability, allowing those unexpected events to be just as meaningful as those planned.


field recording

if the project seems broad, scattered, loosely organized, and unfocused, that's because it is. but one crucial theme is the experience of making-in-real-time, acting and reacting in a live way.

the records then are soundscapes of a "genuine but ephemeral set of circumstances existing in a precise moment of time," rather than soundscapes "meticulously fabricated and controlled in the vacuum of an audio editor".1

actual events from a specific time and place ... a moment’s explicit actuality and serendipitous fragility. (Swift 8)

... a soundscape is not simply a sum of all these sounds; rather it is a particular subset filtered through the context of a given environmental condition (Swift 6)

Curiously, wishing to capture these events often blocks these events from occurring. This could be an example of the observer effect, a "common phenomenon where the act of observation can alter the situation being observed." (Swift 3) Perhaps the psychic attention to recording takes valuable attention away from the actual act, a moment of flow, a sort of reset.

Often noticed in autotelic free play on an instrument: maybe a moment, a groove, emerges. But to capture this means stopping play, setting up the recorder, and trying to find it again; this takes some time, energy is distracted, and the recording then captures this difference and distraction, not the original moment of free play and discovery.

Several strategies to deal with this have been considered:

Setting these to run still requires some forethought, whereas the moments occur seemingly unexpectedly/unpredictably, and most notably while not recording. A possible solution is to intrinsically integrate the recorder into the sound ecosystem so that recording is an act seamless with listening while using the system.
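One way to make the recorder seamless with listening is to keep it always running into a ring buffer, so that when a moment catches the ear, the recent past is already captured and nothing has to be set up. This is a sketch of that idea under assumed parameters; the class name, `snapshot()` trigger, and block-based feeding are all hypothetical.

```python
from collections import deque

class RetroRecorder:
    """Always-listening ring buffer holding the last `seconds` of audio.

    A sketch of integrating the recorder into the sound ecosystem: feed()
    runs continuously from the audio callback, and snapshot() freezes the
    recent past the instant a moment is noticed, with no setup interrupting
    play. Names and parameters are assumptions, not an existing patch.
    """
    def __init__(self, seconds=60, samplerate=48000, blocksize=1024):
        n_blocks = (seconds * samplerate) // blocksize
        self.buffer = deque(maxlen=n_blocks)  # old blocks fall off the back

    def feed(self, block):
        # called continuously while playing/listening
        self.buffer.append(block)

    def snapshot(self):
        # triggered when something catches the ear: return the recent past
        return [sample for block in self.buffer for sample in block]
```

Because the buffer is bounded, the system records everything yet stores nothing until asked, which sidesteps the observer effect described above: no attention has to shift to the recorder before the moment occurs.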

as an aside, the work is about sounds situated in spaces. > One of Bernie Krause's tenets is that sounds should be experienced in the context of their environment rather than attempting to isolate the sound. (Swift 10)

  1. Swift 

2020.06.18/meta | allgäu

archiving process

(voice memo transcription, need to clean)

the archiving process is to keep the original raw file, with maybe some markers at the beginning of the file to speak the context of the thing into the recording (the first section is a meta description). to audition, drop it into a folder that compresses the original file and drops the original onto a hard drive. it's then available for audition in the annotator. layers to configure/toggle:

-mark points of time/regions
-tag/make notes on these markers

That's an annotation added in the process of reviewing. In the process of making/creating the moment you can easily make a marker or start and end a region, which saves some time, then when listening back you already have a place to start watching/listening.
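The marker/region layers above could be modeled as a small data structure: a point marker made with one gesture during play, optionally extended into a region and tagged later while reviewing. Field and class names here are hypothetical, just one way to sketch it.

```python
from dataclasses import dataclass, field

@dataclass
class Marker:
    """A point in time, or a region when `end` is set. Names are assumptions."""
    time: float        # seconds into the recording
    label: str = ""    # tag/note, can be filled in later while reviewing
    end: float = None  # None for a point marker; set to close a region

@dataclass
class Annotations:
    markers: list = field(default_factory=list)

    def mark(self, time, label="", end=None):
        # one-gesture marking in the moment; notes come later on review
        m = Marker(time, label, end)
        self.markers.append(m)
        return m
```

Keeping markers as plain data makes the toggle-able layers cheap: the annotator can show or hide each layer by filtering this list.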

In addition to the manually entered annotations, any interactions with the patches are logged as a line which is an information-preserving transformation of that interaction. say the interaction is: route the mic to the speaker, fade up to 50% over 10 seconds, starting 5 seconds in. this is a simple single-line instruction, and it is written down in that way, possibly in a human-readable, easy-to-type format. this is saved as a log file and can also be toggled on and off in the annotations.
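A minimal sketch of such a log format, assuming a made-up syntax of `<seconds-into-take> <action> <params...>` per line; the actual patch's syntax is not specified in the notes, so these names are illustrative only.

```python
def log_line(t, action, *params):
    """Render one interaction as a single human-readable, typeable line.

    Format (an assumption, not the patch's actual syntax):
        <seconds-into-take> <action> <param> ...
    e.g. '12.500 fade mic-to-speaker 0.5 10'
    """
    return f"{t:.3f} {action} " + " ".join(str(p) for p in params)

def parse_line(line):
    """Invert log_line: recover time, action, and raw string parameters."""
    t, action, *params = line.split()
    return float(t), action, params
```

Because each line round-trips through `parse_line`, the log is both a record and an instruction list: the same file can be displayed as an annotation layer or re-executed.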

the benefit of also keeping track of all these digital interactions as instructions is the easy possibility to change them later. what would have happened if we hadn't triggered this? you can remix it later by running it through the patch again, in real time, in this case loading the original file.

the patch saves three files:
-text file: log and annotations
-sound: raw, straight from the microphone
-sound: from patch, exactly what's sent to speakers

with the first two elements, we can recreate the third file, from the list of log instructions. then it can be revisited.
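Recreating the third file could be as simple as replaying the logged instructions in time order against the raw recording; editing or dropping lines first is the "what if we hadn't triggered this?" remix. This sketch assumes the space-separated line format and a hypothetical `apply_event` callback that hands each instruction back to the patch.

```python
def replay(log_lines, apply_event):
    """Re-run a take's logged interactions in time order.

    `apply_event(t, action, params)` is a hypothetical callback that would
    hand each instruction back to the patch while the raw file plays.
    Removing or editing lines before replaying yields a remix of the take.
    """
    events = []
    for line in log_lines:
        t, action, *params = line.split()
        events.append((float(t), action, params))
    # sort by time so edited/appended lines still land in order
    for t, action, params in sorted(events):
        apply_event(t, action, params)
```

In a real patch `apply_event` would schedule the action at time `t` against the raw sound file rather than firing immediately; this sketch only shows the ordering logic.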


trying to get organized





