Jonathan Marmor

"02-03-04"

Last night, a few of my colleagues at The Echo Nest and I stayed late after work to make an informal recording of music I wrote ten years ago. A few weeks ago, while migrating some audio files to a new hard drive, I came across the score for a piece titled “02-03-04” and realized the ten-year anniversary of its first performance was coming up. So I sent an email to a group of my multitalented colleagues, and before I knew it I had an informal recording session scheduled for exactly ten years to the day after its first performance.

Beyond the ten-year anniversary and the sequential numbers in the date, there is more lore. In the original performance, indie rock guy and reformed composer John Maus played his guitar part too fast and loud, creating an awkward situation. This date is also the anniversary of my grandfather Ed Marmor’s death. He was a pop music publisher in the 1950s, and published the hit song “Do You Wanna Dance,” among many others. I’m sure you’ll hear the influence of that song in this music.

This weekend I rewrote the software that generates the score three times, each time with a completely different approach, resulting in this. The score is just a list of events, each of which consists of one or more performers’ names with pitches next to them. If a performer has pitches listed, he or she starts playing those pitches in any voicing, repeating the chord regularly but slowly enough that there isn’t a sense of a pulse. If there are no pitches listed, the performer stops playing. It’s written so that at any moment the harmony is consonant, but any common-tone transition between consonant harmonies is possible. The version we played has less harmonic movement than the typical output.
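
Here’s a toy sketch of the core idea, not the actual generator (which, as I said, went through three rewrites); the performer names, chord vocabulary, and probabilities are just for illustration, and it glosses over the fact that the sounding harmony is the union of every part still playing:

    import random

    PERFORMERS = ['violin', 'clarinet', 'guitar']
    CONSONANT_TYPES = [{0, 4, 7}, {0, 3, 7}]  # major and minor triads, as pitch classes

    def common_tone_neighbors(chord):
        """Every consonant pitch-class set sharing at least one tone with `chord`."""
        neighbors = []
        for chord_type in CONSONANT_TYPES:
            for transposition in range(12):
                candidate = frozenset((p + transposition) % 12 for p in chord_type)
                if candidate & chord and candidate != chord:
                    neighbors.append(candidate)
        return neighbors

    # A score is just a list of events: a performer plus the pitches they
    # start repeating, where an empty list means "stop playing."
    score = []
    chord = frozenset([0, 4, 7])
    for _ in range(12):
        performer = random.choice(PERFORMERS)
        if random.random() < 0.75:
            chord = random.choice(common_tone_neighbors(chord))
            score.append((performer, sorted(chord)))
        else:
            score.append((performer, []))  # this performer stops playing

    print(score)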


There are several traditions in which pieces of music are named after a person. Music can be named after a member of the band, like the Miles Davis tune “John McLaughlin”. Dedications are sometimes used as titles, like Morton Feldman’s “For Philip Guston”.

The first substantial piece of music I wrote was very methodical music for six instruments in rhythmic unison covering six octaves. I’ve made numerous arrangements of it in the last 20 years, and I keep coming back to it for some reason. It’s the place where all the other music I’ve made comes from. So it seemed appropriate — and funny — to call it “Jonathan Marmor.”

In a future post, I’ll describe the later piece I wrote called “For Jonathan Marmor.” :)

I’ll be talking about this music and how it was made at the Automatic Music Hackathon on Friday, December 6th, 2013 at Etsy in Dumbo, Brooklyn. There are several other speakers who will have more fascinating things to say, and the violin duo String Noise will play some music, possibly including “Jonathan Marmor,” all starting at 8 PM with a reception at 7:30. The arrangement of “Jonathan Marmor” for two violins is pictured here. For more info about the talks, performances, and hackathon, see http://automusic.eventbrite.com.

Download the full score of “Jonathan Marmor” arranged for two violins here.

The code that generated the composition, arrangement, and notation for “Jonathan Marmor” will eventually live here: https://github.com/jonathanmarmor/jonathanmarmor



radio


Listen here

"Wolf Notes" episode 11-18
“Dedalus Ensemble at Roulette, Brooklyn on Sept. 9th, 2013”
With commentary by Kevin Weng-Yew Mayner

Jonathan Marmor: Penguin Atlas of African History
Quentin Tolimieri: Any Number of Instruments
Michael Vincent Waller: Ritratto
Craig Shepard: Coney Island, April 15, 2012
Jason Brogan: Deux études
Travis Just: The young generation is right

Amélie Berson, flute
Cyprien Busolini, viola
Pierre-Stéphane Meugé, saxophones
Thierry Madiot, trombone
Deborah Walker, cello
Didier Aschour, guitar

Hear this broadcast
Monday, Nov. 18th, 11:00pm EST
The Classical Network
http://wwfm.org

Listen online: http://rdo.to/WWFM
or on terrestrial radio in NYC, NJ, and Philadelphia: http://www.wwfm.org/technical.shtml
WKCR 89.9 HD2 New York City
WWFM 89.1 FM Trenton/Princeton, NJ
WKVP 89.5 HD2 Cherry Hill/Philadelphia


MIDI Digester

The Echo Nest Remix API comes with a demo, enToMIDI by Brian Whitman, which attempts to transcribe any audio file using only Remix’s audio analysis data, and spits out a MIDI file. The purpose of the EN audio analysis data is to provide a summary of the music, not to do the source separation necessary for an accurate transcription. This means the resulting MIDI file usually doesn’t sound much like the input.

MIDI Digester is a very small script that runs audio through enToMIDI, plays back the resulting MIDI using QuickTime and its built-in piano synthesizer, records the audio with sox, then repeats the process as many times as you want. Each repetition strips away more of the original musical material and accumulates the sound of enToMIDI.
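
The loop is roughly this (a sketch, not the actual script; the enToMIDI invocation and the QuickTime/sox plumbing below are stand-ins):

    import subprocess

    def audio_to_midi(audio_path, midi_path):
        """Assumed wrapper around the enToMIDI demo; the real invocation differs."""
        subprocess.check_call(['python', 'enToMIDI.py', audio_path, midi_path])

    def play_and_record(midi_path, wav_path, seconds):
        """Record system audio with sox's `rec` while QuickTime plays the MIDI."""
        recorder = subprocess.Popen(['rec', wav_path, 'trim', '0', str(seconds)])
        subprocess.check_call(['open', '-a', 'QuickTime Player', midi_path])
        recorder.wait()

    def digest(audio_path, iterations, seconds):
        current = audio_path
        for i in range(iterations):
            midi_path = 'digest_%d.mid' % i
            wav_path = 'digest_%d.wav' % i
            audio_to_midi(current, midi_path)
            play_and_record(midi_path, wav_path, seconds)
            current = wav_path  # feed the new recording back into enToMIDI
        return current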

Check out this demo, which “digests” a 7.66-second excerpt of the traditional bluegrass tune “The Groundhog,” played by the same QuickTime piano synthesizer.


Concerts in France

More performances of my new piece for sextet “Penguin Atlas of African History” by Dedalus:

Paris (Instants Chavirés) on November 22

Montpellier (La Chapelle) on November 23



Dedalus plays “Penguin Atlas of African History” at Roulette

Please consider coming out to Roulette in Brooklyn tomorrow, Monday, September 9th, 2013 at 8pm to hear my new piece “Penguin Atlas of African History” for Piccolo, Soprano Saxophone, Viola, Cello, Trombone, and Electric Guitar with slide. It will be performed by the amazing Didier Aschour and his ensemble Dedalus from Montpellier, France.

The concert features a bunch of music by my friends: Travis Just, Devin Maxwell, Cat Lamb, Jason Brogan, Quentin Tolimieri (his music is some of my favorite in the universe), Craig Shepard, John Hastings, and Michael Vincent Waller.

Details here: http://roulette.org/events/dedalus/.

My next concert will be September 27th in Montreal. The concert will be three new ~20-minute pieces for violin and guitar by Andre Cormier, Mirko Sablich, and me.


Exquisite Corpses: A collaborative music composition and performance experiment

musichackathon:

At the August 31st, 2013 Music Hackathon NYC, we’ll be attempting to collaboratively create about an hour of new music in just 7 hours. Anyone is welcome to join us in this experiment, but also feel free to do your own thing at this hackathon — there will be an opportunity for everyone to present their work.

At 8pm we’ll perform what we’ve come up with. If you’re not contributing, please come listen!

Exquisite Corpse is a method for collaboratively creating an art work. One person or group gets it started, then hands it off to any number of other people or groups in sequence, who can add to it or modify it however they see fit. A key twist is that most of the prior work is hidden from the group currently working on it — the current group only has part of the existing thing to build from. This can lead to hilarious drawings of grotesque bodies with hands coming out of necks, for example, hence the name exquisite corpse.

This has been done with music before, but I’m not aware of an exquisite corpse where the medium is software, electronics, or musical instrument building (anyone know of any?). In the context of Monthly Music Hackathon NYC, the only restriction on projects and approaches is that they are somehow related to music: acoustic or electronic live performance, playback of prerecorded sound, real-time generation of sound, software, hardware, notated music, improvisation, musical sculpture, software that does something else related to music, etc.

If you’re interested in participating but wondering if what you do is an appropriate fit, stop worrying and just come contribute. I’m particularly interested in seeing “corpses” where each round of modification is approached completely differently. So bring your instruments, amps, laptops, audio interfaces, soldering irons, guitar pedals, and ideas. If you’d like to discuss what this will be like, throw out ideas, or ask questions, please send a message to our email discussion group (go here to subscribe).

Here’s how it will/might work:

* There will be multiple pieces circulating the room.
* The day will be broken up into one-hour segments from noon to 8. At the top of each hour we’ll switch pieces.
* Depending on the number of people who want to participate and if folks want to form groups or not, there may be more or fewer pieces.
* Feel free to bring an idea or piece that you’ve already started — we’ll transform it completely, I’m sure!
* This schedule is in place just to get us going. We can break out of it on a case-by-case basis or altogether if that makes sense.

Noon

Corpse 1 - Group A
Corpse 2 - Group B
Corpse 3 - Group C
Corpse 4 - Group D
Corpse 5 - Group E
Corpse 6 - Group F

1 PM

Corpse 1 - Group F
Corpse 2 - Group A
Corpse 3 - Group B
Corpse 4 - Group C
Corpse 5 - Group D
Corpse 6 - Group E

2 PM

Corpse 1 - Group E
Corpse 2 - Group F
Corpse 3 - Group A
Corpse 4 - Group B
Corpse 5 - Group C
Corpse 6 - Group D

3 PM

Corpse 1 - Group D
Corpse 2 - Group E
Corpse 3 - Group F
Corpse 4 - Group A
Corpse 5 - Group B
Corpse 6 - Group C

4 PM

Corpse 1 - Group C
Corpse 2 - Group D
Corpse 3 - Group E
Corpse 4 - Group F
Corpse 5 - Group A
Corpse 6 - Group B

5 PM

Corpse 1 - Group B
Corpse 2 - Group C
Corpse 3 - Group D
Corpse 4 - Group E
Corpse 5 - Group F
Corpse 6 - Group A

6 PM Overflow / Scramble / Prep

7 PM Tech Rehearsal

8 PM Performance of the Exquisite Corpses

9 PM Talks about how the corpses were made and presentations of other projects worked on during the hackathon
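
(The rotation above is just a cyclic shift; if you’d rather read it as code, a tiny Python sketch that generates the same schedule:)

    for hour in range(6):  # noon through 5 PM
        print('Noon' if hour == 0 else '%d PM' % hour)
        for corpse in range(6):
            group = 'ABCDEF'[(corpse - hour) % 6]
            print('Corpse %d - Group %s' % (corpse + 1, group))
        print()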

WHEN

Saturday, August 31st, 2013
Noon Hacking starts
8 PM Concert

WHERE

Slader
199 Lafayette St, Suite 3B
New York, NY 10012
http://goo.gl/maps/QRuj8

FREE, but please RSVP at http://monthlymusichackathonnyc.eventbrite.com/


Audience guide for my music entitled “April 10th, 2010 at Ontological Theater at the St Mark’s Church, New York, NY.” Movement titles are from the lyrics. The lyrics are total nonsense English words that adhere very strictly to automatically generated rhyme schemes and metric patterns.

Hear it: http://archive.org/details/April102010Ontological-hystericTheater


A couple concerts of my music in the next few months:

Friday, July 19th, 2013 at 7:30 pm
St Mary at Hill
London
Anton Lukoszevieze - Cello
Tim Parkinson - Keyboards
http://www.musicwedliketohear.com/

A re-arrangement of “Cattle in the Woods” (https://soundcloud.com/jonathanmarmor/cattle) for cello, violin, two reed organs, synthesizer, and fan. Music by Jürg Frey and Christian Wolff is also on the concert. I hear there will be other performances in other locations as well, but I don’t know where or when yet. Thanks to Tim Parkinson for inviting me to be a part of this.

Monday, September 9, 2013 at 8:00 pm
Roulette
New York City
Dedalus Ensemble (http://dedalus.ensemble.free.fr/extraits.html)
https://roulette.org/events/dedalus/

They’ll be playing a new piece for flute, viola, saxophone, cello, trombone, and electric guitar. “Since its inception in 1996, the DEDALUS Ensemble of Montpellier, France has been a leading purveyor of contemporary experimental music across the Atlantic. For its US debut, DEDALUS Ensemble has initiated a project which gives a selective overview of new music from NYC. With Made in USA, the ensemble will be working with an esteemed roster of New York-based experimental composers, including Travis Just, Devin Maxwell, Cat Lamb, Jason Brogan, Quentin Tolimieri, Jonathan Marmor, Craig Sheppard, John Hastings, and Michael V. Waller. These exciting new compositions will be performed by Dedalus both at Roulette and, jointly, in Montpellier, France.” Thank you to Didier Aschour for inviting me to be a part of this one. So much for “New York-based”… who wants to play my music in the greater Boston area?

More shows are in the works as well…


Interview about Jazz & Technology Forum

Here’s a short interview I did about Jazz & Technology Forum:

Jonathan Marmor composes experimental music, writes software for exfm, helps coordinate Monthly Music Hackathon NYC and plays tabla. He’s one of the primary instigators behind this weekend’s Jazz and Technology Forum at Ace Hotel New York in honor of UNESCO International Jazz Day. We had a sit down with him about some stuff he knows about.

See more on our calendar at www.acehotel.com/jazzforum.

//////////

+ What is Music Information Retrieval and how does it interest you?

Music Information Retrieval (MIR), or Music Information Research, as it’s frequently called these days, is a field of science focused on using computer science techniques such as digital signal processing and machine learning to better understand music. A classic example is the challenge of automatically classifying a large collection of audio files into genres, based entirely on the characteristics of the audio signal.
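
A toy version of that task might look like this (illustrative only, using librosa and scikit-learn as stand-ins; the file names and labels are placeholders, and real systems are far more elaborate):

    import librosa
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def signal_features(path):
        """Summarize a file's audio signal as MFCC means and deviations."""
        samples, sr = librosa.load(path, mono=True, duration=30)
        mfcc = librosa.feature.mfcc(y=samples, sr=sr, n_mfcc=13)
        return np.hstack([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # Hypothetical labeled collection.
    labeled = [('take_five.mp3', 'jazz'), ('ramble_on.mp3', 'rock'),
               ('nocturne.mp3', 'classical')]
    features = np.array([signal_features(path) for path, _ in labeled])
    genres = [genre for _, genre in labeled]
    model = RandomForestClassifier(n_estimators=100).fit(features, genres)
    print(model.predict([signal_features('mystery.mp3')]))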

I’ve personally always been interested in learning about how music functions and making up my own rules for my music. MIR scientists approach analyzing music differently than anything I ever experienced in music school or studying Indian music. It was eye-opening and revealed I had been underestimating the number of unanswered basic questions about how music works. This opened up many new avenues for speculation on musical fantasy worlds. It’s like there was a box with a hundred knobs, and now that box has an unknown but much larger number of knobs.

+ What kinds of projects span the code/music relationship?

At Monthly Music Hackathon NYC we’ve had an incredible variety of projects spanning music and technology. One of my favorites is a wind harp built out of a metal rain gutter, driven by a computer-controlled fan. One of the inspirations for this jazz-focused hackathon was Ben Lacker’s Jazz Drum Machine, which chops up audio files of jazz drum solos, classifies all the segments by pitch characteristics and positions within the meter, then lets you fade loops of related sounds in and out. It shows how having data describing audio can lead to really beautiful and unexpected art. Ben will be speaking at the Jazz & Technology Forum about how data generated by MIR tools can be used to create new music.
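
The first step of a hack in that spirit might look something like this with the old Echo Nest Remix API (a sketch, not Ben’s actual code; the input file is a placeholder):

    from collections import defaultdict
    import echonest.remix.audio as audio

    track = audio.LocalAudioFile('drum_solo.mp3')

    # Bin each analysis segment by the loudest entry in its 12-element
    # chroma (pitch) vector.
    by_pitch = defaultdict(list)
    for segment in track.analysis.segments:
        dominant = segment.pitches.index(max(segment.pitches))
        by_pitch[dominant].append(segment)

    # Splice all segments sharing a dominant pitch into one loopable chunk.
    audio.getpieces(track, by_pitch[0]).encode('pitch_0_loop.mp3')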

+ What do you think some ideal pairings of skill sets would be?

I’d love to see someone with a deep knowledge of jazz history, a data scientist, and an interactive artist team up to create a visualization (with audio) demonstrating a seemingly inconsequential nuance that makes jazz expressive, such as the variations in pitch intonation in 100 performances of the head of “St. Louis Blues.”

Originally posted here: https://www.facebook.com/jmarmor/posts/10200665102444479


Another post advertising last month’s Music Hackathon NYC.

acehotel:

This Saturday at Ace Hotel New York, we’re hosting a Jazz & Technology Forum with Monthly Music Hackathon NYC as part of UNESCO International Jazz Day. It’s a chance to meet up and share knowledge, ideas and challenges among the jazz, music technology, music information research, and musicology communities, to brainstorm new possibilities and act on those possibilities quickly and in tandem. An evening concert will showcase music made that day, and the day’s discoveries will be presented on the web.

The day will start with two talks by Monthly Music Hackathon regulars Brian McFee and Ben Lacker, focusing on using new technology for research and creation, respectively. In the afternoon, you and your new best friends will share, think and make beautiful music together, culminating in a free concert open to everyone. See the full schedule on our calendar, and an interview with Jonathan Marmor, one of the primary instigators behind this weekend’s meeting of minds.


Interview about my music from 2010

I just stumbled on this old interview that one of my colleagues at WNET did with me after attending a concert of my music. I give extensive, exhaustive answers, covering my justification for making automated music that includes randomness and silences.

Here’s one of the movements of the piece being discussed: https://soundcloud.com/jonathanmarmor/april10-3

"Jonathan Marmor at the Ontological Theater + Interview"
by Bijan Rezvani
Thursday, April 15th, 2010

On Saturday night I made it to the Ontological-Hysteric Theater at St. Mark’s Church for a new piece of music by computer-aided algorithmic composer Jonathan Marmor. Marmor conducted 10 human beings through a deceptively lovely alien song cycle. Without the romantic flourishes typical of our pledge-time heroes, the piece used shifting sound combinations patterned with long silences to warp the temporal experience.

To learn more about the composer and his unique piece, which is streamed below, I had Jonathan Marmor answer a few questions:

Bijan: What’s the name of the piece, and when did you write it?

Jonathan Marmor: The piece doesn’t have a name. You’re the first person to ask. I wrote it between December and a week before the concert. However both the construction of the piece and the software used to make it are just the latest variation in a string of related pieces.

B: How did you get into making computer music?

JM: Since I was a teenager I’ve been writing ‘algorithmic’ music, in which all or most of the events are governed by some simple systematic process. My musical training from age 14 was in North Indian Classical music, which frequently uses very clear logical patterns to construct phrases and forms. As a foreigner I didn’t have an intuitive understanding of the musical structures, learning process, or folk tunes that make up Hindustani music, so I think I had a tendency to over-emphasize the importance of systematic processes. I started writing music that consisted of one simple process. I’d set up some process just to hear what all the different combinations sounded like. The interesting part of listening to these experiments was hearing the unexpected results that came from uncommon combinations or sequences of otherwise pretty standard material.

One liberating aspect of experimental music in the tradition of John Cage is that it encourages you to appreciate music by simply observing its unique shape. A common practice to make some music to observe is to make decisions about the content of a piece using some procedure with random results, such as flipping a coin. So when I started studying the music of John Cage, and the generations of musicians who were influenced by his music and ideas, I had a realization about my experience listening to algorithmic music: I didn’t need a clear logical process to get to the unusual combinations of material I was interested in, I could just use randomness. The next several pieces I wrote employed increasingly complex webs of decisions made with a random number generator.

Following the advice of my brother, I started using the Python programming language to generate huge lists of all the possible combinations and permutations of little patterns of musical material. I was still making one decision at a time, making choices from the lists of options, then notating the music manually. A couple years ago I was asked to write some music for some friends coming to town to play a concert. Using this process I managed to generate the data for a piece that was about ten times bigger than I could notate before the concert. I missed my deadline and was totally embarrassed. So I decided I needed to build two tools: 1) a standardized representation or model of a piece of music in Python data structures that could be customized to create a new piece, and 2) a wrapper for the popular notation typesetting library LilyPond that could take my Python representation of a piece and automatically make beautiful sheet music. The piece performed last Saturday was the second piece I’ve written using these tools.
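
A toy version of those two tools, just to give the flavor (the real library is bigger and differently named; the note data here is made up):

    # Tool 1: a piece modeled as plain Python data structures.
    piece = [
        {'instrument': 'Voice', 'notes': [("c'", 4), ("e'", 4), ("g'", 2)]},
        {'instrument': 'Clarinet', 'notes': [("e'", 2), ("c'", 2)]},
    ]

    # Tool 2: a thin wrapper that renders the model as LilyPond input.
    def to_lilypond(parts):
        staves = []
        for part in parts:
            notes = ' '.join('%s%d' % note for note in part['notes'])
            staves.append('\\new Staff { %s }' % notes)
        return '\\score { << %s >> }' % ' '.join(staves)

    with open('piece.ly', 'w') as f:
        f.write(to_lilypond(piece))
    # Running `lilypond piece.ly` then typesets the sheet music.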

There is another path I took to electronic music. In 1997 I downloaded a free trial copy of Noteworthy Composer, music notation software that appeared to be written by people who had a very strange and seemingly faulty conception of how music behaves. It could be used as a sequencer triggering the amazing Roland Sound Canvas GM/GS Sound Set that came built in as a part of Windows (think pan flutes and steel pans). Noteworthy Composer had some unusual capabilities, which I exploited: the tempo could be set to dotted half note equals 750 beats per minute, you could write 128th notes, and you could change the tempo at any point abruptly or gradually; the pitch of each track could be tuned to 8192 divisions of a half step and could be changed on the fly; individual tracks could contain loops of any duration that did not affect the other tracks and loops could be nested. I made roughly 1000 little studies using this tool between 1997 and 2003, in the spirit of Conlon Nancarrow’s player piano studies. Check out this track from my old experimental pop album for a sample: http://jonathanmarmor.bandcamp.com/track/current-ie-contemporary

B: Describe the creative process for this piece.

JM: It’s possible to think of musical genre as a set of rules and tendencies that govern how musical material is organized. The rules are defined by the sum of the genre’s body of work. Most genres are the accumulated contributions of hundreds or thousands of diverse musicians spanning decades or centuries. This has led most genres to obey a handful of nearly universal rules, such as pitch class equivalence at the octave (middle C is the same note as C in any other octave), or the idea that some element of the music must repeat. In all my recent music I have tried to create an original set of rules and tendencies based on a skewed or faulty conception of the nature of music. It embraces some collection of traditional or made up rules and relentlessly sticks to them. Other common rules are completely ignored. The hope is that this results in an internally consistent piece which is only related to other music by coincidence.

B: What’s the longest silence (length)?

JM: Only about 2 and a half minutes. Surprising, right?

[note: I’m amazed. I would have guessed 10 minutes.]

One of the purposes of putting periods of silence in a piece of music is to let the listener’s mind wander. However, the first 50 or so times a normal listener goes to a concert with a lot of silence in it, his mind is going to wander to rage! He’ll be really uncomfortable, trying not to breathe. He’ll be self-conscious. He won’t know what he’s supposed to be doing or thinking or listening to. He might think he’s doing something wrong. He’ll certainly think that silence isn’t music, that there isn’t music happening during the silence, that the composer is a self-righteous idiot, and that the concert is bad. Some of the time, however, this is not the case. If you are open to listening carefully and letting your mind wander, you may find all sorts of nice things to enjoy.

B: Tell me about the lyrics.

JM: I wrote a little program that makes nonsense poetry. You give it any arbitrary pattern of stressed and unstressed syllables and a rhyme scheme and it will grab random individual words from lyrics of Bob Dylan, Steely Dan, The Eagles, Elvis Costello, Billy Joel, The Band, Tom Waits, and Rufus Wainwright that match the meter and rhyme scheme.

For example, just now I gave it a rhyme scheme of AABBA and this

pattern of unstressed (u) and stressed (S) syllables:

uSuuSuuSu
uSuuSuuSu
uuSuuS
uuSuuS
uSuuSuuSu

and it spit out this limerick:

The Wrongfully Showdown Y’all Sounding
Reporters The Reading In Bounding
Ayatollah’s A Slot
Inconceivable Bought
A Callin’ Coincidence Pounding

It uses the Carnegie Mellon Pronouncing Dictionary to match rhymes and syllable stresses. It always ends up sounding like total nonsense but follows the meter and rhyme scheme very strictly. It doesn’t use any kind of natural language processing to make the order of words resemble English. For this piece, I made it tend to pick words with more syllables first, then fill in the gaps with shorter words, which gives it a certain sound.
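
A stripped-down reconstruction of the word-picking step, using the pronouncing wrapper around the CMU dictionary (the real program and its Dylan-to-Wainwright word lists aren’t shown; this vocabulary is a stand-in, and rhyme matching is left out):

    import random
    import pronouncing  # CMU Pronouncing Dictionary wrapper

    def stress_pattern(word):
        """Map a word to its syllable stresses as 'u'/'S', or None if unknown."""
        phones = pronouncing.phones_for_word(word.lower())
        if not phones:
            return None
        # Treat CMU primary stress (1) as 'S' and everything else as 'u'.
        return ''.join('S' if s == '1' else 'u'
                       for s in pronouncing.stresses(phones[0]))

    def fill_line(pattern, vocabulary):
        """Greedily fill a metric pattern like 'uSuuSuuSu' with random words."""
        words = []
        while pattern:
            fits = [w for w in vocabulary
                    if stress_pattern(w) and pattern.startswith(stress_pattern(w))]
            word = random.choice(fits)  # one-syllable words cover 'u' and 'S'
            words.append(word)
            pattern = pattern[len(stress_pattern(word)):]
        return ' '.join(w.title() for w in words)

    vocabulary = ['the', 'a', 'bought', 'slot', 'reporters', 'reading',
                  'bounding', 'showdown', 'pounding', 'inconceivable',
                  'coincidence']
    print(fill_line('uSuuSuuSu', vocabulary))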

In this piece, after a melody’s rhythm is selected, a corresponding poem is made to match. Because the melody rhythms were written with no consideration for the lyrics’ rhythm, the meter and rhyme scheme of the lyrics aren’t really apparent. I’ll probably write another vocal piece in the future that more deliberately exploits this tool.

B: One of the most unique things about your instrumentation was the weak sound coming out of the keyboards. What were those sounds?

JM: I love the sounds that come with consumer keyboards. They’re beautiful and funny. The choice to use layered synthesizer sounds along with an otherwise acoustic ensemble was made purely because I like how it sounds.

B: Did you know early on what your instruments would be, did the computer determine this, or did you decide after you had a composition?

JM: Picking the instruments was a back and forth between an idea I had for a sound and figuring out which of my very talented musician friends were available. The specific sounds used by the synthesizers were chosen randomly from a list that I ranked intuitively.

B: Describe the process of working with the musicians. Were there any challenges?

JM: We only had four rehearsals and never had the whole group together until the first note of the concert was played. It’s an hour and 20 minutes of pretty non-idiomatic music. I was very happy with the way the concert came out, but there were a few trainwrecks.

B: How did the performance end up at the Ontological-Hysteric Theater?

JM: Composer Travis Just curates a monthly experimental music series at the Ontological Theater. He is familiar with my music from the period when we were both graduate students at CalArts in Los Angeles.


Here is the recording of the whole performance:
http://archive.org/details/April102010Ontological-hystericTheater

One hour 17 minutes

Erin Flannery – Voice,
Laura Bohn – Voice,
Quentin Tolimieri – Synthesizer/Voice,
Phil Rodriguez – Synthesizer/Voice,
Will Donnelly – Synthesizer,
Matt Wrobel – Acoustic Guitar/Voice,
Ian Wolff – Acoustic Guitar/Voice,
Katie Porter – Clarinet,
Beth Schenck – Alto Saxophone,
Jason Mears – Alto Saxophone

These are links to two pieces that were made using the same basic approach and software:

https://soundcloud.com/jonathanmarmor/cattle
9 minutes, featuring the fantastic New Zealander violinist Johnny Chang

https://soundcloud.com/jonathanmarmor/sets/stone
45 minutes, featuring the incomparable clarinetist Katie Porter

