"Wolf Notes" episode 11-18
“Dedalus Ensemble at Roulette, Brooklyn on Sept. 9th, 2013”
With commentary by Kevin Weng-Yew Mayner
Jonathan Marmor: Penguin Atlas of African History
Quentin Tolimieri: Any Number of Instruments
Michael Vincent Waller: Ritratto
Craig Shepard: Coney Island, April 15, 2012
Jason Brogan: Deux études
Travis Just: The young generation is right
Amélie Berson, flute
Cyprien Busolini, viola
Pierre-Stéphane Meugé, saxophones
Thierry Madiot, trombone
Deborah Walker, cello
Didier Aschour, guitar
Hear this broadcast
Monday, Nov. 18th, 11:00pm EST
The Classical Network
Listen online: http://rdo.to/WWFM
or on terrestrial radio in NYC, NJ, and Philadelphia: http://www.wwfm.org/technical.shtml
WKCR 89.9 HD2 New York City
WWFM 89.1 FM Trenton/Princeton, NJ
WKVP 89.5 HD2 Cherry Hill/Philadelphia
The Echo Nest Remix API comes with a demo, enToMIDI by Brian Whitman, which attempts to transcribe any audio file using only Remix’s audio analysis data, and spits out a MIDI file. The purpose of the EN audio analysis data is to provide a summary of the music, not to do the source separation necessary for an accurate transcription. This means the resulting MIDI file usually doesn’t sound much like the input.
MIDI Digester is a very small script that runs audio through enToMIDI, plays back the resulting MIDI using QuickTime and its built-in piano synthesizer, records the audio with sox, then repeats the process as many times as you want. Each repetition strips away more of the original musical material and accumulates the sound of enToMIDI.
Check out this demo, which “digests” a 7.66-second excerpt of the traditional bluegrass tune “The Groundhog” played by the same QuickTime piano synthesizer.
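The feedback loop can be sketched as a per-round plan of commands. The tool names here ("entomidi", "play_midi", "rec") are placeholders, not the script’s real invocations of enToMIDI, QuickTime playback, and sox:

```python
def digest_steps(infile, rounds):
    """Plan the commands for each digestion round.

    Tool names are placeholders: "entomidi" stands for the enToMIDI
    transcriber, "play_midi" for QuickTime's built-in piano synth,
    and "rec" for capturing the playback with sox.
    """
    steps = []
    current = infile
    for i in range(1, rounds + 1):
        midi = "round%d.mid" % i
        wav = "round%d.wav" % i
        steps.append(["entomidi", current, midi])  # transcribe audio -> MIDI
        steps.append(["play_midi", midi])          # play the MIDI back
        steps.append(["rec", wav])                 # record the playback
        current = wav                              # the next round digests this
    return steps
```

Each round’s recording becomes the next round’s input, which is what gradually replaces the source material with the sound of the transcriber itself.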
Concerts in France
More performances of my new piece for sextet “Penguin Atlas of African History” by Dedalus:
Montpellier (La Chapelle) on November 23
Dedalus plays “Penguin Atlas of African History” at Roulette
Please consider coming out to Roulette in Brooklyn tomorrow, Monday, September 9th, 2013 at 8pm to hear my new piece “Penguin Atlas of African History” for Piccolo, Soprano Saxophone, Viola, Cello, Trombone, and Electric Guitar with slide. It will be performed by the amazing Didier Aschour and his ensemble Dedalus from Montpellier, France.
The concert features music by a bunch of my friends: Travis Just, Devin Maxwell, Cat Lamb, Jason Brogan, Quentin Tolimieri (his music is some of my favorite in the universe), Craig Shepard, John Hastings, and Michael Vincent Waller.
Details here: http://roulette.org/events/dedalus/.
My next concert will be September 27th in Montreal. The program will be three new ~20-minute pieces for Violin and Guitar by Andre Cormier, Mirko Sablich, and me.
Exquisite Corpses: A collaborative music composition and performance experiment
At the August 31st, 2013 Music Hackathon NYC, we’ll be attempting to collaboratively create about an hour of new music in just 7 hours. Anyone is welcome to join us in this experiment, but also feel free to do your own thing at this hackathon — there will be an opportunity for everyone to present their work.
At 8pm we’ll perform what we’ve come up with. If you’re not contributing, please come listen!
Exquisite Corpse is a method for collaboratively creating an art work. One person or group gets it started, then hands it off to any number of other people or groups in sequence, who can add to it or modify it however they see fit. A key twist is that most of the prior work is hidden from the group currently working on it — the current group only has part of the existing thing to build from. This can lead to hilarious drawings of grotesque bodies with hands coming out of necks, for example, hence the name exquisite corpse.
This has been done with music before, but I’m not aware of an exquisite corpse where the medium is software, electronics, or musical instrument building (anyone know of any?).

In the context of Monthly Music Hackathon NYC, the only restriction on projects and approaches is that they are somehow related to music: acoustic or electronic live performance, playback of prerecorded sound, real time generation of sound, software, hardware, notated music, improvisation, musical sculpture, software that does something else related to music, etc. If you’re interested in participating but wondering if what you do is an appropriate fit, stop worrying and just come contribute. I’m particularly interested in seeing “corpses” where each round of modification is approached completely differently.

So bring your instruments, amps, laptops, audio interfaces, soldering irons, guitar pedals, and ideas. If you’d like to discuss what this will be like or throw out ideas or ask questions please send a message to our email discussion group (Go here to subscribe).
Here’s how it will/might work:
* There will be multiple pieces circulating the room.
* The day will be broken up into one-hour segments from noon to 8pm. At the top of each hour we’ll switch pieces.
* Depending on the number of people who want to participate and if folks want to form groups or not, there may be more or fewer pieces.
* Feel free to bring an idea or piece that you’ve already started — we’ll transform it completely, I’m sure!
* This schedule is in place just to get us going. We can break out of it on a case-by-case basis or altogether if that makes sense.
Noon
Corpse 1 - Group A
Corpse 2 - Group B
Corpse 3 - Group C
Corpse 4 - Group D
Corpse 5 - Group E
Corpse 6 - Group F
1 PM
Corpse 1 - Group F
Corpse 2 - Group A
Corpse 3 - Group B
Corpse 4 - Group C
Corpse 5 - Group D
Corpse 6 - Group E
2 PM
Corpse 1 - Group E
Corpse 2 - Group F
Corpse 3 - Group A
Corpse 4 - Group B
Corpse 5 - Group C
Corpse 6 - Group D
3 PM
Corpse 1 - Group D
Corpse 2 - Group E
Corpse 3 - Group F
Corpse 4 - Group A
Corpse 5 - Group B
Corpse 6 - Group C
4 PM
Corpse 1 - Group C
Corpse 2 - Group D
Corpse 3 - Group E
Corpse 4 - Group F
Corpse 5 - Group A
Corpse 6 - Group B
5 PM
Corpse 1 - Group B
Corpse 2 - Group C
Corpse 3 - Group D
Corpse 4 - Group E
Corpse 5 - Group F
Corpse 6 - Group A
6 PM Overflow / Scramble / Prep
7 PM Tech Rehearsal
8 PM Performance of the Exquisite Corpses
9 PM Talks about how the corpses were made and presentations of other projects worked on during the hackathon
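The rotation table above follows a single round-robin rule, which a minimal sketch can reproduce (hour 0 is the noon round):

```python
# Each hour every corpse moves to the previous group in the cycle,
# so over six rounds every group works on every corpse exactly once.
GROUPS = "ABCDEF"

def group_for(corpse, hour):
    """Group working on corpse 1-6 during hour 0 (noon) through 5 (5 PM)."""
    return GROUPS[(corpse - 1 - hour) % len(GROUPS)]

# Rebuild the full rotation table, one row per hour.
schedule = [[group_for(c, h) for c in range(1, 7)] for h in range(6)]
```

The modular shift is what guarantees no group ever sees the same corpse twice, and that every corpse passes through every group.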
Saturday, August 31st, 2013
Noon Hacking starts
8 PM Concert
199 Lafayette St, Suite 3B
New York, NY 10012
FREE, but please RSVP at http://monthlymusichackathonnyc.eventbrite.com/
Interview about Jazz & Technology Forum
Here’s a short interview I did about Jazz & Technology Forum:
Jonathan Marmor composes experimental music, writes software for exfm, helps coordinate Monthly Music Hackathon NYC, and plays tabla. He’s one of the primary instigators behind this weekend’s Jazz and Technology Forum at Ace Hotel New York in honor of UNESCO International Jazz Day. We had a sit-down with him about some stuff he knows about.
See more on our calendar at www.acehotel.com/jazzforum.
+ What is Music Information Retrieval and how does it interest you?
Music Information Retrieval (MIR), or Music Information Research, as it’s frequently called these days, is a field of science focused on using computer science techniques such as digital signal processing and machine learning to better understand music. A classic example is the challenge of automatically classifying a large collection of audio files into genres based entirely on the characteristics of the audio signal.
I’ve personally always been interested in learning about how music functions and making up my own rules for my music. MIR scientists approach analyzing music differently than anything I ever experienced in music school or studying Indian music. It was eye-opening and revealed I had been underestimating the number of unanswered basic questions about how music works. This opened up many new avenues for speculation on musical fantasy worlds. It’s like there was a box with a hundred knobs, and now that box has an unknown but much larger number of knobs.
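As a toy illustration of the genre-classification idea mentioned above (nothing like a real MIR system, which uses many features and learned models), a single hand-made signal feature can already separate two synthetic “genres”:

```python
import math
import random

def zero_crossing_rate(signal):
    """Fraction of adjacent samples that change sign: a crude timbre
    feature.  Noisy signals cross zero far more often than low tones."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return crossings / (len(signal) - 1)

def tone(freq, sr=8000, n=8000):
    """One second of a pure sine tone."""
    return [math.sin(2 * math.pi * freq * t / sr) for t in range(n)]

def noise(n=8000, seed=0):
    """One second of uniform white noise."""
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(n)]

def classify(signal, threshold=0.25):
    """A one-feature, one-threshold 'genre' classifier."""
    return "tonal" if zero_crossing_rate(signal) < threshold else "noisy"
```

A 220 Hz tone crosses zero about 440 times per second (rate ≈ 0.055), while white noise changes sign roughly half the time, so the single threshold cleanly separates them.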
+ What kinds of projects span the code/music relationship?
At Monthly Music Hackathon NYC we’ve had an incredible variety of projects spanning music and technology. One of my favorites is a wind harp built out of a metal rain gutter, driven by a computer-controlled fan. One of the inspirations for this jazz-focused hackathon was Ben Lacker’s Jazz Drum Machine, which chops up audio files of jazz drum solos, classifies all the segments by pitch characteristics and positions within the meter, then allows you to fade in and out loops of related sounds. It shows how having data describing audio can lead to really beautiful and unexpected art. Ben will be speaking at the Jazz & Technology Forum about how data generated by MIR tools can be used to create new music.
+ What do you think some ideal pairings of skill sets would be?
I’d love to see someone with a deep knowledge of jazz history, a data scientist, and an interactive artist team up to create a visualization (with audio) demonstrating a seemingly inconsequential nuance that makes jazz expressive, such as the variations in pitch intonation in 100 performances of the head of “St. Louis Blues.”
Originally posted here: https://www.facebook.com/jmarmor/posts/10200665102444479
Interview about my music from 2010
I just stumbled on this old interview one of my colleagues at WNET did with me after attending a concert of my music. I give extensive, exhaustive answers, covering my justification for making automated music that includes randomness and silences.
Here’s one of the movements of the piece being discussed: https://soundcloud.com/jonathanmarmor/april10-3
"Jonathan Marmor at the Ontological Theater + Interview"
by Bijan Rezvani
Thursday, April 15th, 2010
On Saturday night I made it to the Ontological-Hysteric Theater at St. Mark’s Church for a new piece of music by computer-aided algorithmic composer Jonathan Marmor. Marmor conducted 10 human beings through a deceptively lovely alien song cycle. Without the romantic flourishes typical of our pledge-time heroes, the piece used shifting sound combinations patterned with long silences to warp the temporal experience.
To learn more about the composer and his unique piece, which is streamed below, I had Jonathan Marmor answer a few questions:
Bijan: What’s the name of the piece, and when did you write it?
Jonathan Marmor: The piece doesn’t have a name. You’re the first person to ask. I wrote it between December and a week before the concert. However, both the construction of the piece and the software used to make it are just the latest variation in a string of related pieces.
B: How did you get into making computer music?
JM: Since I was a teenager I’ve been writing ‘algorithmic’ music, in which all or most of the events are governed by some simple systematic process. My musical training from age 14 was in North Indian Classical music, which frequently uses very clear logical patterns to construct phrases and forms. As a foreigner I didn’t have an intuitive understanding of the musical structures, learning process, or folk tunes that make up Hindustani music, so I think I had a tendency to over-emphasize the importance of systematic processes. I started writing music that consisted of one simple process, set up just to hear what all the different combinations sounded like. The interesting part of listening to these experiments was hearing the unexpected results that came from uncommon combinations or sequences of otherwise pretty standard material.

One liberating aspect of experimental music in the tradition of John Cage is that it encourages you to appreciate music by simply observing its unique shape. A common practice to make some music to observe is to make decisions about the content of a piece using some procedure with random results, such as flipping a coin. So when I started studying the music of John Cage, and the generations of musicians who were influenced by his music and ideas, I had a realization about my experience listening to algorithmic music: I didn’t need a clear logical process to get to the unusual combinations of material I was interested in; I could just use randomness.

The next several pieces I wrote employed increasingly complex webs of decisions made with a random number generator. Following the advice of my brother, I started using the Python programming language to generate huge lists of all the possible combinations and permutations of little patterns of musical material. I was still making one decision at a time, choosing from the lists of options, then notating the music manually.
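Generating every ordering of a small cell of material is nearly a one-liner with Python’s itertools. A minimal sketch of the approach (the pitch cell here is invented for illustration, not taken from any actual piece):

```python
import itertools
import random

# A little pattern of musical material: a four-note pitch cell.
cell = ["C", "E", "F#", "B"]

# Every ordering of the cell: 4! = 24 permutations.
orderings = list(itertools.permutations(cell))

# Every unordered three-note subset: C(4, 3) = 4 combinations.
subsets = list(itertools.combinations(cell, 3))

# Then make one decision at a time with a random number generator,
# choosing from the list of options.
rng = random.Random(2013)
choice = rng.choice(orderings)
```

The lists grow factorially, which is exactly how a few small patterns can produce far more material than can be notated by hand.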
A couple of years ago I was asked to write some music for some friends coming to town to play a concert. Using this process I managed to generate the data for a piece that was about ten times bigger than I could notate before the concert. I missed my deadline and was totally embarrassed. So I decided I needed to build two tools: (1) a standardized representation, or model, of a piece of music in Python data structures that could be customized to create a new piece, and (2) a wrapper for the popular notation typesetting program LilyPond that could take my Python representation of a piece and automatically make beautiful sheet music. The piece performed last Saturday was the second piece I’ve written using these tools.
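As a hedged illustration of what such a wrapper might look like (the data representation and function below are invented, and far simpler than the actual tools):

```python
def to_lilypond(notes):
    """Render a list of (pitch, duration) pairs as LilyPond input.

    Pitches use LilyPond spelling (e.g. "c'" is middle C, "r" a rest);
    durations are 1, 2, 4, 8... for whole, half, quarter, eighth notes.
    """
    body = " ".join("%s%d" % (pitch, duration) for pitch, duration in notes)
    return "\\version \"2.18.2\"\n{ %s }\n" % body

# A tiny "piece" in the Python representation.
piece = [("c'", 4), ("e'", 8), ("r", 8), ("g'", 2)]
source = to_lilypond(piece)  # write to a .ly file and run lilypond on it
```

The point of the design is that the Python data structure, not the engraving, is the piece: the same model can be regenerated, transformed, and re-typeset without manual notation.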
There is another path I took to electronic music. In 1997 I downloaded a free trial copy of Noteworthy Composer, music notation software that appeared to be written by people who had a very strange and seemingly faulty conception of how music behaves. It could be used as a sequencer triggering the amazing Roland Sound Canvas GM/GS Sound Set built into Windows (think pan flutes and steel pans). Noteworthy Composer had some unusual capabilities, which I exploited: the tempo could be set to dotted half note equals 750 beats per minute; you could write 128th notes; the tempo could be changed at any point, abruptly or gradually; the pitch of each track could be tuned to 8192 divisions of a half step and changed on the fly; and individual tracks could contain loops of any duration that did not affect the other tracks, with loops nested inside loops. I made roughly 1000 little studies using this tool between 1997 and 2003, in the spirit of Conlon Nancarrow’s player piano studies. Check out this track from my old experimental pop album for a sample: http://jonathanmarmor.bandcamp.com/track/current-ie-contemporary
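One reason independent, nestable loops are so fertile: the composite texture repeats only after the least common multiple of the track loop lengths, which grows much faster than the loops themselves. A quick illustration with invented loop lengths, counted in 128th notes (the smallest value the software allowed):

```python
from math import lcm  # Python 3.9+

# Loop lengths for three independent tracks, in 128th notes.
# These values are invented for illustration.
loops = [96, 80, 56]

# The full texture only repeats once every loop has come back around,
# i.e. after the least common multiple of the loop lengths.
composite = lcm(*loops)
```

Here three loops, each under a measure or two long, produce a composite period of 3360 128th notes before the texture realigns.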
B: Describe the creative process for this piece.
JM: It’s possible to think of musical genre as a set of rules and tendencies that govern how musical material is organized. The rules are defined by the sum of the genre’s body of work. Most genres are the accumulated contributions of hundreds or thousands of diverse musicians spanning decades or centuries. This has led most genres to obey a handful of nearly universal rules, such as pitch class equivalence at the octave (middle C is the same note as C in any other octave), or the idea that some element of the music must repeat. In all my recent music I have tried to create an original set of rules and tendencies based on a skewed or faulty conception of the nature of music. Each piece embraces some collection of traditional or made-up rules and relentlessly sticks to them. Other common rules are completely ignored. The hope is that this results in an internally consistent piece that is related to other music only by coincidence.
B: How long is the longest silence?
JM: Only about two and a half minutes. Surprising, right?
[note: I’m amazed. I would have guessed 10 minutes.]
One of the purposes of putting periods of silence in a piece of music is to let the listener’s mind wander. However, the first 50 or so times a normal listener goes to a concert with a lot of silence in it, his mind is going to wander to rage! He’ll be really uncomfortable, trying not to breathe. He’ll be self-conscious. He won’t know what he’s supposed to be doing or thinking or listening to. He might think he’s doing something wrong. He’ll certainly think that silence isn’t music, that there isn’t music happening during the silence, that the composer is a self-righteous idiot, and that the concert is bad. Some of the time, however, this is not the case. If you are open to listening carefully and letting your mind wander, you may find all sorts of nice things to enjoy.
B: Tell me about the lyrics.
JM: I wrote a little program that makes nonsense poetry. You give it any arbitrary pattern of stressed and unstressed syllables and a rhyme scheme and it will grab random individual words from lyrics of Bob Dylan, Steely Dan, The Eagles, Elvis Costello, Billy Joel, The Band, Tom Waits, and Rufus Wainwright that match the meter and rhyme scheme.
For example, just now I gave it a rhyme scheme of AABBA and this
pattern of unstressed (u) and stressed (S) syllables:
and it spit out this limerick:
The Wrongfully Showdown Y’all Sounding
Reporters The Reading In Bounding
Ayatollah’s A Slot
A Callin’ Coincidence Pounding
It uses the Carnegie Mellon Pronouncing Dictionary to match rhymes and syllable stresses. It always ends up sounding like total nonsense but follows the meter and rhyme scheme very strictly. It doesn’t use any kind of natural language processing to make the order of words similar to English. For this piece, I made it tend to pick words with more syllables first, then fill in the gaps with shorter words, which gives it a certain sound.
In this piece, after a melody’s rhythm is selected, a corresponding poem is made to match. Because the melody rhythms were written with no consideration for the lyrics’ rhythm, the meter and rhyme scheme of the lyrics aren’t really apparent. I’ll probably write another vocal piece in the future that more deliberately exploits this tool.
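A toy version of the meter-matching half of this trick (rhyme matching is omitted), with a hardcoded mini-lexicon standing in for the full CMU Pronouncing Dictionary:

```python
# S marks a stressed syllable, u an unstressed one.  This mini lexicon
# is invented for illustration; the real program reads the full CMU
# Pronouncing Dictionary.
LEXICON = {
    "showdown": "Su",
    "coincidence": "uSuu",
    "reporters": "uSu",
    "sounding": "Su",
    "the": "u",
    "a": "u",
}

def fill_pattern(pattern, lexicon):
    """Greedily fill a stress pattern, longest words first -- the
    'pick words with more syllables first' tendency described above."""
    by_length = sorted(lexicon, key=lambda w: -len(lexicon[w]))
    line, rest = [], pattern
    while rest:
        for word in by_length:
            if rest.startswith(lexicon[word]):
                line.append(word)
                rest = rest[len(lexicon[word]):]
                break
        else:
            return None  # no word fits the rest of the pattern
    return " ".join(line)
```

For example, `fill_pattern("uSuuSu", LEXICON)` yields a two-word nonsense line whose syllable stresses exactly trace the requested pattern.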
B: One of the most distinctive things about your instrumentation was the weak sound coming out of the keyboards. What were those sounds?
JM: I love the sounds that come with consumer keyboards. They’re beautiful and funny. The choice to use layered synthesizer sounds along with an otherwise acoustic ensemble was made purely because I like how it sounds.
B: Did you know early on what your instruments would be, did the computer determine this, or did you decide after you had a composition?
JM: Picking the instruments was a back and forth between an idea I had for a sound and figuring out which of my very talented musician friends were available. The specific sounds used by the synthesizers were chosen randomly from a list that I ranked intuitively.
B: Describe the process of working with the musicians. Were there any challenges?
JM: We only had four rehearsals and never had the whole group together until the first note of the concert was played. It’s an hour and 20 minutes of pretty non-idiomatic music. I was very happy with the way the concert came out, but there were a few trainwrecks.
B: How did the performance end up at the Ontological-Hysteric Theater?
JM: Composer Travis Just curates a monthly experimental music series at the Ontological Theater. He is familiar with my music from the period when we were both graduate students at CalArts in Los Angeles.
Here is the recording of the whole performance:
One hour 17 minutes
Erin Flannery – Voice,
Laura Bohn – Voice,
Quentin Tolimieri – Synthesizer/Voice,
Phil Rodriguez – Synthesizer/Voice,
Will Donnelly – Synthesizer,
Matt Wrobel – Acoustic Guitar/Voice,
Ian Wolff – Acoustic Guitar/Voice,
Katie Porter – Clarinet,
Beth Schenck – Alto Saxophone,
Jason Mears – Alto Saxophone
These are links to two pieces that were made using the same basic approach and software:
9 minutes, featuring the fantastic New Zealand violinist Johnny Chang
45 minutes, featuring the incomparable clarinetist Katie Porter
Generating the non-standards
For my part, I joined up with Jonathan, who proposed an idea for a hack: build a statistical model of jazz standards and use it to generate new songs (non-standards). The eventual goal is a machine of the form “PUSH BUTTON, RECEIVE JAZZ,” which could generate fresh lead sheets to be performed by the live band. We didn’t quite make it that far, but we got close!
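A sketch of what such a generator might look like, assuming a simple bigram Markov chain over chord symbols (the toy corpus below is invented, and the actual hack’s model was surely richer):

```python
import random

# Learn bigram chord transitions from a corpus of "standards"
# (toy progressions, invented for illustration).
corpus = [
    ["Dm7", "G7", "Cmaj7", "Cmaj7"],
    ["Dm7", "G7", "Em7", "A7"],
    ["Em7", "A7", "Dm7", "G7"],
]

transitions = {}
for tune in corpus:
    for a, b in zip(tune, tune[1:]):
        transitions.setdefault(a, []).append(b)

def non_standard(start, length, seed=0):
    """PUSH BUTTON, RECEIVE JAZZ: a random walk over the model."""
    rng = random.Random(seed)
    chords = [start]
    while len(chords) < length:
        # Fall back to the start chord if a chord has no known successor.
        chords.append(rng.choice(transitions.get(chords[-1], [start])))
    return chords

progression = non_standard("Dm7", 8)
```

Because transitions are sampled in proportion to how often they occur in the corpus, the output is statistically standard-flavored while being, chord by chord, a new tune.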