
What is music21?

Music21 is a set of tools for helping scholars and other active listeners answer questions about music quickly and simply. If you’ve ever asked yourself a question like, “I wonder how often Bach does that” or “I wish I knew which band was the first to use these chords in this order,” or “I’ll bet we’d know more about Renaissance counterpoint (or Indian ragas or post-tonal pitch structures or the form of minuets) if I could write a program to automatically write more of them,” then music21 can help you with your work.

How simple is music21 to use?

Extremely. After starting Python and typing "from music21 import *" you can do all of these things with only a single line of music21 code:

Display a short melody in musical notation:
converter.parse("tinynotation: 3/4 c4 d8 f g16 a g f#").show()

Print the twelve-tone matrix for a tone row (in this case the opening of Schoenberg's Fourth String Quartet):
print(serial.rowToMatrix([2, 1, 9, 10, 5, 3, 4, 0, 8, 7, 6, 11]))

or, since all the rows of the Second Viennese School are already available as objects, you can type:
print(serial.getHistoricalRowByName('RowSchoenbergOp37').matrix())

Convert a file from Humdrum's **kern data format to MusicXML for editing in Finale or Sibelius:
converter.parse('/users/cuthbert/docs/composition.krn').write('musicxml')

With five lines of music21 code or less, you can:

Prepare a thematic (incipit) catalog of every Bach chorale that is in 3/4:

catalog = stream.Opus()
for workName in corpus.getBachChorales():
    work = converter.parse(workName)
    firstTS = work.flat.getTimeSignatures()[0]
    if firstTS.ratioString == '3/4':
        catalog.append(work.measures(0, 2))
catalog.show()

Google every motet in your database that includes the word ‘exultavit’ in the superius (soprano) part (even if broken up as multiple syllables in the source file) to see how common the motet's text is:

import webbrowser
for motet in listOfMotets:
    superius = motet.parts[0]
    lyrics = text.assembleLyrics(superius)
    if 'exultavit' in lyrics:
        webbrowser.open('http://www.google.com/search?&q=' + lyrics)

Add the German name (i.e., B♭ = B, B = H, A♯ = Ais) under each note of a Bach chorale and show the new score:

bwv295 = corpus.parse('bach/bwv295')
for thisNote in bwv295.recurse().notes:
  thisNote.addLyric(thisNote.pitch.german)
bwv295.show()

Of course, you are never limited to just using five lines to do tasks with music21. In the demos folder of the music21 package and in the sample problems page (and throughout the documentation) you’ll find examples of more complicated problems that music21 is well-suited to solving, such as cataloging the rhythms of a piece from most to least-frequently used.

Music21 builds on preexisting frameworks and technologies such as Humdrum, MusicXML, MuseData, MIDI, and Lilypond, but adds an object-oriented skeleton that makes it easier to handle complex data. At the same time, music21 tries to keep its code clear and to make reusing existing code simple. With music21, once you (or anyone else) have written a program to solve a problem, that program can easily become a module to be adapted or built upon to solve dozens of similar (but not identical) problems.

Interested in learning more?

Latest music21 News

[July 18, 2016, 12:36 pm] [music21]
Despite the author name, this is a guest post from Christopher Witulski; he can be reached at  chris.witulski at gmail.com.  We thank him for sharing this exciting pre-publication work. -- MSC

Last year I learned about music21 and ever since I have been wondering how I can use it to learn more about the Moroccan musical repertoires that I study. Long story short, I ended up building a tool for creating interactive web-based contour visualizations from the command line and I'd like to share it here.

Climbing out of a rabbit hole

I was working through a project and struggling to keep track of things. The paper was an analysis of a genre of Moroccan sung poetry called malhun for the 2016 Analytical Approaches to World Music conference. The performance of each poem can last twenty-ish minutes and contains a number of repetitions of the refrain text. These refrains are short (roughly eight 2/4 measures long) and modulate through repetitive--but different--melodies. I had transcribed over sixty of them in an attempt to understand how they worked, how they changed, and how they were related to each other.

[Photo: Malhun performance in Fez, Morocco; musicians on stage with a solo singer in front and the author among five violinists, holding his in Western style]

Having performed this music in Morocco (I'm the lone violinist in the photo who can't quite figure out how to play while holding the instrument upright on my knee), I was constantly struck with this feeling of déjà vu. New melodies felt so similar to old ones, but I could not put my finger on how or why. The problem was simple: I could not keep sixty or seventy different transcriptions in my head at once. Comparing them was getting tricky. I wanted a way to stack them on top of each other, almost as if I could print the transcriptions on a transparency and show them all at once on an overhead projector.

Over the previous months, I had been teaching myself Python in an effort to learn more about music21 and what it could do. It was time to try and build the tool that I needed instead of wishing I could find it.

Visualizing contours

For the presentation, I put together a small library that carried out two main tasks. First, it used music21 to parse my transcriptions, normalize the length of each melody, and build a dataset. Using offsets, frequencies, and distances from the final note of each melody, it turned note objects into a JSON of coordinates. At 1,000 evenly spaced x values (each corresponding to 1/1,000th of the total melody length), it measured a y value for the frequency and another for the "distance from the root," the distance in steps above or below the melody's final pitch.
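
A minimal sketch of that first task, assuming a monophonic MusicXML file (hypothetical code, not the library from the paper; it measures distance from the root in semitones, where the paper counts scale steps):

import json
from music21 import converter

def contourPoints(path, numSamples=1000):
    '''Sample a monophonic melody at numSamples evenly spaced points.'''
    notes = converter.parse(path).flat.notes
    totalLength = notes.highestTime       # melody length in quarter notes
    rootMidi = notes[-1].pitch.midi       # the final note serves as the "root"
    points = []
    for i in range(numSamples):
        offset = totalLength * i / float(numSamples)
        sounding = notes.getElementsByOffset(offset,
            mustBeginInSpan=False, mustFinishInSpan=False)
        if len(sounding) == 0:            # a rest: no point at this position
            continue
        n = sounding[0]
        points.append({'position': i,
                       'frequency': n.pitch.frequency,
                       'distanceFromRoot': n.pitch.midi - rootMidi})
    return json.dumps(points)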

[Image: visualization of 68 melodic contours from malhun, overlaid, with one labeled in bold red]

The JSON was passed to another library that I had recently been learning and working with, called D3.js. It is written in JavaScript and designed for creating powerful interactive data visualizations. I supplemented my presentation with an online chart of each of my malhun transcriptions: by grouping contour lines within each poem, I was able to easily see the source of my déjà vu. Despite changes in pitch content, range, root motion, and a host of other things, the contours themselves often stayed strikingly consistent throughout the long performances. You can see the visualization and click through the different poems online, though be aware that some parts (like the "Next" button) are artifacts of the paper presentation.

Building a tool

Maybe two weeks ago I decided to try my hand at creating a Python library of my own. I simplified the chart, creating a sort of template, removed the stepwise element of the visualization, and fought my way through learning to upload a project to PyPI. The result is ContourViz... I didn't give much thought to the name, my apologies.

"Three Melodic Contours" -- one is shown in blue.
ContourViz, simple example

The tool takes, as an argument, either a music notation file or a directory containing many of them. It parses these files and creates a JSON structure of 1,000 coordinates for D3.js to work with. It then copies into the current directory a results folder containing an index.html file and a folder of JavaScript and CSS files that the generated web page will use. Finally, it runs the Python SimpleHTTPServer and opens the new page, which parses the JSON to create the visualization.
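
Roughly, that serve-and-open step at the end might look like this (a sketch in Python 2, whose standard library provides SimpleHTTPServer; this is not ContourViz's actual source):

import webbrowser
import SocketServer
import SimpleHTTPServer

PORT = 8000                       # hypothetical port
handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(('', PORT), handler)   # serves the current directory
webbrowser.open('http://localhost:%d/index.html' % PORT)
httpd.serve_forever()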

You can install ContourViz using the following in your terminal:

pip install contourviz

It runs from the command line, so creating a visualization of a single melody is as easy as:

chart-single-contour '/path/to/file.xml'

Working with a directory is similar:

chart-contours '/path/to/directory/full/of/xml/or/mxl/files'

[Image: ContourViz, more complex example; six melodic contours in two groups of three overlapping contours from Damlij-Bouzouba]
I'm still toying with the system, and it has a number of issues. For example, I would love for it to parse voices as individual melodies when they are present. Instead, it only works with monophonic lines, meaning that each voice has to be in its own file if you want to visualize voice leading or other contrapuntal patterns. There are smaller issues, too: I still need to set up the y axis to render note names properly.

Please feel free to check out the GitHub repo and suggest any other changes or ways in which it could be more helpful. This is my first go around at building a tool of this sort, so I am eager to hear if it is helpful and how it could be improved. And thank you for allowing me to join the community.



[July 12, 2016, 6:08 pm] [music21]
The following is a guest post from Daniel McGillicuddy, alias Basso Ridiculoso.  He can be reached at daniel.mcg [at] gmail.com.   -- MSC

Hello all!

I am a gigging musician and bass player who has discovered music21, but, alas, I am certainly not a musicologist or academic.

I have seen many of the amazing examples that showcase music21’s capabilities with classical and twentieth-century music, and wanted to show how I use music21. Hopefully these examples show that music21 can also be used to explore jazz and popular music, either via analysis for educational purposes or for developing improvisational ideas.

Jazz Standard Voice Leading Lines

Music21 has an amazing corpus of public-domain classical music, but most jazz standards are not available for inclusion. However, since music21 understands seventh chords and reads MusicXML, a virtual corpus of jazz standards is available for analysis and exploration via another application called IRealPro. IRealPro is a virtual-accompanist program with chord charts for over 3,000 jazz standards, and it can export those chord progressions as MusicXML, a format that allows music21 to understand the harmony. Once we have that outline of a jazz standard's harmonic structure, music21 can be turned loose.

For this example, let's export the chord chart for the standard “Alone Together” and generate a 3rd-to-7th voice-leading line through the entire tune, based on this concept by Bert Ligon, as described here.

(links: Alone Together.XML and Guide Tone Lines with Music21.py)

Since music21 understands harmony, any kind of voice leading line is possible, for instance the 5th resolving to the 9th. Now these voice leading lines can be generated for any jazz standard (or for any chord progression) that can be exported as MusicXML format and these lines can be used as jumping off points for making solos or studying voice leading.
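
A minimal sketch of the idea (hypothetical code, not the Guide Tone Lines with Music21.py script linked above; it naively alternates 3rds and 7ths, where Ligon's lines choose whichever guide tone moves most smoothly):

from music21 import converter, harmony, note, stream

def guideToneLine(path):
    '''Build a crude 3rd-to-7th voice-leading line from a chord chart.'''
    leadSheet = converter.parse(path)
    line = stream.Part()
    useThird = True
    for cs in leadSheet.recurse().getElementsByClass(harmony.ChordSymbol):
        p = cs.third if useThird else cs.seventh
        if p is None:             # a triad with no 7th: fall back to the root
            p = cs.root()
        n = note.Note(p)
        n.quarterLength = 4.0     # one whole note per chord, for clarity
        line.append(n)
        useThird = not useThird
    return line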

Jazz Solo Analysis 

Analyzing jazz solos from the masters is another way to get improvisational material, though it is better known as stealing someone's licks! Since music21 can understand the relationship of any note to any chord, it can be used to analyze the functional relationship of the notes in a solo.

Here is an example of Miles Davis’s solo on “Freddie Freeloader,” with each note labeled according to its function against the chord being played; for example, an F over a Bb7 chord is labeled as the fifth.

(links: Miles Solo XML  and Melodic Labeler.py)

This same music21 code was used to analyze Charlie Parker's solo on “Bloomdido” and a walking bass line over an F blues by Ron Carter.

Now any solo line that can be exported in MusicXML format can be analyzed by music21 and then explored even further. What notes are favored? On what beats of the bar do certain notes get played? How many times does each note get played? Are there repeating phrases that a certain player uses over and over? All of this can be cataloged or graphed once it has been brought into the music21 world. (Note that the included code needs a chord symbol over every measure; a sketch of the approach follows.)
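
A minimal sketch of such a labeler (hypothetical code, not the Melodic Labeler.py linked above; like that script, it assumes a chord symbol in every measure, and it labels only the generic degree above the root rather than full chord-tone figures):

from music21 import converter, harmony, interval, note

def labelSolo(path):
    '''Label each note with its generic degree above the measure's chord root.'''
    solo = converter.parse(path).parts[0]
    for m in solo.getElementsByClass('Measure'):
        symbols = m.getElementsByClass(harmony.ChordSymbol)
        if len(symbols) == 0:
            continue                  # no chord symbol: skip the measure
        root = symbols[0].root()
        # use note.Note, not .notes, so the ChordSymbol itself is not labeled
        for n in m.getElementsByClass(note.Note):
            iv = interval.Interval(root, n.pitch)
            # 1 = root, 3 = third, 5 = fifth (e.g., F over Bb7), etc.
            n.addLyric(str(iv.generic.simpleUndirected))
    return solo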

Hopefully these examples show that music21 is not only for musicologists exploring the pitch-class space of Bartók's string quartets or for twelve-tone composers! Students and gigging musicians can use it for very practical purposes as well. Many thanks to Michael for allowing this guest post from a big music21 fan!

(Ed.: Thanks Dan! The examples included here are copyrighted by their respective composers and publishers. We believe their inclusion here for educational and instructional purposes is supported by all four factors of the Fair Use test.)
[October 2, 2015, 12:37 pm] [music21]

MIT Spectrum has an article by Kathryn M. O'Neill on my work, music21, and computational musicology:
“IF I WANT TO KNOW how the guitar and saxophone became the important instruments throughout classical repertory or how chord progressions have changed, those are questions musicology has been unable to approach,” says Associate Professor of Music Michael Cuthbert. Spotting trends and patterns in a large corpus of music is nearly impossible using traditional methods of study, because it requires the slow process of examining pieces one by one. What his field needed, Cuthbert determined, was a way to “listen faster.”
Read more at http://spectrum.mit.edu/articles/data-in-a-major-key/.

In other news, Clifton Callender at Florida State University is currently teaching a doctoral seminar on music theory techniques using music21.  His course description is at http://cliftoncallender.com/teaching/.



[September 28, 2015, 5:58 pm] [music21]
The long-awaited (at least by me) version 2 of music21 is released!  This is the first version of the v.2 release to be out of beta and stable enough for general use by everyone.

Upgrade with:

    pip install --upgrade music21

Or download from GitHub.

The first non-beta release of music21 since v. 1.9.3 (June 2014) brings a ton of new features and lots of new speed. But as a major version change, it also has some changes that every programmer using the system needs to be aware of. The release notes on GitHub give all the details, but here are the highlights since 1.9:

Changed and Added features


* Duration and Offset now use Fractions when necessary for exact representation of tuplets. Many, many rounding errors are gone (see the sketch after this list). For now, you can use Duration.quarterLengthFloat and offsetFloat to get the old behavior, but float(Duration.quarterLength) and float(offset) are better.
* Converters support easy-to-install custom subconverters. MEI is now supported (thanks to McGill University).
* Python 2.6 is no longer supported. Python 3.4 is highly recommended; 2.7, 3.3, and 3.5 also work.
* Loading cached streams is extremely fast. All streams are automatically cached when loaded from disk.
* Sorting is much more consistent and faster
* MusicXML parsing and showing have been rewritten to use cElementTree, with many new features.
* Stream's internal mechanisms have been hugely rearranged.  Now offsets are stored inside Streams instead of inside Notes, etc., making lots of things faster and more reliable.
* Streams support filters on iteration using the `.iter` property and the `recurse()` method.  These are big changes for speed and reliability.
* Namedtuples replace anonymous tuples in many places
* Music21 is available under the BSD license.
* MuseData files are no longer available in the corpus. However, new files in MusicXML format have replaced several of them.
* Complete rewrite of TinyNotation making it much easier to subclass for your needs.
* If you have MuseScore 2, try sc.show('musicxml.png') to get a beautifully rendered PNG of your score, or use .pdf to get something ready to print. Thanks Nicholas, Thomas, and Walter!
* Builds are automatically tested for errors and documentation coverage.
* Experimental modules moved to the `alpha` sub package.  `demos` reorganized.
* Lots of documentation changes!
* Obscure and almost never used (or actually never used) methods and attributes have been removed.
* Did I mention how much better the documentation is getting?
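
To illustrate the first item above, here is what the new exact-duration behavior looks like (a small sketch, assuming music21 v2):

from fractions import Fraction
from music21 import duration

d = duration.Duration()
d.quarterLength = Fraction(1, 3)   # a triplet eighth, stored exactly
print(d.quarterLength)             # 1/3 -- exact, no rounding error
print(float(d.quarterLength))      # 0.333... if you need the old float behavior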

In case anyone is keeping track, since v.1.0 (June 2012), here are the:

Biggest changes between 1.0 and 1.9


* Store complete Streams via FreezeThaw
* Output to Vexflow and `music21j`
* Converters have been moved into packages.
* It takes 1/3 the time to do most operations, and 1/4 the time to start up.
* Capella supported.  ABC imports almost everything. Humdrum supports multiple voices. Chords have a better root() algorithm
* Many, many new corpus pieces.
* Layout support.
* Python 3 supported, and now recommended.
* Timespans make .getContextByClass at least an order of magnitude faster, letting music21 handle huge scores.
* Derivations reduce the number of Streams to keep track of.


Oh, and I did more than patch bugs in the last week:

Release notes since 2.0.11


* Streams use .iter and .recurse() in TONS of functions, making many a lot faster, a few a bit slower, but all cleaner to debug and safer.
* Deprecated items now emit a deprecation warning.
* Duration objects now have a `.client` which can inform the `Note` of changes to it.
* `.classes` searches are way faster. Returns tuple.
* `deepcopy` is about 30% faster.
* `common` is split into a directory of related functions.  Now worth looking through.
* All corpus files, including small .abc files with non-standard additions, now parse. A complete corpus.search().parse() should be possible without any try: statements.
* Several bugs in MusicXML processing (mainly related to the handling of expressions, noteheads, etc., on chords) have been fixed.  Also Finale's `` tag is supported.
* Code is much more "lint-free", catching many subtle bugs.
* audioSearch is cleaned up, with beta-type code moved to demos.
* Documentation much improved including three new User's Guide sections, and (thanks to bagratte) fixes for UTF-8 errors.
* `io.open` replaces `codecs.open` for better non-Western script handling.
* .egg files are no longer distributed.  I'll work on getting .whl (wheel) files soon, but for now use .tar.gz.  PyPI no longer supports .egg, so there's no reason for them.

Incompatible changes


* `.fullyQualifiedClasses` is GONE. No one used it.  Instead a new `.classSet` replaces it for rapid class searching.
* sites.Sites and sites.SiteRef are no longer imported into base by default.
* `documentation` modules reorganized, with better examples.
* `stream.core` moves several core modules out of the `stream` module.
* `Volume.parent` renamed `Volume.client` to match `Derivation` and `Duration`
* `.components` on `Duration` now returns a tuple.

What's Next?

Today also announces the first commit of music21 3.0 -- for the first time, I'm going to try to do something daring: keep bug fixes and some backwards compatible changes in the 2.1 (2.2, etc.) branch, but go forward with bigger changes in a 3.0-alpha branch.  Some things that you might expect to happen:

* All deprecated functions will be gone in 3.0; like immediately; like I'm deleting them as I type.

* Lots of things that currently return a Stream will instead return iterators over Streams.  These include .getElementsByClass() and .getElementsByOffset() -- the fact that so many Streams get created is one of the biggest headaches and one of the main reasons the system gets slow.  You can prepare for the change by examining your usage of these functions and asking yourself, "Am I actually using this as a Stream? Or just as a bunch of objects to iterate over in a for loop or to count using len()?"  If the latter, you're fine.  If the former, go ahead and add .stream() after it, for instance filteredStream = s.getElementsByClass("TimeSignature").stream().  The last .stream() call does NOTHING right now, but it will ensure that your code works exactly the same after the change happens.  If you want to use the new features (even in 2.1), add .iter between `s` and `.getElementsByClass()` (but leave off the `.stream()`).  You'll find that life will be going a lot, lot faster.  (A short sketch follows this list.)

* I'm going to make a second attempt to use TimeSpans as a general storage engine for Streams.  These are the super-fast representations of Streams that Josiah Oberholtzer made, which speed up working with large streams by 10-100x. But for very small streams (such as one measure of a chorale), they are much slower than the current Streams. Now that all the core mechanisms are factored out of Stream into StreamCore, I can much more easily experiment with switching the backend functions in and out. Using the lessons of Python's TimSort, I'll probably have the TimeSpan core kick in immediately when there are more than 64 elements in a Stream; it should be seamless except for a tiny delay when the 65th element is added (like shifting gears in a car).

* I may make Python 3.4 a requirement.  We'll see... I'm sick of coding for Python 2.  Python 3 is much more fun from the coder's perspective.
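
Here is the iterator migration from the second item above in code (a sketch, using a corpus chorale as the example stream):

from music21 import corpus
s = corpus.parse('bach/bwv66.6').flat

# Works identically before and after the 3.0 change:
tsStream = s.getElementsByClass('TimeSignature').stream()

# Opts into the new, faster iterator behavior today:
for ts in s.iter.getElementsByClass('TimeSignature'):
    print(ts.ratioString)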

Thanks everyone for great support! -- Myke



[September 21, 2015, 2:03 pm] [music21]
Ten days since the last release, so time for a new one. Again, speed, stability, and new features. The biggest change is the entirely new MusicXML output system to match the entirely new input system introduced last release. 
The second biggest is the (re)introduction of StreamIterators and RecursiveIterators. I'll need to get some demos of this up soon, but it will be a game changer for some tasks.
Update or install with one (or both) of these commands: 
pip install --upgrade music21
pip3 install --upgrade music21

Bigger changes
  1. MusicXML now uses the faster, more reliable ElementTree output generator. Please report any bugs on import or export, especially if they are regressions from format='oldmusicxml'; oldmusicxml will disappear soon.
  2. Better docs (see below), especially for the long-underdocumented recurse function. Everything that was in the Overview is now in the User's Guide.
  3. Streams now support filters on iteration -- if you have been using: for e in s.getElementsByClass('X'), try: for e in s.iter.getElementsByClass('X') for a major speedup, especially if you just want the first one or something of that sort. Recurse() supports the same, so for e in s.recurse().notes.getElementsByGroup('tuba') will be WAY faster than before. You might not notice the difference on your own work, but internally things are getting a lot faster. (obscure non-filter routines will be deprecated and disappear soon).
  4. Corpus docs/indexes, etc. are updated with more recent corpus changes (nothing new, but easier to find).
  5. Use of deprecated functions now generates a warning. This should help people plan for migration in case you're not reading the documentation religiously.
Smaller changes
  1. Documentation is improved and updated working with Jupyter/IPython 4 (note: a bug in nbconvert + pandoc requires pandoc v. 1.33 or older to make; they're working on a patch). Docs build in parallel, so it's very fast -- you'll see updates more often.
  2. Documentation is now separated into "source/" and "autogenerated/" folders -- everything in source is user editable. Nothing in autogenerated is.
  3. A number of obscure, long-deprecated functions are gone, the biggest being n.removeLocationBySite(); use n.sites.remove() instead.
  4. Normalization in features has been fixed (thanks, Frank Zalkow).
  5. Parsing of Capella files has been improved.
  6. Improved parsing of RomanText files; bugs in several encodings of rntxt and abc files have been fixed.
  7. common.nearestCommonFraction has been renamed addFloatPrecision to better reflect what it does; the old name always confused me.
[September 11, 2015, 7:45 pm] [music21]
This post announces the v.2.0.10 beta release of music21, which is moving quickly to the official v.2 release, v.2.1.  Some of the changes have already been announced on the music21list Google Groups mailing list.

Upgrade by downloading from https://github.com/cuthbertLab/music21/releases or by running "pip install --upgrade music21"

The major changes include:


  • New parsing engine for MusicXML (see below)
  • DurationTuples replace DurationUnits
  • Percussion clefs and NoClefs are now supported properly in MusicXML output
  • Improvements to the RomanText and clercqTemperley formats (thanks DT!)
  • Some obscure modules removed from the main namespace:
    1. intervalNetwork becomes scale.intervalNetwork and BoundIntervalNetwork becomes simply IntervalNetwork.
    2. scala becomes scale.scala
    3. chord becomes a package and chordTables becomes chord.tables 
  • In the next version, expect languageExcerpts to become text.languageDetection and the "xmlnode" module to disappear.
  • Environment and CapellaXML, which depended on XMLNode, now don't.  CapellaXML processing is 10x faster.
  • jsonpickling is upgraded and safer.
  • Building documentation now works on IPython 4/Jupyter 4.0
  • MusicXML output with Unicode now works on Py3 (thanks Sarig!)
  • Spanners on Rests now export properly in MusicXML
  • VexFlow only supports the music21j-based output now. More bug fixes there to come (or it will be moved to alpha support)
  • Everything overall is about 30% faster than a month ago.


The biggest change in this version is how MusicXML is processed.  When Christopher Ariza joined the music21 team in 2008, music21 had a tiny limitation: it didn't work with MusicXML, at all. Whoops! It was just too big a task for me to tackle while I was still figuring out how Streams, Sites, Durations, etc. would work. Thankfully Chris took it on and extremely quickly produced a great parser for MusicXML.  The problem back then was that few people were on the latest, greatest version of Python, 2.5; music21 aimed to support versions at least back to Python 2.1, and only the brand-new Python 2.5 had the "ElementTree" XML-processing module (and there were still substantial bugs in that module before Python 2.6).  We were determined not to make MusicXML parsing require an external library such as "lxml", so that left two choices: xml.minidom and xml.sax.

Anyone who knows anything about the structure of MusicXML and the differences in philosophy between DOM and SAX will know that DOM is the logical choice for MusicXML parsing -- it allows nodes to look at their neighbors, parents, and children, and to make logical decisions (am I a note, rest, or chord?) based on the context.  SAX, on the other hand, is built on calling functions whenever a particular start tag is encountered, whenever data is encountered, and whenever an end tag is encountered. Great for certain types of text formatting; insanely difficult for a format like MusicXML (or MEI, or just about any music format besides perhaps MIDI).  So, if memory serves, Chris wrote a quick DOM processor for MusicXML, and it was getting notes, durations, and measures beautifully.

But Chris Ariza is also probably the best programmer I've ever met and before going further he profiled the system and extrapolated what it would be like to work with a large corpus of MusicXML files using it.  Slow as slime.  The minidom was implemented entirely in Python, not highly optimized, and was not going to make anyone want to use MusicXML in the toolkit.

So, he basically did the impossible: implemented a blazingly fast SAX processor for MusicXML that built a close-to-the-original representation of the file (musicxml.mxObjects) and then processed that in a much more friendly format.  Bam! Speed went up by an order of magnitude, and everything that music21 could do with MusicXML was born.  In the dozens of releases since he moved on from the project, I've barely had to touch the internals at all even as the rest of the system has expanded and changed dramatically. And there was a system for caching the mxObjects representation for a speedup in the next parse.

Fast forward 7 years.  Python has changed.  Version 2.7 is now the minimum requirement (it's over five years old already; we just found a check for Python > 2.2 somewhere in the system and removed it).  Versions 3.3 and 3.4 are supported (3.5 should be out this week and of course will be supported).  And everyone has access to xml.etree.ElementTree now.  And the final representation of all parsed formats is now cached, so there is no need for the mxObjects cache.  So in the interest of simplifying parsing (and getting a 40% speedup over SAX + mxObjects), it made sense to rewrite the MusicXML parsing engine.

The new version is called musicxml.xmlToM21.  There are a few miscellaneous helpers in a new musicxml.xmlObjects file, but basically all the parsing takes place in the xmlToM21 file.  Every tag in MusicXML is now handled directly in that file, making it easier to see exactly which tag is causing any particular problem. (Line-number properties may be possible to add soon.)  Because the structure of the parser is now much closer to the structure of the MusicXML document, a TODO has been added for every missing tag or attribute.  Expect music21 to support every tag and attribute in MusicXML 3.0 sometime soon.  If you've ever wanted to hack additional support into music21's MusicXML parsing but it seemed too daunting, give another look at the code now.

This is a major change to the most-used format in music21. Thankfully, Ariza wrote so many tests into the system that I am relatively confident that everything now works exactly as before.  The exceptions are: non-printed notes are no longer skipped (this was to prevent the next bug), notes with incorrect divisions are now corrected rather than skipped, and spanners preceding rests are now attached to the rest rather than to the next adjacent note.  (My intention was to be 100% compatible with before, but it would have been very hard to replicate this incorrect behavior.)  The one negative side effect you will see is that parsing some of the Beethoven files is now slower (rather than 40% faster), because some of those files used a large number of incorrectly notated, non-printing notes to represent the playback of trills.  For certain files (such as the Große Fuge) the number of notes in the score will almost double with the new system.

Because this change is major, for now you can still use the old parsing system via converter.parse('filename.xml', format='oldmusicxml').  I suggest also adding forceSource=True to make sure that you are reading the file from disk and not from the cache.

I'm extremely excited by this change -- we will get the writing of music21 files to use the new system by the next release (a much easier task).

As always, music21 has been supported by the Seaver Institute, the NEH Digging into Data grant, and MIT Music and Theater Arts/SHASS.



How can I contribute?

Music21 is a rapidly progressing project, but it is always looking for researchers interested in contributing code, questions, freely distributable pieces, bug fixes, or documentation. Please contact Michael Scott Cuthbert (cuthbert at mit.edu), Principal Investigator.

The development of music21 has been supported by the School of Humanities, Arts, and Social Sciences at M.I.T., the Music and Theater Arts section, and generous grants from the Seaver Institute and the NEH/Digging-Into-Data Challenge. Further donations to the project are always welcome.