Tools for segmenting – that is, dividing up a score into small, possibly overlapping sections – for searching across pieces for similarity.
This module is definitely a case where running PyPy rather than CPython will give you a 3-5x speedup.
If you really want to do lots of comparisons, the scoreSimilarity method will use pyLevenshtein if it is installed from http://code.google.com/p/pylevenshtein/ . You will need to compile it by running sudo python setup.py install on Mac or Unix (compilation is much more difficult on Windows; sorry). The ratios it returns are very slightly different from difflib's, but the speedup is between 10x and 100x!
Returns either a difflib.SequenceMatcher or a pyLevenshtein StringMatcher.StringMatcher object, depending on what is installed.
If forceDifflib is True, then difflib is used even if pyLevenshtein is installed.
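The selection logic described above can be sketched as a try/except import fallback. This is a hedged sketch, not the actual music21 source: the function name and argument handling are assumptions, and only the "prefer the C-backed Levenshtein matcher, fall back to difflib" behavior comes from the text.

```python
import difflib


def getDifflibOrPyLev(seq2=None, junk=None, forceDifflib=False):
    # Sketch (names and body are assumptions): prefer the C-backed
    # Levenshtein StringMatcher when it is importable; otherwise,
    # or when forceDifflib is True, fall back to difflib.
    if not forceDifflib:
        try:
            from Levenshtein.StringMatcher import StringMatcher
            return StringMatcher(junk, '', seq2 or '')
        except ImportError:
            pass
    return difflib.SequenceMatcher(junk, '', seq2 or '')
```

Both objects expose the same `ratio()` interface, which is what lets the rest of the module stay agnostic about which one it received.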
Returns a dictionary of the lists from indexScoreParts for each score in scoreFilePaths.
>>> searchResults = corpus.search('bwv19')
>>> fpsNamesOnly = sorted([searchResult.sourcePath
...                        for searchResult in searchResults])
>>> len(fpsNamesOnly)
9

>>> scoreDict = search.segment.indexScoreFilePaths(fpsNamesOnly[2:5])
>>> len(scoreDict['bwv190.7.mxl'])
4

>>> scoreDict['bwv190.7.mxl'][0]['measureList']
[0, 5, 11, 17, 22, 27]

>>> scoreDict['bwv190.7.mxl'][0]['segmentList'][0]
'NNJLNOLLLJJIJLLLLNJJJIJLLJNNJL'
Creates segment and measure lists for each part of a score. Returns a list of dictionaries of segment and measure lists.
>>> bach = corpus.parse('bwv66.6')
>>> scoreList = search.segment.indexScoreParts(bach)
>>> scoreList[0]['segmentList'][0]
'@B@@@@ED@DBDA=BB@?==B@@EBBDBBA'
>>> scoreList[0]['measureList'][0:3]
[0, 4, 8]
Load the scoreDictionary from filePath.
Save the score dict from indexScoreFilePaths as a .json file for quick reloading.
Returns the filepath (assumes you'll probably be using a temporary file).
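A round trip through save and load can be sketched with the stdlib. This is an assumption-laden sketch, not the verbatim music21 implementation – only the behaviors stated above (JSON output, a temporary file by default, the filepath returned) are taken from the text.

```python
import json
import os
import tempfile


def saveScoreDict(scoreDict, filePath=None):
    # Sketch: serialize the segment/measure index to JSON; when no
    # path is given, write to a fresh temporary file (an assumption
    # consistent with the "temporary file" note above).
    if filePath is None:
        fd, filePath = tempfile.mkstemp(suffix='.json')
        os.close(fd)
    with open(filePath, 'w', encoding='utf-8') as f:
        json.dump(scoreDict, f)
    return filePath


def loadScoreDict(filePath):
    # Inverse of saveScoreDict: read the JSON index back into a dict.
    with open(filePath, encoding='utf-8') as f:
        return json.load(f)
```

Because the index is plain lists and strings, JSON round-trips it losslessly, which is what makes "index once, reload quickly" workable.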
Find the level of similarity between each pair of segments in a scoreDict.
This takes twice as long as it should because it does not cache the pairwise similarity.
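The missing cache mentioned above can be sketched with a memo keyed on the unordered pair of segment strings – identical segments recur often across parts, so repeated pairs hit the cache instead of re-running the matcher. The function name and shape are assumptions for illustration, not the music21 code.

```python
import difflib
from itertools import combinations


def segmentSimilarityCached(segments):
    # Sketch of the caching the note above says is missing: memoize
    # the similarity ratio per unordered pair of segment strings.
    cache = {}
    out = []
    for segA, segB in combinations(segments, 2):
        key = (segA, segB) if segA <= segB else (segB, segA)
        if key not in cache:
            cache[key] = difflib.SequenceMatcher(None, segA, segB).ratio()
        out.append((segA, segB, cache[key]))
    return out
```

Since similarity is symmetric, ordering the key means (a, b) and (b, a) share one cache entry – roughly the factor-of-two saving the note alludes to.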
>>> filePaths = []
>>> filePaths.append(corpus.search('bwv197.5.mxl').sourcePath)
>>> filePaths.append(corpus.search('bwv190.7.mxl').sourcePath)
>>> filePaths.append(corpus.search('bwv197.10.mxl').sourcePath)
>>> scoreDict = search.segment.indexScoreFilePaths(filePaths)
>>> scoreSim = search.segment.scoreSimilarity(scoreDict)
>>> len(scoreSim)
671
Returns a tuple of first score name, first score voice number, first score measure number, second score name, second score voice number, second score measure number, and similarity score (0 to 1).
>>> for result in scoreSim[64:68]:
...     result
(...'bwv197.5.mxl', 0, 1, 4, ...'bwv197.10.mxl', 3, 1, 4, 0.0)
(...'bwv197.5.mxl', 0, 1, 4, ...'bwv197.10.mxl', 3, 2, 9, 0.0)
(...'bwv197.5.mxl', 0, 2, 9, ...'bwv190.7.mxl', 0, 0, 0, 0.07547...)
(...'bwv197.5.mxl', 0, 2, 9, ...'bwv190.7.mxl', 0, 1, 5, 0.07547...)
Translates a monophonic part with measures into a set of segments of length segmentLengths, overlapping by overlap characters, using the function given by algorithm. Returns two lists: a list of segments and a list of measure numbers that match the segments.
If algorithm is None, then music21.search.translateStreamToStringNoRhythm is used by default.
>>> from music21 import *
>>> luca = corpus.parse('luca/gloria')
>>> lucaCantus = luca.parts[0]
>>> segments, measureLists = search.segment.translateMonophonicPartToSegments(lucaCantus)
>>> segments[0:2]
['HJHEAAEHHCE@JHGECA@A>@A><A@AAE', '@A>@A><A@AAEEECGHJHGH@CAE@FECA']
>>> measureLists[0:3] [1, 7, 14]
>>> segments, measureLists = search.segment.translateMonophonicPartToSegments(
...     lucaCantus,
...     algorithm=search.translateDiatonicStreamToString)
>>> segments[0:2]
['CRJOMTHCQNALRQPAGFEFDLFDCFEMOO', 'EFDLFDCFEMOOONPJDCBJSNTHLBOGFE']
>>> measureLists[0:3] [1, 7, 14]
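The slicing step behind these doctests can be sketched on a plain string. The 30-character windows and 12-character overlap are inferred from the output above (the second segment repeats the last 12 characters of the first); the function name and boundary handling are assumptions, not the music21 implementation, and the raw start indices stand in for the measure-number lookup the real function performs.

```python
def segmentString(translated, segmentLength=30, overlap=12):
    # Sketch: slice a translated note-string into overlapping windows.
    # Consecutive windows share `overlap` characters, so the hop size
    # between window starts is segmentLength - overlap.
    hop = segmentLength - overlap
    starts = list(range(0, len(translated), hop))
    segments = [translated[s:s + segmentLength] for s in starts]
    return segments, starts
```

With the defaults, each new segment begins 18 characters after the previous one, so any melodic fragment shorter than the overlap is guaranteed to appear whole in at least one segment.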