music21.analysis.patel.melodicIntervalVariability(streamForAnalysis, *skipArgs, **skipKeywords)

Gives the Melodic Interval Variability (MIV) for a Stream, as defined by Aniruddh D. Patel in “Music, Language, and the Brain,” p. 223: 100 times the coefficient of variation (standard deviation divided by mean) of the interval sizes (measured in semitones) between consecutive elements.

The factor of 100 is designed to put the value in the same range as nPVI.

This method takes the same skipArgs and skipKeywords arguments as Stream.melodicIntervals() for determining how consecutive intervals are found.
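As an illustration of the formula only (not music21's internal code), MIV can be computed by hand from a list of semitone interval sizes. Whether the standard deviation uses Bessel's correction is an assumption here:

```python
import statistics

def miv(semitone_intervals):
    """Illustrative MIV: 100 * (stdev / mean) of absolute interval sizes.
    Uses the sample standard deviation (Bessel's correction) -- an
    assumption, not necessarily what music21 does internally."""
    sizes = [abs(i) for i in semitone_intervals]
    return 100 * statistics.stdev(sizes) / statistics.mean(sizes)

# C4 D E F G C in tinynotation: consecutive intervals in semitones
# (four seconds up, then a fifth down)
print(miv([2, 2, 1, 2, -7]))
```

Because MIV is a coefficient of variation, a melody moving in uniform steps (all intervals equal) always yields 0.0.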

>>> s2 = converter.parse('tinynotation: 4/4 C4 D E F# G#').flatten()
>>> analysis.patel.melodicIntervalVariability(s2)
0.0
>>> s3 = converter.parse('tinynotation: 4/4 C4 D E F G C').flatten()
>>> analysis.patel.melodicIntervalVariability(s3)
85.266...
>>> s4 = corpus.parse('bwv66.6').parts[0].flatten()
>>> analysis.patel.melodicIntervalVariability(s4)

music21.analysis.patel.nPVI(streamForAnalysis, *skipArgs, **skipKeywords)

Gives the normalized pairwise variability index (nPVI; Low, Grabe, & Nolan, 2000) of the rhythm of a Stream.

Used by Aniruddh D. Patel to argue for national differences between musical themes. First encountered in a presentation by Patel, Chew, Francois, and Child at MIT.

n.b. – the calculation takes the distance between every element, including clefs, key signatures, etc. Use .notesAndRests or a similar filter to remove elements that are not useful (note that zero-length objects are skipped).

n.b. – duration is used rather than actual distance between elements; for gapless streams (the norm) the two measures are identical.
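The nPVI measure itself (Low, Grabe, & Nolan, 2000) can be sketched directly on a list of durations. This is an illustration of the formula, not music21's internal implementation:

```python
def npvi(durations):
    """Illustrative nPVI: 100/(m-1) times the sum, over successive
    duration pairs, of |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)."""
    pairs = zip(durations, durations[1:])
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

# alternating quarter and eighth notes, as in 'C4 D8 C4 D8 C4'
print(npvi([1.0, 0.5, 1.0, 0.5, 1.0]))  # roughly 66.67
```

A perfectly even rhythm gives 0.0; the more contrast between adjacent durations, the higher the index.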

>>> s2 = converter.parse('tinynotation: 4/4 C4 D E F G').flatten()
>>> analysis.patel.nPVI(s2)
0.0
>>> s3 = converter.parse('tinynotation: 4/4 C4 D8 C4 D8 C4').flatten()
>>> analysis.patel.nPVI(s3)
66.6666...
>>> s4 = corpus.parse('bwv66.6').parts[0].flatten()
>>> analysis.patel.nPVI(s4)