music21.features.jSymbolic

The features implemented here are based on those found in jSymbolic and defined in Cory McKay’s MA Thesis, “Automatic Genre Classification of MIDI Recordings”.
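
Every extractor on this page follows the same usage pattern; a minimal sketch of that pattern, using only calls shown in the examples below (any of the documented classes can be substituted):

from music21 import corpus, features

s = corpus.parse('bwv66.6')
fe = features.jSymbolic.MostCommonPitchClassFeature(s)  # or any extractor below
f = fe.extract()
print(f.name, f.vector)

# the same extractor instance can be pointed at different data
fe.setData(corpus.parse('handel/rinaldo/lascia_chio_pianga'))
print(fe.extract().vector)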

AcousticGuitarFractionFeature

class music21.features.jSymbolic.AcousticGuitarFractionFeature(dataOrStream=None, **keywords)

A feature extractor that extracts the fraction of all Note Ons belonging to acoustic guitar patches (General MIDI patches 25 and 26).

>>> s1 = stream.Stream()
>>> s1.append(instrument.AcousticGuitar())
>>> s1.repeatAppend(note.Note(), 3)
>>> s1.append(instrument.Tuba())
>>> s1.append(note.Note())
>>> fe = features.jSymbolic.AcousticGuitarFractionFeature(s1)
>>> fe.extract().vector
[0.75]

AcousticGuitarFractionFeature bases

AcousticGuitarFractionFeature methods

Methods inherited from InstrumentFractionFeature:

Methods inherited from FeatureExtractor:

AmountOfArpeggiationFeature

class music21.features.jSymbolic.AmountOfArpeggiationFeature(dataOrStream=None, **keywords)

Fraction of horizontal intervals that are repeated notes, minor thirds, major thirds, perfect fifths, minor sevenths, major sevenths, octaves, minor tenths or major tenths.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.AmountOfArpeggiationFeature(s)
>>> f = fe.extract()
>>> f.name
'Amount of Arpeggiation'
>>> f.vector
[0.333...]
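
The same fraction can be approximated by hand; a rough sketch, assuming Stream.melodicIntervals() on each flattened part (illustrative only, not the extractor's own implementation):

from music21 import corpus

# semitone sizes of unisons, thirds, perfect fifths, sevenths, octaves, and tenths
ARPEGGIO_SEMITONES = {0, 3, 4, 7, 10, 11, 12, 15, 16}

s = corpus.parse('bwv66.6')
semitones = []
for p in s.parts:
    semitones += [abs(i.semitones) for i in p.flatten().melodicIntervals()]
fraction = sum(1 for st in semitones if st in ARPEGGIO_SEMITONES) / len(semitones)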

AmountOfArpeggiationFeature bases

AmountOfArpeggiationFeature methods

AmountOfArpeggiationFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

AverageMelodicIntervalFeature

class music21.features.jSymbolic.AverageMelodicIntervalFeature(dataOrStream=None, **keywords)

Average melodic interval (in semitones).

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.AverageMelodicIntervalFeature(s)
>>> f = fe.extract()
>>> f.vector
[2.44...]

AverageMelodicIntervalFeature bases

AverageMelodicIntervalFeature methods

AverageMelodicIntervalFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

AverageNoteDurationFeature

class music21.features.jSymbolic.AverageNoteDurationFeature(dataOrStream=None, **keywords)

Average duration of notes in seconds.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.AverageNoteDurationFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.552...]
>>> s.insert(0, tempo.MetronomeMark(number=240))
>>> fe = features.jSymbolic.AverageNoteDurationFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.220858...]

AverageNoteDurationFeature bases

AverageNoteDurationFeature methods

AverageNoteDurationFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

AverageNoteToNoteDynamicsChangeFeature

class music21.features.jSymbolic.AverageNoteToNoteDynamicsChangeFeature(dataOrStream=None, **keywords)

Not implemented

Average change of loudness from one note to the next note in the same channel (in MIDI velocity units).

TODO: implement
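
Purely to illustrate the definition (this extractor is not implemented), a rough sketch of the idea in terms of per-part note velocities, assuming velocities are actually set on the notes:

from music21 import corpus

s = corpus.parse('bwv66.6')
changes = []
for p in s.parts:
    velocities = [n.volume.velocity for n in p.flatten().notes
                  if n.volume.velocity is not None]
    changes += [abs(b - a) for a, b in zip(velocities, velocities[1:])]
average_change = sum(changes) / len(changes) if changes else 0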

AverageNoteToNoteDynamicsChangeFeature bases

AverageNoteToNoteDynamicsChangeFeature methods

Methods inherited from FeatureExtractor:

AverageNumberOfIndependentVoicesFeature

class music21.features.jSymbolic.AverageNumberOfIndependentVoicesFeature(dataOrStream=None, **keywords)

Average number of different channels in which notes have sounded simultaneously. Rests are not included in this calculation. Here, Parts are treated as voices.

>>> s = corpus.parse('handel/rinaldo/lascia_chio_pianga')
>>> fe = features.jSymbolic.AverageNumberOfIndependentVoicesFeature(s)
>>> f = fe.extract()
>>> f.vector
[1.528...]
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.AverageNumberOfIndependentVoicesFeature(s)
>>> f = fe.extract()
>>> f.vector
[3.90...]

AverageNumberOfIndependentVoicesFeature bases

AverageNumberOfIndependentVoicesFeature methods

AverageNumberOfIndependentVoicesFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

AverageRangeOfGlissandosFeature

class music21.features.jSymbolic.AverageRangeOfGlissandosFeature(dataOrStream=None, **keywords)

Not yet implemented in music21

Average range of MIDI Pitch Bends, where “range” is defined as the greatest value of the absolute difference between 64 and the second data byte of all MIDI Pitch Bend messages falling between the Note On and Note Off messages of any note.

AverageRangeOfGlissandosFeature bases

AverageRangeOfGlissandosFeature methods

AverageRangeOfGlissandosFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

AverageTimeBetweenAttacksFeature

class music21.features.jSymbolic.AverageTimeBetweenAttacksFeature(dataOrStream=None, **keywords)

Average time in seconds between Note On events (regardless of channel).

>>> s = corpus.parse('bwv66.6')
>>> for p in s.parts:
...     p.insert(0, tempo.MetronomeMark(number=120))
>>> fe = features.jSymbolic.AverageTimeBetweenAttacksFeature(s)
>>> f = fe.extract()
>>> print(f.vector)
[0.35]

AverageTimeBetweenAttacksFeature bases

AverageTimeBetweenAttacksFeature methods

AverageTimeBetweenAttacksFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

AverageTimeBetweenAttacksForEachVoiceFeature

class music21.features.jSymbolic.AverageTimeBetweenAttacksForEachVoiceFeature(dataOrStream=None, **keywords)

Average of average times in seconds between Note On events on individual channels that contain at least one note.

>>> s = corpus.parse('bwv66.6')
>>> for p in s.parts:
...     p.insert(0, tempo.MetronomeMark(number=120))
>>> fe = features.jSymbolic.AverageTimeBetweenAttacksForEachVoiceFeature(s)
>>> f = fe.extract()
>>> print(f.vector[0])
0.442...

AverageTimeBetweenAttacksForEachVoiceFeature bases

AverageTimeBetweenAttacksForEachVoiceFeature methods

AverageTimeBetweenAttacksForEachVoiceFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

AverageVariabilityOfTimeBetweenAttacksForEachVoiceFeature

class music21.features.jSymbolic.AverageVariabilityOfTimeBetweenAttacksForEachVoiceFeature(dataOrStream=None, **keywords)

Average standard deviation, in seconds, of time between Note On events on individual channels that contain at least one note.

>>> s = corpus.parse('bwv66.6')
>>> for p in s.parts:
...     p.insert(0, tempo.MetronomeMark(number=120))
>>> fe = features.jSymbolic.AverageVariabilityOfTimeBetweenAttacksForEachVoiceFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.177...]

AverageVariabilityOfTimeBetweenAttacksForEachVoiceFeature bases

AverageVariabilityOfTimeBetweenAttacksForEachVoiceFeature methods

AverageVariabilityOfTimeBetweenAttacksForEachVoiceFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

BasicPitchHistogramFeature

class music21.features.jSymbolic.BasicPitchHistogramFeature(dataOrStream=None, **keywords)

A feature extractor that finds a feature array with bins corresponding to the values of the basic pitch histogram.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.BasicPitchHistogramFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
 0.0, 0.0, 0.0, 0.006..., 0.0, 0.0, 0.006..., 0.006..., 0.030...,
 0.0, 0.036..., 0.012..., 0.0, 0.006..., 0.018..., 0.061..., 0.0,
 0.042..., 0.073..., 0.012..., 0.092..., 0.0, 0.116..., 0.061...,
 0.006..., 0.085..., 0.018..., 0.110..., 0.0, 0.042..., 0.055...,
 0.0, 0.049..., 0.0, 0.042..., 0.0, 0.0, 0.006..., 0.0, 0.0, 0.0,
 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
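
The underlying idea is a normalized 128-bin histogram of MIDI pitch numbers; a by-hand sketch (illustrative, not the extractor's implementation):

from music21 import corpus

s = corpus.parse('bwv66.6')
pitches = s.flatten().pitches
histogram = [0] * 128
for p in pitches:
    histogram[p.midi] += 1
normalized = [count / len(pitches) for count in histogram]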

BasicPitchHistogramFeature bases

BasicPitchHistogramFeature methods

BasicPitchHistogramFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

BeatHistogramFeature

class music21.features.jSymbolic.BeatHistogramFeature(dataOrStream=None, **keywords)

Not yet implemented

A feature extractor that finds a feature array with entries corresponding to the frequency values of each of the bins of the beat histogram (except the first 40 empty ones).

BeatHistogramFeature bases

BeatHistogramFeature methods

BeatHistogramFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

BrassFractionFeature

class music21.features.jSymbolic.BrassFractionFeature(dataOrStream=None, **keywords)

A feature extractor that extracts the fraction of all Note Ons belonging to brass patches (General MIDI patches 57 through 68).

TODO: Conflict in source: only does 57-62?

>>> s1 = stream.Stream()
>>> s1.append(instrument.SopranoSaxophone())
>>> s1.repeatAppend(note.Note(), 6)
>>> s1.append(instrument.Tuba())
>>> s1.repeatAppend(note.Note(), 4)
>>> fe = features.jSymbolic.BrassFractionFeature(s1)
>>> print(fe.extract().vector[0])
0.4

BrassFractionFeature bases

BrassFractionFeature methods

Methods inherited from InstrumentFractionFeature:

Methods inherited from FeatureExtractor:

ChangesOfMeterFeature

class music21.features.jSymbolic.ChangesOfMeterFeature(dataOrStream=None, **keywords)

Returns 1 if the time signature is changed one or more times during the recording.

>>> s1 = stream.Stream()
>>> s1.append(meter.TimeSignature('3/4'))
>>> fe = features.jSymbolic.ChangesOfMeterFeature(s1)
>>> fe.extract().vector
[0]
>>> s2 = stream.Stream()
>>> s2.append(meter.TimeSignature('3/4'))
>>> s2.append(meter.TimeSignature('4/4'))
>>> fe.setData(s2)  # change the data
>>> fe.extract().vector
[1]
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.ChangesOfMeterFeature(s)
>>> f = fe.extract()
>>> f.vector
[0]

ChangesOfMeterFeature bases

ChangesOfMeterFeature methods

ChangesOfMeterFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

ChromaticMotionFeature

class music21.features.jSymbolic.ChromaticMotionFeature(dataOrStream=None, **keywords)

Fraction of melodic intervals corresponding to a semitone.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.ChromaticMotionFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.220...]

ChromaticMotionFeature bases

ChromaticMotionFeature methods

ChromaticMotionFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

CombinedStrengthOfTwoStrongestRhythmicPulsesFeature

class music21.features.jSymbolic.CombinedStrengthOfTwoStrongestRhythmicPulsesFeature(dataOrStream=None, **keywords)

The sum of the frequencies of the two beat bins of the peaks with the highest frequencies.

>>> sch = corpus.parse('schoenberg/opus19', 2)
>>> for p in sch.parts:
...     p.insert(0, tempo.MetronomeMark('Langsam', 70))
>>> fe = features.jSymbolic.CombinedStrengthOfTwoStrongestRhythmicPulsesFeature(sch)
>>> fe.extract().vector[0]
0.975...

CombinedStrengthOfTwoStrongestRhythmicPulsesFeature bases

CombinedStrengthOfTwoStrongestRhythmicPulsesFeature methods

CombinedStrengthOfTwoStrongestRhythmicPulsesFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

CompoundOrSimpleMeterFeature

class music21.features.jSymbolic.CompoundOrSimpleMeterFeature(dataOrStream=None, **keywords)

Set to 1 if the initial meter is compound (numerator of time signature is greater than or equal to 6 and is evenly divisible by 3) and to 0 if it is simple (if the above condition is not fulfilled).

>>> s1 = stream.Stream()
>>> s1.append(meter.TimeSignature('3/4'))
>>> fe = features.jSymbolic.CompoundOrSimpleMeterFeature(s1)
>>> fe.extract().vector
[0]
>>> s2 = stream.Stream()
>>> s2.append(meter.TimeSignature('9/8'))
>>> fe.setData(s2)  # change the data
>>> fe.extract().vector
[1]
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.CompoundOrSimpleMeterFeature(s)
>>> f = fe.extract()
>>> f.vector
[0]
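
The compound/simple decision reduces to a check on the numerator of the first time signature; a minimal sketch of that check:

def is_compound(numerator):
    return 1 if numerator >= 6 and numerator % 3 == 0 else 0

is_compound(9)  # 1: 9/8 is compound
is_compound(3)  # 0: 3/4 is simple
is_compound(4)  # 0: 4/4 is simple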

CompoundOrSimpleMeterFeature bases

CompoundOrSimpleMeterFeature methods

CompoundOrSimpleMeterFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

DirectionOfMotionFeature

class music21.features.jSymbolic.DirectionOfMotionFeature(dataOrStream=None, **keywords)

Returns the fraction of melodic intervals that are rising rather than falling. Unisons are omitted.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.DirectionOfMotionFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.470...]

DirectionOfMotionFeature bases

DirectionOfMotionFeature methods

DirectionOfMotionFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

DistanceBetweenMostCommonMelodicIntervalsFeature

class music21.features.jSymbolic.DistanceBetweenMostCommonMelodicIntervalsFeature(dataOrStream=None, **keywords)
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.DistanceBetweenMostCommonMelodicIntervalsFeature(s)
>>> f = fe.extract()
>>> f.vector
[1]

DistanceBetweenMostCommonMelodicIntervalsFeature bases

DistanceBetweenMostCommonMelodicIntervalsFeature methods

DistanceBetweenMostCommonMelodicIntervalsFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

DominantSpreadFeature

class music21.features.jSymbolic.DominantSpreadFeature(dataOrStream=None, **keywords)

Not implemented

Largest number of consecutive pitch classes separated by perfect fifths that each account for at least 9% of the notes.

DominantSpreadFeature bases

DominantSpreadFeature methods

DominantSpreadFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

DurationFeature

class music21.features.jSymbolic.DurationFeature(dataOrStream=None, **keywords)

A feature extractor that extracts the duration of the piece in seconds.

>>> s = corpus.parse('bwv66.6')
>>> for p in s.parts:
...     p.insert(0, tempo.MetronomeMark(number=120))
>>> fe = features.jSymbolic.DurationFeature(s)
>>> f = fe.extract()
>>> f.vector[0]
18.0

DurationFeature bases

DurationFeature methods

DurationFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

DurationOfMelodicArcsFeature

class music21.features.jSymbolic.DurationOfMelodicArcsFeature(dataOrStream=None, **keywords)

Average number of notes that separate melodic peaks and troughs in any part. This is calculated as the total number of intervals (not counting unisons) divided by the number of times the melody changes direction.

Example: the line C D E D C D E C C has melodic intervals 2, 2, -2, -2, 2, 2, -4, 0 (in semitones). The direction (the +/- sign) changes three times, and there are seven non-unison (nonzero) intervals, so the duration of melodic arcs is 7/3 ≈ 2.333…
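
The arithmetic of the example can be reproduced directly; an illustrative sketch (not the extractor itself):

intervals = [2, 2, -2, -2, 2, 2, -4, 0]
nonzero = [i for i in intervals if i != 0]
direction_changes = sum(1 for a, b in zip(nonzero, nonzero[1:]) if (a > 0) != (b > 0))
duration_of_arcs = len(nonzero) / direction_changes  # 7 / 3 = 2.333...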

>>> s = converter.parse("tinyNotation: c' d' e' d' c' d' e'2 c'2 c'2")
>>> fe = features.jSymbolic.DurationOfMelodicArcsFeature(s)
>>> fe.extract().vector
[2.333...]
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.DurationOfMelodicArcsFeature(s)
>>> fe.extract().vector
[1.74...]

DurationOfMelodicArcsFeature bases

DurationOfMelodicArcsFeature methods

DurationOfMelodicArcsFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

ElectricGuitarFractionFeature

class music21.features.jSymbolic.ElectricGuitarFractionFeature(dataOrStream=None, **keywords)
>>> s1 = stream.Stream()
>>> s1.append(instrument.ElectricGuitar())
>>> s1.repeatAppend(note.Note(), 4)
>>> s1.append(instrument.Tuba())
>>> s1.repeatAppend(note.Note(), 4)
>>> fe = features.jSymbolic.ElectricGuitarFractionFeature(s1)
>>> fe.extract().vector
[0.5]

ElectricGuitarFractionFeature bases

ElectricGuitarFractionFeature methods

Methods inherited from InstrumentFractionFeature:

Methods inherited from FeatureExtractor:

ElectricInstrumentFractionFeature

class music21.features.jSymbolic.ElectricInstrumentFractionFeature(dataOrStream=None, **keywords)

Fraction of all Note Ons belonging to electric instrument patches (General MIDI patches 5, 6, 17, 19, 27 through 32, or 34 through 40).

>>> s1 = stream.Stream()
>>> s1.append(instrument.ElectricOrgan())
>>> s1.repeatAppend(note.Note(), 8)
>>> s1.append(instrument.Tuba())
>>> s1.repeatAppend(note.Note(), 2)
>>> fe = features.jSymbolic.ElectricInstrumentFractionFeature(s1)
>>> print(fe.extract().vector[0])
0.8

ElectricInstrumentFractionFeature bases

ElectricInstrumentFractionFeature methods

Methods inherited from InstrumentFractionFeature:

Methods inherited from FeatureExtractor:

FifthsPitchHistogramFeature

class music21.features.jSymbolic.FifthsPitchHistogramFeature(dataOrStream=None, **keywords)

A feature array with bins corresponding to the values of the 5ths pitch class histogram. Instead of the bins being arranged according to semitones – [C, C#, D, etc.] – they are arranged according to the circle of fifths: [C, G, D, A, E, B, F#, C#, G#, D#, A#, F]. Viewing such a histogram may draw attention to the prevalence of a tonal center, including the prevalence of dominant relationships in the piece.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.FifthsPitchHistogramFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.0, 0.0, 0.073..., 0.134..., 0.098..., 0.171..., 0.177..., 0.196...,
 0.085..., 0.006..., 0.018..., 0.036...]
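
The reordering from semitone order to circle-of-fifths order can be expressed as an index remapping; an illustrative sketch (not necessarily how the extractor is implemented):

def fifths_bin(pitch_class):
    # C (0) -> 0, G (7) -> 1, D (2) -> 2, A (9) -> 3, ..., F (5) -> 11
    return (7 * pitch_class) % 12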

FifthsPitchHistogramFeature bases

FifthsPitchHistogramFeature methods

FifthsPitchHistogramFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

GlissandoPrevalenceFeature

class music21.features.jSymbolic.GlissandoPrevalenceFeature(dataOrStream=None, **keywords)

Not yet implemented in music21

Number of Note Ons that have at least one MIDI Pitch Bend associated with them divided by total number of pitched Note Ons.

GlissandoPrevalenceFeature bases

GlissandoPrevalenceFeature methods

GlissandoPrevalenceFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

HarmonicityOfTwoStrongestRhythmicPulsesFeature

class music21.features.jSymbolic.HarmonicityOfTwoStrongestRhythmicPulsesFeature(dataOrStream=None, **keywords)

The bin label of the higher (in terms of bin label) of the two beat bins of the peaks with the highest frequency divided by the bin label of the lower.

>>> sch = corpus.parse('schoenberg/opus19', 2)
>>> for p in sch.parts:
...     p.insert(0, tempo.MetronomeMark('Langsam', 70))
>>> fe = features.jSymbolic.HarmonicityOfTwoStrongestRhythmicPulsesFeature(sch)
>>> f = fe.extract()
>>> f.vector[0]
2.0
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.HarmonicityOfTwoStrongestRhythmicPulsesFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.5]

HarmonicityOfTwoStrongestRhythmicPulsesFeature bases

HarmonicityOfTwoStrongestRhythmicPulsesFeature methods

HarmonicityOfTwoStrongestRhythmicPulsesFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

ImportanceOfBassRegisterFeature

class music21.features.jSymbolic.ImportanceOfBassRegisterFeature(dataOrStream=None, **keywords)

Fraction of Notes between MIDI pitches 0 and 54.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.ImportanceOfBassRegisterFeature(s)
>>> fe.extract().vector
[0.184...]

ImportanceOfBassRegisterFeature bases

ImportanceOfBassRegisterFeature methods

ImportanceOfBassRegisterFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

ImportanceOfHighRegisterFeature

class music21.features.jSymbolic.ImportanceOfHighRegisterFeature(dataOrStream=None, **keywords)

Fraction of Notes between MIDI pitches 73 and 127.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.ImportanceOfHighRegisterFeature(s)
>>> fe.extract().vector
[0.049...]

ImportanceOfHighRegisterFeature bases

ImportanceOfHighRegisterFeature methods

ImportanceOfHighRegisterFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

ImportanceOfLoudestVoiceFeature

class music21.features.jSymbolic.ImportanceOfLoudestVoiceFeature(dataOrStream=None, **keywords)

Not implemented

TODO: implement

ImportanceOfLoudestVoiceFeature bases

ImportanceOfLoudestVoiceFeature methods

Methods inherited from FeatureExtractor:

ImportanceOfMiddleRegisterFeature

class music21.features.jSymbolic.ImportanceOfMiddleRegisterFeature(dataOrStream=None, **keywords)

Fraction of Notes between MIDI pitches 55 and 72.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.ImportanceOfMiddleRegisterFeature(s)
>>> fe.extract().vector
[0.766...]

ImportanceOfMiddleRegisterFeature bases

ImportanceOfMiddleRegisterFeature methods

ImportanceOfMiddleRegisterFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

InitialTempoFeature

class music21.features.jSymbolic.InitialTempoFeature(dataOrStream=None, **keywords)

Tempo in beats per minute at the start of the recording.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.InitialTempoFeature(s)
>>> f = fe.extract()
>>> f.vector  # a default
[96.0]

InitialTempoFeature bases

InitialTempoFeature methods

InitialTempoFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

InitialTimeSignatureFeature

class music21.features.jSymbolic.InitialTimeSignatureFeature(dataOrStream=None, **keywords)

A feature array with two elements. The first is the numerator of the first occurring time signature and the second is the denominator of the first occurring time signature. Both are set to 0 if no time signature is present.

>>> s1 = stream.Stream()
>>> s1.append(meter.TimeSignature('3/4'))
>>> fe = features.jSymbolic.InitialTimeSignatureFeature(s1)
>>> fe.extract().vector
[3, 4]
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.InitialTimeSignatureFeature(s)
>>> f = fe.extract()
>>> f.vector
[4, 4]
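
A by-hand equivalent that reads the first time signature directly from the stream (illustrative, not necessarily the extractor's implementation):

from music21 import corpus, meter

s = corpus.parse('bwv66.6')
ts = s.recurse().getElementsByClass(meter.TimeSignature).first()
vector = [ts.numerator, ts.denominator] if ts is not None else [0, 0]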

InitialTimeSignatureFeature bases

InitialTimeSignatureFeature methods

InitialTimeSignatureFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

InstrumentFractionFeature

class music21.features.jSymbolic.InstrumentFractionFeature(dataOrStream=None, **keywords)

TODO: Add description of feature

This class is, in turn, subclassed by all FeatureExtractors that look at the proportional usage of an Instrument.

InstrumentFractionFeature bases

InstrumentFractionFeature methods

InstrumentFractionFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

IntervalBetweenStrongestPitchClassesFeature

class music21.features.jSymbolic.IntervalBetweenStrongestPitchClassesFeature(dataOrStream=None, **keywords)
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.IntervalBetweenStrongestPitchClassesFeature(s)
>>> fe.extract().vector
[5]

IntervalBetweenStrongestPitchClassesFeature bases

IntervalBetweenStrongestPitchClassesFeature methods

IntervalBetweenStrongestPitchClassesFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

IntervalBetweenStrongestPitchesFeature

class music21.features.jSymbolic.IntervalBetweenStrongestPitchesFeature(dataOrStream=None, **keywords)

Absolute value of the difference between the pitches of the two most common MIDI pitches.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.IntervalBetweenStrongestPitchesFeature(s)
>>> fe.extract().vector
[5]

IntervalBetweenStrongestPitchesFeature bases

IntervalBetweenStrongestPitchesFeature methods

IntervalBetweenStrongestPitchesFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

MaximumNoteDurationFeature

class music21.features.jSymbolic.MaximumNoteDurationFeature(dataOrStream=None, **keywords)

Duration of the longest note (in seconds).

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MaximumNoteDurationFeature(s)
>>> f = fe.extract()
>>> f.vector
[1.25]

MaximumNoteDurationFeature bases

MaximumNoteDurationFeature methods

MaximumNoteDurationFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

MaximumNumberOfIndependentVoicesFeature

class music21.features.jSymbolic.MaximumNumberOfIndependentVoicesFeature(dataOrStream=None, **keywords)

Maximum number of different channels in which notes have sounded simultaneously. Here, Parts are treated as channels.

>>> s = corpus.parse('handel/rinaldo/lascia_chio_pianga')
>>> fe = features.jSymbolic.MaximumNumberOfIndependentVoicesFeature(s)
>>> f = fe.extract()
>>> f.vector
[3]
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MaximumNumberOfIndependentVoicesFeature(s)
>>> f = fe.extract()
>>> f.vector
[4]

MaximumNumberOfIndependentVoicesFeature bases

MaximumNumberOfIndependentVoicesFeature methods

MaximumNumberOfIndependentVoicesFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

MelodicFifthsFeature

class music21.features.jSymbolic.MelodicFifthsFeature(dataOrStream=None, **keywords)

Fraction of melodic intervals that are perfect fifths.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MelodicFifthsFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.056...]

MelodicFifthsFeature bases

MelodicFifthsFeature methods

MelodicFifthsFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

MelodicIntervalHistogramFeature

class music21.features.jSymbolic.MelodicIntervalHistogramFeature(dataOrStream=None, **keywords)

A feature array with bins corresponding to the values of the melodic interval histogram.

128 dimensions

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MelodicIntervalHistogramFeature(s)
>>> f = fe.extract()
>>> f.vector[0:5]
[0.144..., 0.220..., 0.364..., 0.062..., 0.050...]

MelodicIntervalHistogramFeature bases

MelodicIntervalHistogramFeature methods

MelodicIntervalHistogramFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

MelodicIntervalsInLowestLineFeature

class music21.features.jSymbolic.MelodicIntervalsInLowestLineFeature(dataOrStream=None, **keywords)

Not implemented

TODO: implement

MelodicIntervalsInLowestLineFeature bases

MelodicIntervalsInLowestLineFeature methods

Methods inherited from FeatureExtractor:

MelodicOctavesFeature

class music21.features.jSymbolic.MelodicOctavesFeature(dataOrStream=None, **keywords)

Fraction of melodic intervals that are octaves.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MelodicOctavesFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.018...]

MelodicOctavesFeature bases

MelodicOctavesFeature methods

MelodicOctavesFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

MelodicThirdsFeature

class music21.features.jSymbolic.MelodicThirdsFeature(dataOrStream=None, **keywords)

Fraction of melodic intervals that are major or minor thirds.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MelodicThirdsFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.113...]

MelodicThirdsFeature bases

MelodicThirdsFeature methods

MelodicThirdsFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

MelodicTritonesFeature

class music21.features.jSymbolic.MelodicTritonesFeature(dataOrStream=None, **keywords)

Fraction of melodic intervals that are tritones.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MelodicTritonesFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.012...]

MelodicTritonesFeature bases

MelodicTritonesFeature methods

MelodicTritonesFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

MinimumNoteDurationFeature

class music21.features.jSymbolic.MinimumNoteDurationFeature(dataOrStream=None, **keywords)

Duration of the shortest note (in seconds).

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MinimumNoteDurationFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.3125]

MinimumNoteDurationFeature bases

MinimumNoteDurationFeature methods

MinimumNoteDurationFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

MostCommonMelodicIntervalFeature

class music21.features.jSymbolic.MostCommonMelodicIntervalFeature(dataOrStream=None, **keywords)
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MostCommonMelodicIntervalFeature(s)
>>> f = fe.extract()
>>> f.vector
[2]

MostCommonMelodicIntervalFeature bases

MostCommonMelodicIntervalFeature methods

MostCommonMelodicIntervalFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

MostCommonMelodicIntervalPrevalenceFeature

class music21.features.jSymbolic.MostCommonMelodicIntervalPrevalenceFeature(dataOrStream=None, **keywords)

Fraction of melodic intervals that belong to the most common interval.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MostCommonMelodicIntervalPrevalenceFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.364...]

MostCommonMelodicIntervalPrevalenceFeature bases

MostCommonMelodicIntervalPrevalenceFeature methods

MostCommonMelodicIntervalPrevalenceFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

MostCommonPitchClassFeature

class music21.features.jSymbolic.MostCommonPitchClassFeature(dataOrStream=None, **keywords)

Bin label of the most common pitch class.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MostCommonPitchClassFeature(s)
>>> fe.extract().vector
[1]

MostCommonPitchClassFeature bases

MostCommonPitchClassFeature methods

MostCommonPitchClassFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

MostCommonPitchClassPrevalenceFeature

class music21.features.jSymbolic.MostCommonPitchClassPrevalenceFeature(dataOrStream=None, **keywords)

Fraction of Notes corresponding to the most common pitch class.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MostCommonPitchClassPrevalenceFeature(s)
>>> fe.extract().vector
[0.196...]

MostCommonPitchClassPrevalenceFeature bases

MostCommonPitchClassPrevalenceFeature methods

MostCommonPitchClassPrevalenceFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

MostCommonPitchFeature

class music21.features.jSymbolic.MostCommonPitchFeature(dataOrStream=None, **keywords)

Bin label of the most common pitch.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MostCommonPitchFeature(s)
>>> fe.extract().vector
[61]

MostCommonPitchFeature bases

MostCommonPitchFeature methods

MostCommonPitchFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

MostCommonPitchPrevalenceFeature

class music21.features.jSymbolic.MostCommonPitchPrevalenceFeature(dataOrStream=None, **keywords)

Fraction of Notes corresponding to the most common pitch.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.MostCommonPitchPrevalenceFeature(s)
>>> fe.extract().vector[0]
0.116...

MostCommonPitchPrevalenceFeature bases

MostCommonPitchPrevalenceFeature methods

MostCommonPitchPrevalenceFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

NoteDensityFeature

class music21.features.jSymbolic.NoteDensityFeature(dataOrStream=None, **keywords)

Gives the average number of notes per second, taking into account the tempo at any moment in the piece. Unlike jSymbolic, music21 quantizes notes from MIDI somewhat before running this test; this function is meant to be run on encoded MIDI scores rather than recorded MIDI performances.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.NoteDensityFeature(s)
>>> f = fe.extract()
>>> f.vector
[7.244...]

NoteDensityFeature bases

NoteDensityFeature methods

NoteDensityFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

NotePrevalenceOfPitchedInstrumentsFeature

class music21.features.jSymbolic.NotePrevalenceOfPitchedInstrumentsFeature(dataOrStream=None, **keywords)
>>> s1 = stream.Stream()
>>> s1.append(instrument.AcousticGuitar())
>>> s1.repeatAppend(note.Note(), 4)
>>> s1.append(instrument.Tuba())
>>> s1.append(note.Note())
>>> fe = features.jSymbolic.NotePrevalenceOfPitchedInstrumentsFeature(s1)
>>> fe.extract().vector
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 0.8..., 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.2...,
 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

.midiProgram cannot be None:

>>> s1.getInstruments().first().midiProgram = None
>>> fe2 = features.jSymbolic.NotePrevalenceOfPitchedInstrumentsFeature(s1)
>>> fe2.extract()
Traceback (most recent call last):
music21.features.jSymbolic.JSymbolicFeatureException: Acoustic Guitar lacks a midiProgram

NotePrevalenceOfPitchedInstrumentsFeature bases

NotePrevalenceOfPitchedInstrumentsFeature methods

NotePrevalenceOfPitchedInstrumentsFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

NotePrevalenceOfUnpitchedInstrumentsFeature

class music21.features.jSymbolic.NotePrevalenceOfUnpitchedInstrumentsFeature(dataOrStream=None, **keywords)

Not implemented

TODO: implement

NotePrevalenceOfUnpitchedInstrumentsFeature bases

NotePrevalenceOfUnpitchedInstrumentsFeature methods

Methods inherited from FeatureExtractor:

NumberOfCommonMelodicIntervalsFeature

class music21.features.jSymbolic.NumberOfCommonMelodicIntervalsFeature(dataOrStream=None, **keywords)

Number of melodic intervals that represent at least 9% of all melodic intervals.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.NumberOfCommonMelodicIntervalsFeature(s)
>>> f = fe.extract()
>>> f.vector
[3]

NumberOfCommonMelodicIntervalsFeature bases

NumberOfCommonMelodicIntervalsFeature methods

NumberOfCommonMelodicIntervalsFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

NumberOfCommonPitchesFeature

class music21.features.jSymbolic.NumberOfCommonPitchesFeature(dataOrStream=None, **keywords)

Number of pitches that account individually for at least 9% of all notes.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.NumberOfCommonPitchesFeature(s)
>>> fe.extract().vector
[3]
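
The 9% threshold can be checked by hand; an illustrative sketch (not the extractor's implementation):

from collections import Counter
from music21 import corpus

s = corpus.parse('bwv66.6')
midi_values = [p.midi for p in s.flatten().pitches]
counts = Counter(midi_values)
common_pitches = sum(1 for c in counts.values() if c / len(midi_values) >= 0.09)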

NumberOfCommonPitchesFeature bases

NumberOfCommonPitchesFeature methods

NumberOfCommonPitchesFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

NumberOfModeratePulsesFeature

class music21.features.jSymbolic.NumberOfModeratePulsesFeature(dataOrStream=None, **keywords)

Not yet implemented

Number of beat peaks with normalized frequencies over 0.01.

NumberOfModeratePulsesFeature bases

NumberOfModeratePulsesFeature methods

NumberOfModeratePulsesFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

NumberOfPitchedInstrumentsFeature

class music21.features.jSymbolic.NumberOfPitchedInstrumentsFeature(dataOrStream=None, **keywords)

Total number of General MIDI patches that are used to play at least one note.

>>> s1 = stream.Stream()
>>> s1.append(instrument.AcousticGuitar())
>>> s1.append(note.Note())
>>> s1.append(instrument.Tuba())
>>> s1.append(note.Note())
>>> fe = features.jSymbolic.NumberOfPitchedInstrumentsFeature(s1)
>>> fe.extract().vector
[2]

NumberOfPitchedInstrumentsFeature bases

NumberOfPitchedInstrumentsFeature methods

NumberOfPitchedInstrumentsFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

NumberOfRelativelyStrongPulsesFeature

class music21.features.jSymbolic.NumberOfRelativelyStrongPulsesFeature(dataOrStream=None, **keywords)

Not yet implemented

Number of beat peaks with frequencies at least 30% as high as the frequency of the bin with the highest frequency.

NumberOfRelativelyStrongPulsesFeature bases

NumberOfRelativelyStrongPulsesFeature methods

Methods inherited from FeatureExtractor:

NumberOfStrongPulsesFeature

class music21.features.jSymbolic.NumberOfStrongPulsesFeature(dataOrStream=None, **keywords)

Not yet implemented

Number of beat peaks with normalized frequencies over 0.1.

NumberOfStrongPulsesFeature bases

NumberOfStrongPulsesFeature methods

NumberOfStrongPulsesFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

NumberOfUnpitchedInstrumentsFeature

class music21.features.jSymbolic.NumberOfUnpitchedInstrumentsFeature(dataOrStream=None, **keywords)

Not implemented

Number of distinct MIDI Percussion Key Map patches that were used to play at least one note. Note that only instruments 35 to 81 are included here, as these are the ones defined in the official standard.

TODO: implement

NumberOfUnpitchedInstrumentsFeature bases

NumberOfUnpitchedInstrumentsFeature methods

Methods inherited from FeatureExtractor:

OrchestralStringsFractionFeature

class music21.features.jSymbolic.OrchestralStringsFractionFeature(dataOrStream=None, **keywords)

Fraction of all Note Ons belonging to orchestral strings patches (General MIDI patches 41 to 47).

>>> s1 = stream.Stream()
>>> s1.append(instrument.Violoncello())
>>> s1.repeatAppend(note.Note(), 4)
>>> s1.append(instrument.Tuba())
>>> s1.repeatAppend(note.Note(), 6)
>>> fe = features.jSymbolic.OrchestralStringsFractionFeature(s1)
>>> print(fe.extract().vector[0])
0.4

OrchestralStringsFractionFeature bases

OrchestralStringsFractionFeature methods

Methods inherited from InstrumentFractionFeature:

Methods inherited from FeatureExtractor:

OverallDynamicRangeFeature

class music21.features.jSymbolic.OverallDynamicRangeFeature(dataOrStream=None, **keywords)

Not implemented

The maximum loudness minus the minimum loudness value.

TODO: implement

OverallDynamicRangeFeature bases

OverallDynamicRangeFeature methods

Methods inherited from FeatureExtractor:

PercussionPrevalenceFeature

class music21.features.jSymbolic.PercussionPrevalenceFeature(dataOrStream=None, **keywords)

Not implemented

Total number of Note Ons corresponding to unpitched percussion instruments divided by the total number of Note Ons in the recording.

PercussionPrevalenceFeature bases

PercussionPrevalenceFeature methods

Methods inherited from FeatureExtractor:

PitchClassDistributionFeature

class music21.features.jSymbolic.PitchClassDistributionFeature(dataOrStream=None, **keywords)

A feature array with 12 entries: the first holds the frequency of the bin of the pitch class histogram with the highest frequency, and the following entries hold the successive bins of the histogram, wrapping around if necessary.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.PitchClassDistributionFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.196..., 0.073..., 0.006..., 0.098..., 0.036..., 0.177..., 0.0,
 0.085..., 0.134..., 0.018..., 0.171..., 0.0]
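
Conceptually, this is the pitch-class histogram rotated so that its most frequent bin comes first; a minimal sketch of that rotation:

def rotate_to_strongest(histogram):
    start = histogram.index(max(histogram))
    return histogram[start:] + histogram[:start]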

PitchClassDistributionFeature bases

PitchClassDistributionFeature methods

PitchClassDistributionFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

PitchClassVarietyFeature

class music21.features.jSymbolic.PitchClassVarietyFeature(dataOrStream=None, **keywords)

Number of pitch classes used at least once.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.PitchClassVarietyFeature(s)
>>> fe.extract().vector
[10]

PitchClassVarietyFeature bases

PitchClassVarietyFeature methods

PitchClassVarietyFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

PitchVarietyFeature

class music21.features.jSymbolic.PitchVarietyFeature(dataOrStream=None, **keywords)

Number of pitches used at least once.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.PitchVarietyFeature(s)
>>> fe.extract().vector
[24]

PitchVarietyFeature bases

PitchVarietyFeature methods

PitchVarietyFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

PitchedInstrumentsPresentFeature

class music21.features.jSymbolic.PitchedInstrumentsPresentFeature(dataOrStream=None, **keywords)

Which pitched General MIDI Instruments are present. There is one entry for each instrument, which is set to 1.0 if there is at least one Note On in the recording corresponding to the instrument and to 0.0 if there is not.

>>> s1 = stream.Stream()
>>> s1.append(instrument.AcousticGuitar())
>>> s1.append(note.Note())
>>> s1.append(instrument.Tuba())
>>> s1.append(note.Note())
>>> fe = features.jSymbolic.PitchedInstrumentsPresentFeature(s1)
>>> fe.extract().vector
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

Default instruments will lack a .midiProgram, so they raise exceptions:

>>> i = instrument.Instrument()
>>> i.midiProgram is None
True
>>> s2 = stream.Stream()
>>> s2.append(i)
>>> s2.append(note.Note())
>>> fe2 = features.jSymbolic.PitchedInstrumentsPresentFeature(s2)
>>> fe2.extract()
Traceback (most recent call last):
music21.features.jSymbolic.JSymbolicFeatureException:
<music21.instrument.Instrument ''> lacks a midiProgram

PitchedInstrumentsPresentFeature bases

PitchedInstrumentsPresentFeature methods

PitchedInstrumentsPresentFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

PolyrhythmsFeature

class music21.features.jSymbolic.PolyrhythmsFeature(dataOrStream=None, **keywords)

Not yet implemented

Number of beat peaks with frequencies at least 30% of the highest frequency whose bin labels are not integer multiples or factors (using only multipliers of 1, 2, 3, 4, 6 and 8) (with an accepted error of +/- 3 bins) of the bin label of the peak with the highest frequency. This number is then divided by the total number of beat bins with frequencies over 30% of the highest frequency.

PolyrhythmsFeature bases

PolyrhythmsFeature methods

PolyrhythmsFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

PrevalenceOfMicrotonesFeature

class music21.features.jSymbolic.PrevalenceOfMicrotonesFeature(dataOrStream=None, **keywords)

Not yet implemented

Number of Note Ons that are preceded by isolated MIDI Pitch Bend messages as a fraction of the total number of Note Ons.

PrevalenceOfMicrotonesFeature bases

PrevalenceOfMicrotonesFeature methods

PrevalenceOfMicrotonesFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

PrimaryRegisterFeature

class music21.features.jSymbolic.PrimaryRegisterFeature(dataOrStream=None, **keywords)

Average MIDI pitch.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.PrimaryRegisterFeature(s)
>>> fe.extract().vector
[61.12...]
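
The same quantity can be approximated by averaging MIDI pitch numbers directly; an illustrative sketch (not the extractor's implementation):

from music21 import corpus

s = corpus.parse('bwv66.6')
midi_values = [p.midi for p in s.flatten().pitches]
average_midi = sum(midi_values) / len(midi_values)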

PrimaryRegisterFeature bases

PrimaryRegisterFeature methods

PrimaryRegisterFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

QualityFeature

class music21.features.jSymbolic.QualityFeature(dataOrStream=None, **keywords)

Set to 0 if the key signature indicates that a recording is major, set to 1 if it indicates that it is minor. In jSymbolic, this is set to 0 if the key signature is unknown.

See features.native.QualityFeature for a music21 improvement on this method.

Example: Handel, Rinaldo Aria (musicxml) is explicitly encoded as being in Major:

>>> s = corpus.parse('handel/rinaldo/lascia_chio_pianga')
>>> fe = features.jSymbolic.QualityFeature(s)
>>> f = fe.extract()
>>> f.vector
[0]
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.QualityFeature(s)
>>> f = fe.extract()
>>> f.vector
[1]

QualityFeature bases

QualityFeature methods

QualityFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

QuintupleMeterFeature

class music21.features.jSymbolic.QuintupleMeterFeature(dataOrStream=None, **keywords)

Set to 1 if numerator of initial time signature is 5, set to 0 otherwise.

>>> s1 = stream.Stream()
>>> s1.append(meter.TimeSignature('5/4'))
>>> fe = features.jSymbolic.QuintupleMeterFeature(s1)
>>> fe.extract().vector
[1]
>>> s2 = stream.Stream()
>>> s2.append(meter.TimeSignature('3/4'))
>>> fe.setData(s2)  # change the data
>>> fe.extract().vector
[0]
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.QuintupleMeterFeature(s)
>>> f = fe.extract()
>>> f.vector
[0]

QuintupleMeterFeature bases

QuintupleMeterFeature methods

QuintupleMeterFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

RangeFeature

class music21.features.jSymbolic.RangeFeature(dataOrStream=None, **keywords)

Difference between the highest and lowest pitches, in semitones.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.RangeFeature(s)
>>> fe.extract().vector
[34]

RangeFeature bases

RangeFeature methods

RangeFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

RangeOfHighestLineFeature

class music21.features.jSymbolic.RangeOfHighestLineFeature(dataOrStream=None, **keywords)

Not implemented

TODO: implement

RangeOfHighestLineFeature bases

RangeOfHighestLineFeature methods

Methods inherited from FeatureExtractor:

RelativeNoteDensityOfHighestLineFeature

class music21.features.jSymbolic.RelativeNoteDensityOfHighestLineFeature(dataOrStream=None, **keywords)

Not implemented

TODO: implement

RelativeNoteDensityOfHighestLineFeature bases

RelativeNoteDensityOfHighestLineFeature methods

Methods inherited from FeatureExtractor:

RelativeRangeOfLoudestVoiceFeature

class music21.features.jSymbolic.RelativeRangeOfLoudestVoiceFeature(dataOrStream=None, **keywords)

Not implemented

TODO: implement

RelativeRangeOfLoudestVoiceFeature bases

RelativeRangeOfLoudestVoiceFeature methods

Methods inherited from FeatureExtractor:

RelativeStrengthOfMostCommonIntervalsFeature

class music21.features.jSymbolic.RelativeStrengthOfMostCommonIntervalsFeature(dataOrStream=None, **keywords)
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.RelativeStrengthOfMostCommonIntervalsFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.603...]

RelativeStrengthOfMostCommonIntervalsFeature bases

RelativeStrengthOfMostCommonIntervalsFeature methods

RelativeStrengthOfMostCommonIntervalsFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

RelativeStrengthOfTopPitchClassesFeature

class music21.features.jSymbolic.RelativeStrengthOfTopPitchClassesFeature(dataOrStream=None, **keywords)
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.RelativeStrengthOfTopPitchClassesFeature(s)
>>> fe.extract().vector
[0.906...]

RelativeStrengthOfTopPitchClassesFeature bases

RelativeStrengthOfTopPitchClassesFeature methods

RelativeStrengthOfTopPitchClassesFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

RelativeStrengthOfTopPitchesFeature

class music21.features.jSymbolic.RelativeStrengthOfTopPitchesFeature(dataOrStream=None, **keywords)

The frequency of the 2nd most common pitch divided by the frequency of the most common pitch.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.RelativeStrengthOfTopPitchesFeature(s)
>>> fe.extract().vector
[0.947...]

RelativeStrengthOfTopPitchesFeature bases

RelativeStrengthOfTopPitchesFeature methods

RelativeStrengthOfTopPitchesFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

RepeatedNotesFeature

class music21.features.jSymbolic.RepeatedNotesFeature(dataOrStream=None, **keywords)

Fraction of notes that are repeated melodically.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.RepeatedNotesFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.144...]

RepeatedNotesFeature bases

RepeatedNotesFeature methods

RepeatedNotesFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

RhythmicLoosenessFeature

class music21.features.jSymbolic.RhythmicLoosenessFeature(dataOrStream=None, **keywords)

Not yet implemented

Average width of beat histogram peaks (in beats per minute). Width is measured for all peaks with frequencies at least 30% as high as the highest peak, and is defined by the distance between the points on the peak in question that are 30% of the height of the peak.

RhythmicLoosenessFeature bases

RhythmicLoosenessFeature methods

RhythmicLoosenessFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

RhythmicVariabilityFeature

class music21.features.jSymbolic.RhythmicVariabilityFeature(dataOrStream=None, **keywords)

Not yet implemented

Standard deviation of the beat histogram bin values (except the first 40 empty ones).

RhythmicVariabilityFeature bases

RhythmicVariabilityFeature methods

RhythmicVariabilityFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

SaxophoneFractionFeature

class music21.features.jSymbolic.SaxophoneFractionFeature(dataOrStream=None, **keywords)

Fraction of all Note Ons belonging to saxophone patches (General MIDI patches 65 through 68). Note: this patch range may be incorrect relative to the jSymbolic source.

>>> s1 = stream.Stream()
>>> s1.append(instrument.SopranoSaxophone())
>>> s1.repeatAppend(note.Note(), 6)
>>> s1.append(instrument.Tuba())
>>> s1.repeatAppend(note.Note(), 4)
>>> fe = features.jSymbolic.SaxophoneFractionFeature(s1)
>>> print(fe.extract().vector[0])
0.6

SaxophoneFractionFeature bases

SaxophoneFractionFeature methods

Methods inherited from InstrumentFractionFeature:

Methods inherited from FeatureExtractor:

SecondStrongestRhythmicPulseFeature

class music21.features.jSymbolic.SecondStrongestRhythmicPulseFeature(dataOrStream=None, **keywords)

Bin label of the beat bin of the peak with the second-highest frequency.

>>> sch = corpus.parse('schoenberg/opus19', 2)
>>> for p in sch.parts:
...     p.insert(0, tempo.MetronomeMark('Langsam', 70))
>>> fe = features.jSymbolic.SecondStrongestRhythmicPulseFeature(sch)
>>> f = fe.extract()
>>> f.vector[0]
70
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.SecondStrongestRhythmicPulseFeature(s)
>>> f = fe.extract()
>>> f.vector
[192]

SecondStrongestRhythmicPulseFeature bases

SecondStrongestRhythmicPulseFeature methods

SecondStrongestRhythmicPulseFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

SizeOfMelodicArcsFeature

class music21.features.jSymbolic.SizeOfMelodicArcsFeature(dataOrStream=None, **keywords)

Average span (in semitones) between melodic peaks and troughs in any part. Each change of direction in the melody begins a new arc. The average size of melodic arcs is defined as the total size of the melodic intervals between changes of direction (or between the start of the melody and the first change of direction) divided by the number of direction changes.

Example: the line C D E D C D E C C has melodic intervals 2, 2, -2, -2, 2, 2, -4, 0 (in semitones). The direction changes three times. The total interval distance up to the last change of direction is 12; the last interval, the descending major third, is not counted because it does not lie between changes of direction. Thus, the average size of melodic arcs is 12/3 = 4.
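
The arithmetic of the example can be reproduced directly; an illustrative sketch (not the extractor itself):

intervals = [2, 2, -2, -2, 2, 2, -4, 0]
arc_sizes = []
current_arc = 0
previous_sign = 0
for i in intervals:
    if i == 0:
        continue  # unisons do not affect arcs
    sign = 1 if i > 0 else -1
    if previous_sign and sign != previous_sign:
        arc_sizes.append(current_arc)  # a change of direction closes an arc
        current_arc = 0
    current_arc += abs(i)
    previous_sign = sign
average_arc_size = sum(arc_sizes) / len(arc_sizes)  # (4 + 4 + 4) / 3 = 4.0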

>>> s = converter.parse("tinyNotation: c' d' e' d' c' d' e'2 c'2 c'2")
>>> fe = features.jSymbolic.SizeOfMelodicArcsFeature(s)
>>> fe.extract().vector
[4.0]
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.SizeOfMelodicArcsFeature(s)
>>> fe.extract().vector
[4.84...]

SizeOfMelodicArcsFeature bases

SizeOfMelodicArcsFeature methods

SizeOfMelodicArcsFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

StaccatoIncidenceFeature

class music21.features.jSymbolic.StaccatoIncidenceFeature(dataOrStream=None, **keywords)

Number of notes with durations of less than a 10th of a second divided by the total number of notes in the recording.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.StaccatoIncidenceFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.0]

StaccatoIncidenceFeature bases

StaccatoIncidenceFeature methods

StaccatoIncidenceFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

StepwiseMotionFeature

class music21.features.jSymbolic.StepwiseMotionFeature(dataOrStream=None, **keywords)

Fraction of melodic intervals that correspond to a minor or major second.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.StepwiseMotionFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.584...]

StepwiseMotionFeature bases

StepwiseMotionFeature methods

StepwiseMotionFeature.process()

Do processing necessary, storing result in feature.

Methods inherited from FeatureExtractor:

StrengthOfSecondStrongestRhythmicPulseFeature

class music21.features.jSymbolic.StrengthOfSecondStrongestRhythmicPulseFeature(dataOrStream=None, **keywords)

Frequency of the beat bin of the peak with the second-highest frequency.

>>> sch = corpus.parse('schoenberg/opus19', 2)
>>> for p in sch.parts:
...     p.insert(0, tempo.MetronomeMark('Langsam', 70))
>>> fe = features.jSymbolic.StrengthOfSecondStrongestRhythmicPulseFeature(sch)
>>> fe.extract().vector[0]
0.121...

StrengthOfSecondStrongestRhythmicPulseFeature bases

StrengthOfSecondStrongestRhythmicPulseFeature methods

StrengthOfSecondStrongestRhythmicPulseFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

StrengthOfStrongestRhythmicPulseFeature

class music21.features.jSymbolic.StrengthOfStrongestRhythmicPulseFeature(dataOrStream=None, **keywords)

Frequency of the beat bin with the highest frequency.

>>> sch = corpus.parse('schoenberg/opus19', 2)
>>> for p in sch.parts:
...     p.insert(0, tempo.MetronomeMark('Langsam', 70))
>>> fe = features.jSymbolic.StrengthOfStrongestRhythmicPulseFeature(sch)
>>> fe.extract().vector[0]
0.853...

StrengthOfStrongestRhythmicPulseFeature bases

StrengthOfStrongestRhythmicPulseFeature methods

StrengthOfStrongestRhythmicPulseFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

StrengthRatioOfTwoStrongestRhythmicPulsesFeature

class music21.features.jSymbolic.StrengthRatioOfTwoStrongestRhythmicPulsesFeature(dataOrStream=None, **keywords)

The frequency of the higher (in terms of frequency) of the two beat bins corresponding to the peaks with the highest frequency divided by the frequency of the lower.

>>> sch = corpus.parse('schoenberg/opus19', 2)
>>> for p in sch.parts:
...     p.insert(0, tempo.MetronomeMark('Langsam', 70))
>>> fe = features.jSymbolic.StrengthRatioOfTwoStrongestRhythmicPulsesFeature(sch)
>>> fe.extract().vector[0]
7.0

StrengthRatioOfTwoStrongestRhythmicPulsesFeature bases

StrengthRatioOfTwoStrongestRhythmicPulsesFeature methods

StrengthRatioOfTwoStrongestRhythmicPulsesFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

StringEnsembleFractionFeature

class music21.features.jSymbolic.StringEnsembleFractionFeature(dataOrStream=None, **keywords)

Not implemented

Fraction of all Note Ons belonging to string ensemble patches (General MIDI patches 49 to 52).

StringEnsembleFractionFeature bases

StringEnsembleFractionFeature methods

Methods inherited from InstrumentFractionFeature:

Methods inherited from FeatureExtractor:

StringKeyboardFractionFeature

class music21.features.jSymbolic.StringKeyboardFractionFeature(dataOrStream=None, **keywords)

Fraction of all Note Ons belonging to string keyboard patches (General MIDI patches 1 to 8).

>>> s1 = stream.Stream()
>>> s1.append(instrument.Piano())
>>> s1.repeatAppend(note.Note(), 9)
>>> s1.append(instrument.Tuba())
>>> s1.append(note.Note())
>>> fe = features.jSymbolic.StringKeyboardFractionFeature(s1)
>>> fe.extract().vector
[0.9...]

StringKeyboardFractionFeature bases

StringKeyboardFractionFeature methods

Methods inherited from InstrumentFractionFeature:

Methods inherited from FeatureExtractor:

StrongTonalCentresFeature

class music21.features.jSymbolic.StrongTonalCentresFeature(dataOrStream=None, **keywords)

Not implemented

Number of peaks in the fifths pitch histogram that each account for at least 9% of all Note Ons.
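
A minimal sketch of the thresholding described above (not the extractor's code): count the bins of a 12-bin fifths pitch histogram that hold at least 9% of all Note Ons. The histogram values are invented.

>>> def countStrongCentres(fifthsHistogram, threshold=0.09):
...     total = sum(fifthsHistogram)
...     if total == 0:
...         return 0
...     return sum(1 for v in fifthsHistogram if v / total >= threshold)
>>> countStrongCentres([40, 35, 5, 5, 5, 5, 2, 1, 1, 1, 0, 0])
2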

StrongTonalCentresFeature bases

StrongTonalCentresFeature methods

StrongTonalCentresFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

StrongestRhythmicPulseFeature

class music21.features.jSymbolic.StrongestRhythmicPulseFeature(dataOrStream=None, **keywords)

Bin label of the beat bin of the peak with the highest frequency.

>>> sch = corpus.parse('schoenberg/opus19', 2)
>>> for p in sch.parts:
...     p.insert(0, tempo.MetronomeMark('Langsam', 70))
>>> fe = features.jSymbolic.StrongestRhythmicPulseFeature(sch)
>>> f = fe.extract()
>>> f.vector[0]
140
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.StrongestRhythmicPulseFeature(s)
>>> f = fe.extract()
>>> f.vector
[96]

StrongestRhythmicPulseFeature bases

StrongestRhythmicPulseFeature methods

StrongestRhythmicPulseFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

TimePrevalenceOfPitchedInstrumentsFeature

class music21.features.jSymbolic.TimePrevalenceOfPitchedInstrumentsFeature(dataOrStream=None, **keywords)

Not implemented

The fraction of the total time of the recording in which a note was sounding for each (pitched) General MIDI Instrument. There is one entry for each instrument, which is set to the total time in seconds during which a given instrument was sounding one or more notes divided by the total length in seconds of the piece.

TODO: implement
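
The description amounts to a union of sounding intervals per instrument divided by the piece length. Below is a hedged sketch of that interval arithmetic alone, assuming the (start, end) times in seconds for one instrument's notes have already been collected; the numbers are invented.

>>> def soundingFraction(intervals, pieceLengthSeconds):
...     covered = 0.0
...     lastEnd = float('-inf')
...     for start, end in sorted(intervals):
...         start = max(start, lastEnd)  # do not double-count overlapping notes
...         if end > start:
...             covered += end - start
...         lastEnd = max(lastEnd, end)
...     return covered / pieceLengthSeconds
>>> soundingFraction([(0.0, 1.0), (0.5, 1.5)], 3.0)
0.5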

TimePrevalenceOfPitchedInstrumentsFeature bases

TimePrevalenceOfPitchedInstrumentsFeature methods

Methods inherited from FeatureExtractor:

TripleMeterFeature

class music21.features.jSymbolic.TripleMeterFeature(dataOrStream=None, **keywords)

Set to 1 if the numerator of the initial time signature is 3, and to 0 otherwise.

>>> s1 = stream.Stream()
>>> s1.append(meter.TimeSignature('5/4'))
>>> fe = features.jSymbolic.TripleMeterFeature(s1)
>>> fe.extract().vector
[0]
>>> s2 = stream.Stream()
>>> s2.append(meter.TimeSignature('3/4'))
>>> fe.setData(s2)  # change the data
>>> fe.extract().vector
[1]
>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.TripleMeterFeature(s)
>>> f = fe.extract()
>>> f.vector
[0]

TripleMeterFeature bases

TripleMeterFeature methods

TripleMeterFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

UnpitchedInstrumentsPresentFeature

class music21.features.jSymbolic.UnpitchedInstrumentsPresentFeature(dataOrStream=None, **keywords)

Not yet implemented

Which unpitched MIDI Percussion Key Map instruments are present. There is one entry for each instrument, which is set to 1.0 if there is at least one Note On in the recording corresponding to the instrument and to 0.0 if there is not. It should be noted that only instruments 35 to 81 are included here, as they are the ones that meet the official standard. They are numbered in this array from 0 to 46.
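
Though unimplemented, the indexing scheme is mechanical: percussion key numbers 35-81 map to array positions 0-46. A minimal sketch with invented key numbers:

>>> def percussionPresence(keyNumbers):
...     vector = [0.0] * 47  # keys 35..81 inclusive
...     for key in keyNumbers:
...         if 35 <= key <= 81:
...             vector[key - 35] = 1.0
...     return vector
>>> v = percussionPresence([35, 38, 42])  # e.g. bass drum, snare, closed hi-hat
>>> v[0], v[3], v[7], sum(v)
(1.0, 1.0, 1.0, 3.0)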

UnpitchedInstrumentsPresentFeature bases

UnpitchedInstrumentsPresentFeature methods

UnpitchedInstrumentsPresentFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

VariabilityOfNoteDurationFeature

class music21.features.jSymbolic.VariabilityOfNoteDurationFeature(dataOrStream=None, **keywords)

Standard deviation of note durations in seconds.

# In this piece, we have:
#   9 half notes (or tied pairs of quarters)
#   98 untied quarters (or tied pairs of eighths)
#   56 untied eighths
# At 120 BPM a half note lasts one second, so the mean duration
# should be 0.44171779141104295 and the standard deviation
# should be 0.17854763448902145.

>>> s = corpus.parse('bwv66.6')
>>> for p in s.parts:
...     p.insert(0, tempo.MetronomeMark(number=120))
>>> fe = features.jSymbolic.VariabilityOfNoteDurationFeature(s)
>>> f = fe.extract()
>>> f.vector[0]
0.178...
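
The figures in the comment above can be double-checked with the standard library, using a population standard deviation; this only verifies the arithmetic and is not the extractor's code.

>>> from statistics import fmean, pstdev
>>> durations = [1.0] * 9 + [0.5] * 98 + [0.25] * 56  # seconds at quarter = 120
>>> round(fmean(durations), 6)
0.441718
>>> round(pstdev(durations), 6)
0.178548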

VariabilityOfNoteDurationFeature bases

VariabilityOfNoteDurationFeature methods

VariabilityOfNoteDurationFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

VariabilityOfNotePrevalenceOfPitchedInstrumentsFeature

class music21.features.jSymbolic.VariabilityOfNotePrevalenceOfPitchedInstrumentsFeature(dataOrStream=None, **keywords)

Standard deviation of the fraction of Note Ons played by each (pitched) General MIDI instrument that is used to play at least one note.

>>> s1 = stream.Stream()
>>> s1.append(instrument.AcousticGuitar())
>>> s1.repeatAppend(note.Note(), 5)
>>> s1.append(instrument.Tuba())
>>> s1.append(note.Note())
>>> fe = features.jSymbolic.VariabilityOfNotePrevalenceOfPitchedInstrumentsFeature(s1)
>>> fe.extract().vector
[0.33333...]

VariabilityOfNotePrevalenceOfPitchedInstrumentsFeature bases

VariabilityOfNotePrevalenceOfPitchedInstrumentsFeature methods

VariabilityOfNotePrevalenceOfPitchedInstrumentsFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

VariabilityOfNotePrevalenceOfUnpitchedInstrumentsFeature

class music21.features.jSymbolic.VariabilityOfNotePrevalenceOfUnpitchedInstrumentsFeature(dataOrStream=None, **keywords)

Not implemented

Standard deviation of the fraction of Note Ons played by each (unpitched) MIDI Percussion Key Map instrument that is used to play at least one note. It should be noted that only instruments 35 to 81 are included here, as they are the ones that are included in the official standard.

TODO: implement

VariabilityOfNotePrevalenceOfUnpitchedInstrumentsFeature bases

VariabilityOfNotePrevalenceOfUnpitchedInstrumentsFeature methods

Methods inherited from FeatureExtractor:

VariabilityOfNumberOfIndependentVoicesFeature

class music21.features.jSymbolic.VariabilityOfNumberOfIndependentVoicesFeature(dataOrStream=None, **keywords)

Standard deviation of number of different channels in which notes have sounded simultaneously. Rests are not included in this calculation.

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.VariabilityOfNumberOfIndependentVoicesFeature(s)
>>> f = fe.extract()
>>> f.vector
[0.449...]

VariabilityOfNumberOfIndependentVoicesFeature bases

VariabilityOfNumberOfIndependentVoicesFeature methods

VariabilityOfNumberOfIndependentVoicesFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

VariabilityOfTimeBetweenAttacksFeature

class music21.features.jSymbolic.VariabilityOfTimeBetweenAttacksFeature(dataOrStream=None, **keywords)

Standard deviation of the times, in seconds, between Note On events (regardless of channel).

>>> s = corpus.parse('bwv66.6')
>>> fe = features.jSymbolic.VariabilityOfTimeBetweenAttacksFeature(s)
>>> f = fe.extract()
>>> print(f.vector)
[0.1875]

VariabilityOfTimeBetweenAttacksFeature bases

VariabilityOfTimeBetweenAttacksFeature methods

VariabilityOfTimeBetweenAttacksFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

VariationOfDynamicsFeature

class music21.features.jSymbolic.VariationOfDynamicsFeature(dataOrStream=None, **keywords)

Not implemented

Standard deviation of loudness levels of all notes.

TODO: implement

VariationOfDynamicsFeature bases

VariationOfDynamicsFeature methods

Methods inherited from FeatureExtractor:

VariationOfDynamicsInEachVoiceFeature

class music21.features.jSymbolic.VariationOfDynamicsInEachVoiceFeature(dataOrStream=None, **keywords)

Not implemented

The average of the standard deviations of loudness levels within each channel that contains at least one note.

TODO: implement
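
A hedged sketch of the described computation, assuming MIDI velocities have already been grouped by channel (the values below are invented): take the population standard deviation within each channel, then average those deviations.

>>> from statistics import pstdev
>>> def averagePerChannelDeviation(velocitiesByChannel):
...     deviations = [pstdev(v) for v in velocitiesByChannel if v]
...     return sum(deviations) / len(deviations) if deviations else 0.0
>>> averagePerChannelDeviation([[60, 80], [50, 56]])
6.5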

VariationOfDynamicsInEachVoiceFeature bases

VariationOfDynamicsInEachVoiceFeature methods

Methods inherited from FeatureExtractor:

VibratoPrevalenceFeature

class music21.features.jSymbolic.VibratoPrevalenceFeature(dataOrStream=None, **keywords)

Not yet implemented in music21

Number of notes for which Pitch Bend messages change direction at least twice, divided by the total number of notes that have Pitch Bend messages associated with them.
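
The heart of the definition is counting direction changes within a note's sequence of Pitch Bend values. A music21-independent, illustrative sketch of that test (with invented bend sequences):

>>> def changesDirectionTwice(bendValues):
...     directions = []
...     for a, b in zip(bendValues, bendValues[1:]):
...         if b != a:
...             directions.append(1 if b > a else -1)
...     changes = sum(1 for d1, d2 in zip(directions, directions[1:]) if d1 != d2)
...     return changes >= 2
>>> changesDirectionTwice([0, 300, 600, 300, 0, 300])  # up, down, up again
True
>>> changesDirectionTwice([0, 300, 600, 900])  # monotonic bend, no vibrato
False

The prevalence itself would then be the count of notes passing this test divided by the count of notes carrying any Pitch Bend data.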

VibratoPrevalenceFeature bases

VibratoPrevalenceFeature methods

VibratoPrevalenceFeature.process()

Do processing necessary, storing result in _feature.

Methods inherited from FeatureExtractor:

ViolinFractionFeature

class music21.features.jSymbolic.ViolinFractionFeature(dataOrStream=None, **keywords)

Fraction of all Note Ons belonging to violin patches (General MIDI patches 41 or 111).

>>> s1 = stream.Stream()
>>> s1.append(instrument.Violin())
>>> s1.repeatAppend(note.Note(), 2)
>>> s1.append(instrument.Tuba())
>>> s1.repeatAppend(note.Note(), 8)
>>> fe = features.jSymbolic.ViolinFractionFeature(s1)
>>> fe.extract().vector
[0.2...]

ViolinFractionFeature bases

ViolinFractionFeature methods

Methods inherited from InstrumentFractionFeature:

Methods inherited from FeatureExtractor:

VoiceEqualityDynamicsFeature

class music21.features.jSymbolic.VoiceEqualityDynamicsFeature(dataOrStream=None, **keywords)

Not implemented

TODO: implement

VoiceEqualityDynamicsFeature bases

VoiceEqualityDynamicsFeature methods

Methods inherited from FeatureExtractor:

VoiceEqualityMelodicLeapsFeature

class music21.features.jSymbolic.VoiceEqualityMelodicLeapsFeature(dataOrStream=None, **keywords)

Not implemented

TODO: implement

VoiceEqualityMelodicLeapsFeature bases

VoiceEqualityMelodicLeapsFeature methods

Methods inherited from FeatureExtractor:

VoiceEqualityNoteDurationFeature

class music21.features.jSymbolic.VoiceEqualityNoteDurationFeature(dataOrStream=None, **keywords)

Not implemented

TODO: implement

VoiceEqualityNoteDurationFeature bases

VoiceEqualityNoteDurationFeature methods

Methods inherited from FeatureExtractor:

VoiceEqualityNumberOfNotesFeature

class music21.features.jSymbolic.VoiceEqualityNumberOfNotesFeature(dataOrStream=None, **keywords)

Not implemented

Standard deviation of the total number of Note Ons in each channel that contains at least one note.

TODO: implement

VoiceEqualityNumberOfNotesFeature bases

VoiceEqualityNumberOfNotesFeature methods

Methods inherited from FeatureExtractor:

VoiceEqualityRangeFeature

class music21.features.jSymbolic.VoiceEqualityRangeFeature(dataOrStream=None, **keywords)

Not implemented

Standard deviation of the differences between the highest and lowest pitches in each channel that contains at least one note.

VoiceEqualityRangeFeature bases

VoiceEqualityRangeFeature methods

Methods inherited from FeatureExtractor:

VoiceSeparationFeature

class music21.features.jSymbolic.VoiceSeparationFeature(dataOrStream=None, **keywords)

Not implemented

Average separation in semitones between the average pitches of consecutive channels (after sorting channels by average pitch) that contain at least one note.
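
A hedged sketch of the arithmetic described, assuming each channel's average MIDI pitch has already been computed (the values below are invented): sort the averages and take the mean gap between neighbours.

>>> def voiceSeparation(averagePitches):
...     ordered = sorted(averagePitches)
...     if len(ordered) < 2:
...         return 0.0
...     gaps = [b - a for a, b in zip(ordered, ordered[1:])]
...     return sum(gaps) / len(gaps)
>>> voiceSeparation([65.0, 48.0, 57.0])  # e.g. soprano, bass, tenor averages
8.5

Since the gaps telescope, this equals the spread between the highest and lowest channel averages divided by one less than the number of channels.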

VoiceSeparationFeature bases

VoiceSeparationFeature methods

Methods inherited from FeatureExtractor:

WoodwindsFractionFeature

class music21.features.jSymbolic.WoodwindsFractionFeature(dataOrStream=None, **keywords)

Fraction of all Note Ons belonging to woodwind patches (General MIDI patches 69 through 76).

TODO: Conflict in source: should this cover patches 69-79?

>>> s1 = stream.Stream()
>>> s1.append(instrument.Flute())
>>> s1.repeatAppend(note.Note(), 3)
>>> s1.append(instrument.Tuba())
>>> s1.repeatAppend(note.Note(), 7)
>>> fe = features.jSymbolic.WoodwindsFractionFeature(s1)
>>> print(fe.extract().vector[0])
0.3

WoodwindsFractionFeature bases

WoodwindsFractionFeature methods

Methods inherited from InstrumentFractionFeature:

Methods inherited from FeatureExtractor:

Functions

music21.features.jSymbolic.getCompletionStats()

>>> features.jSymbolic.getCompletionStats()
completion stats: 72/112 (0.6428...)

music21.features.jSymbolic.getExtractorByTypeAndNumber(extractorType, number)

Typical usage:

>>> t5 = features.jSymbolic.getExtractorByTypeAndNumber('T', 5)
>>> t5.__name__
'VoiceEqualityNoteDurationFeature'
>>> bachExample = corpus.parse('bach/bwv66.6')
>>> fe = t5(bachExample)

Features unimplemented in jSymbolic but documented in the dissertation return None:

>>> features.jSymbolic.getExtractorByTypeAndNumber('C', 20) is None
True

Totally unknown features raise an exception:

>>> features.jSymbolic.getExtractorByTypeAndNumber('L', 900)
Traceback (most recent call last):
music21.features.jSymbolic.JSymbolicFeatureException: Could not find
    any jSymbolic features of type L
>>> features.jSymbolic.getExtractorByTypeAndNumber('C', 200)
Traceback (most recent call last):
music21.features.jSymbolic.JSymbolicFeatureException: jSymbolic
    features of type C do not have number 200

You could also find all the feature extractors this way:

>>> fs = features.jSymbolic.extractorsById
>>> for k in fs:
...     for i in range(len(fs[k])):
...         if fs[k][i] is not None:
...             n = fs[k][i].__name__
...             if fs[k][i] not in features.jSymbolic.featureExtractors:
...                 n += ' (not implemented)'
...             print(f'{k} {i} {n}')
D 1 OverallDynamicRangeFeature (not implemented)
D 2 VariationOfDynamicsFeature (not implemented)
D 3 VariationOfDynamicsInEachVoiceFeature (not implemented)
D 4 AverageNoteToNoteDynamicsChangeFeature (not implemented)
I 1 PitchedInstrumentsPresentFeature
I 2 UnpitchedInstrumentsPresentFeature (not implemented)
I 3 NotePrevalenceOfPitchedInstrumentsFeature
I 4 NotePrevalenceOfUnpitchedInstrumentsFeature (not implemented)
I 5 TimePrevalenceOfPitchedInstrumentsFeature (not implemented)
I 6 VariabilityOfNotePrevalenceOfPitchedInstrumentsFeature
I 7 VariabilityOfNotePrevalenceOfUnpitchedInstrumentsFeature (not implemented)
I 8 NumberOfPitchedInstrumentsFeature
I 9 NumberOfUnpitchedInstrumentsFeature (not implemented)
I 10 PercussionPrevalenceFeature (not implemented)
I 11 StringKeyboardFractionFeature
I 12 AcousticGuitarFractionFeature
I 13 ElectricGuitarFractionFeature
I 14 ViolinFractionFeature
I 15 SaxophoneFractionFeature
I 16 BrassFractionFeature
I 17 WoodwindsFractionFeature
I 18 OrchestralStringsFractionFeature
I 19 StringEnsembleFractionFeature
I 20 ElectricInstrumentFractionFeature
M 1 MelodicIntervalHistogramFeature
M 2 AverageMelodicIntervalFeature
M 3 MostCommonMelodicIntervalFeature
M 4 DistanceBetweenMostCommonMelodicIntervalsFeature
M 5 MostCommonMelodicIntervalPrevalenceFeature
M 6 RelativeStrengthOfMostCommonIntervalsFeature
M 7 NumberOfCommonMelodicIntervalsFeature
M 8 AmountOfArpeggiationFeature
M 9 RepeatedNotesFeature
M 10 ChromaticMotionFeature
M 11 StepwiseMotionFeature
M 12 MelodicThirdsFeature
M 13 MelodicFifthsFeature
M 14 MelodicTritonesFeature
M 15 MelodicOctavesFeature
M 17 DirectionOfMotionFeature
M 18 DurationOfMelodicArcsFeature
M 19 SizeOfMelodicArcsFeature
P 1 MostCommonPitchPrevalenceFeature
P 2 MostCommonPitchClassPrevalenceFeature
P 3 RelativeStrengthOfTopPitchesFeature
P 4 RelativeStrengthOfTopPitchClassesFeature
P 5 IntervalBetweenStrongestPitchesFeature
P 6 IntervalBetweenStrongestPitchClassesFeature
P 7 NumberOfCommonPitchesFeature
P 8 PitchVarietyFeature
P 9 PitchClassVarietyFeature
P 10 RangeFeature
P 11 MostCommonPitchFeature
P 12 PrimaryRegisterFeature
P 13 ImportanceOfBassRegisterFeature
P 14 ImportanceOfMiddleRegisterFeature
P 15 ImportanceOfHighRegisterFeature
P 16 MostCommonPitchClassFeature
P 17 DominantSpreadFeature (not implemented)
P 18 StrongTonalCentresFeature (not implemented)
P 19 BasicPitchHistogramFeature
P 20 PitchClassDistributionFeature
P 21 FifthsPitchHistogramFeature
P 22 QualityFeature
P 23 GlissandoPrevalenceFeature (not implemented)
P 24 AverageRangeOfGlissandosFeature (not implemented)
P 25 VibratoPrevalenceFeature (not implemented)
R 1 StrongestRhythmicPulseFeature (not implemented)
R 2 SecondStrongestRhythmicPulseFeature (not implemented)
R 3 HarmonicityOfTwoStrongestRhythmicPulsesFeature (not implemented)
R 4 StrengthOfStrongestRhythmicPulseFeature (not implemented)
R 5 StrengthOfSecondStrongestRhythmicPulseFeature (not implemented)
R 6 StrengthRatioOfTwoStrongestRhythmicPulsesFeature (not implemented)
R 7 CombinedStrengthOfTwoStrongestRhythmicPulsesFeature (not implemented)
R 8 NumberOfStrongPulsesFeature (not implemented)
R 9 NumberOfModeratePulsesFeature (not implemented)
R 10 NumberOfRelativelyStrongPulsesFeature (not implemented)
R 11 RhythmicLoosenessFeature (not implemented)
R 12 PolyrhythmsFeature (not implemented)
R 13 RhythmicVariabilityFeature (not implemented)
R 14 BeatHistogramFeature (not implemented)
R 15 NoteDensityFeature
R 17 AverageNoteDurationFeature
R 18 VariabilityOfNoteDurationFeature
R 19 MaximumNoteDurationFeature
R 20 MinimumNoteDurationFeature
R 21 StaccatoIncidenceFeature
R 22 AverageTimeBetweenAttacksFeature
R 23 VariabilityOfTimeBetweenAttacksFeature
R 24 AverageTimeBetweenAttacksForEachVoiceFeature
R 25 AverageVariabilityOfTimeBetweenAttacksForEachVoiceFeature
R 30 InitialTempoFeature
R 31 InitialTimeSignatureFeature
R 32 CompoundOrSimpleMeterFeature
R 33 TripleMeterFeature
R 34 QuintupleMeterFeature
R 35 ChangesOfMeterFeature
R 36 DurationFeature
T 1 MaximumNumberOfIndependentVoicesFeature
T 2 AverageNumberOfIndependentVoicesFeature
T 3 VariabilityOfNumberOfIndependentVoicesFeature
T 4 VoiceEqualityNumberOfNotesFeature (not implemented)
T 5 VoiceEqualityNoteDurationFeature (not implemented)
T 6 VoiceEqualityDynamicsFeature (not implemented)
T 7 VoiceEqualityMelodicLeapsFeature (not implemented)
T 8 VoiceEqualityRangeFeature (not implemented)
T 9 ImportanceOfLoudestVoiceFeature (not implemented)
T 10 RelativeRangeOfLoudestVoiceFeature (not implemented)
T 12 RangeOfHighestLineFeature (not implemented)
T 13 RelativeNoteDensityOfHighestLineFeature (not implemented)
T 15 MelodicIntervalsInLowestLineFeature (not implemented)
T 20 VoiceSeparationFeature (not implemented)