Searching for Beauty in Music
Applications of Zipf's Law in MIDI-Encoded Music



NOTE: This page is superseded - see newer results in music and fractals.


Where shall you seek beauty, and how shall you find her
unless she herself be your way and your guide?
And how shall you speak of her except she be the weaver of your speech?


--Kahlil Gibran, The Prophet, p. 74

[Overview]   [Background]   [Data and Results]   [Credits]   [Publications]   [References]


Overview    (top)

This project explores stochastic techniques to computationally identify and emphasize aesthetic aspects of music. Currently, we are studying ways to apply the Zipf-Mandelbrot law to musical pieces encoded in MIDI. 

We have extended earlier results (Voss and Clarke, 1975; Zipf, 1949) by identifying a set of measurable attributes of music that may exhibit Zipf-Mandelbrot distributions. These measurable attributes (metrics) include the pitch of notes, the duration of notes, harmonic and melodic intervals, and many others. Experiments on corpora from various music genres (e.g., baroque, classical, 12-tone, jazz, rock, punk rock) demonstrate the validity of the approach.  Currently, we are investigating ways to combine our metrics with AI techniques, such as neural networks and genetic algorithms, to analyze and help generate music that sounds "pleasing, beautiful, harmonious."  Related application areas include music education, music therapy, music recognition by computers, and computer-aided music analysis/composition.


Background    (top)

Earlier studies (Voss and Clarke, 1975) show that pitch and loudness fluctuations in music follow Zipf's distribution.  However, they were unable to show this for note fluctuations; their analysis was carried out at the level of frequencies in an electrical signal. Eventually, Voss and Clarke reversed the process so they could compose music through a computer. Their computer program used a Zipf's distribution (1/f power spectrum) generator to produce pitch fluctuations. The results were remarkable: the music produced by this method was judged by most listeners to be much more pleasing than music from generators that did not follow Zipf's distribution. They concluded that "the sophistication of this '1/f music' (which was 'just right') extends far beyond what one might expect from such a simple algorithm, suggesting that a '1/f noise' (perhaps that in nerve membranes?) may have an essential role in the creative process." [Voss and Clarke, 1975, p. 258]
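Voss and Clarke's generator operated on analog signals, but the idea can be illustrated in a few lines of code. The sketch below is not their original program: it uses the Voss-McCartney algorithm, a standard discrete approximation of a 1/f source, together with a hypothetical helper that rescales the output onto MIDI note numbers.

```python
import random

def pink_noise(n, num_rows=8, seed=1):
    """Approximate 1/f ("pink") noise via the Voss-McCartney scheme:
    several random rows are summed; on each step only the row indexed
    by the counter's lowest set bit is refreshed, so low-index rows
    change often (high frequencies) and high-index rows change rarely
    (low frequencies)."""
    rng = random.Random(seed)
    rows = [rng.uniform(-1, 1) for _ in range(num_rows)]
    samples = []
    for counter in range(1, n + 1):
        k = (counter & -counter).bit_length() - 1  # index of lowest set bit
        rows[min(k, num_rows - 1)] = rng.uniform(-1, 1)
        samples.append(sum(rows) + rng.uniform(-1, 1))  # plus a white term
    return samples

def to_midi_pitches(samples, low=48, high=84):
    """Rescale noise samples onto a MIDI pitch range (C3..C6 here)."""
    lo, hi = min(samples), max(samples)
    return [low + round((s - lo) / (hi - lo) * (high - low)) for s in samples]
```

A 64-note "1/f melody" in this sketch is then simply to_midi_pitches(pink_noise(64)); replacing pink_noise with a plain uniform generator yields the white-noise control condition.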

We have extended these results by identifying a larger set of measurable attributes of music pieces to which the Zipf-Mandelbrot law may be applied. These measurable attributes (metrics) include the pitch of musical events, the duration of musical events, the combination of pitch and duration, harmonic and melodic intervals, and several others.  After several manual experiments, which demonstrated the promise of this approach, we automated these metrics. Applying them to corpora from various music genres (e.g., baroque, classical, 12-tone, jazz, and rock) demonstrates the validity of the approach (see Data and Results).  
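The core of any such metric is a rank-frequency fit. As an illustration only (this is a minimal sketch, not the project's actual implementation), the following counts the occurrences of any kind of musical event, sorts the counts by rank, and fits a line in log-log space; a slope near -1 with a good fit suggests a Zipfian distribution.

```python
import math
from collections import Counter

def zipf_slope(events):
    """Rank-frequency Zipf metric: count event occurrences, sort the
    counts in decreasing order, and fit log(frequency) against
    log(rank) by ordinary least squares.
    Returns (slope, r_squared)."""
    counts = sorted(Counter(events).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(counts) + 1)]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sxx, (sxy * sxy) / (sxx * syy)
```

The same function covers several of the metrics above by changing what counts as an event: pass the pitches themselves, the durations, or, for melodic intervals, the successive differences [b - a for a, b in zip(pitches, pitches[1:])].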

Current Directions

We are investigating ways to combine Zipf metrics with AI techniques, such as neural networks and genetic algorithms, to analyze and generate music that sounds "pleasing, beautiful, harmonious."  Related application areas include music education, music therapy, music recognition by computers, and computer-aided music analysis/composition.  Currently, we are exploring three directions:

1) Classification of pleasant music through artificial neural networks.

2) Genetic algorithms for generation of pleasant music.

3) Development of Zipf-Mandelbrot metrics (an extension of Zipf metrics).  


Data and Results    (top)

Zipf's distribution in music

A study of a corpus of 220 pieces of baroque, classical, 12-tone, jazz, pop, rock, and random (aleatory) music revealed near-Zipfian distributions across many of our metrics (melodic intervals, harmonic intervals, pitch&duration, etc.).  Also, certain patterns seem to emerge; for instance, we are able to automatically distinguish 12-tone music from other types of music (including random pieces).  

Figures 1 and 2 below show an example from this study.  

Fig. 1. Pitch distribution for Bach's Orchestral Suite No. 3 in D, '2. Air on the G String', BWV 1068.

Fig. 2. Pitch distribution for Random Piece No. 7 (white noise).

For additional information, see Manaris, Purewal, and McCormick (2002).

Music Classification

Juan Romero and his group (at the University of La Coruña, Spain) used our metrics to train an artificial neural network (ANN).  This ANN was able to classify music by Bach and Beethoven with 100% accuracy.  This experiment was conducted on a corpus of 132 pieces by Bach (BWV 500 to BWV 599) and Beethoven (32 piano sonatas).  The ANN was trained on 66% of the corpus (97 pieces) and tested on the remaining 47 pieces.

Figures 3 and 4 show visualizations of the six metrics identified by the ANN as the most relevant for differentiating Bach and Beethoven.  These metrics capture various statistical aspects of (a) pitch and (b) melodic intervals. In particular, the x-axis (blue) corresponds to the significant metrics (1 to 6); the y-axis (red) corresponds to the music pieces (1 to 32); and the z-axis (green) corresponds to the absolute value of the metrics.

Fig. 3. Bach-scape - a 3D contour map of six Zipf metrics over 32 Bach pieces

Fig. 4. Beethoven-scape - a 3D contour map of six Zipf metrics over 32 Beethoven pieces

Incidentally, these visualizations help identify Beethoven's Piano Sonata No. 20 as an outlier.  This piece exhibits an "unexpected" peak of 1.7472 for metric #3.  Metric #3 captures the Zipf balance of pitch regardless of octave (e.g., C1 and C4 are counted as the same note).  This indicates that Piano Sonata No. 20 is considerably more monotonous, in terms of pitch regardless of octave, than the other piano sonatas.  This may be accidental, or it could be the result of Beethoven trying something different when composing this piece.
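"Pitch regardless of octave" corresponds to the pitch class of a note; in MIDI terms it is simply the note number modulo 12, since notes 12 apart are an octave apart. A small sketch (the helper name is hypothetical, not from the project's code):

```python
from collections import Counter

# In MIDI, middle C is note 60, so n % 12 folds every C
# (..., 48, 60, 72, ...) onto pitch class 0.
PITCH_CLASSES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def pitch_class_counts(midi_notes):
    """Count note occurrences with octave information discarded."""
    return Counter(PITCH_CLASSES[n % 12] for n in midi_notes)
```

For example, pitch_class_counts([48, 60, 72, 64]) counts three C's and one E; a rank-frequency Zipf fit is then computed over these twelve folded counts rather than over the full pitch range.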

In a preliminary follow-up experiment, we trained an ANN to classify music by Bach and Chopin with 98.69% accuracy. This ANN was trained on 300 pieces and tested on 153 pieces.  Additional ANN experiments are being conducted.

For additional information on these experiments, see Machado et al. (2003).
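These experiments trained neural networks on vectors of Zipf metrics. As a toy sketch of that setup only (synthetic data and a single-layer perceptron stand in for the actual corpus and ANN), the following trains a linear classifier on two-dimensional "metric vectors" for two hypothetical composers and evaluates it on a held-out split:

```python
import random

def train_perceptron(data, labels, epochs=50, lr=0.1):
    """Classic perceptron rule: nudge the weights toward each
    misclassified example until the two classes are separated."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

def accuracy(w, b, data, labels):
    """Fraction of examples the trained unit classifies correctly."""
    hits = sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
               for x, y in zip(data, labels))
    return hits / len(data)

# Synthetic 2-D "Zipf metric vectors" (e.g., slope and fit quality) for
# two invented composers -- NOT real data from the study.
rng = random.Random(0)
data, labels = [], []
for _ in range(66):
    data.append([-1.2 + rng.uniform(-0.1, 0.1), 0.9 + rng.uniform(-0.1, 0.1)])
    labels.append(0)
    data.append([-0.7 + rng.uniform(-0.1, 0.1), 0.6 + rng.uniform(-0.1, 0.1)])
    labels.append(1)

# Train on roughly two thirds and hold out the rest, mirroring the
# train/test split above in spirit only.
train_d, train_l = data[:88], labels[:88]
test_d, test_l = data[88:], labels[88:]
w, b = train_perceptron(train_d, train_l)
```

A multi-layer network with backpropagation, as used in the published experiments, replaces the single linear unit but keeps the same train-on-metrics, test-on-held-out-pieces structure.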


Credits    (top)

The following individuals have contributed to this project (in reverse chronological order; students in bold): William Daugherty, Dallas Vaughan, Christopher Wagner, Penousal Machado, Juan Romero, Charles McCormick, Tarsem Purewal, Dwight Krehbiel, Robert B. Davis, Valerie Sessions, Yuliya Schmidt, James Wilkinson, and Bill Manaris.   

The project has received support from the Classical Music Archives and the College of Charleston.


Publications    (top)

  1. Penousal Machado, Juan Romero, Bill Manaris, Antonino Santos, and Amilcar Cardoso, (2003), "Power to the Critics - A Framework for the Development of Artificial Critics," in Proceedings of 3rd Workshop on Creative Systems, 18th International Joint Conference on Artificial Intelligence (IJCAI 2003), Acapulco, Mexico, Aug. 2003, pp. 55-64.

  2. Bill Manaris, Dallas Vaughan, Christopher Wagner, Juan Romero, and Robert B. Davis, (2003), "Evolutionary Music and the Zipf-Mandelbrot Law: Developing Fitness Functions for Pleasant Music," EvoMUSART2003 - 1st European Workshop on Evolutionary Music and Art, Essex, UK, Lecture Notes in Computer Science, Applications of Evolutionary Computing, LNCS 2611, Springer-Verlag, Apr. 2003, pp. 522-534.

  3. Bill Manaris, Tarsem Purewal, and Charles McCormick, (2002), "Progress Towards Recognizing and Classifying Beautiful Music with Computers - MIDI-Encoded Music and the Zipf-Mandelbrot Law," Proceedings of the IEEE SoutheastCon 2002, Columbia, SC, Apr. 2002, pp. 52-57.

  4. Bill Manaris, Charles McCormick, and Tarsem Purewal, (2001), "Searching for Beauty in Music--Applications of Zipf's Law in MIDI-Encoded Music," 2001 Sigma Xi Forum, "Science, the Arts and the Humanities: Connections and Collisions" (poster and demonstration), Raleigh, NC, November 8-9, 2001.


References    (top)

  1. Adamic, L.A., (1999), "Zipf, Power-laws, and Pareto - a Ranking Tutorial", www.parc.xerox.com/istl/groups/iea/papers/ranking/ 
  2. Balaban, M., Ebcioglu, K., and Laske, O., eds. (1992), Understanding Music with AI: Perspectives on Music Cognition, AAAI Press and MIT Press. 
  3. Dobrian, C. (1992), "Music and Artificial Intelligence", www.arts.uci.edu/dobrian/CD.music.ai.htm 
  4. Elliot, J. and Atwell, E. (2000), "Is Anybody Out There? The Detection of Intelligent and Generic Language-Like Features", Journal of the British Interplanetary Society 53(1/2), pp. 13-22, www.comp.leeds.ac.uk/eric/jbisjournal2000.ps 
  5. Glatt, J., "Tutorial for MIDI Users", www.borg.com/~jglatt/tutr/miditutr.htm 
  6. Knuth, K. (1997), "Power Laws and Hierarchical Organization in Complex Systems-From Sandpiles and Monetary Systems to Brains, Language, and Music", CUNY Cognitive Science Symposium. http://bulky.aecom.yu.edu/users/kknuth/complex/powerlaws.html
  7. Li, W. (2000), "Zipf's Law" http://linkage.rockefeller.edu/wli/zipf/ 
  8. Mandelbrot, B.B. (1977), The Fractal Geometry of Nature, W.H. Freeman and Company. 
  9. Schroeder, M. (1991), Fractals, Chaos, Power Laws, W.H. Freeman. 
  10. Voss, R.F., and Clarke, J. (1975), "1/f Noise in Music and Speech", Nature 258, pp. 317-318. 
  11. Zipf, G.K. (1949), Human Behavior and the Principle of Least Effort, Addison-Wesley.

manaris@cs.cofc.edu.
Last updated on Thursday, November 06, 2003 06:08 PM -0500