Ch. 7 – Sonification and Big Data

Topics:   Data sonification, mapValue() and mapScale(), Kepler, Python strings, music from text, Guido d’Arezzo, nested loops, file input/output, Python while loop, big data, biosignal sonification, defining functions, image sonification, Python images, visual soundscapes.

According to Scaletti (1993), “[t]he idea of representing data in sound is an ancient one. For the ancient Greeks music was not an art-for-art’s sake, practiced in a vacuum, but a manifestation of the same ratios and relationships as those found in geometry or in the positions and behaviors of the planets.” Pythagoras, Plato, and Aristotle worked on quantitative expressions of proportion and beauty, such as the golden ratio. Pythagoreans, for instance, quantified harmonious musical intervals in terms of proportions (ratios) of the numbers 1, 2, 3, 4 and 5. This became the basis for the scales, modes and tuning systems used in Western music.

Sonification allows us to capture and better experience phenomena that are outside our sensory range by mapping values into sound structures that we can perceive by listening to them. Data for sonification may come from any measurable vibration or fluctuation, such as planetary orbits, magnitudes of earthquakes, positions of branches on a tree, lengths of words in this chapter, and so on.

Mapping a value from one range to another is very common in sonification. For instance, you may want to map a list of numbers (say, from 20.0 to 110.0) to pitch values (say, from 30 to 90). This becomes more realistic if, say, you are interested in global warming and want to explore how temperatures change over time. By converting temperatures to pitch values you can actually hear these changes.

The music library provides two functions precisely for this task, i.e., mapValue() and mapScale().
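To make this concrete, here is a minimal, plain-Python sketch of the linear mapping idea behind mapValue() (an illustration of the concept only, not the library’s implementation; the temperature values are made up):

    def map_value(value, min_in, max_in, min_out, max_out):
        # linearly map value from [min_in, max_in] to [min_out, max_out]
        normalized = (value - min_in) / float(max_in - min_in)   # 0.0 to 1.0
        return min_out + normalized * (max_out - min_out)

    # made-up temperatures mapped to MIDI pitches in the range 30-90
    temperatures = [20.0, 45.5, 73.2, 110.0]
    pitches = [int(round(map_value(t, 20.0, 110.0, 30, 90))) for t in temperatures]
    print(pitches)   # [30, 47, 65, 90]

mapScale() works similarly, but additionally constrains each resulting pitch to a given scale (e.g., a major scale), which is often more musically useful.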

Sonifying planetary data

In 1619 Johannes Kepler wrote his book Harmonices Mundi (Harmonies of the World) (Kepler, 1619). While the Pythagoreans only talked about the “music of the spheres,” Kepler discovered physical harmonies in planetary motion. As a result, he became a key figure in the development of astronomy and modern physics.

Following Kepler’s studies, in the late 1700s Johann Daniel Titius and Johann Elert Bode independently contributed to a model of the symmetries and proportions of our solar system. Their formula, known as the Titius–Bode law (or simply Bode’s law), predicts the positions of the planets in our solar system. It even predicted the asteroid belt between Mars and Jupiter (long before it was discovered), but it fails to account for the irregularly moving Neptune and the (now demoted) dwarf planet Pluto.
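For reference, the formula can be written as a ≈ 0.4 + 0.3 · k astronomical units, where k = 0, 1, 2, 4, 8, … (0 followed by successive doublings of 1):

    # Titius-Bode law: predicted orbital radius in astronomical units (AU)
    for k in [0, 1, 2, 4, 8, 16, 32, 64, 128]:
        print(round(0.4 + 0.3 * k, 1))
    # predicted distances: 0.4 (Mercury), 0.7 (Venus), 1.0 (Earth), 1.6 (Mars),
    # 2.8 (asteroid belt), 5.2 (Jupiter), 10.0 (Saturn), 19.6 (Uranus), and
    # 38.8 - which is where the law breaks down (Neptune orbits at about 30.1 AU)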

This code sample (Ch. 7, p. 196) sonifies one aspect of the celestial organization of the planets. In particular, it converts the orbital velocities of the planets to musical notes, mapping the range of velocities to a range of MIDI pitches using the mapScale() function.

Here, in the spirit of J.S. Bach and Arvo Pärt, we build a canon from the sonified orbital velocities. To do this, we treat the melody as the theme and use canonic devices (seen in Ch. 4) to create a celestial canon. We choose to play the melody concurrently, against itself, using different durations (see figure below). This is similar to Arvo Pärt’s musical structure for “Cantus in Memoriam” (seen in Ch. 4).

Diagram of canon structure (proposed by Douglas McNellis and Ian Fricker).
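To give a flavor of the approach (a sketch of the idea only, not the book’s program), the snippet below maps approximate mean orbital velocities (in km/s) to MIDI pitches with a plain-Python stand-in for mapScale(), and then reuses the resulting melody at two speeds, as the canon does:

    # approximate mean orbital velocities (km/s), Mercury through Neptune
    velocities = [47.4, 35.0, 29.8, 24.1, 13.1, 9.7, 6.8, 5.4]

    MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]    # semitone offsets within an octave

    def map_to_scale(value, min_in, max_in, min_pitch, max_pitch, scale=MAJOR_SCALE):
        # map value linearly to [min_pitch, max_pitch], then snap down to a scale tone
        pitch = min_pitch + (value - min_in) / float(max_in - min_in) * (max_pitch - min_pitch)
        pitch = int(round(pitch))
        while pitch % 12 not in scale:
            pitch -= 1
        return pitch

    theme = [map_to_scale(v, 5.4, 47.4, 36, 84) for v in velocities]

    # canonic treatment: the same pitches against themselves at different speeds
    voice1 = [(pitch, 1.0) for pitch in theme]   # base durations
    voice2 = [(pitch, 2.0) for pitch in theme]   # twice as long (slower voice)
    print(voice1)
    print(voice2)

Playing such voices concurrently, as described above, produces the canon.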

 


Making music from text

A simple way to algorithmically dictate musical structures is to follow some existing data patterns. One source of data patterns is astronomical data, as we saw above. Another source of patterns is natural language (e.g., English). Since languages have inherent structure—as described by Noam Chomsky (1957) and George K. Zipf (1949)—it is reasonable to expect that music based on text might maintain some of the expressiveness inherent in this structure. The sonification of text can, like all sonifications, use simple or complex mappings between the text and sound.

This code sample (Ch. 7, p. 202) demonstrates how to generate music from text. Using the Python built-in function ord(), this program converts the ASCII values of characters to MIDI pitches. For variety, note durations are randomized; other note properties (volume, etc.) are the same for all notes.

Notice that the string to sonify is at the top of the code. If you change this string, you will get different (yet similar) music. Why? The music generated depends on the relative probabilities of characters in the English language (and not on the actual words or, even further, the meaning of those words). It would be interesting to explore how to somehow map the meaning of words (or the actual words) to note pitch. This would involve more work (beyond the scope of this chapter), but it is definitely something that could be explored using Python.
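As a rough sketch of this idea (not the book’s program; the pitch mapping and duration choices below are assumptions), the snippet turns each letter of a string into a MIDI pitch via ord() and picks a random duration for each note:

    from random import choice

    text = "Let sound reveal the structure hidden in this sentence."

    pitches = []
    durations = []
    for character in text:
        if character.isalpha():                    # sonify letters only
            ascii_value = ord(character.lower())   # 'a' = 97 ... 'z' = 122
            pitches.append(ascii_value - 97 + 60)  # map 'a'-'z' to MIDI 60-85
            durations.append(choice([0.25, 0.5, 1.0]))   # random duration

    print(list(zip(pitches, durations)))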

 


Recreating Guido d’Arezzo’s “Word Music” (ca. 1000)

One of the oldest known algorithmic music processes is a rule-based algorithm, credited to Guido d’Arezzo (991 – 1033), that selects each note based on the letters in a text. Originally, the melody was meant to be sung, with the text serving as its lyrics. Each vowel in the text is associated with a pitch, and the duration of each note is derived from the length of the word it appears in.

This code sample (Ch. 7, p. 207) is an approximation of d’Arezzo’s algorithm, adapted to text written in ASCII and to modern musical sensibilities. Although d’Arezzo’s original intention was simply to provide an approximate composition guide, here we formalize and automate these rules.
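As a simple illustration of such a rule set (a sketch only; the vowel-to-pitch table and the duration rule below are assumptions, not d’Arezzo’s original mapping), consider:

    # hypothetical vowel-to-pitch table (MIDI pitch numbers); d'Arezzo used a
    # larger table covering his gamut, so treat this mapping as illustrative
    VOWEL_PITCHES = {"a": 60, "e": 62, "i": 64, "o": 65, "u": 67}

    def word_music(text):
        # one note per vowel; note duration grows with the length of its word
        notes = []
        for word in text.lower().split():
            duration = min(len(word), 8) * 0.25
            for letter in word:
                if letter in VOWEL_PITCHES:
                    notes.append((VOWEL_PITCHES[letter], duration))
        return notes

    print(word_music("Ut queant laxis resonare fibris"))

“Ut queant laxis” is the hymn from which d’Arezzo derived the solmization syllables ut, re, mi, fa, sol, la.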

 


Sonifying biosignals

Here we explore pre-processing and sonification of data from biological processes. The figure below displays heart data, captured by measuring blood pressure over time.

Sample raw heart data (x-axis is time, y-axis is pressure).

Moreover, the figure below displays skin conductance, captured by measuring electrical conductivity between two fingers over time (the sweatier the fingers get, the higher the skin conductance).

Sample skin-conductance data (x-axis is time, y-axis is skin conductance).

The data presented in the above figures are actually stored in a data file. In order to sonify these data, we first need to understand their format (i.e., how they are stored in our data file), which is described below.

The data format consists of three columns (fields): the time of measurement (e.g., 20:39:51.560), the skin conductance at that time (e.g., 1.84), and the blood pressure at that time (e.g., 1.880). These data were captured at a rate of approximately 30 measurements per second.

The complete data file is available here.
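Reading such a file in Python is straightforward. The sketch below assumes a whitespace-separated text file named biosignals.txt (the filename and the delimiter are assumptions; adjust split() if the file uses a different separator):

    timestamps = []      # e.g., "20:39:51.560"
    skinValues = []      # skin conductance
    heartValues = []     # blood pressure

    dataFile = open("biosignals.txt")    # hypothetical filename
    for line in dataFile:
        fields = line.split()            # three whitespace-separated columns
        if len(fields) != 3:
            continue                     # skip blank or malformed lines
        timestamps.append(fields[0])
        skinValues.append(float(fields[1]))
        heartValues.append(float(fields[2]))
    dataFile.close()

    print(str(len(timestamps)) + " measurements read")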

Sonification Design

To analyze data through sonification we need to find a way to map these data to sound. We pose the following questions: How can we map characteristics of these data to musical parameters?  Are there some characteristics that are more important than others? Are there certain musical parameters better suited to sonify these data characteristics? For a given data set, there may be many ways to answer these questions.

First of all, there is no single correct way to map data to sound. Again, the trick is to decide which aspects of the data you would like to make easily perceivable by mapping them to sound parameters. Moreover, in the context of music-making, you might also consider which aspects of the data might contribute to more interesting music.

Below are some possibilities for the above data set:

  • Map skin data to pitch (remember to scale to a preferred integer range, e.g., C3 – C6).
  • Also map heart data to pitch (this adds some variety to the pitch material).
  • Map heart data to dynamic (remember to scale to 0 – 127).

This code sample (Ch. 7, p. 217) demonstrates these rules:
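As a rough sketch of how these rules might be realized (this is not the book’s program; the data ranges, pitch ranges, and sample values below are illustrative assumptions), using the same linear mapping idea as mapValue():

    def mapValue(value, minIn, maxIn, minOut, maxOut):
        # plain-Python stand-in for the library's mapValue()
        return minOut + (value - minIn) / float(maxIn - minIn) * (maxOut - minOut)

    # illustrative data ranges - in practice use min()/max() of the actual columns
    SKIN_MIN, SKIN_MAX = 1.0, 3.0      # skin conductance
    HEART_MIN, HEART_MAX = 1.0, 2.5    # blood pressure
    C3, C6 = 48, 84                    # MIDI pitch range for the melody

    skinValues = [1.84, 1.91, 2.10]        # illustrative sample values
    heartValues = [1.880, 1.875, 1.902]    # illustrative sample values

    notes = []
    for skin, heart in zip(skinValues, heartValues):
        pitch = mapValue(skin, SKIN_MIN, SKIN_MAX, C3, C6)         # skin -> pitch
        pitch += mapValue(heart, HEART_MIN, HEART_MAX, 0, 4)       # heart adds variety
        dynamic = mapValue(heart, HEART_MIN, HEART_MAX, 0, 127)    # heart -> dynamic
        notes.append((int(round(pitch)), int(round(dynamic))))

    print(notes)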

 


Sonifying images

This code sample (Ch. 7, p. 231) demonstrates how to sonify (generate music from) images. It sonifies the following image:


Loutraki Sunset (320 × 213 pixels)

A soundscape refers to a musical composition that incorporates sounds recorded from an environment and/or music that depicts the characteristics of that environment.

Sonification of image data can generate interesting musical artifacts. This is done by mapping visual aspects of an image into corresponding musical aspects. When sonifying, there are numerous ways to map pixels to sound. A rule of thumb is to find what inspires you about a particular image and explore how you might convert that to sound. So image sonification involves imagination and artistic exploration.

The image above has a very nice gradient that gets brighter from left to right. The sun is not shown but can be imagined. There is a clear horizontal division between the sea and sky. The mountains, on the left, provide a contrast to the color of the sea and sky. Finally, the image gradient is interrupted by the (somewhat noisy) visual layers and the sea in the bottom half of the image. Clearly, there is enough structural variety in the visual domain to provide interesting analogies in the musical domain. All this can be exploited by selecting certain rows (or columns) of pixels (as shown below in red), scanning the image left-to-right (or top-to-bottom), and converting individual pixels or areas of pixels to musical notes or passages.


Loutraki Sunset with lines indicating rows of pixels (0, 53, 106, 159, 212) being scanned and sonified

Sonification Design

In this case study, we use the following sonification rules:

  • Left-to-right pixel (column) position is mapped to time (actually, note start time);
  • Brightness (or luminosity) of a pixel (i.e., average RGB value) is mapped to pitch (the brighter the pixel, the higher the pitch);
  • Redness of a pixel (R value) is mapped to duration (the redder the pixel, the longer the note); and
  • Blueness of a pixel (B value) is mapped to dynamic (the bluer the pixel, the louder the note).

Using the same sonification scheme with other images will likely generate interesting results. For instance, you could select a new image with this scheme in mind. Or you could create/modify an image (e.g., via Photoshop) with the particular sonification scheme in mind.

However, the most appropriate approach is to pick an image and then design a set of sonification rules that matches its features. The image choice and the sonification rules are intimately connected.

Here is the program:

And here is the music generated from this program:
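For readers who want to experiment outside the book’s environment, here is a minimal sketch of the four mapping rules above. It is not the book’s program: it assumes the Pillow imaging library (PIL) for pixel access, uses a hypothetical filename, and simply prints the note parameters it computes rather than playing them; the pitch, duration, and dynamic ranges are illustrative.

    from PIL import Image     # assumes the Pillow library is installed

    def mapValue(value, minIn, maxIn, minOut, maxOut):
        # plain-Python stand-in for the library's mapValue()
        return minOut + (value - minIn) / float(maxIn - minIn) * (maxOut - minOut)

    image = Image.open("loutrakiSunset.jpg").convert("RGB")   # hypothetical filename
    width, height = image.size

    rowsToScan = [0, 53, 106, 159, 212]     # the rows marked in the figure above

    for row in rowsToScan:
        for col in range(width):
            r, g, b = image.getpixel((col, row))

            startTime = col * 0.1                                     # rule 1: column -> start time
            brightness = (r + g + b) / 3.0
            pitch = int(round(mapValue(brightness, 0, 255, 36, 96)))  # rule 2: brighter -> higher
            duration = mapValue(r, 0, 255, 0.25, 2.0)                 # rule 3: redder -> longer
            dynamic = int(round(mapValue(b, 0, 255, 0, 127)))         # rule 4: bluer -> louder

            print(startTime, pitch, duration, dynamic)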