A concept project using audio impressions from last.fm radioplays to generate meaningful visualizations of song performance across different listener groups.
Many promotional and advertising programs for musicians provide great opportunities for concepting and developing new metrics and visualizations that are based on insights from aggregations of community data. In alignment with the goals of music advertising applications, it is especially interesting to leverage aggregations of unbiased audio impressions, and use this “attention data” to quickly understand how certain songs may perform in front of specific target audiences. Access to these kinds of analytics will help artists concentrate their time and money in the right places when promoting their music.
Last.fm provides an interesting, if limited, service for generating and tracking audio impressions. Repurposing their existing profiling metrics, Last.fm’s new Powerplay campaigns allow artists to buy “airtime” ($25 for 100 impressions) on the last.fm radio stations of other artists and receive limited statistical feedback on how their music performs in front of unbiased listeners. Similar to Pandora, these radio stations play the promoted artist alongside related artists on an ongoing rotation.
Listeners of a last.fm radio station can perform one or more of four trackable activities per song. Each provides insight into a song’s performance:
• They “play the entire track”, or
• they “skip” the track before it finishes.
• They tag that they “love” this track (and wish to hear it more often), or
• they “ban” this track permanently from their listening rotation.
The goal of Powerplay campaigns is simple: how well will a song be received by audiences with certain expectations of the music they are about to hear? How well does my song fit with this family of music?
While this data has potential to be insightful, unfortunately, last.fm does a poor job of visualizing an artist’s performance across various types of listeners.
The goal of this research is to use Powerplay stats to produce new metrics and visualizations that make this data more meaningful. Using music from my band BASECAMP, I began by running ten Powerplay campaigns (two songs against five artists: Iron and Wine, Elliott Smith, U2, Jeff Buckley and Yo La Tengo). The matrix of our Powerplay campaigns looks like this:
Related artists (columns): Iron and Wine | Elliott Smith | U2 | Jeff Buckley | Yo La Tengo
Songs (rows): “Bright Bright Red and Orange” | “Fairplay”
Our hunch was that both of these songs shared some affinity with each of these artists. To learn more about this affinity, each of the two songs (rows) was set to be played 100 times on the radio station for each of these artists (columns). To date, some of these campaigns have not ended, but I have collected a good amount of activity data on the full plays, skips, “loves” and “bans” each campaign received. This raw data is essentially where last.fm stops providing feedback. The goal of this research project was to arrange this data in ways that could make it more meaningful.
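The campaign matrix above can be sketched as a simple data structure: one record per (song, station) pair, each holding the four trackable activity counts. This is a minimal sketch, not last.fm’s actual data format; the zeroed counts are placeholders, not real campaign figures.

```python
# One Powerplay campaign = one (song, station) pair with four activity counts.
songs = ["Bright Bright Red and Orange", "Fairplay"]
stations = ["Iron and Wine", "Elliott Smith", "U2", "Jeff Buckley", "Yo La Tengo"]

# Counts are placeholders here; real values would come from campaign feedback.
campaigns = {
    (song, station): {"full_plays": 0, "skips": 0, "loves": 0, "bans": 0}
    for song in songs
    for station in stations
}

print(len(campaigns))  # 2 songs x 5 stations = 10 campaigns
```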
Metrics and Visualizations
I then arranged our data (to date) using two custom metrics:
1) Play Rate: Dividing the number of “full plays” by the total number of plays yields a percentage that shows how well each song did. Play rates on my campaigns ranged from 83% to 99%. These play rates can then be plotted on a stacked line chart to provide insight into the performance of each song in different contexts.
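The play-rate calculation above is a one-liner; a minimal sketch, with hypothetical counts used only for illustration:

```python
def play_rate(full_plays: int, total_plays: int) -> float:
    """Percentage of impressions in which the listener heard the whole track."""
    return 100.0 * full_plays / total_plays

# Hypothetical counts, not real campaign numbers:
print(play_rate(91, 100))  # → 91.0
```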
2) Quality Score: Next I focused on the relationship between “loves” and “bans”. For each campaign, I subtracted bans from loves to establish a quality score: the number of “loves” minus the number of “bans”. For example, one of our songs, “Fairplay”, received 5 “loves” and 3 “bans” when played in front of Jeff Buckley fans, resulting in a quality score of 2. The other song, “Bright Bright Red and Orange”, received 4 “loves” and 3 “bans” from the same group, for a quality score of 1. These metrics become even more meaningful when visualized.
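The quality score is likewise a simple difference; this sketch uses the Jeff Buckley station figures from the write-up:

```python
def quality_score(loves: int, bans: int) -> int:
    """'Loves' minus 'bans' for one campaign."""
    return loves - bans

# Figures from the Jeff Buckley station campaigns:
print(quality_score(5, 3))  # "Fairplay" → 2
print(quality_score(4, 3))  # "Bright Bright Red and Orange" → 1
```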
It’s logical to expect that certain songs will evoke stronger positive or negative reactions in a fanbase. For example, what’s interesting (thanks Jim) in the charts shown above is the apparent disparity between “full plays” and “quality score” for the campaign promoting “Fairplay” to the U2 radio station. While “full plays” shows the general tolerance of a song, “quality score” shows how many people “love” the song enough to keep it in their rotation. To me, this makes “loves” the most valuable metric, because it’s the best indication of how many listeners will not just like my song, but like it enough to save it (and maybe buy it!).
The goal of this project was to think about better ways to visualize the attention data collected from last.fm audio impressions. As a next step, I would like to see how a number of songs in an artist’s catalog might perform across a larger number of last.fm genres. Attention data is a valuable metric for any band that doesn’t have time or money to lose on promoting the wrong songs to the wrong markets. It’s our job to help musicians understand this data in meaningful ways, so they can become more savvy and intelligent about how and where their music might best fit into the market.