Prepare Your Data For Analysis: Load Data With Spoon

Note: The instructions described here were learned and utilized while working on the Clouds & Concerts project at the University of Oslo, in close collaboration with Xin Jian, Johannes Bjelland and Pål Roe Sundsøy, under the supervision of Arnt Maasø. The credit for methodology and SQL ninja tricks therefore belongs to all members of the fantastic stream team. Inconveniences or mistakes are solely my own.

Technical setup

We chose Amazon as the platform for our data analysis. Amazon Web Services provides virtual computers, databases and storage in the cloud. All services are scalable, so the computer used for data loading and manipulation can be left dormant when idle, and the database can be scaled up when doing complex and data-intensive operations.

DSV files – choosing a delimiter

An excerpt from a CSV file; note the tab character between the occurrence’s variables and the line break between occurrences.

When moving large database files between servers in different organisations, the standard way of exchanging data is through flat text files. The flat text files may have a structure, e.g. XML or JSON, but in the case of working with relational databases, where all entries, or rows, have the same number of attributes, or columns, DSV is a convenient format. DSV is the abbreviation for delimiter-separated values, a format where each attribute and each entry are separated with an assigned delimiter token. The most frequent delimiters are commas between attributes and newlines between entries. This format is known as CSV.

When choosing a delimiter, it is important to find a character that is not found in the data values. We chose to use tab as the delimiter. To save space and transfer cost the files should be zipped (NB: the .zip container only supports files up to 4GB. If you have any larger files, use gzip or .tar.gz).
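As an illustration, here is a minimal Python sketch (my addition, not part of the original workflow) that writes a tab-delimited dump and gzips it afterwards; file names and rows are made-up placeholders:

[sourcecode language="python"]
#!/usr/bin/python
# A minimal sketch: write a tab-delimited DSV file with Python's csv module,
# then compress it with gzip, since the zip container cannot hold files
# larger than 4GB. File names and rows are placeholders.

import csv, gzip, shutil

rows = [['1', 'first row', '2013-08-07 19:00'],
        ['2', 'second row', '2013-08-08 21:30']]

with open('dump.tsv', 'wb') as f:
    writer = csv.writer(f, delimiter='\t', lineterminator='\n')
    writer.writerows(rows)

# Compress the finished dump before transfer
with open('dump.tsv', 'rb') as f_in, gzip.open('dump.tsv.gz', 'wb') as f_out:
    shutil.copyfileobj(f_in, f_out)
[/sourcecode]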

Data delivery: S3 bucket

Amazon’s Simple Storage Service (S3) is an easy way to store and retrieve data, either by using a client or through the web interface accessible from the AWS console. All kinds of files may be uploaded to S3, including zipped or tar.gz-ed files. Files can easily be uploaded to and downloaded from S3 by those who have access, and the web uploader supports large files. You don’t have to allocate storage space in advance either, since this is done on the fly.
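If you prefer to script the transfer instead of using the web uploader, something like the following sketch works, assuming the Python boto library and your own AWS credentials; the bucket and key names are placeholders:

[sourcecode language="python"]
#!/usr/bin/python
# A minimal sketch, assuming the boto library and valid AWS credentials:
# upload a compressed dump to an S3 bucket. Names are placeholders.

import boto

conn = boto.connect_s3('YOUR_ACCESS_KEY', 'YOUR_SECRET_KEY')
bucket = conn.get_bucket('data-delivery-bucket')

key = bucket.new_key('dumps/dump.tsv.gz')
key.set_contents_from_filename('dump.tsv.gz')  # uploads the local file
[/sourcecode]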

Using Pentaho Spoon for loading

The software company Pentaho provides good tools for working with data analysis, and part of their suite is the ETL tool (Extract, Transform, Load) Spoon. This also comes in a community edition which can be downloaded from their homepage. The tool is built in Java, so it can be run on different platforms. Once installed, use the terminal to navigate into the ‘data-integration’ folder, then set execution rights and run ‘spoon.sh’. This should open the Spoon ETL tool.

The Pentaho Spoon workbench lets you transfer data between “modules”. In this picture a CSV input module is passing data to a table output module.

In Pentaho you can make a transformation for each table you need to transfer; alternatively you can join several transformations together in jobs. To do a basic transfer of data from CSV, or other delimited files, to a database server you will need two modules: CSV file input and Table output.

Set up a super-easy transformation to load CSV to a table

In Pentaho, the convenient CSV file input module can be used for reading data from delimited files. 1: Use the file finder to select the file, and set the delimiter and enclosure (e.g. if the strings are prepended and appended by quote marks). If your file dump has the header row present, tick this option. 2: The CSV file input module can try to determine the format of the various fields based on the data in the files. Use the ‘Get Fields’ option, and select a good sample size to determine the format.

The Pentaho CSV input options let you define the data type of each field; it can also detect this automatically.
Loading tip: represent values as strings.

If I have a file with a high risk of data errors, for example a file where the columns overflow, I usually set the fields to text, using the functionality where I can define the field data type (3), and make sure the length is large enough to encompass the data. This lets us do the changes and alterations of data format on the database server once the data is loaded. Most data can be represented as a string: “12354”, “182,54” and “01/08/1986 00:00:00,000000” can all be represented as strings, but the first example can also be an integer, the second a real or float, and the third a timestamp. We can parse the data once we have them loaded into our SQL database. You can adjust the type in the CSV file input module.
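To illustrate the point in Python rather than in the SQL we would run on the server, the three example values above all survive as strings and can be parsed into richer types afterwards (assuming a day-first date format):

[sourcecode language="python"]
#!/usr/bin/python
# A rough illustration: every value loads safely as a string, and the
# parsing into richer types can happen after loading.

from datetime import datetime

parsed_int = int('12354')
parsed_float = float('182,54'.replace(',', '.'))  # decimal comma to decimal point
parsed_ts = datetime.strptime('01/08/1986 00:00:00,000000', '%d/%m/%Y %H:%M:%S,%f')

print parsed_int, parsed_float, parsed_ts
[/sourcecode]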

Use Table output to write the read rows to the SQL server

The other Pentaho module needed for this basic transformation is the Table output. Once Pentaho is configured with a valid database configuration, this module can access the database directly so you can select which table to write the data to. First drag a line from the CSV file input module to the Table output to create the data connection. Then select the output database/target schema and target table. If you have set up the table structure through a DDL (Data Definition Language – the structure of the tables) you can map the fields from the CSV file input. In the database field tab you first select ‘Get fields’ to fetch the fields found or created in the CSV input module. Then select either ‘Enter field mapping’ if you have the full DDL loaded on the database, or ‘SQL’ to create the DDL followed by ‘Enter field mapping’ (see tip below). Once the table is connected and the fields from the input data are mapped to the fields of the database, you are ready to run the transformation. Return to the workbench and press the play symbol at the top of the transformation tab.

Table loading tip: Let Pentaho deal with DDL
Use Pentaho to help you fill out the table structure

Once you have defined a connection by drawing a line between the CSV file input and the Table output, you can load the fields as they are stored in the flat CSV file using the ‘Get fields’ button (1). If you already have a defined table definition you can go straight to the mapping (3), but if you don’t have a DDL, or you want the database table structure to follow the field definition you declared in the CSV file input, you can use the SQL button (2) to get Pentaho to create the DDL and execute it on the database. To be able to do this you first have to create a table so that the ‘target table’ field can be filled out. The trick is knowing that you only have to fill in one variable when you create the target table, e.g. execute the command “CREATE TABLE ALMOST_EMPTY_TABLE (ID_COLUMN INTEGER)” on the database, and let Pentaho deal with creating the rest. After running the SQL, you can map the fields (3) and then run the transformation.

Validate that the data is loaded into the database

Check the number of loaded rows

In Pentaho Spoon, the running transformation has a step matrix where you can monitor the progress. During the load, and upon completion, the total number of rows is shown here. To be sure that all data is loaded, or to get a number for how many rows were dropped, you can read this from the step matrix, but also double-check the number of lines in the CSV file against the number of rows in the database table. The number of lines in the CSV file can be found by running the command “wc -l filename.csv” in the folder where the CSV files are stored (if the file has a header, subtract one from the result), and compared with the number of rows in the loaded table using “SELECT count(*) FROM tablename”.
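If you want to script this comparison, here is a minimal sketch assuming a PostgreSQL target and the psycopg2 driver (any database driver will do); connection details and names are placeholders:

[sourcecode language="python"]
#!/usr/bin/python
# A minimal sketch for comparing row counts, assuming a PostgreSQL target
# and the psycopg2 driver. Connection details and names are placeholders.

import subprocess, psycopg2

# Count lines in the flat file (subtract one for the header row, if any)
wc_out = subprocess.check_output(['wc', '-l', 'filename.csv'])
file_rows = int(wc_out.split()[0]) - 1

conn = psycopg2.connect(host='dbhost', dbname='analysis', user='loader', password='secret')
cur = conn.cursor()
cur.execute('SELECT count(*) FROM tablename')
table_rows = cur.fetchone()[0]

print 'file: %d rows, table: %d rows' % (file_rows, table_rows)
[/sourcecode]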

Your data is now loaded into the database.

Screen scraping the Øya festival programme

The research project Sky & Scene (Clouds & Concerts), where I work, looks, among many other things, at streaming numbers from WiMP before, during and after the Øya festival. To do this we need a list of which artists are playing, which day they play and at what time of day. Before the numbers can be analysed, these data must be available in Excel sheets, in CSV format and in the database where the streaming numbers live. The data must be retrieved and structured in a specific format.

A good starting point is to collect the data in CSV format. CSV stands for comma-separated values and is a list where the values for one occurrence are collected on one line, and where the occurrence’s data attributes, also called variables, are separated by – you guessed it – commas. A similar format can be found in Excel, where one occurrence sits on one line and the occurrence’s variables are given in columns.

Finding the data

OK, enough about formatting. Where can we find the data? A natural starting point is the festival’s own website. On oyafestivalen.com (the festival’s English website) we find a menu item called “program”, and here we find the programme.

Chrome’s developer tools can be found in the menu. This is a very useful tool for both web development and screen scraping.

For screen scraping, the visual presentation of the page is of little help, so we have to look at the HTML source. In Google Chrome you find this by right-clicking in the browser window and then clicking “view page source”; there we can find the HTML code. Alternatively, you can copy this link into your Chrome browser: “view-source:http://oyafestivalen.com/program/#all”

If you looked at the source code, you will see that the list of artists is missing. Why? Because the list is quite long and is used by several sources, it is not loaded by the program page itself. Instead it is loaded asynchronously with AJAX (Asynchronous JavaScript and XML). Open the Chrome Developer Tools from the menu and go to the Network tab. Reload the page by clicking the circle with an arrow to the left of the URL field.

Here you can see that a file called getArtists.php has been loaded (picture 1), and that this file is not loaded as part of our original request to the web server, but is instead loaded via JavaScript. If we click to see what this URL delivers, we can see that the artist list comes from here. You can find the URL of the page by right-clicking the name getArtists.php and selecting “copy link address”.

Once you have the URL (http://oyafestivalen.com/wp-content/themes/oya13_new/includes/ajax/program/getArtists.php) you can copy it into your browser window. You should now get a list without any particular formatting, looking roughly like this:

The Øya festival’s artist list is fetched from the server asynchronously to save time when the main page loads. Now we have found the data we need.

OK, now we have found the data we need. We just need to find a good way of extracting it from the page. Let’s take a look at the source behind the concert list. Here we find both the data and the structure we need:

The data we need, but with a different formatting. All that remains now is fetching and reformatting it.

Here we can see that:

  1. At the outermost level we have a div tag with the class “table title”. This introduces the caption shown above the column in the rendered view.
  2. We have an unordered list (ul tag) with the class “table”.
  3. The unordered list has several children, placed in list elements (li). These make use of HTML5 data attributes, but we will not be using those this time.
  4. Each list element has a span element with the class “name”, whose content is the name of the artist.
  5. The list element also has a class “scene”, with the stage name as its content.
  6. Finally, the list element also has a “date” class containing the first three letters of the day, three non-breaking spaces (HTML syntax: &nbsp;) and the concert’s start time.

Here we find all the data, and the formatting is the same for all the elements in the list with the class “table”.

Now that we have found the data source, we can start extracting the data for further use.

Screen scraping with Ruby and Nokogiri

We have now found the source, so we can turn to Ruby and the Nokogiri library (Ruby term: gem).

Before we start fetching the data, we prepare the script that will fetch the data from the festival’s website. We include nokogiri, which will help us parse the data source. We also load the csv library, to write the output files, and open-uri, so we can read the URI source as a file.

[sourcecode language="ruby"]
#!/usr/bin/ruby
# -*- encoding : utf-8 -*-

require 'nokogiri'
require 'open-uri'
require 'csv'
[/sourcecode]

The Concert class

To store and manipulate the data we create a class holding the four values we need: artist, scene, date and datetime. The source gives us the first three values, and we construct datetime from date.

In the class we declare all the variables we will use with attr_accessor. This makes Ruby generate get and set methods for all the variables listed after the function, so we can freely read and set the variables on instances of the class.

We write an initialize method, a constructor, which is called when an instance is created. Since we already fetch artist, scene and date from the data source, we call the constructor with these variables so they are set. To translate date into datetime, we create a dictionary with the days and the corresponding ISO date format.

Note that when the instance variable @date is set, some formatting is done. The source gives us the date in a slightly different format, so we remove the non-breaking spaces, replace the full stop with a colon, and make sure there is a space between the three letters denoting the day and the time. Once this is done, we call a method that generates the datetime value based on the date value. We use @ in front of the variable name to mark it as an instance variable.

The add_datetime method looks up the day letters in date_dict and swaps them for an ISO date, then extracts the time from the @date variable and interpolates the two values into a datetime string.

The last method we create, to_arr, takes all the instance variables and returns them as an array. Since the CSV library we included earlier can create a CSV line from an array, this is a handy way of getting the values out of the object.

[sourcecode language="ruby"]
class Concert
  attr_accessor :artist, :scene, :date, :datetime

  def initialize(artist, scene, date)
    @date_dict = {'wed' => '2013-08-07', 'thu' => '2013-08-08', 'fri' => '2013-08-09', 'sat' => '2013-08-10'}
    @artist = artist.strip
    @scene = scene.strip
    @date = date.gsub(/\u00a0/, '').gsub('.', ':').gsub(/([a-zA-Z]{3})(.)/, '\1 \2').strip
    self.add_datetime
  end

  def to_arr
    return [self.artist, self.scene, self.date, self.datetime]
  end

  def add_datetime
    @datetime = "#{@date_dict[@date[0,3].downcase]} #{@date[4..9]}"
  end
end
[/sourcecode]

Reading the document, extracting the data and creating the objects

Now that we have a data structure in which we can store the information, we can start fetching the information from the Internet. First of all we create an empty dictionary in which to store our concert objects as we create them.

We use Nokogiri’s HTML class and store the result in the doc variable. We pass it a text stream fetched from the URL; in other words, we send Nokogiri the same text we got from the getArtists.php source.

Nokogiri has an excellent method called css. This method takes a CSS (Cascading Style Sheets) selector and finds the matching elements in the DOM (Document Object Model) that Nokogiri holds. We want to iterate over all the “.table li” nodes (all the li nodes under the table class), and we do this simply with the .each method.

For each “.table li” we iterate over, we extract the contents of the elements with the classes .name, .scene and .date, and create an instance of the Concert class. The last thing we do in each iteration is to store the object in our concerts dictionary, with the artist as key.

[sourcecode language="ruby"]
concerts = {}

doc = Nokogiri::HTML(open('http://oyafestivalen.com/wp-content/themes/oya13_new/includes/ajax/program/getArtists.php'))
doc.css('.table li').each do |el|
  a = Concert.new(el.css('.name a').first.content,
                  el.css('.scene').first.content,
                  el.css('.date').first.content)
  concerts[a.artist] = a
end
[/sourcecode]

Printing the objects as CSV

Once we have created all the objects, we want to write their variables to a file. We do this by opening a file called output.csv with write access. We then iterate through all the objects, using the key in the k variable to fetch each object from our concerts dictionary. To get only the Øya festival’s own concerts (not Klubb-Øya), we check that the concert takes place on one of the stages “Enga”, “Klubben”, “Sjøsiden” or “Vika” (Sjøsiden has the wrong format here, which we correct later in Excel). For each object whose scene is among the Øya stages, a line is written to the CSV file from an array of values. The array comes from the to_arr method we wrote in the Concert class.

[sourcecode language="ruby"]
CSV.open("output.csv", "wb") do |csv|
  concerts.each do |k,v|
    csv << concerts[k].to_arr if ['Enga', 'Klubben', 'Sjøsiden', 'Vika'].include? concerts[k].scene
  end
end
[/sourcecode]

There you go. You should now have a CSV file with all the Øya artists, which you can either import into a database or open in Excel.

The whole script:

[sourcecode language="ruby"]
#!/usr/bin/ruby
# -*- encoding : utf-8 -*-

require 'nokogiri'
require 'open-uri'
require 'csv'

class Concert
  attr_accessor :artist, :scene, :date, :datetime

  def initialize(artist, scene, date)
    @date_dict = {'wed' => '2013-08-07', 'thu' => '2013-08-08', 'fri' => '2013-08-09', 'sat' => '2013-08-10'}
    @artist = artist.strip
    @scene = scene.strip
    @date = date.gsub(/\u00a0/, '').gsub('.', ':').gsub(/([a-zA-Z]{3})(.)/, '\1 \2').strip
    self.add_datetime
  end

  def to_arr
    return [self.artist, self.scene, self.date, self.datetime]
  end

  def add_datetime
    @datetime = "#{@date_dict[@date[0,3].downcase]} #{@date[4..9]}"
  end
end

concerts = {}

doc = Nokogiri::HTML(open('http://oyafestivalen.com/wp-content/themes/oya13_new/includes/ajax/program/getArtists.php'))
doc.css('.table li').each do |el|
  a = Concert.new(el.css('.name a').first.content,
                  el.css('.scene').first.content,
                  el.css('.date').first.content)
  concerts[a.artist] = a
end

CSV.open("output.csv", "wb") do |csv|
  concerts.each do |k,v|
    csv << concerts[k].to_arr if ['Enga', 'Klubben', 'Sjøsiden', 'Vika'].include? concerts[k].scene
  end
end
[/sourcecode]

Work programmatically with Google Spreadsheets

Some time back I authored a script which reads through a CSV-formatted list and, based on the artists’ names, tries to determine the nationality of each artist by querying the MusicBrainz search engine and parsing the XML-structured result.

The script worked, finding and returning about 80% of the artists, and around 80% of these again pointed to the right artist. What if the alteration could be done in the document itself? For the CSV-based script the artists had to be extracted from the database, then parsed and then put back into the document. With this attempt we can skip the middle step, and let the script run simultaneously with the data being available to view for the users.

If you haven’t already used Google Docs, you should consider it, as it is a very convenient way of working with documents, especially where there are multiple users. Through Google Docs you can collaborate on documents, and since they are stored in the cloud all users will instantly have the latest version. Another advantage is that when you are working on the document simultaneously, each user is editing the same document, so merging different versions is not a problem.

A great tool for combining Google Spreadsheets with the Python programming environment is the gspread module. After installing this package you only need to import it, and then with very few lines of code you can retrieve and update information in the spreadsheet cells.

The specifics for doing this task are pretty much these few lines: import the package, log in, find the correct document and do whatever you need to. (This excerpt won’t work on its own; check the whole script below. NB: mind the indentation, as it may not be displayed correctly in the browser.)

[sourcecode language="python"]
import gspread

G_USERNAME = 'your@gmail.com'
G_PASSWORD = 'yourPassword'
G_IDENTIFIER = 'document_identifier_checkdocument_url_in_your_browser'

# Connecting to the data source
gc = gspread.login(G_USERNAME, G_PASSWORD)
sht1 = gc.open_by_key(G_IDENTIFIER)
worksheet = sht1.get_worksheet(0)

for a in range(len(get_val)):
    try:
        name = worksheet.acell(get_val[a]).value
        res = getCountry(name)
        if res:
            print res
            country, last_id, name = res
            worksheet.update_acell(set_name[a], name)
            worksheet.update_acell(set_country[a], country)
    except Exception as e:
        print e
[/sourcecode]

Above are the lines related to connecting Python to Google Docs; below you can see the whole script and how the method I mentioned in an earlier post is used in this setting.

[sourcecode language="python"]
#!/usr/bin/python
# -*- coding:utf-8 -*-

"""
Clouds & Concerts – 2012
Ola Loevholm

Initialized from the command line. Runs through the Google Docs spreadsheet with the top 5000 artists,
runs the parsing query against the MusicBrainz search engine, then enters the country and search string
(for validation) into the Google Docs spreadsheet.

"""

G_USERNAME = 'your@gmail.com'
G_PASSWORD = 'yourPassword'
G_IDENTIFIER = 'document_identifier_checkdocument_url_in_your_browser'

import sys, urllib, string, csv, time
import xml.etree.ElementTree as ET
import gspread

# Loads a dictionary with ISO 3166-1 abbreviations and codes
COUNTRIES = {"AF":"AFGHANISTAN","AX":"ÅLAND ISLANDS","AL":"ALBANIA","DZ":"ALGERIA","AS":"AMERICAN SAMOA","AD":"ANDORRA","AO":"ANGOLA","AI":"ANGUILLA","AQ":"ANTARCTICA","AG":"ANTIGUA AND BARBUDA","AR":"ARGENTINA","AM":"ARMENIA","AW":"ARUBA","AU":"AUSTRALIA","AT":"AUSTRIA","AZ":"AZERBAIJAN","BS":"BAHAMAS","BH":"BAHRAIN","BD":"BANGLADESH","BB":"BARBADOS","BY":"BELARUS","BE":"BELGIUM","BZ":"BELIZE","BJ":"BENIN","BM":"BERMUDA","BT":"BHUTAN","BO":"BOLIVIA, PLURINATIONAL STATE OF","BQ":"BONAIRE, SINT EUSTATIUS AND SABA","BA":"BOSNIA AND HERZEGOVINA","BW":"BOTSWANA","BV":"BOUVET ISLAND","BR":"BRAZIL","IO":"BRITISH INDIAN OCEAN TERRITORY","BN":"BRUNEI DARUSSALAM","BG":"BULGARIA","BF":"BURKINA FASO","BI":"BURUNDI","KH":"CAMBODIA","CM":"CAMEROON","CA":"CANADA","CV":"CAPE VERDE","KY":"CAYMAN ISLANDS","CF":"CENTRAL AFRICAN REPUBLIC","TD":"CHAD","CL":"CHILE","CN":"CHINA","CX":"CHRISTMAS ISLAND",
"CC":"COCOS (KEELING) ISLANDS","CO":"COLOMBIA","KM":"COMOROS","CG":"CONGO","CD":"CONGO, THE DEMOCRATIC REPUBLIC OF THE","CK":"COOK ISLANDS","CR":"COSTA RICA","CI":"CÔTE D’IVOIRE","HR":"CROATIA","CU":"CUBA","CW":"CURAÇAO","CY":"CYPRUS","CZ":"CZECH REPUBLIC","DK":"DENMARK","DJ":"DJIBOUTI","DM":"DOMINICA","DO":"DOMINICAN REPUBLIC","EC":"ECUADOR","EG":"EGYPT","SV":"EL SALVADOR","GQ":"EQUATORIAL GUINEA","ER":"ERITREA","EE":"ESTONIA","ET":"ETHIOPIA","FK":"FALKLAND ISLANDS (MALVINAS)","FO":"FAROE ISLANDS","FJ":"FIJI","FI":"FINLAND","FR":"FRANCE","GF":"FRENCH GUIANA","PF":"FRENCH POLYNESIA","TF":"FRENCH SOUTHERN TERRITORIES","GA":"GABON","GM":"GAMBIA","GE":"GEORGIA","DE":"GERMANY","GH":"GHANA","GI":"GIBRALTAR","GR":"GREECE","GL":"GREENLAND","GD":"GRENADA","GP":"GUADELOUPE","GU":"GUAM","GT":"GUATEMALA","GG":"GUERNSEY","GN":"GUINEA","GW":"GUINEA-BISSAU","GY":"GUYANA","HT":"HAITI","HM":"HEARD ISLAND AND MCDONALD ISLANDS",
"VA":"HOLY SEE (VATICAN CITY STATE)","HN":"HONDURAS","HK":"HONG KONG","HU":"HUNGARY","IS":"ICELAND","IN":"INDIA","ID":"INDONESIA","IR":"IRAN, ISLAMIC REPUBLIC OF","IQ":"IRAQ","IE":"IRELAND","IM":"ISLE OF MAN","IL":"ISRAEL","IT":"ITALY","JM":"JAMAICA","JP":"JAPAN","JE":"JERSEY","JO":"JORDAN","KZ":"KAZAKHSTAN","KE":"KENYA","KI":"KIRIBATI","KP":"KOREA, DEMOCRATIC PEOPLE’S REPUBLIC OF","KR":"KOREA, REPUBLIC OF","KW":"KUWAIT","KG":"KYRGYZSTAN","LA":"LAO PEOPLE’S DEMOCRATIC REPUBLIC","LV":"LATVIA","LB":"LEBANON","LS":"LESOTHO","LR":"LIBERIA","LY":"LIBYA","LI":"LIECHTENSTEIN","LT":"LITHUANIA","LU":"LUXEMBOURG","MO":"MACAO","MK":"MACEDONIA, THE FORMER YUGOSLAV REPUBLIC OF","MG":"MADAGASCAR","MW":"MALAWI","MY":"MALAYSIA","MV":"MALDIVES","ML":"MALI","MT":"MALTA","MH":"MARSHALL ISLANDS","MQ":"MARTINIQUE","MR":"MAURITANIA","MU":"MAURITIUS","YT":"MAYOTTE","MX":"MEXICO","FM":"MICRONESIA, FEDERATED STATES OF",
"MD":"MOLDOVA, REPUBLIC OF","MC":"MONACO","MN":"MONGOLIA","ME":"MONTENEGRO","MS":"MONTSERRAT","MA":"MOROCCO","MZ":"MOZAMBIQUE","MM":"MYANMAR","NA":"NAMIBIA","NR":"NAURU","NP":"NEPAL","NL":"NETHERLANDS","NC":"NEW CALEDONIA","NZ":"NEW ZEALAND","NI":"NICARAGUA","NE":"NIGER","NG":"NIGERIA","NU":"NIUE","NF":"NORFOLK ISLAND","MP":"NORTHERN MARIANA ISLANDS","NO":"NORWAY","OM":"OMAN","PK":"PAKISTAN","PW":"PALAU","PS":"PALESTINIAN TERRITORY, OCCUPIED","PA":"PANAMA","PG":"PAPUA NEW GUINEA","PY":"PARAGUAY","PE":"PERU","PH":"PHILIPPINES","PN":"PITCAIRN","PL":"POLAND","PT":"PORTUGAL","PR":"PUERTO RICO","QA":"QATAR","RE":"RÉUNION","RO":"ROMANIA","RU":"RUSSIAN FEDERATION","RW":"RWANDA","BL":"SAINT BARTHÉLEMY","SH":"SAINT HELENA, ASCENSION AND TRISTAN DA CUNHA","KN":"SAINT KITTS AND NEVIS","LC":"SAINT LUCIA","MF":"SAINT MARTIN (FRENCH PART)","PM":"SAINT PIERRE AND MIQUELON","VC":"SAINT VINCENT AND THE GRENADINES",
"WS":"SAMOA","SM":"SAN MARINO","ST":"SAO TOME AND PRINCIPE","SA":"SAUDI ARABIA","SN":"SENEGAL","RS":"SERBIA","SC":"SEYCHELLES","SL":"SIERRA LEONE","SG":"SINGAPORE","SX":"SINT MAARTEN (DUTCH PART)","SK":"SLOVAKIA","SI":"SLOVENIA","SB":"SOLOMON ISLANDS","SO":"SOMALIA","ZA":"SOUTH AFRICA","GS":"SOUTH GEORGIA AND THE SOUTH SANDWICH ISLANDS","SS":"SOUTH SUDAN","ES":"SPAIN","LK":"SRI LANKA","SD":"SUDAN","SR":"SURINAME","SJ":"SVALBARD AND JAN MAYEN","SZ":"SWAZILAND","SE":"SWEDEN","CH":"SWITZERLAND","SY":"SYRIAN ARAB REPUBLIC","TW":"TAIWAN, PROVINCE OF CHINA","TJ":"TAJIKISTAN","TZ":"TANZANIA, UNITED REPUBLIC OF","TH":"THAILAND","TL":"TIMOR-LESTE","TG":"TOGO","TK":"TOKELAU","TO":"TONGA","TT":"TRINIDAD AND TOBAGO","TN":"TUNISIA","TR":"TURKEY","TM":"TURKMENISTAN","TC":"TURKS AND CAICOS ISLANDS","TV":"TUVALU","UG":"UGANDA","UA":"UKRAINE","AE":"UNITED ARAB EMIRATES","GB":"UNITED KINGDOM","US":"UNITED STATES",
"UM":"UNITED STATES MINOR OUTLYING ISLANDS","UY":"URUGUAY","UZ":"UZBEKISTAN","VU":"VANUATU","VE":"VENEZUELA, BOLIVARIAN REPUBLIC OF","VN":"VIET NAM","VG":"VIRGIN ISLANDS, BRITISH","VI":"VIRGIN ISLANDS, U.S.","WF":"WALLIS AND FUTUNA","EH":"WESTERN SAHARA","YE":"YEMEN","ZM":"ZAMBIA","ZW":"ZIMBABWE"}


# Connecting to the data source
gc = gspread.login(G_USERNAME, G_PASSWORD)
sht1 = gc.open_by_key(G_IDENTIFIER)
worksheet = sht1.get_worksheet(0)

# Iterates through the XML structure and removes the namespace, for easier navigation in getCountry()'s ElementTree.findall()
def remove_namespace(doc, namespace):
    """Remove namespace in the passed document in place."""
    ns = u'{%s}' % namespace
    nsl = len(ns)
    for elem in doc.getiterator():
        if elem.tag.startswith(ns):
            elem.tag = elem.tag[nsl:]
# getCountry – where the magic happens. Encodes the artist name into the URL, then queries the musicbrainz search engine,
# parses the XML answer and gets the name, id and country of the first returned element (the one with the highest weight).
# Returns the country name if a) the artist is found through the search engine and b) the artist has a country associated with the profile, otherwise returns False
def getCountry(name):
    name = urllib.quote_plus(name)
    BASE_URL = "http://musicbrainz.org/ws/2/artist/?query=%s&format=xml&method=advanced" % (name)
    print "Querying: %s" % (BASE_URL)
    try:
        search_input = urllib.urlopen(BASE_URL)
        # Checks whether the HTTP response code is 200 – if not, sleeps for 5 seconds and retries // Intended for 503 codes
        http_code = search_input.code
        if http_code != 200:
            # print "Could not access: %s \t Got HTTP Code: %s. 5 second cool-down" % (name, http_code)
            time.sleep(5)
            return getCountry(name)  # return the retry's result
    except Exception:
        print "GETTING_ERROR: Something went wrong while getting HTTP"
        return False
    #search_xml = search_input.read()
    #print search_xml
    try:
        tree = ET.parse(search_input)
        remove_namespace(tree, u'http://musicbrainz.org/ns/mmd-2.0#')
        feed = tree.getroot()
        elem = feed.findall("./artist-list/")
        #print elem[0].find('name').text
        #print elem[0].get('id')
    except Exception:
        print "PARSE_ERROR: Something went wrong while parsing HTTP"
        return False
    try:
        if elem[0].find('country') != None:
            # print COUNTRIES[elem[0].find('country').text]
            try:
                country = COUNTRIES[elem[0].find('country').text]
            except Exception:
                print "Could not find key in countrylist error"
                return False
            return [country, elem[0].get('id'), elem[0].find('name').text]
        else:
            print elem[0].find('name').text + " has no country associated\n"
            return False
    except (IndexError, ValueError):
        print "ERROR – COULD NOT GET DATA FROM %s\n" % (name)
        return False

# Build the cell references to read from and write to
get_val = []
set_name = []
set_country = []
for a in range(2, 5124):
    v = "B%s" % a
    sn = "H%s" % a
    sc = "G%s" % a
    get_val.append(v)
    set_name.append(sn)
    set_country.append(sc)

# Running through the values (the reference lists are zero-indexed, so iterate over their length)
for a in range(len(get_val)):
    try:
        name = worksheet.acell(get_val[a]).value
        res = getCountry(name)
        if res:
            print res
            country, last_id, name = res
            worksheet.update_acell(set_name[a], name)
            worksheet.update_acell(set_country[a], country)
    except Exception as e:
        print e

[/sourcecode]

Facebook pages as RSS

RSS is super-practical. Whether you use a laptop, a tablet or a mobile phone, aggregated sources are worth their weight in gold. How nice is it to be able to collect everything you want to read in one reader (e.g. Google Reader), or even have your tablet automatically updated with the latest stories from your favourite sources?

Here is a little recipe for how you can do this with Facebook pages you follow. The social feed does not always pick these up, and besides, it is nice to be able to keep updates for posterity.

The first thing you need is this URL string:

https://www.facebook.com/feeds/page.php?format=atom10&id=241453889260137

This fairly ordinary URL string is parsed by the web server as two arguments, [:format => "atom10", :id => "241453889260137"] (Ruby hash syntax), et voilà, you get an Atom-formatted RSS page back. All you do here is replace the id in the URL above with the id of the page you want to follow.

But what if there is no ID in the page URL on Facebook? As in the picture below, for example (taken from the page of Løvholm Digitale Medier – my fairly new sole proprietorship).

Screenshot: a Facebook page URL containing only the page name, with no numeric ID.

Here it gets a little more complicated. If you try to load the Atom RSS feed using this name, you will only get a page with an error message.

Our route to the goal goes through the fantastic Open Graph, the technology Facebook uses to organise its services and its world, and its API.

http://graph.facebook.com/lovdigmed

Here you can use the page name as the lookup key and get back a JSON-formatted text string with what you need to know.

Screenshot: the JSON response from the Graph API, which includes the page’s id field.

Now you only need to go back to the first URL we had and use the page’s ID (note: not the cover_id) to access the RSS feed.
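If you want to script the lookup, here is a minimal sketch in Python (my addition, not part of the original recipe) that fetches the page ID from the Graph API and builds the feed URL:

[sourcecode language="python"]
#!/usr/bin/python
# A minimal sketch: look up a Facebook page's numeric ID through the Open
# Graph API and build the Atom feed URL from it.

import urllib, json

page_name = 'lovdigmed'  # the page name from the URL
graph = json.load(urllib.urlopen('http://graph.facebook.com/%s' % page_name))
feed_url = 'https://www.facebook.com/feeds/page.php?format=atom10&id=%s' % graph['id']
print feed_url
[/sourcecode]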

Now you can read Facebook page updates in your favourite reader, along with everything else interesting you subscribe to.


Data-wrangling: try to find country based on artist name

At the Clouds & Concerts project at the University of Oslo we are working with really interesting topics, based on interesting empirical data. Through our collaboration with the Norwegian streaming service provider WiMP, we are, together with Telenor and WiMP, analysing a vast collection of data. More about the project’s data part, the ‘Clouds’ part of the project’s name, can be found on the project’s websites.

Artist and Country

One of the tasks at hand was to find out which country an artist came from, and in particular whether they came from Norway or not. One way of doing this is to manually go through each artist and use preexisting knowledge about music to determine their country of origin and, if stuck, use online services (aren’t we all mostly using Google as our initial source of wisdom). Another alternative is to use online services first and then use human preexisting knowledge to quality-assure the final result.

On the Internet, a vast number of sources can be found. However, if you want to get the data without too much fitting, testing, and nitty-gritty adaptation for every source, it is an advantage if there is a consistently structured repository you can tap from. Luckily, the MetaBrainz Foundation has a large repository of musical metadata known as MusicBrainz.

Below you find a script which should (partially) solve our problem by combining the data from MusicBrainz with data exported from our research data. That being said, this script is more a method than a finished product. It should be very easily adaptable, but it is an advantage if you know Python and how to handle CSV files. Codecademy has a good introduction to Python.

The core idea of the script is to take input with name and number of streams, and turn it into output with the name from the original data source, the number of streams from the original data source, as well as country of origin, MusicBrainz ID, and the name as parsed by the MusicBrainz search engine (for initial quality assurance).

To make things simple there is only one successful outcome: the name sent to the MusicBrainz search engine returns an answer, and that answer has a country associated with it. Be advised (that is also why I have marked the title with ‘try’) that the search engine may not return a similar result. For that reason we also print the name of the artist we find, so this can later be juxtaposed with the original name in the Excel spreadsheet (you are going to transform the CSV to Excel before reviewing, aren’t you? Google Refine is a good tool for this). Another problem is that popular cultural phenomena, common nouns and tribute bands (probably in that order, descending) share the same names. This is why a human is always needed, or a semantic absolute URI associated with each phenomenon. This leads me on to the last step before the code.

Other ways this could have been solved (let me know if you solve the problem in any of these ways)

The semantic way:

The data found in the MusicBrainz database is made available through a SPARQL endpoint named LinkedBrainz. If you know the right ontologies and are comfortable with performing triplestore queries, this is perhaps the most innovative and novel way to solve the problem.

The Virtual Machine Postgres way:

Instead of doing a query on the remote server, you can be a gentleman and download the server onto your own machine. If you have VirtualBox (if you don’t, you can download it for free) you can run the server locally. An image file with the complete MusicBrainz database can be found on their webpages.

The code:

Here is the code used to solve this task. It can also be cloned from the Clouds & Concerts GitHub page.

[sourcecode language="python"]
#!/usr/bin/python
# -*- coding:utf-8 -*-

"""
Clouds & Concerts – 2012
Ola Loevholm

Called from command line:
The script reads a file named "topp1000_artister.csv" consisting of a list of artists and then tries to find out which country each artist comes from based on the name.
The name is given in the second column of the CSV file.

Called as a module:
The method getCountry() takes an artist name and checks this with musicbrainz seach engine. Returns the country if a) artist is found through the search engine b) artist has a country associated to the profile

"""

import sys, urllib, string, csv, time
import xml.etree.ElementTree as ET

# Loads a dictionary with ISO 3166-1 abbreviations and countries
COUNTRIES = {"AF":"AFGHANISTAN","AX":"ÅLAND ISLANDS","AL":"ALBANIA","DZ":"ALGERIA","AS":"AMERICAN SAMOA","AD":"ANDORRA","AO":"ANGOLA","AI":"ANGUILLA","AQ":"ANTARCTICA","AG":"ANTIGUA AND BARBUDA","AR":"ARGENTINA","AM":"ARMENIA","AW":"ARUBA","AU":"AUSTRALIA","AT":"AUSTRIA","AZ":"AZERBAIJAN","BS":"BAHAMAS","BH":"BAHRAIN","BD":"BANGLADESH","BB":"BARBADOS","BY":"BELARUS","BE":"BELGIUM","BZ":"BELIZE","BJ":"BENIN","BM":"BERMUDA","BT":"BHUTAN","BO":"BOLIVIA, PLURINATIONAL STATE OF","BQ":"BONAIRE, SINT EUSTATIUS AND SABA","BA":"BOSNIA AND HERZEGOVINA","BW":"BOTSWANA","BV":"BOUVET ISLAND","BR":"BRAZIL","IO":"BRITISH INDIAN OCEAN TERRITORY","BN":"BRUNEI DARUSSALAM","BG":"BULGARIA","BF":"BURKINA FASO","BI":"BURUNDI","KH":"CAMBODIA","CM":"CAMEROON","CA":"CANADA","CV":"CAPE VERDE","KY":"CAYMAN ISLANDS","CF":"CENTRAL AFRICAN REPUBLIC","TD":"CHAD","CL":"CHILE","CN":"CHINA","CX":"CHRISTMAS ISLAND",
"CC":"COCOS (KEELING) ISLANDS","CO":"COLOMBIA","KM":"COMOROS","CG":"CONGO","CD":"CONGO, THE DEMOCRATIC REPUBLIC OF THE","CK":"COOK ISLANDS","CR":"COSTA RICA","CI":"CÔTE D’IVOIRE","HR":"CROATIA","CU":"CUBA","CW":"CURAÇAO","CY":"CYPRUS","CZ":"CZECH REPUBLIC","DK":"DENMARK","DJ":"DJIBOUTI","DM":"DOMINICA","DO":"DOMINICAN REPUBLIC","EC":"ECUADOR","EG":"EGYPT","SV":"EL SALVADOR","GQ":"EQUATORIAL GUINEA","ER":"ERITREA","EE":"ESTONIA","ET":"ETHIOPIA","FK":"FALKLAND ISLANDS (MALVINAS)","FO":"FAROE ISLANDS","FJ":"FIJI","FI":"FINLAND","FR":"FRANCE","GF":"FRENCH GUIANA","PF":"FRENCH POLYNESIA","TF":"FRENCH SOUTHERN TERRITORIES","GA":"GABON","GM":"GAMBIA","GE":"GEORGIA","DE":"GERMANY","GH":"GHANA","GI":"GIBRALTAR","GR":"GREECE","GL":"GREENLAND","GD":"GRENADA","GP":"GUADELOUPE","GU":"GUAM","GT":"GUATEMALA","GG":"GUERNSEY","GN":"GUINEA","GW":"GUINEA-BISSAU","GY":"GUYANA","HT":"HAITI","HM":"HEARD ISLAND AND MCDONALD ISLANDS",
"VA":"HOLY SEE (VATICAN CITY STATE)","HN":"HONDURAS","HK":"HONG KONG","HU":"HUNGARY","IS":"ICELAND","IN":"INDIA","ID":"INDONESIA","IR":"IRAN, ISLAMIC REPUBLIC OF","IQ":"IRAQ","IE":"IRELAND","IM":"ISLE OF MAN","IL":"ISRAEL","IT":"ITALY","JM":"JAMAICA","JP":"JAPAN","JE":"JERSEY","JO":"JORDAN","KZ":"KAZAKHSTAN","KE":"KENYA","KI":"KIRIBATI","KP":"KOREA, DEMOCRATIC PEOPLE’S REPUBLIC OF","KR":"KOREA, REPUBLIC OF","KW":"KUWAIT","KG":"KYRGYZSTAN","LA":"LAO PEOPLE’S DEMOCRATIC REPUBLIC","LV":"LATVIA","LB":"LEBANON","LS":"LESOTHO","LR":"LIBERIA","LY":"LIBYA","LI":"LIECHTENSTEIN","LT":"LITHUANIA","LU":"LUXEMBOURG","MO":"MACAO","MK":"MACEDONIA, THE FORMER YUGOSLAV REPUBLIC OF","MG":"MADAGASCAR","MW":"MALAWI","MY":"MALAYSIA","MV":"MALDIVES","ML":"MALI","MT":"MALTA","MH":"MARSHALL ISLANDS","MQ":"MARTINIQUE","MR":"MAURITANIA","MU":"MAURITIUS","YT":"MAYOTTE","MX":"MEXICO","FM":"MICRONESIA, FEDERATED STATES OF",
"MD":"MOLDOVA, REPUBLIC OF","MC":"MONACO","MN":"MONGOLIA","ME":"MONTENEGRO","MS":"MONTSERRAT","MA":"MOROCCO","MZ":"MOZAMBIQUE","MM":"MYANMAR","NA":"NAMIBIA","NR":"NAURU","NP":"NEPAL","NL":"NETHERLANDS","NC":"NEW CALEDONIA","NZ":"NEW ZEALAND","NI":"NICARAGUA","NE":"NIGER","NG":"NIGERIA","NU":"NIUE","NF":"NORFOLK ISLAND","MP":"NORTHERN MARIANA ISLANDS","NO":"NORWAY","OM":"OMAN","PK":"PAKISTAN","PW":"PALAU","PS":"PALESTINIAN TERRITORY, OCCUPIED","PA":"PANAMA","PG":"PAPUA NEW GUINEA","PY":"PARAGUAY","PE":"PERU","PH":"PHILIPPINES","PN":"PITCAIRN","PL":"POLAND","PT":"PORTUGAL","PR":"PUERTO RICO","QA":"QATAR","RE":"RÉUNION","RO":"ROMANIA","RU":"RUSSIAN FEDERATION","RW":"RWANDA","BL":"SAINT BARTHÉLEMY","SH":"SAINT HELENA, ASCENSION AND TRISTAN DA CUNHA","KN":"SAINT KITTS AND NEVIS","LC":"SAINT LUCIA","MF":"SAINT MARTIN (FRENCH PART)","PM":"SAINT PIERRE AND MIQUELON","VC":"SAINT VINCENT AND THE GRENADINES",
"WS":"SAMOA","SM":"SAN MARINO","ST":"SAO TOME AND PRINCIPE","SA":"SAUDI ARABIA","SN":"SENEGAL","RS":"SERBIA","SC":"SEYCHELLES","SL":"SIERRA LEONE","SG":"SINGAPORE","SX":"SINT MAARTEN (DUTCH PART)","SK":"SLOVAKIA","SI":"SLOVENIA","SB":"SOLOMON ISLANDS","SO":"SOMALIA","ZA":"SOUTH AFRICA","GS":"SOUTH GEORGIA AND THE SOUTH SANDWICH ISLANDS","SS":"SOUTH SUDAN","ES":"SPAIN","LK":"SRI LANKA","SD":"SUDAN","SR":"SURINAME","SJ":"SVALBARD AND JAN MAYEN","SZ":"SWAZILAND","SE":"SWEDEN","CH":"SWITZERLAND","SY":"SYRIAN ARAB REPUBLIC","TW":"TAIWAN, PROVINCE OF CHINA","TJ":"TAJIKISTAN","TZ":"TANZANIA, UNITED REPUBLIC OF","TH":"THAILAND","TL":"TIMOR-LESTE","TG":"TOGO","TK":"TOKELAU","TO":"TONGA","TT":"TRINIDAD AND TOBAGO","TN":"TUNISIA","TR":"TURKEY","TM":"TURKMENISTAN","TC":"TURKS AND CAICOS ISLANDS","TV":"TUVALU","UG":"UGANDA","UA":"UKRAINE","AE":"UNITED ARAB EMIRATES","GB":"UNITED KINGDOM","US":"UNITED STATES",
"UM":"UNITED STATES MINOR OUTLYING ISLANDS","UY":"URUGUAY","UZ":"UZBEKISTAN","VU":"VANUATU","VE":"VENEZUELA, BOLIVARIAN REPUBLIC OF","VN":"VIET NAM","VG":"VIRGIN ISLANDS, BRITISH","VI":"VIRGIN ISLANDS, U.S.","WF":"WALLIS AND FUTUNA","EH":"WESTERN SAHARA","YE":"YEMEN","ZM":"ZAMBIA","ZW":"ZIMBABWE"}

# Iterates through the XML structure and removes the namespace, for easier navigation in getCountry()'s ElementTree.findall()
def remove_namespace(doc, namespace):
    """Remove namespace in the passed document in place."""
    ns = u'{%s}' % namespace
    nsl = len(ns)
    for elem in doc.getiterator():
        if elem.tag.startswith(ns):
            elem.tag = elem.tag[nsl:]

# getCountry – where the magic happens. Encodes the artist name into the URL, then queries the musicbrainz search engine,
# parses the XML answer and gets the name, id and country of the first returned element (the one with the highest weight).
# Returns the country name if a) the artist is found through the search engine and b) the artist has a country associated with the profile, otherwise returns False
def getCountry(name):
    name = urllib.quote_plus(name)
    BASE_URL = "http://musicbrainz.org/ws/2/artist/?query=%s&format=xml&method=advanced" % (name)
    print "Querying: %s" % (BASE_URL)
    try:
        search_input = urllib.urlopen(BASE_URL)
        # Checks whether the HTTP response code is 200 – if not, sleeps for 5 seconds and retries // Intended for 503 codes
        http_code = search_input.code
        if http_code != 200:
            # print "Could not access: %s \t Got HTTP Code: %s. 5 second cool-down" % (name, http_code)
            time.sleep(5)
            return getCountry(name)  # return the retry's result
    except Exception:
        print "GETTING_ERROR: Something went wrong while getting HTTP"
        return False
    #search_xml = search_input.read()
    #print search_xml
    try:
        tree = ET.parse(search_input)
        remove_namespace(tree, u'http://musicbrainz.org/ns/mmd-2.0#')
        feed = tree.getroot()
        elem = feed.findall("./artist-list/")
        #print elem[0].find('name').text
        #print elem[0].get('id')
    except Exception:
        print "PARSE_ERROR: Something went wrong while parsing HTTP"
        return False
    try:
        if elem[0].find('country') != None:
            # print COUNTRIES[elem[0].find('country').text]
            try:
                country = COUNTRIES[elem[0].find('country').text]
            except Exception:
                print "Could not find key in countrylist error"
                return False
            return [country, elem[0].get('id'), elem[0].find('name').text]
        else:
            print elem[0].find('name').text + " has no country associated\n"
            return False
    except (IndexError, ValueError):
        print "ERROR – COULD NOT GET DATA FROM %s\n" % (name)
        return False

# If the script is called from the terminal, iterate through the top 1000 artists contained in a CSV file in the same directory.
if __name__ == "__main__":
    #name = sys.argv[1]
    csvfile = open("topp1000_artister.csv")
    outfile = open("topp1000_output.csv", "w")
    artistlist = csv.reader(csvfile, delimiter=',', quotechar='"')
    for line in artistlist:
        result = getCountry(line[1])
        try:
            if result != False:
                result_string = "%s,%s,%s,%s,%s,%s\n" % (line[0], line[1], line[2], result[0], result[1], result[2])
                # print result_string
            else:
                result_string = "%s,%s,%s,%s\n" % (line[0], line[1], line[2], "No country found or a failure occurred")
                # print result_string
        except (IndexError, ValueError) as e:
            print e
            result_string = "Error on element: %s\n" % line[1]
        try:
            outfile.write(result_string)
        except:
            print "Write error happened with %s" % line[1]
[/sourcecode]

And as always, I am most grateful for feedback! Hope this may come in handy!

The Music in the Clouds

I have got a new job. For the last one and a half months I have been working as a research assistant on the research project “Clouds & Concerts” (Norwegian: Sky & Scene). The project is a collaboration between the Department of Musicology and the Department of Media and Communication at the University of Oslo, as well as the Norwegian streaming service WiMP, Telenor and SINTEF, and it seeks to understand how the new music technology paradigm has affected music listening, music distribution, and audiences’ relationship to live music. My position is 50%, so I hope to fill the rest of my weeks with relevant freelance work and experiments (if you need any digital assistance or want to develop something, please contact me). I am excited to learn more about this interesting field, to learn more about scientific work, and hopefully to specialise in data analysis and quantitative methods.

The project’s English website and Norwegian website

The picture is a word cloud made up of genre tags, courtesy of the project.