Last week I attended a lecture hosted by Digital Leaders. The speaker was Rayid Ghani, former Chief Scientist for the Obama for America presidential campaign.
Rayid started with some background on the US presidential system and how the electoral votes work. Put simply, cross-referencing the states with the most electoral votes against the swing states identifies the key states to target for campaigning.
They wanted to run an evidence-driven campaign, rather than doing things simply because that was the way they had always been done. Metrics were key, so they distilled the factors that influence the number of votes down to five offline actions:
- Registering people to vote
- Persuading people to vote for you
- Getting people out to vote
…and because the whole campaign relies on money and people to run it…
- Getting donations
- Recruiting volunteers
If an activity didn’t influence one of these goals, they didn’t care about it.
Integrating data across channels
The most difficult part of the work, according to Rayid, was combining the data they had on people across all channels: donation datasets, email subscriptions, volunteer records, telephone numbers, mailing addresses, social media and so on. Matching a telephone number to an email address, and so on, to get a single record for each person was the foundation for the rest of the campaign’s work with data. As you can imagine, it was not possible to completely merge the data in every case, but it sounds like they did a pretty good job.
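To make the idea concrete, here is a minimal sketch of that kind of record linkage. The field names, normalisation rules and matching logic are illustrative assumptions, not the campaign's actual schema or algorithm:

```python
# Hypothetical record linkage: fold records from different channels
# into one profile per person when they share an email or phone number.

def normalise_phone(phone):
    """Keep digits only, so '617-555-0101' and '(617) 555 0101' match."""
    return "".join(ch for ch in phone if ch.isdigit()) if phone else None

def merge_records(records):
    """Merge raw channel records into unified per-person profiles."""
    profiles = []   # each profile is a dict of merged fields
    index = {}      # (kind, value) -> profile it belongs to
    for rec in records:
        keys = []
        if rec.get("email"):
            keys.append(("email", rec["email"].lower()))
        phone = normalise_phone(rec.get("phone"))
        if phone:
            keys.append(("phone", phone))
        # reuse an existing profile if any key is already known
        profile = next((index[k] for k in keys if k in index), None)
        if profile is None:
            profile = {}
            profiles.append(profile)
        for field, value in rec.items():
            profile.setdefault(field, value)
        for k in keys:
            index[k] = profile
    return profiles

donation = {"email": "jo@example.com", "donated": 25}
volunteer = {"phone": "(617) 555-0101", "email": "JO@example.com",
             "volunteer": True}
call = {"phone": "617 555 0101", "answered": True}

merged = merge_records([donation, volunteer, call])
print(len(merged))  # 1 – all three records collapse into one person
```

The email record bridges the donation and phone records, which is exactly why partial overlaps across channels were so valuable: one shared identifier is enough to link two otherwise disjoint datasets.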
Rayid’s first observation was that combining offline and online fundraising data enabled a 1% increase in donations. It may not sound like a lot, but 1% of a billion dollars is fairly significant. It also meant they were able to stop emailing potential donors with a request for a $25 donation when that donor had already given an offline donation worth thousands, sparing everyone the bemused email reply that followed.
Getting serious with data predictions
The next step was to get serious with data modelling. They asked a selection of swing voters what issues mattered to them and stored the answers in their unified database. If a voter said healthcare, the system they built would automatically email them a video of Obama’s healthcare achievements, and so on.
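That issue-to-content routing can be sketched very simply. The issue names, URLs and field names below are invented placeholders, not the campaign's real content:

```python
# A toy version of issue-based follow-up: map each stated issue to a
# tailored video. All values here are hypothetical examples.
FOLLOW_UP_VIDEO = {
    "healthcare": "https://example.org/videos/healthcare-record",
    "economy": "https://example.org/videos/economy-record",
    "education": "https://example.org/videos/education-record",
}

def follow_up_email(voter):
    """Build an email payload matching the issue a swing voter named."""
    video = FOLLOW_UP_VIDEO.get(voter.get("top_issue"))
    if video is None:
        return None  # no tailored content for this issue
    return {"to": voter["email"], "video": video}

msg = follow_up_email({"email": "sam@example.com",
                       "top_issue": "healthcare"})
```

The point is less the lookup itself than where it sits: because the stated issue lived in the unified database, the same record that linked someone's phone and email could also drive which message they received.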
Rayid noted that it was very important to link offline and online, both because all the key objectives occurred offline and because a large proportion of voters preferred offline contact.
Rayid said that most campaign actions were individual, i.e. a person (campaign volunteer) would telephone a person (potential voter). They set up a scoring system, 1-100, on each person for different actions:
- How likely they will vote
- How likely they will vote for us
People were sorted by these scores to produce lists of people to contact via different channels for varying actions. First a test was run on a sub-set of these people: they were polled before and after the communication to see whose likelihood to vote for Obama had decreased and whose had increased. This was also done for variants of the message. The results were put into a statistical model to work out the best strategy for communicating with the larger set of people not polled. This was done for the three main voting actions:
- Persuade to vote for us
- Register them to vote
- Get them out to vote.
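The test-then-scale loop above can be sketched in a few lines. The scores, poll numbers and variant names are made up for illustration, and the "model" here is reduced to picking the variant with the largest average shift in support:

```python
# Hypothetical sketch: pick the best-performing message variant from a
# polled test group, then rank the unpolled population by score.

def best_variant(poll_results):
    """poll_results: {variant: [(before, after), ...]} support scores.
    Return the variant with the largest average increase in support."""
    def avg_shift(pairs):
        return sum(after - before for before, after in pairs) / len(pairs)
    return max(poll_results, key=lambda v: avg_shift(poll_results[v]))

def contact_list(people, score_key, top_n):
    """Sort people by a 1-100 score and keep the top_n to contact."""
    ranked = sorted(people, key=lambda p: p[score_key], reverse=True)
    return ranked[:top_n]

polls = {
    "healthcare_ad": [(40, 55), (50, 58)],   # avg shift +11.5
    "economy_ad": [(45, 47), (60, 59)],      # avg shift +0.5
}
variant = best_variant(polls)

people = [
    {"name": "A", "persuadable": 91},
    {"name": "B", "persuadable": 40},
    {"name": "C", "persuadable": 77},
]
targets = contact_list(people, "persuadable", top_n=2)
```

In practice the campaign fitted a proper statistical model to the polling data rather than comparing raw averages, but the shape of the workflow is the same: test on a small polled subset, then apply the winner to the scored, unpolled majority.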
However, for these models to drive results they needed to be embedded into the systems and workflow, mostly offline, that the rest of the campaign was using. Linking these data models into the actuators, the valuable actions you want to persuade people to do, was key.
“I treat social media very differently from social networks,” said Rayid. He went on to explain that they couldn’t handle a lot of interactions with people on social media, so used it more as a measurement tool: a message went out from the campaign and was covered in the mainstream media; Facebook and Twitter were echoes of this and could be measured. They didn’t measure positive or negative sentiment in detail, but monitored people and put them into different buckets:
- Anti Obama
- Pro Obama
They watched for movement between the buckets and which issues triggered it. According to Rayid, it didn’t happen much.
Friends persuading friends
They were really not reaching the 18-35 age range via traditional comms: they didn’t have phone numbers, or they couldn’t get people to answer the phone. However, the 18-35 age range were exactly the people on social networks. So they used targeted sharing, presenting a list of suggested friends to share with. They tailored these lists by applying the data modelling to each user’s Facebook friend list and selecting only the six friends most likely to be persuaded.
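The friend-selection step is just the persuasion model applied to a different population. A hypothetical sketch, where the scoring function is a stand-in for the campaign's real model:

```python
# Targeted sharing sketch: rank a user's friends by modelled
# persuadability and suggest only the top six. The scores below are
# invented; the real model would score each friend from their data.

def suggest_friends(friends, persuasion_score, limit=6):
    """Rank friends by persuadability, keep only the top `limit`."""
    ranked = sorted(friends, key=persuasion_score, reverse=True)
    return ranked[:limit]

friends = [{"name": f"friend{i}", "score": s}
           for i, s in enumerate([12, 88, 45, 91, 30, 67, 74, 55])]
suggested = suggest_friends(friends, lambda f: f["score"])
# eight friends in, six suggestions out, highest-scoring first
```

Capping the list matters: asking a supporter to nag their whole friend list is a big ask, while six well-chosen names keeps the request small and concentrates it where the model says it will do the most good.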
Someone in the audience thought this was “creepy”. However, Rayid pointed out that they only used data that people had consented to share with them, and that persuasion by a friend is more effective and comes across as less intrusive than direct persuasion from the central campaign.