How AI Will Impact Agency Analytics

How agencies can realize the potential of predictive analytics

Machine Learning (ML) has been heralded as a breakthrough technology with the capacity to fundamentally change the way we approach many tasks in our daily lives, from communicating with friends to commuting to work, and much more in between.

For marketers, machine learning opens the door to predictive analytics, which provide a way to anticipate consumer trends before they actually happen. According to SAS, predictive analytics are defined as “the use of data, statistical algorithms and machine learning techniques to identify the likelihood of future outcomes based on historical data. The goal is to go beyond knowing what has happened to providing a best assessment of what will happen in the future.”

Used in this way, Machine Learning can automatically parse consumer behavior from the recent past in order to predict future buying habits. This kind of “crystal ball” potential makes ML highly attractive to marketers. Imagine being able to target people precisely with personalized communications that match their preferences and buying history, in order to sell them exactly the right thing at the right time. That is the promise ML offers to marketers.

But in order to make accurate predictions, we need to know what we are predicting, and we need to be able to measure the efficacy of our predictions.  In ML, there are three primary measurement parameters that can be applied: Accuracy, Precision, and Recall.

  • “Accuracy” is defined as correct predictions divided by the total number of predictions (Correct Predictions / (Correct Predictions + Incorrect Predictions)). Although accuracy is an understandable everyday metric, it is actually a very poor measure of how well a machine learning algorithm is doing. For the sake of example, let’s say that we’d like to predict what percentage of the news articles mentioning a particular company are financial in topic. Using an old-fashioned rules-based approach, we might have programmed the computer to mark articles as “financial news” if they contained specific strings like “share price,” “%,” and “$.” This would undoubtedly uncover a number of financial articles, but it would also likely miss some articles and surface a high number of false positives.

Now, let’s say that on average, 10% of news articles mentioning Company X are financial. This means that 90% of news articles are NOT financial. We could create a machine learning model, known as a majority class classifier, that always predicts that an article is NOT financial, and it would have 90% accuracy. But this model would be useless because it would never predict financial news articles, and we might be “tricked” into thinking that none exist. This issue is known as “class imbalance” in machine learning, and it is why accuracy is not necessarily the best metric for measuring ML models.

  • Precision, defined as the number of True Positives divided by all positive predictions (True Positives / (True Positives + False Positives)), adds another layer of insight by illustrating how “precise” the algorithm is. Using the same example, let’s say that for every 100 articles, we have 10 true positives (correct financial predictions) and 40 false positives (incorrect financial predictions). Our precision would be 10 / (10 + 40) = 20%. This gives us additional nuance for determining how well our model is performing.

  • Recall is another lens through which to measure the performance of ML models.  Defined as True Positives/ (True Positives + False Negatives), recall is essentially a measure of how many relevant articles were found, or recalled. Let’s say we know that yesterday, 10 financial articles came through our system, and we predicted 5 articles correctly (True Positive) and missed 5 articles (False Negative). Our recall would be 50%.  
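The three metrics above can be sketched in a few lines of Python. The counts below are the illustrative numbers from the examples, not real model output:

```python
def accuracy(tp, tn, fp, fn):
    """Correct predictions divided by the total number of predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """True positives divided by all positive predictions."""
    return tp / (tp + fp)

def recall(tp, fn):
    """True positives divided by all actual positives."""
    return tp / (tp + fn)

# Precision example from the text: 10 true positives, 40 false positives.
print(precision(10, 40))  # 0.2, i.e. 20%

# Recall example from the text: 5 articles caught, 5 missed.
print(recall(5, 5))       # 0.5, i.e. 50%
```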

Precision and recall give us a better sense of how our algorithm is performing on a class-by-class basis, for both financial and non-financial news. If we were to rely solely on accuracy, we would know how we are performing overall, but we could also be led to some incorrect assumptions. And in some cases, those mistakes can be costly. For example, ML is used to predict credit card fraud. Let’s say that 1 in every 100,000 transactions is fraudulent. In this situation, it would be very important to measure precision and recall, because the goal is to predict fraud when it happens. If we relied only on accuracy, a model that never flagged fraud would score 99.999% accurate while catching no fraud at all.
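To see why accuracy alone misleads under class imbalance, here is a toy version of the fraud example: a majority class “model” that always predicts “not fraud” scores 99.999% accuracy while catching zero fraud. The transaction counts are the hypothetical figures from the text:

```python
# Toy class-imbalance illustration: 1 fraudulent transaction per 100,000.
transactions = 100_000
fraud_cases = 1

# A majority-class "model" that always predicts "not fraud"
# gets every non-fraud transaction right for free.
correct = transactions - fraud_cases

accuracy = correct / transactions
print(f"Accuracy: {accuracy:.5f}")  # Accuracy: 0.99999 -- looks nearly perfect

# Precision and recall on the fraud class tell the real story:
true_positives = 0              # no fraud is ever flagged
false_negatives = fraud_cases   # every fraud case is missed
recall = true_positives / (true_positives + false_negatives)
print(f"Fraud recall: {recall}")  # 0.0 -- the model never catches fraud
```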

So, let’s create a Machine Learning model that evaluates and predicts financial articles about Company X. To do this, we will use a supervised learning approach, in which we provide a labeled set of data for the computer to learn from, and we will look for patterns based on the evaluation of 350,000 features. The features could include whether the URL is from a financial news source like WSJ, Reuters, or Forbes; whether NASDAQ or DOW appears in the title; how many times the word “share” appears; and other text statistics.
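A sketch of what a few of these features might look like in code. The domain list, feature names, and sample article are hypothetical, and a real system would evaluate vastly more features:

```python
# A handful of hypothetical features for classifying financial news.
# A real pipeline would evaluate far more (the article mentions 350,000).
FINANCIAL_DOMAINS = {"wsj.com", "reuters.com", "forbes.com"}

def extract_features(url: str, title: str, body: str) -> dict:
    """Turn a raw article into a feature dictionary for a supervised model."""
    domain = url.split("//")[-1].split("/")[0].removeprefix("www.")
    return {
        "from_financial_source": domain in FINANCIAL_DOMAINS,
        "index_in_title": any(t in title.upper() for t in ("NASDAQ", "DOW")),
        "share_count": body.lower().count("share"),
        "has_dollar_sign": "$" in body,
    }

features = extract_features(
    "https://www.reuters.com/markets/company-x-earnings",
    "Company X beats NASDAQ expectations",
    "The share price rose 4% after shares gained ground.",
)
print(features)
```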

If our ML model demonstrates 90% accuracy (meaning that 90/100 of our predictions are correct), 90% precision and 90% recall, it will not only result in better capture of financial news articles, it will also cut down on false positives and false negatives. 

Once we have a model that yields sufficient results, we can engineer features that help it predict outcomes through data enhancement, which ensures that any data coming into the business is filtered to maximize its value. The training data used in machine learning can often be enhanced by extracting features from the raw data collected. In our example, this would include marking articles as financial, predicting the sentiment of an article, extracting entities for customers, and more. This kind of data enhancement, or augmentation, increases the predictive power of learning algorithms by creating features from raw data that help facilitate the learning process.
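The enhancement step might look like a small enrichment pass over raw articles that adds derived fields for the model to learn from. The keyword-based sentiment scorer here is a crude stand-in for a real sentiment model, and the field names are made up for illustration:

```python
def naive_sentiment(text: str) -> int:
    """Keyword-count sentiment score: a crude stand-in for a trained model."""
    positives = ("gain", "rose", "beat", "record")
    negatives = ("loss", "fell", "miss", "fraud")
    words = text.lower()
    return sum(words.count(p) for p in positives) - sum(words.count(n) for n in negatives)

def enhance(article: dict) -> dict:
    """Augment a raw article with derived features before training."""
    enriched = dict(article)
    body = article["body"]
    enriched["is_financial"] = "$" in body or "share" in body.lower()
    enriched["sentiment"] = naive_sentiment(body)
    return enriched

raw = {"title": "Company X earnings", "body": "Shares rose on record revenue."}
print(enhance(raw))
```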

Ultimately, the question of whether ML will live up to the hype for marketers depends on two conditions.  The first is the validity of the data. ML models are really only as good as the data upon which they are based.  If the data is old, tainted, or otherwise questionable, the model won’t work.

The second variable is the ability to visualize data in a way that makes sense. Regardless of which type of model marketers choose to work with, they will need to implement a data visualization strategy that helps them easily digest and make sense of the information being tracked. Data visualization not only appeals to the eye; it can also inform, inspire, and guide actions based on customer behavior (and other business information). Machine learning-based data visualization tools help businesses optimize operations and make important decisions. Without a good visualization strategy in place, all the data in the world may not help you make good decisions.

Tickr MetaCloud

As an example of data visualization, Tickr’s solution implements a feature called MetaCloud, which uses machine learning to generate keywords in a news article, then visualizes them in a new and intuitive way. MetaCloud provides a deeper look inside a news article within the wider context of how it relates to other key phrases and topics being discussed in the news. Through a conversation flow analysis that illustrates the word connectivity between major themes, keywords, and topics of interest, MetaCloud allows people to easily grasp the meaning of the data.  

We are just at the tip of the iceberg of what ML is capable of in terms of predictive analytics. But with 91% of top marketers saying that they are either fully committed to or already implementing predictive marketing, change is coming soon. Provided that the data is sound and visualized in a digestible way, ML can be an invaluable asset, especially when applied to specific marketing challenges such as qualifying and prioritizing leads or bringing the right product to market at the right time. Savvy marketers are not only embracing ML but also learning how to measure its efficacy, in order to make sure that they get the most from their predictive models.

 
Tim Williams