At AddThis, quality data analysis is an integral part of our business and one of the key drivers of our success. When analyzing data, we’ve found that forming an assumption first and then testing it against the data is critical to driving accurate results. This blog post, which was intended to be all about winter weather and has since changed entirely, is the perfect example.
With upwards of nine feet of snow accumulation in the Northeast and ice storms through the Tennessee Valley, we made a fairly obvious assumption that residents in those regions were likely looking to escape the oppressive winter conditions and retreat to warmer weather.
To test our shared assumption that bad winter storms lead to an increase in vacation searches, we looked to the AddThis data to show just how strong that correlation would be. When we observed a very small set of data for just one storm, our hypothesis seemed accurate. But when we expanded our coverage to include multiple locations and timeframes, the answer was surprising.
We assumed our graph would show patterns like this simplified example when localized to a specific area’s weather, where searches increase in and around stormy days:
In reality, it looked like this:
It seemed like there was some kind of pattern in there, but we didn’t actually see a relationship like the one we depicted in the first chart. We tried analyzing the data in multiple ways: looking during the storm, looking after the storm, comparing against similar storm-free timeframes, and even measuring vacation searches as a percentage of all searches we saw. The results we anticipated weren’t there.
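That last check can be sketched in a few lines. This is a minimal illustration, not our production pipeline, and the daily counts below are made up for the example (the real AddThis data isn’t public). It normalizes vacation searches as a share of all searches, then compares a storm window against a same-length storm-free window:

```python
from datetime import date, timedelta
from statistics import mean

# Hypothetical daily counts: (vacation searches, all searches) keyed by date.
# These numbers are invented purely to show the shape of the comparison.
counts = {
    date(2015, 1, 26) + timedelta(days=i): (120 + 5 * i, 10_000 + 100 * i)
    for i in range(14)
}

def vacation_share(day):
    """Vacation searches as a fraction of all searches on a given day."""
    vac, total = counts[day]
    return vac / total

# Compare a storm window against a same-length storm-free window.
storm_window = [date(2015, 1, 26) + timedelta(days=i) for i in range(7)]
calm_window = [date(2015, 2, 2) + timedelta(days=i) for i in range(7)]

storm_avg = mean(vacation_share(d) for d in storm_window)
calm_avg = mean(vacation_share(d) for d in calm_window)
print(f"storm avg share: {storm_avg:.4f}, calm avg share: {calm_avg:.4f}")
```

Normalizing to a share of all searches matters here: raw vacation-search counts rise and fall with overall traffic, so only the ratio tells you whether vacation interest specifically moved.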
So we changed the labels on our x-axis:
This is the same data, but we replaced “date” with “day of the week”. Now the visual cues in the data made more sense. While we couldn’t argue that people increased vacation search activity in relation to ice or snow storms, we discovered instead that vacation shopping followed workweek patterns, tending to increase at the start of the week and decrease on Fridays and Saturdays, rather than showing a strong relationship with stormy days. Who would have thought?!
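The re-labeling above amounts to re-indexing the series by weekday instead of calendar date. Here is a minimal sketch of that step, again with invented daily counts (chosen only so the workweek shape is visible, not taken from our data):

```python
from collections import defaultdict
from datetime import date, timedelta
from statistics import mean

# Hypothetical daily vacation-search counts over two weeks,
# starting on a Monday (Feb 2, 2015). Values are made up.
daily_searches = {
    date(2015, 2, 2) + timedelta(days=i): n
    for i, n in enumerate([140, 135, 128, 120, 105, 100, 130,   # week 1
                           145, 138, 130, 118, 102,  98, 128])  # week 2
}

# Re-index by day of week instead of calendar date.
by_weekday = defaultdict(list)
for day, n in daily_searches.items():
    by_weekday[day.strftime("%A")].append(n)

# Average the observations that share a weekday.
for name in ["Monday", "Tuesday", "Wednesday", "Thursday",
             "Friday", "Saturday", "Sunday"]:
    print(f"{name:<9} avg: {mean(by_weekday[name]):.1f}")
```

Once the dates collapse onto weekdays, a pattern that looked like noise against the calendar can line up cleanly, which is exactly what happened with our chart.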
Although we didn’t achieve the results we were expecting, it’s important to highlight the following:
1. It’s critical to rely on facts, not on assumptions, when it comes to marketing.
As marketing and advertising on the web become noisier and more competitive, targeting the right audience, in the right places, at the right time becomes crucial. We made the assumption that users who experienced bad winter storms would show increased searching for warmer vacations around the times of those storms, which wasn’t the case. If we had launched a marketing or advertising campaign based on those assumptions, there’s a strong chance we would have wasted budget that could have been spent in better places.
2. Data isn’t anecdotal.
You need enough data to make a decision. If we had stopped after our initial check of one area during one storm, we would have assumed that this relationship existed. By validating our findings with a significant amount of data across different geographies and time periods, we found that this hypothesis was wrong more often than right.
3. Our conclusion is not an absolute.
Quality data analysis is a lot like science. If one hypothesis fails during testing, it doesn’t mean that the whole concept is out the window – there may be other experiments to design and hypotheses to test. In our case, we could look at temperatures, departures in temperature between normal and actual, snow pack, and a variety of other weather-related phenomena that generally make people miserable. We could also look at other outcome measures besides search, such as social engagement around vacation-related content or content consumption. These results are not anywhere near the final word on weather-based advertising!
All data-driven posts from AddThis are crafted by our dedicated Data Insights team.