Big Data Case Studies

CASE STUDY NO. 1

How Big Data Means Hot Pizzas for Domino’s



Pizza delivery might not seem, at first glance, the most high-tech of industries. But Domino’s, the world’s largest pizza delivery chain, has consistently pushed its brand towards new and developing tech, and as such it is now possible to order pizzas via Twitter, smartwatches, TVs, in-car entertainment systems such as Ford’s Sync, and social media platforms like Facebook.

The initiative to keep a Domino’s order button at the fingertips of their customers at all times is called Domino’s AnyWare. Of course, this multi-channel approach to interfacing with customers gives rise to the potential to generate and capture a lot of data – which Domino’s capitalizes on by using it to improve the efficiency of their marketing. I had the chance to talk to data management lead Cliff Miller and Dan Djuric, vice president of enterprise information services, about how the business uses Big Data to derive insights on customers, as well as how it sees analytics as the key to driving change across the industry. “Domino’s AnyWare literally translates to data everywhere,” Djuric tells me.

Data captured through all its channels – text message, Twitter, Pebble, Android, and Amazon Echo to name just a fraction – is fed into the Domino’s Information Management Framework. There it’s combined with enrichment data from a large number of third-party sources, such as the United States Postal Service, as well as geocode, demographic, and competitor data, to allow in-depth customer segmentation.
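
To make that kind of enrichment concrete, here is a minimal Python sketch of the general idea – joining orders captured from different channels with third-party data keyed on location. The field names, values, and sources are hypothetical illustrations, not Domino’s actual Information Management Framework schema.

```python
# Illustrative sketch only: fields, values and sources are hypothetical,
# not Domino's actual schema.
import pandas as pd

# Orders captured from many channels (app, Twitter, Amazon Echo, ...)
orders = pd.DataFrame([
    {"customer_id": 1, "channel": "android_app", "zip": "48105", "order_total": 27.50},
    {"customer_id": 2, "channel": "amazon_echo", "zip": "48103", "order_total": 18.75},
])

# Third-party enrichment data keyed on ZIP code (demographics, geocodes, ...)
enrichment = pd.DataFrame([
    {"zip": "48105", "median_household_income": 72000, "households": 14200},
    {"zip": "48103", "median_household_income": 65000, "households": 18900},
])

# Join channel data with enrichment data to build a unified customer view
unified = orders.merge(enrichment, on="zip", how="left")
print(unified)
```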

“Ultimately we start to build this unified customer view, measuring consistent information across our operational and analytic layers,” Djuric says. “So this is now where we start to tell the story.”

Information collected through the group’s point-of-sale systems and enrichment data add up to 85,000 data sources, both structured and unstructured, pouring into the system every day.

The family or household is fundamental to Domino’s tactics for segmenting customers. “Pizza ordering is a household exercise,” says Djuric.

“We have the ability to not only look at a consumer as an individual and assess their buying patterns, but also look at the multiple consumers residing within a household, understand who is the dominant buyer, who reacts to our coupons, and, foremost, understand how they react to the channel that they’re coming to us on.”

This means that individual customers or households can be shown entirely different presentation layers – different coupons and product offers – based on statistical modelling of customers fitting their profile.
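
A toy sketch of that kind of profile-based targeting is shown below. The features, labels, and offers are invented purely to illustrate the idea of scoring a household’s propensity to redeem a coupon; it is not Domino’s actual model.

```python
# Hypothetical propensity model for coupon targeting (illustrative only).
from sklearn.linear_model import LogisticRegression

# Invented household features: [orders_last_90_days, avg_order_value, used_coupon_before]
X = [[8, 32.0, 1], [1, 15.0, 0], [5, 24.0, 1], [0, 0.0, 0], [12, 40.0, 1], [2, 18.0, 0]]
# 1 = household redeemed a similar offer in the past, 0 = did not
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Score a new household and choose which presentation layer / coupon to show
new_household = [[3, 22.0, 0]]
propensity = model.predict_proba(new_household)[0][1]
offer = "2-for-1 offer" if propensity > 0.5 else "free-side offer"
print(f"redemption propensity: {propensity:.2f} -> show {offer}")
```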

Customer segmentation data is also used to assess performance and drive growth at individual stores and franchise groups. “We can work with a store to tailor their coupons, and tailor what we think their upselling capabilities might be in terms of adding more revenue. But what we can also do is tell them what is not working for them – considering not only their market but comparing their market to other markets, and also to our competitive landscape.”

Domino’s investment in driving orders across its digital platforms has certainly paid off – the group now processes about 55% of its orders via online systems, with the remainder accounted for by traditional ordering methods such as telephone or in-person orders.

“We’re becoming more of a digital e-commerce operation, rather than a traditional quick service restaurant business,” Djuric tells me. “We’re leading the way, I think, in what we’re doing with social and what we’re doing with our digital platforms. And importantly, I think, it’s structuring the digital service with the insights we have on customers and practices that is leading our customers to a better, faster and more quality-based experience on our digital platforms. It has surprised us how quickly we’ve transitioned from a traditional ordering mechanism into an e-commerce based profile.”

Domino’s data management lead Cliff Miller was also keen to talk about how influential analytics and Big Data have been in that transition. “We want to be sure we’re getting the best product we can to our customers as fast as we can, but we always want to analyze all segments of our business, as we believe being a data-driven company gives us a lot of competitive advantages.”

“We’ve definitely moved away from a time when we were just processing data on sales and operational metrics in our warehouse; every business unit under our umbrella is looking to leverage data to make them faster and more cost-effective.”

Domino’s relies on Talend’s Big Data Integration platform for much of the heavy back-end work across its enterprise data warehouse. “We’ve already got a ton of stuff in Talend, and it’s become our de facto enterprise data processing platform; it’s extremely flexible in terms of what it can do.”

“We’re an aggressively aspirational company and store growth is very important. We need to make sure we are on platforms which can scale.”

Domino’s is clearly a business which is not scared to move with the times. From pioneering online and TV-based ordering to advanced customer analytics, it has consistently moved to keep pace with the latest trends in technology. The logistics of delivering close to a million pizzas a day across 70 countries throws up exactly the sort of problems that Big Data is good at overcoming. The company has recently been experimenting with new delivery methods – such as a car with a built-in oven capable of delivering 80 pizzas in one journey – in order to cut the carbon footprint of deliveries. It will be interesting to see where its enthusiasm for technology and analytics takes it in the future.

CASE STUDY NO. 2

The NFL Understands How to Play the Big Data Game


Like many businesses, the National Football League is experimenting with big data to help players, fans, and teams alike.

The NFL recently announced a deal with tech firm Zebra to install RFID data sensors in players’ shoulder pads and in all of the NFL’s stadiums. The chips collect detailed location data on each player, and from that data, things like player acceleration and speed can be derived.
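
As a worked example of that derivation, speed and acceleration can be estimated from timestamped position samples of the kind an RFID tag produces. The sample values below are made up for illustration.

```python
# Estimate speed and acceleration from timestamped (x, y) position samples.
# The numbers are invented; real tags sample far more densely.
positions = [  # (time in seconds, x in metres, y in metres)
    (0.0, 10.0, 5.0),
    (0.1, 10.6, 5.2),
    (0.2, 11.3, 5.5),
]

speeds = []
for (t0, x0, y0), (t1, x1, y1) in zip(positions, positions[1:]):
    dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    speeds.append((t1, dist / (t1 - t0)))          # metres per second

accelerations = [
    (t1, (v1 - v0) / (t1 - t0))                    # metres per second squared
    for (t0, v0), (t1, v1) in zip(speeds, speeds[1:])
]

print(speeds)
print(accelerations)
```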

The NFL plans to make the data available to fans and teams, though not during game play. The thought is that statistics-mad fans will jump at the chance to consume more data about their favorite players and teams.

In the future, the data collection might be expanded. In last year’s Pro Bowl, sensors were installed in the footballs to show exactly how far they were thrown.

Of course, this isn’t the NFL’s first foray into big data. In fact, like other statistics-dependent sports leagues, the NFL was crunching big data before the term even existed.

However, in the last few years, the business has embraced the technology side, hiring its first chief information officer, and developing its own platform available to all 32 teams. Individual teams can create their own applications to mine the data to improve scouting, education, and preparation for meeting an opposing team.

It’s also hoped that the data will help coaches make better decisions. They can review real statistics about an opposing team’s plays or how often one of their own plays worked rather than relying solely on instinct. They will also, in the future, be able to use the data on an individual player to determine if he is improving.
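
A minimal sketch of that kind of play-level summary might look like the following; the play names and results are invented for illustration.

```python
# Summarise how often each play "worked" from a simple play log (invented data).
from collections import defaultdict

play_log = [
    ("outside zone run", True), ("outside zone run", False),
    ("play-action pass", True), ("play-action pass", True),
    ("screen pass", False),
]

totals = defaultdict(lambda: [0, 0])   # play -> [successes, attempts]
for play, success in play_log:
    totals[play][1] += 1
    if success:
        totals[play][0] += 1

for play, (successes, attempts) in totals.items():
    print(f"{play}: {successes}/{attempts} successful ({successes / attempts:.0%})")
```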

Diehard fans can, for a fee, access this same database to build their perfect fantasy football team – because, at heart, the NFL believes that the best fans are engaged fans. It wants to encourage the kind of obsessive statistics-keeping that many sports fans are known for.

It’s hard to predict how this flood of new data will impact the game. In 2015, only 14 stadiums and a few teams were outfitted with the sensors. And in 2016, the NFL decided against installing sensors in all footballs after the politics of 2015’s “Deflategate,” when the Patriots were accused of under-inflating footballs for an advantage. Still, it seems fairly easy to predict that the new data will quickly make its way into TV broadcast booths and instant replays. Broadcasters love to have additional data points to examine between plays and between games.

2016 also marked the 50th Super Bowl, and plenty has changed since the first time teams duked it out for the national title. Sports analysts have been collecting data on football games since the very beginning, but our data collection techniques and abilities have vastly improved. New sensors in stadiums and on NFL players’ pads and helmets help collect real-time position data, show where and how far players have moved, and can even help indicate when a player may have suffered a damaging hit to the head.

In 2013, Microsoft struck a $400 million deal with the NFL to make this data available to coaches and players via their Surface tablets. Coaches use the tablets to demonstrate and review plays on the sidelines, as well as access real-time data from the NFL’s databanks.

Of course, the partnership hasn’t been all smooth. At first, commentators kept referring to the tablets as “iPads,” and had to be reminded many times to call them by their brand name. Then, during the AFC championship game this year, the Patriots briefly had connectivity issues, causing the tablets to stop working.

And, of course, the commentators remembered to call them by their brand name that time – just as, rather unfortunately for Microsoft, the network ran a Microsoft ad during the break.

Still, connectivity issues aside, the advent of real-time data access is potentially changing the way coaches view games and call plays. As data collection has improved, people are also turning more and more to computer algorithms to predict the outcome of games. Big data, IoT sensors, and new analytics abilities make even more data available, including real-time distance traveled and position on the field, how weather conditions affect individual plays, and even predictions of individual player matchups.

But it’s still a long way from a sure bet.

A company called Varick Media Management has created its own Prediction Machine to predict the outcome of all the games in the season. The site also offers its “Trend Machine” which can analyze many different matchups from more than 30,000 games over 35 years of play.
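
As a toy illustration of a “Trend Machine”-style query, the snippet below filters an invented set of historical games and reports how often a given trend held. Real services work over tens of thousands of games and much richer features.

```python
# Invented historical games; real "trend" queries run over 30,000+ games.
games = [
    # (favourite, underdog, favourite_point_spread, favourite_won)
    ("Seahawks", "Broncos", -2.5, True),
    ("Patriots", "Seahawks", -1.0, True),
    ("Broncos", "Panthers", -4.5, True),
    ("Panthers", "Cardinals", -3.0, True),
    ("Cardinals", "Packers", -7.0, False),
]

# Trend question: how often does a favourite giving 3+ points actually win?
matching = [g for g in games if g[2] <= -3.0]
wins = sum(1 for _, _, _, won in matching if won)
print(f"favourites of 3+ points won {wins} of {len(matching)} matching games")
```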

However, their accuracy is far from perfect; in the 2014–2015 regular season, they boasted a 69% accuracy rating. But they were the only major predictor to foresee the Seattle Seahawks’ blowout win over the Denver Broncos in 2014. Facebook tried to use social data to predict the outcome, and predicted the Broncos would take home the trophy.

Another way big data could be changing the game is in the realm of advertising.

Everyone knows Super Bowl ads cost millions of dollars — a record $5 million for a 30-second spot this year — but what big data showed marketers last year is that most of the online chatter about Super Bowl XLIX took place after the game.

This could be a boon for advertisers who want to take advantage of the attention on the game, but can’t afford a TV spot. In fact, by using data to strategically target only certain consumers, online advertising could represent a much better investment with better engagement — and at a fraction of the cost.

That could mean that, for well targeted ads, online advertising could have a better return on investment than television, even ads aired during the most-watched event on live TV.

In addition, social listening in years past told us that social media users talked more about the brands, commercials, and halftime show than they did the game. That points to an important social opportunity for brands who can afford to advertise during the game.

As we all know, brands can make or break a big social opportunity like this depending on how well their social media is managed, and how well they’re watching their data. Take the Oreo blackout tweet during the 2013 Super Bowl — many said the Twitter ad “won” the Super Bowl marketing game that year, and they paid mere pennies compared to brands that bought TV ads.

CASE STUDY NO. 3

NASA’s Mars Rover – Real Time Analytics 150 Million Miles From Earth


The team in control of NASA’s Mars Rover now has Big Data-driven analytical engines at its fingertips.

The same open source ElasticSearch technology used by the likes of Netflix and Goldman Sachs is now being put to use planning the actions of the Curiosity rover, which landed on the red planet in 2012. And the next mission, due to launch in 2020 with the primary goal of finding evidence of ancient life on the planet, is being built around the technology from the ground up.

NASA’s Jet Propulsion Laboratory (JPL), which runs the day-to-day mission planning, has rebuilt its analytics systems around ElasticSearch, which now processes all of the data transmitted from the rover during its four daily scheduled uploads.

These uploads cover tens of thousands of data points – every reading taken by Curiosity’s onboard sensors including temperature on the Martian surface, atmospheric composition, and precise data on the rover’s equipment, tools and actions.
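
A minimal sketch of what indexing one such reading might look like with the Elasticsearch Python client is below. The index name, fields, and values are illustrative assumptions, not JPL’s actual schema, and it assumes a recent client pointed at a local cluster.

```python
# Illustrative only: index/field names are assumptions, not JPL's schema.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

telemetry_point = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "sol": 1423,                          # Martian day of the mission
    "sensor": "surface_temperature",
    "value_celsius": -63.2,
}

# Each reading from a daily upload becomes a searchable document
es.index(index="rover-telemetry", document=telemetry_point)
```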

All of Curiosity’s operations are planned a day in advance, based on data received the previous day. This move to real-time analytics vastly reduces the time in which decisions can be taken by mission control.

ElasticSearch – which recently achieved its 50 millionth download – means that patterns and anomalies in the datasets can be spotted far more quickly. Correlations which could provide mission-critical insights are more likely to become apparent, leading to a greater rate of scientific discovery and less danger of malfunction or failure.

NASA JPL data scientist Dan Isla told me, “This has been very transformational from an operational point of view. We can get very quick insights over multiple days’ worth of data, or even the entire mission, in just a few seconds, without delays. We can actually ask questions and get answers quicker than we can think up new questions.”

One application is anomaly resolution. When it appears that there is a problem with the spacecraft, precise details of its operations can immediately be analyzed to find out when this situation last occurred, and what other elements were in play at the time.
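
A hedged sketch of that kind of lookup is a filtered search sorted by time, surfacing the most recent occurrences of a condition together with their full context. As above, the index and field names are invented for illustration, not JPL’s actual schema.

```python
# Find the most recent times an unusually high drill motor current was seen.
# Index and field names are illustrative, not JPL's actual schema.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

response = es.search(
    index="rover-telemetry",
    query={
        "bool": {
            "filter": [
                {"term": {"sensor": "drill_motor_current"}},
                {"range": {"value_amps": {"gte": 2.5}}},   # unusually high draw
            ]
        }
    },
    sort=[{"timestamp": {"order": "desc"}}],
    size=5,   # the five most recent occurrences
)

for hit in response["hits"]["hits"]:
    print(hit["_source"])
```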

The system is also used for the vital function of managing the power generated by the rover’s radioisotope thermoelectric generator. The trickle-charge system makes only around 100 watts of power available to the craft and its instruments at any one time.

This means that power management must be carefully planned in advance. As Isla explains, “Using ElasticSearch we can budget this power much more effectively; we can look at the data and see ‘OK, this is what we used yesterday, and this is the state of the batteries, and this is our plan coming up…’ We can build a power model very quickly. With ElasticSearch everyone can do analytics, ask questions at the same time and get the same results; we’re all working together now. Everyone’s very agile now.”
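
A back-of-the-envelope version of such a power model is just arithmetic over planned activities against the roughly 100 watts mentioned above. The activities, wattages, and durations below are invented examples, not Curiosity’s real figures.

```python
# Toy power budget: invented activities drawn against ~100 W of available power.
AVAILABLE_WATTS = 100
HOURS_PER_SOL = 24.6          # a Martian day is roughly 24.6 hours

planned_activities = [
    # (activity, watts drawn, hours)
    ("drive", 60, 1.5),
    ("imaging", 15, 0.5),
    ("sample analysis", 40, 2.0),
]

budget_wh = AVAILABLE_WATTS * HOURS_PER_SOL
planned_wh = sum(watts * hours for _, watts, hours in planned_activities)

print(f"available: ~{budget_wh:.0f} Wh, planned: {planned_wh:.0f} Wh, "
      f"margin: {budget_wh - planned_wh:.0f} Wh")
```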

This isn’t NASA’s only project to use ElasticSearch. It was incorporated into the Mars Rover program after being successfully tested on the Soil Moisture Active Passive (SMAP) mission, launched last year. SMAP uses a high-powered radar and radiometer to analyze and capture high-resolution data on soil moisture across the globe. A full moisture map of the Earth’s surface is generated every three days – a rate of data creation an order of magnitude greater than that of the Mars Rover. While the rover has so far generated around 1 billion data points, the SMAP project is already pushing 20 billion.

Perhaps the most exciting aspect of Mars exploration today is the possibility of discovering whether or not the planet was ever home to life. It is thought that ElasticSearch will greatly speed up the answering of this question.

Isla tells me, “Curiosity was designed to find out whether Mars was ever a habitable environment. During the course of the mission we’ve been able to confirm that yes, it was. The next mission – in 2020 – we’re taking life detection instruments and we need more capabilities, more instruments, and we need to be able to do it faster.

“We can’t be spending hours analyzing the data, so we need to take the ElasticSearch approach that we implemented fairly early into Curiosity and do that from day one. We’re definitely baselining this tech as our operational data system.”
