Today, we have a guest post from Dan Smith, Chief Architect at Quaero, a CSG Solution and a Neolane partner. Dan is responsible for the architectural integrity of Quaero’s Intelligent Engagement platform, focusing on the capability, flexibility, scalability and fitness of purpose of the platform for Quaero’s Customer Engagement hosted solutions. In this post, he draws from his nearly 25 years of experience to discuss how marketers can effectively process and apply big data to create micro-segments that drive even more targeted, personalized, and welcomed interactions.
Using data to personalize and optimize the experience that a prospect or customer has with a company brand is not a new concept. And as consumers (assuming we can put our Big Brother concerns aside), I think we all appreciate when a company does this really well. It feels personal, even though we know it’s all technology and business strategy behind the scenes. And more importantly, it’s helpful. It saves us time, which is a precious commodity. I personally appreciate it when a company uses data to make content more relevant or proactively suggest things that save me time, save me clicks, and save me from having to search.
As we live more of our lives—at least the consumer part of them—in the digital world, our actions are more easily captured. This data represents our behavior and as the old saying goes, “actions speak louder than words.” In the digital world, actions become interactions, and they tell a story; they paint a picture of us. They also cause the traditional marketing boundaries of “inbound” and “outbound” to blur. As a consumer, I don’t care about inbound versus outbound. Don’t forget, it’s all about me.
So, interaction data is good. But it’s also big—“Big Data.” And Big Data introduces challenges…and opportunities…and decisions. For example, low-level, raw data is ripe for behavioral analytics. But traditional analytic engines are generally not well-equipped for that type and volume of data, nor are they well-suited for the type of analytics that can be done when rich behavioral data is available.
This is where Big Data and Big Analytics technologies come in. Technologies like Hadoop, HBase, MongoDB, Netezza, Vertica, etc. enable the storage and processing of Big Data using Massively Parallel Processing (MPP), or Grid Computing, architectures. And analytics technologies such as “R” are being extended from traditional application server deployments to run “in cluster,” exploiting these MPP architectures and the Big Data analytics that they enable—bringing the analytics to the data instead of bringing the data to the analytics, which becomes critical at scale.
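To make “bringing the analytics to the data” concrete, here is a toy sketch (plain Python, not any particular product’s API): each “node” aggregates its own partition of clickstream events locally, and only the small per-user summaries—not the raw events—are combined centrally.

```python
from collections import Counter
from functools import reduce

# Toy illustration: three "nodes" each hold a partition of raw clickstream
# events as (user, category) pairs. The partition contents are invented.
partitions = [
    [("user1", "sports"), ("user2", "deals")],
    [("user1", "sports"), ("user3", "news")],
    [("user2", "deals"), ("user2", "deals")],
]

def local_aggregate(partition):
    """Runs where the data lives: reduce raw events to small per-user counts."""
    counts = Counter()
    for user, category in partition:
        counts[(user, category)] += 1
    return counts

def merge(a, b):
    """Only these small summaries cross the network, never the raw events."""
    a.update(b)
    return a

totals = reduce(merge, (local_aggregate(p) for p in partitions), Counter())
print(totals[("user2", "deals")])  # 3
```

In a real MPP cluster the `local_aggregate` step is what runs “in cluster,” colocated with each data partition; only the reduced summaries move, which is what makes the pattern viable at billions of records.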
Importantly, these new technologies are accretive; they do not replace existing technologies. There are still many business-critical applications that rely on relational databases (or data marts)—for example, applications which provide business intelligence, marketing, and customer relationship management. So when implementing a Big Data solution, you have to think through (or rethink) technology roles and responsibilities in your ecosystem as well as integrations and data-flows. For example, which data-flows go through a Big Data technology and which through RDBMS technology? Further, which technologies execute Big Data behavior analytics that operate on long, rich history (millions or billions of records) and which technologies execute real-time analytics (the last 50, in-session clicks)? They are very different.
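A simple way to picture the roles-and-responsibilities decision is as an explicit routing rule for incoming feeds. The feed names and the two-way split below are purely illustrative assumptions, not a prescription:

```python
# Hypothetical routing rule: voluminous, append-only behavioral feeds go to
# the Big Data cluster; small, mutable profile feeds go to the RDBMS.
# All feed names here are invented for illustration.
BIG_DATA_FEEDS = {"web_clickstream", "app_events", "call_detail_records"}
RDBMS_FEEDS = {"user_profile", "registration", "preferences"}

def route(feed_name: str) -> str:
    if feed_name in BIG_DATA_FEEDS:
        return "hadoop"
    if feed_name in RDBMS_FEEDS:
        return "rdbms"
    raise ValueError(f"no routing rule for feed: {feed_name}")

print(route("web_clickstream"))  # hadoop
print(route("user_profile"))     # rdbms
```

The point is less the code than the discipline: every data-flow in the ecosystem should have a deliberate answer to “which technology owns this?”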
One approach is to create data-flows through Big Data technologies like Hadoop for the voluminous behavioral data and unstructured data and create data-flows through RDBMS technologies for user profile and registration data. This allows the Big Data technologies to do what they are good at (large volume ingestion, transformation, aggregation, and analytics) while allowing the relational technologies to do what they are good at (normalizing, joining, row- and cell-level updates, low-latency, optimized queries, etc.). Big Data analytics can run on large volumes of detailed behavior and interaction data and be reduced to easy-to-consume micro-segmentations: Steelers Fanatic, Last Minute Cell Phone Top-Upper, Soccer Mom, News Junkie, Always Online, Deal Hunter, etc. (anything that can be gleaned from the data and is useful to support the business applications).
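The “reduce big data to small labels” step can be sketched as a handful of rules over per-user aggregates. The segment names echo the examples above, but the stat names and thresholds are invented for illustration—in practice these would come from analytic models, not hand-written cutoffs:

```python
# Illustrative micro-segmentation: per-user aggregates computed on the Big
# Data side are reduced to small, easy-to-consume labels. Thresholds and
# stat names are assumptions for the sketch.
def micro_segments(user_stats: dict) -> list:
    segments = []
    if user_stats.get("deal_page_views", 0) >= 20:
        segments.append("Deal Hunter")
    if user_stats.get("news_articles_read", 0) >= 50:
        segments.append("News Junkie")
    if user_stats.get("sessions_per_day", 0) >= 5:
        segments.append("Always Online")
    return segments

print(micro_segments({"deal_page_views": 35, "sessions_per_day": 6}))
# ['Deal Hunter', 'Always Online']
```

Note how the output is tiny—a few strings per user—no matter how many billions of interaction records fed the aggregates.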
Even though the input data is big and the analytic models are complex, the output micro-segmentations are small and simple and easily synchronized along with other aggregate data from the Big Data processing environment to the RDBMS data mart where they are easily, and quickly, consumed and used to drive even more targeted, personalized, and welcomed interactions. They are also easily integrated with real-time analytics; it’s very useful to know that it’s an Online Deal Hunter that just navigated through your site in some particular pattern versus a Traditional Brick-and-Mortar Shopper. Of course, these new, Big Data analytics-informed interactions are captured as before, and routed through the Big Data technologies, and, well, the cycle starts again.
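Because the micro-segment output is small and simple, the sync into the RDBMS data mart is an ordinary, cheap load, and downstream marketing applications consume it with plain SQL. A minimal sketch using SQLite as a stand-in for the mart (table and column names are illustrative):

```python
import sqlite3

# Sketch: the Big Data side emits small (user, segment) pairs; loading them
# into a relational data mart is then trivial. Names are illustrative.
rows = [("u1", "Deal Hunter"), ("u1", "Always Online"), ("u2", "News Junkie")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE micro_segment (user_id TEXT, segment TEXT)")
conn.executemany("INSERT INTO micro_segment VALUES (?, ?)", rows)

# The marketing application consumes segments with simple, fast queries.
hunters = [u for (u,) in conn.execute(
    "SELECT user_id FROM micro_segment WHERE segment = 'Deal Hunter'")]
print(hunters)  # ['u1']
```

This is the payoff of the split architecture: the heavy lifting stays in the cluster, while the mart only ever sees the distilled labels it needs to drive targeted interactions.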