In my last post about Data Monetization I touched on the concept of a Digital Product Factory. Today, I would like to outline why a factory-style R&D process is a must-have when monetizing your data, no matter the consumption pattern: data feed, analytics or application. I will also highlight how data governance plays into this paradigm, cover a handful of key use case categories most organizations are trying to monetize, and close with a few real-life examples.
If you recall from my previous post, I touched upon six major factors you should think about before embarking on this journey: market share, usage rights, privacy, readiness, value proposition and market timing.
One aspect most customers struggle with is customer benefit valuation. After all, this is how they should price their new digital product. Plenty of pricing theories are available around this, but I will abstain from putting you to sleep with them. BAH and McKinsey are surely glad to help you put these together for a few hundred thousand dollars in fees. However, I found this paper based on the now-defunct Windows Azure Marketplace Datamarket fairly informative as a head start.
While a bit dated, the findings seem as true as ever. To price appropriately, you have to embed yourself with your future buyers to understand their business model, operations and challenges to determine where, and by how much, your data product can add value. Most companies use tiered pricing based on volume, users, attribute breadth or functional unit to drive a “land and expand” go-to-market model, often drastically underpricing their lower trial tiers. This strategy aims to uncover the marginal consumer willingness to pay (mCWTP), which depends on the total available profit in a given market, e.g. retail footfall analytics, and the number of market participants over time.
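To make the tiered, land-and-expand idea concrete, here is a minimal sketch of a volume-based price schedule. The tier boundaries and per-unit prices are purely illustrative assumptions, not figures from any real offering:

```python
# Hypothetical tiered price schedule: (max_monthly_records, price_per_1k_records).
# The numbers below are invented for illustration only.
TIERS = [
    (100_000, 0.50),       # trial tier, often deliberately underpriced to "land"
    (1_000_000, 2.00),     # growth tier
    (float("inf"), 5.00),  # enterprise tier, priced toward the buyer's mCWTP
]

def monthly_price(records: int) -> float:
    """Return the monthly charge for a given consumption volume."""
    for cap, per_1k in TIERS:
        if records <= cap:
            return records / 1000 * per_1k
    raise ValueError("unreachable: last tier is unbounded")

print(monthly_price(50_000))     # trial usage: 25.0
print(monthly_price(2_500_000))  # enterprise usage: 12500.0
```

Note how the per-unit price rises with the tier: the cheap trial tier gets the buyer in the door, and the upper tiers are where you recover the real willingness to pay.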
Overall, pricing is just one aspect of bringing a digital product to market. The life cycle mirrors that of a software product, which we at Informatica are intimately familiar with. A major difference, however, is that the data (content) delivered by itself or within an analytical or transactional application can never be fully warranted, especially if it needs to adhere to an industry or regulatory standard, e.g. ICD codes. While this is not written in stone, your R&D organization would shoulder a significant burden always keeping abreast of new releases of these standards.
Ultimately, we are looking at eight major launch-sequence steps with a multitude of work streams in each. Below is a taste (starting at the 12 o’clock position, moving clockwise).
So, let’s look at why you need a factory approach.
Do It Right Or Don’t Do It At All
First, you want product excellence because you want to identify process and product weaknesses early as you strive for an agile development approach. Consequently, you would ensure proper development and testing policies to establish a clear data life cycle, from creation through curation and consumption all the way to retirement. Data governance plays a key role here.
Second, you need accountability, so you want a person with P&L responsibility associated with the effort, as you will likely look at a shared-services model to reimburse other departments for their data contribution. Moreover, internal constituents and customers need a clear interaction model and support touchpoints. Lastly, it also facilitates security by design.
Third, you must have scalability in mind. It may very well be that you are so successful, you quickly need to ramp up your IT and distribution infrastructure. This demand may come from new data sources, use cases, a shift from batch to real-time client needs, the need to enable an external developer community or just sheer customer demand.
Land & Expand
Reuse is a key factor when putting together your enabling IT infrastructure. You want to use the same skill set for multiple operations, and you do not want to recreate the same rule set across multiple modules. A focus on business logic and data-driven application development instead of technical integration is key. To do so, you must fully understand, catalog and streamline your data flows. Below is a stab at the core componentry you should be thinking about. However, you do not need to bite all of this off at once. As stated in my prior post, leverage your internally used data integration, quality and related capabilities first to prototype your first release, then add capabilities as you tackle new, more complex use cases.
As you can see above, I outlined three factory lines: Data, DevOps and App. Each plays a key role, and the maturity grows from left to right. The Data Factory Line acquires, profiles, standardizes, dedupes, transforms, enriches and links key data elements and associates them with relevant transactions (orders, bills, emails, etc.), often in a Hadoop environment and with a high degree of historical lineage. These are the raw ingredients for publishing data sets. We have seen our clients experience a 10x return over five years vis-à-vis traditional data quality practices, no matter the deployment pattern.
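To illustrate the kind of work the Data Factory Line does, here is a toy standardize-and-dedupe pass in Python. The records, cleansing rules and match key are invented for illustration and are far simpler than what a production data quality tool applies:

```python
import re

# Toy customer records; the fields and cleansing rules below are
# illustrative assumptions, not a real rule set.
records = [
    {"name": "ACME Corp.", "email": "Sales@ACME.com "},
    {"name": "Acme Corporation", "email": "sales@acme.com"},
    {"name": "Globex Inc", "email": "info@globex.com"},
]

def standardize(rec):
    """Normalize casing/whitespace and drop corporate suffixes for matching."""
    name = re.sub(r"\b(corp|corporation|inc)\b\.?", "", rec["name"].lower())
    return {"name": name.strip(" ."), "email": rec["email"].strip().lower()}

def dedupe(recs):
    """Keep the first record per match key (here: the cleansed email address)."""
    seen, unique = set(), []
    for rec in map(standardize, recs):
        if rec["email"] not in seen:
            seen.add(rec["email"])
            unique.append(rec)
    return unique

print(dedupe(records))  # the two ACME variants collapse into one record
```

The point is the ordering: standardization must run before matching, because "ACME Corp." and "Acme Corporation" only resolve to the same entity after the noise is stripped.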
The DevOps Factory Line provides a repeatable means to design, consume, execute and monitor microservices and APIs for analytics and application developers in a secure and agile fashion when new business logic dictates changes. These capabilities give you a means to scale the creation of services and APIs feeding applications, letting you develop two to four times faster in a secure and codeless fashion geared toward business analysts.
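As a flavor of what a service coming off the DevOps Factory Line might look like, here is a bare-bones read-only data API using only the Python standard library. The endpoint path and data set are made up, and a real factory would layer on authentication, versioning, throttling and monitoring:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical published data set; in practice this would be served
# from the curated store produced by the Data Factory Line.
DATASET = [{"store": "A-12", "footfall": 1840}, {"store": "B-07", "footfall": 960}]

class DataAPI(BaseHTTPRequestHandler):
    """Minimal read-only endpoint exposing one data product."""

    def do_GET(self):
        if self.path == "/v1/footfall":
            body = json.dumps(DATASET).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request console logging
        pass

# To serve: HTTPServer(("localhost", 8080), DataAPI).serve_forever()
```

Even a sketch this small shows why the API layer belongs in its own factory line: the data product behind `/v1/footfall` can be refreshed or re-versioned without touching the consuming applications.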
The App Factory Line gives internal and external development teams easy access to realistic test data to develop (self-)service, (predictive) reporting and (mobile) applications across multiple user interface options. The App Factory enables you to provision core features as your team creates everything from simple to sophisticated applications, often highly workflow- and business-oriented. Here we expect to see a 20% productivity gain over prevailing practices.
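The "realistic test data" piece of the App Factory Line can be sketched as a simple masking routine. The row shape, surrogate scheme and perturbation range here are assumptions for illustration, not how any particular product masks data:

```python
import hashlib
import random

# Toy production rows; in practice these would come from governed sources.
prod_rows = [
    {"customer": "Jane Doe", "ssn": "123-45-6789", "balance": 1523.75},
    {"customer": "John Roe", "ssn": "987-65-4321", "balance": 89.10},
]

def mask(row, rng):
    """Pseudonymize identifiers while keeping the data realistic to develop against."""
    return {
        # stable surrogate key: the same customer always maps to the same token
        "customer": "cust_" + hashlib.sha256(row["customer"].encode()).hexdigest()[:8],
        # keep only the last four SSN digits so format validations still pass
        "ssn": "XXX-XX-" + row["ssn"][-4:],
        # perturb amounts slightly so exact-value joins back to production fail
        "balance": round(row["balance"] * rng.uniform(0.9, 1.1), 2),
    }

rng = random.Random(42)  # fixed seed keeps generated test data reproducible
test_rows = [mask(r, rng) for r in prod_rows]
```

The design goal is that developers see data with production-like shape and distribution, while nothing in the test environment can be joined back to a real person.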
Common Use Case Scenarios
When I look at the last 18 months of customer discussions and inquiries, I can distill them into three major realizations. First, most clients typically have IT run data monetization. Second, IT is very quick to jump to use cases based on its often-limited business acumen or exposure. Third, IT typically does not consider the go-to-market and operational considerations beyond standing up a supporting IT environment. Consequently, these initiatives die a rather quick death or stay in limbo indefinitely.
With a curious consulting mindset at the helm, these initiatives would go a lot differently and should end up with an increasingly larger and more complex set of business use cases. However, this assumes your (IT) staff has the charter and capacity to embed themselves with potential clients, beyond just a one-hour viability phone call, to test ideas. The data-driven business use case categories we see most often realized via enriched data sets, analytics or transactional applications are listed below:
- New Customer Acquisition
- Sentiment/Referral Analysis
- Product/Service Quality
Case 1: Real Estate
So how are these applied in real-life scenarios we have been discussing with our clients? Well, a global real estate behemoth wants to drive additional advertising and referral revenue, on top of improved customer acquisition, by integrating and cleaning hundreds of real estate listing databases and sharing various granularity levels with its own and competitor agents. It will have the ability to capture 20-30% of a broadband contract over two years from a telecom operator just for referring a new homeowner in a timely fashion. Now, multiply this paradigm into home furnishings, homeowner’s insurance, yard and pet care, etc., and you get the picture.
Case 2: Logistics
A large passenger railroad wants to license station and on-train Wi-Fi data, linked to known demographics or frequent-traveler profiles for known entities, to third-party aggregators and station-adjacent merchants for a subscription fee. It is also thinking of pushing promotions via its mobile app to passengers based on location and past travel patterns and preferences. It expects somewhere between $500,000 and $4 million annually from data that today just incurs cost from a service meant to ensure loyalty.
Case 3: Healthcare
A large healthcare data provider is looking to leverage provider, patient and pharmacy data to ensure drug safety, track provider performance, realign sales force compensation and assure spend compliance in order to create sticky customer relationships. It would license this data via analytics to pharmaceutical companies for clinical trials management and to hospital groups to ensure a higher pay-for-performance (P4P) reimbursement ratio.
Case 4: BPO
Many large service-based organizations like insurers, telecom carriers and even pharma companies outsource processes like service center operations to business process outsourcers. These interactions are largely hidden from these clients, so this large BPO firm believes it can provide churn-risk signals to its clients in a timely fashion based on transcribed audio files and keyed CSR entries. It may also be positioned to provide order intake and upsell services for its clients.
Case 5: Manufacturing
Lastly, the crown jewel: GE Aviation, one of a very few companies manufacturing jet engines for commercial airliners. Given its return to its core business after the financial crisis, the continuous rise in passenger miles flown and a nine-year airline buying cycle for $24 million engines, it wanted to transform its revenue model into a more predictable, revenue-smoothed subscription model. To do this, contract, customer, engine and airframe profiles must be cleaned, linked and enriched via real-time sensor data from in-flight operations. Not only can GE Aviation predict failure and rebalance its supply and service chain accordingly for its fleet in operation, but it can also charge its customers for time-on-wing performance. Ultimately, the company uses atmospheric and flight data in a Hadoop environment to suggest updated flight plans when air pollution increases wear and tear on engine blades.
So how do you get started with us? First, it would be useful to understand your data, so a profile across your most important in-scope sources would be in order. Then, a Business Value Assessment (BVA) could help flesh out business use cases and their related financial value. A technical deep-dive session with our sales consulting, product specialists or even professional services (IPS) team is key to further cement the as-is to to-be transformation steps. IPS or one of our partners could then scope an SOW with milestones based on our findings to this point. Lastly, our sales brethren would kick off a formal sourcing process with you.
At this juncture, please let me know your best examples of how companies have monetized their data by licensing it in some form to third parties. Maybe I can learn another trick or two.
The post Enter The Digital Product Factory appeared first on The Informatica Blog – Perspectives for the Data Ready Enterprise.