Seth Earley, CEO & Founder, Earley Information Science

Seth Earley is the CEO & Founder at Earley Information Science. He is an expert with 20+ years of experience in Knowledge Strategy, Data and Information Architecture, Search-based Applications, and Information Findability solutions. Seth has worked with a diverse roster of Fortune 1000 companies, helping them achieve higher levels of operating performance by making information more findable, usable, and valuable through integrated enterprise architectures supporting analytics, e-commerce, and customer experience applications. He is a sought-after speaker, writer, and influencer. His writing has appeared in the IEEE's IT Professional magazine, where, as a former editor, he wrote a regular column on data analytics and information access issues and trends.


Beginning the journey with the right preparation will lead to success instead of disappointment 

In many enterprises, AI capabilities are moving into core operations. Others are still early in the journey or are seeking out applications that will provide meaningful, measurable returns. Still others are just scratching the surface with initial capabilities but realize they have far to go before they truly build competitive advantage.

The usual traps of new technology are out in force – overhyped capabilities, aspirational functionality, and unrealistic expectations. On the business side, enterprises face the pitfalls to which any technology adoption is susceptible – insufficient resourcing, unclear objectives or scope, lack of business justification, and insufficient supporting processes, among others.

Here are the top issues that executives need to keep in mind when trying to get the most from their AI technology investments.

First Rule of AI:  Forget AI

It is important to take a step back and ask what the organization needs to accomplish with a given process or initiative.  AI is a tool in your toolkit, and like any tool, it should not be the focus, but should be in service to the problem that needs to be solved.  

To identify the problem, ask the following questions: What is the process that requires attention or intervention? Customer service? New product development? Understanding of risk patterns? Where does the organization need to optimize to meet customer expectations or market demands, respond to competitive threats, or take advantage of new value propositions?

AI does not replace entire processes or people – it is an intervention at specific points of a process.  AI should be considered, as my colleague Dan Turchin likes to say, as “augmented” intelligence, rather than “artificial” intelligence. It supports people doing their jobs rather than replacing their jobs.  AI removes tedium from repetitious tasks that bore humans.  It also speeds the ability to perform complex analyses and uncover insights in large amounts of data. 

Starting with Business Outcomes

Once the right questions have been asked, begin with the objective – what does the business need to do in order to solve the problems that have been identified?  What can be automated, and what can be made more efficient by helping an analyst access information more readily?  For a claims processor, it might be consolidating prior history data for similar claims from multiple systems, including the large and growing collection of unstructured content.  Text analytics and semantic search can lead to tremendous productivity gains by making knowledge and content available in context. 
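
To make this concrete, here is a minimal sketch (in Python) of surfacing prior claims that are similar to a new one. The claim records are invented, and plain TF-IDF similarity stands in for the richer semantic search and text analytics a production system would use:

    # Hypothetical claim texts; a real system would pull prior history
    # from multiple systems of record.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    prior_claims = [
        "Water damage to kitchen ceiling from burst pipe in upstairs bathroom",
        "Hail damage to roof shingles and gutters after spring storm",
        "Rear-end collision, bumper and trunk damage, no injuries reported",
    ]
    new_claim = "Ceiling stains and drywall damage caused by a leaking pipe"

    vectorizer = TfidfVectorizer(stop_words="english")
    prior_matrix = vectorizer.fit_transform(prior_claims)
    new_vector = vectorizer.transform([new_claim])

    # Rank prior claims by similarity to the new claim.
    scores = cosine_similarity(new_vector, prior_matrix).ravel()
    for claim, score in sorted(zip(prior_claims, scores), key=lambda x: -x[1]):
        print(f"{score:.2f}  {claim}")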

The first step of clarifying business outcomes is mapping end-to-end processes. You cannot automate a mess, and you cannot automate what you don’t understand. Ideating on future-state business scenarios and articulating current-state processes is done through libraries of use cases – testable, measurable tasks that your customers or employees need to accomplish in the course of their work.

The more detailed and finely grained the use case, the more effectively the need can be met through machine learning and AI-powered personalization and contextualization. Personalization can then be tailored to a rich understanding of the user and their context (background, interests, role, title, industry, objectives, equipment configuration – anything and everything you can understand about the user). Certain prerequisites need to be in place, but beginning with scenarios and use cases, testing and iterating functionality, and measuring baselines and impact will maximize the value of AI programs.
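
As a simple illustration of contextualization, the sketch below ranks content for a user by how well item tags overlap with what is known about that user. The attributes, tags, and scoring rule are hypothetical placeholders, not a prescribed model:

    from dataclasses import dataclass, field

    @dataclass
    class UserContext:
        # A few of the attributes listed above; a real profile would
        # carry far more (title, objectives, equipment configuration, ...).
        role: str
        industry: str
        interests: set = field(default_factory=set)

    content_items = [
        {"title": "Pump maintenance checklist",   "tags": {"maintenance", "pumps", "field-service"}},
        {"title": "Q3 pricing update",            "tags": {"pricing", "sales"}},
        {"title": "Chemical compatibility guide", "tags": {"chemicals", "engineering"}},
    ]

    def personalize(user, items):
        """Order items by overlap between item tags and the user profile."""
        profile = user.interests | {user.role, user.industry}
        return sorted(items, key=lambda item: -len(item["tags"] & profile))

    user = UserContext(role="field-service", industry="chemicals", interests={"pumps"})
    for item in personalize(user, content_items):
        print(item["title"])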

Data is More Important than the Algorithm 

AI runs on data. A common analogy is that data is the new oil. The point is that training data can be as simple as a knowledge base consisting of reference documents, or as complex as a billion historical transactions. In either case, it is important to align the data with the use case. Some vendors claim that AI will “fix your data.” They promise “no muss, no fuss,” but that is rarely the situation. Data needs to be architected in a way that informs the algorithm about what is important to the business – products, services, solutions, processes, customer characteristics, employee tasks, and more. This takes the form of what is referred to as an “ontology” – a way of describing the organization as multiple categories and taxonomies.

For a manufacturer, these taxonomies would include products, industries, competitors, markets, manufacturing processes, applications, problems, solutions, tasks, customer types, roles, and document types. They represent all of the different concepts that describe the business, as well as the relationships among them. The relationships might be products for a solution, applications for a process, solutions for a problem, and so on. This set of information forms the “knowledge scaffolding” of the enterprise.
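
To make the scaffolding tangible, the sketch below represents a slice of it as taxonomies plus typed relationships among their terms. The terms are invented examples for a hypothetical manufacturer; a real ontology would be managed in a dedicated modeling tool or graph store rather than in Python dictionaries:

    # Hypothetical taxonomies describing the business.
    taxonomies = {
        "products":   ["centrifugal pump", "diaphragm pump"],
        "problems":   ["cavitation", "seal failure"],
        "solutions":  ["variable speed drive", "mechanical seal upgrade"],
        "industries": ["chemicals", "food and beverage"],
    }

    # Relationships connect terms across taxonomies: solutions for a
    # problem, products for a solution, industries for a product, etc.
    relationships = [
        ("cavitation", "addressed_by", "variable speed drive"),
        ("seal failure", "addressed_by", "mechanical seal upgrade"),
        ("variable speed drive", "applies_to", "centrifugal pump"),
        ("centrifugal pump", "used_in", "chemicals"),
    ]

    def related(term, relationship):
        """Return terms linked to `term` by the given relationship type."""
        return [obj for subj, rel, obj in relationships
                if subj == term and rel == relationship]

    print(related("cavitation", "addressed_by"))   # ['variable speed drive']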

When data is accessed using the structure of an ontology, it is frequently presented in the form of a “knowledge graph.” An example is IMDb, the movie database. One can look up an actor, navigate to the movies that actor has been in, then connect the directors to other movies, and so on. The corporate analogy would be navigating from a particular customer to the industry that customer works in, then looking at other customers in that industry and considering other types of products and services that may also be of interest to them. This kind of functionality could be part of a cross-sell recommendation system for salespeople.
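
A stripped-down version of that cross-sell navigation might look like the following. The customers, industries, and products are invented, and a simple list of triples stands in for a real graph database:

    # Hypothetical knowledge-graph edges as (subject, relationship, object).
    edges = [
        ("Acme Corp", "in_industry", "chemicals"),
        ("Birch Ltd", "in_industry", "chemicals"),
        ("Acme Corp", "owns", "centrifugal pump"),
        ("Birch Ltd", "owns", "centrifugal pump"),
        ("Birch Ltd", "owns", "dosing system"),
    ]

    def neighbors(node, relationship):
        return {obj for subj, rel, obj in edges if subj == node and rel == relationship}

    def cross_sell(customer):
        """Suggest products owned by industry peers but not by this customer."""
        owned = neighbors(customer, "owns")
        suggestions = set()
        for industry in neighbors(customer, "in_industry"):
            peers = {subj for subj, rel, obj in edges
                     if rel == "in_industry" and obj == industry and subj != customer}
            for peer in peers:
                suggestions |= neighbors(peer, "owns") - owned
        return suggestions

    print(cross_sell("Acme Corp"))   # {'dosing system'}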

Cultural Requirements 

The organizational culture must be open to experimentation and able to accept failure. In some enterprises, politics makes it more difficult to embrace AI and the associated process change. AI is difficult, and there are inevitably failures and setbacks while traversing the learning curve. If the culture is one of “success theatre,” where digital transformations are touted at the executive level and ridiculed or diminished by those at the operational level, real progress will be difficult to achieve. A culture of learning, experimenting, and failing needs to be part of innovation. A philosophy of being a fast follower with less tolerance for experimentation is perfectly reasonable, as long as the objective is aligned with the level of maturity.

Leadership and Social Capital 

Programs with significant impact require that leadership take chances, and moving into new areas entails risk that many may not be prepared to take.  A leader with the vision of a new way of operating needs to understand the nuances and mechanics of making the innovation an operational reality.  That requires a track record and organizational social capital.  If the program is risking social capital, execs need to be sure that they are doing all they can to reduce risk and ensure success. Being realistic around organizational capacity and capabilities is a prerequisite.  

Adequate Resourcing 

Many programs are funded based on ROI projections. Frequently, projects are funded to address identified gaps in capability; however, these are sometimes only the tip of a capability-gap iceberg. At the surface, certain things appear straightforward, but during a project those surface issues are peeled back to reveal bigger challenges that were not anticipated. This is where current-state maturity models can provide a higher-fidelity understanding of what needs to be in place to operationalize.

Maturity models show the organization what is achievable and how much needs to be done to build fluency and capability.  For example, a proof of concept (PoC) can afford to carefully cleanse, structure and enrich data.  However, production data sources are not afforded the same degree of curation and attention.  Getting that production data in shape for full deployment can require resources that were not anticipated or budgeted.

Supporting Processes 

Along the lines of maturity, upstream processes also need careful evaluation. One organization put substantial resources into the content and data models that would support personalization. When it came time to deploy, however, the marketing function could not identify differentiated messaging that would be used for personalization. The infrastructure was there, but the supporting process to locate the appropriate messaging was not.

Measuring Results 

Even when embarking on supporting foundation projects around data quality or completeness, the organization needs to show a linkage to measurable results. Data quality can be scored. Data supports a process, and it is critical to instrument the process that will be impacted (and gather baselines). Processes support business outcomes, which are also measured, and outcomes in turn support the enterprise strategy. This linkage from data to process to outcome to enterprise strategy will retain executive attention and funding.
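
As a simple illustration of that linkage, the sketch below scores completeness for a handful of invented records and reports it alongside a hypothetical process baseline. The fields, records, and metric are placeholders for whatever the organization actually measures:

    # Invented product records scored for required-field completeness.
    records = [
        {"product_id": "A-100", "category": "pumps", "description": "Centrifugal pump, 3 HP"},
        {"product_id": "A-101", "category": None,    "description": ""},
    ]
    required_fields = ["product_id", "category", "description"]

    def completeness(rows, fields):
        """Share of required fields that are populated across all rows."""
        filled = sum(1 for row in rows for f in fields if row.get(f))
        return filled / (len(rows) * len(fields))

    # A process baseline measured before the data work (hypothetical).
    baseline_handle_time_min = 14.2

    print(f"Data completeness: {completeness(records, required_fields):.0%}")
    print(f"Process baseline: {baseline_handle_time_min} min average handle time")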

To recap, success with AI requires:

  • Clarity of business purpose
  • Detailed understanding of processes
  • The correct, quality data sources structured for the application
  • A culture that is open to new ways of working 
  • An understanding of which aspects of a process can be improved or automated using AI technology
  • A strong sponsor with social capital 
  • Adequate resources and funding 
  • The right supporting processes 
  • A way of measuring results – upstream and downstream  

Acting on these guidelines will improve the likelihood of success. These principles apply to many types of enterprise projects, and AI is no different. There are more dependencies and complexities in today’s technology initiatives, and success requires attention to the basic blocking and tackling. AI will not understand the needs of the business by itself. It needs enterprise support from the ground floor up to the C-suite.
