Wednesday, August 12, 2020

The Lost Art of Data Lineage


 
Maybe it is more of a mangled art than a lost art. Data lineage is one of those aspects of data governance that gets lost in the shuffle of data analytics projects. It is vital for many reasons, not least of them compliance and auditing. Yet too often data lineage never makes it onto the train to the final destination when building large-scale data warehousing and analytics solutions.

Part of the problem is that it becomes an all-or-nothing effort, so very little related to data lineage actually gets delivered, or it turns into a mishmash of concepts and solution features. Often it gets dumped under auditing and logging, disconnected from business or technical metadata, and then forgotten.

Let's break down what data lineage actually entails. There are three dimensions of data lineage to consider as part of a general data governance strategy. These include:

1) Logical Data Processing Flows (logical or visual DAG)
Defining the high-level visual graph and code-module-level relationships between the data processing stages and steps that produce and generate data models.

2) Metadata Relationship Management (low level relationships and logic/code)
Tracking metadata relationships between data models (schemas/tables/columns) and the related source code used in the transformations. This includes showing the transformation logic/code used to go from one or more source data models to a target data model.

3) Physical Data Processing History (what happened, is happening, and going to happen)
At a data set level (sets of records, rows, and columns), this shows the record-level and data-set-level linkage between data sets for processing that has happened in the past or will happen in the future. There is a temporal (real-time and historical) aspect to this, showing transaction events from one or more data sets feeding a downstream data set and the transactional breadcrumbs and events involved.

Note that the term data model denotes a static structure (more or less your schema/table definitions), while data sets are live physical structures (the actual data at a record level) across one or more data models.
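
To make the distinction concrete, here is a minimal Python sketch of how the three dimensions might be captured as separate record types. The type and field names are illustrative only, not a standard or any particular tool's API:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List

    @dataclass
    class LogicalFlowEdge:
        """Dimension 1: a high-level DAG edge between processing steps."""
        upstream_step: str          # e.g. "stage_orders"
        downstream_step: str        # e.g. "build_orders_fact"

    @dataclass
    class MetadataMapping:
        """Dimension 2: column-level relationships plus the transform logic used."""
        source_columns: List[str]   # e.g. ["staging.orders.amount"]
        target_column: str          # e.g. "dw.orders_fact.total_amount"
        transform_code: str         # the SQL/code applied in between

    @dataclass
    class ProcessingEvent:
        """Dimension 3: what actually ran, over which data sets, and when."""
        run_id: str
        source_datasets: List[str]
        target_dataset: str
        records_read: int
        records_written: int
        started_at: datetime
        finished_at: datetime

Keeping the three shapes separate is exactly the point: the logical DAG, the metadata mappings, and the physical run history answer different questions and are usually populated by different parts of your stack.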

Before you get started on your data lineage journey, you need to decide to what extent you will implement one or all of these dimensions as part of your overall data governance strategy. There are varying degrees of exactness and completeness to each of them as well. And make sure to keep them distinct.

No one commercial tool will do the complete job. It usually takes a combination of multiple tools, hand-stitched software services, and best practices/conventions to do the job well, depending on your parameters for success.

Thursday, July 2, 2020

Tuning the Snowflake Data Cloud


To be clear, I do not classify Snowflake as an OLAP or MPP database. It has these capabilities for sure, but being born in the cloud and only for the cloud, it has much more to offer. I consider it a "Data Fabric". Yeah, that is a broad term, but Snowflake is really what Big Data and Hadoop were aspiring to achieve and never did, for reasons I won't get into here.

What makes Snowflake a game changer for OLAP engines and data warehousing? The features shown in the diagram below are all true and not just marketing spin. How can Snowflake accomplish this? Built in the cloud and only for the cloud - what does that really mean? Snowflake takes full advantage of two key superpowers only available in the cloud: 1) elastic, virtually limitless, highly durable immutable storage, and 2) the ability to spin up virtually limitless compute. It starts with these two things, and there is a lot more in Snowflake to deliver the full package.



If Snowflake has no developer/DBA-configurable indexes, partitioning, distribution keys, vacuuming, stats tuning, storage tuning, etc., like other MPP/OLAP engines, is there anything I can really tune (be careful with auto re-clustering)? With great power comes great responsibility. There are multiple things you can do to tune and optimize for performance. It also means being careful to monitor and manage costs, because it can be too easy to scale up and out, and that will cost you. From a schema modeling/design perspective there are some optimizations you can do to minimize compute scale-up/out requirements (and thus costs). One of them is using cluster/sort keys, one of the few DDL-level things you can tune at the metadata level. How you use materialized views and how you manage joins vs. de-normalization are also important considerations. All of these are highly dependent on downstream consumption/usage patterns. So yes, you still need good data engineers and architects :)
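
As a concrete illustration of the cluster-key point, here is a minimal sketch using the snowflake-connector-python DB-API. The EVENTS table, its columns, and the connection parameters are hypothetical placeholders; the idea is simply to align the clustering key with your dominant downstream filter/join pattern and then check whether re-clustering is actually paying for itself:

    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",        # placeholder connection details
        user="my_user",
        password="...",
        warehouse="ANALYTICS_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        # One of the few metadata-level knobs: a clustering key aligned with
        # the dominant downstream access pattern (here: date + customer).
        cur.execute("ALTER TABLE EVENTS CLUSTER BY (EVENT_DATE, CUSTOMER_ID)")

        # Inspect clustering depth/overlap before paying for re-clustering.
        cur.execute(
            "SELECT SYSTEM$CLUSTERING_INFORMATION('EVENTS', '(EVENT_DATE, CUSTOMER_ID)')"
        )
        print(cur.fetchone()[0])
    finally:
        conn.close()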

Thursday, June 11, 2020

Is ML Curve Fitting The Best We Got?


Curve fitting is, for the most part, what most machine learning boils down to, not that that is a bad thing. But how do we go beyond the correlations of the black box? I see the rediscovery of symbolic AI and the introduction of causality into purely probabilistic ML as analogous to what happened in software decades ago, when we evolved from assembler and procedural languages and started to model software/data as richer abstractions with relationships. Not the same thing, but a similar evolution in engineering and computer science.

Causal relationships exist in the world and can influence how we collect our data and engineer the features that drive our ML model training. This includes everything from how we analyze covariance in the data to how we manage and monitor data distributions. Collecting data and engineering features is not enough. Causal relationships can sometimes be gleaned from the data we observe, but often we must design experiments and interventions, with A/B test strategies and multi-armed bandit processes, to uncover the causality and better train our models.
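
As a toy illustration of the intervention side of this, here is a minimal epsilon-greedy multi-armed bandit sketch. The arms and reward rates are simulated placeholders; in practice each arm would be a real variant you are testing:

    import random

    def run_bandit(true_rates, epsilon=0.1, rounds=10_000):
        """Epsilon-greedy: mostly exploit the best-looking arm, sometimes explore."""
        n_arms = len(true_rates)
        pulls = [0] * n_arms
        wins = [0] * n_arms
        for _ in range(rounds):
            if random.random() < epsilon or not all(pulls):
                arm = random.randrange(n_arms)                    # explore
            else:
                rates = [wins[i] / pulls[i] for i in range(n_arms)]
                arm = max(range(n_arms), key=rates.__getitem__)   # exploit
            reward = 1 if random.random() < true_rates[arm] else 0
            pulls[arm] += 1
            wins[arm] += reward
        return pulls, wins

    pulls, wins = run_bandit([0.02, 0.035, 0.05])
    print(pulls)   # traffic should concentrate on the best arm over time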

Interventions and experiments can help us answer some "what if" questions, and then there are counterfactuals, which are beyond the reach of most experiments. Understanding causal relationships has the potential to offer us insights and help businesses make better sense of the world and their opportunities. We need better tools and engineering processes to incorporate these skills into our ML frameworks and ML processes.

This is starting to happen in AI and ML today across the disciplines that are applying ML. There is a good article on the topic that I suggest all ML engineers and data scientists read.


Friday, June 5, 2020

Choosing an ML Cloud Platform: GCP vs AWS

ML cloud services are evolving fast and furious. GCP and AWS are the leading players. Here is a quick visual peek at both ML tech stacks.

AWS has SageMaker as the centerpiece:


Then there is GCP with its Kubeflow angle and on-premises hybrid cloud options:


Tuesday, June 2, 2020

Cloud OLAP: Choosing between Redshift, Snowflake, BigQuery or other?


Which to choose for your cloud OLAP engine? There are a lot of choices when it comes to cloud-based analytics engines. All the major clouds have their homemade solutions (GCP/BigQuery, AWS/Redshift, Azure) and there are plenty of independent options, Snowflake and Databricks to mention a couple.

Which is right for your business, and in what situation? Needs can vary from internal data exploration to driving downstream analytics with tight SLAs. I am a strong proponent of the approach that, no matter what you do, you start with a foundational data lake blueprint and then choose either to build on it with an open source analytics engine on top of your cloud data lake or to license a commercial analytics engine such as Redshift, Snowflake, or BigQuery.

There is no one answer without looking at your business needs, existing technical foundation, and strategic direction, but I have to say I am getting more impressed with Snowflake as the product matures. Without getting too deep into the details, Snowflake is sort of a data lake (backed by public cloud object storage) with a highly elastic in-memory MPP layer. There are many pros and cons in selecting the best option for your business. The edge Snowflake has is that it is cloud agnostic (sort of the Anthos of the data cloud), and I really like its recently released cross-cloud and cross-data-center replication feature and its cross-cloud management.

If you want to discuss how to approach making this decision process look me up!

Friday, January 24, 2020

Why Spark is the Wrong Abstraction


Is the sun setting on Spark? I don't want to knock Spark and frameworks like it; they have had their moment in the sun. Spark was a reasonable and important successor to Map/Reduce and HDFS/Hadoop, but its time has come to be exiled to the fringes of the big data ecosystem and used only when absolutely necessary. Spark still has its uses for some specialized ETL and data processing applications, but overall it can be massive overkill and a burden to program and operate (and expensive too). In many cases it is inefficient in both development and troubleshooting, and the overhead of infrastructure management can be costly relative to other options.

Not Everything is a Nail

I see many projects using Spark (and tools related to it, such as aws emr) for transforming and moving data into and out of data lakes when simpler tools would often do. Spark is a pretty big and complex sledgehammer, and many problems can be solved with more effective tooling. The ever-growing ubiquity of serverless technology, especially database and analytics capable services, has presented engineers with many more options, and it is time to dial back when it is appropriate to bring out the Spark sledgehammer.

In a lot of cases of Spark development, you end up writing a one-off database engine for your specific ETL scenario, and Spark's distributed compute and storage abstractions and DAG engine make that convenient. While it is possible to use Spark as a database engine of sorts, the reality is that databases are better at optimization and at using the available compute/storage resources. This is especially the case with serverless technologies that support SQL as a first-class citizen. For Spark, SQL really is bolted on.

The only big challenge is that most database platforms are not designed for the cloud or for elastic compute/storage the way Spark sort of is. I say sort of, because Spark leaves too much responsibility on developers/DevOps to make data/compute optimization and infrastructure decisions, which is something databases are intrinsically good at.

Declarative vs Imperative Analytics

Now, there are serverless Spark services as well (like aws glue and other managed Spark services), but given the general-purpose nature of Spark, this still leaves optimization and resource allocation a challenge for developers. What are the alternatives? I really like Presto, in particular the serverless aws flavor of it (Athena), as well as services like BigQuery. Tools like these are the future of big data ETL and analytics. Spark can still be useful for heavy data transformation scenarios and complex feature engineering, but not as a general data analytics and data movement engine. Streaming is one specialized area where solutions like Spark can still play, but there are many other solutions better designed out of the box for streaming and cloud scale-out. Spark has in many respects tried to be all things to all people. It has continuously expanded its support for SQL semantics and has incorporated APIs for streaming, etc. This has made Spark a versatile framework and API for developers, but as a general-purpose ETL and data analytics engine, I think there are now other options.
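
To make the declarative-vs-imperative point concrete, here is the same daily rollup sketched both ways. The bucket paths, table, and column names are hypothetical; the contrast is that the PySpark version makes you own the cluster and the execution details, while the SQL version hands planning and resource decisions to an engine like Athena/Presto or BigQuery:

    from pyspark.sql import SparkSession, functions as F

    # Imperative: you stand up (and pay for) the Spark cluster and wire the steps.
    spark = SparkSession.builder.appName("daily_buyers").getOrCreate()
    events = spark.read.parquet("s3://my-bucket/events/")          # placeholder path
    daily = (
        events
        .filter(F.col("event_type") == "purchase")
        .groupBy("event_date")
        .agg(F.countDistinct("user_id").alias("buyers"))
    )
    daily.write.mode("overwrite").parquet("s3://my-bucket/rollups/daily_buyers/")

    # Declarative: one statement handed to a serverless SQL engine that owns
    # the optimization and resource allocation decisions.
    ROLLUP_SQL = """
    CREATE TABLE rollups.daily_buyers AS
    SELECT event_date, COUNT(DISTINCT user_id) AS buyers
    FROM lake.events
    WHERE event_type = 'purchase'
    GROUP BY event_date
    """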

While the sun may not completely set on Spark, and tools like it, the declarative power of SQL will win in the end over the imperative programming model of Spark. This has been proven time and time again in the database and analytics tech space.

Tuesday, January 21, 2020

Data Lakes before AI/ML/Analytics (cart before horse thing)


Don't start or continue your AI and predictive analytics journey without building the necessary data infrastructure underpinnings. That starts, first and foremost, with building a cloud data lake designed to meet the data- and compute-hungry needs of your AI/ML and analytics workloads. Why build a cloud data lake first?

1) Economics
2) Elastic compute
3) Elastic Storage
4) Storing (almost) everything
5) ML Model engineering
6) Feeding downstream analytics
7) Feeding downstream operational data stores
8) Data exploration, experimentation and discovery

A cloud data lake makes all the above possible at scale.

Building a cloud data lake securely and in an architecturally effective manner is achievable and will make your downstream AI/ML/Analytics journey attainable and long-term sustainable. Don't start your journey without this foundation.



Monday, September 9, 2019

Know Where Your Data Lake Has Been?



The foundation of a good data management strategy is based on a number of skills, including data policies, best practices, and technology/tools, as shown in the diagram above. The most commonly discussed term of the bunch is Data Governance. This term is tossed around a lot when describing the organizational best practices and technical processes needed to have a sound data practice. Data governance can often be ambiguous and all-encompassing, and in many cases what exists in organizations falls short of what is needed in our modern big data and data lake oriented world, where ever-increasing volumes of data play an ever more critical role in everything a business does.

What is often missing in many modern data lake and data warehousing solutions are the two lesser-known cornerstones shown in the diagram: Data Lineage and Data Provenance. Without these pieces, your data lakes (with their ever-increasing volume and variety of data) can quickly become an unmanageable data dumping ground.

Who needs Data Lineage and Data Provenance management tools, APIs and visualization services?
  • Data Engineers (building and managing data pipelines)
  • Data Scientists (discovering data & understanding relationships)
  • Business/System Analysts (data stewardship)
  • Data Lake / BI Executives (bird's eye view of health and sources/destinations)
  • Data Ops (managing scale and infrastructure)
  • Workflow/ETL Process Operators (monitoring & troubleshooting)
Most of the ETL processing and dataflow orchestration tools in the market (open source and commercial), such as NiFi, Airflow, Informatica, and Talend among others, do not directly address this gap. What is the gap? The gap is knowing where your data is coming from, where it has been, and where it is going. Put another way, it is having visibility into the journey your data takes through your data lake and overall data fabric, and doing this in a lightweight fashion without a lot of complex and expensive commercial tools.

Let's spend a bit of time talking about data lineage and data provenance in particular, and why they are important parts of a modern and healthy overall data architecture. But first, let's touch on the broader Data Governance ecosystem.

Data Governance

Data Governance can be an overused term in the industry and is sometimes all-encompassing when describing the data strategy, best practices, and services in an organization. You can read a lot of differing definitions of what data governance is and is not. I take a simple and broad definition of data governance, which includes:
  1. Data Stewardship - org policies for access/permissions/rights
  2. Data Map - location of data
  3. MDM - identifying and curating key entities
  4. Common Definitions - tagging and common terminology
  5. Taxonomy/Ontology - the relationships between data elements 
  6. Data Quality - accuracy of data
  7. Compliance - HIPAA, GDPR, PCI DSS...
These are all important concepts, yet they do not address the dynamic nature of data, the ever more complex journey data takes through modern data lakes, or the relationships between the data models residing in your data lake. This relates to needing to know where your data is going and where it came from. This is where data lineage and data provenance come into play to complement data governance, allowing data engineers and analysts to wrangle and track the run-time dynamics of data as it moves through your systems and gets combined and transformed on its journey to its many destinations.

 

Data Control Plane == Data Lineage and Data Provenance

I view data lineage and data provenance as two sides of the same coin. You can't do one well without the other. Just like digital networks have control planes for visualizing data traffic, our data lakes need a Data Control Plane. And one that is independent of whatever ETL technology stack you are using.

In our modern technical age, data volumes are ever increasing and data is being mashed up and integrated from all corners of an organization. Managing this at both a business and a technical level is a challenge. Plenty of ETL tools exist today that allow you to model your data transformation processes and build data lakes and data warehouses, but they consistently fall short of giving you a comprehensive, and I would say orthogonal and independent, view of how your data is interconnected: where your data came from, where it is right now, and where it is going (and expected to go next) in its journey through your data fabric and destinations.

Many modern ETL tools let you build visual models and provide some degree of monitoring and tracking, but these capabilities are proprietary and can't be separated from the ETL tools themselves, which creates lock-in and prevents you from mixing and matching the best-of-breed ETL tools (Airflow, aws step functions, lambda, Glue, Spark, EMR, etc.) that are now prevalent in the cloud. If you are using multiple tools and cloud solutions, it gets ever more complicated to keep a holistic view of your data and its journey through your platform.

This is why I strongly believe that data lineage and data provenance should be completely independent of whatever underlying ETL tooling and data processing technology you are using. If they are not, you are locking yourself in unnecessarily, greatly limiting the potential of your data ops teams and data engineers, and limiting your overall management of the data and of the processes carrying it through its journey in your data lake.
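
As a sketch of what tool-independent lineage capture can look like, here is a small Python decorator that wraps any pipeline step (an Airflow task, a Lambda handler, a Glue job entry point, or a plain function) and appends a provenance event to a store you control. The JSON-lines file and the paths are placeholders standing in for whatever control-plane backend you choose:

    import functools
    import json
    import time
    import uuid

    LINEAGE_LOG = "lineage_events.jsonl"   # placeholder backend

    def track_lineage(inputs, outputs):
        """Record which data sets a step read and wrote, independent of the ETL tool."""
        def decorator(step_fn):
            @functools.wraps(step_fn)
            def wrapper(*args, **kwargs):
                event = {
                    "run_id": str(uuid.uuid4()),
                    "step": step_fn.__name__,
                    "inputs": inputs,
                    "outputs": outputs,
                    "started_at": time.time(),
                }
                try:
                    return step_fn(*args, **kwargs)
                finally:
                    event["finished_at"] = time.time()
                    with open(LINEAGE_LOG, "a") as f:
                        f.write(json.dumps(event) + "\n")
            return wrapper
        return decorator

    @track_lineage(inputs=["s3://lake/raw/orders/"], outputs=["s3://lake/curated/orders/"])
    def curate_orders():
        ...   # the actual transformation, in whatever engine you like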

Data provenance and data lineage are not just fancy words; they are a framework and tool set for managing your data, with a historical audit trail, real-time tracing/logging, and a control plane graph of where your data is going and how it is interconnected.

So do not build your data lake without the benefits of a Data Control Plane. Set your data free on its journey while still maintaining visibility, traceability, and control.

Thursday, August 15, 2019

Data Lake vs Data Warehouse


Is a data lake part of your data warehouse platform or does the data lake sit beside it? There is a fair amount of ambiguity as to what a data lake is and how it should fit into your overall data strategy. 

I believe data lakes (coupled with elastic cloud storage and compute) are a game changer in both the DW and BI world. Your data warehousing strategy should be part of the data lake not the other way around. While you don't have to throw away everything you have done or learned in your traditional ETL and DW world, the fundamentals have changed. 

To take advantage of your data and build better BI/analytics, you must build atop a solid data lake foundation. And this goes well beyond the many failed Big Data and Hadoop projects that many enterprises have experienced in the recent past.

While Hadoop was a necessary step forward at the time, it was and is an evolutionary dead end - RIP Hadoop. Cloud data lakes are the future and it is more than putting your data into S3 buckets. 

Well architected data lakes are the culmination of a succinct data management strategy that leverages the strengths of cloud services and many traditional DW best practices and data governance policies.

Wednesday, May 15, 2019

R.I.P. HDFS | The Cloud Wins!


HDFS is an evolutionary dead end in the tree of big data. Data lakes based on S3 object storage deliver on the promise of separating storage from compute and make it possible to scale your processing, downstream analytics/AI, and data marts on top of a data lake in an agile and elastic fashion. The HDFS architecture always bugged me when it was first released (besides the fact that it is written in Java). Moving the code to the Hadoop data node (usually only three replicas available, by the way) seemed inherently limiting to me. It was not really better than using big unix SMP servers, other than you got to use cheaper commodity hardware and grow incrementally. Good stuff, but not good enough - one step forward and a half step backwards.
While the idea of moving code to the data sounded cool at the time, it is fundamentally a bad data processing design for a truly scalable data lake that allows for spinning up an arbitrary number of ephemeral compute clusters on top of your storage. There is a place for HDFS and traditional Hadoop clusters if you have a big, fixed, slowly evolving, and predictable compute/storage environment. For the rest of us, a cloud-based data lake architecture will win in the end and allow for agile development to meet the fast-paced needs of today's downstream BI, analytics, and AI/ML applications that need to sit on top of the mythical data lake.

Thursday, August 9, 2018

Choosing Between Spark ML, scikit-learn, and DNNs


Now, these aren't the only considerations when deciding how to build your data science stack and the related tooling you will need around it, but it is where a lot of organizations tend to begin their opening questions. Sometimes the answer may be all of the above. But you first have to reflect on your organization's goals and the level of investment in any transformation effort, especially one that involves such a fundamental shift in how you turn data into business value.

There are a number of considerations that can influence your data science architecture and should be examined before establishing your AI platform. They include:

  1. ETL and data prep tools? AI does not work without data. Find it, mine for it and create it.
  2. Cloud, on-prem or hybrid for building your data science stack?
  3. How big is your data? Really how big is your data? Not everyone has "big data".
  4. What are you modeling? What kind of outcomes are you looking to solve for?
  5. Build, buy, partner. What kind of skills do you want to invest in for in-house data science, ML engineering and ML operations?
The bullets above only touch on much deeper considerations that need to be assessed by any organization looking to transform its business with AI. But let's step back a bit and just discuss the question posed by the title of this post, to avoid turning it into a long, drawn-out analysis that goes down too many rabbit holes.

Spark ML
It is natural for a lot of organizations who have been doing "Big Data" to get their first exposure to data science through Spark's MLlib. Spark ML is a nice module/framework that comes with Spark and is packaged with most major Hadoop distributions. The ML APIs and algorithms include many of the popular model building options, from decision trees, to survival analysis (time-to-live), to recommendation engines (ALS), to unsupervised learning with clustering and topic modeling. Spark ML is nice and convenient for those coming from the Big Data universe. One nice advantage is that you can often leverage Spark's inherent distributed architecture to build models that operate at large petabyte scale when needed. Is Spark ML ideal for all data sources and outcome objectives, and is it the most efficient (you can hack a DNN into it if you have the stomach for it)? The answer, as you might guess, is obviously no. Why? Well, that is for another day to dive into, but suffice it to say that it may not always be the most accurate way to build models and may not always be the best bang for your CPU/GPU buck.

scikit-learn
Then there is good old scikit-learn. Any Python developer with a math or data background, or who has done any statistical modeling or ML work, will know and likely love scikit-learn and all the related Python packages such as numpy, scipy, and pandas, to name the most popular. scikit-learn is a treasure trove of algorithms and APIs. It is an awesome framework for ML developers and data scientists. Does it scale in the same ways that Spark can? Unfortunately, no. But do you always really need it to? Look at your data before you answer that.
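
For data that comfortably fits on one machine, a few lines of scikit-learn go a long way. A minimal sketch on one of its bundled datasets:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Load a small in-memory dataset and hold out a test split.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))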

Deep Neural Nets
Then there are the new kids on the block, DNNs (back from the future). TensorFlow and PyTorch, to name a couple of the most popular, are claimed to be universal function approximators that can model anything and solve for everything. Note, you will need to bring data, and lots of it; they are data hungry. They can handle anything from classification, to generating word embeddings, to creating generative models. There isn't much a DNN and its offshoots can't do, theoretically. Through their natural fit with GPUs they can scale fairly efficiently, and you can sometimes, sort of, distribute them with some extra heavy lifting.
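
And for comparison, a minimal PyTorch sketch of a small feed-forward classifier. The layer sizes and the random stand-in data are placeholders; the real work in DNN projects is feeding them enough of the right data:

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(30, 64),
        nn.ReLU(),
        nn.Linear(64, 2),
    )
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    X = torch.randn(256, 30)             # stand-in features
    y = torch.randint(0, 2, (256,))      # stand-in labels

    # A bare-bones training loop over the whole (tiny) batch.
    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()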

Taking your Models Live
A lot of what we just reviewed is about building and training models. Now, how do you take what you just trained and turn it into a service that predicts, classifies, or generates data? That is a topic unto itself. Operationalizing machine learning models can be non-trivial, though it can also be fairly easy at times; it just depends on the model you are creating. For example, sometimes discrete, bounded models can simply be exported into a database, but often the solution (input and output space) is not finite and requires deploying your trained models as distributed inference engines - and that is a bit more work. Then there is the nagging issue of how and when to update your models. Again, another subject altogether.

Buy or Build
So should you build or buy? The big boys (google, aws, azure) are all making a lot of what we just described available as MLaaS offerings (to varying degrees of completeness). So stay tuned and stay current, as the AI technology world is changing fast.


Saturday, February 10, 2018

Don't Build your NLP Bot like a GUI


There has been an ongoing buzz about conversational user interfaces (CUIs) and how they can enhance human-to-computer interaction. Many of us have experienced them to some extent, both at play with voice interfaces such as Siri and Alexa and at work with messaging interfaces in tools like Slack. When CUIs are infused with a good dose of NLP (plus context awareness, machine learning, etc.), they have the potential to become more than just a primitive command line or simple-minded messaging/voice interface. The potential for building NLP-powered virtual assistants and benefiting from the productivity they can provide does exist, but you will have to get out of your comfort zone as a developer.

The Mental Leap to CUIs

Building CUIs with NLP and AI can take some getting used to when you have spent your career or education building graphical user interfaces (GUIs). Be wary of getting caught in the trap of building your natural language interactions the way you would build a GUI. A virtual assistant, like many NLP bots, can behave in many of the same ways as another human working on your behalf. Now, a GUI is a very capable and functional interface, but it is not a virtual human. You have to keep reminding yourself that the bot you are building needs to behave like a virtual human, because it is easy to fall into the trap of building your NLP virtual assistant like a traditional GUI. It ends up being the worst of both worlds if you do.

Not that there is Anything Wrong with GUIs

Now there is nothing wrong with GUIs. For some tasks and functions they will always be superior to natural language. But for many things we do in our daily lives, language is a superior medium especially if the engaging entity is intelligent.

Fundamentally how you would ask (via text or voice) a virtual assistant to perform an action or make an inquiry on your behalf is different than using a GUI. The way you reach a decision or action point in a CUI can be different than a GUI. Let's take a very simple example to compare.

Say you are using a business application to manage pending requests that you must take action upon on a daily basis. And there are different types or classes of requests you must review. Some requests are specific to you, some to your direct team members, and other requests are company wide actions you must review, but you are required to review and make decision/action on all of them at some point during the day or week.

In a GUI you might navigate to a screen and see some quick filters that let you select which request types to view, or all requests might be shown in some sorted or grouped order on a scrolling page. You would navigate through the information to decide what to do first. There are many ways to represent the requests and perform filtering and sorting. It also depends on your preferences for which tasks you like to tackle first and how you manage your day. Making the GUI optimal for your particular usage pattern depends on many factors, many of which are specific to you. This is where GUIs begin to break down. They can overwhelm us with information or fail to adapt to our unique usage patterns - basically, they lack the ability to easily adapt to the extreme personalization you can achieve with a CUI.

All Things being Equal CUIs Win

Now, this all depends on your implementation of the GUI and CUI. You can obviously build a horrible CUI and a wonderful, ultra-personalized GUI, but all things being equal I claim that a CUI will always be superior. The advantage a CUI has is that the user experience does not have to change significantly to improve personalization. Conversational interaction is something natural to all humans. So as you improve the conversational flow of your virtual assistant and release new versions of the bot, the user can adapt much more easily because the flow is fundamentally the same: the interaction is still a conversation, and it is the bot that is getting smarter and better, always assisting you through the same conversational experience to help you accomplish your objectives.

So let's go back to the request/action application we described earlier. In the CUI case, a smart virtual assistant might look at all the pending requests, tell you how many pending requests of each type you have, and suggest you start with your direct reports first if, say, there is only one of those. The CUI's ability to branch off in different directions is much more dynamic than what a GUI can do, and it can do so without forcing the user to learn a brand new interface with each new release of the software, since the virtual assistant is your interface and your personal guide at the same time. The CUI sort of has built-in help, since it is a virtual assistant by design.

Mixed Mode CUI and GUI

Many of the popular bot platforms, such as Slack and Facebook Messenger, have incorporated some very convenient visual GUI components into their messaging flows, which allow developers to mix conversational question/answer dialog with shortcut GUI actions and interactions. This can be a great way to meld conversational and GUI elements, but it can also suck CUI developers back into the GUI world and get them focusing on injecting too many GUI interactions into the CUI flow. So use these tools wisely and keep the interactions focused on the conversational flow between the user and the human-like virtual assistant.

Grow your Bot Like You are Raising a Child

So be careful: always think like you are building a smart virtual assistant, not a GUI. You will not have all the smarts built into the bot from day one, but make it your mission to keep the conversational flow focused on the interactions and dialog between the human user and the virtual assistant, and everything else will follow as your bot gets smarter.

Saturday, February 3, 2018

Can AI Put the Human Back into HR?


Over the past couple of decades Human Resources (HR) has moved from paper and manual processes to more automation, providing employees with more and more web- and mobile-powered experiences. But has HR lost its human essence and personalization in the process? Nowadays managing a company's "human" assets seems to involve less and less human interaction between employees and the actual people in the HR organization. Try finding an HR person when you need one these days!

In the Beginning there were Humans

I recall that in my first few jobs, at both major corporations and startups, there was always an HR person I could reach on the phone, or whose office I could simply walk over to, to get small and big matters resolved. HR personnel were typically visible and accessible. Human Resources representatives interacted with employees at a personal level and often knew you on a first-name basis.

It seems that with more technology, HR has lost its human-to-human interaction and become more impersonal and mechanical. Today, if you need a payroll or benefits issue resolved, you typically have to submit a "ticket" in some online system and wait for someone you have never met to contact you back over email or, if you are lucky, over the phone. Or you can click your way through a labyrinth of HR GUI applications to find what you need.

Virtual Assistants are the New Humans

So what is the solution: more or less technology? Maybe the answer is better technology. AI-powered HR virtual assistants have the potential to be your personalized guide for everything from answering general HR questions to requesting time off and quickly finding the information or actions you need. Machine-learning-powered intelligent assistants could help resolve problems and questions with your payroll, for example. Interactive NLP-powered bots could guide you through an often complicated benefits and open enrollment process, using your preferences and history to match you with the optimal recommendations for your particular situation.

The Future is an AI Powered Voice and Messaging-First World

The future of HR is not more technology, but more intelligent technology: machine learning, natural language processing, personalized recommendation engines, and other AI-enabled technologies that bring hyper-personalization and a human-to-computer interaction model that goes beyond impersonal graphical user interfaces. Voice and messaging-first interfaces (endowed with NLP) are a step in the right direction and can bring a bit of humanity back to your technology-overloaded workplace.

Monday, December 25, 2017

Recommending Actions for Your NLP Bot

At first glance, the machine learning methods typically used in NLP applications (such as chatbots) and those used in recommender systems (for recommending products) are not often leveraged together in the same applications.

NLP is the machine learning domain that makes your virtual assistant capable of engaging in human-language conversation, while recommender systems, as the name suggests, recommend products/services you will hopefully like (thus saving you the trouble of discovering them on your own); but it is not often that you see NLP and recommender systems used together.

Where Conversational UI Meets Recommendations

But let's think about that for a moment. Is there a solution space where NLP and recommender systems intersect, and why would anyone want to combine them? I will make the case that every so-called AI-powered virtual assistant (aka chatbots and their kin) needs context, and part of that context can be provided by a personalized recommender system that helps guide the conversation and streamline the conversational user experience.

A Messaging First World

We are in the midst of a messaging application revolution. A new generation of users is making messaging-based applications their preferred medium for communicating with the people, places, and things around them, especially in the digital (and contextual) world. And there is no lack of applications, from fintech to social, leveraging and rediscovering the command line interface as the new mode (or not-so-new mode, for many command-line geeks) of communication between humans and computers.

Your Next Question or Answer is a Recommendation Away

Ok, so where do recommender systems fit into the world of NLP and conversation-driven user interfaces? Well, conversational applications are not without their own challenges. Typing and speaking take effort from both the user and the virtual assistant in order to engage in timely and efficient interaction. But what if your virtual assistant knew what you wanted to do next (or what you might or should like to do next)? What if your NLP-powered bot could suggest actions you might want to take and save you the trouble of verbalizing them - maybe give you a quick one-click shortcut (voice-powered as well) to drive and continue the conversation?

This is where recommender systems can play a vital role in making your virtual assistant not just clever at intent understanding and named entity recognition from voice or text, but also able to present a sense of intelligence by remembering your past behavior or predicting what you might (or should) do next, relating your behavior to what others in similar roles and situations have done. So even with no prior knowledge of "you", the virtual assistant might prescribe next actions based on what others in similar roles and situations have done. Does that not sound like a recommender system?

Prescribing vs Predicting

Recommender systems are inherently about prescribing things (which can include actions, not just items) applicable to your context at a given point in time (time itself being a critical piece of context). I foresee a future where both business- and consumer-oriented virtual assistants and NLP bots leverage highly personalized recommender systems to take human-to-computer interaction to its next logical evolution (as promised by many sci-fi books and movies :)

Matrix Factorization and LSTMs are Your Friend

So for all you NLP bot developers, make things like matrix factorization and collaborative filtering your friends. Hybrid recommender systems based on collaborative filtering and content-based filtering (product and customer metadata) have been the state of the art for the past few years (since the Netflix contest). However, the future of recommender systems will be powered by deep learning and concepts like LSTMs and product and item embeddings. Research in this space is evolving fast. A mix of shallow and deep learning techniques is racing to enable this world of intelligent NLP bots and efficient conversational user interfaces.
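
For the curious, here is a toy matrix factorization sketch, plain SGD over observed (user, item, rating) triples, just to show the core idea behind collaborative filtering. Real systems would reach for ALS, implicit-feedback variants, or learned embeddings instead:

    import numpy as np

    def factorize(ratings, n_factors=8, lr=0.01, reg=0.05, epochs=50, seed=0):
        """ratings: list of (user_idx, item_idx, rating) triples."""
        rng = np.random.default_rng(seed)
        n_users = max(u for u, _, _ in ratings) + 1
        n_items = max(i for _, i, _ in ratings) + 1
        U = rng.normal(scale=0.1, size=(n_users, n_factors))
        V = rng.normal(scale=0.1, size=(n_items, n_factors))
        for _ in range(epochs):
            for u, i, r in ratings:
                u_row = U[u].copy()
                err = r - u_row @ V[i]            # prediction error on this rating
                U[u] += lr * (err * V[i] - reg * u_row)
                V[i] += lr * (err * u_row - reg * V[i])
        return U, V   # predicted score for (u, i) is U[u] @ V[i]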

Wednesday, September 13, 2017

AI and Machine Learning Madness


The hype around machine learning and AI is just off the charts these days. It reminds me of several years back, when just about every software product started putting the term "Big Data" in front of (or at the end of) all of its product literature and company taglines.

AI is the Art of Using ML at the Right Time and in the Right Context

Today, startups are claiming to change every industry and the world just because they put some "machine learning" inside their products. Machine learning has the potential to give software superpowers, but alone it does not create great software.

Listen, I am just as excited as the next geek about the potential of deep learning and the democratization of data science, but please stop putting machine learning in your company's slogan, like it was the secret ingredient that you were looking for. Sorry, this is becoming my new pet peeve.

Machine learning, at the end of the day, is just one more tool in our arsenal of tools and skills for building more efficient and more intuitive solutions and services for our customers and for humanity as a whole. Infusing machine learning into your products is about more than technology. It takes a new way of thinking and a new way of running engineering teams and engineering processes.

A Great Product is Much More than AI

Please talk about the benefits of using AI/ML/DL; don't just claim that because you are using machine learning you are changing the world or have a better product. There is that little thing called building a product that delivers value to the end user, and you still have to get that right. Believe it or not, this still requires human creativity and execution. Claiming you are infusing your product with ML will not necessarily make it a better product or the world a better place.

AI is in the Product - It Should not be the Product

I do believe that machine learning related technology will be in every future product, just like we have transistors in every computer today. Machine learning will allow us to build better and more efficient products. How you use machine learning to build a great product will matter much more than the fact that you are using it. Keep that in mind when you come up with your next startup tagline and company slogan!

End of rant.

Monday, August 28, 2017

AI vs Paradox of Choice


The paradox of choice is a problem we see more and more of in our modern world. It goes beyond which products Amazon should recommend or which friends Facebook should suggest. In the business world and in enterprise applications, this is also a challenging problem as our applications and processes grow in complexity. The potential for machine-learning-powered recommender systems to augment human decision making is one of the next frontiers for AI in the enterprise.

Recommender systems can do more than just suggest which articles you should read on Linkedin or which jobs are most suited for you. In the future, machine learning (and more likely deep learning) powered recommender systems will guide enterprise decision making by helping business process owners take the most effective actions and decisions in a timely manner and with hyper-personalization.

Recommender systems will move from solving B2C optimization problems (how they are typically used today in our data-saturated and over-marketed world) to solving problems in B2B and enterprise applications. Ultimately, recommender systems are about prescribing (not really predicting) an optimal decision at the right time and in the right place/context, so they can naturally handle a variety of B2B scenarios, from optimizing workflow paths, to streamlining supply chain actions, to augmenting human decisions in common day-to-day business operations. Enterprise decision makers are in vital need of these AI superpowers. Stay tuned, they are coming :)