Intro to AI for Digital Health Marketers

Published on March 9, 2019 | By mbalogh

Overview

To best introduce you to Artificial Intelligence (AI), I’m going to start out with a little history lesson.  Next, I’m going to introduce and define a few terms like machine learning, deep learning, and neural networks.  Then I’m going to cover structured and unstructured data and get into natural language processing.  And, finally, I’ll sum it up with a couple of good examples of AI in healthcare and some steps for really making it happen within your organization.

A Brief History of Artificial Intelligence

In 1948, in response to a comment at a lecture that it was impossible for a machine to think, John von Neumann stated, “you insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!”.

Twenty years earlier, in 1928, von Neumann had founded the field of game theory as a mathematical discipline.  Game theory is “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.” Originally, it addressed zero-sum games, in which one person’s gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations and is an umbrella term for the science of logical decision making in humans and computers. Game theory ultimately led to the study of both perfect and imperfect information games. (Games where all players have the same information are called “perfect”; games where they do not are called “imperfect”.) Both of these game types, but especially imperfect ones, can have very complex combinatorial structures (like chess, go, or backgammon) for which no provably optimal strategies have been found.

In 1950, in a paper titled “Computing Machinery and Intelligence”, Alan Turing proposed a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses.  At the time he considered a text-only interface but, as we’ll see later, Turing’s idea has grown dramatically since the 1950s.

AI continued to grow and evolve throughout the 1950’s and 60’s.  If you’d like to check out a detailed history, here is a great timeline on Wikipedia: https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence.

Then, in the early 1970s, the computer program Mycin was created as a doctoral project at Stanford University.  Mycin was an early expert system that used artificial intelligence to identify bacteria that caused severe infections, such as bacteremia and meningitis, and to recommend antibiotics, with the dosage adjusted for the patient’s body weight. Its performance was shown to be comparable to, and sometimes more accurate than, that of Stanford infectious disease faculty.

The creator of Mycin, Edward Hance Shortliffe, went on to found biomedical informatics (now known as health informatics).  Health informatics is a multidisciplinary field that uses health information technology (HIT) to improve health care via any combination of higher quality, higher efficiency, lower cost, greater availability, and new opportunities. Unfortunately, Mycin was never used in practice because it could not be integrated with patient records and physician workflow, a barrier that is far less of an issue today.

Following Mycin, the next great breakthrough in AI came in 1979, when Hans Berliner created BKG, a backgammon-playing AI.  (As a side note, I love backgammon.  If anyone else out there is a fan, reach out.  I’d love to roll a few games.)  That year, BKG became the first computer program to defeat a world champion in any game.

From then on, AI would continue to dominate perfect-information games (games where all information is shared).  IBM’s Deep Blue chess machine defeated world champion Garry Kasparov in 1997.  In 2011 IBM did it again when the Watson AI competed on and won the television game show Jeopardy!, beating two of its most successful champions, Brad Rutter and Ken Jennings.  In 2015, Google DeepMind’s AlphaGo defeated the three-time European Go champion Fan Hui, a 2 dan professional, 5-0, and stepped that up in 2016 by defeating Lee Sedol, a 9 dan professional Korean Go champion.

Most recently, in 2017, Carnegie Mellon’s Libratus won against four top players at no-limit Texas hold ’em. Unlike Go and chess, poker is an imperfect-information game, one in which some information (the other players’ cards) is hidden.  Imperfect games, like healthcare and marketing, are much harder to model.

The reason it is important to understand the history of AI is not just because it’s fun to have interesting factoids at geek dinner parties.  It’s also directly relevant to what follows.  Early versions of AI date back to the 1950s, but they were not AI as you have come to know it today.

Defining Some Terms

Expert Systems

From the 1950s through the 1980s, AI primarily consisted of rules-based programs called “expert systems”.  Expert systems mimic the human decision-making process through hard-coded (hand-coded) if-then rules or decision statements.  An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts.  Mycin was a good example of this.
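
To make that concrete, here’s a minimal sketch in Python of the two subsystems working together.  The rules and facts are invented purely for illustration (they are nowhere near Mycin’s actual rule set); they just show how hand-coded if-then statements combine with an inference engine to deduce new facts.

```python
# Minimal expert-system sketch: a hand-coded knowledge base of if-then rules
# plus a simple forward-chaining inference engine. The rules and facts are
# invented for illustration only, not Mycin's actual rules.

knowledge_base = [
    # (set of conditions that must all be true, fact to conclude)
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis", "gram_negative"}, "recommend_antibiotic_x"),
]

def infer(facts, rules):
    """Apply the rules to the known facts until no new facts can be deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "stiff_neck", "gram_negative"}, knowledge_base))
# -> includes 'suspect_meningitis' and 'recommend_antibiotic_x'
```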

Figure 1: Artificial intelligence timeline, via NVIDIA (source: https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai)

Machine Learning

Starting in the 1980s and dominating through about 2010, a new approach to AI called machine learning became the mainstream.  Machine learning does not rely on hard-coded rules and decisions; instead, it gives computers the ability to learn without being explicitly programmed.

Instead of explicit programming, machine learning relies on learning data sets and training.  The computer actually learns through feedback on a training data set, which can be provided by another machine or a human subject matter expert.  Once trained, the AI can perform accurately, with high confidence, on tasks and data sets it has not yet seen. The core idea, in layman’s terms, is to keep asking the computer questions and telling it when it gets a correct or incorrect answer.  Over time it will learn to increase its confidence (in the statistical meaning of the word) until it reaches a satisfactory point.  Then you test it with questions it has never seen before.
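
If you’d like to see what training and testing look like in practice, here’s a minimal sketch using the scikit-learn library (my assumption: you have Python and scikit-learn installed).  The model is never given explicit rules; it learns from labeled examples and is then scored on examples it has never seen.

```python
# A minimal supervised-learning sketch with scikit-learn. The model learns
# from labeled examples and is evaluated on a held-out set it never saw.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold back a test set the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)              # "training" = feedback on labeled examples

print(accuracy_score(y_test, model.predict(X_test)))  # accuracy on unseen data
```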

To really understand what this looks like I encourage you to watch the IBM Documentary, “IBM Watson: Smartest Machine on Earth”, on YouTube (https://www.youtube.com/watch?v=bTIxX857KcM).

Neural Networks & Deep Learning

The real term I’d like to define is Deep Learning, because that’s one you’ll hear a lot. But to understand deep learning, you’ll first have to understand the basics of Neural Networks.  Neural Networks are, simply put, computer hardware and software that are structured similarly to how the human brain works.  Given this, the goal of the neural network is to solve problems in the same way that the human brain would.

The basic foundational unit of a neural network is the neuron. The architecture of a neuron is conceptually quite simple.  There is a set of inputs and an output; in the middle, there is a function (really, any function, for the sake of this explanation).  What’s important is that each neuron’s function weights the inputs and produces an output. That output then becomes the input of the next linked neuron.
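
In code, a single neuron really is that simple.  Here’s a sketch of one; the weights and bias are arbitrary placeholders, because finding useful values for them is exactly what training does.

```python
import math

# A single artificial neuron: weight the inputs, sum them, then pass the sum
# through a function (here a sigmoid) to produce an output. The weights are
# placeholder values; training would adjust them.

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid "activation" function

output = neuron(inputs=[0.5, 0.8, 0.1], weights=[0.4, -0.6, 0.9], bias=0.1)
print(output)  # this output becomes the input of the next linked neuron
```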

Bilwaj Gaonkar of UCLA explained it on Quora with an analogy: think of a bucket which you are filling with water. After a certain amount of water goes into the bucket, it overflows. Now imagine that every time a bucket overflows you can start filling another bucket with the overflowing water by connecting it with a hose. One could construct a large series of buckets, or entire “deep” networks of buckets, connected by hoses linking the outputs of one bucket to the inputs of others.

Now suppose that five streams of water are running down a mountain in America and a gardener sets them up to flow into five buckets.  Now imagine the gardener wants to water three gardens. One garden contains cactus plants (require little water), another contains chrysanthemums (require medium water), and another contains water hyacinths (require a lot of water).  We call this the training data set.

The gardener starts with hoses and buckets of equal size to construct a network that takes the five streams and waters the three gardens. The first year all his plants die because he did not have the right set of hoses going to the right plants. So the next year he changes the hose sizes and bucket sizes, and the cacti live but the other two gardens don’t bloom.  The gardener keeps tinkering with the hose and bucket sizes until all three gardens bloom. His “network” of buckets is now trained.

When a friend in another country runs into the same problem, the first gardener has already solved it.  He can give the second gardener the bucket and hose sizes so his friend’s gardens can flourish as well.  This second garden is the equivalent of introducing a testing data set (something the AI has not seen before) so we can see how it performs.

Deep Learning

Now that we understand neural networks, deep learning is just a step away.  As I mentioned before, the output of a neuron is connected to the input of one, or more, other neurons.  Similar to how neurons are organized as layers in the human brain, so too are neurons in neural networks.  They are layered like a lasagna.  Neurons on the bottom layer receive signals from the inputs.  These inputs could be voice, an image, a video, data, text, etc. As the data passes through each neuron, the function inside performs some action. In doing this, each layer modifies the data, then passes it up and out the top.  By the time the data has moved up through the layers and reached the top, the system has high confidence in the correct solution.  These layers are what make up the “deep” in deep learning: the more layers, the “deeper” it gets.  And, as I already described above, the links between the neurons (or how we arrange the hoses in the example) are the “learning” part.

So there you have it: “deep learning”.
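
For the curious, here’s a toy sketch of that layered, lasagna-like structure: each layer weights its inputs, applies a function, and hands the result up to the next layer.  The layer sizes and random weights are placeholders for illustration only; learning would consist of adjusting those weights until the top-layer output is right.

```python
import numpy as np

# A toy "deep" forward pass: data enters the bottom layer and is transformed
# by each layer's weights and function on its way to the top. The weights are
# random placeholders, not a trained network.

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 3]          # input layer, two hidden layers, output layer
weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    for w in weights:
        x = np.maximum(0, x @ w)    # weight the inputs, apply a ReLU function
    return x

print(forward(np.array([0.2, 0.5, 0.1, 0.9]), weights))
```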

Let’s Talk Data

When we think about all the problems we’ll want to solve with AI, there’s no shortage of ideas. AI has already been used for games, cancer diagnosis and treatment, and even cooking the world’s best cookie. With so many problems to solve, one of the most important things you’ll have to consider is what kind of data, and how much of it, you have to support the AI.  Without information, AI, like any other intelligence, is just an empty brain.

The first thing you’ll have to know about data is there are two main types: structured and unstructured.  Both are very important, but the approach to using them is dramatically different.

Structured data has a high degree of organization.  Think of an Excel spreadsheet with rows and columns. Each column has a header, so you have a good idea what data will be below it. This makes it predictable because it follows a pre-set structure. Often the data itself is typed, which means it’s pre-defined as an integer, percentage, character, timestamp, etc. Strongly structured data is easily queried and indexed, which allows the data to be recognized and used by the AI.

Unstructured data is just the opposite.  Unstructured data does not have a pre-defined format.  This blog post, for instance, is unstructured data.  It can contain text, dates, numbers, HTML links, images, videos, and many more data types in no particular order.  This is what makes unstructured data notoriously difficult to use.  The lack of organization results in irregularities and ambiguities that make it especially difficult for computers to process and understand as compared to structured data.
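
To make the contrast concrete, here’s a tiny, made-up example.  The structured record below has named, typed fields a program can query directly; the free-text note has to be understood before a program can do anything useful with it.

```python
# Structured vs. unstructured data in miniature. Both examples are invented.

structured_record = {
    "patient_id": 10452,            # integer
    "visit_date": "2019-03-09",     # timestamp
    "systolic_bp": 128,             # integer, mmHg
    "smoker": True,                 # boolean
}

unstructured_note = (
    "Patient came in complaining of chest pain and shortness of breath. "
    "Smokes one pack a day; no family history of heart disease."
)

# Structured data is trivially queryable...
print(structured_record["smoker"])

# ...while the same fact in free text needs parsing and understanding.
print("smokes" in unstructured_note.lower())   # crude keyword check, not real NLU
```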

It is important to recognize the difference between structured and unstructured data because, in 1998, Merrill Lynch put forward a rule of thumb that between 80% and 90% of usable business data may originate in unstructured form.  More recent estimates put that number closer to 70% to 80%, but the key takeaway is that unstructured data makes up most of the data we have access to but is also the hardest to analyze at scale.  If you think about it, web pages, blog posts, journal articles, medical records (at least the free-form fields), books, emails, audio, video, and much more are all unstructured data.

An example of unstructured data could be the spoken or text words of a doctor:

“I reviewed Jane Smith at the Jonestown clinic today.  She was referred to us from Doctor Lee for further evaluation. The patient came in complaining of chest pain, shortness of breath, and lingering headaches. She does not have a cough, smokes one pack of cigarettes a day and has no family history of heart disease.  The patient has been experiencing similar symptoms for the past 12 hours….”

Let’s think about what’s really going on here.  As you read it, it makes sense and the message is pretty clear. But how would an AI read this?  Just some of the things you do naturally that the AI needs to learn are language identification, lexical analysis, classification, disambiguation, entity extraction, fact extraction, concept extraction, relationship extraction, and probably a whole bunch more “extractions”.  Together these things are called Natural Language Understanding (NLU).  What all those complicated terms are really doing is figuring out the “who” in these unstructured words: “I, Jane Smith, she, us, Dr. Lee, the patient, family”.  The AI is also trying to figure out the “what”: “clinic, chest pain, shortness of breath, headaches, cough, cigarettes”.  There’s also the “when”, referenced by “today, day, history, 12 hours”.  And so on.

I could keep going, but you should be starting to get the point.  AI has a lot of work to do to understand what we as humans may take for granted as a few short sentences.
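
To give you a feel for what entity extraction looks like in code, here’s a minimal sketch using the open source spaCy library (my assumptions: spaCy and its small English model, en_core_web_sm, are installed).  A general-purpose model won’t catch clinical concepts, but it shows the “who” and “when” being pulled out of free text.

```python
# Minimal entity-extraction sketch with spaCy. Requires:
#   pip install spacy && python -m spacy download en_core_web_sm

import spacy

nlp = spacy.load("en_core_web_sm")

note = ("I reviewed Jane Smith at the Jonestown clinic today. She was referred "
        "to us from Doctor Lee for further evaluation. The patient has been "
        "experiencing similar symptoms for the past 12 hours.")

doc = nlp(note)
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. "Jane Smith PERSON", "today DATE"
```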

Natural Language Processing (NLP)

One of the core tenets of an AI is the ability to simply talk to it. The ability to speak to a computer and have it talk back is something we’ve been thinking about since the 1950s. You may recall HAL from 2001: A Space Odyssey, Iron Man’s JARVIS, or the unnamed computer that ran the bridge of the Enterprise in Star Trek.  These are all Hollywood special effects, but what does it really take to do this?

A big part of considering what you want to use an AI for is understanding how users will interface with it. Once we have structured and unstructured data feeding through NLU, we have to consider how people will interface with, and provide input to, the artificially intelligent computer. Interfaces for AI can be quite broad.  It may be a chatbot, a social feed, a database search, or your AI could be answering the phone.  AIs are also used in robotics, in image labeling, and much more.

NLP is the field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages and, in particular, with programming computers to fruitfully process large samples of natural language.  There are two main parts to NLP: taking your speech and turning it into text, and taking that text and making it meaningful to the AI. The first part relies on a technology called Automatic Speech Recognition (ASR), which deals with the methodologies and technologies that enable the recognition and translation of spoken language into text by computers.  From there, natural language understanding (NLU), which I described above, takes over.
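
As a sketch of those two halves working together, here’s what an ASR-then-NLU pipeline might look like using the open source SpeechRecognition and spaCy packages (both assumptions on my part, as is the question.wav recording; the Google recognizer also needs an internet connection).

```python
# Sketch of the two halves of NLP: speech-to-text (ASR), then NLU.
# Assumes the SpeechRecognition and spaCy packages are installed and that
# "question.wav" is a hypothetical audio recording of a spoken question.

import speech_recognition as sr
import spacy

recognizer = sr.Recognizer()
with sr.AudioFile("question.wav") as source:       # hypothetical recording
    audio = recognizer.record(source)

text = recognizer.recognize_google(audio)          # ASR: spoken language -> text
doc = spacy.load("en_core_web_sm")(text)           # NLU: text -> entities, structure

print(text)
print([(ent.text, ent.label_) for ent in doc.ents])
```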

I can keep going on this, because we as humans take in information in a variety of ways, many of which have computerized counterparts or metaphors.  Computer vision, and the ability it gives self-driving cars to see the road, recognize people, and obey signs, is just one of those examples.

Natural Language Generation (NLG)

An NLG system is the opposite of NLP.  It is a translator that converts data into a natural language representation. Where NLP breaks things down, NLG has to make decisions on how to put a concept into words; it is creating, rather than dissecting, words, sentences, and ideas.

NLG language representation can take several forms.  In some cases, it could be as simple as filling variables into a form letter.  An example of this could be a tweet that denotes the winner of a football game with the score.  Most of the tweet is pre-written and the AI simply adds the score and sends it out.
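Here’s roughly what that form-letter style of NLG looks like in code; the teams, scores, and wording are all invented for illustration.

```python
# Minimal template-based NLG sketch: most of the sentence is pre-written,
# and the system only fills in the variables. All values are invented.

def game_result_tweet(winner, loser, winner_score, loser_score):
    return (f"Final: {winner} beat {loser} {winner_score}-{loser_score}. "
            f"Full recap to follow.")

print(game_result_tweet("Eagles", "Giants", 27, 24))
# Final: Eagles beat Giants 27-24. Full recap to follow.
```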

More complex systems can actually generate the text itself.  In this case, the NLG engine will have to consider the rules of grammar for the language it’s outputting (because AI can be multilingual, which is one of its strengths).  Building on this further is actually giving the AI a voice, which has historically been difficult because it involves not just content and grammar but also the expressiveness of a voice.

It may sound obvious, but the key to natural language generation is getting it right.  There’s a theory from robotics and animation called “the uncanny valley”, and I believe it can be applied to NLG as well.  As the theory goes, we can continue to make human replicas seem more and more human.  But there is a point at which the replication gets very good yet still does not quite seem real.  At that point we, as humans, flip out (which is a technical term).  We actually don’t like it.  It’s close enough to real, but not real, so we resist and repel it.  The uncanny valley was framed for animation, but I think it applies to language as well.

Getting Started with Artificial Intelligence

Artificial Intelligence, Machine Learning, Deep Learning, and Natural Language Processing are still all keywords and catchphrases at this point. If you want to go deeper and actually leverage AI for a project, then there are some considerations when getting started.

What AI Is and Isn’t

Congratulations, you’ve already taken the first step in implementing an AI project.  It may sound silly at first, but one of the most important steps to implementing AI is understanding what it is and what it is not, and what it can and can’t do well.

Because of the limited scope of current AI implementations, the best problems are those that have a defined input and output, where the decision between them can be made very quickly and repetitively.  Here are some good examples from Andrew Ng’s Harvard Business Review article, What Artificial Intelligence Can and Can’t Do Right Now:

Andrew Ng - What Machine Learning Can Do, via HBR.org

Much to the chagrin of the AI fear-mongers, there is no good general AI. The learning methods of AI make it good at doing one thing very well.  This is why many AI gurus will start off by asking you to identify a problem.  While I don’t disagree with that, I also believe it’s an unreasonable request.  AI is super complex, and despite the research and detail I’ve put into this article, most people will still only have an extremely high-level idea of what AI is and where it can or should be used.  There’s a great Dilbert cartoon by Scott Adams that explains this idea quite well.

Dilbert, by Scott Adams

The main idea is that it is typically not appropriate to ask a non-expert how to best implement something as complex as an AI.  At the same time, effective implementation of an AI requires the input of business owners and subject matter experts.  What makes this so difficult is that AI, as a final product, is not well defined.  Its capabilities, however, as described above, are.  So here are a few things to consider when analyzing a problem to see if AI will be a good fit.

Data Sources

Data is the fuel for any artificial intelligence system.  Without data (or with bad data) there’s just an empty box of cool technology that doesn’t do a whole lot.  Of course, with machine learning and deep learning principles, we’re no longer limited to considering only structured data (now you know what that is), which opens up many new possible sources of information.

At this stage, the right question to ask is what data do I own or have access to that will help in supporting the AI’s decision making.  If, for instance, you’d like to use an AI to support a call center, what will be the primary sources of data from which to pull answers?  Do you log the most common questions and answers?  How do you evaluate the success or quality of the answer?  How many languages will it need to support?  And the one that many people forget about when it comes to AI, how will you generate a question and answer set significantly large enough to train and test it?

It should also be noted that when I refer to data, I should really qualify that with HUGE amounts of data.  Smart photo recognition systems are neat to play with on Facebook and Google Photos, but the reality is that behind the scenes there are tens, if not hundreds, of thousands of example photos, already tagged with the correct answer, that have been used for training and testing the AI.

Data, it should be noted, is also a scarce resource.  In most industry cases, technology can be copied, but data is a differentiator.

Problem Identification

Problem identification is listed here as the second thing to consider, but really you’ll need to think about the problem and the data together.  Without either of these things, your project will never get out of the gate.  The few times I’ve been lucky enough to run an AI project, we started with an intensive working session to cover these two topics: what an AI is, followed quickly by brainstorming problems.

As noted above, a good AI problem has an input, some type of interpretation of that input based on what the AI has learned, and some type of response.  Another feature of a good problem is that it is repetitive, or involves consuming and making sense of large amounts of information and then applying that knowledge to a particular decision.

Oftentimes the user journey will cross with the brand goals to come up with some list of desired behaviors and primary barriers.  From there we architect a solution, most commonly leveraging channels and content to play some role in behavior change with a tracked outcome.  While this is a sound model for AI problem identification, this kind of thinking will naturally eliminate many excellent options for AI implementations that create internal efficiencies, such as approving expense reports.

Recall that AI is good at repetitive tasks that require the evaluation of an input against a large set of data.  Nowhere in that definition did anyone say it has to be customer-facing only.  An AI, for example, can replace a call center with a 24-hour chatbot that helps users find the right solution.  We’ve seen AI play as complex a role as cancer diagnostics and compound identification based on genetic research.  We’ve also seen it help plan out your meeting calendar, tell you the weather, and write articles for newspapers, Twitter, and blogs.  In short, AI is helping us do a lot, which is why it’s best to make sure your team understands AI before identifying problems.

A 4-Step User-Centric Approach to AI Projects

Now I’m not saying this is the exact right approach for everyone, but I would like to leave you with a tangible takeaway on how to start from scratch and get an AI project off the ground.  While your problem or industry may be different, hopefully, these steps will apply. If you have a different approach, I’d love to hear about it.

My assumption here is that the AI project you are looking to take on is of some significance.  There are plenty of “off the shelf” AI-driven tools out there, but I don’t consider them true AI projects unless the AI is core to the functionality. An example of this is a service I evaluated that leverages natural language processing to map customer sentiment instead of relying on traditional customer surveys. While very interesting and quite neat, the AI itself was not central to the functionality I was looking to implement; it was an added feature or selling point above and beyond the core surveying functionality I needed.

Step 1. Document the User Journey

Most organizations I’ve worked with have a pretty good understanding of the sales cycle; some even have it written down and documented.  From there they have an informed perspective of the user journey as it applies to the sales cycle.  Very few have actually documented it with respect to the content and experience, and even fewer break it down to specific decisions, barriers, and influences.

So when I say document the user journey, I mean really documenting it in a flow diagram that maps all the multiple and various channels, messages, and key customer interaction points with your organization.  Examples could include digital channels like CRM, web, and banners, but it may also include direct mail, rep visits, and incoming phone calls for information.  This is NOT an easy thing to do and, to make it more difficult, it’s constantly changing each time you have a new initiative.  But I believe that if you are serious about taking on a project like AI integration, taking the time to document the user journey is invaluable.

I should note that a user journey does not have to be external.  There are many internal processes, like cancer research, that can benefit from AI.  Plotting your user journey for these things can be just as helpful as doing it for external customers. A well-documented user journey is like a good business plan – the true value is not in the having of it, but in the making of it.  That’s when you’ll find the opportunities.

Step 2. Review the User Journey Considering Where AI could Help

Once you have the user journey you’ll start to see the opportunities.  Are there areas where your customers can get “stuck” and may need help? Do you have important tasks that just won’t scale well with increased business or geography?  Are there repetitive tasks that could be automated or made more efficient?  Are there choices that need to be made which require a lot of data, processing of information, or validation of submissions?  Is there an interaction that could be reinforced with information to make the experience better?  These areas and more are all great opportunities to consider whether AI is right for you.

Step 3. Data

I said it before and I’ll say it again now: data is the fuel that makes the AI motor run.  I can solve the problem of getting to Mars by building a rocket, but without fuel, it’s just an expensive paperweight.  Once you have your journey and have used it to identify points where AI can help improve it, the next thing to do is consider the data.  Take a look at each of your potential AI projects and consider what kind of data you’ll need to support it.  Ask yourself whether you have the data or need to get it.  If you’ll need to leverage multiple data sets, figure out what it will take to integrate them.  Data can be a differentiator, but it can also be a showstopper, so think about data early in the AI process.

Step 4. Find an AI that Meets Your Needs

As you know by now, AI is not just one thing; it could be anything.  Now that you have a really firm grasp of your user experiences and have identified a few potential points for improving them, it’s time to start your research.  There are hundreds, and soon to be thousands, of AI vendors. Some of them just use AI, and some of them build AI.  Now that you have a much better idea of what AI can do (check out the above sections if you skipped them), you are much better prepared to find the right solution for you.

Some of the Most Popular AI Platforms

Most people, if they’re not thinking of HAL or the Terminator, will recall IBM’s AI either as Deep Blue beating Kasparov in chess or Watson beating Jennings on Jeopardy!.  Through a platform called Bluemix, IBM has opened up Watson for everyone to use.  Of course, you’ll have to pay for the privilege, but along with a whole lot of functionality, you’ll get IBM Watson’s services team walking you through the process (and a really excellent and informative demo if you can get to their NYC offices).

Here are a few more options in no particular order:

TensorFlow is an open source machine learning platform.  It was originally developed by the Google Brain team and later open sourced to the general public.  This is a good option if you are looking to geek out a bit and dig into AI programming on a budget.  Another open source option is Alchemy.  It has many of the features you would want as a developer but, like most open source projects, the support is community-based.

Microsoft’s Azure Machine Learning platform is more focused on the business customer, much like IBM’s Watson.  It’s a fully managed cloud service that enables you to easily build, deploy, and share predictive analytics solutions among other things… many other things.

Amazon’s AWS AI service is also pretty comprehensive, including features for image recognition, text-to-speech, voice and text chatbots, machine learning, and more.  What’s nice about this is that Amazon has deliberately set it up in levels, ranging from AI Services down to AI Infrastructure, depending on how deep, technical, and customized you want to go.

The last group consists of everyone else.  According to VentureScanner, there are nearly 1,000 different AI companies and start-ups ranging across approximately a dozen groups.  In most cases, unless you are an innovative team looking to do something completely new or very specific, you’ll be going with one of these: vendors that have taken an AI platform as a base and built a useful application on top of it.

957 Artificial Intelligence Companies from VentureScanner

A Word of Caution

Finding the right vendor is the real key — and I can tell you, it’s not easy.  There’s an old joke that sums up the AI (and many other) marketplaces very well:

A group of people die and go to heaven.  At the pearly gates Saint Peter says, “we’re not quite sure what to do here.  You’re not good, you’re not bad, so we don’t know what to do with you.  Rather than make a choice we’re going to give you the option.  Explore both heaven and hell for one day, and on the third day make a choice where you’d like to spend all eternity”.  So the team, already pretty confident in their decision, decide to check out heaven first.  They walk around and it’s very serene.  People are reading on fluffy clouds in the sun listening to harp music.  Everyone seems content and happy.

The next day they all head downstairs to evaluate the underworld.  Much to their surprise, there is a party going on. The weather is perfect, people are dancing and drinking, the devil is the DJ, and the music is playing loud. Everyone is more than content; they’re having an epic time!  The devil waves them in and, astonished, they partake in the party.  After a few rounds of golf with him, the day goes by in a flash, and they wave good-bye to the party and head back to the gates to make a choice.  “Well”, they say to Saint Peter, “we can’t believe we’re saying this, but we’re choosing the netherworld!”  “Very well,” says Saint Peter, and the group is cast down.

No sooner do they get there than they realize it’s hot, there’s no more music, and everyone is miserable.  “Hey devil”, they shout, “what gives!  What happened to the people, the dancing, and the partying?”  “That was the demo”, he replies.

The term for this is vaporware (per Wikipedia: software that has been advertised but is not yet available to buy, either because it is only a concept or because it is still being written or designed). While the above is a joke, just remember that AI can do a lot and, as such, teams will promise a lot.  If you find an organization that is good with AI, odds are they’re open to trying a lot of things.  This is good and bad.  Without focus, they’re stuck in perpetual beta and you’re essentially buying their AI expertise.  But when it comes time to deliver, you need to be confident they’re the right team to deliver the solution to your problem.

Closing Remarks

Okay, so that’s AI, machine learning, and deep learning in a nutshell.  It can be highly technical and somewhat overwhelming, but I find there is a trick to making sense of it: think of the AI as a newborn baby, an intelligent being with all the possibility of becoming anything in life.  It could grow up to be a doctor, a lawyer, a president, or your best sales rep.  It could become your partner in reporting the news, as is the case with Heliograf at The Washington Post, or your partner in cancer care, as at Memorial Sloan Kettering.

But unlike that child, the decision as to what it becomes is not random; it is very much up to you.  You will choose what data it learns from and teach it right from wrong.  You will direct and shape its intelligence to help solve your business and marketing problems.  It’s really up to you.  So the best thing you can do is have focus and a solid understanding of AI, the problem you are solving, and the data you’ll use to support it.
