Whenever I speak to a group of Marketers about Artificial Intelligence and Machine Learning, there’s at least one wannabe Muppets Judge that quotes the Gartner study about how 85% of all AI projects fail as if he’s just discredited an entire multi-billion dollar industry with his genius. (These people should come with horse tranquilizers… for all of us.)
Yes, the vast majority of AI projects do fail. Some of them quite spectacularly. There are many reasons for this. Let’s go through them one by one so you can avoid the pitfalls and ensure your project is a smashing success.
SPOILER ALERT: The true-blue #1 reason Marketing AI projects fail is because they shouldn’t have been AI projects in the first place. Artificial Intelligence/Machine Learning (AI/ML) works when you have the right data, a repetitive issue, and decisions that need to be made.
Speaking of decisions…
NO CLEAR OBJECTIVES
Artificial Intelligence/Machine Learning is the best thing that’s ever happened to the Marketing Industry. WAY BETTER than the Internet in the 90s. Just like in the first days of the WORLD-WIDE WEB (insert OG dial-up tone here), you’ve got to spend some time and effort learning how it works. The big difference between then and now, however? You also need to train The Machine. One of the reasons AI projects fail is that people forget that Marketing AI is new and basically a toddler. Toddlers fall a lot before they learn to walk. You’ll fall too. (Just don’t make it a headbanger. We’re old and not as resilient.)
When first using Artificial Intelligence in their projects, many marketers get so excited about implementing AI that they commit to one project. (This is a good thing.) They set a clear-ish objective and then get oversold by a pushy vendor about all the things they can do. Or they just get plain greedy and try to do everything at once. (This is not a good thing.) If I had a Southern grandmother, she’d have some witty repartee about Bubba taking a salad plate to a buffet and piling the entire table on that dinky little plate, bless his heart, but alas I’m an unsweetened Vermonter. The sentiment is legit, though.
Before you begin your AI/ML project, ask yourself: What business problem am I trying to solve? Do I have a clear, concise goal? What’s my strategy? What are my expectations? Do I have enough money allocated in the budget? Do I have the right team? How am I defining success? What metrics am I using to measure performance?
Clear objectives + managed expectations = Your Marketing AI success.
Do one thing and do it well, at least at the beginning. Start with one small project. Develop one goal for your project. Make sure you have the tools – and the data – to support the project. Set reasonable performance expectations. Then, manage those expectations in yourselves and your team. Inflated expectations are a problem in AI projects. You’re going to need to manage things tightly. Otherwise, folks get disappointed quickly because what they expect is not achievable, especially at the beginning.
POOR DATA QUALITY
AI/ML projects are only as good as the data you use to build them. If your data is garbage, don’t start an AI-enabled project before you clean it up.
I know. I know. This is a hard one for many people. Mainly because they don’t know if their data is “good enough.” Good enough means current (read: not outdated), clean, and comprehensive enough to support what you need to know.
So, if you’re looking for an NLP (Next Logical Product) for each customer on your list but you no longer have access to which products they purchased, the project isn’t likely to work because you don’t have the data you need to complete the task. It sounds simple, but a lot of people try to fake it till they make it.
Faking stuff is a problem because The Machine often can’t tell the difference between real and fake when it’s foundational data. Even more problematic? Faking stuff makes it harder to disrupt the systems when something goes wrong, so instead of going back a few steps, you often need to go back to the beginning. As your data changes, you will need to disrupt/recalibrate your systems but full-scale, scorch-the-earth-and-go-back-to-the-beginning stinks. Avoid it as much as possible.
Good Quality Data comes from reliable sources; is clearly labeled; is current/relevant; has been de-duped; has been through a hygiene process recently; has good governance; and can be analyzed. You’ll also want to outline the biases in your data before you begin your project. All data is biased, but identifying those biases allows you to better work with them.
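That checklist can be turned into a quick automated audit before the project starts. Here’s a minimal sketch in plain Python; the record fields (email, last_purchase, product) and thresholds are my own illustrative assumptions, not from any particular CRM or the author’s process.

```python
from datetime import date

# Hypothetical customer records -- field names are illustrative only.
records = [
    {"email": "a@example.com", "last_purchase": date(2023, 1, 5), "product": "shoes"},
    {"email": "a@example.com", "last_purchase": date(2023, 1, 5), "product": "shoes"},  # duplicate
    {"email": "b@example.com", "last_purchase": date(2019, 6, 1), "product": None},     # stale, missing field
]

def audit(rows, today=date(2023, 6, 1), max_age_days=365):
    """Return simple quality counts: duplicates, records with missing fields,
    and records older than max_age_days (the "current" test)."""
    seen, dupes, missing, stale = set(), 0, 0, 0
    for r in rows:
        if r["email"] in seen:          # de-dupe check keyed on email
            dupes += 1
        seen.add(r["email"])
        if any(v is None for v in r.values()):
            missing += 1
        if (today - r["last_purchase"]).days > max_age_days:
            stale += 1
    return {"duplicates": dupes, "missing_fields": missing, "stale": stale}

print(audit(records))  # {'duplicates': 1, 'missing_fields': 1, 'stale': 1}
```

If any of those counts are a meaningful fraction of your file, clean first and start the AI project second.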
POOR DATA QUANTITY
Yes, I could have grouped Poor Data in one big bucket instead of breaking it down into two. However, both Quantity and Quality are important for different reasons. There’s a myth (perpetuated mainly by a few of the more well-known 3rd party data vendors) that you need ALL the data to do an AI project well. That’s not necessarily true. More is typically better but what counts is that you have enough working data for your project.
How much do you need? Your mileage will vary depending on the project and your data. The general rule of thumb is no less than 1,000 records of anything. I shiver at less than 10k and prefer millions. Work with what you have as long as it gives you predictive results. In the interest of full transparency, I’ve done a lot of projects with less than 5,000. It can be done. You just really need to weigh whether you need AI for the task. You often won’t.
Allocate some of the data as your training set, and then look at what’s left over. Do you have enough to do dynamic testing with statistical significance? How much more data will you get in the next month? Three months? Six months? Year? What type of data is this? Type 0/1, Type 2? Type 3? When the project is finished, and you’re reporting on the results, will you feel comfortable presenting your findings to a banker? Prospective investor? A lawyer?
Again, your project is only as good as your data. I see the results for many failed projects; many of the failures could have been spotted after running the training data. Be sure to watch your initial findings VERY closely.
LACK OF COMMUNICATION AND COLLABORATION BETWEEN TEAMS AND/OR LACK OF TALENT
This is self-explanatory and often an issue in bigger companies. You’ll know the best way to solve this problem for your business. A big part of this will be building time in the schedule to clearly communicate with your teams about the project goals and expectations at the beginning and then, on an ongoing basis, till the end of the results/postmortem process. For the record, I don’t usually see this problem on the initial projects. Usually, things get loosey-goosey around project 4 or 5 when some of the newness has waned.
These days, there’s also a lack of available talent. With Marketing AI projects, it’s even worse because the entire area is so new. Waldorf, our Muppets Judge friend, will tell you that Gartner also says, “56% of the organizations surveyed reported a lack of skills as the main reason for failing to develop successful AI projects.” Start upskilling your team. Lean on your vendors for help. Find an outside consultant. Talk to other people who have conducted similar projects.
BUDGET PROBLEMS
Frankly, I see budget issues at more medium and big companies than I do small ones. The main reasons they happen are scope creep; lack of training/experience with building the models; too many cooks in the kitchen; and not having enough horsepower behind the tech stack. The good news is that most budget issues go away if your project plan and objectives are crystal-clear from the start.
There’s a common misconception that Marketing AI projects are uber expensive. They’re not. Many of them pay for themselves within the first six months or sooner. When things go awry, it’s often because someone quoted a price for fixing a leak and ended up building a whole new house.
LIMITED UNDERSTANDING OF THE TECH
This issue gets cited a lot by outside consultants and vendors selling AI products and services. It’s a legit issue, but we’re all pretty new at the tech, and I’ve seen some of the most renowned people in the industry stumble regarding implementation. (A lot of AI/ML sounds different in theory than it works in practice.) For a while, I think we should just understand that we’re all new at this and build some safety gates to keep us from falling down the proverbial flight of stairs. That may mean hiring an outside consultant to help you, building in time to talk with more people like you who have done the same type of project successfully (or not), and/or doing a trial run or two before you run the “real thing.” (Please don’t use test runs for training data, however.)
INADEQUATE TRAINING AND TESTING
Data and algo/model testing/training are incredibly important in Marketing AI projects. It’s critical that you spend enough time to get them right. (Remember, artificial intelligence learns on your data/models, so you want the most solid foundation you can build.) Sometimes you will need to train the algo with input and output data. Other times, you will want to do it with just input data. Either way, as painful and frankly dull as the process is, you don’t want to skip this step. Nor should you ignore it. (Many companies use outside vendors and let them handle “all the magic” of the testing and training process. You can be hands-off with the gruntwork but make sure you have insight into the process and how it’s working.)
Identify your project, and what kind of accuracy level you want/need it to have. (Don’t skip this step.) Then, determine how much data you need for training and testing. The splits on this are often 70/30 or 60/40, but your mileage will vary according to the project. Make sure not to use all your data for training (save some for testing) and do spot checks of the algorithm’s effectiveness along the way.
CHOOSING THE WRONG MODEL
This is another reason that gets mentioned A LOT by some of the industry’s smartest – and most vocal – people. I get why they talk about it. The companies who get press for being AI-powered go on ad nauseam about “how X model was the only choice for this project” and “if you want maximum performance, you can only use Y,” but frankly, it’s not in my top five reasons why AI projects fail.
That said, selecting a suitable model for the job is essential if you’re building things in-house. If you’re outsourcing, an overall awareness should do just fine as long as things perform at or better than your expectations. You’re not likely to have much control over the model(s). Please keep in mind that a lot of your success depends on your data quality, budget, how you’re measuring things, and how your model evolves.
Artificial Intelligence changes by the nanosecond, and we now scoff at things that we thought were revolutionary five years ago. You certainly don’t need to be on the bleeding edge, but when building something AI-related, you should consider whether you can keep up with the technology/growth in that particular space. (This varies dramatically by the category of AI.)
PRIVACY AND BIAS CONCERNS
Both privacy and bias are REAL problems in AI. The media often cites them as reasons why projects fail. I don’t see that. They’re definitely a reason why some projects never get started. They’re also why something might get scrapped in the middle or thrown away at the end. However, if you establish your standards for both privacy and bias upfront and you’re clear about what you’re going to do and/or how you’ll accommodate your concerns, they’re not likely to be why your project fails.
Both these areas are serious. I’m not diminishing either one iota. And I see far too many marketers mismanage or oversell a project and then, after it ends, say, “there was terrible bias, yadda yadda yadda, so we had to throw it out” to hide the fact that they didn’t start/manage the project effectively. This doesn’t serve anyone. When a project fails, take the hit. Develop a plan for how you’ll do better next time. You’ll have your hands full with real privacy and bias stuff as it is.
MEASURING RESULTS PREMATURELY
People don’t talk about this one enough. Marketing AI has its own timeline. Sometimes it matches the timelines you’re used to, but it often doesn’t. In the beginning, it can get especially dicey. For example, say you implement an AI project for dynamically testing email subject lines. If you did an A/B split traditionally, you’d see the results immediately. You send the email, and you have definitive results in just a few days. With AI/ML, that’s not likely to happen. The AI gets better and better as it learns. So yes, you will still get results in a couple of days, but they often won’t accurately represent what will happen if you run the same test/program repeatedly over days, months, or years.
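One common way dynamic subject-line testing works under the hood is a multi-armed bandit; the epsilon-greedy sketch below is my illustration of why early reads mislead, not a description of any specific vendor’s system, and the “true” open rates are invented.

```python
import random

def epsilon_greedy(true_rates, rounds=5000, epsilon=0.1, seed=1):
    """Send each email to the best-looking subject line most of the time,
    exploring a random one with probability epsilon. Early estimates are
    built on tiny samples, so they wobble; they sharpen as volume grows."""
    rng = random.Random(seed)
    sends = [0] * len(true_rates)
    opens = [0] * len(true_rates)
    for _ in range(rounds):
        if rng.random() < epsilon or not any(sends):
            arm = rng.randrange(len(true_rates))          # explore
        else:
            arm = max(range(len(true_rates)),             # exploit best so far
                      key=lambda a: opens[a] / sends[a] if sends[a] else 0.0)
        sends[arm] += 1
        opens[arm] += rng.random() < true_rates[arm]      # simulated open
    estimates = [opens[a] / sends[a] if sends[a] else 0.0
                 for a in range(len(sends))]
    return sends, estimates

# Two hypothetical subject lines with unknown-to-the-system open rates.
sends, estimates = epsilon_greedy([0.02, 0.05])
print(sends, estimates)
```

A two-day snapshot of this system is mostly exploration noise; the win only shows up in the cumulative numbers after it has had volume to learn from, which is exactly why measuring on the A/B-test timeline undersells it.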
What does that even mean? Sometimes you do an AI project, and at first, the results are just slightly better than before. It’s soul-crushing. You spent all this time, money, and energy and then find out you improved things by only a teensy bit. Don’t give up. As the system improves, it gets better, and as it gets better, your results will improve too.
Did you do an AI-centered project that failed? Why do you think it happened? Want to share any tips to help other marketers avoid these problems in their projects? Have a question about your AI success or failure? Tweet @amyafrica or write firstname.lastname@example.org.