One of my favourite things to do is find unexpected parallels between everyday experiences and data science. This kind of extrapolation—seeing patterns in unrelated fields—helps me uncover new insights and approach topics within data science in fresh ways. In this post, I explore how sleep training a baby mirrors some of the challenges in data-driven marketing. This one is a bit tongue-in-cheek, but I hope you enjoy it!
Introduction: Why We Keep Doing Things That Don’t Work
As a new father, I’ve spent many late nights trying to get my baby to sleep—and in the process, I stumbled upon an unexpected parallel to how businesses sometimes make decisions. Anyone who has tried sleep training a newborn knows the concept of intermittent reinforcement: if a baby sometimes gets attention after crying, they’ll keep crying—because they’ve learned that occasionally it works. This inconsistency makes the habit even harder to break because the baby believes there’s always a chance.
This same psychological trap plays out in data-driven marketing. The way campaigns are analyzed, justified, and optimized often suffers from the same issue. A bad habit—whether it’s over-attributing success to a single campaign, chasing vanity metrics, or disregarding true incrementality—gets reinforced just enough to stick.
From a Decision Sciences perspective, this is one of the most common pitfalls in business: allowing random successes to override systematic, evidence-based decision-making. It’s why many marketing teams struggle to scale their efforts effectively—because they mistake luck for strategy.
The Problem: Intermittent Reinforcement in Marketing and Data Science
In a perfect world, we would make marketing decisions based on rigorous testing, clear causality, and well-structured analyses. But in reality, marketers and decision-makers often rely on short-term wins and post-hoc justifications.
A marketer might slip into suboptimal behaviours such as:
- Launching a campaign with no control group – but seeing a revenue uptick and assuming it was effective.
- Doubling down on an unreliable channel – because one month it delivered great results (even though, overall, it’s inconsistent).
- Over-relying on last-click attribution – because, sometimes, it aligns with positive revenue trends.
- Chasing engagement metrics instead of business impact – because a viral post sometimes leads to sales.
- Overreacting to short-term trends – because a sudden spike in conversions sometimes signals a real shift, even when it’s just a temporary fluctuation.
Each of these behaviours persists not because they always work, but because sometimes they do. And that “sometimes” is enough to justify the habit.
The Data Trap: When ‘Success’ Is the Worst Outcome
One of the most dangerous things in data-driven decision-making is random success. A poorly designed campaign that happens to drive revenue can convince teams to continue using the same flawed strategy.
This is the marketing equivalent of a baby learning that crying works:
- If it worked once, why not try it again?
- If it worked twice, maybe there’s something to it.
- If it doesn’t work next time? Well, maybe it was just an off month.
What happens next? The marketer keeps pushing the same approach, even if the long-term data suggests it’s ineffective or even detrimental.
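To see how easily noise masquerades as success, here is a minimal Monte Carlo sketch. All numbers are hypothetical: a campaign with zero true effect, a 5% baseline conversion rate, and 2,000 monthly visitors. Even though the campaign does nothing, ordinary randomness produces a double-digit “uptick” in a meaningful share of months.

```python
import random

random.seed(42)

# Hypothetical numbers: 5% baseline conversion rate, 2,000 monthly
# visitors, and a campaign with ZERO true effect on conversions.
BASELINE_RATE = 0.05
VISITORS = 2_000
MONTHS = 2_000  # simulated months

def monthly_conversions(rate: float, n: int) -> int:
    """One month of conversions as a simple Bernoulli process."""
    return sum(1 for _ in range(n) if random.random() < rate)

expected = BASELINE_RATE * VISITORS  # 100 conversions on average

# How often does pure noise deliver a "win" of +10% or more?
wins = sum(
    1
    for _ in range(MONTHS)
    if monthly_conversions(BASELINE_RATE, VISITORS) >= expected * 1.10
)

print(f"Share of months with a >=10% 'uptick' from a "
      f"do-nothing campaign: {wins / MONTHS:.1%}")
```

Roughly one month in six clears the +10% bar by luck alone in this setup. That is more than enough intermittent reinforcement to keep a bad habit alive.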
The Decision Sciences Perspective: Breaking the Intermittent Reinforcement Cycle
Decision Sciences is about applying structure, rigor, and probabilistic reasoning to decision-making—which is precisely what’s needed to break free from intermittent reinforcement in marketing.
Here are a few Decision Science principles that help teams avoid this trap:
- Shift from Correlation to Causation – use hypothesis testing, causal Bayesian networks, and propensity score matching to isolate true impact.
- Measure Incrementality and Lift – determine whether a marketing effort drove additional conversions beyond organic growth.
- Use Controlled Experiments – apply A/B testing and randomized controlled trials to optimize based on evidence, not assumptions.
- Analyze Marketing Impact with Modelling – use multi-touch attribution or marketing mix modelling to assess channel effectiveness based on historical data.
- Run Monte Carlo Simulations – model uncertainty and forecast a range of possible outcomes.
- Apply Bayesian Thinking – continuously update insights to avoid overreacting to short-term wins.
- Reduce Bias with Decision Frameworks – prevent knee-jerk reactions by defining clear impact-paths and evaluating performance over time.
By applying these methods and principles, teams transition from reactive, anecdotal decision-making to systematic, data-driven strategies that drive real long-term success.
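Bayesian thinking, in particular, can be sketched in a few lines with a Beta-Binomial update (all numbers hypothetical): a well-informed prior keeps one hot month from hijacking your estimate of the underlying conversion rate.

```python
# Beta-Binomial updating of a conversion rate (illustrative numbers).
# Prior: Beta(a, b) summarising a long history around a 5% rate,
# weighted as if it were 5,000 past observations.
a, b = 250, 4_750
prior_mean = a / (a + b)  # 0.05

# One "spike" month: 75 conversions out of 1,000 visitors (7.5%).
conversions, visitors = 75, 1_000

# Conjugate update: add successes to a, failures to b.
a_post = a + conversions
b_post = b + (visitors - conversions)
post_mean = a_post / (a_post + b_post)

print(f"prior mean: {prior_mean:.3f}, posterior mean: {post_mean:.3f}")
```

The posterior nudges up from 5.0% to about 5.4% rather than leaping to 7.5%: the spike is weighed against everything already known, which is exactly the discipline that breaks the intermittent-reinforcement cycle.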
Conclusion: The Sleep-Trained Marketer
If you’ve ever struggled through sleep training, you know that the long-term benefits far outweigh the short-term pain. The same goes for data-driven decision-making. By resisting the temptation of intermittent reinforcement, you build stronger habits, make better decisions, and avoid wasting time and resources on strategies that only occasionally work.
The best marketers—like a loving parent—understand that consistency and discipline lead to better outcomes.
Next time you’re tempted to stick with a tactic just because it sometimes works, ask yourself:
Is this a data-driven decision, or am I just reinforcing a bad habit?