Executive Decision Series (Post 6): “Decision Debt” — The Hidden Cost of Shallow Analytics & Shaky Data


Executive Takeaway: Decision Debt exists in every organization—it’s only a matter of degree. It exposes the hidden costs of choices built on thin evidence—when insights jump to conclusions instead of showing options, trade-offs, and uncertainties. What looks like confidence is sometimes just a stack of assumptions—and that debt compounds as misdirection, wasted resources, delays, rework, missed opportunities, and eroded trust in the numbers. The metrics presented below are more conceptual than operational, but they offer a practical lens on where risk hides: thin evidence, heavy assumptions, organizational friction (i.e., politics, processes, misaligned incentives), and outcome misalignment. Seeing decisions through this lens creates guardrails—reducing risk, accelerating clarity, and capturing value sooner than competitors.


Introduction

In personal life, “decision debt” builds when choices are avoided, poorly grounded, or repeatedly reversed—draining energy, slowing progress, and letting opportunities slip.[1] I’m extending that idea to the data-driven enterprise, where the same dynamics show up across teams: unresolved choices, hesitant calls, and analysis paralysis. Furthermore, when polished summaries rest on thin analysis and incomplete data, those patterns intensify—undermining performance, creating backlogs, raising costs, and eroding credibility. I’ve seen strong teams get stuck here.

For example, a GTM team touts a “+18% lift” from a nurture program after noticing higher close rates among contacted accounts. But there was no randomization, and high-propensity accounts were more likely to receive outreach. When a matched holdout is run with pre-treatment covariates and time controls, the estimated effect collapses to about +2%—small enough that it may simply be noise rather than a real lift. Meanwhile, leadership has already reallocated $1.2M, and three sprints are burned on rework—interest on Decision Debt created by shallow analysis and incomplete data presented as certainty.
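
To see the mechanics concretely, here is a minimal, self-contained sketch of that failure mode, using synthetic data and an approximate nearest-neighbor match on a single pre-treatment score. All variable names and numbers are illustrative, and a real analysis would also add the time controls mentioned above:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Synthetic accounts: one pre-treatment score drives BOTH outreach
# and close rates -- the confound in the story above.
quality = rng.beta(2, 5, n)                       # account quality, 0..1
treated = rng.random(n) < (0.2 + 0.6 * quality)   # better accounts get outreach
base_close = 0.10 + 0.30 * quality                # close rate without outreach
TRUE_LIFT = 0.02                                  # the real effect is ~2 pts
closed = rng.random(n) < (base_close + TRUE_LIFT * treated)

# Naive comparison (the "+18% lift" move): contacted vs. not contacted.
naive = closed[treated].mean() - closed[~treated].mean()

# Matched comparison: pair each treated account with the untreated
# account closest in pre-treatment quality (approximate 1-NN match).
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
c_sorted = c_idx[np.argsort(quality[c_idx])]
pos = np.searchsorted(quality[c_sorted], quality[t_idx])
match = c_sorted[np.clip(pos, 0, len(c_sorted) - 1)]
matched = closed[t_idx].mean() - closed[match].mean()

print(f"Naive 'lift':   {naive:+.1%}")    # inflated by selection bias
print(f"Matched effect: {matched:+.1%}")  # collapses toward the true ~2 pts
```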

The idea of Decision Debt has surfaced across several adjacent domains. In data governance and compliance[2], unresolved ownership and unclear controls erode accountability. In AI/ML operations[3], rapidly generated “insights” often outpace validation, amplifying bias and miscalibration. In digital transformation[4], deferred or reversed decisions accumulate as backlog, rework, and value leakage. The term also appears more broadly to describe general organizational backlogs[5]—from product pipelines to decision queues—that quietly burn time, profit, and momentum.

What follows reframes Decision Debt for “data-driven” enterprises more broadly—using a measurement-oriented lens to clarify where it accumulates and how to reduce it.

Translating Decision Debt to the Enterprise

Decision Debt arises when confident conclusions rest on shallow analysis and incomplete data. What looks like a solid evidentiary base is sometimes a carefully stacked tower of assumptions—a data veneer that decorates the story more than it drives the best decision. This is not trivial; like financial debt, it accrues interest over time. The longer it’s carried, the harder and costlier it is to unwind.

Decision Debt undermines a healthy economy of insights. Instead of compounding value through learning, experimentation, and careful refinement, the organization unwittingly compounds risk. Poorly supported conclusions harden into “accepted truths.” They get echoed until familiarity replaces validation, crowding out dissent and inviting biases—e.g., confirmation bias, sunk-cost fallacy, and groupthink—to take root. Leaders, lulled by the surface coherence of dashboards and summaries, skim across problems rather than probe structural causes. Over time, the gap between decisions made and realities on the ground widens, increasing the likelihood of missteps.

Unresolved decisions show up as:

  • Analysis without commitment: the evidence is presented, but no direction is chosen.
  • Endless debate: stakeholders revisit the same options without closure.
  • Silent avoidance: everyone knows a call is needed, but no one owns it.

Breaking this cycle requires deliberate intervention. As with prudent financial management, paying down Decision Debt means reallocating attention and rigor to evidence generation (the disciplined production of decision-ready proof) and to decision practices that convert evidence into clear choices. Concretely, that looks like:

  • Deepening analytic rigor: robust methods, not superficial correlations.
  • Strengthening data foundations: fix quality at the source, not in slides.
  • Cultivating decision literacy: interrogate not just what the numbers say, but how they were produced.
  • Instituting feedback loops: test decisions against outcomes and recalibrate when assumptions fail.

In short, getting free of Decision Debt means shifting from surface-level confidence to structural integrity in how evidence is built and choices are made. Only then do decisions create durable value instead of accumulating risk disguised as knowledge.

A data science group stopped publishing quarterly “victory decks” and started publishing monthly “loss notes.” Each note described a hypothesis that failed and the decision it liberated. Stakeholders stopped asking for certainty and started asking for next bets. The political weather improved. The roadmaps got shorter. The results moved sooner.

James Kuhman

Decision Debt as an Equation

If “Decision Debt” is the accumulated burden of choices made on shallow analytics and shaky data, we can measure it the same way finance teams track leverage: as a set of leading and lagging indicators that roll up into a composite score. Think of this not as a metric to track in practice, but as a mental model for seeing where and how risk builds up.

Decision Debt comprises three key components:

  1. Excess confidence vs. evidence
  2. Assumption load in analyses
  3. Downstream costs (rework, reversals, delays)

Let’s examine each in more detail below.

Dimensions of Decision Debt

This section is a thinking tool, not a real set of metrics or a dashboard. By treating Decision Debt as a set of conceptual factors—evidence vs. assertion, assumption load, timeliness & friction, and outcome reality—we make the trade-offs visible. The point isn’t to compute anything or chase data to calculate these metrics; it’s to surface the balance and tension as a conceptual exercise: to consider whether confidence outpaces evidence, whether assumptions carry more weight in our decision-making than we admit, whether latency and rework are taxing momentum, and whether results fail to match the story we anticipated.


A. Evidence vs. Assertion

Too often, big claims outnumber the proof points that support them. Decision Debt grows when polished statements sound persuasive but lack checkable evidence.

Formula:
Evidence-to-Assertion Ratio (EAR) = (# proof points) ÷ (# big claims)

Example: A deck makes 10 big claims, but only 5 have proof you can inspect → EAR = 5 ÷ 10 = 0.5.

Interpretation: A low EAR means confidence is outpacing evidence.


Deep Dive: Evidence vs. Assertion

This pillar asks a simple question: How many big claims do we make—and how many are actually backed up? Decision Debt grows when polished statements outnumber the proof behind them.

Evidence-to-Assertion Ratio (EAR)

Plain formula:

EAR = (# proof points) ÷ (# big claims)

  • Big claims = the headline statements you want leaders to act on.
    Examples: “This campaign drove 20% lift.” “Pipeline coverage is sufficient for Q3.”
  • Proof points = concrete support a reviewer can check.
    Examples: A/B test results, a regression with documented inputs, a dataset excerpt, a cited benchmark with source link.

Example: A deck makes 10 big claims, but only 5 have proof you can inspect → EAR = 5 ÷ 10 = 0.5.

How to read it:

  • ~1.0 → Most claims are backed by visible proof.
  • ~0.3 or below → Lots of confidence, not much backing.

Quick sanity check (use in meetings)

Ask of each big claim:

  1. Where’s the proof? (Show me the test, model, or data excerpt.)
  2. Is it inspectable? (Link, appendix, or one slide with inputs/assumptions.)
  3. Is it relevant? (Directly supports this claim, not a nearby idea.)

If any of the three are “no,” don’t count it as a proof point.
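
As a minimal sketch, the same tally can be expressed in a few lines of Python; the Claim structure and its fields are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    has_proof: bool       # inspectable support: test, model, or data excerpt
    proof_relevant: bool  # directly supports THIS claim, not a nearby idea

def evidence_to_assertion_ratio(claims: list[Claim]) -> float:
    """EAR = (# proof points) / (# big claims).

    A claim counts as a proof point only if its support passes the
    inspectable and relevant tests from the sanity check above.
    """
    if not claims:
        return 0.0
    proven = sum(c.has_proof and c.proof_relevant for c in claims)
    return proven / len(claims)

# The example deck: 10 big claims, only 5 with inspectable, relevant proof.
deck = [Claim(f"claim {i}", has_proof=i < 5, proof_relevant=i < 5)
        for i in range(10)]
print(evidence_to_assertion_ratio(deck))  # 0.5 -> confidence outpacing evidence
```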


B. Assumption Load & Model Risk

Every recommendation rests on “ifs.” Decision Debt piles up when too much confidence rides on uncertain or untested assumptions.

Formula:
Assumption Load Index (ALI) = Σ (weight × uncertainty)

Example:
A predictive forecast rests on three key assumptions:

  • Stable churn rate → weight 0.4, uncertainty 0.2 → 0.08
  • Representative data → weight 0.35, uncertainty 0.5 → 0.175
  • Competitor pricing static → weight 0.25, uncertainty 0.7 → 0.175

→ ALI = 0.43 (a moderately fragile foundation).

Interpretation: Higher Assumption Load Index (ALI) = more risk concentrated in shaky assumptions.


Deep Dive: Assumption Load & Model Risk

This pillar asks: How many hidden “ifs” are we standing on—and how shaky are they? Decision Debt grows when strong recommendations rest on uncertain premises.

Assumption Load Index (ALI)

Plain formula:

ALI = Σ ( weightᵢ × uncertaintyᵢ )

  • Assumptions = the “ifs” your argument needs to be true.
    Examples: “Churn stays flat,” “Our sample represents the market,” “Competitors won’t drop price.”
  • Uncertaintyᵢ = how likely each assumption could be wrong (0 = very solid, 1 = very shaky).
  • Weightᵢ = how much damage it would do if that assumption fails (higher weight = bigger impact). Normalize weights so they add up to 1.

Example (three key assumptions):

  • Churn stays flat → weight 0.40, uncertainty 0.20 → 0.08
  • Data is representative → weight 0.35, uncertainty 0.50 → 0.175
  • Competitor pricing holds → weight 0.25, uncertainty 0.70 → 0.175
    ALI = 0.08 + 0.175 + 0.175 = 0.43 → a moderately fragile foundation.

How to read it (directional, not literal):

  • ~0.2 or less → low assumption risk (solid footing).
  • ~0.4–0.6 → moderate risk (watch closely, stress-test).
  • >0.6 → high risk (too much confidence riding on “ifs”).

Quick sanity check (use in meetings)

For each major assumption, ask:

  1. Can we observe it? (Is there recent data that supports it?)
  2. What if it’s wrong? (Size the hit; that’s your weight.)
  3. How uncertain is it—really? (Market shifts, seasonality, policy, competitor moves.)
  4. Did we stress-test it? (Run a sensitivity: “What if churn rises 2 pts?”)

If an assumption is high weight + high uncertainty and untested, you’re carrying hidden model risk.
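
A minimal sketch of the ALI arithmetic, taking the forecast’s three assumptions as input (the tuple format is just for illustration):

```python
def assumption_load_index(assumptions: list[tuple[str, float, float]]) -> float:
    """ALI = Σ (weight_i × uncertainty_i), with weights normalized to sum to 1.

    Each assumption is (name, weight, uncertainty):
      weight      -- damage if it fails (relative impact)
      uncertainty -- chance it is wrong: 0 (solid) .. 1 (shaky)
    """
    total_w = sum(w for _, w, _ in assumptions)
    return sum((w / total_w) * u for _, w, u in assumptions)

forecast = [
    ("Churn stays flat",         0.40, 0.20),
    ("Data is representative",   0.35, 0.50),
    ("Competitor pricing holds", 0.25, 0.70),
]
print(round(assumption_load_index(forecast), 2))  # 0.43 -> moderately fragile
```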


C. Timeliness & Friction

Even solid evidence loses value if decisions stall. Delays, rework, and bottlenecks are like paying “interest” on Decision Debt — draining momentum before value is captured.

Formula:
Decision Latency (DL) = median(decision date − insight ready date)

Example:
The analysis was ready June 1, but the decision was made June 20 → latency = 19 days.

Interpretation: A high Decision Latency (DL) shows organizational drag—data is ready, but decisions aren’t moving.


Deep Dive: Timeliness & Friction

This pillar asks: How long does it take us to act—and how much effort do we waste along the way? Decision Debt grows when insights are ready but decisions stall, or when teams burn time reworking analysis instead of moving forward.

Decision Latency (DL)

Plain formula:

DL = median( decision date − insight ready date )

  • Insight ready date = when the analysis or evidence is available.
  • Decision date = when leaders actually make the call.

Example: The analysis was ready June 1, but the decision was made June 20 → latency = 19 days.

How to read it:

  • Low latency → decisions flow quickly once data is ready.
  • High latency → friction, hesitation, or bottlenecks.

Rework Tax (RT)

Plain formula:

RT = (hours spent redoing work) ÷ (total analysis hours)

  • Rework = fixing or re-running because of unclear asks, shifting priorities, or bad data.
  • Total hours = all analysis time logged on the effort.

Example: 100 hours logged, 25 hours were rework → RT = 0.25 (25%).

How to read it:

  • Low RT → most effort drives forward progress.
  • High RT → a lot of “interest payments” on Decision Debt.

Backlog of Material Decisions (BMD)

Plain formula:

BMD = (value of overdue decisions) ÷ (total value of all decisions)

  • Value at risk = revenue, budget, or strategic importance tied to a decision.
  • Overdue = still pending past agreed timelines (e.g., 30 days).

Example: Three decisions worth $100k, $200k, $700k. The $200k and $700k are overdue.

BMD = (200k + 700k) ÷ (100k + 200k + 700k) = 0.90 → 90% of value stuck in limbo.
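
All three timeliness metrics reduce to simple arithmetic; here is a minimal sketch, assuming you keep even lightweight records of ready/decision dates, hours, and decision values (the record formats and the year in the example are illustrative):

```python
from datetime import date
from statistics import median

def decision_latency(records: list[tuple[date, date]]) -> float:
    """DL = median(decision date - insight ready date), in days."""
    return median((decided - ready).days for ready, decided in records)

def rework_tax(rework_hours: float, total_hours: float) -> float:
    """RT = hours spent redoing work / total analysis hours."""
    return rework_hours / total_hours

def backlog_of_material_decisions(decisions: list[tuple[float, bool]]) -> float:
    """BMD = value of overdue decisions / total decision value.

    Each decision is (value at risk, is_overdue).
    """
    total = sum(v for v, _ in decisions)
    return sum(v for v, late in decisions if late) / total

# The examples above:
print(decision_latency([(date(2025, 6, 1), date(2025, 6, 20))]))  # 19 days
print(rework_tax(25, 100))                                        # 0.25
print(backlog_of_material_decisions(
    [(100_000, False), (200_000, True), (700_000, True)]))        # 0.9
```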

Quick sanity check (use in meetings)

Ask:

  1. How fast do we move once data is ready? (latency)
  2. How much of our work is do-overs vs. progress? (rework)
  3. How much value is tied up in overdue calls? (backlog)

If the answers feel uncomfortable, Decision Debt is dragging momentum.


D. Outcome Reality Check

Confidence is meaningless if outcomes don’t match. Decision Debt accumulates when forecasts, bets, or programs consistently fail to deliver on the certainty we claimed.

Formula:
Confidence-Outcome Divergence (COD) = average gap between stated confidence and actual results

Example:
Three bets were made at 70%, 80%, 90% confidence. Only one succeeds. The gap between stated confidence and reality is large → COD is high.

  • Bet 1: |0.70 – 0| = 0.70
  • Bet 2: |0.80 – 0| = 0.80
  • Bet 3: |0.90 – 1| = 0.10
  • Average: COD = (0.70 + 0.80 + 0.10) / 3 = 1.60 / 3 ≈ 0.53

Interpretation: A high COD signals miscalibration. On average, there’s a 53-point gap between stated confidence and reality, which points to overconfidence or optimism bias.


Deep Dive: Outcome Reality Check

This pillar asks: Do our decisions actually deliver what we said they would? Decision Debt piles up when confidence consistently outruns reality—when forecasts, bets, or programs don’t line up with what actually happens.

Confidence–Outcome Divergence (COD)

Plain formula:

COD = average gap between stated confidence and actual results

  • Stated confidence = how sure we said we were (e.g., “80% chance this will work”).
  • Actual result = what happened (1 = success, 0 = fail).

Formula (precise):

COD = (1/N) × Σⱼ | confidenceⱼ − outcomeⱼ |

  • confidenceⱼ ∈ [0, 1] (e.g., 0.80)
  • outcomeⱼ ∈ {0, 1} (1 = success, 0 = fail)

Example: Three bets made at 70%, 80%, 90% confidence. Only one succeeds. The average gap is large → high COD.

  • Bet 1: |0.70 – 0| = 0.70
  • Bet 2: |0.80 – 0| = 0.80
  • Bet 3: |0.90 – 1| = 0.10
  • Average: COD = (0.70 + 0.80 + 0.10) / 3 = 1.60 / 3 ≈ 0.53

How to read it:

  • Low COD → confidence matches reality.
  • High COD → overconfidence, miscalibration, or optimism bias.
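
A minimal sketch of the COD arithmetic, reproducing the three-bet example (the bet list format is illustrative):

```python
def confidence_outcome_divergence(bets: list[tuple[float, int]]) -> float:
    """COD = (1/N) × Σ |confidence_j − outcome_j|.

    Each bet is (stated confidence in [0, 1], outcome: 1 = success, 0 = fail).
    """
    return sum(abs(c - y) for c, y in bets) / len(bets)

# Three bets at 70%, 80%, 90% confidence; only the third succeeded.
bets = [(0.70, 0), (0.80, 0), (0.90, 1)]
print(round(confidence_outcome_divergence(bets), 2))  # 0.53 -> high COD
```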

Reversal Rate (RRv)

Plain formula:

RRv = (# decisions reversed) ÷ (# total decisions)

  • Tracks how often leaders walk back or overturn earlier calls.

Example: 20 major decisions, 5 reversed → RRv = 0.25 (25%).

How to read it:

  • Low RRv → calls stick.
  • High RRv → too many “false starts.”

Post-Decision Adjustment Rate (PDAR)

Plain formula:

PDAR = (# initiatives with major scope/budget changes in < 2 quarters) ÷ (# initiatives)

  • Measures how often plans need big course corrections right after they’re launched.

Example: 10 programs launched, 4 had major changes next quarter → PDAR = 0.40 (40%).

How to read it:

  • Low PDAR → strong upfront grounding.
  • High PDAR → commitments made too early or with weak assumptions.

Option Decay Cost (ODC)

Plain formula (conceptual):

ODC ≈ lost value from decisions delayed past their window of impact

  • Some opportunities lose value the longer we wait.
  • Delays = shrinking upside.

Example: A $1M opportunity delayed long enough to lose 30% potential → ODC = $300k.

How to read it:

  • Low ODC → acting while opportunities are still fresh.
  • High ODC → waiting costs real money and momentum.
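
The remaining three outcome metrics are simple ratios; here is a minimal sketch reproducing the examples above (counts and values are illustrative):

```python
def reversal_rate(num_reversed: int, num_decisions: int) -> float:
    """RRv = # decisions reversed / # total decisions."""
    return num_reversed / num_decisions

def post_decision_adjustment_rate(num_adjusted: int, num_initiatives: int) -> float:
    """PDAR = # initiatives with major changes in < 2 quarters / # initiatives."""
    return num_adjusted / num_initiatives

def option_decay_cost(opportunity_value: float, decay_fraction: float) -> float:
    """ODC ≈ value lost because the decision slipped past its window."""
    return opportunity_value * decay_fraction

print(reversal_rate(5, 20))                  # 0.25 -> 25% false starts
print(post_decision_adjustment_rate(4, 10))  # 0.4  -> premature commitments
print(option_decay_cost(1_000_000, 0.30))    # 300000.0 of upside lost
```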

Quick sanity check (use in meetings)

Ask:

  1. Do our confidence levels match reality? (COD)
  2. How often do we reverse or rework big calls? (RRv, PDAR)
  3. Are delays killing the value of opportunities? (ODC)

If these questions sting, outcomes aren’t living up to the story—classic Decision Debt.


The Composite Index

Think of the Composite Index as a scorecard for Decision Debt. It blends the four most intuitive metrics—one from each pillar—into a single, directional signal:

  • EAR (Evidence-to-Assertion Ratio) → Are claims backed by proof?
  • ALI (Assumption Load Index) → How fragile is the foundation of assumptions?
  • DL (Decision Latency) → How fast do we act once evidence is ready?
  • COD (Confidence–Outcome Divergence) → Do results live up to our confidence?

We normalize each score on a 0–1 scale (0 = good, 1 = bad), weight them to emphasize evidence and assumptions, and combine:

Decision Debt Score (DDS) = (0.35 × risk_EAR) + (0.25 × risk_ALI) + (0.20 × risk_DL) + (0.20 × risk_COD)

Traffic lights for reflection:

  • Green (<0.30): Discipline high, risk low.
  • Amber (0.30–0.55): Some cracks showing.
  • Red (>0.55): Confidence outpacing reality, value at risk.

Use this as a conversation aid, not a KPI. The point isn’t the number itself—it’s the dialogue it sparks: Which pillar is driving risk this quarter? What would change if latency were cut in half?


Deep Dive: The Composite Index (using the four anchor metrics)

Purpose: Roll the four pillars into a single, directional signal for discussion (not a KPI).

Metrics used

  • A: Evidence-to-Assertion Ratio (EAR)
  • B: Assumption Load Index (ALI)
  • C: Decision Latency (DL)
  • D: Confidence–Outcome Divergence (COD)

Normalize each to a 0–1 “risk” scale
  • EAR (higher is better): risk_EAR = 1 − min(1, EAR)
    (Cap EAR at 1.0; if EAR=0.8 → risk_EAR=0.2.)
  • ALI (already 0–1, higher is worse): risk_ALI = ALI
  • DL (higher is worse): choose a sensible cap (e.g., 60 days)
    risk_DL = min(DL / 60, 1) (Adjust cap to your context.)
  • COD (0–1, higher is worse): risk_COD = COD

Weighted composite

DDS = (0.35 × risk_EAR) + (0.25 × risk_ALI) + (0.20 × risk_DL) + (0.20 × risk_COD)

Readout (traffic lights)
  • Green: DDS < 0.30
  • Amber: 0.30–0.55
  • Red: > 0.55

Tiny example (illustrative only)
  • EAR = 0.7 → risk_EAR = 0.3
  • ALI = 0.42 → risk_ALI = 0.42
  • DL = 28 d → risk_DL = 28/60 ≈ 0.47
  • COD = 0.35 → risk_COD = 0.35

DDS = 0.35(0.30) + 0.25(0.42) + 0.20(0.47) + 0.20(0.35) ≈ 0.37 → Amber
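
Putting the normalization, weights, and traffic lights together, here is a minimal sketch that reproduces the tiny example (the 60-day latency cap is the illustrative default from above; adjust it to your context):

```python
def decision_debt_score(ear: float, ali: float, dl_days: float, cod: float,
                        dl_cap: float = 60.0) -> float:
    """DDS = 0.35*risk_EAR + 0.25*risk_ALI + 0.20*risk_DL + 0.20*risk_COD."""
    risk_ear = 1.0 - min(1.0, ear)        # higher EAR is better -> invert
    risk_ali = ali                        # already 0-1, higher is worse
    risk_dl = min(dl_days / dl_cap, 1.0)  # cap latency risk at 1
    risk_cod = cod                        # already 0-1, higher is worse
    return 0.35 * risk_ear + 0.25 * risk_ali + 0.20 * risk_dl + 0.20 * risk_cod

def traffic_light(dds: float) -> str:
    if dds < 0.30:
        return "Green"
    return "Amber" if dds <= 0.55 else "Red"

dds = decision_debt_score(ear=0.7, ali=0.42, dl_days=28, cod=0.35)
print(round(dds, 2), traffic_light(dds))  # 0.37 Amber
```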


Why Institutions Accumulate Debt

Organizations rarely intend to accumulate Decision Debt; it builds gradually, disguised as diligence. The drive for consensus stretches timelines. The fear of being wrong discourages timely judgment. Perfect-data paralysis keeps teams polishing analysis long after the opportunity for impact has passed. Over time, decisions become risk-averse performances of alignment rather than exercises in choice.

Biases like loss aversion (“better to delay than decide and regret”), groupthink (“everyone seems to agree, so it must be right”), and path dependence (“this is how we’ve always done it”) reinforce the cycle. The result is a culture where analysis expands, but commitment contracts.

Decision Debt thrives in that space between what feels safe and what actually creates value—between the comfort of more discussion and the discomfort of decisive action. The longer that gap persists, the more the organization confuses motion for progress and insight for impact.

The Decision Scientist’s Role

Decision Debt won’t appear on a balance sheet, but its effects are real — and someone has to manage it. That’s the work of the Decision Scientist: not to produce more data or dashboards, but to ensure that the organization’s decision capital is compounding, not eroding.

Decision Scientists act as stewards of cognitive capital — the systems, processes, and mindsets that turn information into sound judgment. Where analysts often stop at insight, Decision Scientists carry the work forward to decision integrity: verifying that the evidence is reliable, the assumptions transparent, and the trade-offs explicit before choices are made.

They help service Decision Debt by:

  • Reframing choices in terms of probabilities, expected value, and opportunity cost.
  • Stress-testing options through scenario modeling, simulations, and counterfactuals.
  • Surfacing hidden assumptions and diagnosing bias before it calcifies into strategy.
  • Delivering decision-ready recommendations that connect analytical rigor to business consequences.

In practice, Decision Scientists are the organization’s internal creditors — identifying where cognitive liabilities have accumulated and helping leadership restructure them into strategic assets. Their goal is not more analysis, but better decisions that preserve momentum, reduce risk, and grow value over time.

Why This Matters in the AI Era

Generative and agentic AI magnify both the risk and the reward. On the risk side, AI can flood leaders with outputs—summaries, scenarios, forecasts—faster than organizations can vet them. Volume plus velocity, without evidence standards, turns into Decision Debt at scale: more assertions than proof, more options than choices, and faster propagation of unchecked assumptions. On the reward side, with clear guardrails, AI becomes a force-multiplier for decision science—compressing cycle times from question → evidence → choice.

AI doesn’t replace judgment; it raises the stakes. The organizations that win won’t be those with the most AI, but those with the best evidence standards and decision guardrails—the ones that reduce Decision Debt faster than competitors and convert speed into reliable, value-aligned action.

Conclusion

Decision Debt is real. It may not appear on a balance sheet, but it shows up everywhere: in sluggish strategies, repeated debates, poor data quality and governance, and missed opportunities that quietly drain momentum.

By defining Decision Debt as the accumulating cost of unresolved, biased, or deferred decisions—and by empowering Decision Scientists to manage it—organizations can begin to make the invisible visible. The goal isn’t perfection; it’s progress: transforming decision-making from something reactive and opaque into something deliberate, transparent, and value-creating.

In the AI era, financial balance sheets still matter—but the one that will define competitiveness is the Decision Balance Sheet: a measure of how effectively evidence, assumptions, timeliness, and outcomes are aligned to move the organization forward.


P.S. Below is a reference table with a wider set of optional Decision Debt metrics you can adapt to your context.

Decision Debt – Executive Metric Guide

The metrics used above illustrate just one possible configuration for building a Decision Debt Score. Different contexts—sales vs. marketing, product vs. operations—may call for different substitutes, weights, or entirely new metrics that better reflect how decisions are made and acted upon in your environment. The goal isn’t to enforce a universal formula, but to spark reflection on which factors of evidence, assumptions, timeliness, and outcomes matter most—and how they might be measured or discussed to reduce Decision Debt over time.

| Pillar | Metric | Plain Formula | What It Measures | Executive Read |
|---|---|---|---|---|
| A. Evidence vs. Assertion | Evidence-to-Assertion Ratio (EAR) | EAR = (# proof points) ÷ (# big claims) | Are headline claims actually backed by visible, checkable proof? | High ≈ strong footing; Low ≈ confidence outpacing evidence |
| A. Evidence vs. Assertion | Reproducibility Rate (RR) | RR = (# analyses reproduced) ÷ (# sampled) | Whether independent reviewers can reproduce key analyses. | High ≈ trustworthy process; Low ≈ fragile results |
| A. Evidence vs. Assertion | Lineage Completeness (LC) | LC = (# datasets with sources/transforms) ÷ (# used) | How well data sources and transformations are documented. | High ≈ traceable evidence; Low ≈ “black box” inputs |
| B. Assumption Load & Model Risk | Assumption Load Index (ALI) | ALI = Σ (weightᵢ × uncertaintyᵢ) | How much your recommendation relies on “ifs,” weighted by impact if wrong. | Low ≈ sturdy foundation; High ≈ fragile stack of assumptions |
| B. Assumption Load & Model Risk | Sensitivity Coverage (SC) | SC = (# key drivers stress-tested) ÷ (# key drivers) | Whether major drivers were varied to test robustness. | High ≈ well-tested; Low ≈ brittle model |
| B. Assumption Load & Model Risk | Weighted Calibration Error (wCE) | wCE = Σ[ v×(p−y)² ] ÷ Σv | Gap between predicted probabilities and actual outcomes, weighted by value. | Low ≈ well-calibrated; High ≈ over/under-confidence |
| C. Timeliness & Friction | Decision Latency (DL) | DL = median( decision date − insight ready date ) | How long it takes to make a call once evidence is ready. | Low ≈ fast decisions; High ≈ friction/bottlenecks |
| C. Timeliness & Friction | Rework Tax (RT) | RT = rework hours ÷ total analysis hours | How much effort is spent redoing work vs. advancing. | Low ≈ clean execution; High ≈ wasted cycles |
| C. Timeliness & Friction | Backlog of Material Decisions (BMD) | BMD = value of overdue decisions ÷ total decision value | Share of important decisions stuck past agreed timelines. | Low ≈ little value in limbo; High ≈ stalled value |
| D. Outcome Reality Check | Confidence–Outcome Divergence (COD) | COD = avg gap(confidence, result) | Mismatch between how sure we were and what happened. | Low ≈ confidence matches reality; High ≈ miscalibration |
| D. Outcome Reality Check | Reversal Rate (RRv) | RRv = # reversed ÷ # decisions | How often big calls are walked back or overridden. | Low ≈ calls stick; High ≈ false starts |
| D. Outcome Reality Check | Post-Decision Adjustment Rate (PDAR) | PDAR = # major changes (<2 quarters) ÷ # initiatives | How often plans need major changes soon after launch. | Low ≈ solid commitments; High ≈ premature decisions |
| D. Outcome Reality Check | Option Decay Cost (ODC) | ODC ≈ value lost from delay | Upside lost because decisions missed the timing window. | Low ≈ value captured; High ≈ value slipping away |
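
Of the table’s metrics, Weighted Calibration Error (wCE) is the only one not worked through above; here is a minimal sketch, assuming per-bet records of value v, predicted probability p, and binary outcome y (the example numbers are illustrative):

```python
def weighted_calibration_error(bets: list[tuple[float, float, int]]) -> float:
    """wCE = Σ[ v × (p − y)² ] ÷ Σv.

    Each bet is (value v, predicted probability p, outcome y in {0, 1});
    squared confidence gaps are weighted by the value riding on each bet.
    """
    num = sum(v * (p - y) ** 2 for v, p, y in bets)
    return num / sum(v for v, _, _ in bets)

# A big bet called at 80% confidence that failed dominates the error.
bets = [(500_000, 0.80, 0), (100_000, 0.60, 1), (50_000, 0.90, 1)]
print(round(weighted_calibration_error(bets), 2))  # ~0.52 -> miscalibrated
```
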
[1] The Life Coach School: Ep #264: Decision Debt
[2] Decision Debt: The silent crisis undermining compliance and governance
[3] How to prevent AI from scaling technical debt?
[4] Decision Debt: The Silent Killer of Digital Transformation
[5] The Backlog That Eats Profit, How Decision Debt Quietly Burns Your Quarter
