4. Customer Value, Acquisition, and Marketing ROI

The fourth session shifts attention from platform structure to measurement. Once a platform has some traction, it has to answer a different set of questions: which customers are valuable, which acquisition spend is truly incremental, and how much of observed growth is actually caused by marketing? The original session video is available online [1].

Figure: observed growth versus incremental growth.

4.1 Customer lifetime value is about contribution over time, not signup counts

The notes define customer lifetime value, or CLV, as the expected profit generated by a customer over their relationship with the platform. The conceptual model is straightforward:

  • there is some contribution from the first transaction
  • there may be repeat contribution if the customer returns
  • future value must be discounted because later cash flow is less valuable than immediate cash flow

One convenient stylized expression is:

$$ \mathrm{CLV} \approx m_0 + \sum_{t=1}^{\infty}\frac{r^t m}{(1+d)^t} = m_0 + \frac{rm}{1 + d - r} $$

where $m_0$ is the initial contribution margin, $m$ is the repeat-period contribution, $r$ is the per-period retention probability, and $d$ is the discount rate. The infinite sum is geometric in the ratio $r/(1+d)$, which is what yields the closed form on the right.
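To make the closed form concrete, here is a minimal Python sketch with illustrative numbers (not figures from the session) that checks the geometric-series shortcut against the truncated sum:

```python
def clv(m0, m, r, d):
    """Stylized CLV: initial margin plus discounted repeat contribution.

    m0 : contribution margin from the first transaction
    m  : contribution per repeat period
    r  : per-period retention probability
    d  : per-period discount rate
    """
    return m0 + r * m / (1 + d - r)

def clv_by_sum(m0, m, r, d, horizon=10_000):
    """Same quantity via the truncated sum, as a sanity check."""
    return m0 + sum((r ** t) * m / (1 + d) ** t for t in range(1, horizon + 1))

# Illustrative inputs: $30 first-order margin, $25 per repeat period,
# 60% retention, 10% discount rate.
print(clv(30, 25, 0.60, 0.10))         # closed form: 60.0
print(clv_by_sum(30, 25, 0.60, 0.10))  # agrees closely
```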

The session is careful not to turn this into a mechanical spreadsheet exercise. The more important point is distributional:

  • median behavior can look modest
  • average behavior can be much higher because heavy users contribute disproportionately

The eBay example in the notes captures this nicely: the median buyer makes only a small number of purchases, while the average is much larger because a minority of users are highly active. Platform managers who ignore tails in the distribution will systematically misread customer value.
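A small simulation with made-up numbers illustrates the mechanism: when purchase counts are heavy-tailed, the mean sits well above the median, so judging value by the "typical" user understates the total.

```python
import random
import statistics

random.seed(0)

# Hypothetical purchase counts: most users buy a handful of times,
# while a small minority are extremely active (heavy right tail).
purchases = [round(random.lognormvariate(0.5, 1.2)) for _ in range(100_000)]

print("median purchases:", statistics.median(purchases))
print("mean purchases:  ", statistics.mean(purchases))
# The mean lands well above the median because heavy users
# contribute disproportionately to total customer value.
```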

4.2 Customer acquisition cost: naive CAC is usually too optimistic

Customer acquisition cost, or CAC, is often presented as:

$$ \mathrm{CAC} = \frac{\text{sales and marketing spend}}{\text{new customers}} $$

The session argues that this is frequently wrong in practice because the denominator mixes several very different groups of customers. A baseline CAC explainer [2] is a useful starting point, but the lecture’s point is that platform settings often need a stricter, incrementality-based interpretation of the same metric, one that separates:

  • truly incremental customers
  • customers who would have arrived anyway
  • customers who were merely accelerated by the campaign

The session’s stylized example is simple and powerful:

  • spend $100,000
  • observe 10,000 signups
  • experimental evidence shows only 2,000 were incremental

Then:

  • naive CAC = $10
  • incremental CAC = $50

Those are radically different businesses.
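The arithmetic is trivial, but encoding it makes the point unmistakable: both metrics share a numerator and differ only in what the denominator counts. Using the stylized figures above:

```python
spend = 100_000          # marketing spend in dollars
signups = 10_000         # all observed signups during the campaign
incremental = 2_000      # signups the experiment attributes to the campaign

naive_cac = spend / signups            # $10: counts everyone who showed up
incremental_cac = spend / incremental  # $50: counts only caused signups

print(f"naive CAC:       ${naive_cac:,.0f}")
print(f"incremental CAC: ${incremental_cac:,.0f}")
```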

4.3 Incrementality is the real marketing question

The notes repeatedly warn against attributing all post-campaign behavior to the campaign itself. That is why incrementality becomes the central concept:

  • observed conversions are not the same as caused conversions
  • campaign-exposed users are not a valid counterfactual for themselves
  • correlation between spend and outcomes can arise from selection, timing, or pre-existing demand

The coupon example from the session is especially instructive. A superficial analysis suggested an enormous return on investment because many coupon users purchased. But once only the truly incremental redemptions were counted, the economics flipped and the campaign no longer looked attractive.
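The notes do not reproduce the coupon figures, so the sketch below uses purely illustrative numbers; what matters is the structure of the calculation, ROI on all redemptions versus ROI on incremental redemptions only.

```python
# Illustrative inputs (not the session's actual figures).
redemptions = 5_000            # coupon-attributed purchases
margin_per_purchase = 20.0     # contribution margin per purchase
coupon_cost_total = 60_000.0   # discounts plus campaign cost

# Suppose a randomized holdout shows 80% of redeemers
# would have purchased anyway.
control_purchase_rate = 0.80
incremental_redemptions = redemptions * (1 - control_purchase_rate)

naive_roi = redemptions * margin_per_purchase / coupon_cost_total
incremental_roi = incremental_redemptions * margin_per_purchase / coupon_cost_total

print(f"naive ROI:       {naive_roi:.2f}x")        # 1.67x: looks attractive
print(f"incremental ROI: {incremental_roi:.2f}x")  # 0.33x: the economics flip
```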

This is the right mental model for platform growth:

  • not every acquired user is incremental
  • not every retained user is profitable
  • not every active buyer helps the seller side equally

Good platform measurement therefore asks about incremental ecosystem activity, not just top-line conversion counts.

4.4 Experiments, including multi-channel experiments, are the cleanest answer when they are feasible

The session strongly favors experimental measurement:

  • randomized holdouts
  • controlled tests on channels or geographies
  • natural experiments when direct randomization is difficult

The eBay paid-search example is the memorable case. Shutting off branded keyword advertising revealed that much of the spend had been cannibalizing traffic that would have arrived through organic search anyway. In other words, paid clicks looked valuable in attribution dashboards because the firm was paying for users it already owned.

This is a classic platform-growth trap. Strong brands and habitual users make last-click metrics look better than reality.

The notes also discuss multi-channel experiments, especially 2 x 2 designs, to test whether channels are substitutes or complements. That extension is treated as a real design choice rather than a footnote, because platforms often run overlapping acquisition tactics whose effects are not additive.
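As a sketch of why the 2 x 2 structure matters, the snippet below separates each channel's main effect from the interaction term; the cell values are hypothetical average conversions per market, not data from the session.

```python
# Hypothetical mean conversions per market in a 2 x 2 geo experiment:
# rows = channel A off/on, columns = channel B off/on.
y00, y01 = 100.0, 130.0   # A off: B off, B on
y10, y11 = 125.0, 140.0   # A on:  B off, B on

effect_a = ((y10 - y00) + (y11 - y01)) / 2   # average lift from A
effect_b = ((y01 - y00) + (y11 - y10)) / 2   # average lift from B
interaction = (y11 - y10) - (y01 - y00)      # how B's lift changes when A is on

print(f"channel A lift: {effect_a:+.1f}")
print(f"channel B lift: {effect_b:+.1f}")
print(f"interaction:    {interaction:+.1f}")  # negative: substitutes; positive: complements
```

With these numbers the interaction is -15: each channel lifts conversions on its own, but running both together delivers less than the sum of the parts, which is exactly the non-additivity the notes warn about.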

4.5 Selection bias, endogeneity, and small-sample honesty

The session stresses that observational marketing data are often confounded:

  • high-intent users self-select into certain channels
  • large spend can follow high demand rather than cause it
  • campaign targeting can mirror latent value rather than generate it

This is the same logic that makes platform experimentation difficult in broader economics settings: treatment is often not randomly assigned.

The later Q and A notes add a useful managerial principle for small samples: when the data are thin, honesty is better than false precision. Use qualitative context, acknowledge uncertainty, and design the next experiment rather than pretending the current sample answers more than it does.
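One concrete way to practice that honesty is to report an interval instead of a point estimate. The rough normal-approximation sketch below (a deliberately simple choice, with hypothetical counts) shows how little a thin sample actually pins down.

```python
import math

def lift_ci(conv_t, n_t, conv_c, n_c, z=1.96):
    """Approximate 95% CI for the difference in conversion rates.

    Normal approximation; fine as a rough honesty check,
    not a substitute for a proper power calculation.
    """
    p_t, p_c = conv_t / n_t, conv_c / n_c
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    diff = p_t - p_c
    return diff - z * se, diff + z * se

# Thin sample: 12/200 treated vs 8/200 control convert.
lo, hi = lift_ci(12, 200, 8, 200)
print(f"estimated lift: +2.0 pp, 95% CI: [{lo:+.3f}, {hi:+.3f}]")
# The interval comfortably includes zero: the honest summary is
# "we can't tell yet", not a precise ROI figure.
```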

That is not anti-analytics. It is disciplined analytics.

4.6 Communicating analysis to decision-makers

The session includes an important applied point about executive communication. Data scientists often want to say they have “proven” a result. The lecture argues for a better approach:

  • explain the counterfactual clearly
  • show why the naive metric is misleading
  • tell a plausible causal story backed by evidence
  • connect that story to a concrete decision

This matters because platform firms often have internal incentives that resist unpleasant findings. If a marketing team is rewarded for attributed conversions, then evidence that branded search spend is mostly wasteful will face organizational resistance even when the experiment is clean.

The analytics problem and the organizational problem are therefore linked.

4.7 Guest-speaker lessons: chicken-and-egg problems and abundance mindsets

The guest-speaker material in the notes adds a grounded founder perspective. Two-sided marketplaces often face the classic chicken-and-egg problem:

  • buyers want supply to be present
  • suppliers want demand to be present

That is why early platform strategy often involves extreme manual effort, local focus, subsidy, or hand-built liquidity.

The notes also emphasize an abundance mindset: build around a genuine user pain point instead of starting with the goal of “building a unicorn.” That framing fits the rest of the notes well. Customer economics become much easier to interpret when the platform is solving a real coordination problem instead of manufacturing growth theater.

4.8 The measurement lesson for platform strategy

This session expands the earlier chapters in one important way. Platform growth should not be evaluated only by:

  • signups
  • app installs
  • attributed conversions
  • gross campaign ROI

It should be evaluated by:

  • retained contribution
  • side-specific incrementality
  • quality of acquired users
  • whether acquisition deepens the network in a durable way

That is a much stricter standard, but it is closer to the economics of what platforms are actually trying to build.

4.9 References

  1. EODP-session-4 W22, Vimeo.
  2. Customer Acquisition Cost (CAC): Definition, Formula, and Example.