Bundling in digital goods starts from a basic tension. A full-catalog bundle preserves every complementarity in the library, but it can also be expensive enough to exclude users whose interests are much narrower than the catalog itself. A platform would often like to sell access to subsets instead: sports without prestige drama, documentaries without children’s programming, one game franchise without the rest of the catalog. The problem is that once the catalog is large, this immediately becomes a packaging problem rather than a simple pricing problem. Which subsets should be offered, and how many?
That question is not well described by the textbook problem of choosing between separate sales and one grand bundle for a small list of goods. A modern platform starts from a large catalog rather than a handful of products. It often does not observe reliable ex-ante good-specific willingness to pay for most items. And once the catalog is large, unrestricted menu design becomes combinatorial: with \(J\) items there are \(2^J\) possible bundles. Standard bundling theory explains why bundling can smooth heterogeneity and why richer menus can improve screening, but it does not by itself provide a tractable way to package a large catalog when complementarities are local and unevenly distributed across items.
The restriction studied here is to replace arbitrary subsets with a small number of channels and to choose those channels using the complementarity structure of the catalog. Items are nodes, complementarities are edges, and a channel is a subset of nodes. Once packaging is restricted in this way, the key economic tradeoff becomes very concrete. A channel menu can lower entry prices and bring narrower users back into the market, but splitting the catalog also destroys the value carried by edges that run across channel boundaries. That loss is a graph cut.
```python
fig, _ = plot_intro_graphs(seed=4)
plt.show()
```

The picture is the core intuition. In a dense hairball, many items reinforce many others, so splitting the catalog into channels destroys a great deal of cross-item value. In that case the full bundle is likely to remain attractive despite its high price. In a clustered catalog, by contrast, many complementarities are already internal to a few groups. Then a channel structure aligned with those groups may preserve most of the value while also lowering the price of entry for users who only want one part of the catalog.
The economic question of the paper can therefore be stated in one sentence: when does the revenue gained by serving narrower users with smaller access packages outweigh the revenue lost from cutting complementarities across those packages? The answer depends on two objects. The first is demand elasticity along the extensive margin: how many users are priced out by the full bundle. The second is graph connectedness: how much complementary value is lost when the catalog is partitioned. The point of the graph formulation is that it gives a tractable way to think about the second object while keeping the first one in view.
The paper supports a simple workflow for catalog design. First, use the complementarity graph to diagnose whether the catalog is even plausibly channelable: dense, hard-to-separate graphs favor a single bundle, while graphs with clear low-cut structure make channels plausible candidates. Second, if channels look plausible, use spectral partitioning to generate a small set of candidate channel structures by keeping strongly complementary items together. Third, do not stop at the graph. Price-test the bundle and the candidate channel menus in the relevant demand environment and compare producer surplus, participation, and total surplus.
Sections 3 and 4 justify the graph step. Under standalone channel pricing, every cross-channel edge is monetizable value left on the table, so low-cut partitions are exactly the right objects to seek. Section 6 then adds the missing screening step: with heterogeneous demand, spectral partitions remain useful candidates, but they no longer determine the final pricing outcome. Throughout, spectral methods should therefore be read as a disciplined candidate-generation device rather than a claim that eigenvectors by themselves solve the full heterogeneous bundling problem.
The relationship to the existing literature is straightforward. Mussa and Rosen (1978) show how a seller uses a menu to screen one-dimensional private information. The relevance here is conceptual: one first specifies primitive utility, and only then studies the feasible pricing or packaging problem. But their environment is fundamentally one-dimensional. It does not face the combinatorial explosion that arises when a user’s type is effectively a \(J\)-vector of values over a large catalog.
Adams and Yellen (1976) provide the classic starting point for bundling as multi-product price discrimination. In that literature, bundling reshapes the distribution of willingness to pay and can improve surplus extraction relative to separate sales. A central modern lesson from that work is that the fully general optimum is often a mixed-bundling problem over many subsets. With a large catalog, however, the full mixed-bundling menu is infeasible because it contains \(2^J\) bundles.
Chu, Leslie, and Sorensen (2011) study exactly that feasibility issue. Their main point is that simple restrictions on the menu can approximate the full optimum without requiring a separate price for every subset. The present paper is in that spirit. A channel partition is another feasible restriction: it replaces \(2^J\) item-level bundles with \(2^k\) combinations of channels.
What is different here is the explicit graph-theoretic representation of complementarity. Much of the classic bundling literature works with additive valuations, sometimes with correlation in the distribution of \(v\), but not with an explicit pairwise interaction term in utility. Here utility contains
\[ U(x) = v^\top x + \frac{\alpha}{2} x^\top A x, \]
so complementarity is localized on edges of a graph. That structure speaks to a gap in the literature: there is still no simple, tractable theory of how a platform should package a large catalog when the bundle may be too broad for much of the market, complementarities are sparse and localized, and the full mixed-bundling problem is too large to solve or implement. The contribution here is not a full mechanism design solution. It is a tractable intermediate object: under a natural standalone pricing rule, the packaging problem becomes an explicit graph-cut problem, which then connects directly to spectral partitioning tools.
The paper proceeds from that decision framework to its justification. Section 2 lays out the binary intuition model and states the platform’s packaging problem. Section 3 shows that, under a natural standalone pricing rule for channels, the representative-user problem becomes exactly a graph-cut problem. Section 4 extends the result to weighted graphs, explains why spectral relaxations are natural candidate-generation tools, and turns the theorem into a practical partitioning procedure. Section 5 spells out the numerical implementation of that procedure. Section 6 then adds heterogeneous demand and compares the full bundle to price-optimized candidate channel menus.
There are \(J\) items indexed by \(V = \{1, \ldots, J\}\).
Each user has an intrinsic value vector \(v = (v_1, \ldots, v_J) \in \mathbb{R}^J\). The phrase “intrinsic value” just means standalone value. The component \(v_j\) is the utility the user gets from item \(j\) even if no other complementary items are available. If a particular show, film, or game is valuable on its own, that shows up as a large \(v_j\). If the user were given access only to item \(j\), the intrinsic part of utility would be \(v_j\).
Complementarities are represented by a symmetric binary adjacency matrix \(A \in \{0,1\}^{J \times J}\) with \(A_{ii} = 0\). The interpretation is that \(A_{ij} = 1\) marks items \(i\) and \(j\) as complementary, while \(A_{ij} = 0\) means they are unrelated.
The parameter \(\alpha \ge 0\) scales the strength of complementarity. When \(\alpha\) is large, pairing complementary items creates a large incremental gain. This binary version is only an intuition device; Section 4 returns to the weighted case.
For any access set \(S \subseteq V\), utility is
\[ U(S) = \sum_{i \in S} v_i + \alpha \cdot |E(S)|, \]
where \(|E(S)|\) is the number of adjacency edges with both endpoints in \(S\).
The first term adds up the standalone value of the accessible items. The second term adds the extra gain generated when complementary items are jointly available.
The model is easiest to absorb through a pairwise example. Suppose items 1 and 2 are complementary, so \(A_{12} = 1\). Then giving the user both items yields
\[ U(\{1,2\}) = v_1 + v_2 + \alpha. \]
If items 1 and 3 are unrelated, so \(A_{13} = 0\), then
\[ U(\{1,3\}) = v_1 + v_3. \]
The extra term appears only for connected pairs.
The same utility can be written in matrix form as
\[ U(x) = v^\top x + \frac{\alpha}{2} x^\top A x, \]
where \(x \in \{0,1\}^J\) is the access indicator vector. The factor of one half appears because \(x^\top A x\) counts each undirected edge twice.
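The factor of one half is easy to check numerically. The sketch below uses hypothetical values for a three-item catalog in which items 1 and 2 are complementary; the helper `utility` is illustrative, not a function from the paper's code.

```python
import numpy as np

# Hypothetical 3-item catalog: items 1 and 2 are complementary, item 3 is isolated.
v = np.array([2.0, 1.5, 1.0])
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
alpha = 0.5

def utility(x, v, A, alpha):
    # Matrix form v'x + (alpha/2) x'Ax; the 1/2 undoes the double count of
    # each undirected edge inside x'Ax.
    x = np.asarray(x, dtype=float)
    return float(v @ x + 0.5 * alpha * x @ A @ x)

print(utility([1, 1, 0], v, A, alpha))  # v1 + v2 + alpha = 4.0
print(utility([1, 0, 1], v, A, alpha))  # v1 + v3 = 3.0 (no edge, no bonus)
```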
The platform does not offer arbitrary subsets of items. Instead, it chooses a partition \(\mathcal{P} = (C_1, \ldots, C_k)\) of the catalog, where \(C_a \cap C_b = \varnothing\) for \(a \neq b\) and \(\cup_a C_a = V\). These sets are the channels.
The platform posts a channel price \(p_a\) for each \(C_a\). A user chooses a subset of channels \(T \subseteq \{1, \ldots, k\}\) and thereby gets access to
\[ S(T) := \bigcup_{a \in T} C_a. \]
Net utility is
\[ U(S(T)) - \sum_{a \in T} p_a, \]
with outside option \(0\).
This restriction matters because it compresses the menu. Instead of pricing all \(2^J\) possible subsets of items, the platform prices \(k\) channels, which gives the user only \(2^k\) channel combinations to consider.
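The compression is visible by direct enumeration. A sketch with hypothetical channel names:

```python
from itertools import chain, combinations

channels = ["C1", "C2", "C3"]  # k = 3 channels partitioning a large catalog

# Every subset of channels a user could buy, including the empty set.
menu = list(chain.from_iterable(
    combinations(channels, r) for r in range(len(channels) + 1)
))
print(len(menu))  # 2**3 = 8 channel combinations, versus 2**J subsets of items
```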
If the platform sells the entire catalog as one bundle, the maximum surplus-extracting bundle price and revenue are
\[ p_B^\ast = R_B = U(V) = \sum_{i=1}^J v_i + \alpha \cdot |E(V)|. \]
This is the benchmark against which channel partitioning will be evaluated.
At a high level, the platform chooses between two menu forms: selling the whole catalog as a single grand bundle at one price, or partitioning the catalog into channels and posting one price per channel.
The user’s problem is always the same: given the offered menu, choose the subset of packages that maximizes net utility. What changes across the paper is how prices are assigned once a partition is fixed. In Sections 3 and 4 the prices are pinned down by the standalone rule, which makes the partition choice the central economic object. In Section 6 the prices are optimized numerically under heterogeneous demand, so the graph no longer solves the whole problem but still organizes the set of plausible channel menus.
Consider the simple pricing rule that sets each channel price equal to that channel’s standalone value:
\[ p_a := U(C_a) = \sum_{i \in C_a} v_i + \alpha \cdot |E(C_a)|. \]
If the user buys all channels, revenue is
\[ R(\mathcal{P}) = \sum_{a=1}^k p_a = \sum_{i=1}^J v_i + \alpha \cdot \sum_{a=1}^k |E(C_a)|. \]
Proposition 1. Under standalone channel pricing,
\[ R_B - R(\mathcal{P}) = \alpha \cdot \operatorname{cut}(\mathcal{P}), \]
where \(\operatorname{cut}(\mathcal{P})\) is the number of complementarity edges with endpoints in different channels.
Equivalently,
\[ \operatorname{cut}(\mathcal{P}) = |E(V)| - \sum_{a=1}^k |E(C_a)|. \]
The interpretation is immediate. Every edge that stays inside a channel can be monetized by standalone channel pricing. Every edge that crosses channel boundaries generates surplus that this pricing rule fails to extract. Under this rule, good channel design is therefore equivalent to keeping the cut small.
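Proposition 1 can be checked directly on a toy graph. The sketch below uses a hypothetical four-item catalog with a two-channel partition; variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

# Items {0,1} form channel 0 and items {2,3} form channel 1.
# Edges: (0,1) and (2,3) stay within channels; (0,3) crosses the cut.
v = np.ones(4)
A = np.array([[0.0, 1.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0, 0.0]])
alpha = 0.3
labels = np.array([0, 0, 1, 1])

total_edges = 0.5 * A.sum()                      # |E(V)| = 3
same = labels[:, None] == labels[None, :]
within_edges = 0.5 * A[same].sum()               # edges kept inside channels = 2
cut_edges = total_edges - within_edges           # cut(P) = 1

R_B = v.sum() + alpha * total_edges              # grand-bundle revenue
R_P = v.sum() + alpha * within_edges             # standalone channel revenue
assert np.isclose(R_B - R_P, alpha * cut_edges)  # Proposition 1
```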
To make the decomposition concrete, the first numerical exercise holds fixed a single representative user, zero marginal costs, and full access to the catalog under either packaging policy. This exercise is not about exclusion. It is a theorem check. The bundle extracts all surplus, so bundle consumer surplus is zero by construction. Under standalone channel pricing, the same user still obtains the full catalog, but the cross-channel complementarity term is left on the table as consumer surplus. The table therefore shows a pure redistribution from producer surplus to consumer surplus, with total surplus unchanged.
```python
representative_results_table(representative_user_experiment())
```

**Representative-user cut decomposition.** Bundle and channel welfare under standalone channel pricing for the same realized catalog.

| Partition | Within-channel weight | Cut weight | Bundle producer surplus | Bundle consumer surplus | Bundle total surplus | Channel producer surplus | Channel consumer surplus | Channel total surplus | Revenue gap | \(\alpha \times\) cut weight |
|---|---|---|---|---|---|---|---|---|---|---|
| Best Cut | 11.636 | 3.467 | 16.127 | 0.000 | 16.127 | 13.527 | 2.601 | 16.127 | 2.601 | 2.601 |
| Spectral | 11.503 | 3.600 | 16.127 | 0.000 | 16.127 | 13.428 | 2.700 | 16.127 | 2.700 | 2.700 |
| True Blocks | 11.503 | 3.600 | 16.127 | 0.000 | 16.127 | 13.428 | 2.700 | 16.127 | 2.700 | 2.700 |
| Random | 5.760 | 9.343 | 16.127 | 0.000 | 16.127 | 9.120 | 7.007 | 16.127 | 7.007 | 7.007 |

All surplus measures are reported per user. In this theorem-check exercise, bundle consumer surplus is zero by construction.
```python
adjacency, true_labels = sample_weighted_block_graph(
    cluster_sizes=[4, 4],
    w_in=1.0,
    w_out=0.24,
    jitter=0.12,
    seed=7,
)
spectral_labels, _ = spectral_partition(adjacency, n_channels=2, seed=7)
fig, ax = plt.subplots(figsize=(5.5, 5.0))
plot_adjacency_with_partition(adjacency, spectral_labels, ax=ax)
plt.show()
```

The representative-user experiment verifies the decomposition exactly. For each candidate partition, the revenue gap equals \(\alpha\) times the cut weight. Economically, that is the cleanest statement in the paper. Computationally, it also shows why partition quality matters: the random partition performs poorly because it severs many edges, while the spectral partition comes close to the best cut.
The binary model is the right place to build intuition, but the natural generalization lets complementarities vary in strength. Replace \(A \in \{0,1\}^{J \times J}\) with a nonnegative weighted matrix \(A \in \mathbb{R}_+^{J \times J}\). Utility then becomes
\[ U(x) = v^\top x + \frac{\alpha}{2} x^\top A x. \]
The same decomposition survives:
\[ R_B - R(\mathcal{P}) = \alpha \, \operatorname{cut}_A(\mathcal{P}), \]
where \(\operatorname{cut}_A(\mathcal{P})\) is now the weighted cut.
For a two-way partition \(S \subseteq V\), let \(D\) be the diagonal degree matrix and \(L = D - A\) the Laplacian. If membership is encoded by \(s \in \{-1,+1\}^J\), then
\[ \operatorname{cut}_A(S) = \frac{1}{4} s^\top L s. \]
This is where spectral methods enter. Relaxing the discrete membership vector turns the cut problem into an eigenvector problem, so the Fiedler vector and related spectral objects become natural heuristics for finding low-cut partitions.
The qualification is important: the spectrum is not itself the pricing theorem. The pricing theorem is Proposition 1. Spectral methods matter because they give tractable approximations to the cut objective induced by that proposition.
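Both the quarter identity and the spectral heuristic are easy to verify numerically. The sketch below uses a random weighted graph (illustrative only) to check \(\operatorname{cut}_A(S) = \frac{1}{4} s^\top L s\) and to extract the Fiedler vector from the combinatorial Laplacian.

```python
import numpy as np

rng = np.random.default_rng(0)
J = 8
W = np.triu(rng.random((J, J)), 1)
A = W + W.T                           # symmetric nonnegative weights, zero diagonal
L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian L = D - A

# Quarter identity: cut_A(S) = (1/4) s' L s for membership s in {-1,+1}^J.
s = rng.choice([-1.0, 1.0], size=J)
cut_quadratic = 0.25 * s @ L @ s
cut_direct = 0.5 * A[s[:, None] != s[None, :]].sum()
assert np.isclose(cut_quadratic, cut_direct)

# Spectral relaxation: the sign pattern of the Fiedler vector (eigenvector of
# the second-smallest Laplacian eigenvalue) is the classic low-cut heuristic.
_, eigvecs = np.linalg.eigh(L)       # eigh returns eigenvalues in ascending order
candidate = np.where(eigvecs[:, 1] >= 0.0, 1.0, -1.0)
```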
Proposition 1 identifies the graph-side loss from channelization: under standalone pricing, every edge cut across channels is complementary value that cannot be monetized channel by channel. That turns the cut theorem into a practical workflow.
This is deliberately weaker than an optimal spectral decision rule. The theorem is exact for the complementarity-preservation margin. The final bundle-versus-channels comparison still has to be carried by the demand side.
This section spells out exactly what the code is doing in the two numerical exercises. The purpose is modest. The simulations are not meant to solve the general model; they are meant to transparently illustrate the cut decomposition and then show how heterogeneity reintroduces a nontrivial bundle-versus-channels tradeoff.
For any pricing policy, let \(S_u\) denote the subset chosen by user \(u\) and let \(P(S_u)\) denote the payment associated with that choice. If access cost is \(K(S_u)\), then the welfare objects are
\[ \mathrm{CS} = \frac{1}{N} \sum_{u=1}^N \left[V_u(S_u) - P(S_u)\right], \]
\[ \mathrm{PS} = \frac{1}{N} \sum_{u=1}^N \left[P(S_u) - K(S_u)\right], \]
and
\[ \mathrm{TS} = \mathrm{CS} + \mathrm{PS} = \frac{1}{N} \sum_{u=1}^N \left[V_u(S_u) - K(S_u)\right]. \]
The baseline exercises set \(K(S_u) = 0\), so producer surplus is just revenue and total surplus is realized gross utility. The formulas also make clear how to extend the exercises to nonzero marginal costs: one simply subtracts the cost of the chosen access set in producer and total surplus.
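With \(K(S_u) = 0\) the accounting reduces to a few lines. A minimal sketch with hypothetical per-user gross values and payments; a user taking the outside option contributes zeros to all three averages.

```python
import numpy as np

gross_values = np.array([6.0, 3.5, 0.0])  # V_u(S_u); third user takes the outside option
payments     = np.array([4.0, 3.0, 0.0])  # P(S_u)

CS = np.mean(gross_values - payments)     # consumer surplus per user
PS = np.mean(payments)                    # producer surplus = revenue when K = 0
TS = CS + PS
assert np.isclose(TS, np.mean(gross_values))  # total surplus = realized gross utility
```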
The representative-user exercise is the direct numerical implementation of Proposition 1.
Because bundle revenue and channel revenue are computed from the same sampled graph, this exercise is an exact numerical check of the theorem:
\[ R_B - R(\mathcal{P}) = \alpha \, \operatorname{cut}_A(\mathcal{P}). \]
The representative-user simulation is therefore about partition quality, not about price optimization. The only pricing rule used is the standalone rule from Section 3, and the welfare accounting simply records where the cross-edge term ends up.
The heterogeneous-demand exercise adds a screening problem on top of the same graph structure. Each user \(u\) draws an idiosyncratic intrinsic value vector \(v_u\) and values an access set \(S\) according to
\[ V_u(S) = v_u^\top z(S) + \frac{\alpha}{2} z(S)^\top A z(S), \]
where \(z(S)\) is the indicator vector for access to the union of purchased channels.
The bundle problem is then solved by finite-sample monopoly pricing. Given each user’s gross value for the grand bundle, the code searches over the distinct observed willingness-to-pay values and chooses the price that maximizes
\[ p \cdot \Pr\{V_u(V) \ge p\}. \]
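A minimal sketch of that search, with a hypothetical helper name and sample values: in a finite sample the revenue-maximizing posted price is always one of the observed willingness-to-pay values, so scanning the distinct values is exhaustive.

```python
import numpy as np

def monopoly_bundle_price(wtp):
    # Scan the distinct observed WTPs; the optimal posted price in a finite
    # sample is always one of them.
    wtp = np.asarray(wtp, dtype=float)
    best_price, best_revenue = 0.0, 0.0
    for p in np.unique(wtp):
        revenue = p * np.mean(wtp >= p)  # price times served share
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price, best_revenue

# Three narrow users and one broad user: serving everyone at 2 beats
# extracting 5 from the single high type (5 * 1/4 = 1.25 < 2.0).
price, revenue = monopoly_bundle_price([2.0, 2.0, 2.0, 5.0])
print(price, revenue)  # 2.0 2.0
```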
The channel problem is solved as a linear posted-price menu over channels. For a two-channel partition with price vector \(p = (p_1, p_2)\), each user chooses the subset of channels that maximizes net utility. Average revenue is therefore
\[ \frac{1}{N} \sum_{u=1}^N \sum_{a=1}^2 p_a \, y_{ua}(p), \]
where \(y_{ua}(p)\) indicates whether user \(u\) purchases channel \(a\) at price \(p\). The code searches for the revenue-maximizing linear price vector on a dense finite grid. In the baseline comparison tables that grid has size \(41 \times 41\); in the phase diagram it uses a slightly coarser \(35 \times 35\) grid for speed. Once the optimal price vector is found, the code computes purchase rates, consumer surplus, producer surplus, and total surplus from the users’ chosen subsets.
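A stripped-down version of that grid search for two channels can be sketched as follows; the helper and the numbers are illustrative (the real implementation lives in `heterogeneous_partition_comparison()`). Each user's gross values for the three nonempty access sets are precomputed, with any cross-channel complementarity folded into the both-channels column.

```python
import numpy as np
from itertools import product

def best_linear_prices(values, grid):
    # Rows of `values`: one user's gross value for channel 1 only, channel 2
    # only, and both channels (complementarity included in the last column).
    best_prices, best_revenue = (0.0, 0.0), 0.0
    for p1, p2 in product(grid, repeat=2):
        prices = np.array([p1, p2, p1 + p2])  # linear menu: both-channel price is additive
        net = values - prices
        choice = net.argmax(axis=1)
        buys = net.max(axis=1) > 0.0          # outside option yields net utility 0
        revenue = np.where(buys, prices[choice], 0.0).mean()
        if revenue > best_revenue:
            best_prices, best_revenue = (p1, p2), revenue
    return best_prices, best_revenue

# Two specialists and one generalist (hypothetical numbers).
values = np.array([[4.0, 0.5, 4.6],
                   [0.5, 4.0, 4.6],
                   [3.0, 3.0, 7.0]])
prices, revenue = best_linear_prices(values, np.linspace(0.0, 5.0, 21))
```

The strict inequality in `buys` is a tie-breaking choice: an indifferent user stays out, which matters only on a measure-zero set of the price grid.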
The partition comparison is then carried out for the true block partition, the spectral partition, and a random partition. This makes the exercise useful on two margins at once: it tests whether graph-aware partitions beat an uninformed random split, and it tests whether the best channel menu beats the uniform bundle once prices are re-optimized.
The representative-user exercise and the heterogeneous-user exercise use the same partitioning tool for different reasons. In the representative-user exercise, spectral partitioning is evaluated as a tractable approximation to the exact low-cut objective induced by Proposition 1. In the heterogeneous-user exercise, the same spectral partition is only a candidate channel menu. Heterogeneous demand adds a screening problem on top of the graph, so the final comparison is not bundle versus theorem-implied optimal cut. It is bundle versus price-optimized candidate partitions generated by the graph. Spectral methods therefore solve the complementarity-preservation subproblem, not the full heterogeneous pricing problem.
The representative-user benchmark is useful for isolating the graph-cut logic, but it is too favorable to bundling to generate a genuine bundle-versus-channels tradeoff. To create that tradeoff, the numerical extension introduces a simple screening environment: users are drawn from a mixture of specialists, whose interests concentrate on one block of the catalog, and generalists, who value items across blocks.
Put differently, this section does not ask the graph to solve the whole problem. It asks whether low-cut channel candidates generated from the graph remain attractive after prices are re-optimized against heterogeneous users.
This adds a familiar economic force. A bundle preserves all complementarities, but channels allow the seller to screen users by the scope of their interests. Narrow users can buy one channel, while broad users can buy both. In this setting, the bundle can be expensive enough that specialists are priced out entirely, so the relevant comparison is no longer just producer surplus. The tables below therefore report consumer surplus, producer surplus, total surplus, and purchase rates alongside prices.
```python
heterogeneous_results_table(
    heterogeneous_partition_comparison(
        alpha=0.05,
        generalist_share=0.60,
        seed=11,
        n_users=800,
        grid_size=41,
    ),
    title="Low-complementarity market",
    subtitle=(
        "Channels can expand participation because the uniform bundle price is "
        "high relative to many users' narrow interests."
    ),
)
```

**Low-complementarity market.** Channels can expand participation because the uniform bundle price is high relative to many users' narrow interests.

| Menu | Cut weight | Price 1 | Price 2 | Producer surplus | Consumer surplus | Total surplus | Purchase rate | Average channels purchased |
|---|---|---|---|---|---|---|---|---|
| *Uniform bundle* | | | | | | | | |
| All items | 0.000 | 10.423 | — | 6.215 | 0.188 | 6.403 | 59.6% | 0.596 |
| *Channel menu* | | | | | | | | |
| True blocks | 2.857 | 5.318 | 5.038 | 8.075 | 0.439 | 8.514 | 100.0% | 1.560 |
| Spectral | 2.857 | 5.318 | 5.038 | 8.075 | 0.439 | 8.514 | 100.0% | 1.560 |
| Random | 5.237 | 7.837 | 2.519 | 6.682 | 0.273 | 6.955 | 79.0% | 1.389 |

Producer, consumer, and total surplus are averages per user. Purchase rate is the share selecting a nonempty option.
This is the case that matches the extensive-margin intuition from the introduction. The uniform bundle is expensive enough that a large share of the market is priced out, while the channel menu serves everyone. In this parameter region, channels raise producer surplus, consumer surplus, and total surplus. The gain does not come from preserving more complementarities than the bundle; it comes from lowering effective entry prices and expanding participation.
```python
heterogeneous_results_table(
    heterogeneous_partition_comparison(
        alpha=0.25,
        generalist_share=0.30,
        seed=11,
        n_users=800,
        grid_size=41,
    ),
    title="High-complementarity market",
    subtitle=(
        "Once cross-channel complementarities are strong, preserving the bundle "
        "becomes more valuable than the extra screening flexibility of channels."
    ),
)
```

**High-complementarity market.** Once cross-channel complementarities are strong, preserving the bundle becomes more valuable than the extra screening flexibility of channels.

| Menu | Cut weight | Price 1 | Price 2 | Producer surplus | Consumer surplus | Total surplus | Purchase rate | Average channels purchased |
|---|---|---|---|---|---|---|---|---|
| *Uniform bundle* | | | | | | | | |
| All items | 0.000 | 9.042 | — | 8.962 | 1.658 | 10.621 | 99.1% | 0.991 |
| *Channel menu* | | | | | | | | |
| True blocks | 2.857 | 6.694 | 6.342 | 8.603 | 0.451 | 9.054 | 100.0% | 1.319 |
| Spectral | 2.857 | 6.694 | 6.342 | 8.603 | 0.451 | 9.054 | 100.0% | 1.319 |
| Random | 5.237 | 5.989 | 2.818 | 7.786 | 2.322 | 10.108 | 100.0% | 1.637 |

Producer, consumer, and total surplus are averages per user. Purchase rate is the share selecting a nonempty option.
At \(\alpha = 0.25\), cutting the graph is much more expensive. The bundle then regains the lead because preserving cross-channel complementarities becomes more important than the additional screening flexibility of channels. Here the bundle’s higher willingness to pay more than offsets the exclusion margin.
The next figure sweeps over complementarity strength and the market share of generalists. Positive values mean channels beat the bundle. Negative values mean the bundle wins.
```python
alpha_grid = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25]
generalist_grid = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60]
diagram = phase_diagram(
    alpha_grid=alpha_grid,
    generalist_grid=generalist_grid,
    seed=23,
    n_users=700,
    grid_size=35,
)
diagram.head()
```

| | alpha | generalist_share | bundle_producer_surplus | bundle_consumer_surplus | bundle_total_surplus | bundle_purchase_rate | channel_producer_surplus | channel_consumer_surplus | channel_total_surplus | channel_purchase_rate | channel_minus_bundle | channel_minus_bundle_total_surplus |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.0 | 0.1 | 5.306283 | 0.720114 | 6.026397 | 0.991429 | 5.228297 | 0.480981 | 5.709278 | 0.997143 | -0.077986 | -0.317119 |
| 1 | 0.0 | 0.2 | 5.306353 | 1.205854 | 6.512207 | 0.991429 | 5.594665 | 0.792492 | 6.387157 | 1.000000 | 0.288312 | -0.125050 |
| 2 | 0.0 | 0.3 | 5.281506 | 1.530309 | 6.811816 | 0.984286 | 5.989076 | 0.620926 | 6.610002 | 1.000000 | 0.707569 | -0.201814 |
| 3 | 0.0 | 0.4 | 5.313187 | 1.998871 | 7.312057 | 0.991429 | 6.426653 | 0.791058 | 7.217711 | 1.000000 | 1.113466 | -0.094346 |
| 4 | 0.0 | 0.5 | 5.286213 | 2.380672 | 7.666885 | 0.995714 | 6.949465 | 0.474476 | 7.423942 | 1.000000 | 1.663252 | -0.242943 |
```python
fig, ax = plt.subplots(figsize=(7.5, 5.0))
plot_phase_diagram(
    diagram,
    value_column="channel_minus_bundle",
    title="Channel producer surplus minus bundle producer surplus",
    colorbar_label="Producer surplus difference",
    ax=ax,
)
plt.show()
```

```python
fig, ax = plt.subplots(figsize=(7.5, 5.0))
plot_phase_diagram(
    diagram,
    value_column="channel_minus_bundle_total_surplus",
    title="Channel total surplus minus bundle total surplus",
    colorbar_label="Total surplus difference",
    ax=ax,
)
plt.show()
```

The two heatmaps separate pricing from welfare. Higher \(\alpha\) shifts the market toward bundling because cutting destroys more complementary value. At the same time, the region where channels raise total surplus is smaller than the region where they raise producer surplus. That is exactly what one should expect: channel cutting can be privately profitable even when it does not maximize realized surplus, but when demand is elastic enough it can improve both.
The graph formulation does not replace standard bundling theory; it isolates the part of the problem created by catalog complementarities and turns it into a workable design procedure. First diagnose whether the catalog is hard or easy to cut. Then use spectral methods to generate low-cut channel candidates. Then price-test those candidates against the full bundle in the relevant demand environment. In the representative-user benchmark this graph step is exact under standalone channel pricing. With heterogeneous demand it becomes a disciplined candidate-generation step inside a broader screening problem.
The main experiment module is experiments/applied_theory.py. It is organized around three numerical exercises.
Representative-user cut decomposition
representative_user_experiment() samples a weighted block graph, fixes a representative consumer with common intrinsic values across items, and compares several two-way partitions: the latent blocks, the spectral partition, a random partition, and the brute-force best cut. This exercise is a theorem check. It verifies that the bundle-versus-channels revenue gap equals alpha * cut_weight under standalone channel pricing, and reports the associated producer surplus, consumer surplus, and total surplus.
Heterogeneous-demand bundle versus channels
heterogeneous_partition_comparison() adds a pricing problem. Users are drawn from a simple specialist/generalist mixture, the bundle price is optimized on the finite sample, and channel prices are optimized over a dense linear-price grid. The comparison is run for the true block partition, the spectral partition, and a random partition. The output reports prices, purchase rates, producer surplus, consumer surplus, total surplus, and the average number of channels purchased.
Comparative statics and phase diagrams
phase_diagram() sweeps over complementarity strength alpha and the share of generalists in the market. For each parameter pair it compares the bundle to the spectral channel partition and records the producer-surplus and total-surplus differences. These results feed the heatmaps used in the paper.
Supporting plotting utilities include plot_intro_graphs() for the stylized hairball figures, plot_adjacency_with_partition() for reordered adjacency matrices, and plot_phase_diagram() for the comparative-statics heatmaps.
"""Applied theory formulation and numerical experiments for graph-cut bundling.
This module keeps the theory intentionally simple:
1. A weighted complementarity graph A captures pairwise gains from joint access.
2. Users have heterogeneous intrinsic values over items.
3. The platform either sells the full bundle or a linear menu of channels.
4. Under standalone channel pricing for a representative user, the revenue gap
relative to the full bundle is exactly alpha times the weighted cut.
The heterogeneous-user experiments then quantify how screening gains can offset
the cut loss from partitioning.
"""
from __future__ import annotations
import itertools
from dataclasses import dataclass
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import pandas as pd
from great_tables import GT, md
from scipy.sparse import csgraph
from sklearn.cluster import KMeans
@dataclass(frozen=True)
class PricingSolution:
"""Optimal linear prices and the induced purchase pattern."""
prices: np.ndarray
revenue: float
shares: np.ndarray
def canonicalize_labels(labels: np.ndarray) -> np.ndarray:
"""Relabel clusters in order of first appearance."""
mapping: dict[int, int] = {}
next_label = 0
out = np.empty_like(labels)
for index, label in enumerate(labels):
key = int(label)
if key not in mapping:
mapping[key] = next_label
next_label += 1
out[index] = mapping[key]
return out
def sample_weighted_block_graph(
cluster_sizes: list[int],
w_in: float = 1.0,
w_out: float = 0.18,
jitter: float = 0.08,
seed: int = 0,
) -> tuple[np.ndarray, np.ndarray]:
"""Sample a non-negative weighted block graph."""
rng = np.random.default_rng(seed)
labels = np.concatenate(
[np.full(size, cluster, dtype=int) for cluster, size in enumerate(cluster_sizes)]
)
n_items = int(labels.size)
adjacency = np.zeros((n_items, n_items), dtype=float)
for i in range(n_items):
for j in range(i + 1, n_items):
base_weight = w_in if labels[i] == labels[j] else w_out
draw = base_weight * (1.0 + jitter * rng.normal())
adjacency[i, j] = max(draw, 0.0)
adjacency[j, i] = adjacency[i, j]
np.fill_diagonal(adjacency, 0.0)
return adjacency, labels
def sample_binary_graph(n_items: int, edge_probability: float, seed: int = 0) -> np.ndarray:
"""Sample an undirected binary graph."""
rng = np.random.default_rng(seed)
adjacency = (rng.random((n_items, n_items)) < edge_probability).astype(float)
adjacency = np.triu(adjacency, 1)
adjacency = adjacency + adjacency.T
np.fill_diagonal(adjacency, 0.0)
return adjacency
def sample_binary_block_graph(
cluster_sizes: list[int],
p_in: float = 0.7,
p_out: float = 0.08,
seed: int = 0,
) -> tuple[np.ndarray, np.ndarray]:
"""Sample a binary stochastic block graph."""
rng = np.random.default_rng(seed)
labels = np.concatenate(
[np.full(size, cluster, dtype=int) for cluster, size in enumerate(cluster_sizes)]
)
n_items = int(labels.size)
adjacency = np.zeros((n_items, n_items), dtype=float)
for i in range(n_items):
for j in range(i + 1, n_items):
probability = p_in if labels[i] == labels[j] else p_out
if rng.random() < probability:
adjacency[i, j] = 1.0
adjacency[j, i] = 1.0
return adjacency, labels
def total_weight(adjacency: np.ndarray) -> float:
"""Total undirected edge weight."""
return float(0.5 * adjacency.sum())
def within_weight(adjacency: np.ndarray, labels: np.ndarray) -> float:
"""Total undirected weight that remains within channels."""
same_channel = labels[:, None] == labels[None, :]
return float(0.5 * adjacency[same_channel].sum())
def cut_weight(adjacency: np.ndarray, labels: np.ndarray) -> float:
"""Total undirected weight that is cut by the partition."""
return float(total_weight(adjacency) - within_weight(adjacency, labels))
def bundle_revenue_representative(v: np.ndarray, adjacency: np.ndarray, alpha: float) -> float:
"""Full-surplus extraction from the grand bundle for one representative user."""
return float(v.sum() + alpha * total_weight(adjacency))
def channel_revenue_standalone(
v: np.ndarray, adjacency: np.ndarray, alpha: float, labels: np.ndarray
) -> float:
"""Standalone pricing revenue for a representative user who buys all channels."""
return float(v.sum() + alpha * within_weight(adjacency, labels))
def representative_user_gap_identity(
v: np.ndarray, adjacency: np.ndarray, alpha: float, labels: np.ndarray
) -> tuple[float, float]:
"""Return both sides of the exact gap identity for verification."""
revenue_gap = bundle_revenue_representative(v, adjacency, alpha) - channel_revenue_standalone(
v, adjacency, alpha, labels
)
cut_loss = alpha * cut_weight(adjacency, labels)
return float(revenue_gap), float(cut_loss)
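# Illustrative self-check (an added sketch using only numpy, not part of the
# original pipeline): on a toy random graph, the representative-user revenue
# gap between the grand bundle and standalone channel pricing equals alpha
# times the weight cut by the partition.
def _gap_identity_sketch(seed: int = 0, alpha: float = 0.7) -> tuple[float, float]:
    """Return (revenue gap, alpha * cut weight) for a toy random graph."""
    rng = np.random.default_rng(seed)
    adjacency = rng.random((6, 6))
    adjacency = np.triu(adjacency, 1)
    adjacency = adjacency + adjacency.T  # symmetric, zero diagonal
    labels = np.array([0, 0, 0, 1, 1, 1])
    same_channel = labels[:, None] == labels[None, :]
    total = 0.5 * adjacency.sum()
    within = 0.5 * adjacency[same_channel].sum()
    v = np.full(6, 0.6)
    gap = (v.sum() + alpha * total) - (v.sum() + alpha * within)
    return float(gap), float(alpha * (total - within))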
def build_membership(labels: np.ndarray) -> np.ndarray:
"""Construct a partition matrix with one-hot rows."""
labels = canonicalize_labels(labels)
n_items = labels.size
n_channels = int(labels.max()) + 1
membership = np.zeros((n_items, n_channels), dtype=int)
membership[np.arange(n_items), labels] = 1
return membership
def spectral_partition(
adjacency: np.ndarray, n_channels: int, seed: int = 0, normalized: bool = True
) -> tuple[np.ndarray, np.ndarray]:
"""Partition items with Laplacian spectral clustering."""
laplacian = csgraph.laplacian(adjacency, normed=normalized)
eigenvalues, eigenvectors = np.linalg.eigh(laplacian)
    # eigh returns eigenvalues in ascending order, so the leading columns are
    # the smallest-eigenvalue eigenvectors that carry the cluster structure.
    embedding = eigenvectors[:, :n_channels]
row_norms = np.linalg.norm(embedding, axis=1, keepdims=True)
row_norms[row_norms == 0.0] = 1.0
embedding = embedding / row_norms
model = KMeans(n_clusters=n_channels, n_init=50, random_state=seed)
labels = canonicalize_labels(model.fit_predict(embedding))
return labels, eigenvalues
def random_partition(n_items: int, n_channels: int, seed: int = 0) -> np.ndarray:
"""Sample a random non-empty partition."""
if n_channels > n_items:
raise ValueError("n_channels cannot exceed n_items")
rng = np.random.default_rng(seed)
while True:
labels = rng.integers(0, n_channels, size=n_items)
if np.all(np.bincount(labels, minlength=n_channels) > 0):
return canonicalize_labels(labels)
def iter_nonempty_partitions(n_items: int, n_channels: int):
"""Enumerate canonical non-empty partitions for small brute-force checks."""
seen: set[tuple[int, ...]] = set()
for assignment in itertools.product(range(n_channels), repeat=n_items):
labels = np.array(assignment, dtype=int)
if np.any(np.bincount(labels, minlength=n_channels) == 0):
continue
canonical = canonicalize_labels(labels)
key = tuple(int(value) for value in canonical)
if key in seen:
continue
seen.add(key)
yield canonical
def best_cut_partition(adjacency: np.ndarray, n_channels: int) -> tuple[np.ndarray, float]:
"""Brute-force the minimum weighted cut partition for small problems."""
n_items = adjacency.shape[0]
best_labels: np.ndarray | None = None
best_cut = np.inf
for labels in iter_nonempty_partitions(n_items, n_channels):
weight = cut_weight(adjacency, labels)
if weight < best_cut:
best_cut = weight
best_labels = labels.copy()
if best_labels is None:
raise RuntimeError("Failed to find a non-empty partition.")
return best_labels, float(best_cut)
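# Illustrative sketch (added, self-contained): on a weighted path 0-1-2-3 with
# a weak middle edge, the minimum two-way cut severs exactly that edge,
# splitting the catalog into {0, 1} and {2, 3}.
def _min_cut_path_example() -> float:
    """Brute-force the minimum nonempty two-way cut of a four-node path."""
    adjacency = np.zeros((4, 4))
    for i, j, w in [(0, 1, 1.0), (1, 2, 0.1), (2, 3, 1.0)]:
        adjacency[i, j] = adjacency[j, i] = w
    best = np.inf
    for assignment in itertools.product([0, 1], repeat=4):
        labels = np.array(assignment)
        if labels.min() == labels.max():
            continue  # both channels must be nonempty
        same = labels[:, None] == labels[None, :]
        cut = 0.5 * adjacency.sum() - 0.5 * adjacency[same].sum()
        best = min(best, float(cut))
    return best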
def sample_segmented_users(
labels: np.ndarray,
n_users: int,
base_value: float = 0.35,
segment_strength: float = 0.9,
noise_scale: float = 0.18,
seed: int = 0,
) -> tuple[np.ndarray, np.ndarray]:
"""Sample users whose intrinsic values are tilted toward one channel."""
rng = np.random.default_rng(seed)
labels = canonicalize_labels(labels)
n_channels = int(labels.max()) + 1
preferred_channel = rng.integers(0, n_channels, size=n_users)
values = base_value + noise_scale * rng.normal(size=(n_users, labels.size))
values += segment_strength * (labels[None, :] == preferred_channel[:, None])
values = np.clip(values, 0.0, None)
return values, preferred_channel
def sample_specialist_generalist_users(
labels: np.ndarray,
n_users: int,
generalist_share: float = 0.3,
specialist_value: float = 1.3,
fringe_value: float = 0.05,
generalist_value: float = 0.9,
generalist_base: float = 0.35,
noise_scale: float = 0.05,
seed: int = 0,
) -> tuple[np.ndarray, np.ndarray]:
"""Sample a market with specialists and broad-demand generalists."""
if not 0.0 <= generalist_share <= 1.0:
raise ValueError("generalist_share must lie in [0, 1].")
rng = np.random.default_rng(seed)
labels = canonicalize_labels(labels)
n_channels = int(labels.max()) + 1
specialist_share = (1.0 - generalist_share) / n_channels
probabilities = np.full(n_channels + 1, specialist_share)
probabilities[-1] = generalist_share
user_types = rng.choice(n_channels + 1, size=n_users, p=probabilities)
values = np.zeros((n_users, labels.size))
noise = noise_scale * rng.normal(size=values.shape)
for user_index, user_type in enumerate(user_types):
if user_type < n_channels:
values[user_index] = fringe_value + specialist_value * (
labels == user_type
)
else:
values[user_index] = generalist_base + generalist_value
values = np.clip(values + noise, 0.0, None)
return values, user_types
def subset_matrix(n_channels: int) -> np.ndarray:
"""Enumerate every subset of channels, including the outside option."""
subsets = np.array(list(itertools.product([0, 1], repeat=n_channels)), dtype=int)
binary_rank = subsets @ (2 ** np.arange(n_channels - 1, -1, -1))
    # Sort by subset size first, then by binary rank, so the empty outside
    # option lands in row 0 and the full set in the last row.
    order = np.lexsort((binary_rank, subsets.sum(axis=1)))
return subsets[order]
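# Worked example (added sketch): for two channels the enumeration above puts
# the empty outside option first, then the singletons, then the full set:
# [[0, 0], [0, 1], [1, 0], [1, 1]].
def _subset_order_example() -> np.ndarray:
    """Reproduce the two-channel subset ordering inline."""
    subsets = np.array(list(itertools.product([0, 1], repeat=2)), dtype=int)
    binary_rank = subsets @ (2 ** np.arange(1, -1, -1))
    order = np.lexsort((binary_rank, subsets.sum(axis=1)))
    return subsets[order]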
def gross_subset_values(
values: np.ndarray, adjacency: np.ndarray, alpha: float, labels: np.ndarray
) -> tuple[np.ndarray, np.ndarray]:
"""Gross value of each subset of channels for every user."""
membership = build_membership(labels)
subsets = subset_matrix(membership.shape[1])
access = subsets @ membership.T
    # The quadratic form counts each undirected edge twice, hence the 0.5.
    synergy = 0.5 * alpha * np.einsum("bi,ij,bj->b", access, adjacency, access)
gross_values = values @ access.T + synergy[None, :]
return gross_values, subsets
def purchase_shares(gross_values: np.ndarray, subsets: np.ndarray, prices: np.ndarray) -> np.ndarray:
"""Share of users selecting each subset under a linear menu."""
subset_prices = subsets @ prices
net_utility = gross_values - subset_prices[None, :]
choices = np.argmax(net_utility, axis=1)
counts = np.bincount(choices, minlength=subsets.shape[0])
return counts / gross_values.shape[0]
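# Illustrative sketch (added): with a single channel priced at 0.5, a user who
# values it at 1.0 buys while one who values it at 0.2 takes the outside
# option, so each of the two menu rows attracts half the users.
def _purchase_share_example() -> np.ndarray:
    """Two-user, one-channel discrete choice under a posted price."""
    gross_values = np.array([[0.0, 1.0], [0.0, 0.2]])  # columns: outside, channel
    subsets = np.array([[0], [1]])
    prices = np.array([0.5])
    net_utility = gross_values - (subsets @ prices)[None, :]
    choices = np.argmax(net_utility, axis=1)
    return np.bincount(choices, minlength=subsets.shape[0]) / gross_values.shape[0]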
def menu_revenue(gross_values: np.ndarray, subsets: np.ndarray, prices: np.ndarray) -> float:
"""Average revenue per user from a linear menu."""
shares = purchase_shares(gross_values, subsets, prices)
subset_prices = subsets @ prices
return float(shares @ subset_prices)
def bundle_surplus_metrics(bundle_values: np.ndarray, price: float, bundle_cost: float = 0.0) -> dict:
"""Consumer, producer, and total surplus for a bundle price."""
purchases = bundle_values >= price
producer_surplus = float(((price - bundle_cost) * purchases).mean())
consumer_surplus = float(np.maximum(bundle_values - price, 0.0).mean())
total_surplus = float(((bundle_values - bundle_cost) * purchases).mean())
purchase_rate = float(purchases.mean())
return {
"producer_surplus": producer_surplus,
"consumer_surplus": consumer_surplus,
"total_surplus": total_surplus,
"purchase_rate": purchase_rate,
}
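# Worked example (added sketch): three users valuing the bundle at 1, 2, and 3
# facing a posted price of 2. Two buy, so producer surplus is 4/3 per user,
# and only the highest-value user keeps surplus, 1/3 per user on average.
def _bundle_surplus_example() -> tuple[float, float]:
    """Per-user producer and consumer surplus in a three-user market."""
    values = np.array([1.0, 2.0, 3.0])
    price = 2.0
    purchases = values >= price
    producer = float((price * purchases).mean())
    consumer = float(np.maximum(values - price, 0.0).mean())
    return producer, consumer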
def menu_surplus_metrics(
gross_values: np.ndarray,
subsets: np.ndarray,
prices: np.ndarray,
channel_costs: np.ndarray | None = None,
) -> dict:
"""Consumer, producer, and total surplus for a linear channel menu."""
n_users = gross_values.shape[0]
subset_prices = subsets @ prices
net_utility = gross_values - subset_prices[None, :]
choices = np.argmax(net_utility, axis=1)
chosen_subsets = subsets[choices]
chosen_prices = subset_prices[choices]
chosen_gross_values = gross_values[np.arange(n_users), choices]
chosen_net_utility = net_utility[np.arange(n_users), choices]
if channel_costs is None:
chosen_costs = np.zeros(n_users)
else:
chosen_costs = chosen_subsets @ channel_costs
producer_surplus = float(np.mean(chosen_prices - chosen_costs))
consumer_surplus = float(np.mean(chosen_net_utility))
total_surplus = float(np.mean(chosen_gross_values - chosen_costs))
purchase_rate = float(np.mean(chosen_subsets.sum(axis=1) > 0))
average_channels = float(np.mean(chosen_subsets.sum(axis=1)))
return {
"producer_surplus": producer_surplus,
"consumer_surplus": consumer_surplus,
"total_surplus": total_surplus,
"purchase_rate": purchase_rate,
"average_channels_purchased": average_channels,
}
def optimize_bundle_price(bundle_values: np.ndarray) -> PricingSolution:
"""Exact monopoly bundle price on a finite user sample."""
candidates = np.unique(bundle_values)
    # Revenue at each candidate price is the price times the share of buyers;
    # with step demand, the optimum is attained at some user's realized value.
    revenue = candidates * (bundle_values[:, None] >= candidates[None, :]).mean(axis=0)
best_index = int(np.argmax(revenue))
best_price = float(candidates[best_index])
best_revenue = float(revenue[best_index])
    # Shares over the two menu rows: bundle buyers and the outside option.
    buyer_share = float((bundle_values >= best_price).mean())
    shares = np.array([buyer_share, 1.0 - buyer_share])
    return PricingSolution(prices=np.array([best_price]), revenue=best_revenue, shares=shares)
def optimize_linear_prices(
gross_values: np.ndarray,
subsets: np.ndarray,
grid_size: int = 41,
) -> PricingSolution:
"""Optimize linear channel prices with a dense grid search."""
n_channels = subsets.shape[1]
if n_channels > 3:
raise ValueError("Grid search is only implemented for up to three channels.")
full_subset_index = int(np.argmax(subsets.sum(axis=1)))
upper_bound = float(gross_values[:, full_subset_index].max())
price_grids = [np.linspace(0.0, upper_bound, grid_size) for _ in range(n_channels)]
best_prices = np.zeros(n_channels)
best_shares = np.zeros(subsets.shape[0])
best_revenue = -np.inf
for candidate in itertools.product(*price_grids):
prices = np.array(candidate, dtype=float)
revenue = menu_revenue(gross_values, subsets, prices)
if revenue > best_revenue:
best_revenue = revenue
best_prices = prices.copy()
best_shares = purchase_shares(gross_values, subsets, prices)
return PricingSolution(prices=best_prices, revenue=float(best_revenue), shares=best_shares)
def representative_user_experiment(
seed: int = 7,
alpha: float = 0.75,
) -> pd.DataFrame:
"""Compare partitions in the representative-user weighted cut setting."""
adjacency, true_labels = sample_weighted_block_graph(
cluster_sizes=[4, 4],
w_in=1.0,
w_out=0.24,
jitter=0.12,
seed=seed,
)
values = np.full(adjacency.shape[0], 0.6)
spectral_labels, _ = spectral_partition(adjacency, n_channels=2, seed=seed)
random_labels = random_partition(adjacency.shape[0], n_channels=2, seed=seed + 1)
optimal_labels, _ = best_cut_partition(adjacency, n_channels=2)
rows = []
candidates = {
"true blocks": true_labels,
"spectral": spectral_labels,
"random": random_labels,
"best cut": optimal_labels,
}
for name, labels in candidates.items():
bundle_revenue = bundle_revenue_representative(values, adjacency, alpha)
channel_revenue = channel_revenue_standalone(values, adjacency, alpha, labels)
gap, cut_loss = representative_user_gap_identity(values, adjacency, alpha, labels)
rows.append(
{
"partition": name,
"within_weight": within_weight(adjacency, labels),
"cut_weight": cut_weight(adjacency, labels),
"bundle_producer_surplus": bundle_revenue,
"bundle_consumer_surplus": 0.0,
"bundle_total_surplus": bundle_revenue,
"channel_producer_surplus": channel_revenue,
"channel_consumer_surplus": gap,
"channel_total_surplus": channel_revenue + gap,
"revenue_gap": gap,
"alpha_cut_weight": cut_loss,
}
)
table = pd.DataFrame(rows).sort_values(["cut_weight", "partition"]).reset_index(drop=True)
return table
def heterogeneous_partition_comparison(
alpha: float = 0.1,
generalist_share: float = 0.3,
seed: int = 11,
n_users: int = 800,
grid_size: int = 41,
) -> pd.DataFrame:
"""Compare bundle and channel menus under heterogeneous demand."""
adjacency, true_labels = sample_weighted_block_graph(
cluster_sizes=[4, 4],
w_in=1.0,
w_out=0.18,
jitter=0.08,
seed=seed,
)
values, _ = sample_specialist_generalist_users(
true_labels,
n_users=n_users,
generalist_share=generalist_share,
seed=seed + 1,
)
full_access = np.ones(adjacency.shape[0])
bundle_values = values @ full_access + alpha * total_weight(adjacency)
bundle_solution = optimize_bundle_price(bundle_values)
bundle_metrics = bundle_surplus_metrics(bundle_values, price=float(bundle_solution.prices[0]))
spectral_labels, _ = spectral_partition(adjacency, n_channels=2, seed=seed)
random_labels = random_partition(adjacency.shape[0], n_channels=2, seed=seed + 2)
rows = [
{
"strategy": "bundle",
"partition": "all items",
"cut_weight": 0.0,
"price_1": bundle_solution.prices[0],
"price_2": np.nan,
"producer_surplus": bundle_metrics["producer_surplus"],
"consumer_surplus": bundle_metrics["consumer_surplus"],
"total_surplus": bundle_metrics["total_surplus"],
"purchase_rate": bundle_metrics["purchase_rate"],
"average_channels_purchased": bundle_metrics["purchase_rate"],
"revenue": bundle_metrics["producer_surplus"],
}
]
for name, labels in {
"true blocks": true_labels,
"spectral": spectral_labels,
"random": random_labels,
}.items():
gross_values, subsets = gross_subset_values(values, adjacency, alpha, labels)
solution = optimize_linear_prices(gross_values, subsets, grid_size=grid_size)
metrics = menu_surplus_metrics(gross_values, subsets, solution.prices)
rows.append(
{
"strategy": "channels",
"partition": name,
"cut_weight": cut_weight(adjacency, labels),
"price_1": solution.prices[0],
"price_2": solution.prices[1],
"producer_surplus": metrics["producer_surplus"],
"consumer_surplus": metrics["consumer_surplus"],
"total_surplus": metrics["total_surplus"],
"purchase_rate": metrics["purchase_rate"],
"average_channels_purchased": metrics["average_channels_purchased"],
"revenue": metrics["producer_surplus"],
}
)
table = pd.DataFrame(rows).sort_values(["strategy", "revenue"], ascending=[True, False])
return table.reset_index(drop=True)
def phase_diagram(
alpha_grid: list[float],
generalist_grid: list[float],
seed: int = 23,
n_users: int = 700,
grid_size: int = 35,
) -> pd.DataFrame:
"""Sweep complementarity strength and market composition."""
adjacency, base_labels = sample_weighted_block_graph(
cluster_sizes=[4, 4],
w_in=1.0,
w_out=0.2,
jitter=0.08,
seed=seed,
)
spectral_labels, _ = spectral_partition(adjacency, n_channels=2, seed=seed)
rows = []
for alpha in alpha_grid:
for generalist_share in generalist_grid:
values, _ = sample_specialist_generalist_users(
base_labels,
n_users=n_users,
generalist_share=generalist_share,
seed=seed + int(100 * alpha) + int(1000 * generalist_share),
)
bundle_values = values.sum(axis=1) + alpha * total_weight(adjacency)
bundle_solution = optimize_bundle_price(bundle_values)
bundle_metrics = bundle_surplus_metrics(
bundle_values,
price=float(bundle_solution.prices[0]),
)
gross_values, subsets = gross_subset_values(values, adjacency, alpha, spectral_labels)
channel_solution = optimize_linear_prices(gross_values, subsets, grid_size=grid_size)
channel_metrics = menu_surplus_metrics(gross_values, subsets, channel_solution.prices)
rows.append(
{
"alpha": alpha,
"generalist_share": generalist_share,
"bundle_producer_surplus": bundle_metrics["producer_surplus"],
"bundle_consumer_surplus": bundle_metrics["consumer_surplus"],
"bundle_total_surplus": bundle_metrics["total_surplus"],
"bundle_purchase_rate": bundle_metrics["purchase_rate"],
"channel_producer_surplus": channel_metrics["producer_surplus"],
"channel_consumer_surplus": channel_metrics["consumer_surplus"],
"channel_total_surplus": channel_metrics["total_surplus"],
"channel_purchase_rate": channel_metrics["purchase_rate"],
"channel_minus_bundle": (
channel_metrics["producer_surplus"] - bundle_metrics["producer_surplus"]
),
"channel_minus_bundle_total_surplus": (
channel_metrics["total_surplus"] - bundle_metrics["total_surplus"]
),
}
)
return pd.DataFrame(rows)
def _styled_results_table(table: GT) -> GT:
"""Apply a consistent visual style to Quarto tables."""
return table.tab_options(
table_width="100%",
heading_align="left",
data_row_padding="7px",
column_labels_font_weight="600",
source_notes_multiline=True,
)
def representative_results_table(results: pd.DataFrame) -> GT:
"""Format representative-user results with grouped column spanners."""
display = results.copy()
display["partition"] = display["partition"].str.title()
table = (
GT(display, rowname_col="partition")
.tab_header(
title="Representative-user cut decomposition",
subtitle=(
"Bundle and channel welfare under standalone channel pricing for "
"the same realized catalog."
),
)
.cols_label(
within_weight=md("Within-<br>channel<br>weight"),
cut_weight=md("Cut<br>weight"),
bundle_producer_surplus=md("Producer<br>surplus"),
bundle_consumer_surplus=md("Consumer<br>surplus"),
bundle_total_surplus=md("Total<br>surplus"),
channel_producer_surplus=md("Producer<br>surplus"),
channel_consumer_surplus=md("Consumer<br>surplus"),
channel_total_surplus=md("Total<br>surplus"),
revenue_gap=md("Revenue<br>gap"),
alpha_cut_weight=md("Alpha x<br>cut weight"),
)
.tab_spanner("Graph structure", columns=["within_weight", "cut_weight"])
.tab_spanner(
"Uniform bundle",
columns=[
"bundle_producer_surplus",
"bundle_consumer_surplus",
"bundle_total_surplus",
],
)
.tab_spanner(
"Channel menu",
columns=[
"channel_producer_surplus",
"channel_consumer_surplus",
"channel_total_surplus",
],
)
.tab_spanner(
"Identity check",
columns=["revenue_gap", "alpha_cut_weight"],
)
.fmt_number(
columns=[
"within_weight",
"cut_weight",
"bundle_producer_surplus",
"bundle_consumer_surplus",
"bundle_total_surplus",
"channel_producer_surplus",
"channel_consumer_surplus",
"channel_total_surplus",
"revenue_gap",
"alpha_cut_weight",
],
decimals=3,
)
.tab_source_note(
md(
"All surplus measures are reported per user. In this theorem-check "
"exercise, bundle consumer surplus is zero by construction."
)
)
)
return _styled_results_table(table)
def heterogeneous_results_table(
results: pd.DataFrame,
title: str,
subtitle: str | None = None,
) -> GT:
"""Format heterogeneous-demand results with row groups and column spanners."""
display = results.copy()
display["strategy"] = display["strategy"].map(
{
"bundle": "Uniform bundle",
"channels": "Channel menu",
}
)
display["partition"] = display["partition"].map(
{
"all items": "All items",
"true blocks": "True blocks",
"spectral": "Spectral",
"random": "Random",
}
)
display = display[
[
"strategy",
"partition",
"cut_weight",
"price_1",
"price_2",
"producer_surplus",
"consumer_surplus",
"total_surplus",
"purchase_rate",
"average_channels_purchased",
]
]
table = (
GT(display, rowname_col="partition", groupname_col="strategy")
.tab_header(title=title, subtitle=subtitle)
.cols_label(
cut_weight=md("Cut<br>weight"),
price_1=md("Price<br>1"),
price_2=md("Price<br>2"),
producer_surplus=md("Producer<br>surplus"),
consumer_surplus=md("Consumer<br>surplus"),
total_surplus=md("Total<br>surplus"),
purchase_rate=md("Purchase<br>rate"),
average_channels_purchased=md("Average channels<br>purchased"),
)
.tab_spanner("Graph", columns=["cut_weight"])
.tab_spanner("Posted prices", columns=["price_1", "price_2"])
.tab_spanner(
"Welfare per user",
columns=["producer_surplus", "consumer_surplus", "total_surplus"],
)
.tab_spanner(
"Demand outcomes",
columns=["purchase_rate", "average_channels_purchased"],
)
.fmt_number(
columns=[
"cut_weight",
"price_1",
"price_2",
"producer_surplus",
"consumer_surplus",
"total_surplus",
"average_channels_purchased",
],
decimals=3,
)
.fmt_percent(columns=["purchase_rate"], decimals=1)
.sub_missing(columns=["price_2"], missing_text="")
.tab_source_note(
md(
"Producer, consumer, and total surplus are averages per user. "
"Purchase rate is the share selecting a nonempty option."
)
)
)
return _styled_results_table(table)
def plot_phase_diagram(
results: pd.DataFrame,
value_column: str = "channel_minus_bundle",
title: str = "Channel producer surplus minus bundle producer surplus",
colorbar_label: str = "Surplus difference",
ax: plt.Axes | None = None,
) -> plt.Axes:
"""Heatmap of channel-minus-bundle outcomes across the parameter grid."""
if ax is None:
_, ax = plt.subplots(figsize=(7.0, 5.0))
pivot = results.pivot(
index="generalist_share",
columns="alpha",
values=value_column,
).sort_index(ascending=True)
    # Note: imshow with extent assumes evenly spaced alpha and generalist grids.
    image = ax.imshow(
pivot.to_numpy(),
origin="lower",
aspect="auto",
cmap="coolwarm",
extent=[
float(pivot.columns.min()),
float(pivot.columns.max()),
float(pivot.index.min()),
float(pivot.index.max()),
],
)
ax.set_xlabel("Complementarity strength alpha")
ax.set_ylabel("Generalist share")
ax.set_title(title)
plt.colorbar(image, ax=ax, label=colorbar_label)
return ax
def plot_adjacency_with_partition(
adjacency: np.ndarray, labels: np.ndarray, ax: plt.Axes | None = None
) -> plt.Axes:
"""Visualize the weighted adjacency matrix under a partition ordering."""
if ax is None:
_, ax = plt.subplots(figsize=(5.0, 5.0))
order = np.argsort(labels, kind="stable")
sorted_matrix = adjacency[np.ix_(order, order)]
ax.imshow(sorted_matrix, cmap="Blues")
ax.set_xticks([])
ax.set_yticks([])
ax.set_title("Adjacency ordered by channel labels")
return ax
def plot_intro_graphs(seed: int = 0) -> tuple[plt.Figure, np.ndarray]:
"""Draw stylized dense and clustered complementarity graphs."""
dense_adjacency = sample_binary_graph(n_items=18, edge_probability=0.32, seed=seed)
clustered_adjacency, clustered_labels = sample_binary_block_graph(
cluster_sizes=[9, 9],
p_in=0.68,
p_out=0.06,
seed=seed + 1,
)
figure, axes = plt.subplots(1, 2, figsize=(11.0, 4.8))
dense_graph = nx.from_numpy_array(dense_adjacency)
clustered_graph = nx.from_numpy_array(clustered_adjacency)
dense_positions = nx.spring_layout(dense_graph, seed=seed, k=0.42)
rng = np.random.default_rng(seed + 2)
clustered_positions: dict[int, tuple[float, float]] = {}
centers = [(-1.1, 0.0), (1.1, 0.0)]
for cluster, center in enumerate(centers):
members = np.where(clustered_labels == cluster)[0]
angles = np.linspace(0.0, 2.0 * np.pi, len(members), endpoint=False)
for member, angle in zip(members, angles):
radius = 0.52 + 0.04 * rng.normal()
x_coord = center[0] + radius * np.cos(angle) + 0.05 * rng.normal()
y_coord = center[1] + radius * np.sin(angle) + 0.05 * rng.normal()
clustered_positions[int(member)] = (float(x_coord), float(y_coord))
nx.draw_networkx_edges(
dense_graph,
pos=dense_positions,
ax=axes[0],
width=1.0,
edge_color="#577590",
alpha=0.28,
)
nx.draw_networkx_nodes(
dense_graph,
pos=dense_positions,
ax=axes[0],
node_size=110,
node_color="#2a9d8f",
edgecolors="#264653",
linewidths=0.6,
)
axes[0].set_title("Dense complementarity hairball")
axes[0].set_axis_off()
edge_colors = [
"#8d99ae" if clustered_labels[i] == clustered_labels[j] else "#c9ada7"
for i, j in clustered_graph.edges()
]
edge_widths = [
1.1 if clustered_labels[i] == clustered_labels[j] else 0.9
for i, j in clustered_graph.edges()
]
nx.draw_networkx_edges(
clustered_graph,
pos=clustered_positions,
ax=axes[1],
width=edge_widths,
edge_color=edge_colors,
alpha=0.45,
)
nx.draw_networkx_nodes(
clustered_graph,
pos=clustered_positions,
ax=axes[1],
node_size=110,
node_color=["#e07a5f" if label == 0 else "#3d405b" for label in clustered_labels],
edgecolors="#1f1f1f",
linewidths=0.6,
)
axes[1].set_title("Clustered catalog with weak cross-links")
axes[1].set_axis_off()
figure.tight_layout()
return figure, axes
def main() -> None:
"""Run a compact console summary of the main experiments."""
rep = representative_user_experiment()
print("Representative-user cut decomposition")
print(rep.round(4).to_string(index=False))
hetero = heterogeneous_partition_comparison()
print("\nHeterogeneous-user comparison")
print(hetero.round(4).to_string(index=False))
alpha_grid = [0.0, 0.05, 0.1, 0.15, 0.2, 0.25]
generalist_grid = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
diagram = phase_diagram(alpha_grid=alpha_grid, generalist_grid=generalist_grid)
winners = (diagram["channel_minus_bundle"] > 0).mean()
print("\nShare of sweep where channels beat the bundle:", round(float(winners), 4))
if __name__ == "__main__":
main()