Approximating Nash Social Welfare for asymmetric agents with Rado-valuations


Joint work with Jugal Garg and László Végh.




Edin Husić

Allocation problems


Goods $$\mathcal{G}$$ and agents $$A$$.

$$v_i : 2^{\mathcal{G}} \to \mathbb{R}_+$$ is the valuation function of agent $$i$$ s.t.:

  • $$v_i(\emptyset ) = 0$$.
  • $$v_i$$ is monotone.

$$m:=|\mathcal{G}|$$, $$n:=|A|$$, and assume $$m\ge n$$.



$$$\max_{x}\sum_{i \in A} v_{i}(x_i)$$$

The Social Welfare problem:

  • Trivial for linear valuations: $$v_i(x_i) = \sum_{j \in \mathcal{G}} u_{ij} x_{ij}$$.
  • Solvable in polynomial time for gross substitutes valuations.
  • NP-hard for submodular valuations, but an $$e/(e-1)$$-approximation algorithm is known.
    This is optimal in the value-oracle model. [Vondrák, STOC '08]
  • $$2$$-approximation algorithm for subadditive valuations. [Feige, STOC '06]

Demand bundle: $$D_{v_i}(p) = \text{argmax}\{v_i(X) - p(X) : X \subseteq \mathcal{G}\}$$ for prices $$p$$, where $$p(X) = \sum_{j \in X} p_j$$.

Valuation $$v_i$$ is gross substitutes if: for all prices $$q\ge p$$ and every $$X\in D_{v_i}(p)$$,
there exists $$Y \in D_{v_i}(q)$$ such that $$Y \supseteq \{j \in X : p_j = q_j\}$$.

Read: $$Y$$ contains all elements of $$X$$ whose price did not increase, i.e., we keep demanding the goods whose prices are unchanged.


linear $$\subset$$ gross substitutes $$\subset$$ submodular $$\subset$$ subadditive
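As a sanity check on the definition above, the gross substitutes condition can be tested by brute force over a finite price grid. This is a toy illustration, not from the talk; the function names, the grid, and both example valuations are made up for exposition:

```python
from itertools import combinations, product

def demand_sets(v, goods, p):
    """All bundles X of goods maximizing v(X) - p(X)."""
    best, argmax = None, []
    for r in range(len(goods) + 1):
        for X in combinations(goods, r):
            util = v(frozenset(X)) - sum(p[j] for j in X)
            if best is None or util > best:
                best, argmax = util, [frozenset(X)]
            elif util == best:
                argmax.append(frozenset(X))
    return argmax

def is_gross_substitutes(v, goods, grid=(0, 1, 2, 4)):
    """Check the GS condition for all price vectors on a finite grid:
    for q >= p and X demanded at p, some Y demanded at q must contain
    every j in X whose price did not increase."""
    all_prices = [dict(zip(goods, vec)) for vec in product(grid, repeat=len(goods))]
    for p in all_prices:
        for q in all_prices:
            if not all(q[j] >= p[j] for j in goods):
                continue
            for X in demand_sets(v, goods, p):
                kept = {j for j in X if p[j] == q[j]}
                if not any(kept <= Y for Y in demand_sets(v, goods, q)):
                    return False
    return True

# Unit-demand valuations are gross substitutes:
goods = ('a', 'b')
unit_demand = lambda S: max([0] + [{'a': 3, 'b': 2}[j] for j in S])
print(is_gross_substitutes(unit_demand, goods))   # True

# A complementarity (v(ab) > v(a) + v(b)) violates GS:
complements = lambda S: 3 if len(S) == 2 else 0
print(is_gross_substitutes(complements, goods))   # False
```

The complements example fails at $$p=(1,1)$$, $$q=(1,4)$$: at $$p$$ only $$\{a,b\}$$ is demanded, but at $$q$$ the agent drops both goods, including $$a$$ whose price was unchanged.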



$$$\max_{x} \min_{i \in A} v_{i}(x_i)$$$

The Santa Claus problem.

Open: is there a constant-factor approximation algorithm for linear valuations?

Best known guarantee:
  • $$\frac{1}{\sqrt{n} \log^3 n}$$ [Asadpour and Saberi, STOC '10]

$$$\max_{x}\left( \prod_{i \in A} v_{i}(x_i) \right)^{1/n}$$$

The Nash Social Welfare (NSW) problem.

Nash-bargaining problems; proportional fairness in networks; CEEI.

Properties:
  • Scale-freeness.
  • A natural compromise between fairness and efficiency.
  • NP-hard to compute, already for two agents with identical linear valuations.


Reduction from SubsetSum: given $$\mathcal{G} \subset \mathbb{R}$$, decide if there is a partition of $$\mathcal{G}$$ into $$A, B$$ such that $$$\sum_{a \in A} a = \sum_{b \in B} b.$$$
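The reduction can be checked numerically: for two agents whose identical linear valuations are given by the numbers, the NSW $$\sqrt{s\,(\text{total}-s)}$$ reaches $$\text{total}/2$$ exactly when a balanced partition exists. A brute-force sketch (the function name and instances are illustrative):

```python
from itertools import product

def best_nsw_two_agents(numbers):
    """Max NSW sqrt(sum(A) * sum(B)) over all partitions (A, B) of
    `numbers`, for two agents with identical linear valuations."""
    total = sum(numbers)
    best = 0
    for mask in product((0, 1), repeat=len(numbers)):
        s = sum(x for x, side in zip(numbers, mask) if side)
        best = max(best, s * (total - s))
    return best ** 0.5

print(best_nsw_two_agents([3, 1, 1, 2, 2, 1]))  # 5.0  (total 10, split 5/5)
print(best_nsw_two_agents([1, 1, 3]))           # ~2.449 < 2.5: no balanced split
```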

Timeline

  • $$2 e^{1/e} \approx 2.895$$-app. algorithm for linear valuations

    [Cole and Gkatzelis, STOC '15]
    (spending restricted market equilibrium) = SR
  • $$e^{1/e}$$-app. algorithm for linear valuations

    real stable polynomials [Anari, Gharan, Saberi, Singh, ITCS '17]
    price-envy-freeness [Barman, Krishnamurthy, Vaish, EC '18]
  • $$2.404$$-app. algorithm for budget additive linear valuations

    SR-equilibrium [Garg, Hoefer, Mehlhorn, SODA '18]
  • $$2$$- and $$e^{2}$$-app. algorithms for separable piecewise linear concave valuations

    SR-equilibrium and real stable polynomials [Anari, Mai, Gharan and Vazirani SODA '18]
  • $$e^{1/e}$$-app. algorithm for budget additive separable piecewise linear concave

    price-envy-freeness [Cheung et al. FSTTCS '18]
  • $$O(n \log n)$$-app. algorithm for submodular valuations

    matching, unmatching, rematching [Garg, Kulkarni, Kulkarni, SODA '20]

This work: a $$256 e^{3/e}$$-approximation algorithm for agents with Rado valuations



Given: a bipartite graph $$(\mathcal{G}, T; E)$$ with a weight function $$c : E \to \mathbb{R}_+$$ on the edges; and a matroid $$\mathcal{M} = (T, \mathcal{I})$$.

For $$S\subseteq \mathcal{G}$$, the Rado valuation $$v(S)$$ is defined as the weight of a maximum-weight matching $$M$$ s.t.:
  • $$\delta_{\mathcal{G}}(M) \subseteq S$$,
  • $$\delta_{T}(M) \in \mathcal{I}$$,
where $$\delta_{\mathcal{G}}(M)$$ and $$\delta_{T}(M)$$ denote the sets of endpoints of $$M$$ in $$\mathcal{G}$$ and $$T$$, respectively.




For $$\mathcal{I} = 2^{T}$$: $$$ v(S) = \text{max weight of a matching } M \text{ with } \delta_{\mathcal G}(M)\subseteq S. $$$
The so-called assignment valuations or OXS valuations.

If $$E$$ is a perfect matching (identifying each good with an element of $$T$$):
$$$ v(S) = \text{max weight of an independent subset of } S. $$$
The weighted matroid rank function.
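A brute-force evaluation of a Rado valuation on a toy instance may help fix the definition. All names and the instance are made up; a real algorithm would use weighted matroid intersection rather than enumeration:

```python
from itertools import combinations

def rado_value(S, edges, weight, independent):
    """v(S): max weight of a matching M with all left endpoints in S and
    the set of right endpoints independent in the matroid (brute force)."""
    best = 0
    for r in range(len(edges) + 1):
        for M in combinations(edges, r):
            left = [g for g, _ in M]
            right = [t for _, t in M]
            if (len(set(left)) == r and len(set(right)) == r  # M is a matching
                    and set(left) <= S                        # delta_G(M) in S
                    and independent(frozenset(right))):       # delta_T(M) in I
                best = max(best, sum(weight[e] for e in M))
    return best

# Toy instance: goods {g1, g2, g3}, T = {t1, t2}.
edges = [('g1', 't1'), ('g2', 't1'), ('g2', 't2'), ('g3', 't2')]
weight = {('g1', 't1'): 4, ('g2', 't1'): 3, ('g2', 't2'): 2, ('g3', 't2'): 1}

rank1 = lambda R: len(R) <= 1    # uniform matroid of rank 1 on T
print(rado_value({'g1', 'g2', 'g3'}, edges, weight, rank1))  # 4
print(rado_value({'g2', 'g3'}, edges, weight, rank1))        # 3

free = lambda R: True            # I = 2^T: assignment (OXS) valuation
print(rado_value({'g1', 'g2', 'g3'}, edges, weight, free))   # 6
```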


Assignment valuations and weighted matroid rank functions are both important subclasses of gross substitutes valuations. Rado valuations are a common generalisation of the two.
Rado valuations $$\subseteq$$ gross substitutes; whether equality holds is open.

Relaxation?

$$$ \begin{aligned} &\text{max} \quad \left( \prod_{i \in A} v_{i}(x_i) \right)^{1/n}\\ &\begin{aligned} \text{s.t.: } \quad && \sum_{i\in A} x_{ij} &\le 1 && \forall j \in \mathcal{G} \\ &&x&\ge 0 \,. \end{aligned} \end{aligned} $$$ Unbounded integrality gap, e.g. for $$n$$ agents with identical linear valuations, one good of value $$2^n$$, and $$n-1$$ goods of value $$1$$:
  • Fractional: splitting every good equally gives each agent value at least $$\frac{2^n}{n}$$, so the NSW is $$\left(\prod_{i \in A} \frac{2^n}{n}\right)^{1/n} \ge \frac{2^n}{n}$$.
  • Integral: the best allocation achieves $$\left( {2^n} \cdot 1^{n-1} \right)^{1/n} = 2$$.
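The gap can be verified numerically for small $$n$$ on an instance with one good of value $$2^n$$ and $$n-1$$ goods of value $$1$$, all agents having the same linear valuation (a brute-force sketch; the function name is illustrative):

```python
from itertools import product

def best_integral_nsw(n, values):
    """Best NSW over all integral allocations of goods with the given
    values to n agents with identical linear valuations (brute force)."""
    best = 0.0
    for assign in product(range(n), repeat=len(values)):
        bundle = [0.0] * n
        for value, agent in zip(values, assign):
            bundle[agent] += value
        prod = 1.0
        for b in bundle:
            prod *= b
        best = max(best, prod ** (1.0 / n))
    return best

n = 3
values = [2 ** n] + [1] * (n - 1)        # one good of value 2^n, n-1 unit goods
print(round(best_integral_nsw(n, values), 6))   # 2.0 = (2^n * 1 * 1)^(1/n)
print(sum(values) / n)                   # equal split: each agent >= 2^n / n
```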

Mixed integer relaxation (for a set $$H \subseteq \mathcal{G}$$ of goods, to be chosen in Phase I)

$$$ \begin{aligned} &\text{max} \quad \left( \prod_{i \in A} v_{i}(x_i) \right)^{1/n}\\ &\begin{aligned} \text{s.t.: } \quad && \sum_{i\in A} x_{ij} &\le 1 && \forall j \in \mathcal{G} \\ &&x_{ij} &\in \{0,1\} && \forall j \in H, \forall i\\ &&x&\ge 0 \,. \end{aligned} \end{aligned} $$$

Upper bound: the optimum $$OPT_H$$ of this relaxation is at least the optimal NSW value.

Approach:

Phase I: finding the set $$H$$.

Let $$\sigma$$ be a matching maximizing $$$\left( \prod_{i \in A} v_{i \sigma(i)} \right)^{1/n}$$$ where $$v_{ij} = v_i(\{j\})$$.

$$H = \sigma(A)$$: the most preferred items.

Issue: fixing $$\sigma$$ immediately $$\implies$$ no constant approximation; e.g. for two agents, keeping the fixed matching may only give $$$\sqrt{1 \cdot 2U} < \sqrt{U \cdot U}\,.$$$
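Phase I can be sketched by enumeration for tiny instances; maximizing the product is equivalent to a maximum-weight matching with edge weights $$\log v_{ij}$$, which is how it is actually computed. The helper below is illustrative:

```python
from itertools import permutations

def phase_one_matching(v):
    """v[i][j] = v_i({j}).  Brute-force matching sigma maximizing
    (prod_i v[i][sigma(i)])^(1/n); equivalently a max-weight matching
    with edge weights log v[i][j]."""
    n, m = len(v), len(v[0])
    best, best_sigma = -1.0, None
    for sigma in permutations(range(m), n):   # injective maps A -> G
        prod = 1.0
        for i in range(n):
            prod *= v[i][sigma[i]]
        if prod > best:
            best, best_sigma = prod, sigma
    return best_sigma, best ** (1.0 / n)

v = [[9, 1, 1],
     [1, 4, 1]]
sigma, nsw = phase_one_matching(v)
print(sigma, nsw)   # (0, 1) 6.0  -- H consists of goods 0 and 1
```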

Phase II: reduction to a more structured program.

Matching relaxation $$$\begin{aligned} &\text{max} \quad \left( \prod_{i \in A} (v_{i}(x'_i) + v_{i \sigma(i)}) \right)^{1/n}\\ &\begin{aligned} \text{s.t.: } \quad && \sum_{i\in A} x'_{ij} &\le 1 && \forall j \in \mathcal{G} \setminus H \\ &&\sigma : A \to H &\text{ is a }&&\text{matching.}\\ &&x'&\ge 0 \,. \end{aligned} \end{aligned} $$$
Mixed integer relaxation $$$\begin{aligned} &\text{max} \quad \left( \prod_{i \in A} v_{i}(x_i) \right)^{1/n}\\ &\begin{aligned} \text{s.t.: } \quad && \sum_{i\in A} x_{ij} &\le 1 && \forall j \in \mathcal{G} \\ &&x_{ij} &\in \{0,1\} && \forall j \in H, \forall i \in A\\ &&x&\ge 0 \,. \end{aligned} \end{aligned} $$$

Assume the $$v_i$$ are subadditive. We have $$$ \overline {OPT}_H \ge \frac{1}{e^{1/e}} OPT_H. $$$ Let $$(x',\sigma)$$ be an $$\alpha$$-approximate optimal solution of Matching relaxation. Then $$\overline{NSW}(x',\sigma) \ge \frac{1}{{2\alpha e^{1/e}}} OPT_H$$.

It suffices to approximate Matching relaxation by losing a factor $$2 e^{1/e}$$.

Phase III: approximating Matching relaxation.

Matching relaxation $$$\begin{aligned} &\text{max} \quad \left( \prod_{i \in A} (v_{i}(x'_i) + v_{i \sigma(i)}) \right)^{1/n}\\ &\begin{aligned} \text{s.t.: } \quad && \sum_{i\in A} x'_{ij} &\le 1 && \forall j \in \mathcal{G} \setminus H \\ &&\sigma : A \to H &\text{ is a }&&\text{matching.}\\ &&x'&\ge 0 \,. \end{aligned} \end{aligned} $$$
Assume $$v_i$$ are monotone and concave. There is a polynomial-time algorithm that finds $$(x^*,\pi)$$ such that $$$ \overline{NSW}(x^*, \pi) \ge \frac{1}{2} \overline {OPT}\,.$$$ If for $$x'$$ holds $$v_i(x'_i) \ge \frac{1}{\alpha} v_i(x^*_i)$$ then $$ \overline{NSW}(x', \pi) \ge \frac{1}{2\alpha} \overline {OPT}$$.
Solve:
$$$\begin{aligned} x^*=&\text{argmax} \quad \left( \prod_{i \in A} v_{i}(x'_i)\right)^{1/n}\\ &\begin{aligned} \text{s.t.: } \quad && \sum_{i\in A} x'_{ij} &\le 1 && \forall j \in \mathcal{G} \setminus H \\ &&x'&\ge 0 \,. \end{aligned} \end{aligned} $$$
Rematch:
$$$\begin{aligned} \pi =&\text{argmax} \quad \left( \prod_{i \in A} (v_{i}(x^*_i) + v_{i \sigma(i)}) \right)^{1/n}\\ &\text{s.t.: } \quad \sigma : A \to H \text{ is a matching.} \end{aligned} $$$
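The Rematch step can likewise be sketched by enumeration on a toy instance (the names and numbers are hypothetical; in the algorithm this is again a maximum-weight matching in the log domain):

```python
from itertools import permutations

def rematch(frac_value, v_H):
    """Given v_i(x*_i) (frac_value[i]) and v_H[i][j] = v_i({j}) for j in H,
    return the matching pi maximizing prod_i (frac_value[i] + v_H[i][pi(i)])."""
    n = len(frac_value)
    best, best_pi = -1.0, None
    for pi in permutations(range(n)):
        prod = 1.0
        for i in range(n):
            prod *= frac_value[i] + v_H[i][pi[i]]
        if prod > best:
            best, best_pi = prod, pi
    return best_pi

# Agent 0 is already well served fractionally, so the valuable item of H
# gets rematched to agent 1:
print(rematch([10.0, 0.0], [[5, 1], [5, 1]]))   # (1, 0)
```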

Phase V: rounding a sparse solution of Matching relaxation.


Assume the $$v_i$$ are subadditive. Let $$(x',\pi)$$ be a feasible solution of Matching relaxation such that $$$ |\text{supp}(x')| \le 2n + m \,.$$$
Then in polynomial time we can find a matching $$\tau: A\to H$$ s.t.: $$$ \overline {NSW}(x'', \tau) \ge \frac{1}{32 (e^{1/e})^2}\overline {NSW}(x', \pi)\,,$$$ where $$x''$$ is an integral allocation of $$\mathcal{G}\setminus H$$ and $$\text{supp}(x'') \subseteq \text{supp}(x')$$.

The rounding combines $$\sigma$$ and $$\pi$$ to obtain $$\tau$$; it uses the optimality of $$\sigma$$.



Phase IV: finding a sparse solution.

Assume $$v_i$$ are monotone and concave. There is a polynomial-time algorithm that finds $$(x^*,\pi)$$ such that $$ \overline{NSW}(x^*, \pi) \ge \frac{1}{2} \overline {OPT}\,.$$

If for $$x'$$ holds $$v_i(x'_i) \ge \frac{1}{\alpha} v_i(x^*_i)$$ then $$ \overline{NSW}(x', \pi) \ge \frac{1}{2\alpha} \overline {OPT}$$.
We can round any feasible solution $$(x', \pi)$$ provided that $$$|\text{supp}(x')| \le 2n + m \,.$$$
  • Find an optimum $$x^*$$ of the convex program.
  • Rado valuations: find a basic feasible solution $$x^*$$ of an LP describing a subset of the optimal solutions. Counting tight constraints: $$|\text{supp}(x^*)| \le n + 2m$$.
  • Rado valuations: obtain a 2-approximate solution $$x'$$ as a basic feasible solution of another LP whose coefficients are given by $$x^*$$.
    Counting tight constraints: $$|\text{supp}(x')| \le 2n + m$$.




There is a $$256 e^{3/e}$$-approximation algorithm for the Nash Social Welfare problem for agents endowed with Rado valuations.


  • $$2 e^{1/e}$$ for the reduction to Matching relaxation,
  • $$4$$ for finding a sparse solution approximating Matching relaxation, and
  • $$32 (e^{1/e})^2$$ for rounding the sparse solution.
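Multiplying the three losses recovers the stated guarantee:

$$$ 2 e^{1/e} \cdot 4 \cdot 32\,(e^{1/e})^2 = 256\, e^{3/e} \approx 772\,. $$$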

$$$\text{Asymmetric NSW: }\max_{x}\left( \prod_{i \in A} v_{i}(x_i)^{w_i} \right)^{1/\sum_{i \in A} w_i}$$$

There is a $$256 \gamma^3$$-approximation algorithm for the asymmetric Nash Social Welfare problem for agents endowed with Rado valuations, where $$\gamma = \min \left\{\frac{W}{\log(W)}, n\right\}$$ and $$W= \max_{i\in A} w_i$$.

Existing results for the asymmetric NSW:
  • $$O(n)$$ for linear utilities.
  • $$O(n \log n)$$ for submodular utilities.
    [Garg, Kulkarni, Kulkarni SODA '20]