Hardness of Approximation Between P and NP. Aviad Rubinstein.
Communication Complexity of Approximate Nash Equilibrium
The main motivation for studying the complexity of approximate Nash equilibrium is the insight it gives about the relevance of Nash equilibrium as a predictive solution concept: if specialized algorithms cannot compute an (approximate) equilibrium, it is unreasonable to expect selfish agents to “naturally” converge to one. (See also discussions in the introduction, as well as Daskalakis et al. [2009a], Hart and Mansour [2010], Nisan [2009b].) Although extremely useful and the main focus of this book, lower bounds on computational complexity suffer from an obvious caveat: we don’t actually know how to prove unconditional computational lower bounds. All our computational lower bounds inherently rely on complexity assumptions (such as NP ≠ P or PPAD ≠ P); even though these assumptions are widely accepted by computer scientists, they make these theorems less accessible to game theorists and economists. For example, it is not clear how they relate to the uncoupled dynamics model studied in game theory [Babichenko 2012, Hart and Mas-Colell 2003, 2006].
In this part of the book, we prove unconditional lower bounds on the communication complexity of approximate Nash equilibrium. In the communication complexity model, each player knows her own utility function, and we restrict the amount of information exchanged between the players in order to agree on an approximate Nash equilibrium. The players in this model are unreasonably powerful beings with unlimited computational power. In this sense, obtaining lower bounds even in this setting is more convincing than our computational complexity results. Furthermore, our lower bounds on communication complexity translate immediately to the uncoupled dynamics model mentioned above (see also Subsection 3.1). The tradeoff is that the lower bounds we can prove in the computational model are stronger. Take two-player games with N actions each, for example. The main result in this book is that no polynomial-time algorithm can find an approximate equilibrium. In the communication complexity model, by contrast, each player can trivially send her entire N × N utility matrix; hence the best we can hope for is a polynomial lower bound.
Indeed, our communication complexity results do not directly fit into the “between P and NP” theme of this book. However, we chose to include them because they provide further evidence that real players may not converge to a Nash equilibrium. More importantly, en route to obtaining our communication complexity lower bounds, we develop a construction of a hard-to-find Brouwer fixed point. This construction will be useful in Chapters 6 and 16.
Our Results
We study both two-player games with a large number (N) of actions, and two-action games with a large number (n) of players. The trivial solution of communicating every player’s entire utility function in normal form requires O(N^2) and O(n · 2^n) communication, respectively.1 For constant approximation, no non-trivial lower bounds were previously known for the general model, and even for the restricted case of randomized query complexity (see Subsection 3.3) both settings were stated as open questions, in Fearnley et al. [2013] and in Babichenko [2016], Chen et al. [2017], Hart and Nisan [2013], respectively. For n-player games, Hart and Mansour [2010] gave an exp(n) lower bound on the communication complexity of exact Nash equilibrium,2 and even for an approximation parameter of 1/poly(n) the problem was open [Nisan 2009a].
For two-player games, we prove a polynomial lower bound on the communication complexity:
Theorem 3.1
There exists a constant ϵ > 0 such that the randomized communication complexity (BPP^cc) of ϵ-Nash equilibrium in two-player N × N games is at least N^ϵ.
For n-player games, we consider a two-party communication problem where the set of players [n] is divided into two disjoint subsets [n] = nA ⊍ nB. Alice holds the utilities of the players in nA, and Bob holds the utilities of the players in nB. In particular, this communication problem is easier than the n-party communication problem where each player holds his own utility function. Furthermore, our negative result holds for the notion of weak approximate Nash equilibrium [Babichenko et al. 2016], which in particular implies the same negative result for the standard notion of approximate Nash equilibrium (see also Definition 2.3).
Theorem 3.2
There exists a constant ϵ > 0 such that the randomized communication complexity (BPP^cc) of (ϵ, ϵ)-weak approximate Nash equilibrium in n-player binary-action games is at least 2^{ϵn}.
3.1 Uncoupled Dynamics
An underlying assumption of the Nash equilibrium solution is that players correctly predict the (mixed) action of their opponents (or, alternatively, correctly predict their expected payoff at each action). One justification for this problematic assumption, which appears in the seminal work of John Nash [Nash 1951], is that in some scenarios players may learn the behavior of their opponents when the game is played repeatedly. This idea led to an extensive study of learning dynamics and their convergence to Nash equilibrium (see, e.g., [Young 2004, Hart and Mas-Colell 2013, Kalai and Lehrer 1993]). One natural, and quite general, class of adaptive dynamics is that of uncoupled dynamics [Hart and Mas-Colell 2003, 2006], where it is assumed that players do not know the utilities of their opponents (but do observe their past behavior). The question of the existence of uncoupled dynamics that lead to Nash equilibrium is quite well understood [Babichenko 2012, Foster and Young 2006, Germano and Lugosi 2007, Hart and Mas-Colell 2006]. Several uncoupled dynamics that converge to approximate Nash equilibrium (or pure Nash equilibrium [Young 2009]) are known. All these dynamics are based on an exhaustive search principle: at the moment a player realizes she is acting sub-optimally, she updates her action to a random one (rather than to an optimal or better one). One criticism of these dynamics is that convergence to equilibrium may take an unreasonably long time in large games, where the exhaustive search is done over a large space. This led to the study of the rate of convergence of uncoupled dynamics. As pointed out by Conitzer and Sandholm [2004], for every solution concept (in particular, equilibrium solutions), the (randomized) communication complexity of a solution is identical (up to a logarithmic factor) to the rate of convergence to the solution by any (randomized) uncoupled dynamics. This observation initiated the study of communication complexity in games.
As mentioned above, the communication complexity, and thus also the rate of convergence of uncoupled dynamics, was known only for exact or pure Nash equilibrium. The rate of convergence of uncoupled dynamics to approximate Nash equilibrium was an open question, and given that all known positive results introduce dynamics that converge only to approximate Nash equilibrium, it is a central one. Our communication complexity results resolve this open question, yielding the following negative results for uncoupled dynamics:
Corollary 3.1
Uncoupled dynamics. There exists a constant ϵ > 0 such that any uncoupled dynamics require:
2-player. At least poly(N) rounds to converge to an ϵ-Nash equilibrium in two-player N × N games.
n-player. At least 2^{Ω(n)} rounds to converge to an ϵ-Nash equilibrium (or even an (ϵ, ϵ)-weak approximate Nash equilibrium) in n-player binary-action games.
3.2 Techniques
Proving communication complexity lower bounds for Nash equilibrium is notoriously challenging for two reasons. The first reason, as is common in hardness of Nash equilibrium in other models, is totality: there always exists at least one (exact) equilibrium, and the proof of existence induces a non-trivial (yet inefficient) algorithm for finding it. In order to construct hard instances we must carefully hide the equilibrium (we can’t just remove it), and make sure that the above algorithm is indeed inefficient for our instances.
Another reason the communication complexity of approximate equilibrium remained open for a long time is that there exist efficient non-deterministic communication protocols (polylog(N) for two-player games, poly(n) for n-player games): verification of equilibrium (exact or approximate) requires only constant communication, and approximate equilibria with small representations always exist (e.g., by Lipton et al. [2003]). Therefore, the communication complexity lower bounds for approximate equilibria that we prove in this chapter show an exponential gap between the non-deterministic and randomized (or even deterministic) communication complexity of a total search problem. We are certainly not the first to show such separations (see, e.g., [Karchmer et al. 1995, Raz and McKenzie 1999, Raz and Wigderson 1990]).3 But such separations are still not very common in communication complexity, and for a good reason: for decision problems, they are impossible! The deterministic communication complexity is upper-bounded by the product of the non-deterministic and co-non-deterministic communication complexities [Aho et al. 1983].
The first ingredient in our proof is a construction of a special continuous function f : [0, 1]n → [0, 1]n whose approximate fixed points are hard to find. The construction is inspired by that of Hirsch et al. [1989], and the main new ingredient is the use of error-correcting codes to replace the l∞ inapproximability with l2 inapproximability. The construction appears in Chapter 4. The second ingredient in our proof is the simulation theorems for randomized communication complexity due to Anshu et al. [2017] and Göös et al. [2017].
The main steps in our proofs are as follows. First, we prove a randomized query complexity hardness result for the problem of finding the end of a line in a particular constant-degree graph. Then we use a simulation theorem of Anshu et al. [2017] and Göös et al. [2017] to “lift” this query complexity hardness result to randomized communication complexity hardness. We use the construction in Chapter 4 to embed this line as a continuous Lipschitz function f : [0, 1]n → [0, 1]n. Finally, we build on ideas from Babichenko [2016], McLennan and Tourky [2005], and Shmaya [2012] to construct a two-player (respectively n-player) “imitation game” that simulates the short communication protocol for the computation of f, as well as a fixed point verification. In particular, every (approximate) Nash equilibrium of the game corresponds to an approximate fixed point of f, which in turn corresponds to an end of a line.
Since in this chapter we are proving unconditional intractability results, we have the privilege of reasoning about an explicit distribution of hard instances. In particular, it suffices to begin with the END-OF-THE-LINE special case of the END-OF-A-LINE problem, where the graph consists of just one line—and we want to find the end of that line. This hardly changes the proof, but it makes the notation a little simpler. For example, it suffices to prove that a decision problem (find the most significant bit of the end of the line) is hard. Furthermore, our hardness now holds for the interesting case where the game has a unique Nash equilibrium.
3.3 Additional Related Literature
Pure Nash Equilibrium. The communication complexity of pure Nash equilibrium has been studied before: in two-player N × N games it is poly(N) [Conitzer and Sandholm 2004], and in n-player games it is exp(n) [Hart and Mansour 2010].
Approximation Protocols. For two-player N × N games and ϵ ≈ 0.382, Czumaj et al. [2015] show that polylog(N) communication is sufficient for computing an ϵ-approximate Nash equilibrium (improving over a protocol for ϵ ≈ 0.438 due to Goldberg and Pastink [2014]).
Query Complexity. There are several interesting results on the more restricted model of query complexity of approximate Nash equilibria, where the algorithm is assumed to have black-box access to the normal form representation of the utility function. Note that our communication complexity lower bounds immediately extend to this model as well.
Hart and Nisan [2013] prove that any deterministic algorithm requires an exponential number of queries to compute any ϵ-well-supported Nash equilibrium, and even any ϵ-correlated equilibrium. Babichenko [2016] showed that any randomized algorithm requires an exponential number of queries to find an ϵ-well-supported Nash equilibrium. Chen et al. [2017] extended Babichenko’s result to an almost-exponential (2^{Ω(n/log n)}) lower bound on the query complexity of ϵ-approximate Nash equilibrium. Note that our lower bound here is not only quantitatively stronger (saving the log n factor), but also holds for the more general notion of weak approximate Nash equilibrium, and in the more general model of communication complexity.
Goldberg and Roth [2016] give a polynomial upper bound on the query complexity of ϵ-WSNE for any family of games that have any concise representation. This result is to be contrasted with (a) Babichenko’s query complexity lower bound, which uses a larger family of games, and (b) our lower bounds on the computational complexity of succinctly represented games (Theorem 5.1).
A much older, yet very interesting and closely related, result is that of Hirsch et al. [1989]. They show that any deterministic algorithm for computing a Brouwer fixed point in the oracle model must make a number of queries for values of the function that is exponential in the dimension n and in the approximation parameter ϵ. Our construction here builds upon and improves over Hirsch et al. [1989] by working with the l2-norm instead of the l∞-norm.
Correlated Equilibrium. For the related notion of correlated equilibrium, in n-player games with a constant number of actions, it is known that even exact correlated equilibrium can be computed using only poly(n) communication (see [Hart and Mansour 2010, Jiang and Leyton-Brown 2015, Papadimitriou and Roughgarden 2008]). Interestingly, for exact correlated equilibria, there is an exponential gap between the above communication protocol and the query complexity lower bound of Babichenko and Barman [2015] and Hart and Nisan [2013]. Goldberg and Roth [2016] characterize the query complexity of approximate coarse correlated equilibrium in games with many players. Further discussion of correlated equilibria appears in Section 3.6.
Communication Complexity of Finding Fixed Points. For the related problem of finding a fixed point, Roughgarden and Weinstein [2016] study the communication complexity of an approximate fixed point of the composition of two functions. Namely, Alice holds a Lipschitz function f : A → B and Bob holds a Lipschitz function g : B → A, where A and B are compact convex sets, and their goal is to compute a fixed point of the composition g ◦ f. Roughgarden and Weinstein prove that the following version of this problem is communicationally hard: find an approximate fixed point of g ◦ f on a grid of A, when it is promised that such an approximate fixed point on the grid exists (so the problem is not total).
Complexity of Equilibrium and Price of Anarchy. As discussed earlier, the main motivation for studying the (communication) complexity of Nash equilibrium is understanding its relevance as a predictive solution concept. This is a good place to mention a recent work of Roughgarden [2014], which highlights another important motivation for studying the complexity of Nash equilibrium: understanding the quality of equilibria. The Price of Anarchy (PoA) of a game is the ratio between the social welfare (sum of players’ utilities) in an optimum strategy profile, and the social welfare in the worst Nash equilibrium of that game. Roughgarden [2014] provides the following widely applicable recipe for lower bounds on PoA: if a Nash equilibrium can be found efficiently (in particular, via the non-deterministic protocol due to Lipton et al. [2003]), but approximating the optimal social welfare requires a higher communication complexity (even for non-deterministic protocols, e.g., by reduction from set disjointness), then clearly not all Nash equilibria yield high social welfare.
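In symbols (our notation), with ui the utility of player i and W(s) the social welfare of a strategy profile s, the definition above reads:

```latex
W(s) \;=\; \sum_{i} u_i(s),
\qquad
\mathrm{PoA} \;=\; \frac{\max_{s} W(s)}{\min_{s \in \mathrm{NE}} W(s)} .
```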
3.4 Proof Overview
The formal proofs appear in Section 3.5. As mentioned in the Introduction, the proof consists of four main steps; below we present the main ideas of each step.
Query Complexity of End-of-the-Line
Our proof starts with the following query complexity hardness result (Lemma 3.2): there exists a constant-degree graph G = (V, E) with 2^{Θ(n)} vertices, such that finding the end of a line in G requires 2^{Ω(n)} queries. In fact, we prove the hardness result for a directed graph G where each vertex has outgoing and incoming degree 2. Therefore, the successor and predecessor of each vertex are binary variables. In particular, for each v ∈ V, the information about its role in the line can be represented using only three bits, which we denote I(v) ≜ (T(v), P(v), S(v)) ∈ {0, 1}^3:
(a) Whether the line goes through v, which is denoted by T(v),
(b) Who is the successor of v (if v is on the line), which is denoted by S(v),
(c) Who is the predecessor of v (if v is on the line), which is denoted by P(v).
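As a toy illustration of this interface (our own encoding; we also use the notational abuse, introduced formally in Section 3.5.1, of letting P(v) and S(v) denote the neighboring vertices themselves), a line and its local-information oracle I can be sketched as follows:

```python
def make_oracle(line):
    """I(v) = (T(v), P(v), S(v)): does the line pass through v, and if so,
    what are its predecessor and successor (None at the line's endpoints)."""
    nxt = {line[i]: line[i + 1] for i in range(len(line) - 1)}
    prv = {w: v for v, w in nxt.items()}
    on_line = set(line)

    def I(v):
        if v not in on_line:
            return (0, None, None)
        return (1, prv.get(v), nxt.get(v))

    return I

def find_end(vertices, I):
    """Brute-force search for the end: T(v*) = 1 but the successor leaves the line."""
    for v in vertices:
        T, P, S = I(v)
        if T == 1 and (S is None or I(S)[0] == 0):
            return v

line = ["a", "b", "c"]
I = make_oracle(line)
assert I("b") == (1, "a", "c")
assert find_end(["a", "b", "c", "d"], I) == "c"
```

The brute-force scan in find_end makes one query per vertex; the content of the lemma below is that no algorithm can do substantially better than this exponential query count.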
Lemma 3.1
QUERY END-OF-A-LINE; informal. Finding an end of a line with high probability requires 2^{Ω(n)} queries to I.
From Query Complexity to Communication Complexity
We use a recent simulation theorem to “lift” our randomized query complexity lower bound to a randomized communication complexity bound.
The simulated communicationally hard problem has the following form. For each v ∈ V, Alice holds a triplet of vectors αT,v, αS,v, αP,v ∈ {0, 1}^M, where M = 2^{O(n)}, and Bob holds a reasonably small input, which is just a triplet of indices βT,v, βS,v, βP,v ∈ [M]. T(v) is given by the composition T(v) = αT,v(βT,v) (and similarly for the successor and predecessor). The simulation theorem of Göös et al. [2017] and Anshu et al. [2017] now implies:
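The shape of these lifted inputs can be sketched as follows (a toy illustration with our own parameter choices; the real reduction needs M = 2^{O(n)} and the specific gadget required by the simulation theorem):

```python
import random

def lift_bit(bit, M, rng):
    """Split one bit of the oracle I into an Index-gadget instance:
    Alice gets a vector alpha in {0,1}^M, Bob gets an index beta in [M],
    and the bit is recovered as the composition alpha[beta]."""
    alpha = [rng.randrange(2) for _ in range(M)]  # Alice's input: random vector
    beta = rng.randrange(M)                       # Bob's input: random index
    alpha[beta] = bit                             # the composition equals the bit
    return alpha, beta

rng = random.Random(0)
M = 16
for bit in (0, 1):
    alpha, beta = lift_bit(bit, M, rng)
    assert alpha[beta] == bit   # neither side alone determines the bit
```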
Corollary 3.2
CC(SIMULATION END-OF-A-LINE); informal. Finding an end of a line requires 2^{Ω(n)} bits of communication.
Embedding as a Continuous Function
Our next step is to reduce the problem of finding an end of a line to that of finding a Brouwer fixed point. Here, we use the construction from Chapter 4.
We embed the vertices of the discrete graph G in the continuous space [−1, 2]^{Θ(n)}. Specifically, we embed each vertex v of G into a point xv in [−1, 2]^{Θ(n)}, and each edge (v, w) of G into a (continuous) path in [−1, 2]^{Θ(n)} that connects the corresponding points xv and xw. In particular, we construct a continuous Lipschitz function f : [−1, 2]^{Θ(n)} → [−1, 2]^{Θ(n)} such that:
1. The computation of f can be done using local information about I. Namely, for points that are close to xv, it is sufficient to know I(v) to compute f. For points that are close to the path that corresponds to the edge (v, w) but far from the points xv, xw, it is sufficient to know whether (v, w) is an edge in the line (in particular, knowing either I(v) or I(w) suffices). For points that are far from all paths (v, w), f does not depend on I at all (and thus can be computed without any communication).
2. Any (normalized) ||·||2-approximate fixed point of f can be mapped (efficiently) back to an end of some line in I.
Property 1 induces the following efficient communication protocol for computing f(x): Bob finds the vertex v such that x is close to xv, and sends βT,v, βS,v, βP,v; Alice replies with the bits αT,v(βT,v), αS,v(βS,v), αP,v(βP,v), i.e., with I(v), and they each use I(v) to locally compute f(x). (Similarly, if x is close to the path corresponding to an edge (v, w), they use a similar protocol to compute I(v) and I(w).)
By Property 2, we have:
Corollary 3.3
CC(BROUWER); informal. Finding a (normalized) ||·||2-approximate fixed point of f requires 2^{Ω(n)} bits of communication.
Two-Player Game
Naively, we would like to design a game where Alice chooses a point x ∈ [−1, 2]^{Θ(n)} (on the ϵ-grid) and Bob chooses a point y ∈ [−1, 2]^{Θ(n)} (on the ϵ-grid). Alice’s utility will be given by −||x − y||_2^2, and Bob’s utility will be given by4 −||y − f(x)||_2^2. Then, by applying arguments similar to those in Babichenko [2016], McLennan and Tourky [2005], Rubinstein [2016], and Shmaya [2012], we can deduce that every approximate Nash equilibrium corresponds to an approximate fixed point, and thus also to an end of a line.
However, the above idea is obviously incorrect because Bob’s utility depends on f, whereas in the communication problem his utility should depend on the βs only. Our key idea is to use the fact that f can be computed locally to design a somewhat similar game where similar phenomena to those in the “naive” game will occur in approximate equilibria.
Bob doesn’t know f, but to compute f(x) he should only know the local information about the vertex (or vertices) that corresponds to x. We incentivize Alice and Bob to combine their private information about the corresponding vertex (or vertices) by the following utilities structure.
• Alice’s first component of utility is given by −||x − y||_2^2. As in the “naive” game, in any approximate Nash equilibrium Alice plays points in the ϵ-cube of the ϵ-grid that contains the expected point E[y] with probability close to one.
• Bob tries to guess the vertex v (or the vertices v, w) that corresponds to the point x. Since x (almost always) belongs to the same ϵ-cube, in any (approximate) Nash equilibrium his guess is correct with high probability (w.h.p.). In addition, Bob should announce the indexes βT, βS, and βP of v (or of v and w). Again, we incentivize him to do so by having him “guess” these β indexes as well; in an (approximate) equilibrium his guess is correct w.h.p.
• We want Alice to announce I(v) (and similarly I(w) in the case of two vertices). Thus, we ask her to guess the composition αvB(βB), where vB and βB are the v and β announced by Bob. Since Bob announces the correct v and β (w.h.p.), in (approximate) equilibrium this incentivizes her to announce the correct I(v) (w.h.p.).
• Now Bob uses the local information I(v) (and I(w)) to compute f. Namely, his last utility component is defined by −||y − fIA(v),IA(w)(x)||_2^2, where fIA(v),IA(w) is Bob’s “estimation” of f under the relevant local information announced by Alice. In (approximate) equilibrium Alice announces the correct local information (w.h.p.); thus Bob computes f correctly (w.h.p.).
Summarizing, the (approximate) equilibrium analysis of the presented game is similar to the analysis of the naive game, because in (approximate) equilibrium f is computed correctly (w.h.p.). But unlike the naive game, here Alice’s utility depends only on the αs and Bob’s utility only on the βs.
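The fixed-point pull of the naive imitation game can be seen in a minimal numerical sketch (our own toy f; best-response dynamics converge here only because this particular f is a contraction, while the reduction itself only uses the equilibrium-to-fixed-point correspondence):

```python
def f(x):
    # toy Lipschitz map with a unique fixed point at the origin
    return [0.5 * c for c in x]

# Alice's utility -||x - y||_2^2 is best-responded by x = y;
# Bob's utility -||y - f(x)||_2^2 is best-responded by y = f(x).
x = [1.0, -1.0]
y = [1.0, 1.0]
for _ in range(60):
    x = y[:]      # Alice imitates Bob's point
    y = f(x)      # Bob matches f of Alice's point

# at the resulting profile, x is (numerically) a fixed point of f
residual = sum((a - b) ** 2 for a, b in zip(x, f(x))) ** 0.5
assert residual < 1e-9
```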
n-Player Game: ϵ-WSNE
The n-player game reduction is based on the same ideas as the two-player reduction. For clarity, we first present the idea of a reduction that proves the following weaker result:
There exists a constant ϵ > 0 such that the communication complexity of ϵ-well-supported approximate Nash equilibrium in n-player games with a constant number of actions for each player is at least 2^{cn} for some constant c.
After that, we explain how to strengthen this result in two respects: first, improving from a constant number of actions per player to binary actions; second, strengthening ϵ-well-supported Nash equilibrium to (ϵ, ϵ)-weak approximate equilibrium.
The idea is to replace the single player (Alice) who chooses x in the ϵ-grid of [−1, 2]^{Θ(n)} by a population of Θ(n) players, where the i-th player in the population is responsible for the i-th coordinate of x. The payoff of the i-th player is given by −|xi − yi|^2. This incentivizes her to play a single action, or two adjacent actions, in the ϵ-grid of the segment [−1, 2] (in every WSNE). By looking at the action profile of all these players, we get the same phenomenon as in the two-player case: every point x in the support of Alice’s players belongs to the same ϵ-cube of the ϵ-grid.
Now, we replace the guess of v ∈ {0, 1}^{Θ(n)}, which was done by Bob, by a population of Θ(n) players, where again each player is responsible for a single coordinate of v. Again, in a WSNE all these players guess correctly.
Similarly for the guess of β: we think of β ∈ [M]^3 as an element of {0, 1}^{3 log M}, and we construct a population of 3 log M players, each controlling a single bit.
Similarly for Alice’s guesses of IA(v) and IA(w): we construct 6 players, each choosing a bit.
Finally, we again replace the choice of y ∈ [−1, 2]^{Θ(n)} by a population of Θ(n) players, each responsible for a single coordinate. The payoff of the i-th such player is given by −|yi − (fIA(v),IA(w)(x))i|^2. The analysis of this game is very similar to the two-player game analysis.
n-Player Game: (ϵ, ϵ)-Weak ANE and Binary Actions
In the above reduction, the x-type (and y-type) players have 3/ϵ actions each. To construct a binary-action game, we use the technique of Babichenko [2016]. We replace each such player by a population of 3/ϵ players, each located at a point of the ϵ-grid of the segment [−1, 2]. A player located at the grid point j ∈ [−1, 2] has to choose between the two points j and j + ϵ. In a WSNE, all players located to the left of yi choose j + ϵ, and all players located to the right of yi choose j.
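This threshold behavior can be sketched numerically (our own toy parameters, with the binary payoff −|a − yi|^2 standing in for the relevant utility component; the coordinate is then read off at the point where the players' choices switch):

```python
# One player per grid point j of [-1, 2), each choosing between j and j + eps.
eps = 0.25
grid = [-1 + k * eps for k in range(int(3 / eps))]

def best_response(j, y_i):
    # binary choice with payoff -|a - y_i|^2: pick the closer of j, j + eps
    return j + eps if abs(j + eps - y_i) < abs(j - y_i) else j

y_i = 0.6
profile = [best_response(j, y_i) for j in grid]
# players to the left of y_i choose their right option j + eps, players to the
# right choose j; the coordinate is recovered at the switching point
x_i = max((c for j, c in zip(grid, profile) if c == j + eps), default=grid[0])
assert abs(x_i - y_i) <= eps
```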
It is trickier to generalize this reduction to weak approximate equilibria. Recall that in a weak approximate equilibrium, a constant fraction of the players may play an arbitrary suboptimal action. Here we must take into account both:
1. Players that are not ϵ-best replying, and
2. Players that are ϵ-best replying, but put small positive weight on the inferior action (among the two) and the realization of their mixed action turns out to be the inferior action.
In order to be immune to this small constant fraction of irrational players, we use error-correcting codes.5 Let Eβ : {0, 1}^{3 log M} → {0, 1}^{Θ(3 log M)} be a good binary error-correcting code. Instead of having a population of size 3 log M that tries to guess β, we use a population of size Θ(3 log M), where each player tries to guess his bit in the encoding of β. Now, even if a small constant fraction of the players act irrationally, the action profile of the β-type players still decodes to β. We use the same idea for all types of populations (x-type, y-type, v-type, and I-type). This completes the reduction for weak approximate equilibria.
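The robustness idea can be sketched with a repetition code standing in for the good binary code Eβ of the text (a repetition code has poor rate, but makes the majority-decoding step transparent): each guessing player holds one bit of the encoding, and decoding succeeds even when a few players deviate.

```python
def encode(bits, r):
    """Repetition code: repeat each bit r times (one bit per guessing player)."""
    return [b for b in bits for _ in range(r)]

def decode(word, r):
    """Majority decoding of each block of r bits."""
    return [int(sum(word[i * r:(i + 1) * r]) > r // 2)
            for i in range(len(word) // r)]

beta = [1, 0, 1, 1]
word = encode(beta, r=9)
word[0] ^= 1    # a few "irrational" players flip their bit...
word[10] ^= 1
word[35] ^= 1
assert decode(word, r=9) == beta   # ...but the profile still decodes to beta
```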
3.5 Proofs
In Subsection 3.5.1 we prove a randomized query lower bound for the end-of-the-line problem. In Subsection 3.5.2 we show how the lower bound of Subsection 3.5.1 can be “lifted” to obtain a hard problem in the randomized communication complexity model. In Subsections 3.5.3, 3.5.4, and 3.5.5 we reduce the communicationally hard end-of-any-line problem to the approximate Nash equilibrium problem.
3.5.1 A Randomized Query Complexity Lower Bound
Let G be a directed graph with vertex set V = {0, 1}^n × {0, 1}^n × [n + 1]. For a vector v ∈ {0, 1}^n, let vj(b) ≜ (v1, …, vj−1, b, vj+1, …, vn) denote v with its j-th coordinate set to b. Each vertex (v1, v2, k), where v1, v2 ∈ {0, 1}^n and k ∈ [n], has two outgoing edges, to the vertices (v1, (v2)k(0), k + 1) and (v1, (v2)k(1), k + 1); we call these the 0-successor and the 1-successor of v, respectively. Each vertex v = (v1, v2, n + 1) has a single outgoing edge, to the vertex (v2, v1, 1). Note that the incoming degree of each vertex v = (v1, v2, k) ∈ V is at most two. For k = 1 there is a single incoming edge, from (v2, v1, n + 1). For k > 1 there are two incoming edges, from (v1, (v2)k−1(0), k − 1) and from (v1, (v2)k−1(1), k − 1); we call these the 0-predecessor and the 1-predecessor of v, respectively.
We define QUERY END-OF-THE-LINE to be the problem of finding the end of a line in G that starts at the point 0^{2n+1}. More formally, we represent a line in G by a triple I(v) ≜ (T(v), S(v), P(v)), where T(v) ∈ {0, 1} indicates whether the line goes through v, S(v) ∈ {0, 1} indicates who is the successor of v, and P(v) ∈ {0, 1} indicates who is the predecessor of v (here we use the fact that each vertex has outgoing and incoming degree at most two). Throughout the chapter, we slightly abuse notation and use S(v)/P(v) to refer both to the bits and to the corresponding vertices (i.e., the S(v)-successor and the P(v)-predecessor of v). The end of the line is the vertex v* such that T(v*) = 1 but T(S(v*)) = 0.
Definition 3.1
QUERY END-OF-THE-LINE.
Input: A line I = (T, S, P) over the graph G that starts at the point 0^{2n+1}.
Output: The first bit ([v*]1) of the end-of-the-line vertex.
Queries: Each query is a vertex v ∈ V. The answer is the triplet of bits I(v) = (T(v), S(v), P(v)) ∈ {0, 1}^3.
Lemma 3.2
Randomized query complexity. For every constant δ < 1/2, any randomized algorithm that solves QUERY END-OF-THE-LINE with probability at least 1 − δ requires Ω(2^n) queries to I.
Proof
By Yao’s minmax theorem, it suffices to exhibit a distribution over lines such that every deterministic query algorithm requires Ω(2^n) queries to determine the first bit of the end-of-the-line vertex with probability at least 1 − δ. We choose a permutation π over {0, 1}^n \ {0^n} uniformly at random, and set π(0) ≜ 0^n. π induces a line of length Θ(2^n · n) over G, starting at 0^{2n+1} and ending at (π(2^n − 1), π(2^n − 1), 0); two consecutive vertices v = π(i) and w = π(i + 1) are connected by a segment of n + 1 edges that morphs v into w one coordinate at a time, through intermediate vertices with second component (w[1,k], v[k+1,n]). Here (w[1,k], v[k+1,n]) denotes the vector whose first k coordinates are as in w and whose last n − k coordinates are as in v.
The information revealed by a single query of QUERY END-OF-THE-LINE (for the above class of lines) can be extracted from π(i − 1), π(i), and π(i + 1). Therefore QUERY END-OF-THE-LINE is at least as hard as the problem of finding the first bit of the last element of a random permutation, where each query returns the previous, the current, and the next elements. Conditioned on the answers to k queries π(q1 − 1), π(q1), π(q1 + 1), …, π(qk − 1), π(qk), π(qk + 1), the last element of the permutation is still uniformly distributed over all elements other than π(q1), …, π(qk), π(q1 − 1), …, π(qk − 1), π(q1 + 1), …, π(qk + 1). This proves that the latter problem requires Ω(2^n) queries.
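The hard distribution can be sketched concretely (our own encoding of vertices as bit-tuples and our own bookkeeping of the third index; the connecting edges between consecutive segments are omitted):

```python
import itertools
import random

def hybrid(w, v, k):
    """(w[1,k], v[k+1,n]): first k coordinates from w, the rest from v."""
    return w[:k] + v[k:]

def segment(v, w, n):
    """The vertices visited while morphing v into w, one coordinate at a time."""
    return [(v, hybrid(w, v, k), k + 1) for k in range(n + 1)]

n = 3
rng = random.Random(1)
rest = [u for u in itertools.product((0, 1), repeat=n) if u != (0,) * n]
rng.shuffle(rest)                      # a uniformly random permutation pi...
order = [(0,) * n] + rest              # ...with pi(0) fixed to the all-zeros vertex

path = []
for v, w in zip(order, order[1:]):
    path.extend(segment(v, w, n))

# within a segment, consecutive vertices differ in one coordinate of the
# second component, and the third index increases by one
for (a1, a2, ak), (b1, b2, bk) in zip(path, path[1:]):
    if b1 == a1:
        assert sum(x != y for x, y in zip(a2, b2)) <= 1 and bk == ak + 1
```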
3.5.2 Communicationally Hard, Discrete End-of-Any-Line Problem
In order to use a simulation theorem (Theorem 2.5) for randomized communication complexity, we define the following simulation variant of QUERY END-OF-THE-LINE:
Definition 3.2
SIMULATION END-OF-THE-LINE. We let N = 2^n · 2^n · (n + 1) · 3.
Input: For each v ∈ {0, 1}^n × {0, 1}^n × [n + 1], Alice receives three vectors αT,v, αS,v, αP,v ∈ {0, 1}^M, and Bob receives three indices βT,v, βS,v, βP,v ∈ [M].
We define
T(v) = αT,v(βT,v), S(v) = αS,v(βS,v), P(v) = αP,v(βP,v). (3.1)
We simulate the problem QUERY END-OF-THE-LINE; therefore we restrict attention to inputs such that the (T, S, P) defined in (3.1) meet all the requirements of QUERY END-OF-THE-LINE.
Output: The first bit ([v*]1) of a non-trivial end or start of a line (v*, v*, 0) ≠ 0^{2n+1}.
Applying the randomized simulation theorem (Theorem 2.5) to the query complexity lower bound (Lemma 3.2) yields a lower bound on the randomized communication complexity of the discrete end-of-line problem SIMULATION END-OF-THE-LINE.
Corollary 3.4
BPP^cc(SIMULATION END-OF-THE-LINE) = Ω(2^n).
3.5.3 Embedding a Line as a Local Lipschitz Function
We embed I as a Euclidean-norm hard continuous function à la Section 4.2. Below, we recall some of the properties of the construction that will be useful for our reduction.
It will be more convenient to think of G as a graph over {0, 1}^{2n+log(n+1)}.
Let m = Θ(2n + log(n + 1)) = Θ(n), and let E : {0, 1}^{2n+log(n+1)} → {0, 1}^m denote the encoding function of a good binary error-correcting code. We embed the discrete graph G into the continuous cube [−1, 2]^{4m}.
The vertex v is embedded at the point (E(v), E(v), 0^m, 0^m) ∈ [−1, 2]^{4m}, which is called the embedded vertex.
For two vertices v, w ∈ V such that (v, w) is an edge in the graph G, we define five points x1(v, w), …, x5(v, w) ∈ [−1, 2]^{4m}:
Note that x1(v, w) is the embedded vertex v, and x5(v, w) is the embedded vertex w.
The segment that connects the points xi(v, w) and xi+1(v, w) is called a Brouwer line segment. The union of these four Brouwer line segments is called the embedded edge (v, w). It is not hard to check that non-consecutive Brouwer line segments are Ω(1)-far from one another; in particular, this implies that non-consecutive embedded edges are sufficiently far from one another.
The following proposition shows that the END-OF-THE-LINE problem can be reduced to the problem of finding an approximate fixed point of a continuous Lipschitz function, when the function is “local” in the following sense: every edge in G is embedded as a path in the continuous hypercube (as described above). For points close to the embedding of an edge, f depends only on the “local behavior” of the lines I at the endpoints of this edge; for all other points, f is independent of the lines I.
Proposition 3.1
Theorem 4.2 and Fact 4.2. There exist constants δ, h > 0 such that, given a line I = (T, S, P) over G, there exists a function f = f(I): [−1, 2]^{4m} → [−1, 2]^{4m} with the following properties:
1. ||f(x) − x||_2 > δ for every x that is not h-close to the embedded edge of the end of the line (i.e., the embedding of the edge (P(v*), v*)).
2. f is O(1)-Lipschitz in the ||·||_2 norm.
3. f is local in the sense that it can be defined as an interpolation between a few (in fact, 64) functions f_{I_1,I_2}, for I_1, I_2 ∈ {0, 1}^3, that do not depend on the lines I, and such that:
(a) If the first m-tuple of coordinates of x is 6h-close to the encoded vertex E(v), but the second m-tuple of coordinates of x is 6h-far from any encoded vertex E(w), then f_{I(v),I_2}(x) = f(x) for every I_2 ∈ {0, 1}^3.
(b) If the second m-tuple of coordinates of x is 6h-close to the encoded vertex E(w), but the first m-tuple of coordinates of x is 6h-far from any encoded vertex E(v), then f_{I_1,I(w)}(x) = f(x) for every I_1 ∈ {0, 1}^3.
(c) If the first m-tuple of coordinates of x is 6h-close to the encoded vertex E(v), and the second m-tuple of coordinates of x is 6h-close to the encoded vertex E(w), then f_{I(v),I(w)}(x) = f(x).
(d) If none of the above conditions are satisfied, then f_{I_1,I_2}(x) = f(x) for every I_1, I_2 ∈ {0, 1}^3.
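Properties (a)–(d) amount to a case analysis: once we know which (if any) codewords the two m-tuples of x are close to, one of the 64 I-independent functions agrees with f at x. A sketch of that selection logic follows (all names are hypothetical; in cases (a), (b), and (d) the unconstrained index is arbitrary, since any choice yields the same value f(x)):

```python
# Sketch of the case analysis behind the locality property of Proposition 3.1.
# decoded_v / decoded_w are the vertices whose codewords the first / second
# m-tuples of x are 6h-close to (None if no codeword is close); I(v) is the
# local line information at v. All names are hypothetical.

ARBITRARY = (0, 0, 0)  # placeholder index; cases (a), (b), (d) allow any value

def select_local_function(decoded_v, decoded_w, I):
    """Return the pair (I1, I2) indexing a local function that equals f at x."""
    if decoded_v is not None and decoded_w is not None:
        return (I(decoded_v), I(decoded_w))   # case (c)
    if decoded_v is not None:
        return (I(decoded_v), ARBITRARY)      # case (a): any I2 works
    if decoded_w is not None:
        return (ARBITRARY, I(decoded_w))      # case (b): any I1 works
    return (ARBITRARY, ARBITRARY)             # case (d): f ignores I

I_toy = lambda v: (1, 0, 1)  # hypothetical 3-bit local information
print(select_local_function("v", None, I_toy))  # -> ((1, 0, 1), (0, 0, 0))
```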
3.5.4 Two-Player Game
Theorem 3.3
Theorem 3.1, restated. There exists a constant ϵ > 0 such that the communication complexity of ϵ-Nash equilibrium in two-player N × N games is at least N^ϵ.
We construct a two-player game between Alice and Bob of size N_A × N_B, for N_A, N_B = 2^{Θ(n)},
such that Alice's utility depends only on the αs, Bob's utility depends only on the βs, and all ϵ^4-approximate Nash equilibria of the game correspond to a δ-fixed point of f from Proposition 3.1. By property 1 in Proposition 3.1, any δ-fixed point of f corresponds to a non-trivial end or start of a line in I.
3.5.4.1 The Game
In this subsection we construct our reduction from SIMULATION END-OF-THE-LINE to the problem of finding an ϵ-WSNE.
Strategies. Recall that δ is the desired approximation parameter for a Brouwer fixed point in the construction of Proposition 3.1. We let ϵ be a sufficiently small constant; in particular, ϵ = O(δ) (this will be important later for Inequality (3.10)).
Each of Alice’s actions corresponds to an ordered tuple (x, I^A_v, I^A_w), where:
• x ∈ [−1, 2]^{4m}, where the interval [−1, 2] is discretized into {−1, −1 + ϵ, …, 2 − ϵ, 2};
• I^A_v and I^A_w are guesses for the triples I(v_B) = (T(v_B), S(v_B), P(v_B)) and I(w_B), respectively.
Each of Bob’s actions corresponds to an ordered tuple (y, v_B, w_B, β^B_v, β^B_w), where:
• y ∈ [−1, 2]^{4m}, where the interval [−1, 2] is discretized into {−1, −1 + ϵ, …, 2 − ϵ, 2};
• v_B, w_B ∈ {0, 1}^{2n+log(n+1)} are vertices in the graph G;
• β^B_v and β^B_w are triples of indexes (intended to be the β-triples of v_B and w_B, respectively).
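Both action sets are finite and of size 2^{Θ(n)}: an action packs a grid point of the discretized cube (Θ(m · log(1/ϵ)) = Θ(n) bits) together with O(n) bits of guesses. A back-of-the-envelope count follows; the constant c and the value of ϵ below are illustrative, not the ones fixed by the proof:

```python
# Back-of-the-envelope count of the bits needed to name one action: a grid
# point of [-1, 2]^{4m} with step eps, plus two guessed triples (O(n) bits).
# The constants here are illustrative, not the ones fixed in the proof.
import math

def log2_num_actions(n, eps=0.01, c=10):
    m = c * n                      # m = Theta(n); c is a hypothetical constant
    grid_per_coord = 3 / eps + 1   # points in {-1, -1+eps, ..., 2}
    log_grid = 4 * m * math.log2(grid_per_coord)
    log_guesses = 2 * 3 * (2 * n + math.ceil(math.log2(n + 1)))
    return log_grid + log_guesses  # total: Theta(n) bits

print(log2_num_actions(20))  # grows linearly in n
```

Doubling n roughly doubles the number of bits needed to name an action, so the action sets indeed have size 2^{Θ(n)}.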
Utilities. Alice’s and Bob’s utilities decompose as U^A = U^A_1 + U^A_2 + U^A_3 and U^B = U^B_1 + U^B_2 + U^B_3, with the components defined below.
The first component of Alice’s utility depends only on the first components, x and y, of her and Bob’s strategies; it is given by U^A_1(x; y) = −||x − y||_2^2.
Given the first component x ∈ [−1, 2]^{4m} of Alice’s strategy, we define two decoding functions D_v, D_w: [−1, 2]^{4m} → {0, 1}^{2n+log(n+1)} as follows. Let R_v(x) ∈ {0, 1}^m be the rounding of the first m-tuple of coordinates of x to {0, 1}^m; let D_v(x) = E^{−1}(R_v(x)) ∈ {0, 1}^{2n+log(n+1)}, where E^{−1} denotes the decoding of the error-correcting code from Subsection 3.5.3. We define D_w(x) ∈ {0, 1}^{2n+log(n+1)} analogously with respect to the second m-tuple of coordinates of x. The second component of Bob’s utility is now given by U^B_2 = 1 iff he guesses correctly the vertex D_v(x), and the corresponding β triple at this vertex. Namely, U^B_2 = 1 if v_B = D_v(x) and β^B_v = (β_1(v_B), β_2(v_B), β_3(v_B)), and U^B_2 = 0 otherwise. We similarly define Bob’s third component U^B_3 with respect to D_w(x).
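The decoding step can be sketched as follows, again with a 3-fold repetition code standing in (hypothetically) for E; the point is that rounding plus nearest-codeword decoding is robust to the small per-coordinate perturbations allowed at an approximate equilibrium:

```python
# Sketch of the decoding function D_v: round the first m coordinates of x to
# {0, 1}, then decode the error-correcting code by nearest codeword.
# A 3-fold repetition code stands in (hypothetically) for the good code E.
import numpy as np

def round_to_bits(block):
    """R_v: round each coordinate of [-1, 2]^m to the nearer of {0, 1}."""
    return (np.asarray(block) > 0.5).astype(int)

def decode_repetition(bits):
    """E^{-1} for the toy code: majority vote over each group of 3."""
    groups = np.asarray(bits).reshape(-1, 3)
    return (groups.sum(axis=1) >= 2).astype(int)

# A point whose first block is a slightly perturbed encoding of v = (1, 0, 1):
noisy = [0.9, 1.1, 0.8,   0.2, -0.1, 0.05,   1.0, 0.7, 0.95]
print(decode_repetition(round_to_bits(noisy)).tolist())  # -> [1, 0, 1]
```

For the repetition code, majority vote is exactly nearest-codeword decoding.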
Note that Bob knows the indexes β(v) for every v; thus, to achieve U^B_2 = U^B_3 = 1, Bob needs only to guess correctly the vertices D_v(x) and D_w(x) and announce the corresponding triplets of β indexes.
Going back to Alice, the second component of her utility is given by U^A_2 = 1 iff she guesses correctly the triplet I(v_B) = (T(v_B), S(v_B), P(v_B)), where the calculation of T, S, P is done by the decomposition of α(β^B). Namely, U^A_2 = 1 if I^A_v = (T(v_B), S(v_B), P(v_B)), and U^A_2 = 0 otherwise. We similarly define Alice’s third component U^A_3 with respect to w_B.
Finally, the first component of Bob’s utility is given by U^B_1(y; x, I^A_v, I^A_w) = −||y − f_{I^A_v,I^A_w}(x)||_2^2,
where the function f_{I^A_v,I^A_w} is as defined in Proposition 3.1.
3.5.4.2 Analysis of Game
In this subsection, we prove the reduction from SIMULATION END-OF-THE-LINE to finding an ϵ^4-ANE. The proof proceeds via a sequence of lemmas that establish the structure of any ϵ^4-ANE.
Lemma 3.3
In every ϵ^4-ANE (A, B), where A denotes Alice’s mixed strategy and B Bob’s, it holds with probability at least 1 − ϵ^2 (where the probability is taken over x ∼ A) that Alice’s action x is close to the rounded expectation [E_{y∼B} y]_ϵ of Bob’s play.
Proof We denote by ȳ = E_{y∼B}[y] the vector of expectations, and by (Var(y_1), …, Var(y_{4m})) the vector of variances of Bob’s play. For every x we can rewrite Alice’s expected payoff from her first component as E_{y∼B}[−||x − y||_2^2] = −||x − ȳ||_2^2 − Σ_i Var(y_i).
Since the variances of the y_i’s, as well as the second and third components of her utility, do not depend on x, Alice’s best response to B is x* = [ȳ]_ϵ,
where [·]_ϵ denotes coordinate-wise rounding to the closest integer multiple of ϵ; the action x* yields a payoff that is within the rounding loss of optimal. (3.2)
Note that in every ϵ^4-ANE Alice assigns a probability of at most ϵ^2 to actions that are ϵ^2-far from optimal. By Equation (3.2) this implies that the probability that Alice chooses a vector x that is far from x* is at most ϵ^2.
■
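The best-response step of this proof — against a mixed strategy, a quadratic loss is minimized over the ϵ-grid at the rounded expectation — can be checked numerically on a toy instance. The utility −||x − y||_2^2 used below is the form assumed in the sketch above, and all numbers are illustrative:

```python
# Numeric check of the best-response step in Lemma 3.3: with utility
# -||x - y||_2^2 against a mixed strategy over y, the expected utility is
# maximized (over the eps-grid) at the rounding of E[y].
# The quadratic utility is an assumption made for illustration.
import itertools
import numpy as np

eps = 0.25
grid = np.arange(-1.0, 2.0 + 1e-9, eps)          # {-1, -1+eps, ..., 2}
ys = np.array([[0.3, 1.4], [0.6, 0.9], [0.2, 1.1]])
probs = np.array([0.5, 0.25, 0.25])              # Bob's mixed strategy

def expected_utility(x):
    return -sum(p * np.sum((x - y) ** 2) for p, y in zip(probs, ys))

best = max((np.array(x) for x in itertools.product(grid, repeat=2)),
           key=expected_utility)
ey = probs @ ys                                   # E[y] = [0.35, 1.2]
rounded = grid[np.abs(grid[:, None] - ey).argmin(axis=0)]
print(best, rounded)                              # the two coincide
```

The variance term Σ_i Var(y_i) shifts all of Alice's payoffs by the same constant, which is why only the distance to E[y] matters.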
Lemma 3.4
In every ϵ^4-ANE (A, B), if the first m-tuple of coordinates of E_{y∼B}[y] is 6h-close to the binary encoding E(v) of a vertex v, then v_B = v and β^B_v = (β_1(v), β_2(v), β_3(v)) (3.3)
with probability at least 1 − O(ϵ^4) (where the probability is taken over Bob’s mixed strategy B).
Proof
By Lemma 3.3 and the triangle inequality, with probability at least 1 − ϵ^2, the first m-tuple of x is O(h)-close to E(v). Rounding to R_v(x) ∈ {0, 1}^m can at most double the distance to E(v) in each coordinate. Therefore, the Hamming distance of R_v(x) from E(v) is O(h). Hence R_v(x) is correctly decoded as D_v(x) = v, with probability of at least 1 − ϵ^2.
Since β^B_v and β^B_w do not affect U^B_1, Bob’s utility from guessing v_B = v and β^B_v = (β_1(v), β_2(v), β_3(v)) is at least 1 − ϵ^2, whereas his utility from any other guess is at most ϵ^2. Therefore, Bob assigns a probability of at least 1 − ϵ^4/(1 − 2ϵ^2) to actions that satisfy (3.3).
■
A similar lemma holds for the second m-tuple of x and the vertex w:
Lemma 3.5
In every ϵ^4-ANE (A, B), if the second m-tuple of coordinates of E_{y∼B}[y] is 6h-close to the binary encoding E(w) of a vertex w, then w_B = w and β^B_w = (β_1(w), β_2(w), β_3(w))
with probability at least 1 − O(ϵ^4) (where the probability is taken over Bob’s mixed strategy B).
Since Alice receives the correct v_B and β^B, we also have:
Lemma 3.6
In every ϵ^4-ANE (A, B), if the first m-tuple of coordinates of E_{y∼B}[y] is 6h-close to the binary encoding E(v) of a vertex v, then I^A_v = (T(v), S(v), P(v))
with probability 1 − O(ϵ^4) (where the probability is taken over A and B).
Proof
Follows immediately from Lemma 3.4 and the fact that Alice’s guess I^A_v does not affect the other components of her utility.
■
A similar lemma holds for the second m-tuple of x and the vertex w:
Lemma 3.7
In every ϵ^4-ANE (A, B), if the second m-tuple of coordinates of E_{y∼B}[y] is 6h-close to the binary encoding E(w) of a vertex w, then I^A_w = (T(w), S(w), P(w))
with probability 1 − O(ϵ^4) (where the probability is taken over A and B).
Lemma 3.8
In every ϵ^4-ANE (A, B), f_{I^A_v,I^A_w}(x) = f(x) with probability 1 − O(ϵ^4).
Proof
Follows immediately from Lemmas 3.6 and 3.7 and the “locality” condition in Proposition 3.1.
■
The following corollary completes the analysis of the 2-player game.
Corollary 3.5
In every ϵ^4-ANE (A, B), with probability 1 − O(ϵ^2), x is a δ-approximate fixed point of f; i.e., ||f(x) − x||_2 ≤ δ.
Proof
We recall that in Lemma 3.3 we proved that, with probability 1 − O(ϵ^2), x is close to the rounded expectation [E_{y∼B} y]_ϵ. This also implies that, with probability 1 − O(ϵ^4), x is close to its expectation (Inequality (3.5)), where the first inequality in (3.5) follows from the triangle inequality, the second follows from the AM-GM inequality, the third follows from convexity, and the last follows from Lemma 3.3.
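The convexity step here is an instance of Jensen's inequality. As a sketch (this is a generic bound, not the exact chain of Inequality (3.5)): for any random vector y ∈ R^{4m},

```latex
\mathbb{E}\,\bigl\lVert y-\mathbb{E}[y]\bigr\rVert_2
\;\le\;
\sqrt{\mathbb{E}\,\bigl\lVert y-\mathbb{E}[y]\bigr\rVert_2^{2}}
\;=\;
\sqrt{\sum_{i=1}^{4m}\operatorname{Var}(y_i)},
```

where the inequality applies Jensen's inequality to the concave square-root function.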
Using that f is O(1)-Lipschitz together with Equation (3.5), we get that
with probability 1 − O(ϵ^4).
By Lemma 3.8 we know that f_{I^A_v,I^A_w}(x) = f(x) with probability 1 − O(ϵ^4), which implies that
Using similar arguments to those of Lemma 3.3 we can show that
with probability 1 − O(ϵ^4). As in the derivation of Inequality (3.5), this implies:
with probability 1 − O(ϵ^4).
With probability 1 − O(ϵ^2), Inequalities (3.5), (3.4), (3.9), (3.8), (3.7), and (3.6) hold simultaneously. In such a case, by the triangle inequality, applying the inequalities in the exact order above shows that x is a δ-approximate fixed point of f.
■
Proof of Theorem 3.1. Any communication protocol that solves the ϵ^4-Nash equilibrium problem in games of size N × N for N = 2^{Θ(n)} induces a communication protocol for the problem SIMULATION END-OF-THE-LINE: Alice constructs her utility in the game presented above using her private information (the αs), and Bob constructs his utility using the βs. They run the communication protocol to find an ϵ^4-Nash equilibrium, after which both of them know a δ-approximate fixed point of f (by Corollary 3.5). Using D_v, they decode the vertex v*, and in particular they know the first bit of v*.
Using Corollary 3.4, we deduce that the communication complexity of ϵ^4-Nash equilibrium in games of size 2^{Θ(n)} × 2^{Θ(n)} is at least 2^{Ω(n)}.
■
3.5.5 n-player Game
Theorem 3.4
Theorem 3.2, restated. There exists a constant ϵ > 0 such that the communication complexity of (ϵ, ϵ)-weak approximate Nash equilibrium in n-player binary-action games is at least 2^{ϵn}.