Lattice Sparsification and the Approximate Closest Vector Problem

We give a deterministic algorithm for solving the (1+ε)-approximate Closest Vector Problem (CVP) on any n-dimensional lattice and any norm in 2^{O(n)}(1+1/ε)^n time and 2^n poly(n) space. Our algorithm builds on the lattice point enumeration techniques of Micciancio and Voulgaris (STOC 2010) and Dadush, Peikert and Vempala (FOCS 2011), and gives an elegant, deterministic alternative to the "AKS Sieve"-based algorithms for (1+ε)-CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). Furthermore, assuming the existence of a poly(n)-space and 2^{O(n)}-time algorithm for exact CVP in the ℓ_2 norm, the space complexity of our algorithm can be reduced to polynomial. Our main technical contribution is a method for "sparsifying" any input lattice while approximately maintaining its metric structure. To this end, we employ the idea of random sublattice restrictions, which was first used by Khot (FOCS 2003) to prove hardness of the Shortest Vector Problem (SVP) under ℓ_p norms.


Results and Techniques
Our main result is as follows:

Theorem 1.1 (Approximate CVP in any norm, informal). There is a deterministic algorithm that, given any near-symmetric norm ‖·‖_K, n-dimensional lattice L, target x ∈ R^n, and 0 < ε ≤ 1, computes y ∈ L, a (1 + ε)-approximate minimizer of ‖y − x‖_K, in (1 + 1/ε)^n · 2^{O(n)} time and Õ(2^n) space.
In the above theorem we extend the DPV lattice point enumeration techniques and give the first deterministic alternative to the AKS randomized sieving approach. Compared to AKS, our approach also achieves a better dependence on ε, namely 2^{O(n)}(1 + 1/ε)^n instead of 2^{O(n)}(1 + 1/ε)^{2n}, and uses significantly less space, O(2^n) compared to 2^{O(n)}(1 + 1/ε)^n. Additionally, as we discuss below, continued progress on exact CVP under ℓ_2 could further reduce the space usage of the algorithm. We note, however, that the 2^{O(n)} factors in the running time are currently much larger than in AKS, though little effort has been spent trying to compute or optimize them. To explain our approach, we first present the main DPV enumeration algorithm in its most recent formulation [Dad12a].

Theorem 1.2 (Enumeration in Convex Bodies, informal).
There is a deterministic algorithm that, given an n-dimensional convex body K and lattice L, enumerates the elements of K ∩ L in time 2^{O(n)} G(K, L) using O(2^n) space, where G(K, L) = max_{x ∈ R^n} |(K + x) ∩ L|. Furthermore, given an algorithm that solves exact CVP under ℓ_2 in T(n) time and S(n) space, K ∩ L can be enumerated in 2^{O(n)} T(n) G(K, L) time using S(n) + poly(n) space.
The main idea behind the above algorithm is to first compute a covering of K by 2^{O(n)} translates of an M-ellipsoid E of K, and to use the MV enumeration techniques to compute the lattice points inside each translate of E. In its first incarnation [DPV11], the above algorithm was randomized (randomization was needed to construct the M-ellipsoid) and had space complexity dependent on G(K, L). In [DV12], a deterministic M-ellipsoid construction was presented, yielding a completely deterministic enumerator. Lastly, in [Dad12a], the space usage was decoupled from G(K, L) and a direct reduction from lattice point enumeration to exact CVP under ℓ_2 was presented.
The above lattice point enumerator will form the core of our (1 + ε)-CVP algorithm. As we will see from the algorithm's analysis, its space usage is only an additive polynomial factor larger than the space required for the enumeration. Therefore, if one could develop an exact CVP solver under ℓ_2 which runs in 2^{O(n)} time and poly(n) space, then the space usage of our (1 + ε)-CVP algorithm could be reduced to poly(n) within the same time complexity. The possibility of such a solver is discussed in [MV10], and developing it remains an important open problem. We remark that by plugging in Kannan's algorithm for CVP under ℓ_2, we do indeed get a poly(n)-space (1 + ε)-CVP solver, though at the cost of an n^{n/2} factor increase in running time.
Using the above enumerator as a black box, we now present the approach taken in [DPV11] to solve CVP and explain the main problem that arises. Given the target t ∈ R^n, their algorithm first computes an initial coarse underestimate d_0 of the distance of t to L under ‖·‖_K (using LLL, for example). Next, it uses the lattice point enumerator to successively compute the sets (t + 2^i d_0 K) ∩ L (i.e. all lattice points at distance at most 2^i d_0 from t), for i ≥ 0, until a lattice point is found. Finally, the closest vector to t in the final enumerated set is returned.
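For concreteness, the following is a minimal Python sketch of this search strategy; lattice_enum stands in for the enumerator of Theorem 1.2, and norm_K and the initial underestimate d0 are hypothetical inputs not specified at this level of detail.

```python
def cvp_by_doubling(lattice_enum, norm_K, t, d0):
    """Sketch of the [DPV11] approach: grow the search radius geometrically
    around the target t until the enumerator finds at least one lattice point,
    then return the closest point found.  lattice_enum(center, radius) is
    assumed to yield all lattice points y with norm_K(y - center) <= radius."""
    d = d0  # coarse underestimate of the distance from t to L (e.g. via LLL)
    while True:
        candidates = list(lattice_enum(t, d))
        if candidates:
            # the cost of this call scales with G(dK, L), which is why d must
            # be kept close to the true distance
            return min(candidates, key=lambda y: norm_K([yi - ti for yi, ti in zip(y, t)]))
        d *= 2  # no point within distance d: double the radius and retry
```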
From this description, it is relatively straightforward to show that the complexity of the algorithm is essentially G(dK, L), where d is the distance of t to L. The main problem with this approach is that, in general, one cannot a priori bound G(dK, L); even in 2 dimensions this quantity can be made arbitrarily large. The only generic setting where such a bound is available is when the distance d of the target is bounded by αλ, where λ is the length of the shortest non-zero vector under ‖·‖_K. In this situation, we can bound G(dK, L) by 2^{O(n)}(1 + α)^n. We remark that solving CVP with this type of guarantee corresponds to the Bounded Distance Decoding problem in the literature, and by a standard reduction can be used to solve SVP in general norms as well [GMSS99].
To circumvent the above problem, we propose the following simple solution. Instead of solving the CVP on the original lattice L, we attempt to solve it on a sparser sublattice L' ⊆ L, where the distance of t to L' is not much larger than its distance to L (we settle for an approximate solution here) and where the maximum number of lattice points at the new target distance is appropriately bounded. Our main technical contribution is to show the existence of such "lattice sparsifiers" and to give a deterministic algorithm to compute them:

Theorem 1.3 (Lattice Sparsifier, informal). There is a deterministic algorithm that, given any near-symmetric norm ‖·‖_K, n-dimensional lattice L, and distance t ≥ 0, computes a sublattice L' ⊆ L in deterministic 2^{O(n)} time and Õ(2^n) space satisfying: (1) the distance from L' to any point in R^n is at most its distance to L plus an additive t, and (2) the number of points of L' within distance t of any point is at most 2^{O(n)}.
Solving (1 + ε)-CVP using the above lattice sparsifier is straightforward. We simply compute a sparsifier L' for L under ‖·‖_K with distance parameter εd, where d = d_K(L, t) is the distance from the target t to L, and then solve exact CVP on L' using the DPV algorithm. By the guarantees on the sparsifier, L' contains a point at distance at most d + εd = (1 + ε)d from t, and using a simple packing argument (see Lemma 2.1) we can show that G((1 + ε)dK, L') ≤ 2^{O(n)}(1 + 1/ε)^n. The correctness of the output thus follows from the distance-preserving properties of L', and the desired runtime follows from the above bound on G((1 + ε)dK, L').
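The reduction itself is two lines. The sketch below assumes hypothetical routines lattice_sparsifier (as in Theorem 1.3) and exact_cvp_dpv (the DPV solver run on the sparsified lattice), and glosses over the fact that d must in practice be guessed, e.g. by the doubling strategy above.

```python
def approx_cvp_via_sparsifier(L, t, eps, dist_K, lattice_sparsifier, exact_cvp_dpv):
    """(1+eps)-CVP sketch: sparsify at distance eps*d, then solve CVP exactly
    on the sparser sublattice.  dist_K(L, t) is assumed to return d_K(L, t);
    in the real algorithm this distance is only guessed/approximated."""
    d = dist_K(L, t)                           # distance of the target to L
    L_sparse = lattice_sparsifier(L, eps * d)  # a (K, eps*d) sparsifier of L
    # L_sparse contains a point within (1+eps)*d of t, while any translate of
    # (1+eps)*d*K contains few of its points, so the exact solver below is fast.
    return exact_cvp_dpv(L_sparse, t)
```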
To prove the existence of lattice sparsifiers, we make use of random sublattice restrictions, a tool first employed by Khot [Kho03, Kho04] for the purpose of proving hardness of SVP. More precisely, we show that with constant probability the restriction of L by a random modular form (for an appropriately chosen modulus) yields the desired sparsifier. We remark that our use of sublattice restrictions is somewhat more refined than in [Kho03, Kho04]. In Khot's setting, the random sublattice is calibrated to remove all short vectors in a NO instance, and to keep at least one short vector in a YES instance. In our setting, we need both properties simultaneously for the same lattice: we want to remove many short vectors to guarantee reasonable enumeration complexity, while at the same time keeping enough vectors so that the original lattice lies "close" to the sublattice. As a final difference, we show that our construction can be derandomized in 2^{O(n)} time, yielding a completely deterministic algorithm.
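To make the notion of a random sublattice restriction concrete, the sketch below (a standard construction, not code from the paper) builds a basis of the index-p sublattice obtained by imposing a single linear condition modulo p on the coordinate vector; in the randomized sparsifier the vector a is chosen uniformly at random.

```python
import numpy as np

def restrict_by_modular_form(B, a, p):
    """Given a basis B (columns) of a lattice L, a prime p, and a vector
    a in Z_p^n with a != 0 (mod p), return a basis of the index-p sublattice
        L' = { B z : z in Z^n,  <a, z> = 0 (mod p) }."""
    n = len(a)
    a = np.asarray(a, dtype=np.int64) % p
    i = int(np.flatnonzero(a)[0])          # a coordinate with a_i != 0 mod p
    inv_ai = pow(int(a[i]), -1, p)         # a_i^{-1} mod p (Python 3.8+)
    M = np.eye(n, dtype=np.int64)
    for j in range(n):
        if j != i:
            M[i, j] = -(int(a[j]) * inv_ai) % p   # makes <a, column_j> = 0 mod p
    M[i, i] = p                            # p * e_i always satisfies the condition
    return B @ M                           # columns form a basis of L'
```

Since det(M) = p, the returned sublattice has index p in L, so "most" of L survives while the surviving points are spread out; this is exactly the trade-off exploited by the sparsifier.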
Organization. In Section 3, we provide the exact reduction from (1 + ε)-CVP to lattice sparsification, formalizing Theorem 1.1. In Section 4, we prove the existence of lattice sparsifiers using the probabilistic method. In Section 5, we give the derandomized lattice sparsifier construction, formalizing Theorem 1.3. Lastly, in Section 6, we discuss further applications and future directions.

Preliminaries
Convexity and Norms. For sets A, B ⊆ R^n, let A + B = {a + b : a ∈ A, b ∈ B} denote their Minkowski sum. B_2^n denotes the n-dimensional Euclidean unit ball in R^n. A convex body K ⊆ R^n is a full-dimensional compact convex set. A convex body K is (a_0, r, R)-centered if a_0 + rB_2^n ⊆ K ⊆ a_0 + RB_2^n. For a convex body K ⊆ R^n containing 0 in its interior, we define the (possibly asymmetric) norm ‖·‖_K induced by K as ‖x‖_K = inf{s ≥ 0 : x ∈ sK}. For a (0, r, R)-centered convex body K, we note that ‖x‖_2/R ≤ ‖x‖_K ≤ ‖x‖_2/r for all x ∈ R^n, and hence ‖·‖_K defines a regular norm on R^n. We say that a convex body K is γ-symmetric, for 0 < γ ≤ 1, if vol(K ∩ −K) ≥ γ^n vol(K); K is symmetric if K = −K.

Computational Model. The convex bodies and norms will be presented to our algorithms via weak membership and distance oracles. For ε ≥ 0 and a convex body K ⊆ R^n, we define K^ε = K + εB_2^n and K^{−ε} = {x ∈ K : x + εB_2^n ⊆ K}. A weak membership oracle O_K for K is a function which takes as input a point x ∈ Q^n and a real ε > 0, and asserts either that x ∈ K^ε or that x ∉ K^{−ε}. A weak distance oracle D_{K,·} for K is a function that takes as input a point x ∈ Q^n and ε > 0, and returns a rational number satisfying |D_{K,ε}(x) − ‖x‖_K| ≤ ε min{1, ‖x‖_K}. The runtimes of our algorithms will be measured by the number of oracle calls and arithmetic operations. For simplicity, we use the notation poly(·) to denote a polynomial factor in all the relevant input parameters (dimension, encoding length of the basis, etc.).
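For intuition, the gauge norm induced by K can be approximated from a membership oracle by bisection over the scaling factor. The following minimal Python sketch uses an exact membership test (rather than the weak oracle above) and the (0, r, R)-centering to bracket the answer.

```python
import math

def gauge_norm(in_K, x, r, R, tol=1e-6):
    """Approximate ||x||_K = inf{ s >= 0 : x in s*K } for a (0, r, R)-centered
    convex body K, given an exact membership test in_K(point) -> bool.
    The centering gives the bracket ||x||_2 / R <= ||x||_K <= ||x||_2 / r."""
    d = math.sqrt(sum(xi * xi for xi in x))
    if d == 0.0:
        return 0.0
    lo, hi = d / R, d / r
    while hi - lo > tol * lo:
        mid = (lo + hi) / 2.0
        if in_K([xi / mid for xi in x]):   # x in mid*K  <=>  x/mid in K
            hi = mid
        else:
            lo = mid
    return hi
```

For instance, with in_K testing membership in the ℓ_1 unit ball (so r = 1/√n and R = 1), this recovers the ℓ_1 norm of x up to the chosen tolerance.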
Lattices. An n-dimensional lattice L ⊂ R^n is a discrete subgroup of R^n; L can be expressed as BZ^n, where B ∈ R^{n×n} is a non-singular matrix, which we refer to as a basis for L. The dual lattice of L is L^* = {y ∈ R^n : ⟨x, y⟩ ∈ Z for all x ∈ L}, which is generated by the basis B^{−T} (inverse transpose).
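As a small numerical illustration (using numpy; not part of the paper), the dual basis B^{−T} indeed pairs integrally with B:

```python
import numpy as np

B = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # basis of L (columns)
B_dual = np.linalg.inv(B).T         # basis of the dual lattice L*

# Every pairing <B z, B_dual w> with integer z, w is an integer,
# since it equals z . w by construction.
z, w = np.array([3, -1]), np.array([2, 5])
pairing = (B @ z) @ (B_dual @ w)
print(round(pairing, 9))            # prints 1.0, which equals z . w
```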
We define the length of the shortest non-zero vector of L under ‖·‖_K by λ_1(K, L) = min_{y ∈ L \ {0}} ‖y‖_K. We let SVP(K, L) = argmin_{z ∈ L \ {0}} ‖z‖_K denote the set of shortest non-zero vectors of L under ‖·‖_K. For x ∈ R^n, define the distance of x to L under ‖·‖_K by d_K(L, x) = min_{y ∈ L} ‖y − x‖_K. We let CVP(K, L, x) = argmin_{y ∈ L} ‖y − x‖_K denote the set of closest vectors to x in L under ‖·‖_K.
For a lattice L and convex body K in R^n, let G(K, L) be the largest number of lattice points contained in any translate of K, that is, G(K, L) = max_{x ∈ R^n} |(K + x) ∩ L|. We will need the following bound on G(K, L) from [Dad12a] (we include a proof in the appendix for completeness).

Lemma 2.1. Let K ⊆ R^n denote a γ-symmetric convex body and let L denote an n-dimensional lattice. Then for d > 0 and t ≥ 1 we have that G(tdK, L) ≤ (2t + 1)^n γ^{−n} G(dK, L).

Algorithms. We will need the following lattice point enumeration algorithm from [DPV11, Dad12a].

Theorem 2.2 (Algorithm Lattice-Enum(K, L, ε)). Let K ⊆ R^n be an (a_0, r, R)-centered convex body given by a weak membership oracle O_K, let L ⊆ R^n be an n-dimensional lattice with basis B ∈ Q^{n×n}, and let ε > 0. Then there is a deterministic algorithm that on inputs K, L, ε outputs a set S (one element at a time) satisfying K ∩ L ⊆ S ⊆ (K + εB_2^n) ∩ L in G(K, L) · 2^{O(n)} · poly(·) time using 2^n poly(·) space.
We will also require the following SVP solver from [DPV11, Dad12a].

Theorem 2.3 (Algorithm Shortest-Vectors(K, L, ε)). Let K ⊆ R^n be an (a_0, r, R)-centered symmetric convex body given by a weak membership oracle O_K, let L ⊆ R^n be an n-dimensional lattice with basis B ∈ Q^{n×n}, and let ε > 0. Let λ_1 = λ_1(K, L). Then there is an algorithm that on inputs K, L, ε outputs a set S ⊆ L satisfying SVP(K, L) ⊆ S ⊆ {y ∈ L \ {0} : ‖y‖_K ≤ (1 + ε)λ_1} in deterministic 2^{O(n)} poly(·) time and 2^n poly(·) space.

CVP via Lattice Sparsification
To start, we give a precise definition of the lattice sparsifier.
Definition 3.1 (Lattice Sparsifier). Let K ⊆ R^n be a γ-symmetric convex body, let L be an n-dimensional lattice, and let t ≥ 0. A (K, t) sparsifier for L is a sublattice L' ⊆ L satisfying: (1) d_K(L', x) ≤ d_K(L, x) + t for all x ∈ R^n, and (2) G(tK, L') = 2^{O(n)} γ^{−n}.

The following theorem is the formalization of our lattice sparsifier construction.

Theorem 3.2 (Algorithm Lattice-Sparsifier).
Let K ⊆ R^n be a (0, r, R)-centered and γ-symmetric convex body specified by a weak membership oracle O_K, and let L denote an n-dimensional lattice with basis B ∈ Q^{n×n}. For t ≥ 0, a (K, t) sparsifier for L can be constructed in 2^{O(n)} poly(·) time and 2^n poly(·) space.
The proof of the above theorem is the subject of Sections 4 and 5 (randomized and deterministic constructions, respectively). Using the above lattice sparsifier construction, we obtain the following simple algorithm for (1 + ε)-CVP.

Theorem 3.3. Algorithm 1 (Approx-Closest-Vectors) is correct, and on inputs K, L, x, ε (as above), with K γ-symmetric, it runs in deterministic 2^{O(n)} γ^{−n} (1 + 1/ε)^n poly(·) time and 2^n poly(·) space.

Proof.
Correctness: If x ∈ L, we are clearly done. Next, since K is (0, r, R)-centered, we have that ‖y‖_2/R ≤ ‖y‖_K ≤ ‖y‖_2/r for all y ∈ R^n. Now take any z ∈ CVP(K, L, x) and z̃ ∈ SVP(B_2^n, L), and write d_x = d_K(L, x) = ‖z − x‖_K. Let d_f denote the value of d after the first while loop terminates. We claim that (1/2)d_f < d_x. When the while loop terminates, we are guaranteed that the call to Lattice-Enum((1 + ε/3)d_f K + x, L', rε_0) has returned at least one lattice point. If the while loop terminates after the first iteration, then d_f = l ≤ d_x, and hence (1/2)d_f < d_x as needed. If the loop iterates more than once, then, for the sake of contradiction, assume that (1/2)d_f ≥ d_x. But then the call to Lattice-Enum((1 + ε/3)dK + x, L', rε_0) with d = d_f/2 was guaranteed to return a lattice point, and hence the while loop would have terminated at that iteration, a clear contradiction.
Let L' denote the sublattice computed by Lattice-Sparsifier for the final value of d at the end of the while loop, and let d̃_x be as in the algorithm. We first note that d̃_x = min{d_f + ε_0, D_{K,ε_0}(z − x)} for some z ∈ L'. Using the guarantees on D_{K,·}, together with a case analysis on whether the corresponding call to Lattice-Enum returned a lattice point, one can check that d̃_x + ε_0 ≥ d_K(L', x). Therefore the final call to Lattice-Enum((d̃_x + ε_0)K + x, L', rε_0) outputs all the closest vectors of L' to x, and any vector y output during this call satisfies the desired (1 + ε)-approximation guarantee by the distance-preserving property of L'.

Running Time: We first bound the running time of each call to Lattice-Enum. Within the while loop, the calls to Lattice-Enum((1 + ε/3)dK + x, L', rε_0) run in 2^{O(n)} G((1 + ε/3)dK, L') poly(·) time and 2^n poly(·) space. By Lemma 2.1, since (1 + ε/3) = t · (ε/3) for t = 3/ε + 1, we have that G((1 + ε/3)dK, L') ≤ 2^{O(n)} γ^{−n} (1 + 1/ε)^n by the guarantee on L'. Lastly, note that each call to Lattice-Sparsifier takes at most 2^{O(n)} poly(·) time and 2^n poly(·) space. Since the while loop iterates polynomially many times (i.e. at most log_2(2R/r)), the total runtime is 2^{O(n)} γ^{−n} (1 + 1/ε)^n poly(·) and the total space usage is 2^n poly(·), as needed.

A Simple Randomized Lattice Sparsifier Construction
We begin with an existence proof for lattice sparsifiers using the probabilistic method. We will use the Cauchy-Davenport sumset inequality and a number-theoretic lemma on prime gaps, a consequence of a theorem of Rosser and Schoenfeld [RS62, Nar00].

Theorem 4.1 (Cauchy-Davenport). Let p be a prime. Then for non-empty A_1, . . . , A_k ⊆ Z_p, we have that |A_1 + · · · + A_k| ≥ min{p, |A_1| + · · · + |A_k| − (k − 1)}.

Lemma 4.2 (Prime Gap). For every integer N > 1000, there exists a prime p satisfying N < p < 4N/3.

Proof of Lemma 4.2 (Prime Gap). We will use the bounds π(x) > x/ln(x) for x > 17 and π(x) < 1.26 x/ln(x) for x > 1 from [RS62], where π(x) denotes the number of primes at most x.

We begin with the following crucial lemma, which forms the core of our lattice sparsifier construction.

Lemma 4.3.
Let p be a prime and let S ⊆ Z_p^n satisfy 1000 < |S| < p < 4|S|/3 and 0 ∈ S. Then there exists a ∈ Z_p^n satisfying: (1) |{y ∈ S : ⟨a, y⟩ ≡ 0 (mod p)}| ≤ 6, and (2) |{⟨a, y⟩ (mod p) : y ∈ S}| ≥ (p + 2)/3.

Proof. Let a denote a uniform random vector in Z_p^n. We will show that a satisfies both conditions (1) and (2) with non-zero probability. Let E_i^y denote the indicator of the event ⟨a, y⟩ ≡ i (mod p), for y ∈ S and i ∈ Z_p.

Claim: E[Σ_{y ∈ S \ {0}} E_0^y] = (|S| − 1)/p.

Proof. By linearity of expectation it suffices to prove E[E_0^y] = Pr[⟨a, y⟩ ≡ 0 (mod p)] = 1/p for y ∈ S \ {0}. Since y ≠ 0, p is a prime, and a is uniform in Z_p^n, the inner product ⟨a, y⟩ is uniform in Z_p. Therefore Pr[⟨a, y⟩ ≡ 0 (mod p)] = 1/p.
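As an informal sanity check of this probabilistic argument (not part of the proof), one can sample random sets S and vectors a and verify empirically that conditions (1) and (2) hold for most choices of a; the parameters below are arbitrary but satisfy 1000 < |S| < p < 4|S|/3.

```python
import random

def conditions_hold(S, a, p):
    dots = [sum(ai * yi for ai, yi in zip(a, y)) % p for y in S]
    zeros = sum(1 for d in dots if d == 0)
    distinct = len(set(dots))
    return zeros <= 6 and distinct >= (p + 2) / 3

def trial(n=4, p=1511, m=1200, samples=200):
    # random S of size m containing 0, with 1000 < m < p < 4m/3 (p = 1511 is prime)
    S = {tuple([0] * n)}
    while len(S) < m:
        S.add(tuple(random.randrange(p) for _ in range(n)))
    S = list(S)
    good = sum(conditions_hold(S, [random.randrange(p) for _ in range(n)], p)
               for _ in range(samples))
    return good / samples   # empirically, almost every random a succeeds here

print(trial())
```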

Derandomizing the Lattice Sparsifier Construction
We begin with a high-level outline of the deterministic sparsifier construction. To recap, in the previous section we built a (K, t) sparsifier for L as follows (a code sketch of this outline is given below):
1. Compute N ← |tK ∩ L|. If N ≤ 1000, then return L' = L. Otherwise, find a prime p satisfying N < p < 4N/3.
2. Compute S ← {B^{*T} y (mod p) : y ∈ tK ∩ L} ⊆ Z_p^n, the coordinates of the points of tK ∩ L with respect to the basis B, reduced modulo p.
3. Find a vector a ∈ Z_p^n satisfying (in fact, for slightly worse parameters, a random a ∈ Z_p^n succeeds with constant probability):
(a) |{y ∈ S : ⟨a, y⟩ ≡ 0 (mod p)}| ≤ 6,
(b) |{⟨a, y⟩ (mod p) : y ∈ S}| ≥ (p + 2)/3.
4. Return the sublattice L' = {y ∈ L : ⟨y, B^* a⟩ ≡ 0 (mod p)}.
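The following sketch strings these steps together; enumerate_ball (standing in for Lattice-Enum on tK), next_prime_in_gap, and good_vector are hypothetical stand-ins for the corresponding subroutines, and restrict_by_modular_form is the helper sketched in the previous section (for the randomized construction, good_vector may simply sample a uniformly).

```python
import numpy as np

def randomized_sparsifier(B, t, enumerate_ball, next_prime_in_gap, good_vector):
    """Sketch of the (K, t) sparsifier outline above for a lattice with basis B.
    enumerate_ball(B, t) is assumed to return the points of tK intersected
    with L; next_prime_in_gap(N) returns a prime p with N < p < 4N/3."""
    points = enumerate_ball(B, t)                 # step 1: tK intersect L
    N = len(points)
    if N <= 1000:
        return B                                  # L itself is already sparse enough
    p = next_prime_in_gap(N)
    B_inv = np.linalg.inv(B)
    # step 2: coordinates of the enumerated points, reduced modulo p
    S = [tuple(np.rint(B_inv @ y).astype(np.int64) % p) for y in points]
    a = good_vector(S, p)                         # step 3
    return restrict_by_modular_form(B, a, p)      # step 4: index-p sublattice
```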
To implement the above construction efficiently and deterministically, we must overcome several obstacles. First, the number of lattice points N in tK ∩ L could be very large (since we have no control over t). Hence we cannot hope to compute N or the set S efficiently via lattice point enumeration. Second, the construction of the vector a is probabilistic (see Lemma 4.3): we must replace it with an explicit deterministic construction.
To overcome the first difficulty, we build the (K, t) sparsifier iteratively. In particular, we compute a sequence of sparsifiers L'_1, . . . , L'_k, where L'_{i+1} is a (K, c^i λ) sparsifier for L'_i for i ≥ 0, with L'_0 = L, λ = λ_1(K, L), and c > 1 a constant. We start the sparsification process at the minimum distance of L and only increase the sparsification distance by a constant factor at each step. Hence we are able to guarantee that the number of lattice points we process at each step is 2^{O(n)}. Furthermore, the geometric growth rate of the sparsification distance allows us to conclude that L'_i is in fact a (K, (c^{i+1}/(c − 1))λ) sparsifier for L. Hence, iterating the process roughly k ≈ ln(t/λ_1) times yields the final desired sparsifier.

For the second difficulty, i.e. the deterministic construction of a, the main idea is to use a dimension reduction procedure which allows a to be computed efficiently via exhaustive enumeration (i.e. trying all possible a's). Let N and S be as in the description above. Since N < p < 4N/3, an exhaustive search over Z_p^n requires a search over p^n ≤ (4N/3)^n possibilities, and the validity check (i.e. conditions (a) and (b)) for any particular a can be implemented in poly(N) time by simple counting. Since the existence of the desired a depends only on |S| and p (and not on n), if we can compute a linear projection π : Z_p^n → Z_p^{n−1} such that |π(S)| = |S|, then we can reduce the problem to finding a good a ∈ Z_p^{n−1} for π(S). Indeed, such a map π can be computed efficiently and deterministically as long as n ≥ 3. To see this, we first identify full-rank (n − 1)-dimensional projections with their kernels, i.e. lines in Z_p^n. From here, we note that distinct elements x, y ∈ S collide under the projection induced by a line l iff x − y ∈ l. Since the total number of lines spanned by differences of elements of S is at most |S|(|S| − 1)/2 < p(p − 1)/2, as long as there are at least p(p − 1)/2 lines in Z_p^n (which holds for n ≥ 3), we can compute the desired projection. Therefore, repeating the process n − 2 times, we are left with finding a good a ∈ Z_p^2, which we can do by trying all p + 1 < 4N/3 + 1 lines in Z_p^2. As discussed in the previous paragraph, we are able to guarantee that N = 2^{O(n)}, and hence the entire construction described above can be implemented in 2^{O(n)} time and space, as desired.
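A minimal sketch of the iterative scheme addressing the first difficulty (with c = 3, as in the algorithm of this section; sparsify_once is a hypothetical single-distance sparsification step, and the exact choice of the iteration count k is tracked more carefully in the text):

```python
import math

def iterative_sparsifier(L, t, lam, sparsify_once):
    """Build a (K, t)-type sparsifier by repeatedly sparsifying at geometrically
    increasing distances 3^i * lam, starting from the minimum distance lam of L.
    sparsify_once(L_i, dist) is a hypothetical (K, dist) sparsifier of L_i;
    each step only has to handle 2^{O(n)} lattice points."""
    if t <= lam:
        # for t below the minimum distance, L is already sparse at scale t
        return L
    k = int(math.log(t / lam) / math.log(3)) + 1   # roughly ln(t/lam) steps
    L_i = L
    for i in range(k):
        L_i = sparsify_once(L_i, (3 ** i) * lam)
    # the distances form a geometric series, so the accumulated additive error
    # stays within a constant factor of t, as argued above
    return L_i
```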

Algorithms
We begin with the deterministic algorithm implementing Lemma 4.3. We denote the set of lines in Z_p^n by Lines(Z_p^n). For a vector q ∈ Z_p^n, we denote its orthogonal complement by q^⊥ = {y ∈ Z_p^n : ⟨q, y⟩ ≡ 0 (mod p)}.

Algorithm 2: Good-Vector(S, p)
Input: S ⊆ Z_p^n with 0 ∈ S, an integer n ≥ 1, and a prime p satisfying 1000 < |S| < p < 4|S|/3.
Output: a ∈ Z_p^n satisfying the conditions of Lemma 4.3.
1: if n = 1, return 1
2: P ← I_n (the n × n identity)
3: for n_0 = n down to 3 do
4:   for all q ∈ Lines(Z_p^{n_0}) do
5:     Compute a basis B ∈ Z_p^{n_0 × (n_0 − 1)} of q^⊥, and for all distinct x, y ∈ P S check whether B^T x ≡ B^T y (mod p Z^{n_0 − 1}).
6:     If there are no collisions, set P ← B^T P and exit the inner loop; otherwise, continue.
7: for all q ∈ Lines(Z_p^2) do
8:   Pick a ∈ q \ {0}
9:   Compute zeros ← |{y ∈ P S : ⟨a, y⟩ ≡ 0 (mod p)}|
10:  Compute distinct ← |{⟨a, y⟩ (mod p) : y ∈ P S}|
11:  if zeros ≤ 6 and distinct ≥ (p + 2)/3 then
12:    return P^T a

For the desired application of the algorithm given below, the set S will in fact be represented implicitly. The main access method we require from S is a way to iterate over its elements. In the context of (1 + ε)-CVP, the enumeration over S will correspond to the Lattice-Enum algorithm.
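One way to realize this access pattern is to wrap the enumeration call as a restartable iterable that re-runs the enumerator on every pass, trading time for space; a tiny sketch, where lattice_enum_call is a hypothetical zero-argument closure around a Lattice-Enum invocation:

```python
def implicit_set(enumerate_call):
    """Wrap an enumeration procedure as a restartable iterable.  The set S is
    never stored: each pass re-runs the enumerator, exactly as described above."""
    class _Restartable:
        def __iter__(self):
            return iter(enumerate_call())
    return _Restartable()

# usage sketch:
# S = implicit_set(lambda: lattice_enum_call())
# for y in S: ...   # first pass runs the enumerator
# for y in S: ...   # second pass re-runs it from scratch
```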
Here we state the guarantees of the algorithm abstractly, in terms of the number of iterations required over S.

Lemma 5.1. Algorithm Good-Vector(S, p) returns a vector a ∈ Z_p^n satisfying the conditions of Lemma 4.3. Moreover, it iterates over S at most n p^3 times, performs at most poly(n, log p) p^4 arithmetic operations, and, not counting the space needed to iterate over S, uses poly(n, log p) space.

Proof.
Correctness: First, let us assume that during a loop iteration we find B ∈ Z_p^{n_0 × (n_0 − 1)} satisfying B^T x ≢ B^T y (mod p) for all distinct x, y ∈ P S (verified in line 5). This yields that the map x → B^T x is injective when restricted to P S, and hence |B^T P S| = |S|. Next, since B ∈ Z_p^{n_0 × (n_0 − 1)} and P ∈ Z_p^{n_0 × n}, P is set to B^T P ∈ Z_p^{(n_0 − 1) × n} for the next iteration, as needed.

Now we show that a valid projection matrix B^T is guaranteed to exist as long as n_0 ≥ 3. First, we claim that there exists q ∈ Lines(Z_p^{n_0}) such that for all distinct x, y ∈ P S, (q + x) ∩ (q + y) = ∅, i.e. all the lines passing through P S in the direction q are disjoint. A line q fails to satisfy this property if and only if q = Z_p(x − y) for some distinct x, y ∈ P S. The number of lines that can be generated in this way from P S is at most |P S|(|P S| − 1)/2 < p(p − 1)/2. Since Z_p^{n_0} contains at least p(p − 1)/2 lines for n_0 ≥ 3, we may pick q ∈ Lines(Z_p^{n_0}) with the desired property. Now let B ∈ Z_p^{n_0 × (n_0 − 1)} denote a basis satisfying q^⊥ = B Z_p^{n_0 − 1}. We claim that |B^T P S| = |P S|. Assume not; then there exist distinct x, y ∈ P S such that B^T x ≡ B^T y (mod p), i.e. x − y ∈ q, which contradicts our assumption on q. Therefore, the algorithm is indeed guaranteed to find a valid projection, as needed.
After the first for loop, we have constructed P ∈ Z_p^{2×n} satisfying |P S| = |S|, where |S| < p < 4|S|/3. By Lemma 4.3, there exists a ∈ Z_p^2 satisfying (1) and (2) for the set P S. Since (1) and (2) hold for any non-zero multiple of a, i.e. for any vector defining the same line as a, we may restrict the search to elements of Lines(Z_p^2). Therefore, by trying all p + 1 elements of Lines(Z_p^2), the algorithm is guaranteed to find a valid a for the set P S. Noting that ⟨a, P y⟩ ≡ ⟨P^T a, y⟩ (mod p), we get that P^T a satisfies (1) and (2) for the set S, as needed.
Runtime: For n = 1 the runtime is constant, so we assume n ≥ 2. The first for loop is executed n − 2 times. In each loop iteration we run through q ∈ Lines(Z_p^{n_0}) until we find one inducing a good projection matrix B. From the above analysis, we iterate through at most |S|(|S| − 1)/2 < p(p − 1)/2 elements q ∈ Lines(Z_p^{n_0}) before finding a good projection matrix. For each q, we build a basis matrix B for q^⊥, which can be done using poly(n, log p) arithmetic operations. Next, we check for collisions against each pair x, y ∈ P S, which can be done using O(|S|) = O(p) iterations over S. Therefore, at each loop iteration we enumerate over S at most p^3 times while performing only polynomial-time computations per element. Hence, the total number of operations (excluding the time needed to output the elements of S) is at most poly(n, log p) p^4.
For the last phase, we run through the elements of Lines(Z_p^2), where |Lines(Z_p^2)| = p + 1. The validity check for a ∈ Lines(Z_p^2) requires computing both of the quantities in (1) and (2). To compute |{y ∈ S : ⟨y, a⟩ ≡ 0 (mod p)}|, we iterate once over the set S and count the number of zero dot products. To compute |{⟨a, y⟩ (mod p) : y ∈ S}|, we iterate over all residues in Z_p; for each residue i ∈ Z_p, if we find y ∈ S satisfying ⟨a, y⟩ ≡ i (mod p), we increment our counter by one, and otherwise continue. Hence for any specific a ∈ Z_p^2 we iterate over the set S exactly p + 1 times, performing poly(n, log p) p^2 operations. Over the whole loop we therefore perform O(p^2) iterations over the set S and poly(n, log p) p^3 operations. In total, over the whole algorithm we iterate over the set S at most n p^3 times and perform at most poly(n, log p) p^4 operations. Furthermore, not counting the space needed to iterate over the set S, the space used by the algorithm is poly(n, log p).
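For illustration, the following Python transcription of Algorithm 2 follows the pseudocode directly; the loops over Lines(Z_p^{n_0}) are explicit, so it is only practical for small parameters (numpy is assumed).

```python
import itertools
import numpy as np

def lines(n, p):
    """Yield one representative per line (1-dim subspace) of Z_p^n:
    the vectors whose first nonzero coordinate equals 1."""
    for lead in range(n):
        for tail in itertools.product(range(p), repeat=n - lead - 1):
            yield np.array((0,) * lead + (1,) + tail, dtype=np.int64)

def orth_basis(q, p):
    """Columns form a basis of q^perp in Z_p^n, assuming the first nonzero
    entry of q equals 1 (as produced by lines())."""
    n = len(q)
    i = int(np.flatnonzero(q)[0])
    cols = []
    for j in range(n):
        if j == i:
            continue
        v = np.zeros(n, dtype=np.int64)
        v[j] = 1
        v[i] = (-int(q[j])) % p        # ensures <q, v> = 0 (mod p)
        cols.append(v)
    return np.stack(cols, axis=1)      # shape (n, n-1)

def good_vector(S, p):
    """Sketch of Algorithm Good-Vector: S is a list of distinct vectors in
    Z_p^n containing 0, with 1000 < |S| < p < 4|S|/3."""
    S = np.array(S, dtype=np.int64) % p
    n = S.shape[1]
    if n == 1:
        return np.array([1], dtype=np.int64)
    P = np.eye(n, dtype=np.int64)                  # current projection onto Z_p^{n0}
    for n0 in range(n, 2, -1):
        PS = (P @ S.T).T % p
        for q in lines(n0, p):
            B = orth_basis(q, p)
            proj = (PS @ B) % p
            if len({tuple(r) for r in proj}) == len(S):   # injective on P S
                P = (B.T @ P) % p
                break
        # a valid q always exists for n0 >= 3 (see the analysis above)
    PS = (P @ S.T).T % p                           # P S now lives in Z_p^2
    for q in lines(2, p):
        dots = (PS @ q) % p
        if np.count_nonzero(dots == 0) <= 6 and len(set(dots.tolist())) >= (p + 2) / 3:
            return (P.T @ q) % p                   # P^T a works for the original S
    raise RuntimeError("no good vector found (should not happen for valid inputs)")
```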
Before moving on to the derandomized sparsifier construction, we show a simple equivalence between building sparsifiers for symmetric and asymmetric norms.
Lemma 5.2. Let K be a γ-symmetric convex body, and let L be an n-dimensional lattice. Take L' ⊆ L, a full-dimensional sublattice. Then for t ≥ 0, if L' is a (K ∩ −K, t) sparsifier for L, then L' is a (K, t) sparsifier for L.
Proof. Let L' ⊆ L be a (K ∩ −K, t) sparsifier. Since K ∩ −K is 1-symmetric, by definition we have that G(t(K ∩ −K), L') = 2^{O(n)}. By Lemma A.1 and the γ-symmetry of K, we have that G(tK, L') ≤ N(tK, t(K ∩ −K)) · G(t(K ∩ −K), L') = 2^{O(n)} γ^{−n}. Since K ∩ −K ⊆ K, we note that ‖a‖_K ≤ ‖a‖_{K∩−K} for all a ∈ R^n. Now take x ∈ R^n, and take z ∈ CVP(K, L, x). By the guarantee on L', there exists y ∈ L' such that ‖y − z‖_{K∩−K} ≤ t, since z ∈ L. Next, using the triangle inequality, we have that ‖y − x‖_K ≤ ‖y − z‖_K + ‖z − x‖_K ≤ ‖y − z‖_{K∩−K} + d_K(L, x) ≤ t + d_K(L, x), as needed. Therefore, L' is a (K, t) sparsifier for L, as claimed.
From the above lemma, we see that it suffices to build lattice sparsifiers for symmetric convex bodies, i.e. to build a (K, t) sparsifier it suffices to build a (K ∩ −K, t) sparsifier for L.
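Operationally, the switch from K to K ∩ −K costs essentially nothing: a membership test for K ∩ −K is obtained by querying the test for K at both x and −x. A minimal sketch, using exact membership tests for simplicity:

```python
def symmetrize_membership(in_K):
    """Membership test for K ∩ -K from a membership test for K:
    x lies in K ∩ -K  iff  both x and -x lie in K."""
    def in_sym(x):
        return in_K(x) and in_K([-xi for xi in x])
    return in_sym
```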
We now show how to use the Good-Vector algorithm to get a completely deterministic Lattice Sparsifier construction. The correctness and runtime of the algorithm given below yields the proof of Theorem 3.2.

Proof of Theorem 3.2 (Lattice Sparsifier Construction).
Correctness: We show that the outputted lattice is a (K, t) sparsifier for L. By Lemma 5.2 it suffices to show that the algorithm outputs a (K ∩ −K, t) sparsifier, which justifies the switch in line 2 from K to K ∩ −K. In what follows, we therefore assume that K is symmetric.
Assume N > 1000. In this case we first compute a prime p with N < p < 4N/3, and a dual basis B^*_{i−1} for L^*_{i−1}.
Claim 2: |B^{*T}_{i−1} S (mod p Z^n)| = N.

Proof. Since |S| = N, if the claim were false there would exist distinct x, y ∈ S such that B^{*T}_{i−1} x ≡ B^{*T}_{i−1} y (mod p Z^n).

Given Claim 1, we will show that L_k is a (K, t) sparsifier for L. By our choice of k, the distance guarantee (condition (1)) holds, so it only remains to bound G(tK, L_k). By the previous bounds, the claim and Lemma 2.1 imply that G(tK, L_k) = 2^{O(n)} γ^{−n}, as needed. Therefore, the algorithm returns a valid (K, t) sparsifier for L.

Runtime:
The algorithm first runs Shortest-Vectors on K and L, which takes 2^{O(n)} poly(·) time and 2^n poly(·) space. Next, the for loop on line 6 iterates k = ⌊ln(2t/(3λ) + 1)/ln 3⌋ = poly(·) times. Each for loop iteration, indexed by i with 0 ≤ i ≤ k − 1, consists of computations over the set S ← Lattice-Enum(3^i(1 − ε)λK, L_i, ελr). For the intended implementation, we do not store the set S explicitly: every time the algorithm needs to iterate over S, we perform a fresh call to Lattice-Enum(3^i(1 − ε)λK, L_i, ελr). Furthermore, the algorithm only interacts with S by iterating over its elements, and hence this interface suffices. Now, at the loop iteration indexed by i, we do as follows:

Further Applications and Future Directions
Integer Programming. We explain how the techniques in this paper apply to Integer Programming (IP), i.e. the problem of deciding whether a polytope contains an integer point, and discuss some potential avenues for improving the complexity of IP. For a brief history, the first breakthrough works on IP are those of Lenstra [Len83] and Kannan [Kan87], where it was shown that any n-variable IP can be solved in 2^{O(n)} n^{2.5n} time (with polynomial dependence on the remaining parameters). Since then, progress on IP has been slow, though recent complexity improvements have been made: the dependence on n was reduced to n^{2n} [HK10], Õ(n)^{4n/3} [DPV11], and finally n^n [Dad12a]. Let K ⊆ R^n denote a polytope. To find an integer point inside K, the general outline of the above algorithms is as follows. Pick a center point c ∈ K, and attempt to "round" c to a point of Z^n inside K. If this fails, decompose the integer program on K into subproblems. Here, the decomposition is generally achieved by partitioning Z^n along shifts of some rational linear subspace H (often a hyperplane) and recursing on the integral shifts of H intersecting K.
In [Dad12b], an algorithm is given to perform the above rounding step in a "near optimal" manner. More precisely, the center c of K is chosen to be the center of gravity b of K (which can be estimated via random sampling), and rounding b to Z^n is done via an approximate CVP computation with target b, lattice Z^n, and norm ‖·‖_{K−b} (corresponding to scaling K about b). Here the AKS randomized sieve is used to perform the approximate CVP computation, which is efficient due to the fact that K − b is near-symmetric (see [MP00]). Let y ∈ Z^n be the returned (1 + ε)-CVP solution, and assume that y is correctly computed (which occurs with high probability). We can now examine the following cases. If y ∈ K, we have solved the IP. If ‖y − b‖_{K−b} > 1 + ε, then by the guarantee on y, for any z ∈ Z^n we have that ‖z − b‖_{K−b} > 1, and hence K contains no integer point. In the final case, where 1 < ‖y − b‖_{K−b} ≤ 1 + ε, we are in an essentially near-optimal situation for computing a "good" decomposition (using the so-called "flatness" theorems in the geometry of numbers). We note that with previous methods (i.e. using only symmetric norm or ℓ_2 techniques), the ratio of scalings between the integer-free and non-integer-free case was O(n) in the worst case, as opposed to (1 + ε)^2 here (where ε can be any constant ≤ 1).
With the techniques in this paper, the above rounding procedure can be made Las Vegas (i.e. no probability of error, randomized running time) by replacing the AKS sieve with our new DPV-based solver (randomness is still needed to estimate the center of gravity). This removes any probability of error in the above inferences, making the rounding algorithm easier to apply in the IP setting. We note that the geometry induced by the above rounding procedure is currently poorly understood, and very little of it is being exploited by IP algorithms. One hope for improving the complexity of IP with the above methods is that, with a strong rounding procedure as above, one may be able to avoid the worst-case bounds on the number of subproblems created at every recursion node. Currently, the main way to show that K admits a small decomposition into subproblems is to show that the covering radius of K (i.e. the minimum scaling such that every shift of K intersects Z^n) is large. Using the above techniques, we easily get that in the final case the covering radius is at least 1/(1 + ε) (since (1/(1 + ε))K + (ε/(1 + ε))b is integer free); however, in reality the covering radius could be much larger (yielding smaller decompositions). Here, an interesting direction would be to try to show that, on aggregate (over all subproblems), the covering radii of the nodes must grow as we go down the recursion tree. This would allow us to show that as we descend the recursion tree, the branching factor shrinks quickly, allowing us to get better bounds on the size of the recursion tree (which yields the dominant complexity term for current IP algorithms).
CVP under ℓ_∞. While the ideas presented here do not seem to be practically implementable in general (at least currently), there are special cases where the overhead incurred by our approach may be acceptable. One potential target is solving (1 + ε)-CVP under ℓ_∞. This is one of the most useful norms, and it is often approximated by ℓ_2 for lack of a better alternative.
As an example, in [BC07] the authors reduce the problem of computing machine-efficient polynomial approximations (i.e. having small coefficient sizes) of one-dimensional functions to CVP under ℓ_∞. The goal in this setting is to generate a high-quality approximation that is suitable for hardware implementation or for use in a software library, and hence spending considerable computational resources to generate it is justified.
We now explain why the ℓ_∞ version of our algorithms may be suitable for practical implementation (or at least efficient "heuristic" implementation). Most importantly, for ℓ_∞ the DPV lattice point enumerator is trivial to implement: to enumerate the lattice points in a cube, one simply enumerates the points in the outer containing ball and retains those in the cube. Second, if one is comfortable with randomization, the sparsifier can be constructed by adding a simple random modular form restriction to the base lattice. For provable guarantees, the main issue is that the modulus must be carefully chosen (see Section 4); however, it seems plausible that in practice an appropriate modulus may be guessed heuristically.
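For concreteness, given any Euclidean enumerator (here a hypothetical enumerate_ball routine, e.g. in the style of Fincke-Pohst or Schnorr-Euchner), the ℓ_∞ enumeration described above is a one-line filter:

```python
import math

def enumerate_cube(enumerate_ball, center, radius, n):
    """Lattice points y with ||y - center||_inf <= radius, obtained by
    enumerating the circumscribed Euclidean ball of radius sqrt(n)*radius
    and keeping only the points that land in the cube."""
    for y in enumerate_ball(center, math.sqrt(n) * radius):
        if max(abs(yi - ci) for yi, ci in zip(y, center)) <= radius:
            yield y
```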

A Covering Bound
In this section, we prove the basic covering bound stated in Lemma 2.1. For a set A ⊆ R^n, let int(A) denote the interior of A. For convex bodies A, B ⊆ R^n, we define the covering number N(A, B) = inf{|Λ| : Λ ⊆ R^n, A ⊆ Λ + B}, i.e. the minimum number of translates of B needed to cover A. We will require the following standard inequality on the covering number.

Lemma A.1. Let A, B ⊆ R^n be convex bodies with B symmetric (B = −B). Then N(A, B) ≤ vol(A + B/2)/vol(B/2).

Proof. Let T ⊆ A be any maximal set of points such that for all distinct x, y ∈ T, (x + B/2) ∩ (y + B/2) = ∅. We claim that A ⊆ T + B. For any z ∈ A, note by maximality of T that there exists x ∈ T such that (z + B/2) ∩ (x + B/2) ≠ ∅. Therefore z ∈ x + B/2 − B/2 = x + B, as needed. Furthermore, the sets {x + B/2 : x ∈ T} are pairwise disjoint and contained in A + B/2, and hence |T| vol(B/2) ≤ vol(A + B/2). Rearranging this inequality yields the lemma.