

1 Introduction

The Fourier transform is one of the most important and widely used computational tasks. It is a foundational tool commonly used to analyze the spectral representation of signals. Its applications include audio/video processing, radar and GPS systems, wireless communications, medical imaging and spectroscopy, the processing of seismic data, and many other tasks [Bhaskaran and Konstantinides 1995, Chan and Koo 2008, Heiskala and Terry 2001, Nishimura 2010, Van Nee and Coenen 1991, Yilmaz 2008]. Hence, faster algorithms for computing the Fourier transform can benefit a wide range of applications. The fastest algorithm to compute the Fourier transform today is the Fast Fourier Transform (FFT) algorithm [Cooley and Tukey 1965]. Invented in 1965 by Cooley and Tukey, the FFT computes the Fourier transform of a signal of size n in O(n log n) time. This near-linear time of the FFT made it one of the most influential algorithms in recent history [Cipra 2000]. However, the emergence of big data problems, in which the processed datasets can exceed terabytes [Schadt et al. 2010], has rendered the FFT’s runtime too slow. Furthermore, in many domains (e.g., medical imaging, computational photography), data acquisition is costly or cumbersome, and hence one may be unable to collect enough measurements to compute the FFT. These scenarios motivate the need for sublinear time algorithms that compute the Fourier transform faster than the FFT algorithm and use only a subset of the input data required by the FFT.

The key insight to enable sublinear Fourier transform algorithms is to exploit the inherent sparsity of natural signals. In many applications, most of the Fourier coefficients of the signal are small or equal to zero, i.e., the output of the Fourier transform is sparse. For such signals, one does not need to compute the entire output of the Fourier transform; it is sufficient to only compute the large frequency coefficients. Fourier sparsity is in fact very common as it appears in audio/video, medical imaging, computational learning theory, analysis of Boolean functions, similarity search in databases, spectrum sensing, datacenter monitoring, etc. [Agrawal et al. 1993, Chandrakasan et al. 1996, Kahn et al. 1988, Lin et al. 2011, Lustig et al. 2008, Mueen et al. 2010].

This book pursues the above insight in the context of both algorithms and systems in order to answer the following two core questions.

How can we leverage sparsity to design faster Fourier transform algorithms?

and

How do we build software and hardware systems that adapt our algorithms to various application domains in order to deliver practical gains?

In this book, we will answer the above questions by presenting the Sparse Fourier Transform algorithms: a family of sublinear algorithms for computing the Fourier transform of frequency-sparse signals faster than FFT and using a small subset of the input samples. The book also describes architectures for leveraging sparsity to build practical systems that solve key problems in wireless networks, mobile systems, computer graphics, medical imaging, biochemistry, and digital circuits.

The book is divided into two parts: theoretical algorithms and systems. The theoretical part introduces the algorithmic foundations of the Sparse Fourier Transform which encompass two main axes.

Optimizing the Runtime Complexity. The book presents Sparse Fourier Transform algorithms with the lowest runtime complexity known to date. For exactly sparse signals, we present an algorithm that runs in O(k log n) time where k is the number of large frequency coefficients (i.e., sparsity) and n is the signal size. This algorithm is optimal if the FFT algorithm is optimal. For approximately sparse signals, which we will formally define in Section 1.1.1, we present an algorithm that runs in O(k log n log (n/k)) time, which is a log n factor away from optimal. Both algorithms improve over FFT for any sparsity k = o(n) and have small “Big-Oh” constants. As a result, they are often faster than FFT in practice and run quickly on very large data sets.

Optimizing the Sampling Complexity. The book presents Sparse Fourier Transform algorithms with the optimal sampling complexity for average case inputs, i.e., these algorithms use the minimum number of input data samples that would produce a correct answer. Hence, they reduce the acquisition cost, bandwidth, and I/O overhead needed to collect, transfer, and store the data. Specifically, these algorithms require only O(k) samples for exactly sparse signals and O(k log n) samples for approximately sparse signals while keeping the same runtime complexity of the aforementioned worst case algorithms. Furthermore, the algorithms naturally extend to multi-dimensional Sparse Fourier Transforms, without incurring much overhead.

The simplicity and practicality of the Sparse Fourier Transform algorithms enabled the design of six new systems that address major challenges in the areas of wireless networks and mobile systems, computer graphics, medical imaging, biochemistry, and digital circuits. Table 1.1 summarizes the systems developed using the Sparse Fourier Transform and the innovation in each system.

Leveraging the Sparse Fourier Transform to build practical systems, however, is not always straightforward. The Sparse Fourier Transform is a framework of algorithms and techniques for analyzing sparse signals. It inherently depends on the sparsity of the signal, which changes from one application to another. Thus, incorporating domain knowledge from each application allows us to deliver much more significant gains. First, different applications exhibit different levels and structure of sparsity. For example, in wireless networks, occupied frequencies in the wireless spectrum are not randomly distributed. They are instead clustered based on transmission regulations set by the FCC. Hence, incorporating the structure of the sparsity into the algorithm and system is essential for achieving good performance gains. In addition, in some applications such as medical imaging, the sparsity of the signal is not apparent, which requires developing methods to sparsify the signal before being able to use the Sparse Fourier Transform. In other applications, sparsity appears only in part of the system, and thus we have to redesign the entire system in order to propagate the gains of the Sparse Fourier Transform to other stages and improve the overall system performance. Hence, adapting the Sparse Fourier Transform to practical applications requires a deep understanding of the application domain and a customization of the algorithms to the system requirements, which we will explore throughout this book.

The next two sections provide an overview of the theoretical algorithms and the software and hardware systems presented in this book.

1.1 Sparse Fourier Transform Algorithms

The existence of Fourier transform algorithms faster than FFT is one of the central questions in the theory of algorithms. The past two decades have witnessed significant advances in sublinear Fourier algorithms for sparse signals. The first such algorithm (for the Hadamard transform) appeared in Kushilevitz and Mansour [1991] (building on Goldreich and Levin [1989]). Since then, several sublinear algorithms for complex Fourier inputs have been discovered [Akavia et al. 2003, Akavia 2010, Gilbert et al. 2002, Gilbert et al. 2005a, Iwen 2010b, Mansour 1992]. The main value of these algorithms is that they outperform FFT’s runtime for sparse signals. For very sparse signals, the fastest algorithm is due to Gilbert et al. [2005a] and has O(k log^c(n) log(n/k)) runtime, for some c > 2. This algorithm outperforms FFT for any k smaller than Θ(n/log^a n) for some a > 1.

Table 1.1 Practical systems developed using the Sparse Fourier Transform


Despite this impressive progress, the prior work suffers from two main limitations. First, none of the existing algorithms improves over FFT’s runtime for the whole range of sparse signals, i.e., k = o(n). Second, the aforementioned algorithms are quite complex, and suffer from large “Big-Oh” constants that lead to long runtime in practice. For example, an implementation of the algorithm in Gilbert et al. [2005a] can only outperform FFT for extremely sparse signals where k/n ≤ 3.2 × 10⁻⁵ [Iwen et al. 2007]. The algorithms in Gilbert et al. [2002] and Iwen [2010b] require an even sparser signal (i.e., larger n and smaller k). As a result, it has been difficult to incorporate those algorithms into practical systems.

In this section, we give an overview of our Sparse Fourier Transform algorithms, which address the above limitations of prior work. We start by formalizing the problem. We then describe the algorithmic framework underlying all of our Sparse Fourier Transform algorithms. Once we establish this framework, we describe the different techniques that can be used at each step of a Sparse Fourier Transform algorithm. We finally present the various algorithms that result from using these techniques.

1.1.1 Problem Statement

Consider a signal x of size n whose discrete Fourier transform x̂ is defined by:

x̂(f) = (1/n) · Σ_{t=0}^{n−1} x(t) e^{−j2πtf/n},  for f = 0, 1, …, n − 1.

The signal x̂ is exactly k-sparse if it has exactly k non-zero frequency coefficients while the remaining n − k coefficients are zero. In this case, the goal of the Sparse Fourier Transform is to exactly recover x̂ by finding the frequency positions f and values x̂(f) of the k non-zero coefficients. For general signals, the Sparse Fourier Transform computes a k-sparse approximation x̂′ of x̂. The best k-sparse approximation of x̂ can be obtained by setting all but the largest k coefficients of x̂ to 0. The goal is to compute an approximation x̂′ in which the error in approximating x̂ is bounded by the error of the best k-sparse approximation. Formally, x̂′ has to satisfy the following ℓ2/ℓ2 guarantee:

‖x̂ − x̂′‖₂ ≤ C · min_{y is k-sparse} ‖x̂ − y‖₂,

where C is some approximation factor and the minimization is over exactly k-sparse signals.
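The right-hand side of the guarantee is easy to compute explicitly. The following minimal NumPy sketch (illustrative values, not from the book) builds the best k-sparse approximation by keeping the k largest-magnitude coefficients; the resulting ℓ2 error is exactly the norm of the discarded tail.

```python
import numpy as np

def best_k_sparse(x_hat, k):
    """Best k-sparse approximation: keep the k largest-magnitude coefficients."""
    out = np.zeros_like(x_hat)
    idx = np.argsort(np.abs(x_hat))[-k:]   # positions of the k largest coefficients
    out[idx] = x_hat[idx]
    return out

# A spectrum with two large coefficients and a small tail elsewhere.
x_hat = np.array([0.01, 5.0, 0.02, 0.0, 3.0, 0.01], dtype=complex)
approx = best_k_sparse(x_hat, k=2)

# The l2 error of the best 2-sparse approximation is the norm of the discarded tail.
err = np.linalg.norm(x_hat - approx)
```

Any output x̂′ satisfying the guarantee has error within the factor C of this `err`.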

In the remainder of this section, we will describe the algorithmic framework and techniques in terms of exactly sparse signals. However, the full details and extensions to the general case of approximately sparse signals can be found in Chapters 3, 4, and 5.

1.1.2 Algorithmic Framework

The Sparse Fourier Transform has three main components: Frequency Bucketization, Frequency Estimation, and Collision Resolution.

1.1.2.1 Frequency Bucketization

The Sparse Fourier Transform starts by hashing the frequency coefficients of x̂ into buckets such that the value of each bucket is the sum of the values of the frequency coefficients that hash into it. Since x̂ is sparse, many buckets will be empty and can simply be discarded. The algorithm then focuses on the non-empty buckets and computes the positions and values of the large frequency coefficients in those buckets in what we call the frequency estimation step.

The process of frequency bucketization is achieved through the use of filters. A filter suppresses and zeroes out frequency coefficients that hash outside the bucket while passing through frequency coefficients that hash into the bucket. The simplest example of this is the aliasing filter. Recall the following basic property of the Fourier transform: subsampling in the time domain causes aliasing in the frequency domain. Formally, let b be a subsampled version of x, i.e., b(i) = x(i · p), where p is a subsampling factor that divides n. Then b̂, the Fourier transform of b, is an aliased version of x̂, i.e.,

b̂(i) = Σ_{j=0}^{p−1} x̂(i + j · n/p).   (1.3)

Thus, an aliasing filter is a form of bucketization in which frequencies equally spaced by an interval B = n/p hash to the same bucket and there are B such buckets, as shown in Figure 1.1. The hashing function resulting from this bucketization can be written as: h(f) = f mod n/p. Further, the value in each bucket is the sum of the values of only the frequency coefficients that hash to the bucket as can be seen from Equation 1.3.

For the above aliasing filter, the buckets can be computed efficiently using a B-point FFT which takes O(B log B) time. We set B = O(k) and hence bucketization takes only O(k log k) time and uses only O(B) = O(k) of the input samples of x. In Section 1.1.3, we describe additional types of filters that are used by our Sparse Fourier Transform algorithms to perform frequency bucketization.
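The aliasing bucketization is easy to verify numerically. The toy sketch below (sizes chosen to match Figure 1.1: n = 12, p = 3, B = 4; the signal itself is illustrative) subsamples a signal and checks that a B-point FFT yields buckets that sum the coefficients hashing to them.

```python
import numpy as np

n, p = 12, 3
B = n // p                         # number of buckets

# A 2-sparse spectrum: frequencies 1 and 5 hash to the same bucket (1 mod 4 == 5 mod 4).
x_hat = np.zeros(n, dtype=complex)
x_hat[1], x_hat[5] = 7.0, 3.0
x = np.fft.ifft(x_hat)             # the corresponding time-domain signal

# Aliasing filter: subsample by p, then take a B-point FFT.
# The factor p compensates for NumPy's unnormalized FFT convention, so that
# buckets[i] = sum of x_hat[f] over all f with f mod B == i.
buckets = np.fft.fft(x[::p]) * p
```

Here bucket 1 holds the collided sum 7 + 3 while the other three buckets are (numerically) empty, using only B = 4 of the 12 input samples.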

Figure 1.1 Bucketization using the aliasing filter. Subsampling a signal by 3× in the time domain results in aliasing of the spectrum. Specifically, the 12 frequencies alias into 4 buckets. Frequencies that are equally spaced by 4 (shown with the same color) end up in the same bucket.

1.1.2.2 Frequency Estimation

In this step, the Sparse Fourier Transform estimates the positions and values of the non-zero frequency coefficients that created the energy in each of the non-empty buckets. Since x̂ is sparse, many of the non-empty buckets will likely have a single non-zero frequency coefficient hashing into them, and only a small number will have a collision of multiple non-zero coefficients. We first focus on buckets with a single non-zero frequency coefficient and estimate the value x̂(f) and the position f of this non-zero frequency.

In the absence of a collision, the value of the non-zero frequency coefficient is the value of the bucket it hashes to, since all other frequencies that hash into the bucket have zero values. Hence, we can easily find the value of the non-zero frequency coefficient in a bucket. However, we still do not know its frequency position f, since frequency bucketization mapped multiple frequencies to the same bucket. The simplest way to compute f is to leverage the phase-rotation property of the Fourier transform, which states that a shift in the time domain translates into a phase rotation in the frequency domain [Lyons 1996]. Specifically, we perform the process of bucketization again, after a circular shift of x by τ samples. Since a shift in time translates into a phase rotation in the frequency domain, the value of the bucket changes from b̂(i) = x̂(f) to b̂^(τ)(i) = x̂(f) · e^{jΔϕ}, where the phase rotation is

Δϕ = 2πfτ/n.

Hence, using the change in the phase of the bucket, we can estimate the position of the non-zero frequency coefficient in the bucket. Note that the phase wraps around every 2π and so the shift τ should be 1 to avoid the phase wrapping for large values of f.1 Since there are k non-zero frequency coefficients, this frequency estimation can be done efficiently using at most O(k) computations. In Section 1.1.3, we describe additional techniques that are used by our Sparse Fourier Transform algorithms to estimate the values and positions of non-zero frequency coefficients.
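The phase-rotation trick can be sketched in a few lines of NumPy (a toy example with one isolated coefficient; all parameters are illustrative): bucketize once, bucketize again after a circular shift by τ = 1, and read the position off the phase difference.

```python
import numpy as np

n, p, tau = 12, 3, 1
B = n // p

x_hat = np.zeros(n, dtype=complex)
x_hat[5] = 3.0                          # one coefficient, isolated in bucket 5 mod 4 == 1
x = np.fft.ifft(x_hat)

def bucketize(sig):
    return np.fft.fft(sig[::p]) * p     # aliasing-filter buckets

b0 = bucketize(x)                       # original bucketization
b1 = bucketize(np.roll(x, -tau))        # bucketization of x(t + tau)

i = int(np.argmax(np.abs(b0)))          # the non-empty bucket
dphi = np.angle(b1[i] / b0[i])          # phase rotation: 2*pi*f*tau/n (mod 2*pi)
f = round(dphi * n / (2 * np.pi * tau)) % n   # recovered position
value = b0[i]                           # recovered value
```

With τ = 1, the map f → Δϕ is one-to-one over the n possible positions, so the `% n` resolves the phase wrapping exactly.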

1.1.2.3 Collision Resolution

Non-zero frequency coefficients that are isolated in their own bucket can be properly estimated, as described above. However, when non-zero frequencies collide in the same bucket, we are unable to estimate them correctly. Hence, to recover the full frequency spectrum, we need to resolve the collisions.

To resolve collisions, we need to repeat the frequency bucketization in a manner that ensures that the same non-zero frequencies do not collide with each other every time. The manner in which we achieve this depends on the type of filter used for bucketization. For example, with the aliasing filters described above, we can bucketize the spectrum multiple times using aliasing filters with co-prime sampling rates. This changes the hashing function from h(f) = f mod n/p to h′(f) = f mod n/p′, where p and p′ are co-prime. Co-prime aliasing filters guarantee that any two frequencies that collide in one bucketization will not collide in the other bucketization. To better understand this point, consider the example in Figure 1.2. The first time we bucketize, we use an aliasing filter that subsamples the time signal by a factor of 3. In this case, the two frequencies labeled in red and blue collide in a bucket whereas the frequency labeled in green does not collide, as shown in the figure. The second time we bucketize, we use an aliasing filter that subsamples by 4. This time the blue and green frequencies collide whereas the red frequency does not collide. Now we can resolve collisions by iterating between the two bucketizations. For example, we can estimate the green frequency from the first bucketization, where it does not collide.2 We subtract the green frequency from the colliding bucket in the second bucketization to obtain the blue frequency. We then go back to the first bucketization and subtract the blue frequency from the bucket where it collides to obtain the red frequency.

Figure 1.2 Resolving collisions with co-prime aliasing. Using two co-prime aliasing filters, we ensure the frequencies that collide in one filter will not collide in the second. For example, frequencies 5 and 9 collide in the first filter. But frequency 5 does not collide in the second which allows us to estimate it and subtract it.

Iterating between the different bucketizations, estimating frequencies from buckets where they do not collide and subtracting them from buckets where they do collide, ensures that each non-zero frequency will be isolated in its own bucket during some iteration of the algorithm. This allows us to estimate each non-zero frequency correctly. Thus, at the end of the collision resolution step, we have recovered all non-zero frequencies and hence have successfully computed the Fourier transform of the signal.
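The whole peeling procedure can be sketched as follows. This is a toy implementation under simplifying assumptions (exact sparsity, τ = 1 shifts, and the crude collision test |b̂^(τ)/b̂| ≈ 1, which holds only for a lone coefficient; the algorithms in later chapters use more robust checks). It runs the scenario of Figure 1.2 with co-prime subsampling rates 3 and 4, taking the third ("green") frequency to be 6 so that it collides with 9 under the second filter, as the figure describes.

```python
import numpy as np

n = 12
x_hat = np.zeros(n, dtype=complex)
x_hat[5], x_hat[9], x_hat[6] = 2.0, 4.0, 1.0   # "red", "blue", and "green"
x = np.fft.ifft(x_hat)

def peel(x, n, ps=(3, 4), rounds=3):
    recovered = np.zeros(n, dtype=complex)
    for _ in range(rounds):
        for p in ps:                                  # alternate co-prime filters
            residual = x - np.fft.ifft(recovered)     # subtract what is known so far
            b0 = np.fft.fft(residual[::p]) * p        # buckets, no shift
            b1 = np.fft.fft(np.roll(residual, -1)[::p]) * p  # buckets, shift tau = 1
            for i in range(n // p):
                if abs(b0[i]) < 1e-9:
                    continue                          # empty bucket
                ratio = b1[i] / b0[i]
                if not np.isclose(abs(ratio), 1.0):
                    continue                          # crude collision test: skip
                f = round(np.angle(ratio) * n / (2 * np.pi)) % n
                recovered[f] += b0[i]                 # isolated: estimate and record
    return recovered

result = peel(x, n)   # recovers all three coefficients
```

On the first pass the 5/9 collision is skipped, 6 is recovered and subtracted, and the second filter then isolates 9 and 5.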

1.1.3 Algorithmic Techniques

The previous section established a general framework for computing the Sparse Fourier Transform and gave one example of a technique that can be used in each step of this framework. In this section, we describe a more comprehensive list of techniques that are used by different Sparse Fourier Transform algorithms.

Figure 1.3 Filters used for frequency bucketization shown in the time (upper row) and the frequency (lower row) domain.

1.1.3.1 Frequency Bucketization Techniques

As described earlier, bucketization is done using filters. The choice of the filter can severely affect the running time of a Sparse Fourier Transform algorithm. Ideally, we would like a filter that uses a small number of input time samples to hash the frequency coefficients into buckets. For example, the rectangular or boxcar filter shown in Figure 1.3(a) uses only B time samples to hash the frequency coefficients into B buckets, which is ideal in the time domain. However, in the frequency domain, it is equal to the sinc function,3 which decays polynomially as shown in Figure 1.3(a). This polynomial decay means that the frequency coefficients “leak” between buckets, i.e., the value of a bucket is no longer the sum of the n/B coefficients that hash to it; it is a weighted sum of all n frequency coefficients. Hence, a non-zero frequency coefficient can never be isolated in a bucket and estimated correctly. On the other hand, a rectangular filter is ideal in the frequency domain since it has no leakage, as shown in Figure 1.3(b). However, it is a sinc function in the time domain and hence requires using all n input samples, which take at least Ω(n) time to process.

In this book, we identify several efficient filters that use a small number of samples in the time domain and have minimal or no leakage in the frequency domain, and as a result can be used to perform fast bucketization.

Flat Window Filter. This filter looks very similar to a rectangle or box in the frequency domain while still using a small number of time samples. An example of such a filter is a Gaussian function multiplied by a sinc function in the time domain, which is shown in Figure 1.3(c). Since the Gaussian function decays exponentially fast both in time and frequency, the leakage between buckets in this filter is negligible and can be ignored. Similarly, the filter is concentrated in time and hence uses only a small number of time samples. The resulting hash function of such a filter can be written as h(f) = ⌈f/(n/B)⌉. The Gaussian is only one example of such functions; one can also use a Dolph-Chebyshev or a Kaiser-Bessel function, as we describe in more detail in Chapter 2.
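A quick numerical illustration of such a flat window (with arbitrary parameters chosen for the demo, not the book's): start from the ideal frequency-domain box, whose time signal is a slowly decaying sinc, and damp it with a Gaussian. The result is concentrated in time, yet its frequency response stays nearly flat over one bucket with negligible leakage a full bucket away.

```python
import numpy as np

n, B = 256, 8
W = n // B                          # bucket width in the frequency domain

t = np.arange(n)
t_signed = np.where(t < n // 2, t, t - n)    # signed circular index

# Ideal bucket filter: a frequency-domain box of width W...
box = (np.abs(t_signed) <= W // 2).astype(float)
sinc_time = np.fft.ifft(box)        # ...whose time signal is a slowly decaying sinc

# Flat window: damp the sinc with a Gaussian (sigma is an illustrative choice),
# concentrating it in time while only slightly blurring the box in frequency.
sigma = 16.0
w = sinc_time * np.exp(-t_signed**2 / (2 * sigma**2))
w_hat = np.fft.fft(w)

passband = abs(w_hat[0])            # response at the center of the bucket
leakage = abs(w_hat[W])             # response one full bucket away
```

The passband response stays near 1 while the out-of-bucket leakage drops by many orders of magnitude, which is what lets a coefficient be isolated in its bucket.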

Aliasing Filter. We presented this filter in the previous section. It is simply a spike-train of period p in time domain since it subsamples the time domain signal by a factor of p, as shown in Figure 1.3(d). It is also a spike-train in the frequency domain since it sums up frequency coefficients that are equally spaced by n/p. The aliasing filter is ideal both in time and in frequency since it only uses B time samples and has zero leakage between buckets. Unfortunately, we will later show that the aliasing filter does not lend itself to powerful randomization techniques and, as a result, can only be shown to work for average case input signals as opposed to worst case.

Fourier Projection Filter. This filter is a generalization of the aliasing filter to higher dimensions. It is a direct result of the Fourier Slice Projection Theorem, which states that taking the Fourier transform of samples along a slice (e.g., a 1D line in a 2D time signal or a 2D plane in a 3D signal) results in the orthogonal projection of the frequency spectrum onto the corresponding slice in the Fourier domain. For example, if we sample a row in a 2D time-domain signal and then take its 1D Fourier transform, each output point of the Fourier transform will be the sum of the coefficients along one column, as shown in Figure 1.4(a). Alternatively, if we sample a column, each output point will be the sum of the coefficients along one row, as shown in Figure 1.4(b). This also holds for any discrete line, as shown in Figure 1.4(c,d). Thus, if we sample a line with slope equal to 1 or 2, we get a projection in the frequency domain along lines with slopes equal to −1 or −1/2. Note that discrete lines wrap around and hence can generate non-uniform sampling, as shown in Figure 1.4(d).
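The projection property is easy to check numerically. In the sketch below (an illustrative 8×8 example), the 1D FFT of a sampled row equals, up to NumPy's unnormalized-transform factor, the sum of the 2D spectrum along each column, and a sampled column projects along rows.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8))      # a 2D time-domain signal
X_hat = np.fft.fft2(X)

# FFT of the first row: projects the spectrum onto columns.
# (The factor 8 compensates for NumPy's unnormalized transform.)
row_proj = np.fft.fft(X[0, :]) * 8   # equals X_hat.sum(axis=0)

# FFT of the first column: projects the spectrum onto rows.
col_proj = np.fft.fft(X[:, 0]) * 8   # equals X_hat.sum(axis=1)
```

Each projection costs only an 8-point FFT on 8 samples, yet each of its outputs is a bucket summing 8 frequency coefficients, exactly like a 1D aliasing bucket.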

Figure 1.4 Fourier projection filters. The top row of figures shows the sampled lines of the time signal and the bottom row of figures shows how the spectrum is projected. Frequencies of the same color are projected onto the same point: (a) row projection, (b) column projection, (c) diagonal projection, and (d) line with slope = 2.

1.1.3.2 Frequency Estimation Techniques

Recall that the goal of the frequency estimation step is to compute the positions f and values (f) of the non-zero frequency coefficients that have been hashed to non-empty buckets. In what follows, we will present the techniques used by the Sparse Fourier Transform algorithms to perform frequency estimation.

Time-Shift/Phase-Rotation Approach. In this approach, which we have previously described in Section 1.1.2, we repeat the bucketization after shifting the time signal x by a circular shift of τ samples. For buckets with a single isolated non-zero frequency coefficient, this results in a phase rotation Δϕ = 2πfτ/n of the complex value of the coefficient, which we can use to compute the frequency position f. Since the frequency is isolated, its value can be immediately computed from the value of the bucket, as described in the previous section. This is an extremely efficient approach since it requires constant O(1) time to estimate each non-zero frequency. For noisy signals, we repeat this process for multiple different time shifts to average out the noise. The details of how we average the noise to ensure robust recovery can be found in Chapter 4.

Voting Approach. This is a non-iterative streaming approach where we repeat the bucketization a few times while changing the hashing function. For each bucketization, the non-empty buckets vote for all the frequency coefficients that hash into those buckets. Non-zero frequency coefficients will get a vote from every bucketization with high probability. On the other hand, zero and negligible frequencies are unlikely to get many votes. Hence, after repeating the bucketization and this voting procedure a few times, the k non-zero frequency coefficients will have the largest number of votes and can be identified. In Chapter 3, we will show that this approach is more resilient to signal noise, although this robustness comes at the cost of increased runtime, which we bound there.
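The voting idea can be illustrated with a simplified sketch (using co-prime aliasing filters in place of the randomized bucketizations of Chapter 3, and a noiseless 3-sparse signal with illustrative positions): every frequency that hashes into a non-empty bucket receives a vote, and only the true coefficients collect a vote from every bucketization.

```python
import numpy as np

n, k = 60, 3
x_hat = np.zeros(n, dtype=complex)
x_hat[7], x_hat[23], x_hat[42] = 5.0, 3.0, 4.0   # the k non-zero coefficients
x = np.fft.ifft(x_hat)

votes = np.zeros(n, dtype=int)
for p in (3, 4, 5):                    # three co-prime aliasing bucketizations
    B = n // p
    b = np.fft.fft(x[::p]) * p         # bucket i sums x_hat[f] over f mod B == i
    occupied = np.abs(b) > 1e-9
    for f in range(n):
        if occupied[f % B]:
            votes[f] += 1              # f votes if its bucket is non-empty

candidates = sorted(np.argsort(votes)[-k:].tolist())   # the k most-voted positions
```

Here only the three true positions gather a vote from all three bucketizations, so thresholding the vote counts identifies them without any iterative subtraction.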

1.1.3.3 Collision Resolution Techniques

Collision resolution is achieved by repeating the bucketization in a manner that changes the way frequency coefficients hash into buckets so that the same coefficients do not continue to collide.

We can randomize the hashing function if we can perform a random permutation of the positions of the coefficients in x̂ before repeating the bucketization. This can be achieved by rearranging the indices of the input time signal x and rescaling it in the following manner:4

x′(i) = x(σi mod n) · e^{−j2πσβi/n},

where σ is a random integer invertible modulo n and β is a random integer. This results in a random linear mapping of the positions of the frequency coefficients: f → σ^{−1}f + β mod n. The proof of this can be found in Chapter 2. While this randomization is a very powerful collision resolution technique, it only works with the flat window filter and does not work with aliasing filters, as we will show in Chapter 3. To resolve collisions using aliasing filters, we need to use co-prime subsampling, i.e., subsample the signal using different co-prime subsampling rates, as explained in the previous section.
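The permutation can be checked numerically. The sketch below uses one concrete way to realize the rearrangement, x′(i) = x(σi mod n) · e^{−j2πσβi/n}, which is consistent with the stated mapping f → σ^{−1}f + β (the exact form used by the algorithms appears in Chapter 2), and verifies that the permuted spectrum at position f equals the original spectrum at σ^{−1}f + β mod n.

```python
import numpy as np

n = 16
sigma, beta = 5, 3                   # sigma must be invertible modulo n (odd works here)
rng = np.random.default_rng(1)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x_hat = np.fft.fft(x)

# Rearrange the indices and rescale (modulate) the time signal.
i = np.arange(n)
x_perm = x[(sigma * i) % n] * np.exp(-2j * np.pi * sigma * beta * i / n)
x_perm_hat = np.fft.fft(x_perm)

# The coefficient positions undergo the linear map f -> sigma^{-1} f + beta (mod n).
sigma_inv = pow(sigma, -1, n)        # modular inverse (Python 3.8+)
mapped = x_hat[(sigma_inv * i + beta) % n]
```

Because σ is invertible mod n, the map is a bijection on positions, so two fixed frequencies collide under a random (σ, β) only with small probability.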

1.1.4 Algorithmic Results

In this section, we will present the theoretical results of the Sparse Fourier Transform algorithms that we developed. All the algorithms follow the framework described in Section 1.1.2. However, they use different techniques from Section 1.1.3 and as a result achieve different runtime and sampling complexities as well as different guarantees. Most of the theoretical algorithms presented in this book are randomized and succeed with a large constant probability. Table 1.2 summarizes the theoretical Sparse Fourier Transform algorithms presented in this book along with their complexity results, guarantees, and techniques used.

1.2 Applications of the Sparse Fourier Transform

The second half of this book focuses on developing software and hardware systems that harness the Sparse Fourier Transform to solve practical problems. The book presents the design and implementation of six new systems that solve challenges in the areas of wireless networks, mobile systems, computer graphics, medical imaging, biochemistry, and digital circuits. All six systems are prototyped and evaluated in accordance with the standards of each application’s field.

Adapting the Sparse Fourier Transform to a particular application requires a careful design and deep knowledge of the application domain. The Sparse Fourier Transform is a framework of algorithms and techniques for analyzing sparse signals. It is not a single algorithm that can be plugged directly into an application. To apply it to a problem, one has to deeply understand the structure of the sparsity in the application of interest, and customize the framework to the observed sparsity. More generally, real applications add new constraints that are application-dependent, and can deviate from our mathematical abstraction. Below we highlight some of the common themes that arise in mapping the Sparse Fourier Transform to practical systems.

Structure of Sparsity. In most applications, the occupied frequency coefficients are not randomly distributed; they follow a specific structure. For example, as mentioned earlier, in wireless communication, the occupied frequencies in the wireless spectrum are clustered. In computational photography, the occupied frequencies are more likely to be present in part of the Fourier spectrum. On the other hand, in an application like GPS, the occupied coefficients can be anywhere. Understanding the structure of sparsity in each application allows us to design a system that leverages it to achieve the best performance gains.

Level of Sparsity. A main difference between theory and practice is that the Sparse Fourier Transform algorithms operate in the discrete domain. In real and natural signals, however, the frequency coefficients do not necessarily lie on discrete grid points. Simply rounding off the locations of the coefficients to the nearest grid points can create bad artifacts that significantly reduce the sparsity of the signal and damage the quality of the results. This problem occurs in applications like medical imaging and light-field photography. Hence, we have to design systems that can sparsify the signals and recover their original sparsity in the continuous domain in order to address the mismatch between the theoretical Sparse Fourier Transform framework and the scenarios in which it is applied.

Table 1.2 Theoretical Sparse Fourier Transform algorithms


a. The sampling complexity of algorithms SFT 1.0–4.1 is the same as their runtime complexity.

System Requirements. Different applications have different goals. For example, in medical imaging, the acquisition cost is high since it requires the patient to spend more time in the MRI machine while processing the captured data afterward is not a problem. Hence, the goal would be to minimize the number of input samples that need to be collected even if it requires additional processing time. On the other hand, in applications like GPS, collecting the samples is very cheap. However, processing them consumes a lot of power. Hence, the goal would be to minimize computational load even if all the input samples are used. Finally, in applications like spectrum acquisition and NMR spectroscopy, the goal would be to minimize both the runtime and the sampling. Hence, understanding the system is essential for adapting the Sparse Fourier Transform techniques to satisfy the requirements of each application.

System Architecture. In some applications, the applicability of the Sparse Fourier Transform to a system architecture might not even be apparent. For example, the Fourier transform of the GPS signal is not sparse and hence applying the Sparse Fourier Transform directly to GPS is not feasible. However, we observed that the GPS receiver correlates its signal with a special code transmitted by the satellite, and the output of the correlation is sparse because it spikes only when the code and the signal are aligned. Hence, we have to map this indirect form of sparsity to the Sparse Fourier Transform framework. We also need to ensure that the gains of the Sparse Fourier Transform are not bottlenecked by other components in the system. Thus, careful system design is essential for propagating these gains along the system pipeline and improving the overall system performance.

Signal to Noise Ratio. In practice, the gains of the Sparse Fourier Transform are constrained by the noise level in the system. It is essential to perform enough sampling and processing to bring the signal above the noise floor of the system. For example, although the sparsity that appears in a GPS system is extremely high (≈0.025%), GPS signals are −30 dB to −20 dB below the noise floor, which requires additional computation and upper bounds the performance gains. Thus, understanding the noise level and structure is essential in designing any system that uses the Sparse Fourier Transform.

The following subsections summarize the systems presented in this book and how they benefit from the Sparse Fourier Transform.

1.2.1 Spectrum Sensing and Decoding

The ever-increasing demand for wireless connectivity has led to a spectrum shortage which prompted the FCC to release multiple new bands for dynamic spectrum sharing. This is part of a bigger vision to dynamically share much of the currently under-utilized spectrum, creating GHz-wide spectrum superhighways that can be shared by different types of wireless services. However, a major technical obstacle precluding this vision is the need for receivers that can capture and sense GHz of spectrum in real-time in order to quickly identify unoccupied bands. Such receivers consume a lot of power because they need to sample the wireless signal at GigaSample/s.

To overcome this challenge, we leverage the fact that the wireless spectrum is sparsely utilized and use the Sparse Fourier Transform to build a receiver that can capture and recover GHz of spectrum in real-time, while sampling only at MegaSample/s. We use the aliasing filters and phase rotation techniques described in Section 1.1.3 to build the entire receiver using only cheap, low power hardware similar to what is used today by WiFi and LTE in every mobile phone. We implement our design using three software radios, each sampling at 50 MegaSample/s, and produce a device that captures 0.9 GHz, i.e., 6× larger digital bandwidth than the three software radios combined. The details of the system design, implementation, and evaluation can be found in Chapter 7.
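As a toy illustration of the aliasing-filter bucketization mentioned above (with illustrative sizes n, p, and sparsity level, not the parameters of the actual receiver), subsampling a length-n signal by a factor p and taking a small (n/p)-point FFT makes each output bin the sum of p equally spaced bins of the original spectrum:

```python
import numpy as np

n, p = 1024, 4      # illustrative signal size and subsampling factor
B = n // p          # number of buckets
rng = np.random.default_rng(0)

# Build a spectrally sparse signal: 5 large coefficients, the rest zero.
x_hat = np.zeros(n, dtype=complex)
spikes = rng.choice(n, size=5, replace=False)
x_hat[spikes] = rng.standard_normal(5) + 1j * rng.standard_normal(5)
x = np.fft.ifft(x_hat)

# Bucketize: subsample in time, then take a small B-point FFT.
buckets = p * np.fft.fft(x[::p])

# Each bucket k equals the sum of the original bins k, k+B, k+2B, ...
expected = x_hat.reshape(p, B).sum(axis=0)
assert np.allclose(buckets, expected)
```

Because the spectrum is sparse, most buckets contain at most one large coefficient, so a cheap B-point FFT over a low-rate sample stream suffices to locate the occupied frequencies.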

1.2.2 GPS Receivers

GPS is one of the most widely used wireless systems. In order to calculate its position, a GPS receiver has to lock on the satellite signals by aligning the received signal with each satellite’s code. This process requires heavy computation, which consumes both time and power. As a result, running GPS on a phone can quickly drain the battery.

We introduced a new GPS receiver that minimizes the computation required to lock onto the satellite's signal, thereby reducing localization delay and power consumption. Specifically, we observed that GPS synchronization can be reduced to a sparse computation problem by leveraging the fact that only the correct alignment between the received GPS signal and the satellite code causes their correlation to spike. We built on this insight to develop a GPS receiver that exploits the Sparse Fourier Transform to quickly lock onto the satellite signal and identify its location. We prototyped this design using software radios. Our empirical results with real satellite signals demonstrate that the new GPS receiver reduces the computational overhead by 2× to 6×, which translates into a significant reduction in localization delay and power consumption. Chapter 8 presents the analysis and evaluation of the design in detail.
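The sparsity this design exploits can be seen in a small simulation: circularly correlating a received signal with a pseudorandom ±1 code (computed in O(n log n) time via the FFT convolution theorem) is near zero everywhere except for a spike at the true alignment. The code length, shift, and noise level below are illustrative, not real GPS C/A-code parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1023                                    # illustrative code length
code = rng.choice([-1.0, 1.0], size=n)      # pseudorandom +/-1 code
shift = 300                                 # unknown alignment to find
received = np.roll(code, shift) + 0.1 * rng.standard_normal(n)

# Circular correlation via the FFT convolution theorem.
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real

# The output is sparse: one spike, at the correct alignment.
assert int(np.argmax(corr)) == shift
```

Since only the spike location is needed, the full correlation output never has to be computed, which is exactly the setting where the Sparse Fourier Transform saves work.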

1.2.3 Light Field Photography

Light field photography is an active area in graphics where a 2D array of cameras or lenslets is used to capture the 4D light field of a scene.5 This enables a user to extract 3D depth, refocus the scene to any plane, and change the angle from which they view the scene. These capabilities are essential for Virtual Reality (VR) systems as well as for post-processing of images and videos. Capturing light fields, however, is costly since it requires many cameras or lenslets to sample the scene from different viewpoints.

Thus, our goal is to reduce the cost of light field capture by using only a few of the cameras in the 2D array and reconstructing the images from the missing cameras. To do this, we leverage the fact that the 4D Fourier transform of a light field is sparse, and we use the Sparse Fourier Transform to subsample the input and reduce the number of cameras. Once we have computed the Fourier transform, we can invert it to recover the images from the missing cameras that were not sampled, and hence recover the full 2D array. However, as explained earlier, natural signals like light fields are not very sparse in the discrete domain. To address this issue, we developed a light field reconstruction system that optimizes for sparsity in the continuous Fourier domain. This improves the quality of reconstruction while reducing the required number of cameras by 6× to 10×. The light field reconstruction algorithm, along with the reconstruction results, can be found in Chapter 9.

1.2.4 Magnetic Resonance Spectroscopy (MRS)

One of the next frontiers in MRI is Magnetic Resonance Spectroscopy (MRS). MRS enables zooming in and detecting the biochemical content of each voxel in the brain, which can be used to discover disease biomarkers that allow early detection of cancer, autism, and Alzheimer's. MRS tests, however, take a prohibitively long time, requiring the patient to stay in the MRI machine for more than 2 hr. MRS images also suffer from significant clutter and artifacts that can mask some disease biomarkers. These two challenges have been a major barrier to adopting these tests in clinical diagnosis. To overcome this barrier, we demonstrated that processing MRS data using the Sparse Fourier Transform enhances image quality by suppressing artifacts and reduces the time the patient has to spend in the machine by 3× (e.g., from 2 hr to 40 min). The details of our MRS algorithm, experiments, and results can be found in Chapter 10.

1.2.5 Nuclear Magnetic Resonance (NMR)

NMR is a technique that provides the detailed structural properties of chemical compounds, yielding the 3D structure of complex proteins and nucleic acids. However, collecting NMR measurements is a time-consuming and costly process that can take from several days up to weeks. This prevents researchers from running the high-dimensional NMR experiments needed for analyzing more complex protein structures. NMR uses spectral analysis to find the resonance frequencies that correspond to the coupling between different atoms, and NMR spectra are sparse. Hence, using the Sparse Fourier Transform, we show how to generate the NMR spectra by subsampling the NMR measurements. We customized the Sparse Fourier Transform for multi-dimensional NMR and showed that it can reduce the time of an NMR experiment by 16×. Chapter 11 describes the Sparse Fourier Transform techniques used for processing our NMR experiments, along with our recovery results.

1.2.6 The Sparse Fourier Transform Chip

Traditionally, hardware implementations of the FFT have been limited in size to a few thousand points. This is because large FFTs require a huge I/O bandwidth, consume a lot of power, and occupy a large silicon area. The Sparse Fourier Transform naturally addresses these issues due to its low computational and memory cost, enabling very large Fourier transforms. We built the largest Fourier transform VLSI chip to date, computing a nearly million-point Fourier transform while consuming 40× less power than prior FFT VLSI implementations. The hardware architecture and benchmarking of the fabricated chip can be found in Appendix G.

1.3 Book Overview

This book is divided into two parts. Part I describes the theoretical foundations of the Sparse Fourier Transform. It presents the Sparse Fourier Transform algorithms in detail and provides the analysis and proofs of the guarantees of these algorithms. Chapter 2 presents the notation and basic definitions that will be used in this part of the book. Chapters 3 and 4 focus on reducing the runtime complexity of the Sparse Fourier Transform algorithms while Chapter 5 focuses on optimizing the sample complexity. Finally, in Chapter 6, we present numerical simulations to evaluate the performance of the Sparse Fourier Transform.

Part II describes the applications and systems designed using the Sparse Fourier Transform. Chapter 7 describes the design and implementation of a wireless receiver that can capture GHz of spectrum in real time. Chapter 8 presents a GPS receiver design with lower computational overhead. Chapter 9 describes a light field photography reconstruction algorithm that achieves high quality image recovery. Chapter 10 shows how the Sparse Fourier Transform can be used to reduce the time a patient spends in an MRI machine and generate clearer images. Chapter 11 presents the application of the Sparse Fourier Transform to Nuclear Magnetic Resonance in biochemistry.

Finally, in Chapter 12, we conclude and discuss future algorithms and applications of the Sparse Fourier Transform.

1. Note that for approximately sparse signals, multiple time shifts are used to average the noise and ensure robust estimation as we show in Chapter 4.

2. In Chapter 5, we will present techniques to detect collisions. However, accurately detecting collisions is not always necessary. Since the spectrum is sparse, the number of collisions will be very small, and errors caused by assuming that a colliding non-zero frequency is isolated can be corrected in subsequent iterations of the algorithm.

3. The sinc function is defined as: sinc(x) = sin(x)/x.

4. Note that only time samples that will be used by the bucketization filters need to be computed.

5. The four dimensions of a light field correspond to the 2D pixels in each image captured by a camera in the 2D camera array.
