The Self-Taught Computer Scientist - Cory Althoff - Page 24
Quadratic Time
After log-linear, the next most efficient time complexity is quadratic time. An algorithm runs in quadratic time when its performance is directly proportional to the square of the problem's size. In big O notation, you express a quadratic algorithm as O(n**2).
Here is an example of an algorithm with quadratic complexity:
numbers = [1, 2, 3, 4, 5]
for i in numbers:
    for j in numbers:
        x = i * j
        print(x)
This algorithm multiplies every number in a list by every number in the list (including itself), stores each result in a variable, and prints it.
In this case, n is the size of your numbers list. The equation for this algorithm's time complexity is as follows:
f(n) = 1 + n * n * (1 + 1)
The (1 + 1) in the equation comes from the multiplication and print statement. You repeat the multiplication and print statement n * n times with your two nested for
loops. You can simplify the equation to this:
f(n) = 1 + (1 + 1) * n**2
which is the same as the following:
f(n) = 1 + 2 * n**2
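You can check this equation empirically. The following sketch (an illustrative helper, not from the book) instruments the nested loops with a counter that adds one step for creating the list, plus one step each for the multiplication and the print, and confirms the total matches 2 * n**2 + 1:

```python
def count_steps(n):
    """Count the steps the nested-loop algorithm performs for a list of size n."""
    steps = 1                      # one step: creating the list
    numbers = list(range(1, n + 1))
    for i in numbers:
        for j in numbers:
            steps += 1             # one step: the multiplication x = i * j
            steps += 1             # one step: the print statement
    return steps

print(count_steps(5))   # 51, which equals 1 + 2 * 5**2
```

For n = 5, the counter returns 51, exactly 1 + 2 * 5**2, matching the equation above.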
As you may have guessed, the n**2 part of the equation overshadows the rest, so in big O notation, the equation is as follows:
O(n**2)
When you graph an algorithm with quadratic complexity, the number of steps increases sharply as the problem's size increases (Figure 1.5).
Figure 1.5 Quadratic complexity
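You can get a feel for this sharp growth without a graph by tabulating the step count f(n) = 2 * n**2 + 1 for a few problem sizes (a quick illustrative calculation, not from the book):

```python
# Tabulate the step count f(n) = 2 * n**2 + 1 for increasing problem sizes.
for n in (10, 100, 1000):
    print(n, 2 * n**2 + 1)
# 10 201
# 100 20001
# 1000 2000001
```

Multiplying the problem size by 10 multiplies the number of steps by roughly 100, which is why quadratic algorithms become impractical for large inputs.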
As a general rule, if your algorithm contains two nested loops each running from 1 to n (or from 0 to n − 1), its time complexity will be at least O(n**2). Many sorting algorithms, such as insertion sort and bubble sort (which you will learn about later in the book), run in quadratic time.
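As a preview, here is a minimal bubble sort sketch (an assumed implementation; the book covers sorting in detail later). Its two nested loops each run up to n times, which is why its worst-case time complexity is O(n**2):

```python
def bubble_sort(values):
    """Sort a list in place by repeatedly swapping out-of-order neighbors."""
    n = len(values)
    for i in range(n):                 # outer loop: up to n passes
        for j in range(n - i - 1):     # inner loop: compare adjacent pairs
            if values[j] > values[j + 1]:
                # swap the out-of-order neighbors
                values[j], values[j + 1] = values[j + 1], values[j]
    return values

print(bubble_sort([5, 3, 1, 4, 2]))   # [1, 2, 3, 4, 5]
```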