Chapter 9: Algorithm Design Paradigms
In the previous chapters, we have explored a wide variety of algorithms for solving specific problems, such as sorting, searching, graph traversal, and string processing. While these algorithms are diverse in their applications and implementations, many of them share common underlying design principles or paradigms.
In this chapter, we will examine three fundamental algorithm design paradigms: divide and conquer, greedy algorithms, and dynamic programming. These paradigms provide general approaches to problem-solving that can be adapted to solve a vast array of problems. By understanding these paradigms, we can gain insight into the structure of algorithms and develop new algorithms for problems we encounter.
Divide and Conquer
The divide and conquer paradigm is a powerful and widely used approach for designing efficient algorithms. The basic idea is to break a problem down into smaller subproblems, solve these subproblems recursively, and then combine their solutions to solve the original problem.
A typical divide and conquer algorithm consists of three steps:
1. Divide: If the problem is small enough to be solved directly, solve it. Otherwise, divide the problem into smaller subproblems.
2. Conquer: Recursively solve each subproblem.
3. Combine: Combine the solutions to the subproblems to obtain the solution to the original problem.
The effectiveness of divide and conquer algorithms stems from their ability to reduce the problem size by a constant factor at each level of recursion, which makes the depth of the recursion only logarithmic in the input size. This often leads to efficient running times such as O(log n) or O(n log n).
Mergesort: A Classic Divide and Conquer Algorithm
One of the most well-known examples of a divide and conquer algorithm is mergesort, which we studied in detail in Chapter 2. Recall that mergesort sorts an array by dividing it into two halves, recursively sorting each half, and then merging the sorted halves.
Here's a high-level description of the mergesort algorithm:
function mergesort(array):
    if array.length <= 1:
        return array
    else:
        mid = floor(array.length / 2)
        left = mergesort(array[0:mid])
        right = mergesort(array[mid:])
        return merge(left, right)
The merge function combines two sorted arrays into a single sorted array:
function merge(left, right):
    result = []
    while left is not empty and right is not empty:
        if left[0] <= right[0]:
            append left[0] to result
            remove left[0] from left
        else:
            append right[0] to result
            remove right[0] from right
    append remaining elements of left to result
    append remaining elements of right to result
    return result
The divide and conquer strategy allows mergesort to achieve a worst-case running time of O(n log n), making it one of the most efficient general-purpose sorting algorithms.
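The pseudocode above translates almost line for line into runnable Python. This is a sketch for illustration, not a production sort; it returns a new list rather than sorting in place:

```python
def mergesort(array):
    """Sort a list by splitting it, recursively sorting each half, and merging."""
    if len(array) <= 1:
        return array
    mid = len(array) // 2
    left = mergesort(array[:mid])
    right = mergesort(array[mid:])
    return merge(left, right)

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])   # at most one of these still has elements
    result.extend(right[j:])
    return result
```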
The Master Theorem
The running time of many divide and conquer algorithms can be analyzed using the Master Theorem, which provides a general formula for recurrences of the form:
T(n) = aT(n/b) + f(n)
Here, a is the number of recursive calls, n/b is the size of each subproblem, and f(n) is the cost of dividing the problem and combining the results.
The Master Theorem states that the solution to this recurrence is:
1. If f(n) = O(n^(log_b(a) - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b(a))).
2. If f(n) = Θ(n^(log_b(a))), then T(n) = Θ(n^(log_b(a)) · log n).
3. If f(n) = Ω(n^(log_b(a) + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
For mergesort, we have a = 2 (two recursive calls), b = 2 (each subproblem is half the size), and f(n) = Θ(n) (the merge step takes linear time). Since log_2(2) = 1, we have f(n) = Θ(n^(log_2(2))), so we are in case 2 of the Master Theorem, and the running time is Θ(n log n).
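We can sanity-check this conclusion numerically by evaluating the mergesort recurrence directly. This is a small sketch that takes T(1) = 0 (no merge work on a single element); for powers of two the recurrence then solves exactly to n · log2(n):

```python
import math

def T(n):
    """Mergesort recurrence T(n) = 2*T(n/2) + n, with base case T(1) = 0."""
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n

# For n a power of two, T(n) equals n * log2(n) exactly:
for n in (16, 64, 256):
    print(n, T(n), n * int(math.log2(n)))
```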
Other Divide and Conquer Algorithms
Many other algorithms can be designed using the divide and conquer paradigm. Some notable examples include:

Quicksort: Like mergesort, quicksort is a divide and conquer sorting algorithm. It partitions the array around a pivot element, recursively sorts the subarrays to the left and right of the pivot, and concatenates the results.

Binary search: The binary search algorithm for finding an element in a sorted array can be viewed as a divide and conquer algorithm. It compares the target value to the middle element of the array and recursively searches the left or right half, depending on the comparison.

Karatsuba multiplication: This is a divide and conquer algorithm for multiplying two n-digit numbers in O(n^(log_2 3)) ≈ O(n^1.585) time, which is faster than the traditional O(n^2) algorithm.

Strassen's matrix multiplication: Strassen's algorithm multiplies two n × n matrices in O(n^log_2(7)) ≈ O(n^2.807) time, improving on the naive O(n^3) algorithm.
These examples demonstrate the versatility and power of the divide and conquer paradigm for designing efficient algorithms.
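As a small illustration of the paradigm, the binary search mentioned above can be written recursively in a few lines. This sketch returns an index of the target or -1; production code would usually prefer an iterative loop to avoid recursion overhead:

```python
def binary_search(array, target, lo=0, hi=None):
    """Recursively search a sorted list; return an index of target, or -1."""
    if hi is None:
        hi = len(array)
    if lo >= hi:                 # empty range: target is absent
        return -1
    mid = (lo + hi) // 2
    if array[mid] == target:
        return mid
    elif array[mid] < target:    # conquer the right half
        return binary_search(array, target, mid + 1, hi)
    else:                        # conquer the left half
        return binary_search(array, target, lo, mid)
```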
Greedy Algorithms
Greedy algorithms are a class of algorithms that make the locally optimal choice at each stage with the hope of finding a globally optimal solution. They are often used for optimization problems where a solution is built incrementally by making a series of choices, each of which seems the best at the moment.
The key characteristics of greedy algorithms are:
 They make a locally optimal choice at each step, without worrying about future consequences.
 They assume that a locally optimal choice will lead to a globally optimal solution.
 They never reconsider previous choices.
Greedy algorithms are often simple to understand and implement, and they can be very efficient. However, they do not always produce the optimal solution, as the locally optimal choices may not lead to the globally optimal solution.
Huffman Coding: A Greedy Algorithm for Data Compression
Huffman coding, which we encountered in Chapter 5, is a greedy algorithm for constructing an optimal prefix-free code for compressing data. The algorithm builds a binary tree bottom-up, assigning shorter bit sequences to more frequent characters.
Here's a high-level description of the Huffman coding algorithm:
1. Create a leaf node for each character and add it to a priority queue.
2. While there is more than one node in the queue:
   a. Remove the two nodes of lowest frequency from the queue.
   b. Create a new internal node with these two nodes as children and with frequency equal to the sum of the two nodes' frequencies.
   c. Add the new node to the priority queue.
3. The remaining node is the root node, and the tree is complete.
The greedy choice is to always merge the two lowest-frequency nodes. This locally optimal choice leads to a globally optimal prefix-free code.
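The steps above can be sketched in Python, using heapq as the priority queue. The identifiers here are illustrative; a tie-breaking counter keeps heap entries comparable when frequencies are equal:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code for text; returns a dict {character: bit string}."""
    freq = Counter(text)
    # Heap entries are (frequency, tie-breaker, tree); a tree is either a
    # character (leaf) or a (left, right) tuple (internal node).
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # two lowest-frequency nodes
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (t1, t2)))
        counter += 1
    codes = {}
    def assign(tree, prefix):
        if isinstance(tree, tuple):       # internal node: recurse
            assign(tree[0], prefix + "0")
            assign(tree[1], prefix + "1")
        else:                             # leaf: record the code
            codes[tree] = prefix or "0"   # a lone character gets "0"
    assign(heap[0][2], "")
    return codes
```

Note that tie-breaking in the priority queue can change which codes are assigned, but the code lengths, and hence the compressed size, remain optimal.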
Here's an example of Huffman coding in action:
Suppose we want to compress the string "AAAABBBCCCDDDEEE", which has the following character frequencies:
A: 4
B: 3
C: 3
D: 3
E: 3
Here's an optimal Huffman tree for this example (ties in the priority queue can produce different, but equally good, trees):
           (16)
          /    \
       (7)      (9)
       /  \     /  \
    A(4) B(3) C(3)  (6)
                    /  \
                 D(3)  E(3)
The resulting Huffman codes are:
A: 00
B: 01
C: 10
D: 110
E: 111
So the original string "AAAABBBCCCDDDEEE" would be encoded as:
00000000010101101010110110110111111111
Huffman coding achieves compression by assigning shorter codes to more frequent symbols. The codes are prefix-free, meaning no code is a prefix of any other, allowing unambiguous decoding.
LZW Compression
Lempel-Ziv-Welch (LZW) compression is a dictionary-based compression algorithm that builds a dictionary (or codebook) of strings while compressing the input. LZW is widely used in file compression utilities and was used in the GIF image format.
The key idea behind LZW is to replace strings of characters with single codes. It reads the input character by character, replacing variable-length strings with fixed-length codes. The longer the string, the more space is saved by encoding it as a single number.
Here's a step-by-step description of how LZW compression works:
1. Initialize the dictionary to contain all single-character strings.
2. Find the longest string W in the dictionary that matches the current input.
3. Emit the dictionary index for W to the output and remove W from the input.
4. Add W followed by the next symbol in the input to the dictionary.
5. Go to Step 2.
Let's consider an example. Suppose we want to compress the string "ABABABABA" using LZW.
1. Initialize the dictionary to contain "A" and "B".
2. The longest match is "A". Emit its index (0) and remove it from the input. The dictionary now contains "A", "B", and "AB".
3. The longest match is "B". Emit its index (1) and remove it from the input. The dictionary now contains "A", "B", "AB", and "BA".
4. The longest match is "AB". Emit its index (2) and remove it from the input. The dictionary now contains "A", "B", "AB", "BA", and "ABA".
5. The longest match is "ABA". Emit its index (4) and remove it from the input. The dictionary now contains "A", "B", "AB", "BA", "ABA", and "ABAB".
6. The longest match is "BA". Emit its index (3). The input is now empty.
The compressed representation of "ABABABABA" is thus the sequence of indices [0, 1, 2, 4, 3], which requires fewer bits to represent than the original ASCII representation.
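The compression procedure above can be sketched in Python. This is illustrative only; real implementations also bound the dictionary size and pack the emitted indices into bit codes:

```python
def lzw_compress(text):
    """LZW compression: returns a list of dictionary indices."""
    # Step 1: the dictionary starts with all single-character strings.
    dictionary = {ch: i for i, ch in enumerate(sorted(set(text)))}
    result = []
    w = ""
    for ch in text:
        if w + ch in dictionary:
            w += ch                               # extend the current match
        else:
            result.append(dictionary[w])          # emit code for longest match
            dictionary[w + ch] = len(dictionary)  # learn the new string
            w = ch
    if w:
        result.append(dictionary[w])              # flush the final match
    return result
```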
Decompression works similarly, but in reverse:
1. Initialize the dictionary to contain all single-character strings.
2. Read a code X from the input.
3. Output the string for X from the dictionary.
4. If there was a previous code, add the previous string concatenated with the first character of the string for X to the dictionary.
5. Go to Step 2.
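Decompression can likewise be sketched in Python. One subtlety: a code can refer to the dictionary entry that is currently being built (the Step-4 entry the compressor added one step earlier), in which case the string must be reconstructed from the previous output. The alphabet parameter is an assumption of this sketch and must match the compressor's initial dictionary:

```python
def lzw_decompress(codes, alphabet):
    """Inverse of LZW compression; alphabet lists the initial single characters."""
    dictionary = {i: ch for i, ch in enumerate(alphabet)}
    prev = dictionary[codes[0]]
    out = [prev]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:                          # code refers to the entry being built
            entry = prev + prev[0]
        out.append(entry)
        # Step 4: previous string plus first character of the current one.
        dictionary[len(dictionary)] = prev + entry[0]
        prev = entry
    return "".join(out)
```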
LZW compression is simple and fast, making it a good choice for many applications. However, it does have some limitations. The dictionary can grow quite large, consuming a significant amount of memory, and many implementations simply reset it once it fills up, which can reduce the compression ratio on inputs whose statistics change over time.
Despite these limitations, LZW remains a popular and effective compression algorithm, particularly for applications where speed is more important than achieving the highest possible compression ratios.
Conclusion
In this chapter, we explored several important string-processing algorithms, including string sorts, tries, substring search, regular expressions, and data compression. These algorithms form the basis for many real-world applications and are essential tools for any programmer working with textual data.
We began by discussing string sorts, which are optimized sorting algorithms that take advantage of the special properties of strings. Key-indexed counting, LSD radix sort, and MSD radix sort provide efficient methods for sorting strings based on their individual characters.
Next, we examined tries, a treelike data structure for storing and retrieving strings. Tries enable fast prefix matching and are commonly used in applications such as autocomplete and IP routing tables.
Substring search algorithms, such as the Knuth-Morris-Pratt and Boyer-Moore algorithms, allow us to efficiently search for patterns within larger strings. These algorithms have numerous applications in text editing, computational biology, and information retrieval.
Regular expressions provide a powerful and flexible way to describe string patterns. We discussed the basic syntax of regular expressions and how they can be used for pattern matching and string manipulation in various programming languages and tools.
Finally, we explored data compression algorithms, which reduce the size of data by exploiting redundancy and patterns within the input. We covered run-length encoding, Huffman coding, and Lempel-Ziv-Welch compression, each of which has its own strengths and applications.
Understanding these string-processing algorithms and data structures is crucial for anyone working with textual data. As the amount of unstructured data continues to grow, the ability to efficiently manipulate, search, and compress strings will only become more valuable. By mastering the techniques covered in this chapter, you will be well-equipped to tackle a wide range of string-processing challenges in your own projects and applications.