$ cat ~/blog/midterm-1-review-algorithms.mdx

Midterm 1 Study Guide: Algorithms

February 23, 2025 · 7 min

1. Data Structures Review

Fundamental Structures

  • Arrays & Linked Lists
    • Arrays: Contiguous memory structures; constant-time access.
    • Linked Lists: Nodes connected by pointers; dynamic size but sequential access.

Heaps

  • Definition: A complete binary tree satisfying the heap property:
    • Min-Heap: Every parent node is ≤ its children.
    • Max-Heap: Every parent node is ≥ its children.
  • Key Operations:
    • Insertion: Add at the end, then “reheap” (sift up), in O(log n) time.
    • Deletion (e.g., Delete-Min): Replace the root with the last element and “sift down”, also O(log n).
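These two operations can be sketched on a 0-indexed, array-backed min-heap, where the children of index i live at 2i + 1 and 2i + 2 (a minimal sketch, not from the notes; `heapInsert` and `heapDeleteMin` are illustrative names):

```c
#include <assert.h>

static void swapInts(int *a, int *b) { int t = *a; *a = *b; *b = t; }

// Insertion: place the new key at the end, then sift it up in O(log n).
void heapInsert(int heap[], int *n, int key) {
    heap[(*n)++] = key;
    int i = *n - 1;
    while (i > 0 && heap[(i - 1) / 2] > heap[i]) {   // parent larger: violates min-heap
        swapInts(&heap[(i - 1) / 2], &heap[i]);
        i = (i - 1) / 2;
    }
}

// Delete-Min: replace the root with the last element, then sift down in O(log n).
int heapDeleteMin(int heap[], int *n) {
    int min = heap[0];
    heap[0] = heap[--(*n)];
    int i = 0;
    for (;;) {
        int smallest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < *n && heap[l] < heap[smallest]) smallest = l;
        if (r < *n && heap[r] < heap[smallest]) smallest = r;
        if (smallest == i) break;                     // heap property restored
        swapInts(&heap[i], &heap[smallest]);
        i = smallest;
    }
    return min;
}
```

Repeated Delete-Min calls yield the keys in ascending order, which is exactly how heapsort works.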

Graph Data Structures

  • Graph Representations (detailed in Section 6 below):
    • Adjacency Matrix: Uses a 2D array; Θ(|V|²) space.
    • Adjacency List: Uses an array of lists; Θ(|V| + |E|) space.
    • Here |V| and |E| are the numbers of vertices and edges in the graph.

2. Math in Algorithms

Sets and Notation

  • Sets: A collection of distinguishable elements.
    • Example: A = {1, 4, 7, 2}; membership is denoted x ∈ A.
    • Operations: Union A ∪ B, Intersection A ∩ B, Difference A − B.

Factorials, Permutations, and Combinations

  • Factorial:
    n! = n × (n − 1) × ⋯ × 1, with 0! = 1.
  • Permutations: Number of ordered arrangements of n items: P(n) = n!.
  • Combinations: Number of unordered selections of k items from n: C(n, k) = n! / (k!(n − k)!).
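The combination formula can be evaluated multiplicatively so intermediate values stay small instead of computing three full factorials (a sketch, not from the notes; `binom` is an illustrative name):

```c
#include <assert.h>

// C(n, k) = n! / (k! (n-k)!), computed as prod_{i=1}^{k} (n-k+i) / i.
// The running product after step i equals C(n-k+i, i), so each division is exact.
long long binom(int n, int k) {
    if (k < 0 || k > n) return 0;
    if (k > n - k) k = n - k;                 // symmetry: C(n, k) = C(n, n-k)
    long long result = 1;
    for (int i = 1; i <= k; i++)
        result = result * (n - k + i) / i;
    return result;
}
```

For example, binom(5, 2) multiplies 4 then 5 and divides by 1 then 2, giving 10.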

Logarithms

  • Definition:
    log_b y = x ⟺ b^x = y.
  • Properties:
    • log_a(xy) = log_a x + log_a y
    • log_a(n/m) = log_a n − log_a m
    • log_a(n^r) = r · log_a n

Summations and Recurrences

  • Summations:
    ∑_{i=1}^{n} i = n(n + 1)/2
  • Recurrence Relations:
    • Example: Fibonacci sequence
      F(n) = F(n − 1) + F(n − 2), F(1) = F(2) = 1.
    • Towers of Hanoi:
      T(n) = 2T(n − 1) + 1.
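Both recurrences translate directly into recursive functions, which is a quick way to sanity-check small values by hand (an illustrative sketch; `fib` and `hanoiMoves` are made-up names, and the Hanoi recurrence has the closed form T(n) = 2^n − 1):

```c
#include <assert.h>

// Fibonacci by the recurrence F(n) = F(n-1) + F(n-2), F(1) = F(2) = 1.
long long fib(int n) {
    return (n <= 2) ? 1 : fib(n - 1) + fib(n - 2);
}

// Towers of Hanoi move count: T(n) = 2*T(n-1) + 1 with T(0) = 0.
long long hanoiMoves(int n) {
    return (n == 0) ? 0 : 2 * hanoiMoves(n - 1) + 1;
}
```

Note that evaluating fib this way takes exponential time, which itself is a classic exam point about naive recursion.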

Proof Techniques

  • Proof by Contradiction: Assume the theorem is false and derive a contradiction.
  • Proof by Induction: Prove a base case and show if the statement holds for n-1, then it holds for n.

3. Computational Complexity

Asymptotic Notation

  • Big-O Notation (O) (Upper Bound):
    T(n) ∈ O(f(n)) ⟺ there exist c > 0 and n₀ such that T(n) ≤ c·f(n) for all n > n₀.
  • Big-Omega (Ω) (Lower Bound):
    T(n) ∈ Ω(g(n)) ⟺ there exist c > 0 and n₀ such that T(n) ≥ c·g(n) for all n > n₀.
  • Big-Theta (Θ) (Exact Growth Rate):
    T(n) ∈ Θ(h(n)) ⟺ T(n) is both O(h(n)) and Ω(h(n)).
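A worked instance of the Big-O definition (an illustrative example, not from the notes): to show 3n² + 2n ∈ O(n²), choose c = 4 and n₀ = 2. Then for all n > n₀,

```latex
3n^2 + 2n \le 4n^2 \iff 2n \le n^2 \iff 2 \le n,
```

which holds for every n > 2, so the witnesses c = 4, n₀ = 2 satisfy the definition.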

Growth Rate Functions

  • Common Orders: Constant O(1), Linear O(n), Linearithmic O(n log n), Quadratic O(n²), Exponential O(2^n).
  • Example (Selection Sort):
    Θ(n(n − 1)/2) = Θ(n²).

4. Sorting Algorithms

Quadratic Sorting Algorithms

Insertion Sort

  • Concept: Builds a sorted subarray by inserting each element into its proper place.
  • Usually the best performer of the three quadratic sorts in practice.
  • Pseudocode:
for (i = 1; i < n; i++) {
	v = A[i];
	j = i - 1;
	while (j >= 0 && A[j] > v) {
		A[j+1] = A[j];
		j--;
	}
	A[j+1] = v;
}
  • Complexity:
    • Best Case: O(n) (when already sorted).
    • Worst/Average Case: O(n²).

Bubble Sort

  • Concept: Repeatedly swaps adjacent elements to “bubble” the largest element to the end.
  • Pseudocode:
do {
	swapFlag = false;
	for (i = 1; i < n; i++) {
		if (A[i-1] > A[i]) {
			swap(A[i-1], A[i]);
			swapFlag = true;
		}
	}
} while (swapFlag);
  • Complexity:
    • Best Case: O(n) (when already sorted; the swap flag allows an early exit).
    • Average/Worst Case: O(n²).
    • Efficient for nearly sorted arrays.

Selection Sort

  • Concept: Repeatedly scans the unsorted portion of the array to find the smallest element and swaps it with the first unsorted element. This process progressively builds a sorted subarray from left to right.
  • Pseudocode:
for (i = 0; i < n - 1; i++) { 
	minIndex = i; 
	for (j = i + 1; j < n; j++) { 
		if (A[j] < A[minIndex]) 
			minIndex = j; 
	} 
	swap(A[i], A[minIndex]); 
}
  • Complexity:
    • Always: Θ(n²) comparisons, regardless of the input.
    • Although the number of swaps is at most O(n), the dominant cost is the comparisons, making the overall time complexity Θ(n²).

5. Divide and Conquer: Master Theorem, Merge Sort, Quick Sort

Master Theorem

  • General Form (with f(n) ∈ Θ(n^d), d ≥ 0):
    T(n) = a·T(n/b) + f(n)
  • Cases:
    • Case 1: If a < b^d, then T(n) ∈ Θ(n^d).
    • Case 2: If a = b^d, then T(n) ∈ Θ(n^d log n).
    • Case 3: If a > b^d, then T(n) ∈ Θ(n^{log_b a}).

Example (Merge Sort): Here a = 2, b = 2, d = 1, so a = b^d (2 = 2¹) and Case 2 applies, hence
T(n) = 2T(n/2) + n ∈ Θ(n log n).

Merge Sort

  • Concept: Divide the array into two halves, recursively sort each half, then merge.
  • Recurrence:
    T(n) = 2T(n/2) + n
  • Complexity: Θ(n log n).
  • Pseudocode:
void mergeSort(int A[], int n) {
	if (n > 1) {
		int mid = n / 2;
		int B[mid], C[n - mid];
		// Copy A[0...mid-1] to B, and A[mid...n-1] to C
		for (int i = 0; i < mid; i++) B[i] = A[i];
		for (int i = mid; i < n; i++) C[i - mid] = A[i];
		mergeSort(B, mid);
		mergeSort(C, n - mid);
		merge(B, C, A);
	}
}
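The merge step is the workhorse of merge sort; a minimal sketch follows (with the array lengths passed explicitly, unlike the three-argument call in the pseudocode, since C arrays do not carry their own length):

```c
#include <assert.h>

// Merge sorted arrays B (length p) and C (length q) into A (length p + q).
// Repeatedly copy the smaller front element, then flush whichever run remains.
void merge(int B[], int p, int C[], int q, int A[]) {
    int i = 0, j = 0, k = 0;
    while (i < p && j < q)
        A[k++] = (B[i] <= C[j]) ? B[i++] : C[j++];   // <= keeps the sort stable
    while (i < p) A[k++] = B[i++];
    while (j < q) A[k++] = C[j++];
}
```

Each element is copied exactly once, so merging is Θ(n), which is the f(n) = n term in the recurrence.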

Quick Sort

  • Concept: Choose a pivot element, partition the array so that elements less than the pivot come before it and those greater come after, then recursively sort the partitions.
  • Partitioning: Rearranges elements based on a pivot, ensuring left side elements are less than the pivot and right side elements are greater.
  • Complexity:
    • Best Case: Θ(n log n).
    • Worst Case: O(n²) (mitigated by a good pivot choice such as median-of-three).
  • Pseudocode:
void quickSort(int A[], int l, int r) {
	if (l < r) {
		int s = hoarePartition(A, l, r);
		quickSort(A, l, s - 1);
		quickSort(A, s + 1, r);
	}
}
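`hoarePartition` is assumed above but not shown; a common textbook variant (the Levitin-style version that leaves the pivot A[l] at its final sorted position and returns that index, which is what the s − 1 / s + 1 recursion requires) can be sketched as:

```c
#include <assert.h>

// Partition A[l..r] around the pivot A[l]: afterwards everything left of the
// returned index is <= pivot and everything right of it is >= pivot.
int hoarePartition(int A[], int l, int r) {
    int p = A[l];
    int i = l, j = r + 1;
    for (;;) {
        do { i++; } while (i <= r && A[i] < p);  // scan right for element >= pivot
        do { j--; } while (A[j] > p);            // scan left for element <= pivot
        if (i >= j) break;                       // scans crossed: done
        int t = A[i]; A[i] = A[j]; A[j] = t;     // fix the out-of-place pair
    }
    int t = A[l]; A[l] = A[j]; A[j] = t;         // put pivot in its final slot
    return j;
}

void quickSort(int A[], int l, int r) {
    if (l < r) {
        int s = hoarePartition(A, l, r);
        quickSort(A, l, s - 1);
        quickSort(A, s + 1, r);
    }
}
```

Note this sketch always picks A[l] as the pivot; median-of-three would swap a better pivot into position l first.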

6. Graph Basics

Definitions

  • Graph: A structure G = (V, E) with:
    • V (Vertices): Nodes of the graph.
    • E (Edges): Connections between nodes.
  • Types:
    • Directed Graph (Digraph): Edges have a direction.
    • Undirected Graph: Edges have no direction.
    • Weighted Graph: Edges carry weights (e.g., cost, distance).
  • Key Concepts:
    • Path: A sequence of vertices where each consecutive pair is connected by an edge.
    • Cycle: A path that begins and ends at the same vertex.
    • Connected Graph: There is a path between every pair of vertices.
    • DAG: Directed Acyclic Graph (no cycles).

Representations

  • Adjacency Matrix:
    • Uses a |V| × |V| matrix.
    • Space Complexity: Θ(|V|²).
    • Edge Lookup: O(1) per query.
  • Adjacency List:
    • An array (or list) where each entry contains a list of adjacent vertices.
    • Space Complexity: Θ(|V| + |E|).
    • Efficient for sparse graphs.
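As a concrete sketch of the adjacency-matrix representation for a small undirected graph (an illustrative example; the 4 vertices and the helpers `addEdge` and `degree` are made up, not from the notes):

```c
#include <assert.h>

#define V 4

// adj[u][v] == 1 iff edge {u, v} exists; globals are zero-initialized in C,
// so the graph starts with no edges. Space is Theta(V^2) regardless of |E|.
int adj[V][V];

void addEdge(int u, int v) {
    adj[u][v] = 1;
    adj[v][u] = 1;   // undirected: store both directions
}

// Edge lookup is O(1) (one array read), but computing a vertex's degree
// needs an O(V) row scan -- an adjacency list would make this O(degree).
int degree(int u) {
    int d = 0;
    for (int v = 0; v < V; v++) d += adj[u][v];
    return d;
}
```

This trade-off is why the matrix suits dense graphs and the list suits sparse ones.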

7. Graph Traversals

Depth-First Search (DFS)

  • Concept: Explore as deep as possible along each branch before backtracking.
  • Algorithm (Recursive Approach):
void DFS(Vertex v) {
	mark v as visited;
	for (each neighbor w of v) {
		if (w is not visited)
			DFS(w);
	}
}
  • Properties:
    • Uses recursion or an explicit stack.
    • Time Complexity: O(|V| + |E|) using an adjacency list; O(|V|²) with an adjacency matrix.
    • Applications: Identifying connected components, cycle detection.

Breadth-First Search (BFS)

  • Concept: Visit vertices level by level starting from a source vertex.
  • Algorithm (Iterative Approach):
void BFS(Vertex start) {
	initialize queue Q;
	mark start as visited;
	enqueue start into Q;
	while (Q is not empty) {
		Vertex v = dequeue(Q);
		for (each neighbor w of v) {
			if (w is not visited) {
				mark w as visited;
				enqueue w;
			}
		}
	}
}
  • Properties:
    • Uses a queue for level-order traversal.
    • Time Complexity: O(|V| + |E|) using an adjacency list; O(|V|²) with an adjacency matrix.
    • Applications: Finding the shortest path in unweighted graphs.

Summary

  • Data Structures: Fundamental structures (arrays, linked lists, heaps) and graph representations underpin efficient algorithm design.
  • Math in Algorithms: Sets, factorials, logarithms, summations, and recurrences provide the language for algorithm analysis.
  • Computational Complexity: Big-O, Big-Omega, and Big-Theta notations quantify performance.
  • Sorting Algorithms: Quadratic sorts (insertion, bubble) are simple, while divide-and-conquer sorts (merge, quick) offer improved efficiency via the Master Theorem.
  • Graphs: Understanding definitions, representations, and traversal strategies (DFS/BFS) is crucial for solving real-world problems.