## Code for Finding Median in Linear Time

```java
import java.util.ArrayList;

public class LinearMedian {
    // Returns the median of medians of groups of 5, used as the pivot.
    public int mMedian(int[] raw) {
        if (raw.length == 1) return raw[0];
        int[] medianArray = new int[(raw.length - 1) / 5 + 1];
        for (int i = 0; i < medianArray.length; i++) {
            int start = i * 5;
            int end = Math.min(start + 5, raw.length);
            int[] temp = new int[end - start];
            for (int j = start; j < end; j++) {
                temp[j - start] = raw[j];
            }
            medianArray[i] = smallMedian(temp);
        }
        return findMedian(medianArray, medianArray.length / 2);
    }

    // Sorts a small array in place (ascending) and returns its median.
    public int smallMedian(int[] raw) {
        for (int i = 0; i < raw.length - 1; i++) {
            for (int j = i + 1; j < raw.length; j++) {
                if (raw[i] > raw[j]) {
                    int temp = raw[i];
                    raw[i] = raw[j];
                    raw[j] = temp;
                }
            }
        }
        return raw[raw.length / 2];
    }

    // Returns the k-th smallest element (0-indexed) of raw.
    public int findMedian(int[] raw, int k) {
        if (raw.length <= 5) {
            smallMedian(raw); // sorts raw in place
            return raw[k];
        }
        int median = mMedian(raw);
        return partitionMedian(raw, median, k);
    }

    public int partitionMedian(int[] raw, int median, int k) {
        ArrayList<Integer> left = new ArrayList<>();
        ArrayList<Integer> right = new ArrayList<>();
        for (int i = 0; i < raw.length; i++) {
            if (raw[i] < median) left.add(raw[i]);
            else if (raw[i] > median) right.add(raw[i]);
        }
        // Positions left.size() .. raw.length - right.size() - 1 all hold the median.
        if (k >= left.size() && k < raw.length - right.size())
            return median;
        else if (k < left.size()) {
            int[] leftArray = new int[left.size()];
            for (int i = 0; i < left.size(); i++) {
                leftArray[i] = left.get(i);
            }
            return findMedian(leftArray, k);
        } else {
            int[] rightArray = new int[right.size()];
            for (int i = 0; i < right.size(); i++) {
                rightArray[i] = right.get(i);
            }
            return findMedian(rightArray, k - (raw.length - right.size()));
        }
    }

    public static void main(String[] args) {
        int[] input = new int[] {6, 2, 1, 5, 3, 7, 9, 10, 123, 4, 3};
        LinearMedian test = new LinearMedian();
        System.out.println(test.findMedian(input, input.length / 2));
    }
}
```

## An Interesting Coin Problem

Okay I admit I got busy with classes and jobs last week and did not update, so I decided just now to write about this problem we talked about in CS330 discussion last week.

Problem:
Consider the following game:
There is a line of coins with values 𝑣1, 𝑣2 … 𝑣𝑛 (𝑣1 is the value of the left-most coin, 𝑣2 is the value of the second leftmost coin… 𝑣𝑛 is the value of the right-most coin).
Each player takes turns taking either the left-most or rightmost remaining coin until all coins are gone. A player’s score is the sum of the values of the coins he takes on his turns. Using dynamic programming, write an algorithm which, given the current sequence of coins 𝑣1, 𝑣2 … 𝑣𝑛 in the form of the array 𝑉, determines the score of the two players if both play optimally, i.e. play to maximize their score at the end of the game. Report its runtime as well.

Solution:
The central spirit of dynamic programming is to break the problem into steps and only worry about one step at a time; the recursion takes care of the rest. In this particular problem, what we care about is which coin the player chooses on his/her turn. There are only two options: the left-most coin or the right-most one. After that we are left with a smaller subsequence of coins, which we hand off to the recursion. Therefore, we can store the best possible scores for the two players for each subsequence.
A subsequence is determined by two variables: its start point and its end point. However, if we simply increment these two indices to fill the table, we run into a problem: the current entry depends on entries with both a larger start index and a smaller end index, so there is no straightforward order in which to fill the table. As an alternative, we can increment the start index together with the length of the subsequence. The next step is to figure out the DP equation.
Let A[i, j](1) and A[i, j](2) denote the best possible scores of the two players if they start playing next on the subsequence A[i, j], and let c[i] be the value of coin i. Obviously, A[i, i](1) = c[i] and A[i, i](2) = 0. Suppose player 1 is facing the subsequence A[i, j]. He can pick either coin i or coin j, obtaining a value of c[i] or c[j]. If he picks coin i, his total gain would be A[i+1, j](2) + c[i]. The number 2 is in the parentheses because on the sequence A[i+1, j] left after his pick, he is considered the second player. Similarly, if he picks coin j, his gain would be A[i, j-1](2) + c[j]. Therefore, for this player, A[i, j](1) = max(A[i+1, j](2) + c[i], A[i, j-1](2) + c[j]).
Now let’s consider the other player. For the subsequence A[i, j], what is left for the second player depends entirely on which coin the first player picked, so there are two possible subsequences for player 2: A[i+1, j] and A[i, j-1], on which he is considered the first player. Now we can derive the complete relationship:

```
For subsequence A[i, j]:
    if A[i+1, j](2) + c[i] > A[i, j-1](2) + c[j]:
        A[i, j](1) = A[i+1, j](2) + c[i]
        A[i, j](2) = A[i+1, j](1)
    else:
        A[i, j](1) = A[i, j-1](2) + c[j]
        A[i, j](2) = A[i, j-1](1)
```

Again, we can’t iterate on i and j directly in this case. Instead, we increment the start index i and a length l, with j equal to i + l - 1. Since there are O(n²) subsequences and each entry takes O(1) work, the whole algorithm runs in O(n²) time.
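To make the recurrence concrete, here is a sketch in Java (the class name `CoinGame` and arrays `best1`/`best2` are my own; `best1[i][j]` and `best2[i][j]` correspond to A[i, j](1) and A[i, j](2)):

```java
public class CoinGame {
    // Returns {score of player who moves first, score of the other player}.
    public static int[] coinGame(int[] c) {
        int n = c.length;
        int[][] best1 = new int[n][n]; // score of the player about to move on c[i..j]
        int[][] best2 = new int[n][n]; // score of the other player on c[i..j]
        for (int i = 0; i < n; i++) {
            best1[i][i] = c[i]; // one coin left: the mover takes it
            best2[i][i] = 0;
        }
        // Increment the length l of the subsequence, then the start index i.
        for (int l = 2; l <= n; l++) {
            for (int i = 0; i + l - 1 < n; i++) {
                int j = i + l - 1;
                int pickLeft = best2[i + 1][j] + c[i];
                int pickRight = best2[i][j - 1] + c[j];
                if (pickLeft > pickRight) {
                    best1[i][j] = pickLeft;
                    best2[i][j] = best1[i + 1][j]; // opponent now moves first on c[i+1..j]
                } else {
                    best1[i][j] = pickRight;
                    best2[i][j] = best1[i][j - 1]; // opponent now moves first on c[i..j-1]
                }
            }
        }
        return new int[] {best1[0][n - 1], best2[0][n - 1]};
    }

    public static void main(String[] args) {
        int[] scores = coinGame(new int[] {8, 15, 3, 7});
        System.out.println(scores[0] + " " + scores[1]); // prints "22 11"
    }
}
```

On the coins 8, 15, 3, 7, the first player takes 7 and then 15 for a total of 22, leaving 11 for the second player, which matches what the table computes.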
I am beginning to experience the power of dynamic programming. The problem is since it’s so powerful, the test for this class could be insanely hard…

## Sorting Algorithms

So class has started for a week now, but I’ve only gone to class for six days because we had MLK holiday and class got cancelled because of the snow. North Carolina can’t handle the mere thought of snow lol.

I’m writing down the basic Quicksort and Mergesort algorithms I’ve learned in COMPSCI 330. Maybe they’ll help me in the review process.

QuickSort:

Quicksort uses a divide-and-conquer spirit. It picks a pivot element in the array, partitions the remaining numbers into those smaller and those larger than the pivot, and recurses on the two parts. There are several ways to implement it. Here’s the method from Introduction to Algorithms:

```
QSORT(A, i, j)
    if i >= j return
    pivot = SPLIT(A, i, j)
    QSORT(A, i, pivot - 1)
    QSORT(A, pivot + 1, j)

SPLIT(A, m, n)
    i = m - 1
    FOR j from m to n - 1
        if A[j] < A[n]
            i++
            SWAP A[i], A[j]
    SWAP A[i + 1], A[n]
    return i + 1
```

This version is pretty hard to understand, but the key is the invariant SPLIT maintains after each iteration: for each m <= k <= i, A[k] < A[n], and for each i + 1 <= k <= j, A[k] >= A[n].

The expected running time is O(n log n), though the worst case (for example, an already-sorted array with this choice of pivot) is O(n²).
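A concrete Java version of the QSORT/SPLIT pseudocode above (the class name `QuickSortDemo` is my own; it uses the same Lomuto-style partition with the last element as pivot):

```java
import java.util.Arrays;

public class QuickSortDemo {
    static void qsort(int[] a, int lo, int hi) {
        if (lo >= hi) return;     // zero or one element: already sorted
        int p = split(a, lo, hi); // pivot ends up at its final index p
        qsort(a, lo, p - 1);
        qsort(a, p + 1, hi);
    }

    // Lomuto partition: after the loop, a[lo..i] < pivot and a[i+1..hi-1] >= pivot.
    static int split(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo - 1;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }
        int t = a[i + 1]; a[i + 1] = a[hi]; a[hi] = t; // put pivot in place
        return i + 1;
    }

    public static void main(String[] args) {
        int[] a = {6, 2, 1, 5, 3, 7, 9};
        qsort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // prints "[1, 2, 3, 5, 6, 7, 9]"
    }
}
```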

MergeSort

MergeSort also uses divide-and-conquer. Instead of dealing with a pivot, it first divides the array in half, recurses on each half, and then merges the two sorted halves in linear time.

```
MSORT(A, i, j)
    if i == j return A[i..i]
    B = MSORT(A, i, (i + j) / 2)
    C = MSORT(A, (i + j) / 2 + 1, j)
    return MERGE(B, C)

MERGE(B, C)
    D = empty array
    i = j = k = 1
    while i <= |B| and j <= |C|
        if B[i] < C[j]
            D[k] = B[i], i++
        else
            D[k] = C[j], j++
        k++
    copy the remaining elements of B or C into D
    return D
```

The running time is O(n log n) in every case, since each of the O(log n) levels of recursion does O(n) merging work.
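And a Java version of the MSORT/MERGE pseudocode (the class name `MergeSortDemo` is my own; it returns a new sorted array rather than sorting in place, matching the pseudocode):

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Sorts a[i..j] and returns the result as a new array.
    static int[] msort(int[] a, int i, int j) {
        if (i == j) return new int[] {a[i]}; // single element: already sorted
        int mid = (i + j) / 2;
        int[] b = msort(a, i, mid);
        int[] c = msort(a, mid + 1, j);
        return merge(b, c);
    }

    // Linear-time merge of two sorted arrays.
    static int[] merge(int[] b, int[] c) {
        int[] d = new int[b.length + c.length];
        int i = 0, j = 0, k = 0;
        while (i < b.length && j < c.length) {
            if (b[i] < c[j]) d[k++] = b[i++];
            else d[k++] = c[j++];
        }
        while (i < b.length) d[k++] = b[i++]; // copy leftovers of b
        while (j < c.length) d[k++] = c[j++]; // copy leftovers of c
        return d;
    }

    public static void main(String[] args) {
        int[] sorted = msort(new int[] {5, 1, 4, 2, 3}, 0, 4);
        System.out.println(Arrays.toString(sorted)); // prints "[1, 2, 3, 4, 5]"
    }
}
```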