Sunday, January 22, 2012

Prim's algorithm - Greedy Approach

Wikipedia : http://en.wikipedia.org/wiki/Prim%27s_algorithm

Refer to Example Run.
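
Below is a minimal Java sketch of the greedy idea (my own illustration, not taken from the Wikipedia page), assuming a connected, undirected graph stored as an adjacency matrix where 0 means "no edge":

import java.util.Arrays;

public class PrimMST {

    // Greedy MST on a dense graph: repeatedly pull in the vertex that is
    // cheapest to connect to the tree built so far. Returns the MST weight.
    static int mstWeight(int[][] g) {
        int n = g.length;
        boolean[] inTree = new boolean[n];
        int[] best = new int[n]; // cheapest known edge connecting each vertex to the tree
        Arrays.fill(best, Integer.MAX_VALUE);
        best[0] = 0;
        int total = 0;
        for (int step = 0; step < n; step++) {
            int u = -1;
            for (int v = 0; v < n; v++) { // pick the cheapest vertex not yet in the tree
                if (!inTree[v] && (u == -1 || best[v] < best[u])) u = v;
            }
            inTree[u] = true;
            total += best[u];
            for (int v = 0; v < n; v++) { // update the cheapest edges out of the new tree vertex
                if (!inTree[v] && g[u][v] != 0 && g[u][v] < best[v]) best[v] = g[u][v];
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] g = {
                {0, 2, 0, 6},
                {2, 0, 3, 8},
                {0, 3, 0, 0},
                {6, 8, 0, 0}
        };
        System.out.println(mstWeight(g)); // 2 + 3 + 6 = 11
    }
}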

Saturday, January 21, 2012

Dijkstra Algorithm - Very simple explanation

Dijkstra : http://renaud.waldura.com/doc/java/dijkstra/
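
A minimal Java sketch of the idea (a simple O(V^2) version, my own illustration rather than the implementation from the linked tutorial), assuming non-negative edge weights stored in an adjacency matrix where 0 means "no edge":

import java.util.Arrays;

public class DijkstraDemo {

    // Single-source shortest path distances from 'source'.
    static int[] shortestPaths(int[][] g, int source) {
        int n = g.length;
        int[] dist = new int[n];
        boolean[] settled = new boolean[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;
        for (int step = 0; step < n; step++) {
            int u = -1;
            for (int v = 0; v < n; v++) { // pick the closest vertex not yet settled
                if (!settled[v] && (u == -1 || dist[v] < dist[u])) u = v;
            }
            if (dist[u] == Integer.MAX_VALUE) break; // remaining vertices are unreachable
            settled[u] = true;
            for (int v = 0; v < n; v++) { // relax all edges out of u
                if (g[u][v] != 0 && dist[u] + g[u][v] < dist[v]) dist[v] = dist[u] + g[u][v];
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        int[][] g = {
                {0, 4, 1, 0},
                {4, 0, 2, 5},
                {1, 2, 0, 8},
                {0, 5, 8, 0}
        };
        System.out.println(Arrays.toString(shortestPaths(g, 0))); // [0, 3, 1, 8]
    }
}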

Phone Number - Permutation of chars

package com.algorithms;

// @author - Lakshman
// Does not validate input.

public class PhoneNumbers {

    private static final boolean debug = false;
    private static final char[][] digits = {
        {'0'}, {'1'},
        {'a', 'b', 'c'}, {'d', 'e', 'f'},
        {'g', 'h', 'i'}, {'j', 'k', 'l'},
        {'m', 'n', 'o'}, {'p', 'q', 'r', 's'},
        {'t', 'u', 'v'}, {'w', 'x', 'y', 'z'}
    };
    private int permutationsCount = 0;

    private void numberDisplay() {
        for (int i = 0; i < digits.length; i++) {
            System.out.print("array[" + i + "] length = " + digits[i].length);
            System.out.print(" :: Characters are : ");
            for (int j = 0; j < digits[i].length; j++) {
                System.out.print(digits[i][j] + " , ");
            }
            System.out.println();
        }
        System.out.println("Total number of array elements = " + digits.length);
    }


    // Recursively display permutations.
    private void numberPerms(String number, String result) {
        int digit = Integer.parseInt(number.substring(0, 1));
        if (debug) {
            System.out.println("digit = " + digit);
            System.out.println("digits[digit].length = " + digits[digit].length);
        }

        // Exit condition: last digit, print each completed permutation.
        if (number.length() == 1) {
            for (int i = 0; i < digits[digit].length; i++) {
                permutationsCount++;
                System.out.println("Permutation [" + permutationsCount + "] = " + result + digits[digit][i]);
            }
            return;
        }

        // Recursive step: append each letter for this digit and recurse on the rest.
        for (int i = 0; i < digits[digit].length; i++) {
            String result1 = result + digits[digit][i];
            numberPerms(number.substring(1), result1);
        }
    }

    public static void main(String[] args) {
        PhoneNumbers phone = new PhoneNumbers();
        if (debug) {
            phone.numberDisplay();
        }
        phone.numberPerms("246", "");
    }

}

*******
Output
*******
Permutation [1] = agm
Permutation [2] = agn
Permutation [3] = ago
Permutation [4] = ahm
Permutation [5] = ahn
Permutation [6] = aho
Permutation [7] = aim
Permutation [8] = ain
Permutation [9] = aio
Permutation [10] = bgm
Permutation [11] = bgn
Permutation [12] = bgo
Permutation [13] = bhm
Permutation [14] = bhn
Permutation [15] = bho
Permutation [16] = bim
Permutation [17] = bin
Permutation [18] = bio
Permutation [19] = cgm
Permutation [20] = cgn
Permutation [21] = cgo
Permutation [22] = chm
Permutation [23] = chn
Permutation [24] = cho
Permutation [25] = cim
Permutation [26] = cin
Permutation [27] = cio

Friday, January 20, 2012

Longest Increasing Subsequence

Algorithm : two ways.
1. Via LCS: take the input array a[] and a sorted copy b[] of it, and compute their LCS.
So complexity = O(n log n) (for the sort) + the LCS DP.

2. Another approach: an O(n log n) method based on binary search (a Java sketch follows the links below).
Algorithm: http://www.algorithmist.com/index.php/Longest_Increasing_Subsequence
Impl: http://www.algorithmist.com/index.php/Longest_Increasing_Subsequence.c
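
A minimal Java sketch of the O(n log n) idea (my own illustration, not the code from the links): tails[k] keeps the smallest value that can end an increasing subsequence of length k + 1.

import java.util.Arrays;

public class LIS {

    // Length of the longest strictly increasing subsequence in O(n log n).
    static int lisLength(int[] a) {
        int[] tails = new int[a.length];
        int size = 0;
        for (int x : a) {
            int pos = Arrays.binarySearch(tails, 0, size, x);
            if (pos < 0) pos = -(pos + 1); // insertion point keeps tails[] sorted
            tails[pos] = x;                // found a smaller tail for this length
            if (pos == size) size++;       // extended the longest subsequence so far
        }
        return size;
    }

    public static void main(String[] args) {
        System.out.println(lisLength(new int[]{10, 9, 2, 5, 3, 7, 101, 18})); // prints 4
    }
}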

LCS - Longest Common Subsequence

Very Easy approach :
Wikipedia - http://en.wikipedia.org/wiki/Longest_common_subsequence_problem
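
A minimal Java sketch of the standard DP table described there (length only, my own illustration): dp[i][j] is the LCS length of the prefixes a[0..i) and b[0..j).

public class LCS {

    static int lcsLength(String a, String b) {
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                if (a.charAt(i - 1) == b.charAt(j - 1)) {
                    dp[i][j] = dp[i - 1][j - 1] + 1;                 // characters match: extend the LCS
                } else {
                    dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]); // skip a character from either string
                }
            }
        }
        return dp[a.length()][b.length()];
    }

    public static void main(String[] args) {
        System.out.println(lcsLength("AGCAT", "GAC")); // prints 2
    }
}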

Dynamic Programming

Problem Varieties - http://www.codeforces.com/blog/entry/325

CMU edu - http://mat.gsia.cmu.edu/classes/dynamic/node2.html#SECTION00020000000000000000

Code Chef - http://www.codechef.com/wiki/tutorial-dynamic-programming

Problem : Minimum Steps to One

Problem Statement: On a positive integer, you can perform any one of the following 3 steps.
1.) Subtract 1 from it ( n = n - 1 ).
2.) If it is divisible by 2, divide it by 2 ( if n % 2 == 0, then n = n / 2 ).
3.) If it is divisible by 3, divide it by 3 ( if n % 3 == 0, then n = n / 3 ).
Now the question is: given a positive integer n, find the minimum number of steps that takes n to 1.

Examples:
1.) For n = 1, output: 0
2.) For n = 4, output: 2 ( 4 / 2 = 2, 2 / 2 = 1 )
3.) For n = 7, output: 3 ( 7 - 1 = 6, 6 / 3 = 2, 2 / 2 = 1 )

Approach / Idea: One can think of greedily choosing the step that makes n as low as possible and continuing like that until it reaches 1. If you observe carefully, the greedy strategy doesn't work here. E.g.: given n = 10, greedy gives 10 / 2 = 5, 5 - 1 = 4, 4 / 2 = 2, 2 / 2 = 1 (4 steps), but the optimal way is 10 - 1 = 9, 9 / 3 = 3, 3 / 3 = 1 (3 steps). So we need to try out all possible steps we can make for each value of n we encounter and choose the minimum of these possibilities.

It all starts with recursion :). F(n) = 1 + min{ F(n-1) , F(n/2) , F(n/3) } if n > 1, else 0 (i.e., F(1) = 0). Now that we have our recurrence equation, we can right away start coding the recursion. Wait... does it have overlapping sub-problems? YES. Does the optimal solution to a given input depend on the optimal solutions of its sub-problems? Yes... Bingo! It's DP :) So we just store the solutions to the sub-problems we solve and use them later on, as in memoization, or we start from the bottom and move up to the given n, as in DP. As it is the very first problem we are looking at here, let's see both versions of the code.

Memoization

[code]

int memo[n+1]; // initialize all elements to -1 ( -1 means: not solved yet )

int getMinSteps ( int n )
{
    if ( n == 1 ) return 0;              // base case
    if ( memo[n] != -1 ) return memo[n]; // we have solved it already :)

    int r = 1 + getMinSteps( n - 1 );    // '-1' step; 'r' will hold the optimal answer
    if ( n % 2 == 0 ) r = min( r , 1 + getMinSteps( n / 2 ) ); // '/2' step
    if ( n % 3 == 0 ) r = min( r , 1 + getMinSteps( n / 3 ) ); // '/3' step

    memo[n] = r; // save the result; if you forget this step, it is the same as plain recursion
    return r;
}

[/code]

Bottom-Up DP

[code]

int getMinSteps ( int n )
{
    int dp[n+1], i;
    dp[1] = 0; // trivial case
    for ( i = 2 ; i <= n ; i++ )
    {
        dp[i] = 1 + dp[i-1];
        if ( i % 2 == 0 ) dp[i] = min( dp[i] , 1 + dp[i/2] );
        if ( i % 3 == 0 ) dp[i] = min( dp[i] , 1 + dp[i/3] );
    }
    return dp[n];
}

[/code]

Both approaches are fine. But one should also take care of the overhead involved in the function calls with memoization, which may occasionally cause a stack overflow error or TLE.

Reference : http://www.codechef.com/wiki/tutorial-dynamic-programming

Thursday, January 19, 2012

Generate Subsets

Question:
How do you generate all ways of splitting a list into two complementary (mutually exclusive) subsets?
Input
{1,2,3}

Output
{{1},{2,3}}
{{2},{1,3}}
{{3},{1,2}}

Input
{1,2,3,4}

Output
{{1},{2,3,4}}
{{2},{1,3,4}}
{{3},{1,2,4}}
{{4},{1,2,3}}
{{1,2},{3,4}}
{{1,3},{2,4}}
{{1,4},{2,3}}

Input
{1,2,2,3}

Output
{{1},{2,2,3}}
{{2},{1,2,3}}
{{3},{1,2,2}}
{{1,2},{2,3}}
{{1,3},{2,2}}

Great Answer :
---------------
If you were generating all subsets you would end up generating 2^n subsets for a list of length n. A common way to do this is to iterate through all the numbers i from 0 to 2^n - 1 and use the bits that are set in i to determine which items are in the ith subset. This works because any item either is or is not present in any particular subset, so by iterating through all the combinations of n bits you iterate through the 2^n subsets.

For example, to generate the subsets of (1, 2, 3) you would iterate through the numbers 0 to 7:

0 = 000b → ()
1 = 001b → (1)
2 = 010b → (2)
3 = 011b → (1, 2)
4 = 100b → (3)
5 = 101b → (1, 3)
6 = 110b → (2, 3)
7 = 111b → (1, 2, 3)

In your problem you can generate each subset and its complement to get your pair of mutually exclusive subsets. Each pair would be repeated when you do this, so you only need to iterate up to 2^(n-1) - 1 and then stop.

1 = 001b → (1) + (2, 3)
2 = 010b → (2) + (1, 3)
3 = 011b → (1, 2) + (3)

To deal with duplicate items you could generate subsets of list indices instead of subsets of list items. Like with the list (1, 2, 2, 3) generate subsets of the list (0, 1, 2, 3) instead and then use those numbers as indices into the (1, 2, 2, 3) list. Add a level of indirection, basically.

Here's some Python code putting this all together.

#!/usr/bin/env python

def split_subsets(items):
    subsets = set()

    for n in xrange(1, 2 ** len(items) / 2):
        # Use ith index if ith bit of n is set.
        l_indices = [i for i in xrange(0, len(items)) if n & (1 << i) != 0]
        # Use the indices NOT present in l_indices.
        r_indices = [i for i in xrange(0, len(items)) if i not in l_indices]

        # Get the items corresponding to the indices above.
        l = tuple(items[i] for i in l_indices)
        r = tuple(items[i] for i in r_indices)

        # Swap l and r if they are reversed.
        if (len(l), l) > (len(r), r):
            l, r = r, l

        subsets.add((l, r))

    # Sort the subset pairs so the left items are in ascending order.
    return sorted(subsets, key = lambda (l, r): (len(l), l))

for l, r in split_subsets([1, 2, 2, 3]):
    print l, r

Output:

(1,) (2, 2, 3)
(2,) (1, 2, 3)
(3,) (1, 2, 2)
(1, 2) (2, 3)
(1, 3) (2, 2)

Reference : Stack Overflow

Terms and Definitions

Combinations:
*************
Combinations refers to picking k items from a set of n items, where the order of the k items does not matter.
The related concept of picking k items from a set of n items, where the order of the k items does matter, is referred to as permutations.
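
As formulas, C(n, k) = n! / ( k! * (n-k)! ) and P(n, k) = n! / (n-k)!. A small Java sketch computing both counts (my own illustration; the multiplicative form keeps every intermediate value an exact integer):

public class Counting {

    // C(n, k): number of ways to pick k items from n when order does not matter.
    static long combinations(int n, int k) {
        long c = 1;
        for (int i = 1; i <= k; i++) {
            c = c * (n - k + i) / i; // c equals C(n-k+i, i) after each step, so the division is exact
        }
        return c;
    }

    // P(n, k): number of ways to pick k items from n when order matters.
    static long permutations(int n, int k) {
        long p = 1;
        for (int i = 0; i < k; i++) {
            p *= (n - i);
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(combinations(5, 2)); // 10
        System.out.println(permutations(5, 2)); // 20
    }
}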

Dynamic Programming :
**********************
Dynamic programming (usually referred to as DP) is a very powerful technique to solve a particular class of problems. It demands a very elegant formulation of the approach and simple thinking, and the coding part is very easy. The idea is very simple: if you have solved a problem with a given input, then save the result for future reference, so as to avoid solving the same problem again. In short, 'Remember your Past' :) . If the given problem can be broken up into smaller sub-problems, and these smaller sub-problems can in turn be divided into still smaller ones, and in this process you observe some overlapping sub-problems, then that is a big hint for DP. Also, the optimal solutions to the sub-problems should contribute to the optimal solution of the given problem (referred to as the Optimal Substructure Property).

There are two ways of doing this.

1.) Top-Down : Start solving the given problem by breaking it down. If you see that the problem has been solved already, then just return the saved answer. If it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive. This is referred to as Memoization.

2.) Bottom-Up : Analyze the problem, see the order in which the sub-problems are solved, and start solving from the trivial sub-problem up towards the given problem. In this process, it is guaranteed that the sub-problems are solved before the problem itself. This is referred to as Dynamic Programming. (A small sketch contrasting both styles follows this list.)
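
A tiny Java sketch contrasting the two styles on Fibonacci numbers (my own illustration; the Minimum Steps to One problem above shows the same contrast in C-style code):

public class Fibonacci {

    static long[] memo = new long[91]; // 0 means "not solved yet"; fib(90) still fits in a long

    // Top-Down (memoization): recurse, but reuse saved answers.
    static long fibMemo(int n) {
        if (n <= 1) return n;             // trivial cases F(0) = 0, F(1) = 1
        if (memo[n] != 0) return memo[n]; // already solved: just return the saved answer
        memo[n] = fibMemo(n - 1) + fibMemo(n - 2);
        return memo[n];
    }

    // Bottom-Up (dynamic programming): solve the sub-problems in order.
    static long fibDp(int n) {
        long[] dp = new long[Math.max(2, n + 1)];
        dp[0] = 0;
        dp[1] = 1;
        for (int i = 2; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }

    public static void main(String[] args) {
        System.out.println(fibMemo(40)); // 102334155
        System.out.println(fibDp(40));   // 102334155
    }
}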

Note that divide and conquer is a slightly different technique: there, we divide the problem into non-overlapping sub-problems and solve them independently, as in merge sort and quicksort.

In case you are interested in seeing visualizations related to Dynamic Programming, you could also see : http://www.thelearningpoint.net/computer-science/dynamic-programming

Reference : http://www.codechef.com/wiki/tutorial-dynamic-programming

Adjacency Matrix : http://en.wikipedia.org/wiki/Adjacency_matrix
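
A tiny illustration (mine, not from the article): an undirected triangle on vertices 1, 2, 3 has the adjacency matrix

0 1 1
1 0 1
1 1 0

where entry (i, j) is 1 exactly when vertices i and j share an edge; for an undirected graph the matrix is symmetric.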

Tuesday, January 17, 2012

Problems

1. Print all possible letter combinations a given phone number can represent: http://stackoverflow.com/questions/2344496/how-can-i-print-out-all-possible-letter-combinations-a-given-phone-number-can-re

2. Finding 3 elements whose sum is closest to the input number : http://stackoverflow.com/questions/2070359/finding-three-elements-in-an-array-whose-sum-is-closest-to-an-given-number

3. Arrangement of blocks : http://stackoverflow.com/questions/7692653/google-interview-arrangement-of-blocks

4. Dynamic Programming : http://stackoverflow.com/questions/6135443/dynamic-programming-question

Dynamic Programming

Top Coders - Dynamic Programming tutorial : http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=dynProg