I could be wrong, but binary search might be as fast as you can get it.
–
Cupcake Jul 6 '11 at 21:24

I'm not sure your method works. Imagine a matrix whose first row is [0,10,20,..,90] and the next row is [1,11,21,..,91] up to [9,19,29,...,99]. In this case each row and column is ordered. Now you start from 55, and you're looking for 72. 72 > 55 but it's not in the bottom half of the matrix. If you're looking for 19, it's not in the top half. Maybe I didn't understand the algorithm. I also don't understand how you have consecutive "return" statements - it's unreachable code.
–
Omri Barel Jul 6 '11 at 21:49

@secureFish Consider my new answer, and let me know if it is right.
–
Imposter Sep 27 '12 at 10:12

You are starting from the bottom-left of the matrix, as you illustrated. I think we can still use binary search to speed this up. Here is my thought: in the column, find the first element equal to or greater than the target; if that fails, move right within the same row, still looking for the first element equal to or greater than the target; if it succeeds, move back up the column and find the first element less than the target; if that succeeds, move right and continue the previous algorithm, otherwise exit. It could still be log(m), log(n). BTW, I like your graph very much.
–
SecureFish Jul 6 '11 at 22:06

Very nice. But you should edit your answer to start in the "bottom left" rather than "bottom right"... I wonder if this approach is optimal? Even if you used a binary search throughout this algorithm, you might wind up just "walking the diagonal", so it would not help the worst-case performance.
–
Nemo Jul 6 '11 at 22:14

@Nemo, corrected the typo, thanks. This approach is probably not optimal, but the worst case is O(n+m). The OP's question contains an algorithm that he claims is faster, but is incorrect according to you. Probably some additional binary searches could improve performance.
–
Patrick Jul 7 '11 at 6:25

For optimization, I think you can also start from the top-left corner. A better way to visualize this is to treat the top-left as the root of a BST whose left and right children are the elements below and to the right; then you can continue this way.
–
Priyank Bhatnagar Jul 7 '11 at 8:38

@logic_max: That does not work because the "left and right children" can be in either order. Again, the elements along the diagonal are totally unsorted. So I believe the optimal algorithm requires at least O(length of diagonal).
–
Nemo Jul 7 '11 at 17:16

Your algorithm may be O(log m + log n), but it also gives the wrong answer.

Suppose you search for "4" in the following matrix (where the upper-left is row=0, col=0):

0 1 4
1 2 5
2 3 6

Your algorithm starts by looking at the 2 in the center. Since 4 is greater than 2, you proceed to search the same row (not there), same column (not there), and the lower-right corner (not there). Whoops.

The constraint that each row and column is sorted is actually pretty weak. In particular, the elements along the diagonal could be in any order.

I think the correct approach is to do a binary search on the first and last column to narrow down a range of possible rows. Then do binary search on the first and last of those rows to narrow down the possible columns. And so forth.
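A rough sketch of that row-narrowing idea (function names are mine; this simple, non-recursive rendition narrows the candidate rows with two binary searches and then binary-searches each remaining row, so it is O(m log n) in the worst case rather than the fully recursive "and so forth"):

```python
from bisect import bisect_left, bisect_right

def candidate_rows(A, x):
    # A row can contain x only if A[r][0] <= x <= A[r][-1].
    # The first and last columns are themselves sorted, so each
    # bound can be located with a binary search.
    first_col = [row[0] for row in A]
    last_col = [row[-1] for row in A]
    lo = bisect_left(last_col, x)    # first row whose last entry >= x
    hi = bisect_right(first_col, x)  # one past last row whose first entry <= x
    return range(lo, hi)

def search(A, x):
    # Binary-search each row that could still contain x.
    for r in candidate_rows(A, x):
        c = bisect_left(A[r], x)
        if c < len(A[r]) and A[r][c] == x:
            return (r, c)
    return None
```

On the 3x3 counterexample above, `search([[0, 1, 4], [1, 2, 5], [2, 3, 6]], 4)` correctly finds the 4 in the top-right corner.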

Here's what I would try. Given an m by n matrix A, compare the value X with the entry A(m/2,n/2) (use floors if necessary).

If A(m/2,n/2) == X, done.

If A(m/2,n/2) < X, then the upper-left quadrant can be ruled out (everything in it is at most A(m/2,n/2)), leaving 3 smaller matrices to check:

A(m/2:m, n/2:n)
A(1:m/2, n/2+1:n)
A(m/2+1:m, 1:n/2)

If A(m/2,n/2) > X, then the lower-right quadrant can be ruled out (everything in it is at least A(m/2,n/2)), again leaving 3 smaller matrices to check:

A(1:m/2, 1:n/2)
A(1:m/2, n/2+1:n)
A(m/2+1:m, 1:n/2)

You can eliminate two of them (not always) by comparing the value to the smallest value in the corresponding matrix (the upper left value). Then you recursively try to find the value in each of the remaining matrices.
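A sketch of this recursion in Python (names are mine; index bounds are inclusive and adjusted so the already-checked middle cell is never revisited, and the "eliminate two of them" optimization is omitted for clarity):

```python
def quad_search(A, x):
    """Divide-and-conquer search in a row- and column-sorted matrix."""
    def rec(r1, r2, c1, c2):          # search A[r1..r2][c1..c2], inclusive
        if r1 > r2 or c1 > c2:
            return None
        rm, cm = (r1 + r2) // 2, (c1 + c2) // 2
        v = A[rm][cm]
        if v == x:
            return (rm, cm)
        if v < x:                      # rule out the upper-left quadrant
            return (rec(r1, rm, cm + 1, c2)          # upper right
                    or rec(rm + 1, r2, c1, cm)       # lower left
                    or rec(rm + 1, r2, cm + 1, c2))  # lower right
        else:                          # rule out the lower-right quadrant
            return (rec(r1, rm - 1, c1, cm - 1)      # upper left
                    or rec(r1, rm - 1, cm, c2)       # upper right
                    or rec(rm, r2, c1, cm - 1))      # lower left
    return rec(0, len(A) - 1, 0, len(A[0]) - 1) if A and A[0] else None
```

In each branch the three sub-rectangles tile exactly the region that was not ruled out, so every cell is examined at most once along any recursion path.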

For a comparison-based algorithm, lg(m) + lg(n) queries is a lower bound.

Proof

For a comparison-based query, each query can only have two results: true or false. It follows that N queries can distinguish at most 2^N outcomes. Therefore, using N queries, you can only locate an element in a matrix with at most 2^N elements.

How many queries then are required to search an m x n matrix? Just solve for N.

2^N = mn
lg(2^N) = lg(mn)
N = lg(m) + lg(n)

Therefore at least lg(m) + lg(n) queries are required; no comparison-based algorithm can do with fewer.

Non-comparison based queries

That proof is conclusive, but only for comparison based queries. If you query the matrix in a way that doesn't involve comparisons then you can get near-constant time if you know the distribution of values. I won't give you an algorithm, but I would suggest looking at Radix sort as it contains the kind of non-comparison based techniques that are required to beat the lg(m) + lg(n) lower bound.

Reading the previous comments, I came up with this algorithm. It basically supposes that, by starting in the upper-right corner, the matrix can be used as a BST with some "loops" (we don't care about these loops).

1 4  9
5 6 10

      9
     / \
    4   10
   / \  /
  1   6
   \ /
    5

This algorithm is the same as searching in a BST and is really easy to understand. The worst-case run time is O(n + m).
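The BST-style walk above, written out (this is the standard "staircase" search; the function name is mine):

```python
def staircase_search(A, x):
    # Start at the top-right corner; step left when the current value
    # is too big, step down when it is too small.  Each step discards
    # one row or one column, so the walk is O(rows + cols) worst case.
    if not A or not A[0]:
        return None
    r, c = 0, len(A[0]) - 1
    while r < len(A) and c >= 0:
        v = A[r][c]
        if v == x:
            return (r, c)
        if v > x:
            c -= 1   # everything below in this column is even bigger
        else:
            r += 1   # everything left in this row is even smaller
    return None
```

This mirrors the tree exactly: moving left is taking the left child, moving down is taking the right child.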

I believe that your algorithm does not have the time bounds that you believe that it does.

To see this, for simplicity let's assume that your grid is an n x n square (call its total size m). If I can derive a time bound other than O(log n) in this case, then the claimed bound cannot be correct.

In particular, notice that in the worst case you make three recursive calls to problems that are of size (n / 2) x (n / 2) = m / 4. This means that we have the recurrence

T(1) = 1
T(m) = 3T(m / 4) + O(1)

Using the Master Theorem, the runtime of this recurrence is O(m^(log_4 3)) = O(n^(2 log_4 3)) = O(n^(log_4 9)) ≈ O(n^1.585). This is ω(log n + log m); that is, it's asymptotically strictly greater.
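A quick numerical sanity check of that bound (a sketch; the helper name is mine). Unrolling the recurrence for m a power of 4 gives T(4^k) = (3^(k+1) - 1)/2, so the ratio T(m) / m^(log_4 3) settles near 3/2:

```python
import math

def T(m):
    # T(1) = 1, T(m) = 3*T(m/4) + 1, evaluated for m a power of 4
    return 1 if m <= 1 else 3 * T(m // 4) + 1

alpha = math.log(3, 4)                   # log_4(3) ~ 0.792
for k in (5, 10, 15):
    m = 4 ** k
    print(m, T(m), T(m) / m ** alpha)    # ratio approaches 1.5
```

The ratio converging to a constant confirms the Θ(m^(log_4 3)) growth rate rather than anything logarithmic.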

As many other people have posted, there are several well-known algorithms that run in O(m + n), based on walking one step in the right direction at each step. Consequently, correctness aside, I would not advise using the algorithm you've posted.

The elements in the 2D array (say a[n][m]) are increasing both horizontally and vertically. For the given question we need to find the index of the element first, so if we can find the element quickly, we can optimize the solution. The question is how to find it efficiently. One approach is to take the middle element of the matrix and compare the given element with it:

If the given element is less than the middle element, then the solution lies in the submatrix a[0][0] to a[n/2][m/2], because all elements to the right of and below the middle are greater than the middle (and hence greater than the given element). So we have reduced our search space from a[n][m] to a[n/2][m/2], one fourth of the original size.

If the given element is greater than the middle element, then the solution does not lie in the submatrix a[0][0] to a[n/2][m/2], because all elements to the left of and above the middle are less than the middle (and hence less than the given element). So our search space is the total array minus a[0][0] to a[n/2][m/2], which is three fourths of the original size; this means there will be three recursive calls on the remaining sub-arrays.

The time complexity of our function is as follows. NOTE: in the time function, n represents the total number of elements, not the number of rows: n = (no_of_rows) * (no_of_columns).

T(n) = 1          if n = 1 (element found)
T(n) = T(n/4)     if the given element is less than the middle element
T(n) = 3T(n/4)    if the given element is greater than the middle element