Is the first element of each row guaranteed to be larger than the last element of the previous row (as in your example)? In that case the problem is trivial with a modified binary search.
– BrokenGlass May 14 '12 at 19:50

@BrokenGlass: Nope, that's not guaranteed at all. The only condition is that the rows are sorted in ascending order, and so are the columns.
– noMAD May 14 '12 at 19:53

5 Answers

Let's look at a simplified task: in a "sorted" square matrix, assume all elements above the secondary diagonal (green) are less than a given number, all elements below the secondary diagonal (red) are greater than that number, and make no additional assumptions about the elements on the secondary diagonal (yellow).

Neither the original assumptions of this task nor these additional assumptions tell us how the elements on the secondary diagonal are related to each other. This means we effectively have an unsorted array of N integers, and we cannot find a given number in an unsorted array faster than O(N). So for the original (more complicated) problem with a square matrix we cannot get a solution better than O(N).
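For comparison, the well-known "staircase" search that starts at the top-right corner runs in O(N + M), i.e. O(N) for a square matrix, so it matches this lower bound. A minimal Python sketch (the function name is mine, not from any answer here):

```python
def staircase_search(mat, x):
    """Start at the top-right corner; every comparison discards a row or a column."""
    if not mat or not mat[0]:
        return None
    r, c = 0, len(mat[0]) - 1
    while r < len(mat) and c >= 0:
        v = mat[r][c]
        if v == x:
            return (r, c)
        if v > x:
            c -= 1   # everything below v in this column is even larger
        else:
            r += 1   # everything left of v in this row is even smaller
    return None
```

Each iteration moves strictly left or down, so there are at most N + M - 1 iterations.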

For a rectangular matrix, stretch the square picture and set the additional assumptions accordingly. Here we have min(N,M) sorted sub-arrays of size max(N,M)/min(N,M) each. The best way to search is to use linear search to find the one or several sub-arrays that may contain the given value, then binary search inside those sub-arrays. In the worst case we have to binary-search every sub-array, for a complexity of O(min(N,M) * (1 + log(max(N,M) / min(N,M)))). So for the original (more complicated) problem with a rectangular matrix we cannot get a solution better than O(min(N,M) * (1 + log(max(N,M)) - log(min(N,M)))).
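A simple scheme in this spirit is to binary-search each row of the shorter dimension (transpose first if there are more rows than columns), which costs O(min(N,M) * log(max(N,M))) and is close to the bound above for very elongated matrices. A sketch, with a function name of my choosing:

```python
from bisect import bisect_left

def search_by_rows(mat, x):
    """Binary-search each row: O(N log M); transpose first if M < N."""
    for r, row in enumerate(mat):
        # skip rows whose range cannot contain x
        if row[0] <= x <= row[-1]:
            c = bisect_left(row, x)
            if c < len(row) and row[c] == x:
                return (r, c)
    return None
```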

It is not possible to do better than O(n). Some people (there are at least three of them on this page) think they can, but that's because their algorithms are wrong, or because they don't know how to compute the complexity of their algorithm and try to guess it instead. This blog post is very good and explains the errors in those approaches.

+1 I would have accepted this answer if the proof had completely convinced me. But thanks for that lovely link. :-)
– noMAD May 14 '12 at 23:09

@Thomash: Though it has been almost a year, I am curious about the complexity the blog post's author computes for the "Quad Partition" method. Since we divide the matrix into 4 sub-matrices and discard one of the four, shouldn't it be T(n) = 3*T(n/4) + c instead of T(n) = 3*T(n/2) + c (as the author computes)?
– Manish Jul 27 '13 at 23:24


@Manish the n in T(n) is the number of rows (or columns) of the matrix, not the number of cells (which is n^2). The 3 smaller matrices have n/2 rows, which means they have n^2/4 cells each.
– Thomash Jul 27 '13 at 23:42

@Thomash Yeah, got it. I just read the comments below the post. Thank you for the link.
– Manish Jul 27 '13 at 23:55


@Thomash the link is broken, can you please redirect?
– Aerin Apr 6 '17 at 23:16

Since both rows and columns are sorted, looking at the first element of each row tells us which row may contain the number we're looking for. Then, again exploiting the fact that the elements within each row are sorted, we can find the number in that row.
The fastest search algorithm I know is binary search, which has a complexity of O(log n), so the total complexity will be O(log m + log n).
Here's an example, suppose we're looking for 28:

We do a binary search over the elements of the first column (1, 11, 21, 31, 41) and find that the answer must be in the third row, because its first element is smaller than our number while the next row's first element is larger. Number of steps: 2 (21, 31, found)

We do a binary search again over the third row (21, 22, 23, 24, 25, 26, 27, 28, 29, 30) and find our number. Number of steps: 2-3 (25, then 27 or 28, found)
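The two steps above can be sketched with Python's bisect module (the function name is mine; note that this only works under the extra assumption that each row starts after the previous row ends):

```python
from bisect import bisect_left, bisect_right

def two_step_search(mat, x):
    """Binary-search the first column for the candidate row,
    then binary-search that row: O(log n + log m)."""
    first_col = [row[0] for row in mat]
    r = bisect_right(first_col, x) - 1   # last row whose first element <= x
    if r < 0:
        return None
    row = mat[r]
    c = bisect_left(row, x)
    if c < len(row) and row[c] == x:
        return (r, c)
    return None
```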

This solution won't work, since according to the OP it is not guaranteed that a row starts with a number larger than the last value of the previous row; the example matrix given is just unfortunate.
– BrokenGlass May 14 '12 at 20:22

I think this can be done in O(log(n*n) * log(n)) time, where n is the number of rows of a square matrix.

By the properties of the matrix, the principal diagonal is a sorted array, so we can search for an element or its lower bound in O(log(n)). Using this element as a pivot, we get 4 sub-matrices, and we can say that all elements in the top-left sub-matrix are smaller and all elements in the bottom-right sub-matrix are bigger. So we can remove those two from the search space.

Now, recursively search the top-right and the bottom-left sub-matrices.

At each step we perform a log(n) search (along the principal diagonal), and there can be at most log(n*n) steps (as we reduce the search space by half each step), which gives the bound above.
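One possible implementation of this recursion, sketched in Python (naming is mine; the sub-matrices become rectangular after the first split, so the "diagonal" used here is the diagonal of each sub-rectangle, and `s` counts its elements smaller than the target):

```python
def diagonal_partition_search(mat, x):
    """Binary-search the diagonal of each sub-rectangle, discard the
    all-smaller top-left and all-larger bottom-right blocks, and recurse
    into the remaining top-right and bottom-left rectangles."""
    if not mat or not mat[0]:
        return None

    def rec(r0, r1, c0, c1):          # inclusive bounds
        if r0 > r1 or c0 > c1:
            return None
        d = min(r1 - r0, c1 - c0)     # last index on this rectangle's diagonal
        lo, hi = 0, d + 1             # find s = count of diagonal elements < x
        while lo < hi:
            mid = (lo + hi) // 2
            if mat[r0 + mid][c0 + mid] < x:
                lo = mid + 1
            else:
                hi = mid
        s = lo
        if s <= d and mat[r0 + s][c0 + s] == x:
            return (r0 + s, c0 + s)
        # top-left block (all < x) and bottom-right block (all > x) discarded
        return (rec(r0, r0 + s - 1, c0 + s, c1) or
                rec(r0 + s, r1, c0, c0 + s - 1))

    return rec(0, len(mat) - 1, 0, len(mat[0]) - 1)
```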