A tree is suitable for decoding Morse code, that is, converting from dots and dashes to letters. As long as you know where the breaks between letters are, this is entirely feasible.
To go the other way, from letters to dots and dashes, there's no need to use a tree. Finding a letter in such a tree would be annoying. Just make a simple lookup table:
a => ...
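For illustration, a minimal Python sketch of that lookup table (the table is abbreviated, and the names `MORSE` and `encode` are my own, not from the answer):

```python
# Abbreviated encoding table; extend with the remaining letters as needed.
MORSE = {
    'a': '.-', 'e': '.', 'o': '---', 's': '...', 't': '-',
}

def encode(word):
    """Encode a word as Morse, separating letters with spaces."""
    return ' '.join(MORSE[ch] for ch in word.lower())

print(encode("sos"))  # ... --- ...
```

Encoding is a straight dictionary lookup per letter; no tree traversal is needed.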

The representation of the letters is determined (roughly) by "the frequency of use of letters in the English language . . ., and the letters most commonly used were assigned the shorter sequences of dots and dashes." You could build a tree where traversing to the left means appending a '.' and traversing to the right means appending a '-' to the previous code. You ...
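The decoding tree described above (left edge = '.', right edge = '-') can be sketched like this; the class and function names are my own, and the table is abbreviated:

```python
class Node:
    def __init__(self):
        self.letter = None
        self.dot = None    # left child, reached by '.'
        self.dash = None   # right child, reached by '-'

def build_decode_tree(table):
    """Insert each code into the tree, one symbol per edge."""
    root = Node()
    for letter, code in table.items():
        node = root
        for symbol in code:
            if symbol == '.':
                node.dot = node.dot or Node()
                node = node.dot
            else:
                node.dash = node.dash or Node()
                node = node.dash
        node.letter = letter
    return root

def decode(root, code):
    """Decode space-separated Morse; spaces mark the breaks between letters."""
    out = []
    for token in code.split():
        node = root
        for symbol in token:
            node = node.dot if symbol == '.' else node.dash
        out.append(node.letter)
    return ''.join(out)

MORSE = {'e': '.', 't': '-', 'a': '.-', 'n': '-.', 's': '...', 'o': '---'}
tree = build_decode_tree(MORSE)
print(decode(tree, '... --- ...'))  # sos
```

Note how the letter breaks matter: without the spaces between tokens, the walk would not know when to stop and restart at the root.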

I would consider an alternative: Skip Lists.
From a high-level point of view, it behaves like a balanced tree, except that it's not implemented as a tree but as a linked list with multiple layers of links.
You'll get O(log N) insertions / searches / deletes, and you won't have to deal with all those tricky rebalancing cases.
I've never considered implementing them in a ...
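To give a feel for how little machinery a skip list needs compared to a rebalancing tree, here is a compact sketch (node layout, level cap, and the p = 0.5 coin flip are my own choices, not from the answer):

```python
import random

class SkipNode:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level   # one next-pointer per layer

class SkipList:
    MAX_LEVEL = 8

    def __init__(self):
        self.head = SkipNode(None, self.MAX_LEVEL)  # sentinel head
        self.level = 1

    def _random_level(self):
        """Flip coins: each extra layer is taken with probability 1/2."""
        lvl = 1
        while lvl < self.MAX_LEVEL and random.random() < 0.5:
            lvl += 1
        return lvl

    def insert(self, key):
        # Record, per layer, the last node before the insertion point.
        update = [self.head] * self.MAX_LEVEL
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkipNode(key, lvl)
        for i in range(lvl):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def contains(self, key):
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

sl = SkipList()
for key in [3, 7, 1, 9]:
    sl.insert(key)
print(sl.contains(7), sl.contains(4))  # True False
```

Insertion just splices the new node into a few layers of lists; there are no rotation cases at all, which is the appeal the answer is pointing at.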

I would recommend you start with either a Red-Black tree, or an AVL tree.
The red-black tree is faster for inserting, but the AVL tree has a slight edge for lookups. The AVL tree is probably a little easier to implement, but not by all that much based on my own experience.
The AVL tree ensures that the tree is balanced after each insert or delete (no ...

One way to do this would be to create a method on your tree that measures the depth of the tree at a given node. You don't have to store the value, and if you use such a getDepth() method only for testing, then there's no extra overhead for normal tree operations. The getDepth() method would recursively traverse its child nodes and return the maximum depth ...
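A minimal sketch of that test-only depth method in Python (the `Node` shape and function names are assumptions for the example):

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def get_depth(node):
    """Maximum depth of the subtree rooted at `node`; an empty tree is 0.
    Only called from tests, so normal tree operations pay no overhead."""
    if node is None:
        return 0
    return 1 + max(get_depth(node.left), get_depth(node.right))

def is_balanced(node):
    """AVL-style check: sibling subtrees differ in depth by at most 1."""
    if node is None:
        return True
    return (abs(get_depth(node.left) - get_depth(node.right)) <= 1
            and is_balanced(node.left) and is_balanced(node.right))

lopsided = Node(Node(Node()))   # a chain of three nodes down the left side
print(get_depth(lopsided), is_balanced(lopsided))  # 3 False
```

Recomputing depths recursively like this is O(n log n) in the balanced case, which is fine for a test helper even though you would cache heights in a production AVL node.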

Here's how I would answer this question in an interview situation (I haven't seen this question before, and I didn't look at the other answers until I had my solution):
First, I tried to just figure it out (which you called the "math solution") and when I got to glass 8 I realized that it would be tougher than it seemed because glass 5 starts to overflow ...

There are some self-balanced trees such as Red-Black tree and AVL tree. For more information see:
Wikipedia: Self-balancing binary search tree
Wikipedia: Red–Black tree
Wikipedia: AVL tree
or Chapter 13 of the CLRS book

Yes, B-Trees still make good sense in managed languages.
A few points of explanation:
If you're using the B-Tree as an on-disk data structure, then I can absolutely guarantee that disk IO will be your bottleneck, not the fact that you are using a managed language.
If you are using a B-Tree in memory, then you can still have considerable control over ...

You need to do various things with trees, like translate between the data structure and some serial representation, like on a file or in a language.
So, for example, suppose you have a parse tree like this:
      *
     / \
    +   \
   / \   \
  A   B   C
You could serialize it as * + A B C by walking it in prefix order, or as A B + C * by walking it in postfix ...
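Those two serializations fall straight out of a recursive walk; a short sketch (the `Node` class and function names are mine):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def prefix(node):
    """Node first, then children: prefix (Polish) order."""
    if node is None:
        return []
    return [node.value] + prefix(node.left) + prefix(node.right)

def postfix(node):
    """Children first, then node: postfix (reverse Polish) order."""
    if node is None:
        return []
    return postfix(node.left) + postfix(node.right) + [node.value]

# The parse tree from the answer: * has children + and C; + has children A and B.
tree = Node('*', Node('+', Node('A'), Node('B')), Node('C'))
print(' '.join(prefix(tree)))   # * + A B C
print(' '.join(postfix(tree)))  # A B + C *
```

The postfix form is exactly what a stack machine would evaluate, which is one reason this serialization shows up in practice.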

Which is most efficient?
Vague and difficult to answer. The computational complexities are all well-defined. If that's what you mean by efficiency, there's no real debate. Indeed, all good algorithms come with proofs and complexity factors.
If you mean "run time" or "memory use" then you'll need to compare actual implementations. Then language, ...

Your first tree is not an AVL tree. Even with one element removed it is not an AVL tree (so we can't even consider it as an AVL tree undergoing rebalancing).
AVL operations are defined on AVL trees to produce a new AVL tree. They are defined in terms of the node that was inserted or deleted.
So, we don't have an AVL tree to start, and we have no information about ...

I don't think this question is unreasonably hard. It's certainly not trivial, but a good programmer should be able to write it with minor mistakes in a few minutes.
50 lines of code would indeed be pretty long for a whiteboard question, but this question can be solved in much less code.
Walk the tree recursively, keeping track of the depth.
Have a ...

Right. This should really be on UX, but I'll tell you where I think you're going wrong.
THE USER DOES NOT WANT TO "EDIT A BINARY TREE"
The user has no particular concept of left-hand values or nodes. Those are artifacts of your implementation approach to the problem.
The user has a task to complete. You need to consider what they will generally try to do ...

Thinking about this as a tree problem is a red herring; it's really a directed graph. But forget all about that.
Think of a glass anywhere below the top one. It will have one or two glasses above it that can overflow into it. With the appropriate choice of coordinate system (don't worry, see the end) we can write a function to get the "parent" glasses ...
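Turning that parent-oriented idea around and simulating the flow top-down gives a short sketch (the row/index coordinate system and the assumption that each glass holds 1 unit and splits its overflow evenly between the two glasses below are my framing, not spelled out in the answer):

```python
def fill(pour, rows):
    """Return the amount in each glass of a `rows`-deep pyramid after
    pouring `pour` units into the top glass. Glass (r, i) receives
    overflow from its parents (r-1, i-1) and (r-1, i) when they exist."""
    flow = [[0.0] * (r + 1) for r in range(rows)]
    flow[0][0] = pour
    for r in range(rows - 1):
        for i in range(r + 1):
            overflow = max(0.0, flow[r][i] - 1.0)
            flow[r + 1][i] += overflow / 2
            flow[r + 1][i + 1] += overflow / 2
    return [[min(1.0, x) for x in row] for row in flow]

amounts = fill(2.0, 3)
print(amounts)  # [[1.0], [0.5, 0.5], [0.0, 0.0, 0.0]]
```

With the coordinates chosen this way, edge glasses simply have one of their two "parents" out of range, so no special-casing is needed beyond the loop bounds.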

The point of having different algorithms to deal with binary trees is not to do things with trees. On this abstract level, one order is largely as good as any other, since you only get abstract symbols out of the procedure.
But trees are typically used to represent interesting stuff, and that can make a big difference in the outcome. For instance, if the ...

It is important to note that hash tables only have an average access time of O(1). This means a particular operation could be much worse. Additionally, there are several requirements for a properly formed hash table:
Mostly empty - few hash schemes perform well beyond a 70% load factor, and most recommend staying around 50%.
Collision handling is complex - either having to ...

Unlike a plain binary tree, AVL trees are self-balancing. When an element is inserted into an AVL tree, the tree may need to perform node rotations in order to maintain a certain tree depth, which allows for logarithmic lookup time. So, if you try to build a second AVL tree using pre-order node traversal on an existing AVL tree, the resulting tree will not ...

I don't think the question was difficult. I think they were simply testing whether you're comfortable parsing a tree recursively. Obviously, if you haven't done it before, it'll seem very difficult on the spot. On the other hand, now that you have actually done it, you will definitely know in an interview how to answer "recursively-parsing-trees" questions.
My point ...

This problem is solved using a queue-based level-order traversal.
http://en.wikipedia.org/wiki/Tree_traversal#Queue-based_level_order_traversal
You start by enqueuing the root node. You then push something on the queue that acts like a separator. Each time you deque an item, you push the left/right child of what you just removed from the queue onto the ...
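The separator trick described above might look like this in Python (using `None` as the separator; the `Node` shape and function name are assumptions):

```python
from collections import deque

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def levels(root):
    """Group node values by depth using a queue with a None separator."""
    if root is None:
        return []
    q = deque([root, None])
    out, current = [], []
    while q:
        node = q.popleft()
        if node is None:
            out.append(current)        # a whole level has been drained
            current = []
            if q:                      # more nodes remain: mark the next level's end
                q.append(None)
        else:
            current.append(node.value)
            if node.left:
                q.append(node.left)
            if node.right:
                q.append(node.right)
    return out

tree = Node(1, Node(2, Node(4)), Node(3))
print(levels(tree))  # [[1], [2, 3], [4]]
```

The key invariant: when the separator is dequeued, every node of the current level has already been processed and all of its children are behind it in the queue, so re-enqueuing the separator marks the end of the next level.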

The use of a managed language like Java, C# etc. has absolutely nothing to do with the way data is accessed from the drive, and in any case it certainly does not deprive developers of an iota of control over precisely how, when, and in what order data will be accessed from the drive.
The problem is elsewhere: managed languages suffer from the overhead of ...

If you're interested in Splay trees, there is a simpler version of those which I believe was first described in a paper by Allen and Munro. It doesn't have the same performance guarantees, but avoids complications in dealing with "zig-zig" vs. "zig-zag" rebalancing.
Basically, when searching (including searches for an insert point or node to delete), the ...

If you want a relatively easy structure to start with (both AVL trees and red-black trees are fiddly), one option is a treap - named as a combination of "tree" and "heap".
Each node gets a "priority" value, often randomly assigned as the node is created. Nodes are positioned in the tree so that key ordering is respected, and so that heap-like ordering of ...
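A sketch of treap insertion, assuming random priorities and a max-heap ordering on them (the structure and names here are one common formulation, not the only one):

```python
import random

class TreapNode:
    def __init__(self, key):
        self.key = key
        self.priority = random.random()   # assigned once, at creation
        self.left = None
        self.right = None

def rotate_right(node):
    l = node.left
    node.left, l.right = l.right, node
    return l

def rotate_left(node):
    r = node.right
    node.right, r.left = r.left, node
    return r

def insert(node, key):
    """Plain BST insert, then rotate the new node up while its
    priority beats its parent's (restoring the heap property)."""
    if node is None:
        return TreapNode(key)
    if key < node.key:
        node.left = insert(node.left, key)
        if node.left.priority > node.priority:
            node = rotate_right(node)
    else:
        node.right = insert(node.right, key)
        if node.right.priority > node.priority:
            node = rotate_left(node)
    return node

def inorder(node):
    return [] if node is None else inorder(node.left) + [node.key] + inorder(node.right)

root = None
for key in [5, 2, 8, 1, 9, 3]:
    root = insert(root, key)
print(inorder(root))  # [1, 2, 3, 5, 8, 9]
```

Because the priorities are random, the tree's shape is that of a random BST regardless of insertion order, which is where the expected O(log n) depth comes from, with only two symmetric rotation cases to implement.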

Pre-order traversal is a traversal: it visits every node in a binary tree.
Depth-first search is a search: it explores an arbitrary graph looking for a certain node (that it works best on an acyclic graph, a.k.a. a tree, is irrelevant).
That alone is a large enough difference to call them different names.

Indexes tend to be much smaller than the table. If the whole index fits in memory, then there will be an average of 1 disk seek per random lookup. If not, there will generally be 2 disk seeks (once for the index, once into the table for your actual data). Keep in mind that a disk seek averages 1/200th of a second. If you're going to do a million lookups, ...
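The back-of-envelope arithmetic with the answer's numbers (1/200 s per seek, a million lookups) looks like this; the variable names are just for the example:

```python
SEEK_SECONDS = 1 / 200      # average disk seek time, per the answer
LOOKUPS = 1_000_000

in_memory_index = LOOKUPS * 1 * SEEK_SECONDS   # 1 seek per lookup (index cached)
on_disk_index = LOOKUPS * 2 * SEEK_SECONDS     # index seek + table seek

print(in_memory_index, on_disk_index)  # 5000.0 10000.0 seconds (~83 vs ~167 min)
```

Either way, seek time dwarfs everything else, which is the point: the dominant cost is disk IO, not in-memory comparisons.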

I completed kevincline's answer to include a tree class and also made some minor changes for convenience. Please note that naked pointers are not a good practice and are only used for the sake of this example.
Here is the complete, compilable code:
#include <iostream>
#include <map>
#include <vector>
class Tree {
public:
explicit ...

Yes, but there isn't a standard name for it shorter than what you've already written.
Wikipedia says:
[...] by using a self-balancing tree, the theoretical worst-case time of common hash table operations (insertion, deletion, lookup) can be brought down to O(log n) rather than O(n). However, this approach is only worth the trouble and extra memory cost ...