"Simplicity does not precede complexity, but follows it." -- Alan Perlis
"Testing can only prove the presence of bugs, not their absence." -- Edsger Dijkstra
"The only real mistake is the one from which we learn nothing." -- John Powell

On my compiler (g++), maps are implemented as a red-black tree. Other implementations could use different balanced trees (a hash table wouldn't do, since std::map must keep its keys ordered), but however it's done it will be very fast. Seriously, I wouldn't worry about it.

There are quicker ways. In one of my projects I had a sort of settings list, and I converted each string setting into an enumeration value and then always indexed by that. It was a really stupid idea: it made me work much harder as a programmer for a fractional speed increase that I'd probably never even notice.

Maybe you could describe more of what you're trying to do? That way you'll receive more informed suggestions.

I thought about map, but then there will be some time wasted for searching elements in the map...

"wasted"?
How is the time wasted if it is doing exactly what you want it to?
It's not exactly slow to perform a map.find; it doesn't search through all the entries just to find the one you want. It's far smarter than that.

Well, as they say, the fastest instruction is the one that is never executed.
Though sure, I didn't offer any explanation of an alternative.

So in effect you want GetField() to get a value from the correct outer vector, at the inner-vector position corresponding to how many times GoToNextRecord() has been called, right?

Well then you're duplicating a lot of work. Each time through the loop you're constructing several strings from string literals, and essentially looking up the index of the outer vector to find which inner vector corresponds to that field.
The best thing you can do then is to resolve the field name to an outer vector index before the loop, and then inside the loop, just use the index.

Then, as I'd hoped, there won't be any such lookups by name inside the loop at all. Not quite "not doing it at all", but "not doing it in the loop" is the next best thing.

For the actual lookup, you could use a vector< pair<string, int> >, kept sorted by key, and search it with std::lower_bound, which will be a tad faster than a map lookup.

On my compiler (g++), maps are implemented as a red-black tree. I guess other implementations could use hash tables or other data structures, but any way it's done it will be very fast. Seriously, I wouldn't worry about it.

VC++ also implements it as a red-black tree. Other implementations could use a different type of balanced tree, though, I suppose.
Lookup time is O(log n) for map. For hash maps it is not as clear-cut; the order depends on how the hash table is implemented. With a good hash function and load factor, lookup can be O(1), but with many collisions it can degrade to linear time.
Insertions are also a concern. Trees take O(log n) time, but a single hash table insertion can cost O(n) when it triggers a rehash (though it's amortized O(1)). It all depends on how well distributed the table is when the insertion is done.

std::map is very fast, as Elysia and many others have said. The only thing I would caution on, based on experience with the MSVS P.J. Plauger implementation, is not to iterate through the collection in time-critical loops. Iteration through a map is quite slow, sometimes as much as four times slower than iterating through a vector.

But combine the map with a vector or some other data structure and you get the best of both worlds. If iteration time is not important, or you are not going to iterate the collection often, then I would not worry about it. It all depends on your requirements as to which approach you take.

I suppose it would make more sense to iterate through a skip list. Iteration should be fast, since the bottom level is just a linked list.
Insertion is O(log n) amortized time (or guaranteed, if you use a deterministic skip list).
I have been pondering that, but never actually tested it. There is no deterministic skip list in the standard library either.

How large is this file? Reading a large amount of data into STL structures can be an unnecessary waste because of the per-element memory overhead of each structure. Read your file into RAM in one contiguous block. If you need to look certain things up fast, create pointer arrays that index into this block. Jumping around in these pointer arrays can then be driven by maps or something like that.