Apache Spark RDDs are not designed for lookups. The most "efficient" way to get the nth line is lines.take(n + 1).get(n), but every call re-reads the first n + 1 lines of the file. You could call lines.cache() to avoid re-reading the source, but take() will still move the first n + 1 lines over the network to the driver in a very inefficient dance.
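For concreteness, here is a minimal sketch of that pattern in Java, assuming a local-mode SparkContext and a hypothetical input file data.txt (both placeholders, not part of the question):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class NthLine {
    public static void main(String[] args) {
        // Local-mode context purely for illustration.
        SparkConf conf = new SparkConf().setAppName("nth-line").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // "data.txt" is a placeholder path.
        JavaRDD<String> lines = sc.textFile("data.txt");
        int n = 5;

        // Each call to take(n + 1) reads the first n + 1 lines from the source again.
        String nth = lines.take(n + 1).get(n);
        System.out.println(nth);

        // cache() keeps partitions in memory after the first action runs,
        // but take() still ships the first n + 1 lines to the driver every time.
        lines.cache();
        String nthAgain = lines.take(n + 1).get(n);
        System.out.println(nthAgain);

        sc.stop();
    }
}
```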

If the data can fit on one machine, just collect it all once, and access it locally: List<String> local = lines.collect(); local.get(n);
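A sketch of the collect-once approach, assuming a hypothetical local file that fits comfortably in driver memory:

```java
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CollectOnce {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("collect-once").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // "data.txt" is a placeholder; the whole file must fit in driver memory.
        JavaRDD<String> lines = sc.textFile("data.txt");

        // Pay the network and read cost exactly once...
        List<String> local = lines.collect();

        // ...then every subsequent lookup is a cheap in-memory list access.
        System.out.println(local.get(3));
        System.out.println(local.get(7));

        sc.stop();
    }
}
```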

If the data does not fit on one machine, you need a distributed system that supports efficient lookups. Popular examples are HBase and Cassandra.

It is also possible that your problem can be solved efficiently with Spark, just not via lookups. If you explain the larger problem in a separate question, you may get such a solution. (Lookups are very common in single-machine applications, but distributed algorithms have to be designed differently.)