// charsetName can be null to use the default charset.
public static String readFileAsString(String fileName, String charsetName)
        throws java.io.IOException {
    java.io.InputStream is = new java.io.FileInputStream(fileName);
    try {
        final int bufsize = 4096;
        int available = is.available();
        byte[] data = new byte[available < bufsize ? bufsize : available];
        int used = 0;
        while (true) {
            // Double the buffer when there is less than one chunk of headroom left.
            if (data.length - used < bufsize) {
                byte[] newData = new byte[data.length << 1];
                System.arraycopy(data, 0, newData, 0, used);
                data = newData;
            }
            int got = is.read(data, used, data.length - used);
            if (got <= 0) break;
            used += got;
        }
        return charsetName != null
                ? new String(data, 0, used, charsetName)
                : new String(data, 0, used);
    } finally {
        is.close();
    }
}

2 comments:

You don't really trust JVM implementations too much. :-) Otherwise you'd have used an InputStreamReader + a StringBuilder and appended each chunk of data read in to the end of the temporary buffer. Instead you decided to use a custom managed buffer (data[]) and handle the buffer allocation yourself.

It'd be interesting to see which performs better.
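For comparison, the alternative the commenter describes might look roughly like this. This is only a sketch of the InputStreamReader + StringBuilder approach, not the post's code; the class name `ReaderVariant` is mine, and it decodes while reading rather than after, so `available()` can't be used to presize anything.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;

public class ReaderVariant {
    // Same contract as the original: charsetName may be null for the default charset.
    public static String readFileAsString(String fileName, String charsetName)
            throws IOException {
        Reader in = charsetName != null
                ? new InputStreamReader(new FileInputStream(fileName), charsetName)
                : new InputStreamReader(new FileInputStream(fileName));
        try {
            StringBuilder sb = new StringBuilder();
            char[] buf = new char[4096];
            int got;
            // Append each chunk; StringBuilder handles buffer growth internally.
            while ((got = in.read(buf)) > 0) {
                sb.append(buf, 0, got);
            }
            return sb.toString();
        } finally {
            in.close();
        }
    }
}
```

The trade-off is that StringBuilder copies chars (two bytes each) on every resize and once more in toString(), whereas the byte-buffer version decodes exactly once at the end.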

Btw. I'd choose a buffer size larger than 4K. If your filesystem's block (or sector ... whatever you call it) size is larger than 4K, then a larger buffer size will perform better. If it's 4K or less, then having a buffer size that is a multiple of the block size performs the same as if you chose the block size for the buffer size. At least in theory. :-)