It is hard to give a definite answer to your question. Probably you are the only one able to answer.
If your second code works for you and is correct, why should it be wrong?

The first section of your code requires, at least temporarily, about 1 GB of memory in one way or another: roughly 480 MB for readAll and afterwards at least 480 MB for setText. There are a couple of ways to make it even worse, but this cannot be seen in your given code.

The second code section is much less prone to wasting memory in the reading process.

From what you posted, the section using readAll will temporarily consume at least:
readAll -> 480 MB
setText -> 480 MB
Total: 960 MB, almost 1 GB
Possibly setText is implemented such that it temporarily consumes another 480 MB, which may result in approximately 1.5 GB.

All this is because of one code line.

The second version you have posted reads line by line and appends the content to your storage. Assuming a pretty long line of 1 kB, each single read is peanuts. Once you have read everything, you have still increased your memory requirement by approximately 480 MB, but you got there step by step.

When you are working on a computer with a 32-bit OS (or 32-bit Qt libs), you are temporarily pretty close to the limits with the first approach. The second approach looks safer, judging from the part of the code you posted.

QScintilla is using Qt, not the other way around: QScintilla is a port of Scintilla to Qt. You had better ask questions in their forum; over here it may take longer to get an answer.

Typically, readLine reads up to the end of the line (marked by a newline character), both in the stdlib and in Qt. However, the newline character itself is typically not stored in the string. Therefore you need to check what append of QsciScintilla does: if it does not add a newline character, this could be your problem.

As @jsulm already pointed out, having a newline character after each character sounds strange.

@jsulm I did not provide the code snippet.

In case you like to add a newline character after each line, you may try:

append(line);
append("\n");

in your example above. No guarantee that this works, because QsciScintilla is not part of Qt.

I have tried this code; it does not crash, but the tool hangs. Do we have a solution so that I can remove the hang and the files load incrementally on moving the scrollbar, so that the user is able to see some action?

QString QTextStream::readAll()
Reads the entire content of the stream, and returns it as a QString. Avoid this function when working on large files, as it will consume a significant amount of memory.
Calling readLine() is better if you do not know how much data is available.

Possibly the implementation is consuming too much space during reading. There are always different ways to implement things.

Just reading the crystal ball: FILE is a C construct. Certainly you can use it easily in C++, but you just have a file handle to read from. With only the handle, the final amount of data is not known until you have read the complete file. Basically you need to read into a buffer whose required size is not known at the start. Some containers have an issue here: string containers rely on the allocation of continuous memory, because the whole container is accessed through a single pointer. Consequently the routine reads a chunk of data until the buffer is full; in order to read more, it has to allocate a new, larger chunk. A common technique is doubling the already allocated memory in order to avoid too many allocations. However, this has a disadvantage:
Assume the first allocation is 1 MB;
next it will allocate 2 MB,
copy the 1 MB,
release the 1 MB,
and continue reading another 1 MB.
In between it holds 1.5 times the final size (3 MB allocated for what ends up as 2 MB of data). Since you are really at the system limits with your file size, this might be the case: just a single additional byte to read, and for a moment the routine at least triples the required allocated memory, and you are gone.

QFile probably knows more about the file than FILE* does. It knows the file size, and readAll is simply able to allocate the proper size without guessing.

BTW, you should be able to observe this in a task manager (on Windows, for instance). Since the reading takes considerable time, you should be able to follow the memory allocation process.