I don't shy away from complexity.
Actually I prefer to (and did) write my own library for it... nearly 100 lines of code, just to open a .txt file... My Lord!

...

So much for ... fp = fopen("myfile.txt","r");

And you even write really compact code at that.
I'm curious though: unless this is professional work or something, this seems like overkill for most projects. Not very many people use anything other than ASCII or UTF-8 in plain text files. And even then, in what cases do you really need to be able to open UTF-32 BE as well as ASCII?

This entire headache started when I was asked to convert a large number of playlist files that all referred to the same FTP file structure, but were written on several different machines, using differing media players and text editors... When checking the files I discovered that only about half of them were simple ASCII... some were UTF8 (which was the target format)... but the rest were a dog's breakfast of UTF16BE and UTF16LE with both Windows (CR/LF) and POSIX (LF) line endings... Just to make it worse, some had Byte Order Marks, most didn't. The site operator wanted to standardize on UTF8 because it's the "new" internet standard and because he figured it would save him a fair chunk of disk space. My job was first to batch convert what he already had, then automate the conversion for new uploads. Some of these playlists have 10,000 files listed in them.

That's where that FileLaunch function came from...
It will open any text format ... ASCII, ANSI, UTF16LE/BOM, UTF16LE, UTF16BE/BOM, UTF16BE, UTF8/BOM or UTF8... it also handles line endings such as CR/LF, LF/CR, CR, LF, NULL...
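FileLaunch itself isn't posted in this thread, so here's a minimal sketch of the BOM-sniffing step such a function would have to start with. All names, the enum, and the return convention are my own assumptions, not the actual FileLaunch code:

```c
#include <stddef.h>

/* Possible text encodings distinguishable by byte order mark alone. */
typedef enum {
    ENC_ASCII_OR_UTF8,  /* no BOM found; byte-oriented text, needs heuristics */
    ENC_UTF8_BOM,
    ENC_UTF16_LE,
    ENC_UTF16_BE,
    ENC_UTF32_LE,
    ENC_UTF32_BE
} TextEncoding;

/* Inspect up to the first 4 bytes of a buffer for a byte order mark.
   *bomlen receives the number of bytes to skip past the BOM. */
TextEncoding SniffEncoding(const unsigned char *buf, size_t len, size_t *bomlen)
{
    *bomlen = 0;
    /* Check the 4-byte UTF-32 marks first: the UTF-32LE BOM (FF FE 00 00)
       begins with the UTF-16LE BOM (FF FE), so order matters here. */
    if (len >= 4 && buf[0]==0xFF && buf[1]==0xFE && buf[2]==0x00 && buf[3]==0x00)
        { *bomlen = 4; return ENC_UTF32_LE; }
    if (len >= 4 && buf[0]==0x00 && buf[1]==0x00 && buf[2]==0xFE && buf[3]==0xFF)
        { *bomlen = 4; return ENC_UTF32_BE; }
    if (len >= 3 && buf[0]==0xEF && buf[1]==0xBB && buf[2]==0xBF)
        { *bomlen = 3; return ENC_UTF8_BOM; }
    if (len >= 2 && buf[0]==0xFF && buf[1]==0xFE)
        { *bomlen = 2; return ENC_UTF16_LE; }
    if (len >= 2 && buf[0]==0xFE && buf[1]==0xFF)
        { *bomlen = 2; return ENC_UTF16_BE; }
    return ENC_ASCII_OR_UTF8;
}
```

The BOM only settles the easy half of the problem, of course: the BOM-less UTF16 files in the story above would all fall through to the last return and still need a content-based guess.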

But it did not pay my increased Excedrin bill.

How often do you encounter this? Well, if you are working with files that you did not create... it could happen at any time. In fact, it will happen more and more as time goes on and we move away from the "English is the language of computing" mindset. Everything is going to get more complex...
I recently had the joy (?) of watching one of my programs working with Cyrillic file names...

I really do wish they would settle down and say "From now on text is represented as 32bit values"... and get back to the "one standard" rule... This business of having to crack upwards of 20 possibilities in every program where I load a file is just ridiculous.

I'm just waiting for the day an instructor gives his class a text file to process in an exercise and saves the thing as UTF16BE/BOM for a course based on Windows.

Your code is unreadable due to poor indentation (or should I say code style?).
And where does finally come from?
And finally, if you would just look around, you would find a good Unicode handling library. There's even one for C++ that makes opening files a cinch!
You don't need to reinvent the wheel every time.

Originally Posted by Adak

io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.

Originally Posted by Salem

You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

I really do wish they would settle down and say "From now on text is represented as 32bit values"... and get back to the "one standard" rule... This business of having to crack upwards of 20 possibilities in every program where I load a file is just ridiculous.

Well, you know how it works. When you try to replace N technologies with a new one, you end up with N+1 technologies.

All the buzzt! CornedBee

"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law

Your code is unreadable due to poor indentation (or should I say code style?).
And where does finally come from?
And finally, if you would just look around, you would find a good Unicode handling library. There's even one for C++ that makes opening files a cinch!
You don't need to reinvent the wheel every time.

We've had this discussion before... you don't like the way I set up my code... I do. (Get used to it.)
try and finally are implemented by another library I wrote that does SEH on Pelles C. (Open source on the Pelles C forums.)

Why would I look for another library? I already wrote one... It's all nicely tucked away as a .lib file I can use any time I need it.

Moreover, it's far more successful than the IsTextUnicode() call in the WinAPI... theirs got about half the files I was working with wrong... mine hasn't missed yet, and it goes the extra step of actually translating the text to wchar_t (UTF16LE) for me.
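For BOM-less UTF16 files of the mostly-ASCII kind described above (playlists full of file paths), one common heuristic is to look at where the zero bytes fall: ASCII text stored as UTF-16LE puts a NUL at every odd offset ("A\0B\0..."), UTF-16BE at every even offset ("\0A\0B..."). This is my own illustration of that idea, not the poster's actual detection code:

```c
#include <stddef.h>

typedef enum { GUESS_8BIT, GUESS_UTF16LE, GUESS_UTF16BE } WidthGuess;

/* Guess whether a buffer of mostly-ASCII text is BOM-less UTF-16,
   by counting NUL bytes at even vs. odd offsets. */
WidthGuess GuessUtf16(const unsigned char *buf, size_t len)
{
    size_t even_nul = 0, odd_nul = 0, i;
    for (i = 0; i + 1 < len; i += 2) {
        if (buf[i]     == 0) even_nul++;
        if (buf[i + 1] == 0) odd_nul++;
    }
    /* Demand a clear majority pattern before committing; otherwise
       treat the data as ordinary byte-oriented text. */
    if (odd_nul  > len / 4 && odd_nul  > even_nul * 4) return GUESS_UTF16LE;
    if (even_nul > len / 4 && even_nul > odd_nul  * 4) return GUESS_UTF16BE;
    return GUESS_8BIT;
}
```

The thresholds here are arbitrary; the point is only that the even/odd NUL pattern is a far stronger signal for this kind of data than whatever statistical tests IsTextUnicode() applies.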

I dunno, Elysia, you seem bent upon using 3rd party libraries for everything. I've never used one yet (except to mess around with). I actually enjoy the challenges of writing my own .lib files...

But the Windows API is a third-party library: the first party is your stuff, the second party is ISO C stuff, and the third party is Windows stuff. "Never" makes the statement false more often than not, just like you learned in school.