The BOM (byte order mark) is a byte sequence prepended to the file contents; for UTF-16 it identifies both the encoding and the byte order (little- or big-endian).

I am under the impression that MS documents (Excel, Word) contain this sequence not at the beginning of the file but somewhere later, because they reserve the first few bytes for their header information. (They even write a BOM for UTF-8, which sucks, of course.)

Because the document and BOM can be UTF-16 LE or BE, Encode does need to know which kind it is. Check this Stack Overflow answer by brian d foy.
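If you want to see what that distinction looks like in practice, here is a minimal sketch of sniffing the BOM yourself before deciding which encoding to hand to Encode; the function name is my own invention, not part of any module:

```perl
use strict;
use warnings;

# Map the leading bytes of a file to an encoding name, based on the
# BOM. Returns undef when no BOM is present.
sub bom_encoding {
    my ($bytes) = @_;
    return 'UTF-16LE' if $bytes =~ /^\xFF\xFE/;
    return 'UTF-16BE' if $bytes =~ /^\xFE\xFF/;
    return 'UTF-8'    if $bytes =~ /^\xEF\xBB\xBF/;
    return undef;     # no BOM: you have to guess or assume a default
}
```

With no BOM you cannot tell LE from BE by the bytes alone, which is exactly why Encode has to be told.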

However, to simplify the process and save yourself the trouble with Excel, I would suggest opening the Excel file and exporting the data as a UTF-8 text or CSV file. Then you can use your Perl parser/module of choice to get at the contents.
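For the "parser of choice" step, a sketch of reading that exported file with Text::CSV might look like this (assuming Text::CSV is installed; the explicit `:encoding(UTF-8)` layer is what decodes the bytes into Perl characters):

```perl
use strict;
use warnings;
use Text::CSV;   # assumption: Text::CSV from CPAN is available

# Read every row of a UTF-8 CSV file into an array of arrayrefs.
sub read_csv_rows {
    my ($file) = @_;
    my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });
    open my $fh, '<:encoding(UTF-8)', $file or die "$file: $!";
    my @rows;
    while (my $row = $csv->getline($fh)) {
        push @rows, $row;
    }
    close $fh;
    return \@rows;
}
```

Without the encoding layer you would read raw bytes, and any later encode step would double-encode them.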

nikosv: I took your suggestion to export from Excel as a CSV file, and that did help a great deal; I want to thank you for that. It gave me a solid place to move from. By reading the CSV file and writing it immediately back out as a first step, I was able to verify the characters were like for like. Then I stacked on the additional steps of importing into the DB, etc., and watched what happened.
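The read-then-write-immediately check can be sketched as a small round-trip helper (the function name and file names are my own; the point is the explicit UTF-8 layer on both handles, so characters are decoded on read and re-encoded identically on write):

```perl
use strict;
use warnings;

# Copy a file through explicit UTF-8 layers on both handles. If the
# output bytes match the input bytes, the decode/encode round trip
# is clean and nothing is being mangled.
sub roundtrip_utf8 {
    my ($in_name, $out_name) = @_;
    open my $in,  '<:encoding(UTF-8)', $in_name  or die "$in_name: $!";
    open my $out, '>:encoding(UTF-8)', $out_name or die "$out_name: $!";
    print {$out} $_ while <$in>;
    close $_ for $in, $out;
}
```

Comparing the two files byte for byte (e.g. with `cmp`) then confirms the characters really are like for like before adding the DB steps.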

It was also quite useful to understand the bit about double encoding. I noticed as I made code changes that the data became more or less mangled, and I found something quite amazing by watching this behavior. When creating my tables in SQLite, I used DBI more or less directly; I had been using the sqlite_unicode connection setting, but found I needed NOT to set sqlite_unicode => 1 in the connection statement. Contrary to that, when running queries through the DBIx::Class machinery, I DID need to set sqlite_unicode => 1 in the DB connection. Once all that was sorted out, all data were read, inserted into the DB, read back out of the DB, and written to the final files with the encoding preserved.
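For reference, the two connection styles I am describing look roughly like this (assuming DBI and DBD::SQLite are installed; the database name is an in-memory placeholder and the schema class name is hypothetical):

```perl
use strict;
use warnings;
use DBI;   # assumption: DBI and DBD::SQLite are available

# Plain DBI path: in my setup, sqlite_unicode had to be LEFT OFF here,
# or the data came back double-encoded.
my $dbh = DBI->connect(
    'dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1 },              # note: no sqlite_unicode => 1
) or die $DBI::errstr;

# DBIx::Class path: here the flag had to be ON. Schema class is a
# hypothetical stand-in for your own.
# my $schema = My::Schema->connect(
#     'dbi:SQLite:dbname=data.db', '', '',
#     { RaiseError => 1, sqlite_unicode => 1 },
# );

$dbh->disconnect;
```

The asymmetry makes some sense once you think in terms of double encoding: each layer of the stack should decode/encode exactly once, so where that happens depends on which layers sit between your code and the database file.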