Hi, I've just started working with SQL Server and am having an issue with bulk insert. When I try to load data from a file into a table I created, some of the records from the file return errors when trying to load a certain part of the record. The error I'm receiving is:

And here's an example of the part of the faulty records that's giving the error:

3179815000.48

I've spent hours looking at this and can't find the problem. There are no delimiter errors, and no matter what I change the data type to I get the same error. Any help would be greatly appreciated! Also, sorry if I have anything formatted incorrectly; I'm entirely new here :P If you need any more info, feel free to let me know. Thanks in advance.

Although you have ruled out delimiter errors, this error is most often caused by delimiter problems.

I have seen two kinds:

- the field terminator error, which happens very often with comma-delimited files when a comma appears inside the data itself. That shifts the columns, so data that is not numeric lands in a column where it should not be.

- the row terminator error, which happens when BULK INSERT is expecting \r, \n, or \r\n and the file does not exactly match what BULK INSERT is expecting.
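To rule both out, it helps to state the terminators explicitly rather than relying on the defaults. Here is a minimal sketch; the table name, file path, and pipe delimiter are assumptions for illustration:

```sql
-- Hypothetical names: dbo.MyTable and C:\data\input.txt are placeholders.
BULK INSERT dbo.MyTable
FROM 'C:\data\input.txt'
WITH (
    FIELDTERMINATOR = '|',    -- must match the file's delimiter exactly
    ROWTERMINATOR   = '\n',   -- try '\r\n' if rows fail to split cleanly
    FIRSTROW        = 2       -- skip a header row, if the file has one
);
```

If changing ROWTERMINATOR between '\n' and '\r\n' changes which rows fail, you have found your problem.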

In your case, I suspect the latter, because you don't usually see pipes as part of the data.

Assuming it is not a delimiter problem, another thing you could try (and this is what I always do) is to import the data into a staging table whose columns are all VARCHAR or NVARCHAR. That load should succeed with no conversion errors. Then use a script to move the data to your destination table. In my experience, it is easier to debug and catch errors when you do it this way.
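A rough sketch of the approach, with made-up table and column names (adjust the column list and target types to match your actual file and table):

```sql
-- Staging table: one VARCHAR column per field in the file.
CREATE TABLE dbo.StagingTable (
    Col1 VARCHAR(512),
    Col2 VARCHAR(512),
    Col3 VARCHAR(512)
);

-- The load into all-VARCHAR columns should succeed without conversion errors.
BULK INSERT dbo.StagingTable
FROM 'C:\data\input.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');

-- Then convert and move the rows into the real destination table.
INSERT INTO dbo.FinalTable (Col1, Col2, Col3)
SELECT Col1, Col2, CAST(Col3 AS DECIMAL(18, 2))
FROM dbo.StagingTable;
```

If the final INSERT fails, the raw text is still sitting in the staging table where you can inspect it.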

Yet another possibility is to use SSIS, which can process good data to the actual destination table and send bad data to a different destination.

For your first suggestion, by staging table do you mean a table with identical column names but with all the data types changed to varchar? Also, the only rows getting this error are decimal numbers with 9 or more characters, which most programs convert to scientific notation (when I open the data file in Excel, for example, it automatically converts them). Does this have anything to do with it? And thanks for your help!

Yes, creating an intermediate table with identical column names and every column typed as varchar(512) or something similar is exactly what I meant. You would then import into that intermediate table.

In the absence of a staging table, each time there is an issue with the data you are going to be at a loss to figure out exactly what went wrong. With a staging table, you will always get the data into the table (well, in most cases). Once you have it there, if the move to the final target table fails, it is easier to examine the data and fix any problems.
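For example, you can query the staging table for exactly the rows that won't convert. This is a sketch using placeholder column names; ISNUMERIC exists on all versions, while TRY_CONVERT requires SQL Server 2012 or later and is more reliable:

```sql
-- Rows where Col3 is not recognizable as a number at all.
SELECT *
FROM dbo.StagingTable
WHERE ISNUMERIC(Col3) = 0;

-- SQL Server 2012 and later: rows that fail conversion to the target type.
SELECT *
FROM dbo.StagingTable
WHERE TRY_CONVERT(DECIMAL(18, 2), Col3) IS NULL
  AND Col3 IS NOT NULL;
```

If values like 3179815000.48 show up here in scientific notation (e.g. 3.17982E+09), then something upstream rewrote the file, and that would explain the conversion failures.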

As to your question about whether the numeric data type is causing the problem, I don't think it is. If you are unable to use a staging table, you might consider using a format file to specify the exact layout of the expected input data: http://msdn.microsoft.com/en-us/library/ms178129.aspx
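For reference, a non-XML format file for a hypothetical three-column, pipe-delimited file might look like this (the version number, field lengths, column names, and collation are all placeholders you would adjust to your file and server version):

```
9.0
3
1  SQLCHAR  0  50  "|"     1  Col1  ""
2  SQLCHAR  0  50  "|"     2  Col2  ""
3  SQLCHAR  0  20  "\r\n"  3  Col3  ""
```

You would then point BULK INSERT at it with `WITH (FORMATFILE = 'C:\data\input.fmt')`, which forces BULK INSERT to parse each field exactly as described instead of guessing.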