Thanks for this example.
I am trying to adapt it to a dataset with the same column structure for simplicity; the only difference is that the datetime field also has milliseconds, i.e. 3 digits after the dot (like so: 02/03/2010 16:53:50.158).
As such I have made a slight modification to the code, as per below. The problem I have is that I then get 3 extra seemingly random digits appended to those milliseconds (e.g. instead of 02/03/2010 16:53:50.158 I get 02/03/2010 16:53:50.158003):
parser.add_argument('--dtformat', '-dt',
                    required=False, default='%Y%m%d %H:%M:%S.%f',
                    help='Format of datetime in input')
I have tried replacing the default above with "default='%Y%m%d %H:%M:%S.%fff'", but that doesn't work.
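For what it's worth, %f already accepts 1 to 6 fractional digits, so a 3-digit millisecond field parses without any modifier (a stdlib sketch, assuming the day/month ordering of the sample timestamp; '%fff' is not a recognised directive, which is why it fails):

```python
from datetime import datetime

ts = "02/03/2010 16:53:50.158"
dt = datetime.strptime(ts, "%d/%m/%Y %H:%M:%S.%f")

# %f right-pads to microseconds, so .158 is stored as 158000 microseconds
print(dt.microsecond)  # 158000
```

The extra digits therefore do not come from the parse itself; they appear later, when the datetime is converted to a float.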

Would it be fair to say that this fix is only cosmetic, and that the 3 extra random digits appended to the microseconds are still occurring in the background?
I am just wondering whether I would get consistent results should we process those microsecond ticks (for instance, in terms of tick counts within 5 ms OHLC bars).
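On the 5 ms-bar question: since the encoding jitter is only a few microseconds, a tick can only change bars if it sits within that jitter of a bar boundary. A rough stdlib sketch (hypothetical helper, not backtrader code) illustrating the idea:

```python
from datetime import datetime, timedelta

def bar_index(dt, origin, width=timedelta(milliseconds=5)):
    """Index of the fixed-width bar a tick falls into (floor division)."""
    return int((dt - origin) // width)

origin = datetime(2010, 3, 2, 16, 53, 50)
tick     = origin + timedelta(microseconds=158000)  # .158 exactly
jittered = origin + timedelta(microseconds=158003)  # the drifted value

# both land in the same 5 ms bar, because 3 us << 5 ms
print(bar_index(tick, origin), bar_index(jittered, origin))  # 31 31

# only a tick within the jitter of a boundary can flip bars:
print(bar_index(origin + timedelta(microseconds=160000), origin))  # 32
print(bar_index(origin + timedelta(microseconds=159998), origin))  # 31
```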

I have tried the date2num function, which I noticed you use a few times across the background files, but it isn't working either.
At this stage, I am at a loss as to where it would make sense to apply any modification so as not to corrupt the input (tick data from TrueFX).

The datetimes are coded to a float (using matplotlib definition). The coding (as any coding trying to fit microseconds into 8 bytes) loses precision.
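To put a number on that precision loss with nothing but the stdlib, here is a sketch of a matplotlib-style day-count encoding (a hypothetical stand-in, not matplotlib's actual code; its classic epoch is also year 1):

```python
import math
from datetime import datetime, timedelta

EPOCH = datetime(1, 1, 1)

def to_float_days(dt):
    """Encode a datetime as fractional days since EPOCH."""
    return (dt - EPOCH).total_seconds() / 86400.0

def from_float_days(num):
    """Decode fractional days back to a datetime."""
    return EPOCH + timedelta(days=num)

dt = datetime(2010, 3, 2, 16, 53, 50, 158000)  # 02/03/2010 16:53:50.158
num = to_float_days(dt)

# one ULP (the gap between adjacent representable floats) near 2010,
# expressed in microseconds: roughly 10 us of resolution
ulp_us = math.ulp(num) * 86400 * 1e6
print(round(ulp_us, 1))

# so the round trip typically lands a few microseconds off, e.g. .158003
roundtrip = from_float_days(num)
print(roundtrip)
```

With ~10 µs of float resolution at that date, a drift of 3 µs (158000 → 158003) is exactly the behaviour described above.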

In any case, I can't really understand what the real problem is here. From the above it seemed you couldn't parse the timestamps, but it now seems you are concerned about display issues and that the timestamps have been parsed correctly all along.

@backtrader - This is really helpful. I looked into dateintern.py to better grasp where the rounding occurs. I now understand why adding (timespec='milliseconds') brought more confusion; I have therefore removed it, as it was truncating rather than rounding to the nearest .001 s. However, to your point, this was only a display issue - in the background the actual rounding error is insignificant and clearly not a source of concern. See the relevant code here for those with an interest:
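As an aside, the truncating behaviour of timespec='milliseconds' can be seen with the stdlib alone (just an illustration, not backtrader's dateintern.py):

```python
from datetime import datetime

# isoformat(timespec='milliseconds') truncates the microsecond field;
# it does not round to the nearest millisecond:
dt = datetime(2010, 3, 2, 16, 53, 50, 158999)
print(dt.isoformat(timespec='milliseconds'))  # ...16:53:50.158, not .159
```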

Going deep into your code really helped me appreciate how much work was put into this (and I know I am only scratching the surface). This is humbling - I wish I could code like you do :-).
For newbies like myself, it is probably worth mentioning the F7 key in PyCharm's debugging facility to execute code line by line. This is a real life saver. Hope that helps some people.

One last question on this. As mentioned, I did use a CSV file with the exact same column structure as in your example (columns = [bid, ask, datetime in last position]). The original file, however, comes with the structure [symbol, datetime, bid, ask], as shown here:

The above won't work with lines = ('PAIR', 'datetime', 'bid', 'ask'); however, replacing that section with the line below does not return an error message and so seems to be working fine:

lines = ('datetime','bid', 'ask')

I understand that the field 'PAIR' being a string is what causes the error (code section below is from _loadline), hence my questions:

What is the underlying rationale? Is it because "lines" should only refer to preset categories defined in backtrader (datetime, OHLC, volume, etc.)?

What is the impact of simply not mentioning 'PAIR' in the "lines" tuple (if any)?

for linefield in (x for x in self.getlinealiases() if x != 'datetime'):
    # Get the index created from the passed params
    csvidx = getattr(self.params, linefield)

    if csvidx is None or csvidx < 0:
        # the field will not be present, assign the "nullvalue"
        csvfield = self.p.nullvalue
    else:
        # get it from the token
        csvfield = linetokens[csvidx]

    if csvfield == '':
        # if empty ... assign the "nullvalue"
        csvfield = self.p.nullvalue

    # get the corresponding line reference and set the value
    line = getattr(self.lines, linefield)
    line[0] = float(float(csvfield))  # Why is the expectation to ALWAYS have a float - can it be changed for more flexibility? (e.g. the 'PAIR' field in the TrueFX import raises an error)

return True
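A stripped-down mock of that loop (plain Python, not backtrader itself; the token values are made up) shows why a string column such as 'PAIR' breaks the float() conversion, and why leaving it out of "lines" sidesteps the problem entirely:

```python
# one tick row, TrueFX-style column order: [symbol, datetime, bid, ask]
linetokens = ['EUR/USD', '20100302 16:53:50.158', '1.2345', '1.2347']

# column index per declared line, mimicking the params lookup;
# 'PAIR' (col 0) and 'datetime' (col 1) are deliberately not mapped
colmap = {'bid': 2, 'ask': 3}

# numeric columns convert cleanly, just like in _loadline
values = {name: float(linetokens[idx]) for name, idx in colmap.items()}
print(values)  # {'bid': 1.2345, 'ask': 1.2347}

# had 'PAIR' been declared as a line, the loop would eventually attempt:
try:
    float(linetokens[0])
    pair_error = None
except ValueError as exc:
    pair_error = exc
print(pair_error)  # could not convert string to float: 'EUR/USD'
```

In other words, lines exist to carry numeric series, so a symbol column is best left out of the tuple (or handled separately); omitting it simply means no line buffer is created for it.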