Godzil wrote:Also, if the floppy has some defect, it is possible that the hardware tries to resync (the $C2)

Well, I tested 2 different controllers (mine and NightBird's), 3 drives (both 3" and 3.5") and about 5 or 6 different disks. On each test I read the same track several times and compared the results; that's how I saw those $C2 bytes.
I haven't looked any further into the reliability of the reading; it was already too low for me. Even with multiple reads and comparison it would be complicated (because of both inserted and crushed bytes).

Godzil wrote:There is one thing I'm not sure to understand: some bytes are sometimes replaced by $C2, and sometimes some $C2 bytes are added between valid bytes?

Exactly Sir!

Godzil wrote:Could you measure the time needed to read the track? It may help to understand if there are some read errors.

Not before this weekend, but I'd say between 5 and 8 seconds, from memory.

Godzil wrote:The FDC Read Track is not 100% reliable; I suspect that Nibble will read the same track multiple times, and do some checks before "validating" it.

For my own education: the FDC doc talks about an "AM" the chip is waiting for... What's an "AM"?

Dbug wrote:He probably wants to write a program that can do a raw copy of the floppy, independently of the number of sectors, tracks, special gaps, protections, etc.

Exactly. To answer Chema: the idea was to modify my old Savedisk/Cloaddsk tool (which created a DSK file from a Sedoric disk by transferring it through the tape port) so it could transfer any disk (Jasmin, Oricdos...), including the gaps that sometimes hide a protection (like for XL-Dos). The existing tool only reads Sedoric and only transfers sectors, hence losing some information.
I'm not sure it could have been achieved without checking the disk format anyway, but if the track reading is not reliable, it trashes the simple and "universal" solution.

It is "not" reliable, true, but it's your code that will make it reliable: you will need to read the same track multiple times, and use other techniques to turn this "unreliable" command into a reliable one.

0x4E == 0b01001110, which does not seem a good sync byte... Anyway, the Oric's 0x16 is not "good" either (0b00010110).

Why didn't they use 0xAA (0b10101010) or 0x55 (0b01010101), or a mix of both, for sync? It would be easier to sync with a regular toggling pattern...

Well, I'm not a specialist, but I disagree.
The BIG problem I had when decoding slow-speed tapes was that the bits were repeated (even the stop/start bits), so I had signals with 0101010101010101010101 at the beginning.
With such a signal, you just don't know if you began reading a byte right in the middle, at its beginning, or anywhere else.
But with 01101000 it's much easier to know "where" you are in the byte, so it's easier to synchronise.
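To illustrate that point with a quick sketch (the bit strings below are made up for the example, not taken from a real tape): in a pure alternating leader, a candidate byte pattern matches at many bit offsets, so there is no way to tell where a byte starts; a distinctive sync byte like the Oric's $16 matches at exactly one offset.

```python
def find_byte_offsets(bitstream: str, marker: str) -> list:
    """Return every bit offset where the 8-bit marker pattern appears."""
    return [i for i in range(len(bitstream) - 7)
            if bitstream[i:i + 8] == marker]

alternating = "01" * 16                            # tape-style 0101... leader
distinctive = "01" * 8 + "00010110" + "01" * 4     # leader + Oric $16 sync byte

# The alternating stream matches "01010101" at every even offset,
# so any of them could be a byte boundary:
print(find_byte_offsets(alternating, "01010101"))
# The distinctive $16 byte appears at exactly one offset:
print(find_byte_offsets(distinctive, "00010110"))
```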


Thanks to all for the information.
I wish I could try multiple readings to correct errors, but it won't be easy:
- the unsynchronised bytes at the beginning and the end will probably differ on each reading
- the $C2 bytes inserted here and there lead to a track size that changes, so comparing byte for byte won't be easy
- finding the right threshold to say "OK, this byte is fine" will change with the hardware reliability: do I decide a value is correct after two, three, four identical reads? How can I be sure there weren't two unexpected $C2 bytes at the same place?
- ...
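On the "how many identical reads" question, a rough back-of-the-envelope estimate may help. The numbers below use the worst case reported in this thread (~140 bad bytes per track read) and assume a ~6250-byte raw MFM track and errors hitting independent random positions, both of which are assumptions: if a given position is wrong with probability p in one read, the chance that k reads agree on the same wrong value falls roughly as p^k.

```python
# Rough estimate, assuming independent random error positions.
track_len = 6250           # assumed raw MFM track length in bytes
bad_bytes = 140            # worst case reported in the thread
p = bad_bytes / track_len  # ~2.2% chance a given byte is bad in one read

for k in (2, 3, 4):
    print(f"{k} identical reads: residual error ~ {p**k:.1e} per byte, "
          f"~ {track_len * p**k:.2f} bytes per track")
```

Under these assumptions, two matching reads would still leave a few wrong bytes per track, while three matching reads would get below one, which suggests "three" as a reasonable threshold.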

Basically I guess what you need is to implement the equivalent of the 'diff' tools for your tracks:
- Have 2 or more track buffers
- Perform a complete read track in each of the buffers
- Perform the diff to find the sequences of identical bytes

Not trivial, but not impossible either
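A minimal sketch of that 'diff' idea, using Python's standard-library difflib to align two reads of the same track even when one of them has a spurious byte inserted (the track contents below are invented for the example):

```python
import difflib

def consensus_runs(read_a: bytes, read_b: bytes, min_run: int = 4):
    """Return the byte sequences that two track reads agree on,
    skipping over inserted/dropped bytes (e.g. spurious $C2)."""
    sm = difflib.SequenceMatcher(None, read_a, read_b, autojunk=False)
    return [read_a[i:i + n]
            for i, j, n in sm.get_matching_blocks()
            if n >= min_run]

# Made-up reads: same data, but read_b has a spurious $C2 inserted.
read_a = bytes([0x4E] * 8 + [0xFE, 0x00, 0x08, 0x01] + [0x4E] * 8)
read_b = bytes([0x4E] * 8 + [0xFE, 0xC2, 0x00, 0x08, 0x01] + [0x4E] * 8)
for run in consensus_runs(read_a, read_b):
    print(run.hex())
```

In this small case the matching blocks cover all of read_a, so the agreed-upon runs reconstruct the whole track despite the insertion; a real tool would still have to merge more than two reads and handle disagreements inside the runs.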

You can also probably do a combination of Read Sector and Read Track: the Read Sector will give you access to 256/512 known bytes, and you can identify these blocks (or parts of them) in the track buffer using the same 'diff' method.
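As a sketch of that sector-anchoring idea (the helper name and the data are mine, invented for the example): search the raw track read for the bytes obtained via a normal Read Sector, falling back to shorter prefixes in case the track read corrupted part of the block.

```python
def locate_known_block(track: bytes, sector_data: bytes, min_len: int = 16):
    """Find where a sector's known bytes (from a Read Sector) appear in a
    raw track read; try progressively shorter prefixes in case the track
    read corrupted part of the block. Returns (offset, matched_length)."""
    for length in range(len(sector_data), min_len - 1, -1):
        pos = track.find(sector_data[:length])
        if pos != -1:
            return pos, length
    return None

# Made-up data: a spurious $C2 corrupts the track read 20 bytes into the sector.
sector = bytes(range(32))
track = b"\x4e" * 10 + sector[:20] + b"\xc2" + sector[20:] + b"\x4e" * 10
print(locate_known_block(track, sector))  # first 20 sector bytes found at offset 10
```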

Also, the bytes in the gaps and the CRC are normally not pure random. You know more or less which values some of these bytes should have (sector number; the CRC can be computed).
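On computing the CRC: WD-style FDCs use CRC-16/CCITT (polynomial $1021, register preset to $FFFF), computed over the sync/mark bytes and the field contents, so the two CRC bytes following an ID field can indeed be predicted. A sketch, assuming the standard MFM ID field layout (three $A1 sync bytes, $FE mark, then track/side/sector/size); the field values below are an example, not from the thread:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT (poly 0x1021, MSB first, preset 0xFFFF),
    as used by WD-style floppy disk controllers."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

# Example ID field: 3x $A1 sync, $FE mark, track 8, side 0, sector 1, size 1 (256 bytes)
id_field = bytes([0xA1, 0xA1, 0xA1, 0xFE, 8, 0, 1, 1])
crc = crc16_ccitt(id_field)
print(f"expected CRC bytes after the ID field: {crc >> 8:02X} {crc & 0xFF:02X}")
```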

Dbug wrote:Basically I guess what you need is to implement the equivalent of the 'diff' tools for your tracks:
- Have 2 or more track buffers
- Perform a complete read track in each of the buffers
- Perform the diff to find the sequences of identical bytes

Not trivial, but not impossible either

Sure, but really not easy! Especially if many bytes are inserted, taking up the room of the end of the track in the reading buffer (even though, if they only replace gap bytes, it would be easy to re-create them... but that's exactly what I was NOT trying to do: re-create instead of read)...
Well I mean, there will be loads of "possible dirty things and combinations" to think of and deal with.

Dbug wrote:You can also probably do a combination of read sector and read track

Aha, well, I thought of that, but if I begin reading the sectors then I lose the benefit of the track reading, which was to read without bothering too much with the disk format (not having to write a version for each DOS).

Stupid problem, ain't it?

I'll try something this weekend: changing the machine itself. I'm surprised that I only got about 20 $C2 bytes with my old tests (done on an Oric-1, IIRC), against the 140 I got with my latest tests (done with what looks like a heavily modified Atmos).

Symoon wrote:
Well, I'm not a specialist, but I disagree.
The BIG problem I had when decoding slow-speed tapes was that the bits were repeated (even the stop/start bits), so I had signals with 0101010101010101010101 at the beginning.
With such a signal, you just don't know if you began reading a byte right in the middle, at its beginning, or anywhere else.
But with 01101000 it's much easier to know "where" you are in the byte, so it's easier to synchronise.

Oh, I don't mean to only use a 0101/1010 pattern for a header. You are right that it would be difficult to know where you are in such a situation, but the original purpose of a sync pattern is first to sync the clocks to get the correct speed; after that, of course, you need a pattern to determine the start of the data flow, which we could also call synchronisation.

In the case of the floppy, the sync bytes (which are followed by a "PLL lock-up" phase) are clearly there to set a timebase for a clock, that's why I'm surprised, whereas on the TAP the sync bytes are more there to mark where the start is, because the baud rate is almost fixed (not completely true on floppy, as the RPM can change a bit between drives, and the physical length of a sector changes depending on whether you are on the outer or inner tracks...)

Ok! Some new tests.
Changed the machine: no effect.
Changed the track read on the disk: incredible differences. Depending on the track I read (I tried about 6 different ones), there are between 6 and 144 $C2 bytes. Of course this probably varies with the data, but I used a disk that, on track 8, had 144 $C2 bytes when read by a simple call to the $CFCD routine, and 44 when read by Nibble.

Anyway, maybe this is a way to check the health of a track on a disk? Read it under Sedoric using $CFCD, then count the number of $C2 bytes...

Well, that still doesn't explain the differences between Nibble and $CFCD though. I guess I'll have to dig further into Nibble's disassembly, which I hate because of the time it takes.

Tracks closest to the center of the disk pass under the head at a slower linear speed than the ones at the periphery (the rotation speed itself is constant), so you can expect differences, I guess.
In that regard, I always wondered why we had a fixed number of sectors per track; it would make sense to me to have more sectors on the outside tracks.