After fixing the FDC seeking to no longer error out when seeking past cylinder 0 (I don't know why that check was there in the first place), it now seems to double seek on the second floppy drive instead (running my TP6.0 test program against the second FDC disk)?

Trying the fake86 MS-DOS Wolfenstein 3D on the 80286 seems to run fine now? Only 80386+ software seems to bug out: Windows 3.0 fails on all CPUs (video mode problems? Also, win /r seems to crash into an #UD?), and FreeDOS crashes during the initial ... loading process (printing ".....Error?." and then hanging)?

Just tried to run 8088 MPH with the current 80188 core (which is essentially an 8088 with the 80186 instructions and the segment limit overflow bug added). I see it's doing something strange during the IBM vectorball part:

Vector balls acting odd?

Can you shed some light on this, reenigne, Jepael, vladstamate?

The RETF timings have changed a bit, though, now at 166X cycles (1% divergence), so that's probably causing the sprite part to (once again) be out of sync?

Currently running from the 360K 8088MPH floppy disk.
Edit: The DeLorean car is out of sync again (figures).
Edit: The racing-the-CRT part shows way more black screens again.
Edit: The credits still crash, though.

Just tried running EMM386.EXE from MS-DOS 6.22 again. It seems to crash somewhere, then crash in the VM monitor executing opcode 0xFF14? So there's a problem with the 0xFF /2 CALL opcode?

I see that the invalid value is caused by ECX being 0xFF (the operand being DS:[disp32+ECX*2]), where ECX is lower in all other cases? So the ECX register is the problem here? Then the question is: what causes the 0xFF value in ECX?

Edit: One little question: is EMM386 supposed to run in 16-bit protected mode? Is the VM86 monitor supposed to run in 16-bit protected mode instead of 32-bit protected mode?

I see the CS descriptor in the monitor being loaded with descriptor 0x00009a121400ffff, which is a 16-bit CS descriptor? Is this correct for EMM386?

Edit: Hmmmm.... After adding some simple logging for leaving Virtual 8086 mode, I see that the last INT 21h before a bunch of text output instructions and video calls (probably EMM386's fault handler showing its information about the 06h crash) has AX=0659h.

Edit: Maybe that's already the fault handler of EMM386.EXE? The last call before that had AX=4B00h, so MS-DOS was trying to load and execute a program (which, after the device drivers themselves have loaded and announced themselves, should just be COMMAND.COM afaik?). This happens at timestamp 00:05:33:46.01056.

Just ran setup.exe on the fake86 Windows 3.0 disk image. It seems Windows 3.0 was set up for the Hercules graphics card. Changing it back to CGA makes it work correctly on the CGA graphics card.

OK. Running in VGA mode seems to have unresponsive cursor keys (left/up/right/down) on the Compaq Deskpro 386 (running in real mode using the /r switch). Otherwise, it's running without other noticeable problems.

Odd: Just tried running the Windows 95 setup.exe from a CD-ROM ISO file (a minimized custom version containing only the WIN95 folder and SETUP.EXE, with a simple KEY.TXT holding the key to test with). Running SETUP from the root of the disk (not the WIN95 folder) seems to result in a "run-time error R6003<line break>- integer divide by 0"?

Windows 95 setup R6003 error?

Edit: It's in a permanent HLT state (HLT with the Interrupt Flag at 0) at 0000:0005.

Edit: Just tried running the SETUP.EXE from the Windows 95a hard disk (which is still very slow, with the video seeming to have refresh issues?). It still crashes with the #DE exception, which somehow ends up at 0000:00XX, crashing again into an #UD?

Edit: Trying to boot the Windows 95a boot disk from http://www.allbootdisks.com/download/95.html makes it read sector 0, then sectors 19-21 (twice), followed by an odd seek to cylinder 96, which doesn't exist on the 80-track 1.44MB disk? It then errors out with a "Disk I/O error", which is probably because of the invalid seek?

Since it's seeking to an impossibly high cylinder number, does that mean there's a problem in the boot sector/IO.SYS (in this case MS-DOS 7's) loading of the OS?

Edit: Yay:/ It seems to somehow be double seeking again (seeking to cylinder 2 instead of cylinder 1, which is what the program running from the floppy disk requested)!
Edit: Looking at the BIOS calls, this is once again turned on at the very first CHS 0,0,1 boot sector read when booting the floppy:S (monitoring address 0x490, which gets bit 6 set, being set to 0x61 by the BIOS).
Edit: Looking at the code just before double seeking is enabled, it seems to turn it on in two cases:
- somehow also for 1.44MB and 2.88MB drives, due to bit 2 being set on the second drive?
- when the drive type in the CMOS is less than 3 (less than a 720K drive)

Now looking at the MS-DOS 7.0 that's used with the Dosbox Windows 95 disk image (from the Windows 95-on-PSP tutorials). It seems to constantly (at least once a second) set the video mode again while running the command prompt, which accounts for part of the heavy slowdown when using it in UniPCemu?

Edit: The Windows 95 setup ends up in the woods when executed from that disk image.

I'm currently still trying to get the floppy disk (which keeps double seeking for some unknown reason?) and the Windows 95 setup to run (with the issues still present). Can anyone see what's wrong with the CPU emulation?

Oddly enough, I've compared my instruction information data against the 80386 manual's appendix A over and over again, but can't find any errors anymore.

So there must be a problem with the execution of some instructions (the opcode handlers) somewhere? Can anyone see what's going wrong? As far as I can see (except for faults, which don't occur until the #UD exception in the Windows 95 setup), the CPU cores themselves look fine?

Then why is setup erroring out? Some rogue jump? An error in calculations (unpacking?)?

Does anyone know the different phases of the Windows 95 setup program? That is, what it's doing at each point and the accompanying segment selector (or offset within the program's segment) executing each block? That way I'd at least know what it's trying to do.

Just tried running the MS-DOS 5.0a setup from a setup floppy on the XT NEC V20 configuration. When proceeding to the first installation step (configuration) and selecting the country option (and pressing Enter), it somehow seems to re-execute the boot sector in a corrupted way, displaying garbage on the screen and crashing with a not-bootable message (although that part can be attributed to the boot sector as well).

So there's definitely a big problem in the base 80(1)86 emulation core? But where is said problem? Any way to find out?

Just found some problems concerning the XT PPI and the keyboard clock line (which allows disabling and enabling (which also resets) the keyboard). These have been fixed now.
Fixing the XT PPI to work correctly again also fixes the parity errors that were reported by the Supersoft/Landmark diagnostics BIOS during the RAM tests.

Just went and fixed a lot of simple 'bugs' and warnings issued by MinGW and Visual Studio (code analysis). Now UniPCemu somehow flat out crashes when using the 1kHz Dosbox-style IPS mode?

Edit: Just tested the default setting (setting value 0) and the 2kIPS setting (setting value 2). Both run in IPS mode without errors, but at the 1kIPS setting the emulation somehow crashes before it even reaches the ROM code for the emulator configuration itself?

Edit: Managed to pin it down a bit more: only on Android, at the 1 cycles setting (1kIPS or 1kHz doesn't matter), the app crashes before or while loading the Settings menu boot script (after executing the required instructions normally)?

Just tried my 80-track seek test program (see the UniPCemu repository for its Turbo Pascal 6.0 code) against my current IBM AT emulation. Apparently, tracks 53-79 fail the test: they time out because the BIOS seems to be counting too fast (only ~1.43 seconds, according to the step rate byte being set to 0xDF, thus step rate 0xD (28000028ns per track)). So somehow, the IBM AT is running too fast?

Edit: Changing the DMA to run at half the CPU clock speed (3MHz in the case of the default AT config) increases the range to a maximum of track 62. So the CPU is still too fast, for some reason.

The documentation on the AT says that bus transactions (I/O using the IN and OUT instructions) take 6 cycles, or 12 cycles for two-byte accesses, in total. That matches the documentation on I/O reads (which are 5 cycles). But the OUT instruction seems to be faster (taking only 3 cycles)? Thus resulting in 4 cycles per transaction instead of 6?

So somehow, either the DMA or the FDC isn't timing correctly? The DMA should be running at 4MHz in the 8MHz AT configuration (exactly half the CPU clock, running off the CPU clock divided by 2), so why isn't the speed correct? Why isn't it waiting the full 2 seconds required?

So that AT still uses the PIT1 timer for its DMA memory refresh, it seems?

Edit: It seems the BIOS uses Interrupt 15h, function 86h for its FDC timing purposes? So it actually uses the RTC periodic interrupt to time it?
Edit: It seems odd that the higher track seeks (past track 62) fail from track 0 (after a recalibrate)? Especially since the PIT is delivering the correct rate (which should be unused by the FDC code) and the RTC should be using the correct 1024Hz timer that the FDC seek timing is calibrated against?

Whoops, found a bug in the generation of the FDC step rate lookup table: the rates were invalid, counting up from the 0th entry and getting bigger, where subtraction instead of addition was needed. Thus 26000026ns wasn't the correct value to use for the set mode:S

Edit: Having fixed those lookup tables to be calculated properly, the AT BIOS now seeks correctly on the 1.44MB disk again. Although MS-DOS thinks there's only one floppy drive in the system? Selecting drive B: causes it to ask for a disk to be inserted and a key pressed, then it reads drive A instead?

Edit: The second 1.44MB disk (drive A is 1.44MB too) acts odd on the IBM AT emulation: it seeks to cylinder 48, then executes a recalibrate, but issues a Sense Interrupt Status before the recalibration finishes (unlike drive A)?