If we have two Orics, with two RGB outputs, how can we be sure that they are updating the screen at the same time? Is that somehow based on the mains power frequency, or is it purely down to chance?

The chip takes 2 RGB sources and merges them into one RGB signal, in real time.

I don't know if the chip needs the two input sources to carry the same signal. The Multicoloric card has a manual switch: when it is off, the normal Oric screen is displayed; when it is on, the Oric screen is displayed through the palette chip.

Since I asked the question a few months ago, I have done some more brainstorming with one of my colleagues, who lent me his oscilloscope for a couple of weeks, and then with Thierry, who helped me by running a few experiments.

Basically there were multiple questions to solve:
- Is it possible to power two Oric motherboards from the same power supply, with a common ground and a common clock signal?
- Is it possible to somehow get them synced together?
- Is it possible to merge the video signals?

Thierry had two defective Oric motherboards. He fixed them, removed the regulators so they could run directly from a clean +5V power supply, and removed the clock generation from one of the machines, making it use the clock from the other.

The result is that the computers work just fine and are nicely synced... except there is some random (but stable) offset when the machines are powered on.

Since they share the same clock, the offset does not move over time.

By restarting the machines a few dozen times, Thierry found that the offset is roughly in the -/+ 500 ns range, with offsets (read visually on the oscilloscope screen) of approximately -410, -320, -240, -160, -80, 0, +90, +170, +240, +360, +440 and +520 ns.

Thierry suspects that these values are more or less related to the 12 MHz input frequency, as if the ULA snapped onto the clock, but not always at the same moment. Since we have two motherboards doing the same thing, we get a relatively large number of different offsets.

Pressing the RESET pin does not change the offset at all, the reason being that the ULA is always on and is not connected to the RESET pin anyway.

One thing I noticed is that the Oric schematics show a small component between the CLK and VCC pins of the ULA... which, interestingly, was not present on any of Thierry's machines.

Further research indicated that this was a modification made by Oric International, called "63b", which is supposed to improve the machine's boot sequence by doing something to the 12 MHz clock signal.

I'm posting here so everybody can suggest how we can proceed to get a perfect synchronization.

Small summary:
- In PAL, the screen is refreshed at a 50 Hz vertical frequency and a 15.625 kHz horizontal frequency.
- At this speed, a scan-line lasts exactly 64 microseconds.
- On the Oric in particular, the 240 visible pixels are drawn in 40 microseconds.
- Since we have 40 columns (bytes) per line, each block of 6 pixels takes exactly 1 microsecond to draw.
- The display of one particular pixel takes 166.66 ns.
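These figures all follow from the PAL line rate; here is a quick sanity check in Python (my own illustration, using only the numbers quoted above):

```python
# Sanity check of the PAL/Oric video timing figures quoted above.
line_freq_hz = 15_625                    # PAL horizontal frequency
line_duration_us = 1e6 / line_freq_hz    # -> 64 us per scan-line

visible_us = 40                          # 240 visible pixels drawn in 40 us
columns = 40                             # 40 bytes (6 pixels each) per line
us_per_column = visible_us / columns     # -> 1 us per 6-pixel block

pixels = 240
ns_per_pixel = visible_us * 1000 / pixels  # -> 166.66... ns per pixel

print(line_duration_us, us_per_column, ns_per_pixel)
```

So a 500 ns slip between two machines really does amount to about 3 pixels on screen.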

As we said earlier, Thierry's tests showed that we were out of sync by +/- 500 ns, which basically means +/- 3 pixels!

Regarding the variation, my first idea to limit the wobbling is not to reboot the two machines together, but instead to keep the first machine (the master) on at all times, and only start and stop the second machine.

A possible way to control the second machine could be to use the TAPE relay of the first machine to control the power to the second machine, and ask the user to press a key until the lines are aligned on screen.

If this is doable, I have plenty of ideas on how to go on from there. One is to actually generate more colors by producing half-bright combinations when both machines output the same component values, so we could get DARK RED, ORANGE, GRAY, BROWN, etc. That could be done with a simple component following this logic table:
0 and 0 -> 0
1 and 0 -> 1
0 and 1 -> 1
1 and 1 -> 0.5
- This way, if machine #1 displays a full Oric picture, you get a full normal Oric picture on screen.
- If machine #2 displays a full Oric picture (and the other one shows BLACK), you also get a full normal Oric picture.
- If one machine displays the even lines while the other draws the odd lines, you still get a standard Oric picture... but drawn twice as fast.
- By using the two machines to display complementary pixels on the same scanlines, it's possible to remove half of the constraints and limits on the number of colors close together.
- And by artistically complementing and displaying the same pixels here and there, you get gradients and intermediate tints.
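The logic table above can be sketched as a tiny function (my own illustration; the 0.5 stands for the half-bright level a real mixing circuit would output):

```python
def mix_component(a, b):
    """Merge one color component (0 or 1) from the two machines,
    following the logic table above: a lone 1 passes through at
    full brightness, two 1s give the half-bright level."""
    if a and b:
        return 0.5       # both machines drive it -> half-bright
    return a or b        # otherwise, whichever machine drives it

# The four table rows:
# mix_component(0, 0) -> 0
# mix_component(1, 0) -> 1
# mix_component(0, 1) -> 1
# mix_component(1, 1) -> 0.5
```

One such component per R, G and B line would implement the scheme.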

That's my idea, at least.

Of course after that come the small details: how to boot the machines, do I need two Cumulus units, can I have custom ROMs/EPROMs to facilitate testing, etc.

Very interesting project!
IMO the problem with synchronization comes from the bad edges of the signal on ULA pin 7 (CLK).
Modification "63b" is supposed to solve this (and maybe it really helps), but in this case there is a better solution.
First remove both "63b" resistors and connect the unused part of IC22 (see picture below) using pins 4-5 or 6-7.
This should improve the edges significantly, so both ULAs will "understand" the transitions with minimal offset! Of course an external piggybacked 74LS04, 74LS14, 74LS00... can be used too.
Additionally, I can recommend using ULAs from one and the same manufacturer batch, i.e. all labels on both chips should be identical.

Yesterday I also contacted Mike Brown, since he had worked on the ULA quite extensively, by sending him this:

Dbug wrote: Hi Mike,
I'm currently doing some thought experiment involving hardware synchronisation of multiple Orics to try to get them to have their ULA synced together so they can get merged into one single video output to drive one screen.

The idea is based on an (unimpressive) attempt by some C64 people using four C64s, each one drawing a quarter of the screen, but done manually: four video projectors, and pressing the space bar to "sync" them.

I'd like to do a cool demo with the two motherboards perfectly synced, and used together to draw pictures twice as fast, or complement each other to do more color changes than what would normally be possible.

Thierry from the CEO has done some hardware tests, but we have some synchronization issues, so I wonder if you could take a look at my forum post and give me your opinion on the topic.

and this morning I woke up to a quite substantial email answer!

Mike wrote: Hi!

> I'm currently doing some thought experiment involving hardware
> synchronisation of multiple Orics to try to get them to have their ULA
> synced together so they can get merged into one single video output to
> drive one screen.

OK, sounds crazy enough, I'm interested!

> using four C64, each one drawing a quarter of the screen, but just done
> manually with four video projectors use space-bar to "sync" them.

I can see how that appeals for simplicity, the 4 machines don't need to
be locked together *at the video level* for that to work, 4 independent
non-synced videos, 4 projectors. Easy. Something (anything) can sync the
drawing software. OK.

But what you are talking about for Oric is more fun. In the real world
you would solve this problem with a "genlock" - in the analogue TV/video
days you could (up to a point) send a clock signal OUT to studio cameras
and get them to use it to return perfectly synced video signals. But with
external sources, videos (tape) you would not be able to do this. So
genlock was used to effectively acquire a video signal, store it, resync it
and reproduce it. This way you could cut between video sources without
glitch. Probably a very expensive solution to sync two Orics though ...

But it seems you are trying to make 2 Orics work together *in the
first place*, giving you a video output of R,G,B,R',G',B',VSync/HSync
which could then be used to create a larger palette etc. Or to overlay
picture A on B (240x200 drawn twice as quickly from 2 sources). OK.

Well. You are on the right track. The power supply mods (clean 5V feed)
simplify problems you *would* have if you fed a common 9V to both, and
then ever accidentally joined the grounds together. So that's a good
step.

The modification resistor isn't anything hugely relevant, it's just
to improve the quality of the clock signal driving the ULA by shifting
it a bit more to positive voltage.

Commoning the 12MHz clocks is good, that will guarantee they stay in
lockstep with each other.

As you've discovered, the ULA is a free spirit and cannot be started
or stopped -- no reset pin, reset must be built into the ULA's
silicon, as it needs some kind of power-on-reset for all those registers
and toggles inside it.

The observation on the slip between the video outputs is correct, there's
no guarantee that both ULAs will start up the same, even powered on "exactly
together". In normal use, there's no need for this to be a concern.

But whatever offset you do find will (should?) be constant, with a
shared clock.

I would say that asking the user to line things up and repeatedly
powering on and off is a bit rough and ready, so ... as we're already
into hardware mods and fiddling with the clocks :-

[ ... Brain dumping ... warning ... ]

(1)

I think that it might be possible to measure the error between the two
vsync outputs to work out which machine is ahead or behind, and use this
to *drop* a clock cycle out of the 12MHz clock to one machine UNTIL the
two slide into line, and then leave it alone.

Does this make sense? If A leads B, drop a clock cycle of A.
If B leads A, drop a clock cycle of B. It *may* be acceptable to interrupt
the timing like this, by just dropping 1 cycle in n cycles; that way
it won't overly screw up DRAM timings and cause all the data to fall
out of memory. You can't just stop the clock outright.

So you need 1 clock source, something to selectively gate it to drive
Oric 1 or 2 (or not, when dropping a cycle). Then, much like a quadrature
encoder rotary switch/optical encoder can indicate rotation left or right
by whether signal X leads X' or X lags X', you can combine the two VSYNCs
to decide whether to be dropping clocks for Oric 1, or 2.

Best of all, this would be a non-software solution -- so no need to modify
ROMS to get things aligned. They would align soon after power on and stay
there.
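A toy model of this cycle-dropping idea (my own illustration, not part of Mike's mail): the two ULAs count the same 12 MHz clock but one started a few whole cycles late, and a comparator drops one clock pulse per frame from whichever machine leads:

```python
# Toy model of idea (1): the power-on slip is a whole number of 12 MHz
# cycles (83.33 ns each); dropping one cycle per frame from the leading
# machine walks the two video outputs into line, then leaves them alone.
def frames_to_align(offset_cycles):
    """Frames needed to align, dropping one cycle per frame.

    offset_cycles > 0 means machine A leads machine B.
    """
    frames = 0
    while offset_cycles != 0:
        if offset_cycles > 0:
            offset_cycles -= 1   # drop one cycle of A's clock
        else:
            offset_cycles += 1   # drop one cycle of B's clock
        frames += 1
    return frames

# A 500 ns slip is about 6 cycles of 83.33 ns, so roughly 6 frames
# (about 0.12 s at 50 Hz) to converge.
```

The point being that convergence is fast and, once aligned, the shared clock keeps the machines locked forever.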

(2)

If it turns out you can't drop a clock cycle without the ULA immediately
rejecting it ... the next step up would be to use two 12MHz clocks, locked
together (like a PLL). But the VSYNC-difference signal I mentioned above would
have to be used to speed up, or slow down, by a small amount, one
(or both) clocks until there is no error signal any more -- which means
the video signals must now be aligned.

(3)

Now I think about it .... two 12MHz clocks that are NOT locked together
(umm, much like two independent Orics *without* modification!) will drift
past each other in this way. What you need to do is notice when they
*are* lined up (in respect of the VSYNC signal), and LOCK THEM TOGETHER
only then. Now that would be fun ... is this a winning idea yet?

So let's say we XOR together Oric-1 and Oric-2's VSYNC lines. When they
disagree with each other, allow the clocks to free run and slip. When
they agree with each other lock the clocks together.

(If necessary, nudge one Oric to deliberately run a bit off speed
to ensure there is a slip: It would be embarrassing if they just sat
there perfectly unaligned and stayed like it! A trimming capacitor
across the crystal could do this!)

What this would do (I hope) is this ... because of the nature of
the sync signals, they will mostly agree just by chance -- let's say it
spends most of its time LOW. And goes HIGH for a short duration once
every 20ms (frame). So at startup, the clocks will spend most of the
time locked, just by chance/coincidence.

When one machine asserts VSYNC and the other HASN'T, briefly there
will be a discrepancy, the XOR gate will show this up, and clocks
will be allowed to unlock and drift for the duration of the discrepancy.

And then lock together again. This will keep happening until there are no
more discrepancies, at which point the clocks can stay locked.

Unlocking may make the discrepancy worse (let's say A leads B, and you
unlock, and A runs a bit quicker ... now A will lead B EVEN MORE. Eventually
it will have lead so much that ... it's a whole frame ahead and meets
the video signal coming the other way, and ends up lining up!)

So: To sense the difference: XOR together the sync signal (maybe from IC25
Pin 1/2) where it is still TTL levels. Use this "error signal" to operate
a link between the clocks. Unlock when disagreeing, lock when agreeing.

You lock them by simply forcing the things to run together -- like a
three legged race -- by connecting the two clock circuits via a
link (using something like a bilateral switch used to couple analog signals
together). Maybe not a direct link, but a low enough value resistor to force
them to oscillate absolutely together.

For example, coupling R9 to the corresponding R9 on the other board via
a 1K resistor could well lock the oscillators together without causing any
distress. Opening up that link = free run drifting.

If that worked -- one XOR gate, one bilateral switch and a resistor, I
would be as amazed as you ... but it might just work

I'm aware the sync signal isn't purely VSYNC (annoyingly it's combined H
and V sync). But the principle still seems to make sense. Yes, it might
inadvertently "lock" with the HSYNCs matching and the VSYNCs mismatching,
but that won't last long -- as long as there IS a mismatch between the
signals, then something is wrong, and they must be allowed to drift.
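This lock/unlock behaviour can be simulated crudely (my own sketch, not from Mike's mail; time units, frame length and pulse width are arbitrary stand-ins for the real 20 ms frame and sync pulse):

```python
# Toy model of idea (3): an XOR of the two sync outputs decides whether
# the clocks run locked (advance together) or free (one slips).  The
# sync output is HIGH for a short pulse once per frame, LOW otherwise.
FRAME = 2000   # arbitrary time units per frame
PULSE = 50     # sync pulse width, same units

def sync_out(phase):
    """Combined sync: 1 during the pulse at the start of each frame."""
    return 1 if phase % FRAME < PULSE else 0

def settle(initial_slip, steps=200_000):
    """Let the two machines run; return the final slip (mod FRAME)."""
    a, b = 0, initial_slip
    for _ in range(steps):
        if sync_out(a) == sync_out(b):
            a, b = a + 1, b + 1   # XOR low: clocks locked together
        else:
            a, b = a + 1, b + 2   # XOR high: B free-runs slightly fast
    return (b - a) % FRAME
```

In this model the slip grows whenever the pulses disagree, wraps around a whole frame exactly as Mike describes, lands on zero, and then stays locked because the XOR never fires again.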

If any of this is helpful, let me know, or if you see a stunningly obvious
flaw that I've missed ...

> Thierry from the CEO has done some hardware tests, but we have some
> synchronization issues, so I wonder if you could take a look at my forum
> post and give me your opinion on the topic.

Also: In my opinion, the delays you have described of ± 83ns, 160ns etc. are NOT
caused by the "bad edges of the 12MHz clock" as a later post suggests. As you
have noted, they cluster at multiples of 83.3ns, which is the clock period
at 12MHz. This is very relevant.

It is that the ULA is starting after a different number of whole cycles
of the 12MHz clock. I'm sure that if you were measuring *really* carefully,
hard to do on a scope, you'd see that the numbers are EXACT multiples of
1/(12x10^6) s (83.33ns).

If you are down to the level of worrying about the ULAs not synchronising
due to the rise/fall time (bad edges) of the 12MHz clock, you are looking
at details too small to matter :-
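As a quick check of Mike's point (my addition), Thierry's measured offsets can be compared against the nearest whole multiple of the 83.33 ns period; every residual is well within what a visual scope reading can resolve:

```python
# Check Thierry's measured offsets (ns) against the 12 MHz period:
# each one should sit near a whole number of 83.33 ns clock cycles.
PERIOD_NS = 1e9 / 12e6   # 83.333... ns

measured = [-410, -320, -240, -160, -80, 0, 90, 170, 240, 360, 440, 520]

errors = []
for ns in measured:
    cycles = round(ns / PERIOD_NS)          # nearest whole-cycle count
    errors.append(ns - cycles * PERIOD_NS)  # residual vs. that multiple
    print(f"{ns:+5d} ns ~ {cycles:+d} cycles (off by {errors[-1]:+.1f} ns)")
```

The residuals stay under ~30 ns, consistent with the offsets being exact multiples of the 12 MHz period read off a scope screen.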

Yep, I missed that the offsets are multiples of 83.3 ns. Mike Brown's solution is cool and I hope it will work.
... but having 2 identical ULAs is a good idea too.

EDIT: Dbug, which signals are actually on the scope? (I thought it was 12 MHz, but the shape is reminiscent of the famous F2.)

Good question! Yes, this is the raw PHI2 signal coming out of the 6502A (pin 39).

Lots of interesting things to try!

My idea was rather to act directly on the 12 MHz signal entering the second ULA - there is still a single crystal for both ULAs - in order to correct the lag of PhiOut (closed-loop control, or "asservissement" in French, for Dbug).

[Image: asservissement_synop_gen.JPG]

Some pictures of PhiOut (ULA pin 14; the cross on the signal indicates the reference ULA) after different startups.
(The probability of synchronizing at startup is 1/12 ≈ 8%, which is low.)

Do you know of any simple electronic circuit to make this correction?

Last edited by NightBird on Sat Nov 25, 2017 9:27 am, edited 4 times in total.

So, to summarize what we have so far:
- The desynchronization has nothing to do with the cleanliness of the clock signal; it has more to do with the internal ULA implementation, which takes a number of clock cycles to initialize, possibly depending on the electrical state, random values in the internal registers, etc., and which may or may not be impacted by the batch revision of the ULA used.
- Mike's suggestion is to use a self-balancing clock circuit that uses the SYNC output signal to throttle either of the ULAs as long as they are not in sync.
- Thierry suggests keeping a common 12 MHz clock, and just throttling the secondary ULA until it gets synced to the first one.

Correct me if I misunderstood the ideas.

My hardware knowledge is definitely not good enough to have an opinion on which solution would work better... but my gut feeling tells me that if we wanted to extend the system with, for example, three or four Orics all piggybacked on top of each other, Thierry's suggestion may be more scalable:
- Keep the master Oric "as is", totally autonomous, and use its clock signal on all the other machines; then each secondary machine has its ULA input signal throttled until it matches the position of the previous machine, so #2 syncs on #1 (the master), #3 syncs onto #2's SYNC signal, #4 on #3's, etc.

Since we only need ONE sync line to rebuild the video signal, there's no need to merge any of the secondary machines' VSYNCs, so they are freely available to synchronize other things, including the machine itself by feeding its own VSYNC back to it as a hardware VSYNC.

(And yes, I've no idea how to do that, and my track record with a soldering iron has resulted in a lot of dead hardware... including my Atari MegaSTE a few months ago when I tried to install an extension board... *sigh*)

Assuming this whole thing is doable, the second step would be to merge the video outputs into one signal.

There are multiple different ways to think about the problem. One could be to get something like the "Multicoloric" board and use the various inputs to drive some "palette component" to generate more colors, but that's definitely in the "very complex" department (although it would definitely allow having something like 64 different colors on screen at the same time, with 2 bits per component).

My current idea was much simpler, and was basically something like this:
- if RED_0 and RED_1 then RED_OUTPUT=(RED_0+RED_1)/2 else RED_OUTPUT=RED_0 OR RED_1
- if GREEN_0 and GREEN_1 then GREEN_OUTPUT=(GREEN_0+GREEN_1)/2 else GREEN_OUTPUT=GREEN_0 OR GREEN_1
- if BLUE_0 and BLUE_1 then BLUE_OUTPUT=(BLUE_0+BLUE_1)/2 else BLUE_OUTPUT=BLUE_0 OR BLUE_1
This has the advantage of allowing the normal display of Oric pictures by any of the machines as long as the other machines display black, and by combining pixels we get half tints.

Without the "half bright" part, it's basically just a matter of putting each pair of color components through an OR gate, with the final color coming out the other side, so definitely easy to implement as a first step.
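A small enumeration (my own addition, restating the per-component rule from the pseudocode above) shows what the half-bright version buys: with each component able to end at 0, half, or full brightness, the two machines together can produce 3^3 = 27 distinct tints instead of the Oric's usual 8 colors:

```python
from itertools import product

def mix(a, b):
    """Per-component merge: OR of the two bits, halved when both are set."""
    return 0.5 if (a and b) else (a or b)

# Enumerate every pair of 3-bit RGB pixels the two machines can output
# and collect the distinct merged colors.
tints = {tuple(mix(c0, c1) for c0, c1 in zip(p0, p1))
         for p0 in product((0, 1), repeat=3)
         for p1 in product((0, 1), repeat=3)}

print(len(tints))  # 3 levels per component -> 27 tints
```

That set includes the half-bright combinations mentioned earlier: (0.5, 0, 0) for dark red, (0.5, 0.5, 0.5) for gray, and so on.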

What I'm wondering is how to load the software (custom ROMs for each machine? a Cumulus/Microdisc for each machine?) and how to iterate fast when developing code for this: if I have to burn EPROMs or create two sets of Cumulus files, copy them to SD cards and boot each machine, it's going to take a veryyyyy long time.

A bonus question: since the loading time is going to be variable, how can I ensure that my two (or more) Orics actually start playing effects at the same time?

The main machine obviously has to work fine by itself, so I'll assume it is an unmodified Atmos with the BASIC ROM, possibly a Cumulus, still having a keyboard, etc.

The other machines could have custom ROMs (or not), but in any case they need to be controlled by the master machine somehow, if only to tell them "you can start the demo/game now".

The secondary machines having no keyboard, I guess an 8-bit input port could be used to communicate? Could that be connected to the printer port of the main machine somehow?

Basically, what would be the easiest way to exchange data between two Orics that are perfectly synced at the hardware level, with a common clock and common ground/power?