The sample itself comes from another source, which I thought I'd leave nameless in case anyone kicks up a fuss about this small snippet of copyrighted material. It is a good showcase of ld-decode though, and getting this result makes me really excited to take it further! Thanks for the link to those other videos; I'd seen some of them but not all.

I got the new TBC code up and running and ported to Windows this afternoon. I probably won't get much solid time until next weekend, but when I do I'll be working on getting regular composite video correctly handled.

I haven't said anything here in the last month, but I've been busy. I found the new TBC app, like the old one, failed pretty badly at a lot of real-world signals I tried to throw at it. Originally I started working on improving the code, but I found it seemed to be entirely based around expected values from the NTSC signal specs, and was quite rigid and hard to adapt to be more tolerant of other signals that didn't conform to its expectations. I ended up heading in a different direction and writing something new from scratch. It's still early days, and there isn't any repair process in place yet for really badly damaged sync events, but I've got a workable program now that can take raw composite video signals (such as the one output by lddecode.py), identify sync events, group them into frames, and synchronise lines. There's no colour decoding yet, but here's the kind of raw frame output I get right now on the Fantasia sample I used before:

And here's a frame of progressive video from Sonic 2, which I couldn't decode previously:

Some of the main advantages this has over the previous decoder are as follows:
- Universal format support (can decode signals with any number of lines, any line/sync timing, progressive or interlaced)
- Supports any sample type (templated code allows any data type to be used)
- Supports any sample rate (adaptive processing algorithms scale to the length of detected events in the data stream)
- Extensive source comments and cleaner code structure
- Better performance (efficient algorithms and multithreaded processing give over a 5x speedup on my PC)
- Cross-platform code with no external dependencies (only Visual Studio projects are provided right now for Windows compilation; I'll add a makefile soon for Linux compilation)
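To give a feel for the adaptive sync detection described above, here's a toy Python sketch. This is not code from the actual tool; the function names, threshold, and pulse widths are all made up, and a real signal would need filtering for noise first. The idea is simply that pulses are classified by their width relative to a detected nominal hsync width, rather than against hard-coded spec timings:

```python
# Illustrative sketch: classify sync pulses in a composite signal by width.
# Names and thresholds are hypothetical, not from the decoder being discussed.

def find_pulses(samples, threshold):
    """Return (start, length) for every run of samples below the sync threshold."""
    pulses, start = [], None
    for i, s in enumerate(samples):
        if s < threshold and start is None:
            start = i
        elif s >= threshold and start is not None:
            pulses.append((start, i - start))
            start = None
    if start is not None:
        pulses.append((start, len(samples) - start))
    return pulses

def classify(length, hsync_len):
    """Adaptive classification relative to the nominal hsync width:
    ~0.5x -> equalizing pulse, ~1x -> hsync, much longer -> vsync serration."""
    if length < 0.7 * hsync_len:
        return "equalizing"
    if length < 2.0 * hsync_len:
        return "hsync"
    return "vsync"

# Toy signal: blanking level 40, sync tip 0; one hsync-width pulse (8 samples),
# one half-width equalizing pulse, and one long vsync serration.
line = [40] * 10 + [0] * 8 + [40] * 10 + [0] * 4 + [40] * 10 + [0] * 20 + [40] * 10
pulses = find_pulses(line, 20)
kinds = [classify(length, 8) for _, length in pulses]
print(kinds)  # ['hsync', 'equalizing', 'vsync']
```

Because everything is measured relative to the detected hsync width, the same logic works at any sample rate or line standard, which is the point of the "adaptive processing" bullet above.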

It's far from perfect yet, but the elements are there to build from. If my interest keeps up, I'll split the core processing work out to a library, so a set of thin tools can grow around the common set of code. I'd like to get this to the point where it can decode basically any analog video signal.

nemesis wrote:

I found the new TBC app, like the old one, failed pretty badly at a lot of real-world signals I tried to throw at it. Originally I started working on improving the code, but I found it seemed to be entirely based around expected values from the NTSC signal specs, and was quite rigid and hard to adapt to be more tolerant of other signals that didn't conform to its expectations.

I guess it depends on what you expect a TBC to do. Normally the idea is to take a signal which is slightly out-of-spec, & shift it around just enough to get it into spec. But you seem to want to do some much more exotic things.

nemesis wrote:

The next big thing is adding colour decoding, which I'm not really looking forward to, but I'll give it a shot sometime soon. … I'd like to get this to the point where it can decode basically any analog video signal.

You might want to have a look at the PAL transform decoder, which should give superior performance on that system.

nemesis wrote:

I found the new TBC app, like the old one, failed pretty badly at a lot of real-world signals I tried to throw at it. Originally I started working on improving the code, but I found it seemed to be entirely based around expected values from the NTSC signal specs, and was quite rigid and hard to adapt to be more tolerant of other signals that didn't conform to its expectations.

Quote:

I guess it depends on what you expect a TBC to do. Normally the idea is to take a signal which is slightly out-of-spec, & shift it around just enough to get it into spec. But you seem to want to do some much more exotic things.

Well, to be fair, the program I was replacing did a lot more than time base correction; it actually took the raw composite signal and performed all the sync detection and line decoding too, which was the part I was most interested in. The signals I was testing didn't need TBC applied at all, they just needed sync detection and decoding into lines/frames, but the current code didn't handle them. My goal was to develop a program that was at least as tolerant as actual (ideally multi-sync) monitors which handle composite signals, so that any signal you could have displayed directly on an analog monitor could be decoded in software too. This program isn't constrained to time base correction; that's actually about the smallest part of what it does. It's intended to perform all decoding of the composite signal, from raw sample data to fully realised images, ready for writing out to file or encoding into a video stream.

Quote:

You might want to have a look at the PAL transform decoder, which should give superior performance on that system.

Thanks, I'll try and digest that info. The math for most of this stuff makes my brain melt though. If anyone knows of a "Quadrature Decoding and Comb Filtering for Dummies" reference, it'd be appreciated.

To do color decoding easily, you need to get the signal going into the comb filter into a phase-aligned signal clocked at four times the color frequency. Once you have that, the math becomes a lot simpler, since each pixel is at a different 90-degree phase and you can do some relatively direct computations to get the YIQ signal, albeit with considerable cross color. And from there, making it a 2D comb filter is easier.
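As a minimal illustration of why the 4x-color-frequency clock helps (a hypothetical Python sketch, not code from ld-decode): at exactly 4x fsc the cos/sin mixing carriers only ever take the values 1, 0, and -1, so quadrature demodulation reduces to sign flips on alternating samples plus averaging. The crude 4-sample box filter here stands in for a real low-pass filter:

```python
import math

def demod_chroma(chroma):
    """Recover (I, Q) pairs from a chroma signal sampled at exactly 4x the
    color subcarrier, phase-aligned so sample 0 sits at carrier phase 0.
    At this rate cos/sin of the carrier are just 1, 0, -1, 0, so mixing is
    only sign flips; a 4-sample box average stands in for a proper LPF."""
    cos_tab = [1, 0, -1, 0]
    sin_tab = [0, 1, 0, -1]
    i_mixed = [2 * c * cos_tab[n % 4] for n, c in enumerate(chroma)]
    q_mixed = [2 * c * sin_tab[n % 4] for n, c in enumerate(chroma)]
    # Crude low-pass: average over one full carrier cycle (4 samples)
    i_out = [sum(i_mixed[n:n + 4]) / 4 for n in range(0, len(chroma) - 3, 4)]
    q_out = [sum(q_mixed[n:n + 4]) / 4 for n in range(0, len(chroma) - 3, 4)]
    return i_out, q_out

# Synthesize a chroma waveform with I=0.6, Q=0.3 and check we get it back
chroma = [0.6 * math.cos(2 * math.pi * n / 4) + 0.3 * math.sin(2 * math.pi * n / 4)
          for n in range(16)]
i_out, q_out = demod_chroma(chroma)
print(round(i_out[0], 3), round(q_out[0], 3))  # 0.6 0.3
```

A real decoder would still need the burst-derived phase reference and a proper filter to separate luma from chroma, but this shows why the phase-aligned 4x clock makes the arithmetic so direct.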

You could also recreate the format of the original output and pipe it into the comb filter in ld-decode. (Maybe I could - I just haven't had that much oomph outside of work lately)

Check out Video Demystified (there are some .pdf's of earlier editions floating around out there) - it goes into digital processing of composite signals.

_________________Happycube Labs: Where the past is being re-made, today. [meep!]

Quote:

To do color decoding easily, you need to get the signal going into the comb filter into a phase-aligned signal clocked at four times the color frequency. Once you have that, the math becomes a lot simpler, since each pixel is at a different 90-degree phase and you can do some relatively direct computations to get the YIQ signal, albeit with considerable cross color. And from there, making it a 2D comb filter is easier.

Thanks for the tip. Right now I load the line samples into a cubic spline so I can sample arbitrarily at any point along it, and it'll interpolate from the original sample values. I'm aiming to perform the colour decoding directly from this spline representation rather than a resampled form, as then any (further) loss converting from the sample data is minimized. I already decode the colour burst and phase lock to it, as like you I use the phase of the colour burst signal to help synchronize line start positions, so it should be simple enough to start performing colour decoding with the information at hand for each line. Now it's really just a matter of digesting the formulas I'm seeing on the page and figuring out exactly how to turn them into code; I just need to dive in and try. I'll start with very basic decoding, without any smarts to try and eliminate crosstalk, and work up from there. Once I've at least attempted that myself, I should hopefully understand enough about the process to be able to interpret what you've already done, and figure out ways to build from there.
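As a rough sketch of the arbitrary-position sampling idea, here's a Catmull-Rom cubic interpolator in Python. This is a simple stand-in for the actual spline code, and the function name and data are illustrative only; the point is that a stored scanline can be evaluated at any fractional sample position:

```python
def catmull_rom(samples, x):
    """Evaluate a stored scanline at fractional position x using Catmull-Rom
    cubic interpolation (an interpolating spline that passes through every
    original sample, clamping at the ends of the line)."""
    i = int(x)
    t = x - i
    p0 = samples[max(i - 1, 0)]
    p1 = samples[i]
    p2 = samples[min(i + 1, len(samples) - 1)]
    p3 = samples[min(i + 2, len(samples) - 1)]
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)

line = [0.0, 1.0, 4.0, 9.0, 16.0]  # samples of x^2 at integer positions
print(catmull_rom(line, 2.0))  # 4.0 (exact at a knot)
print(catmull_rom(line, 2.5))  # 6.25 (matches 2.5^2 between knots)
```

Because the curve passes exactly through the original samples, decoding from this representation avoids the extra loss of a fixed-rate resampling pass, which is the motivation described above.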

Quote:

You could also recreate the format of the original output and pipe it into the comb filter in ld-decode. (Maybe I could - I just haven't had that much oomph outside of work lately)

I have no doubt this could be done. I hacked your program to dump out raw frames during decoding to compare with my own, so I know what I'm producing is close to the output you were getting. It took me a damn long time to match the results of your line synchronization by the way, you did a great job.

Quote:

Check out Video Demystified (there are some .pdf's of earlier editions floating around out there) - it goes into digital processing of composite signals.

Yep, found that little gem just last week. A colleague at work heard what I was working on and unearthed it from a shelf. They got it for a project that was aborted 25 years ago, and it's been gathering dust in the corner ever since. I tracked down PDFs of later versions afterwards, but I'm glad I got my hands on the first edition. The later versions have a lot less information on analog video, as they focused increasingly on digital standards.

I've been busy working on something else too. This weekend I assembled this:

I'm still waiting on the inductor coil from another supplier, but I'm hoping that'll come in the mail tomorrow. After that, it should be good to go!

I'm also sitting on a stack of these:

It only cost $5 more to stamp out 20 of these boards, so that's what I did. If anyone wants a bare board, drop me a line.

Quote:

I've been busy working on something else too. This weekend I assembled this:

Oh, sweet open-hardware goodness! I like the blue boards.

I'm planning on doing some more work on the capture software for these to allow the sample rate to be switchable between 8xFSC for both PAL and NTSC. I've been a bit under the weather for the past couple of weeks, but I will try to get back to work on it shortly!

And syncing the color waveform to the originally sampled form sounds good and would probably increase color quality - never thought of trying that. I need to study how your code works in a debugger, once I find a good Linux one to use (dayjob's warmed me up to using them a bit).


simoni, I assume you are Simon Inns from http://www.domesday86.com? I saw your jason zone plate page: https://www.domesday86.com/?page_id=1332 and noticed the color artifacts. Do you guys think that is from the lines not being completely in sync? I thought the PAL lines were halved and then inverted in phase so every other line can cancel each other out? If the line sync was off by a little, maybe it would cause that artifacting?

I found this excellent PAL decoder: EXTRON YCS-100 (it is no good for NTSC though; made in USA, go figure?)

It says it has a built-in TBC. I got it for $18 on eBay with free shipping. See my attached picture (provided in 16x9 because yours was). It doesn't have the overall B&W resolution, but the color decoding is really good. For this capture, I used the 4300D-->YCS-100. From there, I split the S-Video and sent the Y through the Leitch X75 for a little NR and the C straight to my Blackmagic Design capture, then the Y out of the X75 to the Blackmagic capture. I think I may be one line off, color vs. black and white.

Not sure if this helps you guys, as I don't know if you can extract the code that does this from the device. But I do know it has a TBC in it, and was thinking maybe that is important for proper color decoding in PAL? TBC first, then color decode?

That's me! Those test-card pictures are simply for reference; the purpose of that page is to provide raw RF sample data for anyone working on the ld-decode software. Right now the PAL decoding isn't optimal, and it's being worked on.

Each test-card has a raw RF sample captured using the Domesday Duplicator board - you can use those samples to test the quality of a software decode. Although there are a number of hardware-based TBC and colour decoding solutions, the idea of ld-decode is to go from the raw RF sample to a recreated picture in software, so external/internal TBCs and colour decoders aren't really useful, since the sample is taken from the RF output of a laser in a modified player.

This is why I didn't bother to correct the aspect ratio of the jpg images - they are just there to let you know the type of test-card in the sample. I hope this makes the purpose clearer!

I think I understand the goals of these projects. I find it very interesting and wish I knew more about analog signal processing. I am trying to decide if I want to put together an ld-decode system. To that end, the question about the CLD-1050 vs. the 4300D was in respect to the resolution difference.

Does the fact that the 1050 displays more resolution have any bearing on the signal it would capture raw? Or is the 4300D resolution simply limited by the signal path it follows?

I also have both the CLD-1050 and the V4300D and have also noticed the same thing as 995tony. To me, it really does look like the 1050 has a slightly sharper and more detailed picture than the 4300D, but it's also a pretty noisy player. I'm sure that I read somewhere that the 1050 has a red laser similar to the X0, X9 etc., so maybe this has something to do with things? I think you should definitely check out this player for your Domesday project as it may well give the most detailed PAL picture. Any noise problems could almost certainly be removed with multiple-capture averaging/median.

I think "could work" is the important bit here. Unless someone with a lot of RF knowledge is willing to spend time studying the design of the LimeSDR or any other SDR, then it's very hard to say for sure if it would actually work. SDRs have RF tuning systems before the sampling stages (usually), as they are designed to receive radio signals. This also means they are expecting a signal within a certain amplitude range. You can be sure that your laserdisc player won't be outputting the correct thing.

I thought about this a lot during the design stages of the Duplicator, and my conclusion was that a straightforward sampling system was a simpler approach. If you're willing to throw down 300 dollars, simply give it to someone with the skill to build a Duplicator board. That's what open designs are for!

Quote:

I think "could work" is the important bit here. Unless someone with a lot of RF knowledge is willing to spend time studying the design of the LimeSDR or any other SDR, then it's very hard to say for sure if it would actually work. SDRs have RF tuning systems before the sampling stages (usually), as they are designed to receive radio signals. This also means they are expecting a signal within a certain amplitude range. You can be sure that your laserdisc player won't be outputting the correct thing.

I thought about this a lot during the design stages of the Duplicator, and my conclusion was that a straightforward sampling system was a simpler approach. If you're willing to throw down 300 dollars, simply give it to someone with the skill to build a Duplicator board. That's what open designs are for!

I have purchased two of the TV cards and they are on the way; I will try that first.

The LimeSDR interests me because it is full duplex, and I was thinking it might be possible to feed the LD RF in, demodulate, and send CVBS out to a broadcast-grade decoder in near real time?

The act of hooking the cable up to the TV card degrades the picture of the player a bit. Trying a different cable helped, but the blacks in between the whites don't seem as "black" as simoni's. I think I may have bumped some of the adjustment pots while routing my cable; the player doesn't seem to look as good as it did before I tried.

edit: I see that hooking up to the RF test point directly can cause this issue. I have an amp on the way that will hopefully help with that.
