Example of non-linear playing of different video blocks. Each block can be played at will, meaning a "Whoops" vid can be played whenever either side lets something slip in conversation, or a "*** you" if the player selects an aggressive/uncompliant response.

Here, the blocks are being played back in a random order (well, still hardcoded but you get the idea)

Next is to produce some more vids, get them to display on the smartphone and add some rudimentary interaction with text selection.

OK, what I really need now is a flexible unpacker routine. In the example below, the "looking right" sequence is represented by one file (separated by Oric software into 6 frames of 72 pixels (12 bytes) by 48 lines).

Looking Right


As an unpacked image everything is peachy and the following line generates the data for the Oric:

When this is applied to all the sequences then that 26K gets reduced to 16K (clearly a very useful saving). OK so far.

The problem comes at the unpack stage. Using the file_unpack routine I used in the Hnefatafl loading screen, I found it doesn't work. It spreads the image over the whole width of the screen and, of course, it unpacks the WHOLE image. I must confess that the file_unpack routine is borrowed from somewhere else and I don't fully understand how it works...that makes it very difficult to amend.

So, what I'm looking for is a more flexible, indexed file_unpack routine that can be given parameters such as the following:

Start position of image. (e.g. $a002)
Number of bytes width (e.g. 12)
Number of lines (e.g. 48)
An index (0,1,5 etc)

Maybe the whole file would need to be uncompressed first (but not displayed) and the other parameters then used to display the required frame. Ideally, it would be nice to be able to uncompress PART of the file (should require less memory and be faster?) but that depends on whether the .s file is "evenly distributed"...not sure if that is the case...
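To pin down what I mean, here is a Python model of the indexed routine (a sketch only, assuming the whole file has already been unpacked to a buffer of 6 consecutive frames; the names and the 40-byte HIRES screen row are mine):

```python
# Sketch: copy one frame out of an unpacked multi-frame buffer.
# Assumption: the buffer holds 6 frames back-to-back, each FRAME_W bytes
# wide by FRAME_H lines; an Oric HIRES screen row is 40 bytes.

FRAME_W = 12    # bytes per frame row (72 pixels / 6 pixels per byte)
FRAME_H = 48    # lines per frame
SCREEN_W = 40   # HIRES bytes per screen row

def copy_frame(unpacked, index, screen, dest_col=0, dest_row=0):
    """Copy frame `index` from the unpacked buffer onto the screen buffer."""
    base = index * FRAME_W * FRAME_H
    for line in range(FRAME_H):
        src = base + line * FRAME_W
        dst = (dest_row + line) * SCREEN_W + dest_col
        screen[dst:dst + FRAME_W] = unpacked[src:src + FRAME_W]
```

The 6502 version would do the same thing: compute a start address from the index, then copy FRAME_W bytes per line while stepping the destination by 40.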

Hi barnsey123, I'm reading your posts with big interest, expecting to watch movies on the Oric.
Now seriously, below is a way to save some bytes.
I took your attached picture and split it into 6 individual frames, then using GIMP I made the left, right and bottom bands equal in all frames:

Now you can use the first original frame as background and overlay
only the different parts (in files result-02 .. result-06.png)
with X-offset=11 and Y-offset=0 relative to background position.
New size of overlay images is 49x41 pixels.
Here is the python script:

As you can see, this is the original android script (slightly modified by me)
which generates the image sequence in the same way for any android device.
When executed, the script reports the offsets and overlay size.
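The core of such a script can be sketched in a few lines (this is my own minimal version, not the attached one): compare each frame against the background and report the bounding box of the pixels that differ, which gives the overlay offset and size directly.

```python
# Sketch: find the overlay region of a frame relative to a background.
# Both images are lists of equal-length rows of pixel values.

def diff_bbox(background, frame):
    """Return (x, y, w, h) of the changed region, or None if identical."""
    xs, ys = [], []
    for y, (brow, frow) in enumerate(zip(background, frame)):
        for x, (bp, fp) in enumerate(zip(brow, frow)):
            if bp != fp:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys),
            max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
```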

@iss
Thanks, you've probably solved a separate problem that I had and I'm definitely going to make use of that technique. Imagine a space suit scene, I want to use the same spacesuit, overlaid on different backgrounds, with different faces shown through the visor (all chattering away at the same time)

For this smartwatch scene I want to have a bit more video though. It's not just the face that changes; there's all sorts of other movements. See the TAP below for an example. This includes more video but now with a better way of scripting it. I can play partial "chunks" to make full use of the chunks I have (a partial "right head turn" rather than the full deal can add some variety to the "yabber" chunk).

Ignore the bit where he checks his watch - it's terrible - a real rush job and I'm tired...BTW, this is not meant to represent the final video...just playing.

Lesson 1: When making videos that are supposed to be contiguous...make 'em on the same day, in the same light, with the camera at the same angle...try not to be on a swivel chair...

The FilePack unpack routine in 6502 assembler is already in the osdk, you don't have to use the C version.
Now yes, it will not work with a 'width offset' simply because that would make the code extremely complex: Just use a temporary unpack buffer and copy it to screen.

Dbug wrote:The FilePack unpack routine in 6502 assembler is already in the osdk, you don't have to use the C version.
Now yes, it will not work with a 'width offset' simply because that would make the code extremely complex: Just use a temporary unpack buffer and copy it to screen.

I have a cunning plan...a whole range of video FX awaits...

Step 1: get the unpack routine working properly
Step 2: routine for flipping the video vertically (should be pretty simple...famous last words!)
Step 3: flip horizontal (XOR on last 6 bits) and read video buffer backwards on each row

So, in some circumstances (say a robot is rolling towards you) we only need a video of half a robot (as long as the background is symmetrical, e.g. a corridor; a walking thing can be achieved by flipping the frames in a different order). Should be a blast!

If you are serious about doing pseudo-video on the Oric, I think you should consider some alternative ideas:
- You could refresh only every second line, using some color changes to simulate the screen persistence like we did in the Assembly 2002 invitation intro: http://www.youtube.com/watch?v=EfegwPu_0xA. There are only two colors, green and red, the yellow is just made by your brain when pixels are close
- Using FilePack is not a good idea. Normally things like video or audio should be encoded with a lossy method, not a lossless one, using the previous frame as the base to create the new frame with a combination of commands like "copy this 8x8 pixels block of data from the previous image by two pixels to the right and one down" or "fill this 4x4 block of pixels with this color".
- I would suggest to not have dithering in the source video, and apply the dithering at run time when you display the picture, that would increase tremendously the ease of compression.

Dbug wrote:If you are serious about doing pseudo-video on the Oric, I think you should consider some alternative ideas:
- You could refresh only every second line, using some color changes to simulate the screen persistence like we did in the Assembly 2002 invitation intro: http://www.youtube.com/watch?v=EfegwPu_0xA. There are only two colors, green and red, the yellow is just made by your brain when pixels are close
- Using FilePack is not a good idea. Normally things like video or audio should be encoded with a lossy method, not a lossless one, using the previous frame as the base to create the new frame with a combination of commands like "copy this 8x8 pixels block of data from the previous image by two pixels to the right and one down" or "fill this 4x4 block of pixels with this color".
- I would suggest to not have dithering in the source video, and apply the dithering at run time when you display the picture, that would increase tremendously the ease of compression.
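Dbug's delta-command idea could look something like this as a decoder (a rough sketch only; the command format here is invented for illustration, not what any existing tool emits):

```python
# Sketch of a command-based frame decoder: each command either fills a
# block with a value or copies a block from the previous frame at an
# offset, as in "copy this block two pixels right and one down".

def apply_commands(prev, width, commands):
    """prev: flat bytearray of the previous frame; returns the new frame."""
    new = bytearray(prev)              # start from the previous frame
    for cmd in commands:
        if cmd[0] == "fill":           # ("fill", x, y, w, h, value)
            _, x, y, w, h, v = cmd
            for row in range(y, y + h):
                for col in range(x, x + w):
                    new[row * width + col] = v
        elif cmd[0] == "copy":         # ("copy", x, y, w, h, dx, dy)
            _, x, y, w, h, dx, dy = cmd
            for row in range(h):
                for col in range(w):
                    new[(y + row) * width + (x + col)] = \
                        prev[(y + row + dy) * width + (x + col + dx)]
    return new
```

The encoder side (choosing which commands best approximate the next frame) is where the lossy part and all the cleverness lives.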

Actually I've been thinking of a different approach. I'm going to write something on Windows that will take all the images in a given folder (let's say a.png, b.png and so on) and run pictconv so we end up with a.s, b.s, c.s etc.

Then we create a baseline array (initially populated with a.s...the file a.s will remain untouched) and perform a byte-by-byte comparison with the next file (b.s). If a corresponding byte is identical, the value 255 gets written to b-out.s; otherwise the differing byte is written to b-out.s unchanged. b.s becomes the new baseline and is then compared with c.s, and the process repeats until there are no more files.

We then have a.s, b-out.s, c-out.s etc. these files are then compressed using filepack as normal. The compression efficiency should be much better as many of the bytes will be the same. Only the changed bytes will have a value < 255.

To display it back on the oric we unpack a.s to a buffer and display as normal. Then we unpack b-out.s outputting only the bytes that are < 255.

If FilePack is not more efficient with consecutive identical bytes then it'll be a waste of time, but I'm guessing we will see much smaller files (and so longer, more detailed or larger sequences). No sound of course, and only monochrome...but you can't have everything.
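The comparison step above can be sketched in a few lines (a Python model, assuming 255 never occurs as real image data, which would need masking out beforehand):

```python
# Sketch of the baseline-comparison delta: identical bytes become the
# sentinel 255, changed bytes pass through unchanged.

SENTINEL = 255

def make_delta(baseline, frame):
    """Contents of an "out" file: 255 where unchanged, else the new byte."""
    return bytearray(SENTINEL if b == f else f
                     for b, f in zip(baseline, frame))

def apply_delta(screen, delta):
    """Playback: write only the bytes that are not the sentinel."""
    for i, v in enumerate(delta):
        if v != SENTINEL:
            screen[i] = v
```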

yay...got the unpacking correct. I must admit I was struggling earlier and I nearly fired a distress flare. The unpacking adds a slight overhead as each chunk of 6 frames gets unpacked. It's noticeable but not a show stopper.

It might be an issue where each frame is one file (rather than 6 frames per file here) but I'm not worried at the moment. If the file sizes are small I doubt we'd notice anything.

The C version of file_unpack is way too slow...so I'm using the one from the osdk. I was using the ASM one in Hnefatafl but I had the C function defined also....duh! So I can save some memory over there.

Anyway, by having each Chunk packed the memory required gets reduced from 32K to 24K - a VERY useful saving of 8K! And that's WITHOUT the further savings I expect to gain from using MPEG-style playback (where only the bytes containing the changed pixels are recorded).

A development on the previous idea might mean that the "out" files (e.g. b-out.s) might be a list like this (awkwardly, the first value would have to be 16-bit):

byte number,value
319, $10
320, $1F
1015,$22
2026,$0F
etc

On average this should mean MUCH smaller out files WITHOUT the overhead of unpacking (only the first frame/picture needs to be unpacked). The position on screen of the byte value would be calculated relative to the start address of the video and the number of bytes WIDE and bytes HIGH.
Of course, there could be times when the out file may be BIGGER (if many bytes have changed) so I'm going to suggest a change to the OSDK. We add a new command based on filepack but called VideoPack. This would create two files (a normal filepacked one and a bytepacked one), the bytepacked one would contain a header distinguishing it from a filepacked one so a different routine would be called for unpacking purposes. This way we don't have to choose which method is most efficient as VideoPack would do it for us (only the smallest file would be saved).
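The bytepacked format described above can be sketched like this (names are mine; offsets stored little-endian, three bytes per changed screen byte):

```python
# Sketch of the hypothetical "bytepacked" format: a stream of
# (16-bit byte offset, new value) triples. VideoPack would keep this
# only when it comes out smaller than the FilePacked delta.

def bytepack(baseline, frame):
    out = bytearray()
    for i, (b, f) in enumerate(zip(baseline, frame)):
        if b != f:
            out += bytes([i & 0xFF, i >> 8, f])   # offset lo, offset hi, value
    return out

def byteunpack(screen, packed):
    for i in range(0, len(packed), 3):
        offset = packed[i] | (packed[i + 1] << 8)
        screen[offset] = packed[i + 2]
```

At 3 bytes per change it wins whenever fewer than roughly a third of the bytes differ between frames, which is exactly why having VideoPack pick the smaller of the two outputs makes sense.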

Is this the start of Oric-O-Vision?

It all still needs to be written but possibilities are opening up. Exciting stuff.

The next two "videos" have been extracted from animated GIFs. They are not very fast as they are running at quite high resolution (the Star Wars shuttle craft is the biggest at a resolution of 198 x 147). The Asteroid is 120x120.

To extract images (in order) from your selected GIF and perform RESIZING at the same time (saves lots of fiddling about) do this:

1. Install ImageMagick on your PC
2. Download an animated GIF (tons of them on the interweb)
3. From command line (navigate to directory where the gif is)

Now, ImageMagick has a function to detect and display differences between images, and this could be a cool way of generating images that contain only the changes. Got to do some work on that as it would save some coding. This would make the packing process more efficient (just need a different display routine after unpacking, maybe ignoring values of 000000). This may not be faster to draw but should reduce memory footprint.

Shown above are some examples of using different types of dithering (note -d0 in pictconv to turn off dithering)

It's difficult to determine which is best. The craters on the asteroid are brought out more using o2x2,6 but it can look a bit washed out. o2x2 (without the ,6) shows detail elsewhere but the craters don't show up so much.

There is a great deal of difference between the output sizes (of the final TAP file) and a wide variety of "textures" (see below; these are just the ones I've tested that also look pretty good: h8x8a was bigger and looked worse...)

I've developed a really good technique for extracting the differences between images and only recording that changed data (so the packing is more efficient). The previous packed images for the Asteroid GIF resulted in a 28.5K tape file.

What this bat file does is this:
Take the GIF file and extract all the images as resized, dithered PNG's.
File 1 is processed normally (no changes, we want the whole image)
File 2 is compared to File 1
File 3 is compared to File 2 and so on...
In each comparison we create a MASK file (of ALL the pixels that have changed)
So we have say: FILE2-Original.PNG and FILE2-Mask.PNG
We then use the MASK file on the original, which yields ONLY the interesting data: FILE-X.PNG.

FILE-X.PNG is turned into Oric data using the usual Pictconv, FilePack and Bin2Txt.
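The masking logic the bat file drives through ImageMagick boils down to this (a pure-Python model, not the actual bat file): keep only pixels that changed and blank everything else.

```python
# Sketch of the mask step: unchanged pixels become BLACK so the Oric-side
# routine can skip them. Caveat: a pixel that genuinely changes TO black
# is indistinguishable from "unchanged" in this scheme.

BLACK = 0x000000

def make_masked_frame(prev, cur):
    """prev/cur are lists of rows of RGB ints; returns the FILE-X image."""
    return [[c if c != p else BLACK
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, cur)]
```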

This means MUCH smaller .s files and hence much smaller program. Down to 21.4K!

For playback (within the Oric) we have to be a bit careful.

File 1 has to be displayed as is (the whole file)...that works OK.
File 2 and subsequent files only need the changed data to be printed. If we print the whole file it will overwrite the original data with BLACK. And this is where the fun starts and I am stuck...
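One direction past this (a sketch, not a finished answer) is the "ignore 000000" playback hinted at earlier: write only the non-zero bytes of the delta frame. The remaining snag is that a byte whose pixels genuinely change to all-black also encodes as zero, so it would never get drawn.

```python
# Sketch: skip-zero playback of a masked delta frame. Zero means
# "leave the screen byte alone"; anything else overwrites it.

def draw_delta(screen, delta):
    for i, v in enumerate(delta):
        if v != 0:
            screen[i] = v
```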