Log..
Suddenly realised that by changing the sample format I had to recapture the example sample file. However, I decided to concentrate on getting some instrument and sfx samples done.
Around 90 instruments are now in LHL format, together with around 80 sfx such as lines from Monty Python. I think I originally got these sample disks either from Steve Marshall or Steven Meechen; either way, thanks dudes.
Now I have to compile them into the equivalent example bank to test in WAVE.

Well, the formula for calculating a note frequency in Hz works in a similar way, just inverse math:

A1 = 110 Hz, A2 = 220 Hz, A3 = 440 Hz, A4 = 880 Hz and so on.

In a matter of seconds we realize a glissando from A1 to A2 has a range of 110 Hz while a glissando from A4 to A3 is 440 Hz long.

The sound chip would need to be tuned to some tempered scale to compensate for this, e.g. if there were exactly six steps between each half note, each octave would be 72 steps out of 256 possible. It would offer a range of 3.5 octaves. Fewer steps between each note would offer a wider range if the sound chip can reproduce those frequencies.
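A quick sketch of this arithmetic in Python (using the A1 = 110 Hz convention from the post above):

```python
# Note frequencies double every octave (A1 = 110 Hz convention).
def a_freq(octave):
    """Frequency in Hz of the note A in the given octave."""
    return 110.0 * 2 ** (octave - 1)

# Glissando ranges in Hz grow with pitch:
print(a_freq(2) - a_freq(1))   # A1 -> A2: 110.0 Hz
print(a_freq(4) - a_freq(3))   # A3 -> A4: 440.0 Hz

# With 6 steps per half note, one octave costs 6 * 12 = 72 values,
# so an 8-bit range of 256 values spans 256 / 72, roughly 3.5 octaves.
steps_per_semitone = 6
values_per_octave = steps_per_semitone * 12
print(256 / values_per_octave)
```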

carlsson wrote:Well, the formula for calculating a note frequency in Hz works in a similar way, just inverse math:

A1 = 110 Hz, A2 = 220 Hz, A3 = 440 Hz, A4 = 880 Hz and so on.

In a matter of seconds we realize a glissando from A1 to A2 has a range of 110 Hz while a glissando from A4 to A3 is 440 Hz long.

The sound chip would need to be tuned to some tempered scale to compensate for this, e.g. if there were exactly six steps between each half note, each octave would be 72 steps out of 256 possible. It would offer a range of 3.5 octaves.

This is unnecessary. You are referring to pitchbend?
Pitchbend is a movement through a range of notes by a specific pitch step. Each pitch is a value sent to the sound chip.
This method relies on the distance between semitones remaining the same throughout the entire range.
However, for the AY and other log-based pitch devices, a different scheme should be chosen.
Now the distance between semitones, measured in semitone steps, is always constant, as is the distance in half, quarter, etc. semitones.

The method I am using tracks the semitones (or half notes) the pitchbend passes through on its way from start to destination.
It then calculates the difference in pitch between each pair of adjacent semitones and divides it by 2, X times (i.e. by 2^X), where X is the value specified in the pitchbend parameter, to get the fractional pitch step. An example of the code is given below.
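The original code did not survive in this log, so here is a hypothetical Python sketch of the technique as described; the tone-period formula and 1 MHz clock are illustrative assumptions, not the real WAVE tables:

```python
# Hypothetical sketch of the described pitchbend method: cross each
# semitone on the way to the destination in 2**x equal steps.
AY_CLOCK = 1_000_000  # assumed 1 MHz AY clock

def period(semitone):
    """Illustrative AY-style tone period for a semitone index (0 = A1)."""
    freq = 110.0 * 2 ** (semitone / 12)
    return round(AY_CLOCK / (16 * freq))

def bend(start, dest, x):
    """Yield successive pitch values from semitone `start` to `dest`."""
    step_dir = 1 if dest > start else -1
    for s in range(start, dest, step_dir):
        # Difference in pitch between this semitone and the next,
        # divided by 2, x times -- i.e. by 2**x.
        delta = period(s + step_dir) - period(s)
        frac = delta / (1 << x)
        for i in range(1 << x):
            yield period(s) + frac * i

# Every semitone takes exactly 2**x steps, whatever the octave.
pitches = list(bend(0, 2, 2))
print(len(pitches))  # 8 values: 4 per semitone
```

The point of the sketch is the inner loop: since each semitone span is subdivided by the same power of two, the bend rate is uniform in musical terms even though the raw pitch deltas differ from octave to octave.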

The result of this code is a pitch step that will take the note from the current semitone to the next in an equal number of steps, regardless of which semitone in which octave is chosen.
Also note that this very same code and technique can be applied to SID, which uses 16-bit fractional stepping to approximate the notes it is playing.

Technically this method is still not perfect, since the steps between semitones should increase or decrease as the accumulation approaches the next semitone, but I doubt this will prove noticeable.

carlsson wrote:Well, the formula for calculating a note frequency in Hz works in a similar way, just inverse math:

A1 = 110 Hz, A2 = 220 Hz, A3 = 440 Hz, A4 = 880 Hz and so on.

In a matter of seconds we realize a glissando from A1 to A2 has a range of 110 Hz while a glissando from A4 to A3 is 440 Hz long.

The sound chip would need to be tuned to some tempered scale to compensate for this, e.g. if there were exactly six steps between each half note, each octave would be 72 steps out of 256 possible.

Thinking about what you said a bit more, I agree with you up to this point.

carlsson wrote: It would offer a range of 3.5 octaves. Fewer steps between each note would offer a wider range if the sound chip can reproduce those frequencies.

That last part I don't understand. In fact I would go further and suggest that greater steps would permit a wider range whilst the sound chip can still reproduce those frequencies.
I'm currently testing pitchbend. I got my code math a bit wrong, but I believe my technique is still perfectly sound.

For example, the step provided in the parameter is multiplied by 4 to provide the code offset for the Branch instruction in the Shifting stage.
However, it is also used as a countdown between semitones, and so must be the inverse of the other.

Ok, quite a lot has changed in WAVE since I last updated here.
I have now totally rewritten the Sample and SID play routines, so now you can have both at the same time. The constraints are that using the editor during playback of both SID and samples will slow things down (including the music), samples are only played back at 5 kHz, and sample looping is not supported.
However, the benefits far outweigh the drawbacks.

Also finished testing and debugging the pitchbend routines, which now work perfectly with Effects and Ornaments. Using SID on a channel that is pitch-bent will also bend the SID frequency, creating some very cool, versatile and synchronous sweeps.

Now just to fix a couple of bugs in other areas, do a couple of demo songs, implement it all into the compiler and player, and fix Chema's Bar issue. Phew! I feel IM music getting a lot closer now.. at bloomin' last!

Brilliant! I am truly astounded at the quality of your hacking. Never would I have imagined, in my early desperate Oric-1 hacking years, that I would have access to such genius in the 21st Century, nor that it would give me hope for the Oric as a platform for the future ..

My son better appreciate all this hoarding I'm doing for him when he's older!

I love how the first notes sound (apart from the astonishing SID effects later on). I will look forward to the next version of WAVE and have a look at this example to create new ornaments and effects to shape up the notes in the 1337 tune!

One important point to note is that SID doesn't use any memory beyond the code that generates it. However, it does use substantially more CPU time to play, about 60% IIRC.

Technically it's possible to have 2-channel SID instead of SID+Sample, but it depends on what people want.

I'm also working on the possibility of making the choice more dynamic, so that during play one could select in the music whether to use 2-channel SID or Sample and SID.

Anyway, I'm also working on bugs Chema and I have found in the tracker and compiler, plus some more enhancements. Unfortunately (due to the sheer size of the flat music file) I am having to optimise to fit all the stuff in.
Ideally I'd like to give the option of how much memory is assigned to patterns and how much to samples, so that larger projects could get more patterns by losing sample memory.

carlsson wrote:The sound chip would need to be tuned to some tempered scale to compensate for this, e.g. if there were exactly six steps between each half note, each octave would be 72 steps out of 256 possible. It would offer a range of 3.5 octaves. Fewer steps between each note would offer a wider range if the sound chip can reproduce those frequencies.

Let me rephrase and expand on this reasoning.

Assume a sound chip in which each voice can have 256 different values: 0 for silence, and the rest will each produce a note of some kind.

Most known sound chips will produce a sound relative to frequency: a low value gives a low frequency, a high value gives a high frequency. As we have seen, the delta frequency between two octaves is fewer Hz the lower the notes we want to play, so each successive value yields an increasingly larger jump in frequency. We get poor note resolution at the upper end, and music sounds detuned. The bigger the range of possible values, the better the resolution. For example, the SID chip uses 16-bit values, which gives it quite good note resolution.
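For a divider-based chip like the AY (where the register value actually sets a period, so the value-to-frequency relation is inverted), the detuning at the top can be quantified; a sketch, assuming a 1 MHz master clock and the A1 = 110 Hz convention:

```python
# Sketch: note resolution of a divider-based chip (AY-style,
# f = clock / (16 * n)). The cent error of the nearest representable
# frequency grows in the upper octaves. Clock value is an assumption.
import math

CLOCK = 1_000_000  # assumed 1 MHz master clock

def cent_error(target_hz):
    """Cents between target_hz and the nearest divider frequency."""
    n = max(1, round(CLOCK / (16 * target_hz)))
    actual = CLOCK / (16 * n)
    return abs(1200 * math.log2(actual / target_hz))

for octave in range(1, 7):
    a = 110.0 * 2 ** (octave - 1)   # A1 = 110 Hz convention
    print(f"A{octave}: {cent_error(a):.2f} cents off")
```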

Now assume a different sound chip which is hard-tied to a well-tempered Western music scale. Our friends in Arabia, Africa, Asia and the Caribbean may not buy this computer, and neither will the folk musicians who rely on microtonality, but that is another matter.

This new sound chip will relate each value to a given note instead of a frequency. Let us assume 255 possible values, and we get a well-tempered note every 5 values: 001 = C0, 006 = C#0, 011 = D0 and so on.

That gives us room for 51 semitones, which corresponds to four whole octaves plus three more semitones: C4, C#4, D4. The sound chip cannot produce any higher notes than that.

We see that between each pair of semitones there are four frequencies, well spaced out. In practice the frequency distance between semitones will get larger as we play higher notes, but the sound chip compensates for us. We can still make quite OK glissandi and other sound effects. I'm not sure about vibrato, whether the human ear prefers hearing vibratos at a relative distance from the original note or at a fixed number of Hz from it.

Now assume a third sound chip. This chip is also well-tempered, but produces a semitone every third value: 001 = C0, 004 = C#0, 007 = D0 and so on. It gives us room for 255 / 3 = 85 semitones, which is a little over seven octaves. There certainly are 8-bit sound chips capable of playing such low and high frequencies, even if you usually don't use the lowest or the highest values, since (1) the TV or speaker may not be able to reproduce them and (2) you won't hear them anyway. Perhaps your dog will hear them, so this is a sound chip made for dog owners.

With this sound chip, you only have two possible values between each pair of semitones. That may be enough for most people, but glissandi and vibratos may sound like arpeggio effects rather than smooth sweeps.
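Both hypothetical well-tempered chips reduce to one small calculation, with the value spacing per semitone as the parameter (following the 001 = C0 convention above):

```python
# Both hypothetical "well-tempered" chips map register values straight
# to semitones: value 1 = C0, then one semitone every `spacing` values.
def semitone_range(spacing, max_value=255):
    """Number of whole semitones an 8-bit register can address."""
    return (max_value - 1) // spacing + 1

print(semitone_range(5))  # 51 semitones: four octaves plus C4, C#4, D4
print(semitone_range(3))  # 85 semitones: a little over seven octaves
```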

I hope this lengthy message explains what I meant. All of this is theoretical and has no real bearing on the discussion. I just wanted to point out that poor note resolution in the upper octaves is not unexpected, given the premises on which these sound chips actually produce sound.