July 05, 2015, 04:50:54 am

OK, once upon a time I made a Web Audio VGM player featuring my JavaScript YM2612. I'd like someone here to use it in a Sphere project, preferably to try out the upcoming 1.6 Sound(Effect?) APIs, but writing the YM2612 output to a wave file (find my old NWaveform project if you want existing Sphere-compatible RIFF WAVE file writing) and playing that would also work. I haven't checked if minisphere has implemented any 1.6 functionality yet, but I'd personally like to see the audio enhancements from 1.6 added among the first set of 1.6 APIs.
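For anyone who wants to go the wave-file route without digging up NWaveform, here's a minimal sketch of the 44-byte RIFF WAVE header for 16-bit PCM. The layout is the standard WAVE format; the function name and parameters are just for illustration.

```javascript
// Build a 44-byte RIFF WAVE header for 16-bit PCM audio.
// Append interleaved little-endian 16-bit samples after this header.
function makeWavHeader(numFrames, sampleRate, channels)
{
    const bytesPerSample = 2;  // 16-bit PCM
    const dataSize = numFrames * channels * bytesPerSample;
    const buf = new ArrayBuffer(44);
    const v = new DataView(buf);
    const writeTag = (off, s) => {
        for (let i = 0; i < s.length; ++i)
            v.setUint8(off + i, s.charCodeAt(i));
    };
    writeTag(0, 'RIFF');
    v.setUint32(4, 36 + dataSize, true);  // RIFF chunk size (little-endian)
    writeTag(8, 'WAVE');
    writeTag(12, 'fmt ');
    v.setUint32(16, 16, true);            // fmt chunk size
    v.setUint16(20, 1, true);             // audio format: 1 = PCM
    v.setUint16(22, channels, true);
    v.setUint32(24, sampleRate, true);
    v.setUint32(28, sampleRate * channels * bytesPerSample, true);  // byte rate
    v.setUint16(32, channels * bytesPerSample, true);               // block align
    v.setUint16(34, 16, true);            // bits per sample
    writeTag(36, 'data');
    v.setUint32(40, dataSize, true);      // data chunk size
    return buf;
}
```

The little-endian flag on every multi-byte write matters; WAVE files are little-endian throughout.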

If you can understand it, feel free to use the player's source as a guideline for how to use the YM2612, especially vgm.js, as that contains the code that fills the audio buffer with the chip's output. I understand that using my YM2612.js as-is is not trivial, and I don't expect anyone who takes up this request to make it happen in a few hours. This will provide another set of eyes to help me ascertain how much of the code is web-dependent, where possible performance bottlenecks may be, and how the YM2612 API can be simplified.

I'm interested in creating tunes or sounds in the style a Mega Drive could produce, but this script looks far from trivial to use, requiring knowledge of the chip and its registers. I might mess around with it a bit, but as with anything of this caliber, either it won't happen or I'll lose interest early on because of the complexity. We'll see.

Well, the emulation seems to work in minisphere at least. No errors, and all the initialization code succeeds without issue. I even got it to give me back a buffer full of zeroes during updates, so it looks like everything is working well enough. The issue now is: how do I get the buffer to the sound card?
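For reference, the setup I'm testing is roughly the following sketch. The buffer sizing is mine; the ym.mixStereo() call is commented out and hypothetical, based on the player's source, so the real signature may differ.

```javascript
// Sketch: one engine update's worth of chip output, interleaved stereo.
const FRAMES_PER_UPDATE = 32;  // frames generated per update at 60fps
const CHANNELS = 2;            // stereo (left/right interleaved)

function makeUpdateBuffer()
{
    // Float32Array is zero-filled on creation, which matches the
    // "buffer full of zeroes" seen before any notes are keyed on.
    return new Float32Array(FRAMES_PER_UPDATE * CHANNELS);
}

const buf = makeUpdateBuffer();
// ym.mixStereo(buf, FRAMES_PER_UPDATE);  // hypothetical call into YM2612.js
// ...then somehow hand `buf` to the sound card.
```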

I did notice that trying to generate too many audio frames per update (say, oh, 1000) has an adverse effect on the game's framerate. Even generating just 32 frames per update (60fps) made my CPU usage shoot from around 5-6% in the map engine up to 14%. It appears to be a rather hefty task to emulate this chip...

edit: Hm, this might be the killer feature I was looking for to get minisphere to v1.5, a low-level audio API.

Last Edit: July 05, 2015, 12:34:52 pm by Lord English

miniSphere 5.1.3 - Cell compiler - SSj debugger - thread | on GitHubFor the sake of our continued health I very much hope that Fat Cerberus does not become skilled enough at whatever arcane art it would require to cause computers to spawn enourmous man eating pigs ~Rhuan

What is the specific format of the samples output by ym.update()? Having read a little about the chip I'm going to assume 9-bit signed int, but I'd like to know for sure so I can test this out with Audialis.



It's actually 14-bit signed when all six channels are summed, and clamped rather than normalized. But if you use ym.mixStereo() instead, to interleave into an existing stereo audio buffer that expects -1.0:1.0, it will automatically attenuate the samples to -1.0:1.0 before mixing.
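In other words, per sample the attenuation amounts to something like the sketch below. The divisor of 8192 is an assumption inferred from the stated 14-bit signed range (-8192..8191); check mixStereo() in the actual source for the real scaling.

```javascript
// Sketch of 14-bit-signed to float attenuation, as described above.
function attenuate14(sample)
{
    // Clamp first -- the summed chip output clamps rather than normalizes...
    const clamped = Math.max(-8192, Math.min(8191, sample));
    // ...then scale into the -1.0:1.0 range a float buffer expects.
    return clamped / 8192;
}
```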