An algorithm for line antialiasing, which was presented in the article "An Efficient Antialiasing Technique" in the July 1991 issue of Computer Graphics, as well as in the article "Fast Antialiasing" in the June 1992 issue of Dr. Dobb's Journal.

if sparse then
    for i as myint = des->l to des->u
        n(i).new_axon randc(src->c, src->l)
        n(i).a(n(i).u).w = rand_weight
    next
else
    for i as myint = des->l to des->u
        for j as myint = src->l to src->u
            n(i).new_axon j
            n(i).a(n(i).u).w = rand_weight
        next
    next
end if

It is unclear to me what problem is being solved here. You prove an ANN's viability by giving it a problem to solve.

The number of possible wiring combinations quickly becomes enormous as the number of components increases, so the problem cannot be solved in real time by trying different randomly wired units. The whole point of an ANN is that it can learn, via techniques such as backpropagation, to move its wiring toward a better and better solution. Only the first attempt is random, because you have to start somewhere.
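To illustrate the point above, here is a minimal sketch (in Python, not dafhi's FB code) of a single linear unit: the weights start random, but every later step is directed by the error gradient rather than by trying fresh random wirings.

```python
import random

random.seed(0)

# One linear unit y = w*x + b, learning the toy mapping y = 2x + 1.
# The initial weights are random -- "you have to start somewhere" --
# but each update follows the error gradient, not another random guess.
w, b = random.uniform(-1, 1), random.uniform(-1, 1)
data = [(x, 2 * x + 1) for x in range(-5, 6)]

for epoch in range(200):
    for x, target in data:
        err = (w * x + b) - target
        w -= 0.01 * err * x      # gradient of 0.5*err^2 w.r.t. w
        b -= 0.01 * err          # gradient of 0.5*err^2 w.r.t. b

print(round(w, 2), round(b, 2))  # converges near w = 2, b = 1
```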

It would help if you could explain in words what the program is actually doing. I notice the word "evolutionary" in the top comments. You can evolve a net, which is a kind of learning, by selecting the best of a set of random networks, mutating some of their connections, and then repeating that process. Is that what you are doing? What exactly are string a and string b? You run the program and it does what before printing string b? I am responding because learning machines interest me, but I am unsure what exactly this program does.

This net is evolutionary in that it randomizes weights and biases, and it will even rewire.

The basic neuron I imagine having inputs (axons) coming from the left. An axon stores a source-neuron index (also coming from the left) and a weight. The neuron receiving the signal adds a bias. A neuron can have several axons (again, coming from the left).
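That layout can be sketched as a small data structure (the names here are my own illustration, not dafhi's): an axon holds a source index and a weight, and the neuron sums its weighted inputs and adds its bias.

```python
from dataclasses import dataclass, field

@dataclass
class Axon:
    src: int       # index of the source neuron "to the left"
    w: float       # connection weight

@dataclass
class Neuron:
    bias: float = 0.0
    axons: list = field(default_factory=list)   # several inputs per neuron

    def wsum(self, outputs):
        # weighted sum of upstream outputs, plus this neuron's bias
        return sum(a.w * outputs[a.src] for a in self.axons) + self.bias

# e.g. a neuron fed by source neurons 0 and 1
n = Neuron(bias=0.5, axons=[Axon(0, 1.0), Axon(1, -2.0)])
print(n.wsum([3.0, 1.0]))   # 1.0*3.0 + (-2.0)*1.0 + 0.5 = 1.5
```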

The output layer, bn_out (biases, nodes out), is an array of neurons overlapping the output data, which, when compared, gives the error. You can tap a neuron with wsum (see the last loop in Main).

sean_vn wrote: Anyway, I had this to say about quantization and evolution:

My understanding of quantization is one of fidelity. Copy a digital picture many times and, provided the noise is below the threshold, each copy will remain the same. Copy an analogue picture many times and it quickly degrades into noise. With a digital image (or data) you can add redundancy to protect against the occasional large noise spike that might change a data value.

@dafhi, The reason I would have liked it explained in detail, in words, is that the code is too long and complicated for me to follow or decipher in a reasonable time to figure out what it is actually doing and how. From what I can make out from the last example, you claim to have evolved a net that, given 00011011 as input, will output the data_out string, but how it does that from random weight changes is unclear to me. I just can't see what is going on ...

I just feel there is something not quite right, as random changes in a complex system should not converge; they should just jump all over the place.
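Random changes alone do jump all over the place; what makes an evolutionary search converge is selection, i.e. only keeping a mutation when it reduces the error. A minimal (1+1) hill-climber sketch (my own toy, not dafhi's scheme):

```python
import random

random.seed(1)

target = [0.0, 1.0, 1.0, 0.0]            # toy target vector

def error(w):
    return sum((a - b) ** 2 for a, b in zip(w, target))

best = [random.uniform(-1, 1) for _ in target]   # random start
for step in range(2000):
    trial = [w + random.gauss(0, 0.1) for w in best]  # random mutation
    if error(trial) < error(best):                    # selection
        best = trial

print(error(best))   # driven close to zero by selection alone
```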

Well, you can quantize 10.68372892 to 10.6 or 10 or 11, whatever way you want to do it. I was just saying there are some reasons to avoid anything like that, if possible, when trying to solve problems by evolution... change of topic... I was looking at Fortran as an alternative to FB. However, I read that Basic actually was created as an improved version of Fortran. Fortran has continued to be used at the highest tiers of scientific computing, while Basic is rather looked down on. That's a bit unfortunate.

OK, I just noticed "Permuted Congruence Generator" in the top comments. I was confused because I thought this was a neural network of some kind that could learn input/output patterns. By random network, I assume you actually mean a network that produces random numbers. Sorry I messed up the thread; I would go back and erase my posts if possible.

BasicCoder2, I'm pretty sure it is an attempt at a sparse neural network.

dafhi implemented the PCG PRNG function in FB language simply to have control over the type of random numbers that he wanted to utilize in his routines. (He also implemented Xiaolin Wu's line algorithm, etc.).
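For reference, here is a minimal PCG32 (permuted congruential generator) sketch in Python, following O'Neill's published LCG-state / XSH-RR output recipe. I'm assuming dafhi's FB version follows the same scheme; the constants and seeding below are the reference ones, not necessarily his.

```python
MASK64 = (1 << 64) - 1
MULT = 6364136223846793005    # reference 64-bit LCG multiplier

class PCG32:
    def __init__(self, seed, seq):
        # the stream constant must be odd
        self.inc = ((seq << 1) | 1) & MASK64
        self.state = 0
        self.next()
        self.state = (self.state + seed) & MASK64
        self.next()

    def next(self):
        old = self.state
        # advance the internal 64-bit LCG state
        self.state = (old * MULT + self.inc) & MASK64
        # XSH-RR output permutation: xorshift-high, then random rotate
        xorshifted = ((old >> 18) ^ old) >> 27 & 0xFFFFFFFF
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF

rng = PCG32(42, 54)
print([hex(rng.next()) for _ in range(3)])
```

The appeal for dafhi's use case is exactly the control mentioned above: the generator is tiny, fast, and fully deterministic for a given (seed, stream) pair.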

It appears that most of dafhi's example code is used for creating a visual display of the program's results.