How The Problem Of Sight Could Help Servers

This site may earn affiliate commissions from the links on this page. Terms of use.

IBM researchers have drawn closer ties between computational science and the way the brain processes images, in a bid to better understand both areas.

In a paper presented Tuesday, four IBM researchers attempted to abstract the neurons and axons of the cerebral cortex to discover whether they could serve as a useful model for computation.

The idea was to learn “what the fundamental principles of brain functions are, and operate technology to solve problems in much the same way systems solve them,” said James Kozloski, the second author of the paper and a researcher at IBM. The IBM researchers presented their paper Tuesday at the International Conference on Adaptive and Natural Computing Algorithms in Coimbra, Portugal.

Although so-called “machine sight” systems like those manufactured by Cognex Corp. are already being used to check for defects in manufactured goods, the IBM researchers aren’t necessarily interested in improving that technology.

Instead, the idea is to help computers become aware of changing conditions and react to them, a concept that IBM has labeled “autonomic computing.” However, the problem is that the current model for artificial neural networks and the actual workings of the brain are far apart.

“The challenges confronting system managers are analogous to those confronting vertebrate organisms in everyday existence,” Peck said. “Both operate in a complex, changing, and novelty-rich environment with limited access to information. Both have a simple set of actions available to them and needs that must be achieved. The relationships between these needs, the environment, and actions, however, are vague and must be learned. Finally, each must exploit these learned relationships and ongoing observations to generate an elaborate sequence of actions that will satisfy its needs through changes in its environment.

“One could think of applications in the context of system administration, where the biometaphorical system is learning to recognize potential problems in the system and take corrective action,” Peck added. “Sort of hand-eye coordination with digital event monitoring for eyes and change management actions for hands.”

The cerebral cortex is made up of “minicolumns,” the fundamental unit of computational power inside the brain. Each minicolumn is about 1/20 of a millimeter in diameter, contains about 80 to 100 neurons, and corresponds to a certain physical orientation.

The simple act of recognizing a box, for example, is actually a complex computational task. The researchers concentrated on breaking down the problem of how the brain parses, characterizes, and eventually recognizes an object. An object like a box is broken down into component characteristics, such as its color, size, and the orientation of its edges, to determine what it is the eye is seeing, said Charles Peck, the lead author of the paper.

Identifying the object through its component characteristics is hard enough. A single minicolumn is not all that sophisticated, so a typical artificial neural network will assign each “minicolumn” node the task of looking for the presence of a vertical or horizontal edge, with perhaps another set of “minicolumns” assigned to check for the presence of any edge within a certain region. In the case of a square box, however, horizontal edges appear at both the top and bottom of the image, which essentially confuses the network and impedes its ability to recognize the box.
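The ambiguity can be illustrated with a toy sketch. The snippet below is not the IBM model; it is a minimal, hypothetical example in which simple detector units (standing in for minicolumns) each look for a short horizontal run of edge pixels. On a box outline, units tuned to horizontal edges fire at both the top and the bottom, so orientation alone cannot distinguish the two edges:

```python
import numpy as np

# A toy 8x8 binary image of a box outline (1 = edge pixel).
img = np.zeros((8, 8), dtype=int)
img[1, 1:7] = 1   # top edge
img[6, 1:7] = 1   # bottom edge
img[1:7, 1] = 1   # left edge
img[1:7, 6] = 1   # right edge

def horizontal_units(image):
    """Return (row, col) positions where a 1x3 horizontal edge run is present.

    Each position plays the role of one orientation-tuned detector unit.
    """
    hits = []
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols - 2):
            if image[r, c] and image[r, c + 1] and image[r, c + 2]:
                hits.append((r, c))
    return hits

hits = horizontal_units(img)
rows_responding = sorted({r for r, _ in hits})
# Horizontal-edge units fire on BOTH the top and bottom rows of the box,
# so a horizontal-edge response alone cannot tell the two edges apart.
print(rows_responding)
```

Running this prints `[1, 6]`: identical responses for two different parts of the object, which is exactly the kind of confusion the article describes.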

If that task is completed, however, the minicolumn or node must then try to reassemble the image from its component parts, a challenge the IBM paper did not tackle.

The research model used two 10×10 networks, each feeding into a higher-level 10×10 array. The experiments were executed for 30,288 iterations, and each stimulus was presented for 24 iterations. While the IBM team concluded that the model still needs refining, it found that the network achieved self-organization and could learn to rely on just the inputs needed for classification, intelligently weeding out “noise.”
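The paper's learning rule is not described in the article, but the architecture it reports can be sketched. The following is an assumption-laden illustration, not the IBM implementation: two lower-level 10×10 arrays feed a higher-level 10×10 array, and a simple Hebbian-style update (a stand-in for whatever rule the researchers actually used) strengthens connections from co-active units over the 24 iterations per stimulus:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two lower-level 10x10 unit arrays, flattened to 100-element activity vectors.
lower_a = rng.random(100)
lower_b = rng.random(100)

# The higher-level 10x10 array pools both lower arrays through weight matrices.
W_a = rng.random((100, 100)) * 0.01
W_b = rng.random((100, 100)) * 0.01

def step(lower_a, lower_b, W_a, W_b, lr=0.01):
    """One presentation: compute higher-level activity, then a Hebbian-style
    update that strengthens weights between co-active units (an assumption,
    standing in for the paper's unspecified learning rule)."""
    higher = W_a @ lower_a + W_b @ lower_b
    higher = higher / (np.linalg.norm(higher) + 1e-9)  # keep activity bounded
    W_a += lr * np.outer(higher, lower_a)
    W_b += lr * np.outer(higher, lower_b)
    return higher

for _ in range(24):  # the article: each stimulus was presented for 24 iterations
    higher = step(lower_a, lower_b, W_a, W_b)

print(higher.shape)  # one activity value per higher-level unit
```

Repeating such updates over many stimuli is one plausible way a network could come to depend on only the inputs that matter for classification, since weights from consistently co-active inputs grow while others stay near zero.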

“We continue to design models trying to maintain the biological plausibility of the model, computationally,” Peck said. “We can stay close to biology and draw from it.”

