Im using eight electret microphones that are spatially separated, and the sound source will change in position and time in no predictable way. Im looking for a way to amplify only that microphone that is closest to the source. This means I want to amplify only that mic with the largest output signal.

The fact that the sound source changes in space, time, and intensity complicates the issue, although I think/hope I've worked out the basic logic for an acceptable solution, as follows. I'm not, however, very good at designing electronic circuits, so I'd greatly appreciate some input here on how to proceed further. Here's how I see the logic.

We need the following sub-circuits: 1) decision making (which mic is dominant), 2) clocking, 3) switching/latching, and 4) amplification.

A clocking circuit tells the decision circuit when to make a decision. The decision circuit might consist of a bank of comparators; since microphones put out an AC signal, the signal upstream of the comparators should first be rectified, then either averaged or integrated. After identifying the mic with the largest signal, the decision circuit sends a signal to the switching circuit, switching on amplification of the dominant mic and switching all other mic channels off (whatever their current state). Each switch contains a latch, or toggle, that maintains the switch position until another decision changes the switch state.
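As a sanity check of that decision logic (rectify, average, compare, pick the largest), here is a minimal numerical sketch in Python/NumPy. The 440 Hz tone, the noise level, and which mic is dominant are illustrative assumptions, not part of the design:

```python
import numpy as np

def decide_dominant(mic_blocks):
    """Given one decision window of samples per mic, return the index
    of the mic with the largest average rectified level.
    mic_blocks: array of shape (n_mics, n_samples)."""
    # Full-wave rectify (absolute value), then average over the window.
    levels = np.mean(np.abs(mic_blocks), axis=1)
    return int(np.argmax(levels))

# Example: mic 2 carries a strong tone, the rest see only weak noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.012, 512, endpoint=False)    # one 12 ms window
blocks = 0.05 * rng.standard_normal((8, t.size))  # background noise on all 8 mics
blocks[2] += np.sin(2 * np.pi * 440 * t)          # dominant mic
print(decide_dominant(blocks))                    # → 2
```

The averaged-rectified level plays the role of the rectifier/averager outputs, and `argmax` stands in for the comparator bank.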

Some details are the following. The best decision duration depends on the frequency of the acoustic signal, so the minimum frequency, which is 85 Hz, fixes the shortest (and most desirable) practical duration: one period, 1/85 s ≈ 12 ms. In addition, the decision interval, determined by the clocking circuit, should equal the duration of the decision making (12 ms). Thus, as soon as one decision is completed, another is initiated.
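Spelling out that timing arithmetic (using only the numbers from this post):

```python
# Decision-window timing from the lowest musical frequency of interest.
f_min = 85.0                      # Hz, lowest tone expected
window_s = 1.0 / f_min            # one full period of the lowest tone
print(round(window_s * 1000, 1))  # → 11.8 ms, i.e. ~12 ms per decision window
print(1.0 / window_s)             # → 85.0 back-to-back decisions per second
```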

With the above logic, an artifact will sometimes arise, which I'm not too concerned about, but I'll mention it in order to solidify these concepts. If the spatial position of the sound source changes greatly (from the dominant mic to another mic far away), there will be a relative drop in amplification for a period as long as the duration of the decision process (12 ms max). This is because the mic being amplified will, at the instant the sound source moves, no longer really be the dominant mic, and it will not be close enough to the sound source to provide the prevailing level of amplification. But as I say, this state will last at most 12 ms, and I can live with that. I wouldn't be surprised if there are other, more serious issues that I've failed to foresee.

So to summarize, we need:

1) eight rectifier/averaging circuits (or rectifier/integrating circuits), one for each mic
2) a comparator circuit (or perhaps a bank of comparators) that takes all eight outputs from (1)
3) eight latch/switch components, each wired to the output of (2)
4) a clocking circuit that sets the sampling start times for (2)
5) a final amplifier circuit that amplifies the chosen microphone's signal
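The five blocks above can be modeled end to end in software before committing to hardware. Here is a minimal Python/NumPy sketch; the 44.1 kHz sample rate is my assumption, and the latch is modeled as a software variable that holds between window boundaries:

```python
import numpy as np

FS = 44_100               # assumed audio sample rate, Hz
WINDOW = int(FS * 0.012)  # ~12 ms decision window, in samples

def select_and_gate(mics):
    """mics: array of shape (n_mics, n_samples).

    Window-by-window model of the proposed logic: the latch holds the
    mic chosen by the *previous* decision while the current window is
    passed through; then a new decision updates the latch."""
    n_mics, n = mics.shape
    out = np.zeros(n)
    latched = 0  # latch state: index of the currently amplified mic
    for start in range(0, n, WINDOW):
        block = mics[:, start:start + WINDOW]
        out[start:start + block.shape[1]] = block[latched]  # gated output
        # Decision: rectify, average over the window, pick the largest.
        latched = int(np.argmax(np.mean(np.abs(block), axis=1)))
    return out
```

Feeding this a signal whose source jumps from one mic to another reproduces the up-to-12 ms dropout described above, since the latch only updates at window boundaries.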

I would appreciate any comments or suggestions on how to proceed with the electronics. Notice that I frame the solution to this challenging problem using analog concepts, and that's simply because I've done very little work with digital circuits. Thanks in advance.

Boardroom telephone conference systems use a mic mixer that switches on the mic that has the first or the loudest signal. Frequently there is false switching due to noises in the room or echoes.
The switching from one mic to another is not immediate, so part of the first spoken word is frequently missing.


Thanks. Perhaps I should've mentioned that my application is musical tones, which I think is much simpler than trying to select a dominant human speaker in a group. The latter presents many additional problems, and some Voice Activity Detection (VAD) algorithms are very complicated. A simple first-or-loudest algorithm won't work well for me, for the reasons I explained in my original post, and I don't think I can easily adapt even the simplest of these voice systems to work for me. I should also mention that, in my case, there's no issue if two microphones have comparable outputs, an event that's very unlikely to begin with.
Tom

Where are the microphones positioned relative to your sound sources? If you could get them all into a stereo field, then you could conceivably record all of them with two microphones and process the amplitude and phase information to separate them in post-recording. This would also avoid the problem of missing data due to the wrong microphone being sampled.
Or am I barking up the wrong tree and you're trying to do this in real time for a performance piece?

Have somebody speak in a fairly large reverberant room. There will be echoes, but the speech will be fairly clear.
Then cover one ear and listen with only the other ear. The speech will be mumble-jumble due to the echoes. Your "detector circuit" will also hear mumble-jumble unless it is stereo and has a brain.
They make that mistake on TV shows and movies ALL THE TIME!