I have never understood the condescending comparison between the Echo products and the HomePod. If you want a digital assistant to tell you jokes and turn your lights on then yes, go buy an Echo. But if you are interested in superb, high-quality audio first and digital assistance as a side benefit, then the HomePod is your gadget. The two products shouldn’t even be talked about in the same sentence. They are apples and oranges. As the author makes quite clear, the HomePod is a high-end home audio product, not primarily a digital assistant to compete with the Echo line. It’s no different than comparing a cheap pre-paid Android phone with the iPhone X.

1) Is HomePod Siri controlled? Yes.

2) Does Siri tell jokes? Yes.

3) Are there products that are Alexa controlled and speakers that work with the Echo and other Alexa-capable devices that will sound better than the HomePod? Based on the size and stated specs, that seems like a certainty.

4) You know Amazon lets anyone license Alexa for pretty much any system they wish, right?

FWIW Apple's beam-forming for the Home Pod is called TruePlay by Sonos and marketed as Smart Sound by Google.

Absolutely false.

All Sonos and Google perform is equalization. Apple is not only analyzing the frequency response of the room (and modifying the equalization to compensate), they are also analyzing sound in the time domain.

This is far more complex than simple EQ (which has been around forever) and will give the HomePod a huge advantage over Sonos, Google or anyone else.
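To make the distinction concrete, here's a toy sketch of the difference (my own illustration, nothing from Apple, Sonos or Google): EQ changes how loud parts of the signal are, while time-domain correction changes when the signal arrives.

```python
# Toy contrast between the two approaches on a sampled signal:
def apply_eq_gain(samples, gain):
    """Simple EQ: scales amplitude, leaves timing untouched."""
    return [s * gain for s in samples]

def apply_delay(samples, delay_samples):
    """Time-domain correction: shifts the signal, leaves amplitude untouched."""
    return [0.0] * delay_samples + list(samples)

sig = [0.0, 1.0, 0.0, -1.0]
print(apply_eq_gain(sig, 0.5))   # amplitude halved, same timing
print(apply_delay(sig, 2))       # same amplitude, arrives 2 samples later
```

Real room correction is obviously far more sophisticated on both sides (per-band filters, per-driver delays), but the point stands: the first operation cannot fix arrival-time problems, the second can.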

Eric, Patently Apple chimed in on this several weeks ago, commenting that Google Smart Sound was "a clear Apple HomePod rip-off with Smart Sound that will readjust sound using beamforming technology". Another Apple blog, iMore, made the same observation. The same applies to Sonos and TruePlay, which, based on descriptions of the tech, seems little different from the HomePod except in marketing language.

Smart Sound can’t do beamforming. Neither can Sonos. I haven’t read the articles you quoted, but if they claim Google and Sonos are doing beamforming then they clearly don’t know a damn thing about audio. And if you’re relying on their opinions, then neither do you.

Unlike you, I don't claim to know that Apple, Google's Max or Sonos is already the best-sounding of the lot. I also don't claim as fact whether in practice they all arrive at much the same result even if not using identical hardware or marketing terms. I'm just repeating comments made by others who have presumably had at least some exposure to them and/or the technology involved, just as you are. You claim to already have all the available data necessary to crown the winner, though I doubt it. It may be months before Apple even has a finalized product you can try for yourself, and what Apple demoed 6 months ago may not be identical to what Apple ships months from now.

And yes, you are absolutely correct that Google Smart Sound (not certain about Sonos) relies on EQ, according to the reading I've done tonight (thanks, seriously). So whether "beam-forming" as done in a shipping, finished Home Pod makes much if any difference in a relatively small mid-range standalone speaker streaming 256kbps Apple Music (is any other music service supported?) over your own home network in a normal room will be interesting at least. We will all know. Eventually.

Where did I ever say that HomePod has been "crowned the winner"? All I did was take exception to this incorrect statement made by you:

"FWIW Apple's beam-forming for the Home Pod is called TruePlay by Sonos and marketed as Smart Sound by Google."

The patent you listed refers to beamforming with microphones (as I just mentioned in the post above). This has been around for some time in many products (like the Echo). What I've been referring to is beamforming using the speaker drivers to direct sound.

I’m starting to think that there will never be a HomePod. Who would buy it?

In order for it to compete someone has to be willing to give up their existing system. Perhaps someone moving into their first apartment without a sound system?

For us the sound has to be substantially better than our Sonos system. Also, we gave up HomeKit as the latency with Alexa is about 2-3 seconds vs. a minute with Siri. But also there’s a 50/50 chance that the HomeKit hookup works, compared to 100%.

This is a very basic example of what the multiple tweeters in the HomePod can do to create a wider soundstage and also eliminate problems with phase (and will look familiar to anyone who watched the HomePod video).

When sound is reflected off the rear walls, it has a longer path to take to get to the listener than sound coming directly from the speaker. In my example above, the total time for sound to reach the listener is 2.5ms from the front driver and a total of 8.0ms from the rear drivers. When the sound reaches your ears it could be perfectly in phase, completely out of phase or most likely, somewhere in between.
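Whether the two arrivals land in or out of phase depends entirely on frequency. Using the 2.5ms and 8.0ms figures from the example above (illustrative numbers, not measurements; `phase_offset_degrees` is my own helper), a quick sketch:

```python
def phase_offset_degrees(delay_s: float, freq_hz: float) -> float:
    """Phase offset (0-360 degrees) that a time delay produces at a given frequency."""
    return (delay_s * freq_hz * 360.0) % 360.0

delta = 0.0080 - 0.0025  # 5.5ms extra path for the reflected sound

# The same 5.5ms delay is a complete misalignment at one frequency
# and no misalignment at all at another:
for f in (1000, 1500, 2000):
    print(f"{f} Hz -> {phase_offset_degrees(delta, f):.0f} degrees")
```

That's why "somewhere in between" is the most likely outcome for real music: a broadband signal contains all of these frequencies at once, so some are reinforced and some are cancelled.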

A 360 degree speaker like the Google Home or Amazon Echo can't do anything to compensate for any potential phase issues that can arise when sound reflected off walls and other objects interacts with sound that travels directly to the listener. And given their target audience and use, it doesn't really matter. These devices are not used for any serious music listening and are typically just for background music.

Apple has told us the HomePod can use beamforming to direct sound. Beamforming has two requirements to work: you need multiple drivers and you need to be able to adjust the phase individually for each driver. Phase can be adjusted mechanically (physically changing the position of a driver or speaker) or electrically (through digital time delay). Obviously, the HomePod uses digital time delay to adjust phase.

In the example above, the sound to the rear tweeters would be sent out normally. However, the sound to the front tweeter would be delayed by 5.5ms. This delay allows the rear sound to "catch up" to the direct sound such that by the time it's reflected off the rear walls and starts moving forward it will end up being in phase with the sound from the front tweeter. All the possible issues that can arise with sounds being out of phase are thus eliminated. Since the HomePod also has 6 microphones, calculating this delay time would be fairly straightforward. A few clicks or other test tones played through the individual tweeters can be measured by the microphones so the HomePod can determine exactly how far away it is from any walls and set the appropriate delay time accordingly.
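A sketch of that calibration step, assuming the usual ~343 m/s speed of sound and my own helper names (the actual routine is unknown outside Apple, and this ignores listener-side geometry, which a real system would also account for):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def wall_distance_m(round_trip_s: float) -> float:
    """Distance to a reflecting wall, from the round-trip time of a
    test click picked up by the microphones."""
    return round_trip_s * SPEED_OF_SOUND / 2.0

def alignment_delay_s(extra_path_m: float) -> float:
    """Delay for the front driver so reflected sound arrives in phase with it."""
    return extra_path_m / SPEED_OF_SOUND

# A click that returns in 5.5ms means the wall is ~0.94 m away,
# so the reflected path is ~1.89 m longer than the direct one:
round_trip = 0.0055
extra_path = 2.0 * wall_distance_m(round_trip)
print(f"wall at {wall_distance_m(round_trip):.2f} m, "
      f"delay front tweeter by {alignment_delay_s(extra_path) * 1000:.1f} ms")
```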

It's important to note that what I described above isn't actually beamforming. It's a well established method of using time delay to control phase and improve sound quality. Beamforming is more complex but still relies on the same basic principles (precisely controlling phase to multiple drivers to direct where sound goes).
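For anyone curious what "precisely controlling phase to multiple drivers to direct where sound goes" looks like in practice, here is a minimal delay-and-sum steering sketch for a circular array. The seven-tweeter ring and 7 cm radius are my own assumed geometry for illustration, not Apple's published specs:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def steering_delays(driver_angles_deg, radius_m, target_deg):
    """Delay-and-sum transmit beamforming on a circular array: delay each
    driver so the wavefronts add constructively toward target_deg.
    Drivers closest to the target direction are delayed the most, letting
    the drivers at the back 'catch up'."""
    t = math.radians(target_deg)
    # Project each driver's position onto the target direction...
    projections = [radius_m * math.cos(math.radians(a) - t)
                   for a in driver_angles_deg]
    # ...and delay each driver relative to the one furthest from that direction.
    base = min(projections)
    return [(p - base) / SPEED_OF_SOUND for p in projections]

# Hypothetical 7-tweeter ring, ~7 cm radius, beam steered toward 0 degrees:
angles = [i * 360.0 / 7 for i in range(7)]
for angle, delay in zip(angles, steering_delays(angles, 0.07, 0.0)):
    print(f"driver at {angle:5.1f} deg -> delay {delay * 1e6:5.1f} us")
```

Change `target_deg` and the same hardware throws the beam somewhere else, which is the whole point: the "beam" is purely a consequence of the per-driver delays.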

When Apple says beamforming it's not a marketing term. The HomePod has the proper hardware and layout to enable beamforming. All the rest (Sonos, Google, Amazon) don't.


But do at least a couple of those (Sonos/Google) have the hardware needed to accomplish what you diagrammed, other than reflecting off the back wall, which doesn't seem like a necessary feature? (Why not just compute the time delay firing forward?) I think so. Both the Sonos One and Home Max appear perfectly capable of computing delay times and adjusting phase matched to your room structure (wall angles, size, furniture placement, other reflecting/absorbing surfaces, etc.). We already know for a fact that Google uses beamforming in concert with the microphones for better voice recognition. It's a short drive to using the same technology for much of the same sound adjustments Apple might be accomplishing through other methods, isn't it? As for beamforming, and despite your stated years of professional audio experience, I think you yourself have realized your knowledge about what Apple may be doing with it is somewhat lacking, since you are having some difficulty expressing how it will benefit the Home Pod or why it disadvantages others who lack it.

Now having said all that I personally expect the Google Home Max to be noticeably bass-heavy just as the original 2016 Google Home was. I'm not a fan of boom-sound myself and I've no doubt many others would agree. Fortunately for those owners there is an included equalizer now. It's certainly possible the Home Max may suffer some muddiness.

Apple will of course have a well-thought-out Home Pod with very good sound and well matched to other Apple products. It may well bring in more revenues than other mid-range smart-speakers by a significant margin, which at the end of the day is all this is about: More profits.

Sonos One with Alexa (other voice assistant support coming soon) is what I would expect to have the better features for most folks and at least comparable overall sound to the Home Pod, if not a bit fuller, particularly for the price, which significantly undercuts both Apple and Google. That's before the expected Play:5 replacement next year that will likely support Amazon Alexa, Google Assistant and Apple's Siri at launch.

Another plus for some Homepod competitors is the much better cross-platform and 3rd-party support: services like Spotify, Pandora, iHeartRadio, Amazon Music, TuneIn and others that won't be supported by Apple. IMHO, if you're not already deep into the Apple ecosystem there are better-featured products for you than the Home Pod, but for dedicated Apple fans the Home Pod might be (and probably will be) the best choice among the three, as long as your music comes only from Apple Music/iTunes or your personal library.

And again, this is just personal opinion, but at some point Apple's Homepod will have to support some 3rd-party streaming. As is, it's too limited IMO, assuming of course that the product they eventually ship is the same as the one they demo'd and spec'd back in the summer.

I wonder if Apple went after the wrong market here. I just helped my sister set up an Echo yesterday and she absolutely loves it. And she’s someone all-in on Apple devices. But she would never pay $350 for a speaker. The Echo she got was $69 with tax. The audio isn’t amazing, but I’ve heard worse and she thought it sounded great. On TV I’m constantly seeing commercials for the Google Home Mini starting at $29. Where is the evidence people are willing to pay a significant premium for smart speakers, and where is the evidence the Home Pod will be good enough for serious audiophiles? It seems like it’s in this niche space that will appeal to Apple die-hards but not a mass market.

Regardless of this hyperbiased article, Siri still needs a lot of work. There's simply zero excuse for it not being up to the level of Alexa. It's half-assed in comparison, which I would hope is uncharacteristic of Apple.

Says you. I don't think Siri is worse than Alexa, or even that Alexa is better than Siri, because the two are not exactly the same. I have never had any problem with Siri; she understands me 100% of the time. What I think Siri lacks is the ability to store information about you. Alexa is designed to gather as much information about you as possible so that it can cater to your needs even if you don't need it. Siri will not do that, for privacy reasons obviously. But for day-to-day use, Siri is more than capable of handling any of your requests. Play music, set up a calendar, set up a meeting, set a timer, check email, check game scores, etc., she does brilliantly. Asking if you need to buy new underwear? Not so much; that is what Alexa does.

I live in Antwerp, Belgium. I did a small test asking Siri and Google Now for the route to 5 main streets in Antwerp: Americalei, Grote Steenweg, Meir, Noorderlaan, and Desguinlei. I did the test in Dutch, the local language in Antwerp. Siri had "Noorderlaan" correct and recognized "Grote Steenweg" but pointed to the "Grote Steenweg" in Mortsel, a nearby city. It failed for the rest, which is very disappointing. Google Now had 4 correct, only missing "Desguinlei", which is a tricky one because of the difficult pronunciation.

Although this is only a small test that does not cover all the use cases, it shows that Siri is nothing to be proud of.


Your explanation is a reasonable approximation, and in fact early beamformers used simple electromagnetic delay lines to achieve spatial definition. What's not apparent from your diagram is that the HomePod (as described by Apple) is not simply a way to "optimize" the listening experience for a listener sitting in one static position in the room. It is also not tracking listeners as they move around the room. What I suspect it is doing is forming multiple beams radially around the speaker (360 degrees), kind of like the flower petals on a daisy viewed from above. A listener situated anywhere around the HomePod should hear essentially the same composite sound.

This would work ideally in an open auditorium where there are no obstructions anywhere in the 360 degrees around the HomePod. When there are obstructions, you will get multi-path interference at multiple listening positions, as ericthehalfbee shows. The HomePod will have to sample the generated sound from every beam to determine what type of compensation is needed to reduce the interference for all beams. This may include attenuating some of the speakers in the array that are projecting into the obstruction, as well as adapting the beamforming, effectively killing certain beams. Whatever it does, it must be a compromise strategy that works well enough for all transmit beams, since the goal of the HomePod is always to provide listeners situated anywhere around the room with a subjectively good listening experience, not just one listener in one location.

I'm fairly certain there will be some measurable differences in the sound at different angles due to the compromises and unique geometries involved with different room layouts, but it should still be qualitatively and quantitatively better than the sound produced without beamforming. Using 2 HomePods together will also change the sound dynamic, because each speaker may have different obstructions and room geometries to contend with.

Beamforming is not a marketing term by itself. But technical terms are quite often used to create an air of sophistication and/or technical prowess that is absolutely intended to convey product superiority. In the case of the HomePod, Apple is using beamforming terminology to back up their assertion that HomePod will provide a better listening experience for all listeners in a room, regardless of where they are situated. It's a bit of technical "why" to bolster a claim and is clearly placed at the intersection of technology and marketing. I don't see this as a negative unless the actual product does not live up to expectations and beamforming subsequently takes on a negative connotation. But yeah, at this point Apple is using beamforming as a marketing term for the majority of consumers because they could have simply said "HomePod will sound great no matter where you are sitting or standing in the room."


They better either have a vastly improved Siri built in or drop the price to move volume on this ($299, or $999 for 4). The fact that Amazon developed Alexa in just a few years shows that you can build an assistant with very good voice recognition quickly. I just don't think it's been a priority at Apple. It has been more like the proverbial stepchild.

FWIW the current Google Home (from last year) has 360 degree sound, and beamforming is used at least for voice recognition purposes. The current Echo also features 360 degree sound.

The Amazon Echo (2nd gen) has a single woofer and a single tweeter (no stereo). The woofer fires down and the tweeter fires up. They claim it's "360 degree sound" simply because the speakers aren't outward-facing and the circular grille allows sound to escape on all sides. You could place a boombox on the floor facing up and basically claim the same thing. In other words, it's a half-baked solution. The current Google Home has speakers that face outward... basically one on each side plus a woofer. Does that really give you 360 degree sound? Only if you want to include reflected sound waves. The side-facing speakers aren't really going to cover 360 degrees themselves. Also, you can forget about stereo, since the tweeters face in opposite directions.

The assertion that comparing HomePod and Echo is like comparing "Apples to Oranges" is simply ludicrous. They both offer similar features and provide similar functionality.

This is just fanboi backpedaling because the Echo is selling like hotcakes this holiday and Apple FAILED to get their overpriced, seriously limited product to market in time for Christmas. Denial in its finest form.


Thanks. Your explanation is far more helpful. I like the visual of petals on a daisy, very descriptive.

BTW something I meant to comment on earlier is that IMO Amazon and Google, and to a lesser extent Sonos, are competing more with each other than with Apple's Homepod. Well perhaps Sonos has an eye on the Homepod (adding Alexa was relatively easy), not so much Google and Amazon. Homepod is for those already embedded in Apple's ecosystem, and that's a pretty big segment. They'll of course be successful with it just as they are with nearly every Apple product.

The Home Max has been in the works for over a year, and Amazon is of course always actively developing newer versions of its Echo products. Neither was developed as a knee-jerk response to the Home Pod, but between Amazon and Google the former is well in the lead at least in mindshare.

I wonder if Apple went after the wrong market here. I just helped my sister set up an Echo yesterday and she absolutely loves it. And she’s someone all-in on Apple devices. But she would never pay $350 for a speaker. The Echo she got was $69 with tax. The audio isn’t amazing but I’ve heard worse and she thought it sounded great. On TV I’m constantly seeing commercials for Google Home mini starting at $29. Where is the evidence peope are willing to pay a significant premium for smart speakers, and where is the evidence Home Pod will be good enough for serious audiophiles? It seems like it’s in this niche space that will appeal to Apple die hards but not a mass market.

Look at the price range for headphones. Then compare it to the price range for compact speaker systems. They're not that different, so there's little doubt that a market exists for Apple's product/price. For example, the Sonos Play:5 compact speaker retails for $499... and that isn't really an audiophile system. That's more mid-range. $350 for all of the technology that Apple is packing into the HomePod is not a premium at all in the current market.


Everything you said regarding phase and beamforming is, again, completely false.

My only problem is trying to explain complex ideas so the layperson can understand. Your comments about phase make this abundantly clear.

This is a very basic example what the multiple tweeters in the HomePod can do to create a wider soundstage and also to eliminate problems with phase (and will look familiar to anyone who watched the HomePod video).

When sound is reflected off the rear walls, it has a longer path to take to get to the listener than sound coming directly from the speaker. In my example above, the total time for sound to reach the listener is 2.5ms from the front driver and a total of 8.0ms from the rear drivers. When the sound reaches your ears it could be perfectly in phase, completely out of phase or most likely, somewhere in between.Phase can be adjusted mechanically (physically changing the position of a driver or speaker) or electrically (through digital time delay). Obviously, the HomePod uses digital time delay to adjust phase.

In the example above, the sound to the rear tweeters would be sent out normally. However, the sound to the front tweeter would be delayed by 5.5ms. This delay allows the rear sound to "catch up" to the direct sound such that by the time it's reflected off the rear walls and starts moving forward it will end up being in phase with the sound from the front tweeter. All the possible issues that can arise with sounds being out of phase are thus eliminated. Since the HomePod also has 6 microphones, calculating this delay time would be fairly straightforward. A few clicks or other test tones played through the individual tweeters can be measured by the microphones so the HomePod can determine exactly how far away it is from any walls and set the appropriate delay time accordingly.

It's important to note that what I described above isn't actually beamforming. It's a well-established method of using time delay to control phase and improve sound quality. Beamforming is more complex but relies on the same basic principle: precisely controlling the phase of multiple drivers to direct where the sound goes.

When Apple says beamforming, it's not a marketing term. The HomePod has the proper hardware and layout to enable beamforming. The rest (Sonos, Google, Amazon) don't.
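For the curious, classic delay-and-sum beamforming over a ring of drivers (like the HomePod's seven tweeters) boils down to firing each driver slightly earlier or later so the wavefronts line up in the chosen direction. Here's a hedged sketch of just the delay computation; the 7 cm radius is a made-up placeholder, and real beamformers do much more (per-frequency weighting, driver directivity, etc.).

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def steering_delays_s(n_drivers: int, radius_m: float, target_deg: float) -> list:
    """Per-driver firing delays that steer a delay-and-sum beam from a circular
    array toward target_deg. Drivers nearer the target direction fire later, so
    all wavefronts arrive aligned along that direction."""
    theta = math.radians(target_deg)
    delays = []
    for k in range(n_drivers):
        phi = 2.0 * math.pi * k / n_drivers      # driver's position angle on the ring
        proj = radius_m * math.cos(phi - theta)  # projection onto the beam axis
        delays.append(proj / SPEED_OF_SOUND)
    t0 = min(delays)
    return [d - t0 for d in delays]              # earliest driver fires at t = 0

# Hypothetical numbers: 7 tweeters on a 7 cm ring, beam steered to 0 degrees
delays = steering_delays_s(7, 0.07, 0.0)
# the driver facing the target (index 0) gets the largest delay
```

The total spread of delays can never exceed the ring diameter divided by the speed of sound, which is why a physically larger array can steer more precisely at low frequencies.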

But don't at least a couple of those (Sonos/Google) have the hardware needed to accomplish what you diagrammed, other than reflecting off the back wall, which doesn't seem like a necessary feature? (Why not just compute the time delay firing forward?) I think so. Both the Sonos One and the Home Max appear perfectly capable of computing delay times and adjusting phase to match your room structure (wall angles, size, furniture placement, other reflecting/absorbing surfaces, etc.). We already know for a fact that Google uses beamforming in concert with the microphones for better voice recognition. It's a short drive from there to using the same technology for much the same sound adjustments Apple might be accomplishing through other methods, isn't it? As for beamforming, and despite your stated years of professional audio experience, I think you yourself have realized your knowledge of what Apple may be doing with it is somewhat lacking, since you are having some difficulty expressing how it will benefit the HomePod or why it disadvantages others who lack it.

Now, having said all that, I personally expect the Google Home Max to be noticeably bass-heavy, just as the original 2016 Google Home was. I'm not a fan of boom-sound myself, and I've no doubt many others would agree. Fortunately for those owners, there is now an included equalizer. It's certainly possible the Home Max may suffer from some muddiness.
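For reference, taming excess bass with that kind of equalizer is typically done with a low-shelf biquad filter. Here's a sketch using the well-known Audio EQ Cookbook (Robert Bristow-Johnson) low-shelf formulas; the sample rate, corner frequency, and cut amount are arbitrary illustration values, not anything Google ships.

```python
import math

def low_shelf_coeffs(fs_hz: float, f0_hz: float, gain_db: float):
    """Biquad low-shelf coefficients (Audio EQ Cookbook, shelf slope S = 1).
    Negative gain_db cuts the bass below roughly f0_hz."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0_hz / fs_hz
    cosw, sinw = math.cos(w0), math.sin(w0)
    alpha = sinw / 2.0 * math.sqrt(2.0)  # S = 1
    k = 2.0 * math.sqrt(A) * alpha
    b0 = A * ((A + 1) - (A - 1) * cosw + k)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
    b2 = A * ((A + 1) - (A - 1) * cosw - k)
    a0 = (A + 1) + (A - 1) * cosw + k
    a1 = -2 * ((A - 1) + (A + 1) * cosw)
    a2 = (A + 1) + (A - 1) * cosw - k
    # normalize so the leading denominator coefficient is 1
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

# Cut 6 dB below ~200 Hz at a 48 kHz sample rate
b, a = low_shelf_coeffs(48000.0, 200.0, -6.0)
```

At DC the filter's gain equals the requested shelf gain, and at high frequencies it passes the signal unchanged, which is exactly the "turn the bass down" behavior.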

Apple will of course ship a well-thought-out HomePod with very good sound, well matched to other Apple products. It may well bring in more revenue than other mid-range smart speakers by a significant margin, which at the end of the day is what this is all about: more profits.

The Sonos One with Alexa (other voice-assistant support coming soon) is what I would expect to have the better features for most folks, and at least comparable overall sound to the HomePod, if not a bit fuller, particularly at a price that significantly undercuts both Apple and Google. That's before the expected Play:5 replacement next year, which will likely support Amazon Alexa, Google Assistant, and Apple's Siri at launch.

Another plus for some HomePod competitors is much better cross-platform and 3rd party support: services like Spotify, Pandora, iHeartRadio, Amazon Music, TuneIn, and others that won't be supported by Apple. IMHO, if you're not already deep into the Apple ecosystem, there are better-featured products for you than the HomePod, but for dedicated Apple fans the HomePod might be (and probably will be) the best choice among the three, as long as your music comes only from Apple Music/iTunes or your personal library.

Ummm... well, since I didn't state any facts about either beamforming or phase in that post, I suppose you must be referring to the questions I asked you as being "completely false"? :eyeroll:

What I specifically wanted you to comment on, and what you successfully avoided addressing, was whether Google's Home Max and the Sonos One have the necessary components to accomplish what you diagrammed, a "well established method of using time delay to control phase and improve sound quality". The obvious exception to your diagram is bouncing sound off a rear wall, and if that's a mandatory element, please explain why. Perhaps you haven't yet explained it properly, but you've made no attempt to explain why not relying on beamforming would automatically disadvantage Sonos or anyone else, which was one of the original points you made and what led to much of this back and forth.

To quote you: "This is far more complex than simple EQ (which has been around forever) and will give the HomePod a huge advantage over Sonos, Google or anyone else." You also mentioned Apple would be "analyzing sound in the time domain" without explaining why you believe Sonos's, Google's, or anyone else's hardware not using "beamforming" would be incapable of doing so. As you've explained things so far, beamforming is not a mandatory feature for that. Explaining in layman's terms is of course what you should strive for, because that's what most of us are when discussing audio tech. We're laymen, while you claim not to be.

My sole comment about beamforming in that post was not a statement of fact but a question posed to you, which apparently you cannot answer? Fair enough. TBH, DewMe is much more helpful at explaining what he believes Apple is doing.

The assertion that comparing the HomePod and the Echo is like comparing "apples to oranges" is simply ludicrous. They both offer similar features and provide similar functionality.

This is just fanboi backpedaling because the Echo is selling like hotcakes this holiday and Apple FAILED to get their overpriced, seriously limited product to market in time for Christmas. Denial in its finest form.

You clearly missed the thesis of this article.

If Car and Driver said "comparing a $100,000 Porsche with a $15,000 Kia is comparing apples to oranges," would you post a comment saying "This is simply ludicrous. They both offer similar features and provide similar functionality. This is just fanboi backpedaling because the Kia is selling like hotcakes this holiday and Porsche FAILED to get their overpriced, seriously limited product to market in time for Christmas. Denial in its finest form."?

You sound like those people who blasted the iPod when it was first announced, except that at least in that case the iPod was specifically competing against cheap, small-capacity MP3 players.

Yep, 11 months ago DED was saying great improvements were coming to Siri because of Alexa.

Siri should be so much better just because of the microphones in the HomePod.

It would be frightening if Dilger were basing his current spin on inside knowledge that Siri just isn't measuring up.

I'd assume that just having an array of far-field microphones will make Siri appear more intelligent, because she'll objectively be a better listener. That may be the thread where I was told that Amazon is pathetic for needing more than one microphone.
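That listening advantage is essentially time-difference-of-arrival processing: with two or more mics, cross-correlating the signals tells you which direction a voice is coming from, so the array can be "aimed" at the talker. A toy sketch with a brute-force integer-lag cross-correlation (function and variable names are my own; real implementations use FFT-based correlation and sub-sample interpolation):

```python
def best_lag(a: list, b: list, max_lag: int) -> int:
    """Integer sample lag of signal b relative to a that maximizes correlation."""
    best_l, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        lo = max(0, -lag)
        hi = min(len(a), len(b) - lag)
        score = sum(a[i] * b[i + lag] for i in range(lo, hi))
        if score > best_score:
            best_l, best_score = lag, score
    return best_l

# Toy example: the same click reaches mic B three samples after mic A
mic_a = [0.0] * 32
mic_b = [0.0] * 32
mic_a[10] = 1.0
mic_b[13] = 1.0
print(best_lag(mic_a, mic_b, 8))  # 3 -> mic B heard it 3 samples later
```

Given the mic spacing and the speed of sound, that lag converts directly into an angle of arrival, which is the basis of beamforming for voice pickup.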

He doesn't give a hoot if he's in error (it's the new thing, don't you know: not caring about being wrong, without embarrassment). Just throw things out, lie, distort, withhold, cherry-pick, and more, and feel smug doing it. It's "working" for that orange juice man wringing the life out of the US right now, and he's got legions of imitators.

Right... similar features, only if 128 kb/s MP3 sound coming out of something that sounds like a 1999 portable MP3 player is the same as 256 kb/s AAC going out to an array of speakers. They're both the same, they make sound... that's how your assessment goes, seemingly.

The fact that you went for the whole false equivalence means you don't care about facts, and the use of "fanboi" cements that status.