Apple has brought design and development of the A11 Bionic processor even further in-house by eliminating third-party components, allowing the company to speed up and integrate the new chip's various processes more tightly than ever before.

In an interview published on Thursday, Mashable explored the design philosophy behind the A11 Bionic processor found in the iPhone 8 and iPhone X. The interview, held with Apple Senior Vice President of Worldwide Marketing Phil Schiller and Senior Vice President of Hardware Technologies Johny Srouji, discussed the new chip and some of its embedded technologies.

Apple's executives shied away from most technical details of the processor. Instead, the interview focused on what Apple can bring to the table as a coherent whole, rather than as a processor assembled from discrete parts sourced from different suppliers.

The pair said that the custom GPU in the A11 Bionic exemplifies how Apple differentiates its own silicon from off-the-shelf alternatives. Full integration provides an "optimized value" for Apple and, by extension, for consumers and developers.

Using its own GPU, rather than one developed by Imagination Technologies as in the past, allows Apple to control and tightly refine the entire stack, from software to hardware and how the two interface with each other.

"It's not just Lego pieces stacked together," said Schiller, "the team designed [The A11 Bionic components] to work together."

The effort appears to have paid off, from a raw performance perspective. Early benchmarks of devices purporting to use the new chip show the A11 Bionic as notably faster in every regard than the processor found in the Samsung Galaxy S8 and Note 8.

New developments

Apple mentioned the Neural Engine in the A11 Bionic several times during the release event. The new aspect of the processor enhances the device's ability to handle machine learning tasks on-device, rather than shunting them to a processing farm across the internet.

Additionally, Apple's implementation is more power-efficient than using a graphics engine to do the same calculation.

Speaking to Mashable, Schiller confirmed that the secure element of the processor has been redesigned, but declined to comment further other than to say that Apple takes security "very seriously."

Advancements carried forward

The A10 Fusion in the iPhone 7 was Apple's first processor to use two high-performance cores and two high-efficiency cores -- but the way the cores were addressed was possibly less efficient than it could have been.

The new A11 Bionic processor has six cores, adding two high-efficiency cores to the four found in the A10 Fusion. The new chip is also capable of asymmetric multiprocessing, meaning all six cores can run simultaneously on a single task.
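The idea of all cores working one task simultaneously can be illustrated with a small sketch. Python is used purely for illustration here; Apple's scheduler and core layout are proprietary, and the six-way split below is an assumption made for the example, not how iOS actually dispatches work.

```python
# Illustrative sketch (not Apple's scheduler): splitting one task across
# six workers at once, loosely analogous to the A11 running all six cores
# -- two high-performance plus four high-efficiency -- on one workload.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """A work unit that any core class could execute."""
    return sum(x * x for x in chunk)

data = list(range(60_000))
chunks = [data[i::6] for i in range(6)]  # one slice per hypothetical core

with ThreadPoolExecutor(max_workers=6) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total == sum(x * x for x in data))  # True: same result, computed in parallel
```

The point of the sketch is only that one logical task can be decomposed so that all workers contribute at once; on a heterogeneous chip the scheduler additionally decides which class of core gets each slice.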

It appears that silicon is reaching limits imposed by the laws of physics. Srouji, without commenting on specifics, said that Apple was "thinking ahead" on the matter -- and given that the A11 Bionic was three years in development, the executive expressed confidence that Apple has more tricks up its sleeve, and advances ahead.

"Doing this year over year and pushing complexity to the limit...," said Srouji. "I believe we have a world-class team."

The question remains when the MacBook One will do away with Intel entirely. 2020 is my rough estimate. More than anything, I'd be most interested in what battery life they could squeeze out for web browsing and YouTube video watching.

I have been hearing that same line, "It appears that silicon is reaching limits, imposed by the laws of physics," since the mid-1990s. Yet, mysteriously, year after year we see performance improvements in the 30-50% range. Those improvements do not sound anything like hitting a limit; improvements in the 2-5% range sound like you are banging against a limit.

We don't see 30-50% increases on desktop computers year after year, not these days.

That's because people lack creativity.

The other day idiots online were arguing with me about cars. Their argument was that cars have reached their innovation limits and there was nowhere else to go...

Not in IPC for CPUs.

Intel, for example, saw a ~40% increase in performance with Kaby Lake Refresh, with about 25% coming just from adding two more cores while staying within the same 15W package. The process remained the same and, for the most part, so did the architecture.
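As a back-of-envelope check on how much of a gain extra cores alone can deliver, Amdahl's law gives a rough bound. The 50% parallel fraction below is an assumed illustrative figure, not a measured workload property, so the result is a sketch rather than a claim about Kaby Lake Refresh specifically.

```python
# Back-of-envelope check (illustrative numbers, not Intel's): how much
# gain could come from doubling cores 2 -> 4, under Amdahl's law.
def speedup(cores, parallel_fraction):
    """Amdahl's law: 1 / ((1 - p) + p / n)."""
    p = parallel_fraction
    return 1.0 / ((1 - p) + p / cores)

# With an assumed ~50% of typical work parallelizable, doubling cores gives:
gain = speedup(4, 0.5) / speedup(2, 0.5) - 1
print(f"{gain:.0%}")  # 20% from the extra cores alone
```

Under that assumption, doubling the core count buys roughly a 20% speedup, in the same ballpark as the ~25% the comment attributes to the added cores; a higher parallel fraction would push the figure up.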

That's because The Matrix is real. In that environment, anything is possible. Matter becomes an expression of infinity.

You can take care of it by throttling it down from the beginning; that way, you will not see that drop. It is a physical property of the system to heat up and build up that heat to the point where the SoC has to be throttled down in order not to burn something or someone, or destroy the SoC.
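The heat-up-then-throttle behavior described above can be captured in a toy model. Every constant below (heating rate, cooling rate, temperature ceiling) is a made-up illustrative value, not a real SoC parameter; the sketch only shows the qualitative dynamic of heat accumulating until the clock is forced down.

```python
# Toy thermal model (invented constants): heat accumulates with clock
# speed, dissipates at a fixed rate, and the chip throttles once a
# temperature ceiling is crossed.
T_LIMIT = 75.0   # throttle threshold in degrees C (assumed)
AMBIENT = 25.0   # ambient temperature (assumed)

def run(steps, clock):
    temp, history = AMBIENT, []
    for _ in range(steps):
        temp += 3.0 * clock              # heating proportional to clock speed
        temp -= 0.1 * (temp - AMBIENT)   # passive cooling toward ambient
        if temp > T_LIMIT:
            clock = 1.0                  # throttle down to the low clock
        history.append(clock)
    return history

full = run(100, clock=2.0)
print(full[0], full[-1])  # starts at the full clock, ends throttled
```

Running it shows exactly the trade-off in the comment: the chip sustains the high clock only until heat builds up, after which it settles at the lower clock; starting at the lower clock avoids the visible drop at the cost of peak performance.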

1) The opportunity is there to offer a Mac running ARM, but Apple passes on opportunities it doesn't think are important enough or good enough to justify entering a market. I think that, just as 4K/HEVC came when Apple could upgrade its services, hardware, and software at the same time, we may have to wait until there is a shift in how macOS looks and feels before we see them enter the market with a low-end model.

2) I think that doing away with Intel "entirely" by 2020 is far too soon. Even if we had an ARM-based Mac on the market today, I'd say that 2020 is too soon to replace the MacBook Pros, Mac Pro, and iMac Pro.

Apple and its developers could certainly deliver a macOS device for education (think an ARM analog to the MacBook) and leave off x86 compatibility. I don't think that Apple would provide an iOS version, though. Whether there would be a wide enough market for it as an entry-level MacBook is unknown.

You'll be seeing the 30-50%* for years to come on Apple ICs, for two reasons: 1) everything is owned by Apple, so it can be optimized, and 2) it targets a single line of hardware.

* The only reason you wouldn't see that is if they decide to work on lowering TDP (and, by extension, improving battery life) in a given iteration.

Thinking through MacBook futures in which Face ID comes to the fore, I'm wondering if it is impossible without the neural net hardware of the A11. Could that capability be bolted onto the Intel (or, in future, AMD?) CPUs in MacBooks easily via a 'T2' rev?

I don't know why there wasn't much change from the 6S to the 7S; I suspect it is because there was only a minor node change from 16nm to 14nm FinFET. I do expect a substantial improvement from the A10 to the A11, if for nothing other than the node change, not to mention the more advanced architecture. Still, I don't think that test is representative of what you would require for AR, since the test incorporates an LTE connection.