From my perspective, AMD is currently the most fascinating company in tech. Its Zen CPU microarchitecture and Ryzen desktop CPUs met or even exceeded expectations, delivering performance and IPC comparable to Broadwell and giving Intel real competition in the x86 space for the first time in years. I am increasingly convinced by CEO Lisa Su’s efforts to turn the company around from the dire straits it was in until the launch of Zen.

AMD’s new Vega GPU architecture has been especially interesting to follow in recent months. I will caveat this article by saying I’m not very familiar with desktop parts, especially GPUs, so I don’t really know much beyond the basics.

What I don’t think most people know, though, is that GPUs are process-constrained. Vega is fabbed on GlobalFoundries’ 14LPP process, which is licensed from Samsung Foundry. TSMC’s 16FF+ was a little better than 14LPP in terms of power and performance, though it’s at least possible process maturity may have closed some of the gap over time. Quite how so many people expected Vega 10 to outperform NVIDIA’s GP104 GPU escapes me, then, given that the two GPUs are fabbed on very comparable processes. (HBM2 memory should make a difference, though, on paper.) I think many people simply assumed that if Vega came out later than NVIDIA’s Pascal, then it must be better.

If you are not familiar with the current state of the PC GPU market, NVIDIA has had a significant efficiency advantage since the introduction of its Maxwell architecture in 2014. It was later revealed that NVIDIA had adopted a tile-based rasterizer, which played a major though not exclusive role in eking out this efficiency advantage.

Beyond that, it was apparent once AMD announced the TBPs (typical board power ratings) for the first Vega cards that the architecture fares poorly on power efficiency. This is not good, because power efficiency is pretty much the most important metric for any IC. To speculate on the reasons behind it at this point would be wild guessing, but it does appear that some things went wrong.

Speaking from experience in the mobile space, I’ve seen vendors who are somewhat uncompetitive on efficiency boost performance to match the competition on benchmarks by operating their silicon at more inefficient points on the performance-per-watt curve. That said, Vega being able to match Pascal’s performance was not a given either, and thankfully it does. Vega’s clock speeds, at least, are not a concern.

Software-wise, AMD’s drivers were clearly running very late. Software historically has not been AMD’s strength, though I am optimistic things will be improving from now on. However, one wonders why the drivers and various new features are so delayed.

Everyone knows that Vega was late. While HBM2 yields likely played a role, there’s probably more to it. Someone smart observed that AMD did the right thing by delaying the products, as opposed to ostensibly doing something stupid.

To me, it looks like AMD probably had enough issues with Vega that it had to rush out a respin. On the one hand, that would clearly not be good. On the other hand, if so, I’m really glad AMD paid to do it and delayed the non-Frontier cards. Respins are really expensive, and in mobile consumers are often not so lucky to get them. That is about the extent of my familiarity with these matters, at least. The situation with Vega is not the end of the world, since its performance is still competitive, and Vega will sell out for quite a long time regardless.

For architecture and competitive analysis, I recommend reading AnandTech (and only AnandTech), though of course useful benchmarks are found on many sites. I would also recommend waiting a week or two to see how the AT review gets updated, because it's impossible to actually analyze much of anything before a review embargo lifts.

And as much as this will probably pain gamers to hear, I consider Vega’s performance on deep learning operations to be much more important than its gaming credentials. There is an inordinate amount of money at stake if AMD can manage to move the needle with Radeon Instinct and HIP against NVIDIA’s domination in deep learning.

A former Apple engineer has shared that Apple switched A2DP and HFP over to its own Bluetooth LE audio standard in iOS 9. This blew my mind.

For background, here are some of the basics. There are two “Bluetooths”: Classic and Low Energy (LE). The former is the streaming standard that everyone knows through wireless headsets and speakers, while the latter is basically what every modern peripheral device or hardware accessory, such as a smartwatch, uses to transmit data.

LE is also called Bluetooth Smart. LE is bursty and lower power (though not necessarily inherently more efficient), and was designed to enable devices running on coin cell batteries. You can do crazy things like stream video over it, though, if you so desire. (Don't do that.)
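To make the duty-cycle point concrete, here is a toy back-of-the-envelope model of why a radio that wakes briefly and sleeps the rest of the time can live on a coin cell. All the numbers below are made-up illustrations for the sketch, not measurements of any real chip.

```python
# Toy model of BLE's bursty duty cycle. Illustrative numbers only.

def avg_current_ma(interval_ms, active_ms, active_ma, sleep_ma):
    """Average current for a radio that wakes once per connection interval."""
    duty = active_ms / interval_ms
    return duty * active_ma + (1 - duty) * sleep_ma

# Hypothetical peripheral: awake 2 ms out of every 1000 ms connection
# interval, drawing 10 mA while active and 1 µA while asleep.
avg = avg_current_ma(interval_ms=1000, active_ms=2, active_ma=10.0, sleep_ma=0.001)

# A CR2032 coin cell holds roughly 220 mAh.
hours = 220 / avg
print(f"average current: {avg:.3f} mA, battery life: {hours / 24 / 365:.1f} years")
```

Even with these rough numbers, the average draw lands around 21 µA, which is years of life on a coin cell; a Classic-style continuous stream keeps the radio active orders of magnitude more often and destroys that math.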

I’ve been vaguely keeping track of progress on BLE audio for a few years. I knew that the Bluetooth SIG was working on an LE audio standard, but am amazed that Apple secretly deployed its own in 2015. But it’s not magic, and is still based on LE. “Configuring the HAs [hearing aids] is performed through LE services & characteristics, but the audio streaming channel is secret sauce.”

Bluetooth LEA, as Apple calls it, is not used by the AirPods. I’m not sure why, but it may simply be because LEA’s quality is still inferior to Classic audio streaming. Streaming audio is inherently difficult because of LE’s lower duty cycle, which is what makes LE more efficient in general.

Pairing is the same as for the AirPods, using standard LE protocols, though there may be specific codec features that Apple depends on. To emphasize, this is all still built on top of standard Bluetooth. And I believe the SIG is working on a similar pairing UX feature. (Keep in mind that pairing is not required with LE as it is with Classic. Otherwise, say, Bluetooth beacons wouldn’t exist.)

Aside:

I frequently see people complaining that “Bluetooth sucks” or “Bluetooth is always supposed to get better next year.” Before they were announced, for some reason people even wondered if Apple was going to replace “Bluetooth” for its AirPods. The problem is that people are almost always thinking of the wrong Bluetooth.

I won’t fully explain it here, but basically Classic and LE are different radios. To oversimplify: you can think of Bluetooth 4.0 and later as a completely different spec than 3.0 and earlier. For example, Bluetooth 5 has absolutely nothing to do with the Bluetooth that people normally think of (Classic).

* Thanks to Brendan Sharks for suggesting a correction to the article title.

Even though Firefox is not my main browser, its “Don't load tabs until selected” option has always been my favorite browser feature. The number of tabs I want to load on first launch is exactly one. In an ideal world, the resource overhead of tabs you’re not currently looking at should be as close to zero as possible.

In short, individual display variance is too significant, and you will probably make things worse. If you want to individually calibrate your TV, don't do it yourself, and certainly not by eye. Have a professional do it.

If you want to learn about TVs and home theater equipment, I recommend following Chris Heinonen and reading his articles. Note that I am only recommending him, specifically.

I need your support in order to make this blog a sustainable effort. I know that $10 is not insignificant, but it's honestly what I think will be necessary to get Tech Specs off the ground. I'm trying to keep my costs as close to zero as humanly possible, and to date have funded everything out of pocket.

Your money will go towards:

Access to all Tech Specs articles. At least one in-depth piece per week, on average. While it could end up being more, I would rather overdeliver than overpromise.

A small number of free articles for everyone, generally introductory educational pieces or minor news commentary

Some of these topics I can write about in great detail. Beyond that, I have many ideas for the future of the blog. There are also certain guests I would like to host on the podcast (eventually).

There will be no ads, ever. I believe in the subscription model.

Advertising's enormous advantage is of course the democratization of content — everyone has access. I am highly sympathetic to this benefit, which is why I will continue to make introductory articles available to everyone from time to time. They will always be a very important part of the blog.

But the advertising model on the internet is often detrimental. When you have reputable technology websites flooded with highly questionable ads and auto-play videos that greatly inhibit the performance and battery life of readers’ devices, the system is broken. This is not an indictment of journalism, but simply economic reality. And all advertising is inherently consequential to the message of its medium.

For all of these reasons, I prefer the subscription model. It also allows me to avoid the temptation of clickbait headlines. Before publishing a piece I can ask myself whether I even have anything of value to add on a topic. If journalists have already covered it well, then that's great, and I’m happy to share those articles.

I also believe there is a great need for tech coverage that provides at least some of the perspective of the industry itself, and it's worth emphasizing how much the average industry observer does not get to see. Talk to an engineer at any tech company, and it's clear that "how things actually work" is often radically different from how it's portrayed online. There is a tremendous amount of work that goes into creating, testing, and manufacturing tech products, and the vast majority of this work goes completely unappreciated in the public record. What goes into making a product is often just as important as the final result.

I genuinely want to do something different. I’ll do this by covering things that are not normally discussed in the press, or often are not on the internet at all. Sometimes I’ll be able to go into much greater depth on technical subjects. Relatedly, nothing is more important to me than accuracy, and I will always correct any identified mistakes.

Lastly, within the realm of independent content I am indebted to several influences, including Jessica Lessin and The Information, Dan Luu, and Chris Pirillo. My thanks to them for the inspiration.

Thank you all for your consideration and your support. It’s deeply appreciated.

Android's open source nature makes it vastly easier to learn about than closed source OSes. As such, I want to address several misconceptions about the platform that constantly come up. This article will be occasionally updated on an ongoing basis as I think of more topics to include.

GMS

GMS actually stood for Google Mobile Suite, not Google Mobile Services, at least originally. So many people assumed it stood for the latter that even Google seems to use it now.

Perhaps it was a situation like Qualcomm’s Gobi, the cellular firmware API that so many people confused for a modem brand that Qualcomm eventually gave up and rebranded its modems as Gobi. Or Samsung’s ISOCELL, the deep trench isolation implementation that so many people took for Samsung’s camera brand that Samsung likewise gave up and branded its CMOS image sensors as ISOCELL at Mobile World Congress 2017.

Force quitting apps

If memory serves, with Android X.X (can someone please remind me?), swiping away an app in the multitasking UI no longer force quit even the background services of an app in AOSP. Swiping away apps can actually still force quit them on a specific device, though, depending on the vendor’s chosen implementation.

If, however, an app is actually stalled or causing real problems in the background, you may have to manually force quit it. But in general, avoid doing so. Be nice to your NAND’s endurance, folks.

Project Treble

Based on recent changes to AOSP, Treble appears to be an attempt at a stable driver API. By painfully rewriting its various HALs to conform to a new standardized hardware IDL, the Android team is speeding up Android updates by enabling silicon and device vendor bring-up efforts to be more parallelized, and by making updates a bit more economically viable for the silicon vendors. This does not mean, however, that SoC vendor support no longer matters at all. It’s a huge deal, but it’s not the same thing as having a stable driver ABI. See: Fuchsia.
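As a loose analogy only (this is not real Android code, and every name below is invented for illustration), the core idea of a standardized, versioned HAL interface can be sketched as follows: the framework depends only on the interface, so the OS side and the vendor side can be updated independently.

```python
# Loose analogy for Treble's stable HAL interface idea. Not real Android
# code; all names here are hypothetical.

from abc import ABC, abstractmethod

class ILight(ABC):
    """Stands in for a standardized, versioned HAL interface definition."""
    VERSION = "1.0"

    @abstractmethod
    def set_brightness(self, level: int) -> bool: ...

class VendorLightHal(ILight):
    """Vendor implementation; ships with the device and can stay unchanged."""
    def set_brightness(self, level: int) -> bool:
        # Pretend to program the backlight hardware; reject invalid levels.
        return 0 <= level <= 255

def framework_set_screen_brightness(hal: ILight, level: int) -> bool:
    # The OS framework depends only on ILight, never on VendorLightHal,
    # so an OS update does not require the vendor side to be rebuilt.
    return hal.set_brightness(level)

print(framework_set_screen_brightness(VendorLightHal(), 128))
```

Note this sketch shows a stable *API* boundary within one program; as the article says, that is still weaker than a stable driver *ABI*, where the two sides are separately compiled binaries that must interoperate across versions.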

F2FS

F2FS is a file system developed by Samsung LSI and upstreamed into Linux. It was designed for NAND flash memory and is claimed to be faster than ext4, Android’s default file system. The Android team disputes this based on its own testing, and says it sees no significant performance differences between the two. Regardless, there are several issues with F2FS, and more importantly Google cannot hardware accelerate file-based encryption with it. There isn’t a pressing need to replace ext4, though of course a better, more feature-rich file system could always supersede it.

Security

Malware on Android is often portrayed as an ever-growing, constant crisis. While Android does have real and significant security concerns, the overall issue is still hugely overstated.

Firstly, the term malware can mean absolutely anything. The vast majority of stories about mobile security spread FUD and sensationalism, to the detriment of readers. I won’t pretend to be a security expert, but even imperfect sandboxing probably goes a long way compared to the completely unsandboxed traditional PC application environments. It isn't clear to me whether Android or macOS is more secure overall, for example. As with many things, it probably depends.

There is however an extreme case: the Chinese market. Because Android is out of Google’s control in China, the OS genuinely is a security nightmare in the country. I remember waiting for a flight at the airport in Beijing and watching with amusement as some seemingly low-threat app started downloading itself onto my phone over the air. All I did was have Wi-Fi enabled; I hadn’t attempted to connect to any access points.

Everyone knows the fundamental issue with Android security: the horrible update problem. If devices consistently received timely updates for multiple years, the perception of Android’s security architecture would be radically different. I would personally attribute that to licensing and Linux's deliberately non-stable driver ABI, but there are a few hundred other opinions out there on the matter. And of course the overall topic of security is much, much more complex than what I am addressing here.

Which leads us to...

Android Things

What is Android Things, really? Why is it a distinct platform from Android, in other words? One very important difference is how Google manages the BSPs (board support packages) and drivers for Android Things. It works with the silicon vendors but provides the BSPs itself. Device vendors cannot modify the behavior of kernel drivers or HALs. Developers that need to add drivers for peripheral devices to add to their baseboard are able to write user space drivers, unsurprisingly called user drivers.

Furthermore, the intersection of updates and the Internet of Things would seem to be an obvious disaster, so how does Google address the issue? The company is actually releasing monthly BSP security patches through its Developer Console, which will soon roll out directly to devices, with the caveat that the direct updates will only be for the same platform version of Android the device is on (such as, say, 7.X Nougat).

Project Brillo is not exactly the same thing as Android Things. Brillo was killed; the initiative's goals changed, and it morphed into Android Things. The Accessory Development Kit (ADK) was also somewhat of a predecessor to Android Things.

Android Wear

Touch latency

To oversimplify: Android touch latency was never good, until Android 7.1 essentially finally “solved” the problem. Additional features were added to the new Hardware Composer 2 (HWC2) HAL in 7.1 which can reduce touch latency by up to 20ms, or 1.2 frames, but not always. (It is not correct to say that touch latency is simply 20ms faster in 7.1.) According to the Android team, this was done by staggering some operations on batched input events, not doing everything on the VSync frame boundary, in order to reduce the likelihood of triple buffering and the increased latency that it causes.
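As a quick sanity check of the arithmetic above, the "up to 20ms, or 1.2 frames" figure follows directly from the 60 Hz frame period:

```python
# Sanity-check the numbers cited above: how many 60 Hz frames is 20 ms?

REFRESH_HZ = 60
frame_ms = 1000 / REFRESH_HZ   # ~16.67 ms per frame at 60 Hz
saved_ms = 20                  # best-case HWC2 improvement cited above

print(f"{saved_ms} ms = {saved_ms / frame_ms:.1f} frames")  # 1.2 frames
```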

The improvement in touch latency is extremely noticeable, and it immediately impressed me on the Nexus 6P after installing 7.1. While the HWC2 improvements make a huge difference, silicon and device vendor implementations still matter! A device still needs to have a quality touch controller, touch stack, and associated software. There are also other parameters that can be tuned by vendors, such as move sensitivity, which should vary based on device size.

It’s also important to understand that touch and general input responsiveness is a function of rendering performance. This is why, say, G-SYNC improves input latency when it manages to improve performance, and especially when exceeding 60fps under VSync. Thus on any device, the higher the realized display refresh rate, the lower the input latency. This is how Adaptive-Sync and proprietary variable refresh implementations will soon benefit Android touch latency.
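To illustrate that relationship with a simplified model (not a measurement of any real device): one component of input latency is simply waiting for the next frame boundary, and that wait shrinks as the realized refresh rate rises.

```python
# Simplified model: an input event waits, on average, about half a frame
# interval before the next frame can pick it up. Higher realized refresh
# rate -> shorter interval -> lower latency.

def frame_interval_ms(hz):
    return 1000 / hz

for hz in (60, 90, 120):
    interval = frame_interval_ms(hz)
    print(f"{hz:3d} Hz: frame interval {interval:5.2f} ms, "
          f"avg wait-for-frame ~{interval / 2:.2f} ms")
```

This is of course only one term in end-to-end touch latency, but it shows why variable and higher refresh rates help input responsiveness, not just animation smoothness.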

Graphics rendering

For years people have debated the causes of Android’s infamous “lag” problem. The causes of jank on Android have never been as simple as a binary distinction of whether the OS is hardware-accelerated (running graphics operations on the GPU to some extent) or not. I have no idea what the true reasons are, but I do know that Android’s rendering pipeline is extremely complicated and requires graphics expertise to really understand. At the end of the day, most signs point to initial design decisions made in the early days of Android that are not easily undone.

While many have unrealistically hoped for a single “performance boosting thing,” Android performance does constantly improve in each new release. As previously discussed, one interesting new feature introduced in O is an optional new graphics renderer. (Updated explanation: the option just replaces hwui's GL renderer with Skia's GL renderer.) Skia's renderer will probably be made the default GL renderer in Android P.

Variable refresh

One thing to note is that adaptive sync/refresh will benefit Android more than Apple’s ProMotion benefits iOS.

Color

No vendor should target anything other than sRGB for a device display until Android O ships. In other words, the display’s software calibration must target sRGB to be correct, because that is the only color space that Android currently supports.

I hope this article at least conveys how almost all engineering decisions involve tradeoffs. There are rarely magic bullets that solve everything.

I found Erica's opinions valuable in terms of thinking through iOS 11's multitasking UI redesign, even though I don't agree with her conclusion (that the new design is worse overall). Her concerns about whether it serves all users are commendable. In general it's critical to consider all points of view on subjective decisions. If you don't think about the other side of an argument, you haven't really thought through a problem.

I think the new multitasking UI is awesome and necessary for implementing Spaces, but Erica correctly identifies many of the downsides to the redesign. All of her points are valid, but it's also worth noting that none of these features are strictly necessary to use an iPad. If users never discover the multitasking UI in the first place, they can continue using the iPad exactly the same as always.

All engineering involves making tradeoffs. The cost of not implementing this more power-user-friendly redesign would be the iPad continuing to stagnate. Tablets need to continue evolving to do more than phones, and they've arguably taken far too long to do so. The increase in complexity is a necessary tradeoff in order to make tablets more valuable in their own right. That's not to say there isn't still plenty of room for improvement in this new UI, of course.

I personally like that a side effect of the new UI is that users can no longer easily swipe away apps, a habit that hurts performance and battery life even though users are often actually trying to improve battery life. The previous UI probably would have made it too easy to accidentally swipe away a Space that users had bothered to set up.

iOS 11's new Control Center is also pretty much exactly what I wanted to improve watchOS: an untruncated vertical scrolling list. Having to perform separate swipes to access multitasking and Control Center would be more confusing and time-consuming for users. The unified bottom swipe not only encourages more frequent multitasking, but it also provides a simplification of the previous four-finger gesture shortcut. There are always many aspects to consider regarding accessibility.

Matthewmatosis is my favorite YouTuber, as he makes amazing videos commenting on game design. In this video segment, Matthew talks about leaks in the video game industry, but what he says applies equally to the tech or any other industry. His feelings are obviously not profound, but I could never say it any better myself. The developers deserved their moment of joy after three long years of work.

I also saw the leaks of Mario + Rabbids Kingdom Battle before E3, and I wish I never had. The game's unveiling would genuinely have been an awesome surprise otherwise. Thankfully the game at least looks pretty great. And do check out Matthew's videos if you're interested in game design. I can't recommend them enough.