How Intel missed the iPhone revolution

Intel just laid off 12,000 workers in the face of declining PC revenues, and the move has all of us asking “what’s next?” for the company that launched the microprocessor revolution.

The booming smartphone market is almost exclusively based on microprocessor technology from tiny ARM, a British chip design company with no manufacturing capability and a market cap that, for most of the pre-iPhone era, was smaller than Intel’s advertising budget.

Intel, then, is left clinging to the top of a palm tree while the technological tsunami that it started with the 1971 launch of the Intel 4004 sweeps across humanity.

So what happened? How did the company co-founded by Gordon Moore — integrated circuit pioneer, eponymous author of Moore’s Law, and one of the most far-seeing prophets of the digital age — miss the mobile boat entirely?

To understand what went wrong for Intel after the launch of the iPhone, you first have to know what went right for the chipmaker during the “Wintel” duopoly years — how the company used a specific set of business practices to maintain shockingly high profit margins, shut out rivals, and ultimately draw the wrath of the FTC.

This isn’t so much a technology horserace story as it is a business story, and once you see how the business and tech parts fit together to create the “PC era” we all lived through, it’ll be obvious why the company couldn’t pull off a daring pivot into mobile the way it once pivoted from making memory chips to microprocessors.

But before we can get into the history, I have to take a brief detour and try to brain a zombie idea that just won’t die. It came up most recently in this piece by Jean-Louis Gassée, and I call it the “ARM Performance Elves” hypothesis.

The Performance Elves Were Real… Until They Weren’t

Photo courtesy of Flickr/Quinn Dombrowski

The reason that Intel lost mobile to ARM has nothing to do with the supposed defects of the ancient x86 Instruction Set Architecture (ISA), or the magical performance properties of the ARM ISA. In a world of multibillion-transistor processors, anyone who suggests that one ISA has any sort of intrinsic advantage over another is peddling nonsense on stilts. I wrote about this five years ago — it was true then, and it’s still true:

First, there’s simply no way that any ARM CPU vendor, NVIDIA included, will even approach Intel’s desktop and server x86 parts in terms of raw performance any time in the next five years, and probably not in this decade. Intel will retain its process leadership, and Xeon will retain the CPU performance crown. Per-thread performance is a very, very hard problem to solve, and Intel is the hands-down leader here. The ARM enthusiasm on this front among pundits and analysts is way overblown—you don’t just sprinkle magic out-of-order pixie dust on a mobile phone CPU core and turn it into a Core i3, i5, or Xeon competitor. People who expect to see a classic processor performance shoot-out in which some multicore ARM chip spanks a Xeon are going to be disappointed for the foreseeable future.

It’s also the case that as ARM moves up the performance ladder, it will necessarily start to drop in terms of power efficiency. Again, there is no magic pixie dust here, and the impact of the ISA alone on power consumption in processors that draw many tens of watts is negligible. A multicore ARM chip and a multicore Xeon chip that give similar performance on compute-intensive workloads will have similar power profiles; to believe otherwise is to believe in magical little ARM performance elves.

This notion that ARM is somehow inherently more power-efficient than x86 is a holdover from the Pentium Pro days, when the “x86 tax” was actually a real thing.

Intel spent a double-digit percentage of the Pentium Pro’s transistor budget on special hardware that could translate big, bulky x86 instructions into simpler, smaller ARM-like “micro-ops”.

In the subsequent decades, as Moore’s Law has inflated transistor counts from the low single-digit millions into the high single-digit billions, that translation hardware hasn’t grown much, and is now a fraction of a percent of the total transistor count for a modern x86 processor.
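A quick back-of-the-envelope sketch makes the point. Every figure below — the 10 percent decode share, the transistor counts, the growth factor — is an illustrative assumption for the sake of the arithmetic, not a measured die-area number:

```python
# Rough illustration: the x86-decode "tax" as a share of the total
# transistor budget, then vs. now. All figures are ballpark assumptions.

pentium_pro_transistors = 5_500_000     # Pentium Pro (1995): ~5.5M transistors
decode_share_1995 = 0.10                # assume ~10% of the die went to x86 decode
decode_transistors = pentium_pro_transistors * decode_share_1995

modern_x86_transistors = 5_000_000_000  # a mid-2010s server-class x86 chip: billions

# Even if the decode hardware tripled in size since 1995...
modern_decode = decode_transistors * 3
modern_share = modern_decode / modern_x86_transistors

print(f"1995 decode share:   {decode_share_1995:.1%}")
print(f"modern decode share: {modern_share:.3%}")  # a small fraction of a percent
```

Under these assumptions the decode overhead falls from double digits to roughly three hundredths of a percent — far too small to matter for power or performance.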

In short, anyone who believes that ARM confers a performance-per-watt advantage over x86 is well over a decade behind the times.

With that out of the way, on with the real story.

ISA Lock-In

Since the dawn of the PC era, Intel has enjoyed a supremely rare and lucrative combination of both high volumes and high margins. The company has sold a ton of chips, and it has been able to mark them up way more than should normally be possible.

There is one technical reason that Intel has historically been able to mark its chips up so high: x86 lock-in. A processor’s ISA does matter, but for backwards compatibility, not performance.

When a massive, complex software platform like Windows is compiled for a specific ISA, it’s a giant pain to re-compile and optimize it for a different ISA, like ARM. Things like just-in-time (JIT) compilation and binary translation were supposed to do away with this problem long ago, but they’ve never panned out, and ISA lock-in is still very real in 2016.

Apart from some failed commercial experiments and lab builds, Windows has always been effectively x86-only as far as the mass market is concerned. This meant that PC vendors like Dell, the erstwhile Gateway, and even more boutique shops were in the business of selling “Wintel” PCs to customers who may have just wanted Windows but who also had to buy Intel in order to get it.

Thanks to their effective “duopoly” status, Intel and Microsoft could charge quite a bit of money for their respective technologies, jacking up PC sticker prices for consumers and leaving systems integrators to scramble for what little profit they could eke out. But Intel in particular was famous for using its half of the Wintel duopoly to starve up-and-coming rivals of cash by suppressing their margins. The scheme worked as follows.

The Margin Mafia


Let’s say that Intel wants to boost its margins, so it goes to Dell and says, “we’re going to start charging you more money for our CPUs.” Since Dell has no real alternative if it wants high-performing x86 processors (I’ll talk about AMD in a moment), Dell has to suck up the price increase.

Now that Dell is sending Intel more money per PC shipped, the PC maker has three options:

1. Raise prices, making its offerings less competitive in a cut-throat PC marketplace where anyone with a screwdriver and a credit card can start a PC assembly shop.
2. Eat the cost increase and watch its own margins suffer (and its stock price get hammered).
3. Squeeze the rest of the component makers that supply its PC parts (the GPU, the soundcard, the motherboard, etc.) to lower their prices and their margins, in order to make up the difference.

Not surprisingly, Dell and the rest of the PC vendors got into the habit of choosing option 3. Because there are multiple GPU vendors in the market, Dell could go to Nvidia and ATI and play them off against each other, forcing them to offer lower prices in order to secure a spot in a Dell PC. And likewise with other component makers.

This, then, was the basic mechanism by which Intel was able to “steal” margin from everyone else inside the PC box, especially the GPU makers like Nvidia and ATI.

As for AMD, Intel’s primary way of locking them out was the “Intel Inside” branding program. In exchange for putting an “Intel Inside” sticker on their PCs, and for including the little “Intel Inside” logo in their advertisements, Intel would subsidize the PC vendors’ marketing efforts. This was effectively a kickback scheme.

I call it a kickback scheme, because it works as follows: Intel raises its margins on its CPUs, forcing Dell to demand that Nvidia and/or ATI accept smaller margins on their GPUs, and then Intel kicks back a portion of the money that was peeled from Nvidia and ATI’s share of the pie to Dell by subsidizing their PC marketing efforts.
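The flow of money is easier to see with numbers attached. Here is a toy model of the scheme; every dollar figure is invented purely for illustration:

```python
# Toy model of the margin-squeeze-plus-kickback flow described above.
# All dollar amounts are made up for illustration.

pc_price = 1000      # what Dell charges the consumer (unchanged throughout)
cpu_price = 150      # what Intel charged Dell before the squeeze
gpu_price = 100      # what Nvidia/ATI charged Dell before the squeeze
other_costs = 650    # everything else in the box

dell_margin_before = pc_price - (cpu_price + gpu_price + other_costs)

# Step 1: Intel raises its CPU price.
cpu_price += 30
# Step 2: Dell plays the GPU vendors off each other to claw the $30 back.
gpu_price -= 30
# Step 3: Intel kicks back part of its new margin as "Intel Inside" co-marketing.
marketing_subsidy = 10

dell_margin_after = pc_price - (cpu_price + gpu_price + other_costs) + marketing_subsidy

print(dell_margin_before)  # 100
print(dell_margin_after)   # 110
```

Dell comes out slightly ahead, Intel nets an extra $20 per unit, and the GPU vendor eats a $30-per-unit margin cut — which is the whole point of the scheme.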

Dell gets to keep its unit cost the same and gets some extra money for marketing, Intel gets fatter margins and weakened rivals, and everybody but the component makers and AMD are happy. Why would Dell rock the boat by threatening to switch to AMD? (Dell did eventually introduce AMD-based products, and it was big news at the time.)

Taking a Pass on ARM


Intel had a good thing going with the scheme described above, and it wanted to protect that scheme at all costs. The chipmaker also had a pretty sweet line of ARM processors, XScale, but it sold that line off to Marvell in 2006 when it decided that low-margin, high-volume businesses were for the birds.

By the time ARM was gaining traction and Jobs had secretly gone to work on the iPhone, Intel was thoroughly addicted to its fat margins. Thus it was no surprise that Intel passed on Steve Jobs’ request that it fabricate an ARM chip for the iPhone. Intel didn’t want to be in the low-margin business of providing phone CPUs, and it had no idea that the iPhone would be the biggest technological revolution since the original IBM PC. So Intel’s CEO at the time, Paul Otellini, politely declined.

It’s also important to note that there was no mad scramble from other phone vendors for x86-powered phone chips, either. Everyone in the mobile space already knew everything I just outlined above — they were wise to Intel’s tricks, and they knew that the minute they adopted a hypothetical low-power x86 CPU, Intel would use ISA lock-in to start ratcheting up its margins.

There was no way they were going to give Intel the leverage to do to the smartphone space what the chipmaker did to the PC space. So the Nokias of the world had as little interest in Intel as Intel had in them, and the former happily went with the cheap, ubiquitous ARM architecture.

ARM is great, despite mostly running a process node or two behind Intel, because it gives a system integrator options. Unlike with x86, if one ARM vendor tries to squeeze you, you can ditch them and move to another. ARM chips may not have matched Intel’s performance per watt or transistor density, but they’ve always been cheap, easy, available, and totally devoid of the threat of lock-in.

Playing Defense with Atom


At some point, Intel realized that ARM was a threat in the low-power space, so the company released the low-power Atom x86 processor line as a way to play defense. But Atom just plain sucked — seriously, it ran like a dog, and Windows-based Atom netbooks were borderline unusable for the longest time.

Atom was terrible because it had to be, though. If Intel had released a low-margin, high-performance, low-power x86 part, then server makers would’ve been among the first to ditch the incredibly expensive Xeon line for a cheaper alternative.

Intel basically couldn’t allow low-margin x86 products to cannibalize its high-margin x86 products, so it was stuck with half-measures, like Atom, aimed more at keeping ARM from moving upmarket into laptops than at moving x86 down into smartphones.


The TL;DR of this entire piece is that Intel missed out on the mobile CPU market because that market is a high-volume, low-margin business, and Intel is a high-volume, high-margin company that can’t afford to offer low-margin versions of its products without killing its existing cash cow.

The other thing you should take away from this is that if your entire business is built on using your monopoly status to squeeze partners, starve competitors, and fatten your own margins at the expense of everyone else, then that won’t go unnoticed. Incumbents in any new space you try to enter will be leery of partnering with you, lest they find themselves subject to the same tactics.

At this point in 2016, even if Intel wanted to go all-in on mobile, it’s not clear it could. Who in their right mind would bet their entire company on an x86 mobile processor, no matter how mind-meltingly awesome its specs, given Intel’s history of using x86 as leverage to crush an entire ecosystem?

Indeed, if Apple ever moves its laptop and desktop lines from x86 to ARM, it won’t be because of ARM Performance Elves or the supposed deficiencies of the legacy x86 ISA — it’ll be because they’ve finally migrated ARM up-market to the point that the performance/watt gap vs. Intel at the same unit cost is small enough that they think they can get away with another big switch.

An ARM-based MacBook will have worse performance per watt on CPU-bound workloads than a comparable Intel-based laptop, probably forever. But Apple won’t care, because they’ll be able to lower prices and/or widen their margins, and their customers will keep buying anyway, because Apple.

As for what’s next for Intel, it’s a choice between stagnation and a painful transition to a lower-margin business. There’s no way to break into hot growth areas like cars or IoT clients while keeping the x86 lock-in plus high-margin gravy train rolling. No, if Intel wants to grow, it’ll have to give up its margins one way or the other — either by letting a low-margin x86 part cannibalize its higher-end products, or by getting back into the ARM business.

Why the NRA hates smart guns

With yet another push from President Obama to revive initiatives to develop “smart gun” technology, it’s time to once again revisit the issue.

The most common question I got in response to my previous piece on smart guns is, “why is the NRA so opposed to smart guns?”

The NRA’s official position is that it doesn’t care one way or the other about smart gun tech and that the market should decide, but we all know that’s baloney. The NRA, along with the vast majority of rank-and-file pro-gun people, hates smart guns, and will do everything in its power to sabotage smart gun efforts.

So the question is, why?

The simple answer to this question is widely known, but also widely misunderstood.

Most who follow this issue know that the NRA hates smart guns because they’re afraid that once a seemingly viable smart gun technology exists, anti-gun legislators at the state and federal levels will attempt to mandate it in all future guns by comparing it to seat belts, air bags, and other product safety features.

If you read my previous piece on the problems with smart guns, then you hopefully understand the differences between a seat belt and smart gun tech, and why the prospect of having smart technology forcibly included in all new firearms drives gun nuts into apoplectic fits.

But maybe you’re thinking, “that’s fine, then. We just won’t mandate it. There will be no mandate. There, you happy now? Can we just get on with the smart gun innovation and let this play out in the market?”

Here’s the thing, though: the NRA is actually right in this case. If smart guns get any traction, then non-smart guns will come under legislative assault.

I realize some of you went into shock and stopped reading after you saw the phrase “NRA is actually right” appear on TechCrunch, but if you’re still with me then give me a moment to explain.

The Series of Tubes

Guns are a technology, and, like most members of the general public, gun control advocates are thoroughly confused about how guns operate outside of Hollywood — as in, “the Internet is a series of tubes”-level confused. It’s hard for me to overstate just how bad it is out there, even among much of the gun-owning public.

But maybe you grew up on a farm, you had a little hunting rifle, you went to the range a few times and shot a pistol, and so on. And you’ve seen lots and lots of movies with guns in them. Even so, unless you have a background in the military, law enforcement, or executive protection, or have undergone a decent amount of civilian training in real-world defensive use of firearms, then you know as much about guns as a teen with a brand new learner’s permit knows about steering an 18-wheeler through downtown Chicago in rush-hour traffic.

It’s bad that the general public — including the majority of casual gun owners — are so confused about guns that they don’t know how much they don’t know. But what’s worse, at least if you’re a gun person, is that lawmakers and activists who know less than nothing about guns often find themselves in a position to confidently enshrine their technological ignorance into law.

This, then, is what the NRA is terrified of: that lawmakers who don’t even know how to begin to evaluate the impact of the smallest, most random-seeming feature of a given firearm on that firearm’s effectiveness and functionality for different types of users with different training backgrounds under different circumstances will get into the business of gun design.

And they’re right to be afraid, because it has happened before.

The Vertical Foregrip: A Case Study in Well-Intentioned Insanity

To understand how gun control advocates’ high-handed indifference to basic gun knowledge feeds into the visceral reaction that pro-gun folks have to smart gun tech, you have to go back to the original, Clinton-era Assault Weapons Ban (AWB). Whatever you think of the merits of the AWB, you have to admit that it put legislators in the business of gun design. And I don’t mean this metaphorically — legislators were literally doing gun design.

Before they could ban “assault weapons” as a category, lawmakers (or lobbyists and interns, more likely) had to compile a list of cosmetic, ergonomic, and functional features that, when they appear on a firearm either singly or in certain combinations, turn that gun from an ordinary rifle into a deadly assault weapon.

This process of looking at features and combinations of features and deciding what stays and what goes is pretty much what you’d do if you worked in product design at a Remington or a Smith & Wesson. But the people who put together the AWB’s feature lists not only weren’t professional gun designers, they didn’t even appear to know anything at all about the guns they were designing for the public’s use.

This article isn’t the place to catalog the insanity of the Clinton-era AWB, or the recently proposed AWB that seeks to take its place, but I do want to offer to the uninitiated one concrete example of what I’m talking about, so you can get a feel for just how mad and maddening the results of this design-by-lawyer process are.

Consider the lowly vertical foregrip (VFG), a simple handle that hangs down from the front of a rifle and gives the weapon’s operator a place to put their off-hand.

Photo courtesy of Oleg Volk.

Some shooters love the VFG for the same reason that the AWB’s authors presumably hated it — because it’s popularly seen on military guns in movies and violent video games.

On the other hand, many competitive shooters dislike vertical foregrips, preferring instead to wrap their entire hand around the front of the gun.

Other shooters prefer a halfway option called the “angled foregrip”, which is a small ramp placed under the front of the gun that can anchor the hand.

Still other shooters put a vertical foregrip on their guns but don’t actually grip it when shooting — instead, the VFG gives such shooters a tactile marker that lets them quickly and easily place their off hand in the same spot every time without looking.

Finally, some people use a VFG solely as a small storage compartment for batteries or cleaning supplies.

Like all things ergonomic, the decision to VFG or not to VFG is ultimately a matter of shooter preference… that is, unless you were a shooter during the original 1994 AWB, in which case your preference didn’t matter because VFGs were totally illegal.

Yes, that is correct — a small plastic handle that some people like to attach to the front of their gun because they feel it gives them a more comfortable hold on the weapon was considered a feature so deadly it must be outlawed. The lawmakers who added the VFG to the list of banned features not only declined to produce any sort of rationale for how the VFG makes guns deadlier or less safe, but it’s likely that they didn’t even know what it was or how it was used.

This ignorance of all things gun-related is on painful display in the video below, in which a young Tucker Carlson goes down a list of banned “assault weapon” features with former congresswoman Carolyn McCarthy, a leading backer of renewing the AWB, asking her what certain features do and why they had to be banned.

[embedded content]