Who’s afraid of the IoT?

It’s heeeere! The Internet of Things, I mean. I just spent several days at the Connected Conference in Paris, which focused on IoT hardware. They built a whole home full of connected devices, showcased the War for the WAN, and underscored that the IoT is — as always — not quite what we imagined, and not quite what we hoped.

Home and Away

The home was interesting: it boasted “smart” shutters, speakers, heater, boiler, umbrella, mirror, toothbrush, shower, bed, and a scent-driven alarm clock, along with the obvious smart lock, Nest, Dropcam, and Echo. Did all of these things actually seem useful? Well, no. But it did reinforce that the Echo is a big deal.

As more things in your home (which, let’s remember, are still not “your” things, unless you have root access, which you won’t) join your Wi-Fi network, the more cluttered any computer or phone dashboard gets, and the more important a single simple interface becomes. The Echo has every chance of becoming the de facto control center for hundreds of millions of homes. No wonder Google (and maybe Apple) are racing to introduce competitors.

Tomorrow’s Surveillance Today

But while homes get all the press, the IoT will largely consist of industrial surveillance. I don’t necessarily mean that in the scary political way, although there’s plenty of that: Global Sensing, for instance, has built neural networks that live on the edges of the network rather than in data centers, and can handle facial and posture recognition at a rate of 480 parsed images/second on a Raspberry Pi. Their admirable intent is to, for instance, identify people stricken by illness, or would-be suicide jumpers, in Paris Metro stations; but they still sound eerily like tools that could easily serve a police state.

Relatedly, of course, the IoT has every chance of becoming a security nightmare — though everyone at the conference was at least talking about security, which offers some hope of salvation.

Mostly, though, the IoT is about collecting industrial data on an industrial scale. Tracking vibrations in buildings to measure their stability. Tracking smells via adsorption, courtesy of the very cool Aryballe Technologies’ tech. Tracking noises, lights, moisture, toxins. And then making this data available despite the physical restrictions of battery power and radio networks.

The War For The WAN

Speaking of networks: did you know there’s a three-way war on to become the Wide Area Network of the IoT? Choose your allegiance: Sigfox, LoRa, or NB-IOT! I was especially impressed by Sigfox. NB-IOT is a standard that existing wireless carriers claim they’ll implement any year now. LoRa is a loose “open” alliance, in the sense that it has a single manufacturer (Semtech).

Sigfox, though, is a really interesting company. Their ambitious objective — a level of ambition which is all too rare in France — is to become a bona fide global utility. They want to create a vast radio network that anyone anywhere can use for IoT data, for the low price of one euro per device per year.

Sigfox’s bandwidth is extremely limited — 12 bytes per message; an example of 140 messages/day was cited — but that suffices for most sensors, and it’s extremely low power. We were shown a small GPS tracker which can last for an entire month, and a moisture sensor with an expected battery lifetime of two years.
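
Twelve bytes is tight, but it is enough for most sensor readings. As a rough sketch (the field layout here is my own invention, not Sigfox’s actual wire format), here is how a GPS fix and a battery reading might be squeezed into a single 12-byte uplink:

```python
import struct

def pack_reading(lat, lon, battery_mv):
    """Pack a GPS fix and battery level into a 12-byte payload.

    Hypothetical layout: two 4-byte signed ints for lat/lon in
    microdegrees, a 2-byte battery field in millivolts, 2 spare bytes.
    """
    payload = struct.pack(
        ">iiH2x",                       # big-endian: int32, int32, uint16, 2 pad bytes
        round(lat * 1_000_000),         # latitude in microdegrees
        round(lon * 1_000_000),         # longitude in microdegrees
        battery_mv,
    )
    assert len(payload) == 12           # Sigfox's hard per-message limit
    return payload

def unpack_reading(payload):
    """Reverse the packing on the server side."""
    lat_u, lon_u, batt = struct.unpack(">iiH2x", payload)
    return lat_u / 1_000_000, lon_u / 1_000_000, batt
```

At microdegree resolution (about 11 cm at the equator), a full position fix plus housekeeping data fits with room to spare, which is why such a narrow pipe suffices for trackers and moisture sensors.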

Like LoRa, they use unlicensed ISM bandwidth, which makes rollouts much easier. They’re only now expanding into the USA, but they expect to cover the 100 largest US cities by the end of this year.

The best thing about Sigfox is its simplicity. Take a device, pay the euro, plant the device anywhere within range, and Sigfox will handle capturing its sensor data and forwarding it to your cloud servers. (And/or sending messages back; it’s a two-way service.) Like Amazon, it doesn’t just want to be a company, it wants to become a utility.

And, of course, with the advent of wide-area low-power networks like this, the new limiting factor for the Internet of Things becomes the sensors, not the networks. Interesting times indeed.

Full disclosure: this trip to France to visit the French tech scene was paid for by Business France, a tentacle of the French government. Studies show that this inevitably subconsciously biases me in their favor. (Although, interestingly, the bias effect seems to be substantially larger for small gifts than large ones.)

Source: TechCrunch

42: the answer to life, the universe, and education

“We believe that IT has nothing to do with math and physics … it is more artistic than scientific,” says Nicolas Sadirac, as he cheerfully slaughters whole herds of sacred cows. “Knowledge is not useful any more, because IT advances in revolutionary ways, not iterative ones … We ask our students not to learn, just to solve the problem.” Oh, yes, and: “There is no teacher.”

42, the coding school with no teachers, the quasi-university whose name comes from The Hitchhiker’s Guide to the Galaxy, the pedagogical folly and/or revolution funded to the tune of some $250 million from the deep pockets of French multibillionaire Xavier Niel, has two campuses. One will open in Fremont, California later this year.

Yesterday I visited the other, in the outskirts of Paris¹. I walked in thinking it a folly, and walked out thinking it might just be a revolution.

Some basic facts: 42 accepts 1000 students between 18 and 30 every year. Tuition is free. Student loans pay for living expenses. The program lasts roughly three years, but some students finish in 18 months; some in five years; some take jobs and then return. 40% of its students are previous high-school dropouts. Only 10% are women, but that grim statistic is still twice as good as traditional French IT schools, and they’re trying to improve it further. The French school has been running for three years now.

The selection process is Darwinian. Two years ago, 40,000 people applied, and 20,000 completed the online test; last year 80,000 applied, and … 20,000 completed the online test. Over the summer, before the school year begins, the best 3,000 of those 20,000 are selected to come spend four weeks in the school full-time.

There they work on projects which sometimes double as personality tests–allotted to them by 42’s custom software, not by anything so quaint and obsolete as a teacher–in informal, self-organized groups. Only 1,000 of those 3,000 are accepted into the school. It all sounds a little Hunger Games.

Once accepted, students choose their own path in a heavily gamified, entirely software-driven pedagogical environment that begins with a blank black screen; from there, you essentially figure out your own way through 42’s study software. (It reminded me a little of the Illustrated Primer from Neal Stephenson’s Diamond Age.) Everyone starts with the same basic core curriculum, but then chooses fields and individual projects which interest them: robotics, game design, AR/VR, IT security, collaborating with artists who spend a month working in the school, and so on.

The projects are constructed in hopes of fostering the ability to think flexibly and learn on the fly, rather than trying to predict what knowledge will be valuable. As students succeed they “level up” — the gamification is paramount — until they reach Level 21 and “graduate.”

This gamification is apparently extremely effective for (some) extremely bright students who failed at, and/or rejected, traditionally structured educational environments; hence the 40% of students here who are former dropouts. 30% have zero previous programming experience. “Information technology is about connecting with other people,” Sadirac says. “Innovation comes from diversity.”

Again, the school has no teachers at all, none whatsoever, and only about 30 staff, divided into three “teams”: sysadmins, devs, and course designers, basically. Additionally, students elected by their peers make up 30-40% of each of those teams.

The cost to Niel was about €20M to set up the school, plus €7M per year in running costs; the Fremont school will cost about $40M to set up and an estimated $8M/year to run (which seems curiously low). They currently have no other business model. Sadirac shrugs: “Xavier told us: I will pay for 10 years, and you don’t think about what happens after for 7 years.” He’ll worry about that in 2020.

(All told, the costs to Niel for both schools come to about $250M, a little less than, say, Tony Hsieh’s Downtown Project in Las Vegas. Don’t worry, Niel can afford it; he’s worth an estimated $10 billion.)

OK, so, so far so cool / revolutionary / outrageous, as schools go. But what do students actually learn? The demand for them as interns is intense — French companies offered 11,000 proposals for 750 internships recently — but so is demand across the industry. Is 42 churning out blinkered coders who only know how to do a few things, à la so many of the bad “coding bootcamps” out there?

Ahahahahaha. No.

Seriously. As I said, everyone starts with the core curriculum. The first project consists of writing C code from scratch — no wait, it gets better — using your own handwritten set of C library functions, rather than being allowed to lean on the crutch of stdlib. (Apparently this immediately levels the playing field between neophytes and people with “programming” experience that only includes calling Node or Rails or PHP library functions.) The system doesn’t even allow you to start using PHP until you attain Level 5.

Obviously 42 is not for everyone, which they make very clear. But it is a breathtakingly great alternative for people who do not thrive in the traditional educational system. (A lot of the post-visit chatter centered on the lack of professors — “I learned so much from my best professors!” As someone who acquired his engineering degree mostly just by reading textbooks, while entirely eschewing professors’ office hours and only sporadically attending classes, I had trouble sympathizing.) It’s young, yet; the devils always lurk in the details; and it’s too early to judge by results. But I loved the approach.

The US school’s web site features a video which includes Evan Spiegel, Jack Dorsey, and Matt Cohler, among many others, singing 42’s praises. You can add me to their number. I was really impressed.

¹ Full disclosure: I should note that this trip to France to visit the French tech scene and this week’s Connected Conference was paid for by Business France, a tentacle of the French government. Studies show that this inevitably subconsciously biases me in their favor (although, interestingly, the bias effect seems to be substantially larger for small gifts than large ones). But I don’t think that will have (much) affected the gap between my expectations of 42 and what I saw.

Source: TechCrunch

All the cool kids are doing Ethereum now

In the beginning the Prophet Satoshi brought us Bitcoin. And the cryptogeeks and libertarians looked upon it, and said lo, we smile upon this, for it is good, and decentralized, and solves the Byzantine Generals Problem. For a time all was well. But then came wailing and gnashing of teeth and wearing of sackcloth. And then came the Prophet Vitalik, bearing Ethereum; and lo, it was even better.


What is Ethereum? It’s a combination of a cryptocurrency, like Bitcoin, and a vast decentralized computer. Let me explain. As an above-average TechCrunch reader, you already know Bitcoin is a currency whose transactions are secured by the immense computing power of its distributed network of “miners,” rather than any central entity. But you may not appreciate that every Bitcoin transaction is actually a program written in the Bitcoin scripting language — aka a “smart contract.”

Bitcoin’s contractual language is quite limited, by design. But it allows for transactions that can be delayed until a particular time; or transactions that occur only if, say, 3 of 5 signatories agree to them; or crowdfunding campaigns that only transfer money if a particular total is attained; and many other possibilities. Importantly, once incorporated into the Bitcoin blockchain, these contracts require no trust and no human intervention. Bitcoin is programmable money … with a highly restrictive programming language.
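
To make the “3 of 5 signatories” idea concrete, here is a toy Python sketch of the threshold logic (real Bitcoin uses ECDSA signatures checked by OP_CHECKMULTISIG; the hash-based “signatures” below are a stand-in of my own so the counting logic stays visible):

```python
from hashlib import sha256

def toy_signature(signer: str, message: bytes) -> str:
    # Stand-in for a real cryptographic signature: hash of (signer, message).
    return sha256(signer.encode() + message).hexdigest()

def verify_multisig(message: bytes, signatures: dict,
                    authorized: set, threshold: int = 3) -> bool:
    """A spend is valid only if at least `threshold` of the
    `authorized` parties produced a valid signature over `message`."""
    valid = sum(
        1
        for signer, sig in signatures.items()
        if signer in authorized and sig == toy_signature(signer, message)
    )
    return valid >= threshold
```

Once such a rule is baked into a transaction on the blockchain, no bank, escrow agent, or human arbiter is needed to enforce it; the network itself refuses any spend that fails the check.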

Ethereum removes those restrictions entirely. The Ethereum scripting language is Turing-complete, meaning it can replicate any program written in any traditional programming language. However, to prevent ill-behaved contracts with infinite loops from running forever, every Ethereum transaction computation must be paid for. Just as Bitcoin miners collect small amounts of bitcoin, known as “fees,” in exchange for mining transactions onto the Bitcoin blockchain, Ethereum miners collect “ether,” the Ethereum currency, for running Ethereum contracts.
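
The metering idea can be sketched in a few lines of Python (a toy model of my own, not Ethereum’s actual VM): each execution step costs one unit of gas, so even an infinite loop halts once the sender’s fee is spent.

```python
class OutOfGas(Exception):
    pass

def run_metered(contract, gas_limit):
    """Run a 'contract' (here, a Python generator that yields once per
    step), charging one unit of gas per step. Execution halts with
    OutOfGas when the prepaid fee is exhausted."""
    gas = gas_limit
    for _ in contract:
        gas -= 1
        if gas <= 0:
            raise OutOfGas("fee exhausted; execution halted")
    return gas  # leftover gas (refunded to the sender in real Ethereum)

def well_behaved():
    for _ in range(10):
        yield  # ten cheap steps, then done

def infinite_loop():
    while True:
        yield  # would spin forever without metering
```

This is why Turing-completeness is safe to offer: a malicious or buggy contract can waste its own money, but it cannot hold the network’s miners hostage.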

You may well be thinking: “Oh come on. Bitcoin was more than abstruse and geeky enough. Now this new made-up-magical-money thing is even more complicated? Why should I care?”

You should care because decentralized cryptocurrencies like Bitcoin and Ethereum are — or at least could be — essentially an Internet for money, securities, and other contractual transactions. Like the Internet, they are permissionless networks that anyone can join and use. Ethereum optimists might analogize Bitcoin to the FTP of this transactional Internet, with Ethereum as its World Wide Web.

I’ve waxed lyrical about why I think Bitcoin matters. I’m a little less enthusiastic about Ethereum … so far. To be clear: as I’ve written before, Ethereum is really cool, truly innovative, and potentially revolutionary. However, it is now — probably — at the peak of its initial hype cycle.

Consider: heavily funded Bitcoin startup Coinbase will soon support Ethereum trading on its rebranded cryptocurrency exchange. Microsoft offers “Ethereum Blockchain As A Service” on Azure. Ether has risen in value more than tenfold over the last year, to a market cap which now exceeds $1 billion. And while Bitcoin’s hashrate, a measure of the computing power devoted to mining, still vastly exceeds Ethereum’s, look at the hockey-stick nature of that latter chart.

Most of all, though, consider the DAO, and the $163 million — $163 million! — it has raised. Sorry: I mean “raised.”

What is the DAO? It stands for “Decentralized Autonomous Organization.” Ethereum offers a tutorial explaining how to create your own. The DAO, however, as Seth Bannon explained on TC recently, is a particular DAO which:

as of the time of writing, controls more than $100 million in assets, and yet it exists entirely on the Ethereum blockchain.

In exchange for supporting The DAO financially (in the form of Ether), backers get DAO tokens, which they can then use to vote on the direction of the organization. They can use their tokens to vote on big governance issues (akin to traditional shareholders) but also on minute details of how The DAO spends its resources. In this way, token holders have total control over The DAO’s assets and its actions.

People with projects they’d like to build for The DAO can submit ideas in the form of a proposal written in plain English accompanied by smart contract code. The code automatically executes payments so long as certain agreed-upon conditions are met. Because this is all built on top of Ethereum, which allows for robust smart contracts, this can all be done autonomously.

Or as Peter Vessenes put it:

It’s a cross between a crowdfunding site and a venture capital / private equity partnership. It’s controlled by a set of voting rules encapsulated and enforced on the Ethereum blockchain as a smart contract. People that trust the code, rules and plan are sending ether directly to fund the contract. […] If a certain percentage decide to fund a proposal, then it’s funded.

Think of it as a kind of corporation incorporated only on the Ethereum blockchain, whose laws consist entirely of those defined by its contract code. A corporation that appears to be a means of investing in the future … without having any concrete idea exactly what that future is yet. For many people, that kind of investment is a holy grail.

But if this sounds to you like a poor fit with existing legal and regulatory structures, and/or a disaster waiting to happen, well, you’re certainly not alone:

To quote Eris COO (and attorney) Preston Byrne:

the plain-English covenants made on funding proposals, the absence of legal certainty as to what THEDAO actually is and the nebulous and ever-shifting nature of THEDAO’s “membership,” will make it very difficult to properly assign ownership in these projects’ work product.

#THEDAO might look and feel like a company, but on cursory examination, too many gaps, too few formalities, not enough structure and legally incorrect methods reveal themselves as fatal to the exercise.


I sympathise with THEDAO’s intentions, in that I believe that the financial markets are currently rigged against the “little guy” and that there is no reason why the kinds of investment opportunities (and returns) available to the super-wealthy should not be available to small investors whose traditional means of accumulating wealth (savings) are all but useless given current, zero interest-rate monetary policy.

I also believe that blockchain tech will one day play a role in facilitating more democratic access to the capital markets. However, the current body of laws governing this sphere of conduct exists to ensure that people to whom investments are marketed can be absolutely certain about what they’re getting in exchange for their money.

In this respect THEDAO clearly falls very short of the mark.

It’s worth noting that the money the DAO has “raised” is essentially refundable. As Bitshares founder and DAO skeptic Dan Larimer puts it:

The DAO has tentatively raised $100 million dollars worth of ETH, but so far the investors have taken no real risk. Every single person who has purchased DAO tokens has the ability to reclaim their ETH so long as they never vote. The end result is a massive marketing campaign that totally misrepresents what has been invested and what hasn’t. Considering there is no real risk being taken beyond the risk of holding ETH and that there is the potential for a large gain it is no wonder so many people have participated.

So let’s all try to damp down the hype just a bit. Right now all we have is headlines, promises, and a lot of “raised” money which has not actually been committed. Let’s wait for the results, if any — legal and otherwise — to roll in before declaring the DAO revolution underway. Because, I mean, I like hype too, but this is getting more than a little ridiculous.

That said, the DAO does serve to illustrate that these are fecund, exciting times for Ethereum. I’m not worried about the hype; that gets in everywhere. What most concerns me about the Ethereum project is security.

Ethereum is planning a transition from proof-of-work security (mining) to proof-of-stake security. There are very good reasons to do this, but proof-of-work, for all its flaws and excesses, is simple and thoroughly tested. Ethereum’s “Casper” proof-of-stake mechanism is fascinating; but if it has a serious undiscovered flaw, the entire network is at risk.

Similarly, one reason Bitcoin’s scripting language is limited is to help prevent hacking and denial-of-service attacks on the Bitcoin network and its miners. There’s no denying that Ethereum offers a vastly larger attack surface than Bitcoin does.

Worse yet, this applies not just to the network itself, but to individual Ethereum contracts. As Vessenes puts it: “Ethereum Contracts Are Going To Be Candy For Hackers.” To quote the ensuing, and surprisingly good, Hacker News discussion: “Running a machine on a blockchain (Ethereum) is much more complex and error prone then recording transactions on a blockchain (bitcoin.)”

I hope this doesn’t sound too pessimistic. I am genuinely excited about Ethereum in the medium to long term, and you should be too. But I also think we’re now at the peak of its first hype cycle, and important lessons need to be learned, hopefully the easy way, before it begins to achieve its revolutionary potential. There is a reason that “may you live in interesting times” is deemed a curse.

Source: TechCrunch

Power laws rule everything around me

We live in a time of great polarization. Politically, in nations across the world, both the far left and the far right grow more numerous, and draw further away from the mainstream establishment, every year. Economically, the rich continue to separate themselves from the poor, as the very rich do from the rich. And in tech, of course, we increasingly live in a “winner-take-most” world.

Last year, Quartz reports, 48% of Americans self-identified as “lower class,” up from 35% in 2008; meanwhile, according to the New York Times, “the top 20 percent of the income distribution is also steadily separating itself — by geography and by education as well as by income.”

Winner takes most. Economic gains go disproportionately to the now-infamous 1%; what’s left over goes disproportionately to the top 20%. It’s like we’re all collectively moving from Mediocristan to Extremistan; from a more (so-called) “normal” distribution of wealth to something more like a power law:
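
The difference is easy to see in a quick simulation (my own illustration, with arbitrary parameters): compare the share of total wealth held by the top 1% when wealth clusters around a mean versus when it follows a heavy-tailed power law.

```python
import random

def top_share(sample, top_frac=0.01):
    """Fraction of total 'wealth' held by the richest top_frac of a sample."""
    s = sorted(sample, reverse=True)
    k = max(1, int(len(s) * top_frac))
    return sum(s[:k]) / sum(s)

random.seed(42)
N = 100_000

# Mediocristan: wealth clustered around a mean (a clipped normal distribution)
mediocristan = [max(0.0, random.gauss(50_000, 15_000)) for _ in range(N)]

# Extremistan: Pareto-distributed wealth (shape alpha = 1.5, a heavy tail)
extremistan = [random.paretovariate(1.5) for _ in range(N)]
```

Under the clustered distribution the top 1% holds only a couple of percent of the total; under the Pareto distribution the same 1% holds a dramatically larger slice, which is exactly the winner-take-most pattern described above.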

(Image by Hay Kranen / PD)

Meanwhile, across the political spectrum, people are spending so much time in filter bubbles mediated by Facebook and Google that Politico warns “Google Could Rig the 2016 Election” while Trevor Timm of the Freedom of the Press Foundation asks: “You may hate Donald Trump. But do you want Facebook to rig the election against him?”

As the always brilliant Ben Thompson observes:

on the part of Facebook people actually see — the News Feed, not Trending News — conservatives see conservative stories, and liberals see liberal ones […] liking opinions that tell us we’re right instead of engaging with viewpoints that make us question our assumptions.

He quotes Ezra Klein, in Vox, re a study of 10,000 American adults:

It’s tempting to imagine that rising political polarization is just a temporary blip and America will soon return to a calmer, friendlier political system. Don’t bet on it. Political polarization maps onto more than just politics. It’s changing where people live, what they watch, and who they see — and, in all cases, it’s changing those things in ways that lead to more political polarization

…And, of course, all online news follows a power law as well.

Polarization is even reshaping — and amplifying — the fringes of political belief, by boosting conspiracy theories, according to Fast Company. Facebook claims polarization isn’t their fault, and that’s basically true; it’s an emergent property of human nature, accidentally midwifed, not the outcome of anyone’s conscious malevolent decision.

One can’t help but wonder if this unexpected political redistribution is one reason why both the pundits and the pollsters have been so wrong of late. Look at FiveThirtyEight.com, the heroes of 2012. Of late they look like soothsayers reading entrails. I’m not just talking about the rise of Trump; last year their predictions of the UK’s general election were terribly wrong.

This is all driven, at least in part, by technology. The winner-take-most world of software — where better software eats the world faster than its rivals, unconstrained by geography, distribution, or hardware — accelerates economic polarization. Facebook’s filter bubbles accelerate political polarization.

The conventional wisdom is that this is a terrible thing. I’m not so sure. I’m opposed to the polarization of wealth, ever-increasing inequality, until and unless Extremistan generates sufficient wealth that even the poor benefit more than they would in Mediocristan — but I can at least imagine a world where that is so. (In particular, a world where we can afford to implement a universal basic income for everyone.)

As for political polarization — yes, it fragments communities and pits neighbors against one another, but it still seems a whole lot better than a media controlled by a narrow, blinkered establishment whose fundamental goal is to perpetuate itself and its self-serving views.

A polarized world is also one in which previously unthinkable viewpoints can be seriously proposed, vigorously argued, and accepted by the majority with astonishing speed. (Remember when legal marijuana and gay marriage seemed like distant, unachievable goals?) It’s a world prone to more conflict, but also more and faster change; high-risk, high-reward. That’s a tradeoff that I for one am tentatively, optimistically, willing to accept.

Source: TechCrunch

VR is terrible for traditional storytelling

“But among them was this poor Earthling, and his head was encased in a steel sphere which he could never take off. There was only one eyehole through which he could look, and welded to that eyehole were six feet of pipe. He was also strapped to a steel lattice which was bolted to a flatcar on rails, and there was no way he could turn his head.” — Kurt Vonnegut, SLAUGHTERHOUSE-FIVE

Last weekend I visited the San Francisco Film Festival’s “VR Day,” and spent some time binging on short pieces made for the Samsung Gear, Google Cardboard, and Oculus Rift; and amid all this virtual diversity, lo, the proverbial scales did fall from my eyes.

Yes, VR is amazing — I caught myself uttering “oh, wow” under my breath multiple times — but at the same time, don’t kid yourself, we are still in the “Steamboat Willie” / hand-cranked-cameras stage of the art. The technology is terrific but still profoundly restrictive, and as Lucas Matney observes, it raises a whole lot of unanswered questions.

How will we tell stories in VR? What will be the relationship between those stories and their observers? The more one “moves” in VR, the more compelling it is … but the greater the risk of motion sickness. (I felt faint stirrings from a mere drone’s-eye view, and my gut survived the Bitcoin Jet.)

More importantly, though, stationary-observer VR — call it “DomeVR,” since your point of view is essentially frozen in place within a dome — may be a richer, more immersive experience than a 2D screen, but when it comes to traditional narratives, it is vastly inferior to, say, movies.

Narrative storytelling is something I’ve thought, and know, a lot about; I’ve had a clutch of novels (traditionally) published, scripted a graphic novel for Vertigo Comics, had various screenplays bounce around Hollywood, have helped to shoot and edit TV episodes, etc. All those kinds of stories follow similar rules — rules which are blithely, rudely shattered by VR.

A movie viewer, or a book reader, is in the same position as the unfortunate Billy Pilgrim in the Vonnegut quote with which I opened this piece: trapped in a linear narrative, with every sensation restricted and controlled by someone else. That quote goes on:

The flatcar sometimes crept, sometimes went extremely fast, often stopped–went uphill, downhill, around curves, along straightaways. Whatever poor Billy saw through the pipe, he had no choice but to say to himself, “That’s life.”

That’s a movie for you, or a book; time and space appear to you only as and when the storyteller allows.

Not so VR. Even in stationary DomeVR, you can twist and turn and spin and look at a full 360 degrees of immersive environment. The narrative effect of this is that you are never quite in sync with the story being told; there is no clear demarcation between “story space” and elsewhere, as there is with a TV or movie or game screen. Your mind keeps telling you that everything is story space. But you can only focus on so much of it at a time; and it is all too easy, and tempting, to look away from what matters to the story, in favor of some curious detail, at exactly the wrong moment.

Put another way, in VR, the story does not come to you; you go to it.

There are various tricks one can use to get the VR viewer to go to it in the right way, and I expect those will soon become a new kind of visual grammar, in the same way that we’re all accustomed to cinematic visual transitions that would have seemed shocking in the age of the Lumière brothers. But even so — if you reduce a VR experience to stationary viewers restricting their vision to a controlled frame, all you’re doing is recreating the 2D screen experience in an especially clumsy, annoying, restrictive way. What’s the point of that?

VR is not for traditional narratives. VR is for whole new kinds of narratives.

It’s easy to say “Games! Games games games!” And VR games will be great, sure. But pure narrative, the raw human urge and need for stories, is what interests me more. If you stripped out the contests, puzzles, scoring, and first-person-centricity from games, if you de-gamified them, how often would their stories and characters still be compelling enough to stand on their own? Not often, he understated.

But I put it to you that such VR stories will exist, and what’s more, they will become wildly popular. Consider Sleep No More, the immersive (loose) adaptation of Macbeth, which occupies an entire large building in New York City, and whose action roves among many chambers in that building over the course of several hours. I know people who have attended it more than a dozen times, each time taking a new path, following a new storyline or a single character through the events.

Now imagine that but with something even more sprawling as its basis. Game of Thrones, say, or the Marvel Cinematic Universe. That, I think, is an indicator of what the great VR narrative art will become; not a story that you watch once, strapped to the storytelling equivalent of Vonnegut’s Tralfamadorian prison, but an experience you immerse yourself in multiple times, grasping new facets, finding and sharing Easter eggs, and seeing new angles every time.

We’re some distance away from that yet, and it’s pretty clear that video games will be the thin edge of the VR wedge. But I predict they will soon be followed by a whole new kind of immersive fiction — one that will make IMAX 3D movies look like black-and-white silent films. I, for one, can’t wait.

Source: TechCrunch

On the dark art of software estimation

“How long will it take?” demand managers, clients, and executives. “It takes as long as it takes,” retort irritated engineers. They counter: “Give us an estimate!” And the engineers gather their wits, call upon their experience, contemplate the entrails of farm animals, throw darts at a board adorned with client/manager/executive faces, and return–a random number. Or so it often seems.

It is well accepted that software estimates are frequently wrong, and all too often wildly wrong. There are many reasons for this. I am very fond of this analogy by Michael Wolfe on Quora:

Let’s take a hike on the coast from San Francisco to Los Angeles to visit our friends in Newport Beach. I’ll whip out my map and draw our route down the coast … The line is about 400 miles long; we can walk 4 miles per hour for 10 hours per day, so we’ll be there in 10 days. We call our friends and book dinner for next Sunday night, when we will roll in triumphantly at 6 p.m.

They can’t wait! We get up early the next day giddy with the excitement of fresh adventure … Wow, there are a million little twists and turns on this coast. A 40-mile day will barely get us past Half Moon Bay. This trip is at least 500, not 400 miles …

Writing software is rarely a matter of doing something you already know exactly how to do. More often, it involves figuring out how to use your available tools to do something new. If you already knew exactly how long that would take, it wouldn’t be new. Hence “it takes as long as it takes.” Non-developers often seem to think that we engineers just look at a proposed task and think “we shall implement A, B, and C, then do X, Y, and Z, and that should require N hours, plus or minus a couple!” Sometimes it is like that. But not often.

More typically, the thought process is more like: “I can see how I’d do it if I were rewriting that whole controller from scratch, but that would take days … is there an elegant hack where I can change the inputs to this function in such a way that I don’t have to rewrite its code? … what if I monkeypatch it at the class level? … wait, maybe there’s an API call that almost does what I want, then I can tweak the results — hang on, what if I outsource it via an asynchronous call to the external OS?”

In which case, the result is: “I can confidently estimate that this will require less than two hours of typing. However, working out what to type is going to take me/us anywhere from one hour to several days. Sorry.”

Another analogy, from my sideline in novel-writing: publishers always want a synopsis of your unwritten novel. This is insane, because, as an extremely successful author friend puts it, “writing a synopsis requires everything that actually writing the novel requires.” It’s like asking for a map of what is by definition terra incognita. So it often is with software estimates.

That said: if you’ve ventured into a lot of terra incognita in the past, and you’ve heard legends and tales of this new territory, and how similar it is to your past ventures, you can at least offer up an educated guess. (People seem to like the word “estimate” better than “educated guess.”) There are better and worse ways to estimate, and to structure estimation, and you can’t give up on the task just because software is hard.

What’s more, the form in which an estimate is requested is crucially important — and that’s often what managers, clients, and executives get so terribly wrong. More often than not, when a low-ball estimate balloons into a terrible reality, it’s their fault, not that of the engineers.

First, some managers/clients/executives (hereafter referred to as MCEs) still think that task performance can be measured in fungible engineer-hours. Not so. It was never so. We’ve known for decades that, for instance, “adding engineers to a late software product makes it later.”

Second, some MCEs tend to think that they can increase the accuracy of estimation by adding precision and granularity — by breaking down large projects into as many small tasks as possible, maybe even hundreds of them, and asking for individual estimates of each small task. This is a catastrophically terrible idea. Increasing precision does not increase estimate accuracy. In fact it does the exact opposite.

Let me explain. Software estimates are always wrong, because the tasks being estimated are always, to some extent, terra incognita, new and unknown. However, sometimes the errors are in your favor; an obscure API, a third-party library, or an elegant hack condenses what you expected to be a week’s worth of work into a single day or less. Your hope, when estimating a project, is not that all of your estimates will be 100% dead accurate. That never happens. Your hope is that your overestimates and underestimates mostly offset each other, so that your total estimate is roughly correct.

But the more precise and granular you get, the more likely you are to reduce your overestimates — which is disastrous for your overall accuracy. What’s more, the estimation of a hundred-plus features quickly leads to “estimation fatigue.” Estimation is important. It is the basis for your schedule for the next weeks or months. So, like all important things, you want to minimize, not maximize, its cognitive load.
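The arithmetic behind this claim can be sketched with a toy simulation. Everything in it is illustrative, not empirical: the 40-hour project, the noise model, and the assumption that tiny tasks almost never get generously padded are my stand-ins for the argument above.

```python
import random

random.seed(42)

TRUE_TOTAL = 40.0  # hours of actual work in the whole project

def mean_estimate_ratio(n_tasks, trials=2_000, clip_overestimates=False):
    """Split TRUE_TOTAL into n_tasks equal tasks and estimate each one
    with symmetric multiplicative noise. When clip_overestimates is True
    (modeling very granular task lists, where nobody pads a "tiny" task),
    the generous errors that would have offset the underestimates are
    clipped away, and the total estimate drifts low."""
    true_task = TRUE_TOTAL / n_tasks
    ratios = []
    for _ in range(trials):
        estimate = 0.0
        for _ in range(n_tasks):
            noise = random.gauss(0, 0.3)      # symmetric per-task error
            if clip_overestimates:
                noise = min(noise, 0.05)      # no padding on tiny tasks
            estimate += true_task * max(0.1, 1 + noise)
        ratios.append(estimate / TRUE_TOTAL)
    return sum(ratios) / trials

# Coarse chunks: over- and underestimates cancel; total is roughly right.
print(f"4 coarse chunks: {mean_estimate_ratio(4):.2f}x actual")
# Granular tasks with clipped overestimates: systematically low-balled.
print(f"100 tiny tasks:  {mean_estimate_ratio(100, clip_overestimates=True):.2f}x actual")
```

Under these (admittedly cartoonish) assumptions, the coarse estimate lands near 1.0x the actual total, while the hundred-task breakdown comes in meaningfully under it.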

Furthermore, a feature is not the work that goes into it, and not the unit that should be estimated. What engineers actually estimate is the size of the technical foundation building block that makes a feature possible, which is often, if not always, shared among other features at that scale. It is very difficult — and doesn’t really make sense — to try and work out exactly how much of the work going into such a building block will be apportioned to individual features.

This is all pretty abstract. Consider a concrete example. Suppose you’re building an app that logs in to a web service. Don’t have individual server-side estimates for “user can create account,” “account email address can be confirmed,” “user can log in,” “user can sign out,” and “user can reset password.” Have a single “user authentication” task, and estimate that.

In hours? …Maybe, maybe not. I’m fond of using “T-shirt sizes” instead: S (an hour or three), M (a day or two), and L (a week or so). Each project has its own pace, depending on tools, technical debt, the developers working on it, etc., so after a week or few you should be able to start figuring out how your hours map to your T-shirt sizes, rather than pretending to know that a priori.

T-shirt sizing is especially good because it comes with built-in warning signs. If you find yourself with all Ls and some XLs, it’s a very visible sign that you probably do need to deconstruct your project a little further; if you have mostly Ss, it means you’ve fallen into the granularity trap and need to abstract your tasks out a little. If you have a roughly even mix of S, M, and L, you’ve probably structured things so that you’ll have pretty good — well, least bad — estimates.
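Those warning signs are simple enough to mechanize; the thresholds in this sketch are my own illustration, not any standard, but they capture the shape of the rule of thumb above.

```python
from collections import Counter

def check_estimate_mix(sizes):
    """Given a list of T-shirt-size estimates ('S', 'M', 'L', 'XL'),
    flag task breakdowns that look like they are at the wrong
    granularity. Thresholds are illustrative, not canon."""
    counts = Counter(sizes)
    total = len(sizes)
    # Any XL, or a list dominated by Ls, means the chunks are too big.
    if counts["XL"] or counts["L"] / total > 0.5:
        return "too coarse: break the big tasks down further"
    # A list that is mostly Ss has fallen into the granularity trap.
    if counts["S"] / total > 0.6:
        return "too granular: group related tasks together"
    return "mix looks reasonable"

print(check_estimate_mix(["L", "XL", "L", "M"]))
print(check_estimate_mix(["S"] * 8 + ["M"]))
print(check_estimate_mix(["S", "M", "M", "L", "S", "M"]))
```

The point isn’t the specific cutoffs; it’s that a roughly even S/M/L mix is the state you’re steering toward.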

Which are, to belabor a point that needs belaboring, only estimates. The sad ultimate truth is that it takes as long as it takes, and sometimes the only way to find out is to actually do it. MCEs hate this, because they value predictability over almost all else; thus, they nearly always, on some level, treat estimates as commitments. This is the most disastrous mistake yet, because it incentivizes developers to lie to them by padding their estimates — which in turn inevitably slows down progress far more than a few erroneous task estimates might.

So: accept that estimates are always wrong, and that your hope/goal/aim is not for them to be correct, but for their errors to cancel out; estimate by feature group, not by feature; and never even appear to treat estimates as commitments. Do all these things, and while estimation will always remain a gauntlet, it will at least cease to be the lethal minefield it still is for so many.

Featured Image: NY Photographic/PicServer UNDER A CC BY-SA 3.0 LICENSE
Source: TechCrunch

The FBI is working hard to keep you unsafe

Did you know that the US government is sitting on its own Strategic Zero-Day Reserve? A “zero-day” is a software vulnerability unknown to the software’s maintainers, one that allows adversaries to bypass or reduce security restrictions; basically, it lets them hack systems that use that software. Zero-days are not restricted to shady criminal hackers. They are strategic weapons in the hands of nation-states, including America. This is morally complex.

To a certain extent this makes sense. Say what you like about the NSA, and I’ve said a lot of unflattering things, but stockpiling zero-days is at least arguably part of their job. The FBI, though — isn’t the primary job of the FBI to protect the American people?

Because make no mistake, every zero-day that exists, in anyone’s hands, makes everyone marginally less safe. Their undisclosed existence makes everybody who uses the hardware or software in question more vulnerable — and the number of such innocents is almost always vastly, vastly, vastly greater than the number of criminal suspects.

How did the FBI hack into the Tor network last February? They won’t say, but it seems extremely likely that they used a zero-day in the Tor Browser, which runs on the same fundamental codebase as Firefox … which is used by hundreds of millions of people who are less safe because that zero-day has not been reported and patched.

As it turns out, the FBI’s activity subsequent to their Tor hack has been ruled an illegal search by a federal judge, skewing the risk/reward ratio of hoarding their (presumed) zero-day (and keeping, again, hundreds of millions of Firefox users that much more unsafe) even further.

Consider the famous iPhone 5c found in San Bernardino that the FBI tried to compel Apple to unlock. The unlocking method will likely remain secret, because the FBI bought it from a third party without insisting on getting the rights to it. It was good of them to give us all an extremely blatant object example of how zero-days can be used by other parties as well.

To quote Bruce Schneier:

This is how vulnerability research is supposed to work. Vulnerabilities are found, fixed, then published. The entire security community is able to learn from the research, and — more important — everyone is more secure as a result of the work. The FBI is doing the exact opposite.

What’s more, the FBI spent more than a million dollars to get nothing out of that phone. One can’t help but wonder if that money could have been better spent elsewhere, rather than on hunting mythical “cyber pathogens.”

These aren’t recent developments. The FBI has been trying to hack their way around encryption for more than a decade. (Although they’ve only just gotten the OK to routinely use NSA data in the course of investigations. What could possibly go wrong?)

To the government’s credit, the decision to retain or report an exploit is not made in an ad hoc manner by whoever happens to have their mitts on it. (And there are plenty of precedents for reporting; the UK’s GCHQ, for instance, has an admirable history of reporting Firefox vulnerabilities.) There is an official procedure, known as the “Vulnerability Equities Process,” which is used to make that determination.

And of course that process is open, transparent, aboveboard, with active advocates for both sides, and in no way a rubber stamp, right? Judge for yourself, to the extent that you can from the redacted documents behind that link, and be unsurprised.

(One interesting bit in there: vulnerabilities in systems certified by the NSA are to be passed on to the NSA to deal with as they feel is appropriate, presumably in case the NSA introduced that vulnerability.)

Anybody with power, and zero-days are power, is naturally disinclined to give it up for some sort of abstract marginal benefit spread across millions of other people, even if that benefit is cumulatively massive. It’s hard to see how a star-chamber FISA-like review board can effectively advocate for stripping government agencies of that power — even if that would make the public safer. Expect more of the same; and expect criminals to use exploits that the US government could have closed long ago.

Source: TechCrunch

We should be worried about job atomization, not job automation

In the future, machines will do tedious, repetitive work for us, and do more of it than humans ever could, simultaneously increasing economic output and liberating humans everywhere from drudgery. We all know what that means: Disaster! Dystopia! Catastrophe! Everybody panic, the robots are stealing our jobs! We’re dooooooomed!

Does it not seem completely insane, when you take a step back, that we’re actually collectively upset about this prospect? And yet we are. “What should you study to stop robots stealing your job?” asks The Times Higher Education. “AI And Robots Are Coming For Your Job,” warns Entrepreneur. As if it would somehow be far better if this future did not come to pass.

Dwight Eisenhower once said: “If you can’t solve a problem, enlarge it.” I submit that the real problem we face is not that robots will produce more than people while freeing us from mind-numbing, back-breaking toil. I submit that the actual problem is that full-time jobs are assumed as the fundamental economic building blocks of our society, and that we lack the flexibility or imagination to consider, much less move towards, any alternative structure.

Don’t blame the robots. Our brave new economy is already winnowing jobs as we knew them, while the great tsunami of automation still gathers on the horizon. In 1995, 9.3% of the American work force had a so-called “alternative work arrangement” — temporary, gig, or contract work — as their main job. By 2005 that rose slightly to 10.1%. But by 2015 that had skyrocketed to a whopping 15.8%. Indeed, all “net employment growth in the U.S. economy since 2005 appears to have occurred in alternative work arrangements,” notes Fusion.

The Wall Street Journal concedes that “an expanding share of the workforce has come untethered from stable employment and its attendant benefits and job protections” but points out “this shift away from steady employment has taken place largely in the shadows … most of that growth has happened offline, not through apps such as TaskRabbit and Lyft.” So you can’t blame the servants-as-a-service apps for this…yet.

But the original study notes that the ‘“Online Gig Economy” has been growing very rapidly.’ Does anyone doubt that this, plus rising automation, will do anything other than accelerate the existing trend towards “alternative work arrangements”?

This is not job destruction, but job atomization — the replacement of long-term, full-time work, with benefits and a career path, by occasional, short-term contract gigs without benefits or any escalating career structure. For some people this is great! Including me; my employment history is best described as “checkered,” and I wouldn’t have it any other way. I think it’s important not to ignore that many people prefer “alternative work arrangements.”

But, generally speaking, most people want benefits, consistency, predictability, and predefined career paths. Not least because if you do not have any of these things, and you’re not lucky enough to be, say, a successful novelist or a software engineer, then society frowns on you, and your prospects are frequently bleak and deeply uncertain. You become part of the precariat:

This is not just a matter of having insecure employment, of being in jobs of limited duration and with minimal labour protection, although all this is widespread. It is being in a status that offers no sense of career, no sense of secure occupational identity and few, if any, entitlements to the state and enterprise benefits that several generations … had come to expect as their due.

Tech inadvertently contributes to job atomization by making it easier. Individual jobs can more easily be partitioned, subdivided, outsourced, and made fungible with the assistance of software and smartphones. Again, there’s nothing intrinsically wrong with this; it reduces wastage and makes work more efficient. Think of the horde of part-time Uber drivers who pour into the streets when surge pricing ratchets up to 2x or 3x; everybody wins, albeit at a cost. The problem is that the growing precariat is ill-served by an economy built around the assumption that every able-bodied adult should have a full-time job.

So what’s to be done? Well, a decent minimum wage will help people who do have atomized jobs, and discourage a race to the bottom. It will also incentivize automation, but if that destroys jobs en masse faster than it creates them, a minimum wage won’t make much difference.

In the long run, though, the solution is to ensure that a decent portion of the fruits of what should be a golden future — a world in which machines do ever more work for us — are shared with the precariat on an ongoing basis. That way its growing numbers have some semblance of security, hope for the future, and real opportunity for their families. Those should not be reserved only for writers of software, owners of robots, and inheritors of wealth. A universal basic income may seem like a drastic change — but I submit that when technology ushers in what should be a giddily wonderful future, and we react as if it’s a terrifying horror to be feared, a drastic change is exactly what is called for.

Featured Image: OLCF/Vimeo UNDER A CC BY 3.0 LICENSE
Source: TechCrunch

Dear Facebook, why are Facebook Comments so unremittingly terrible?

For long months now, Facebook Comments have been riddled with some of the most transparent, eye-roll-inducing “I make a good salary working from home” spam you’ve ever seen. Every mail service can filter it out; but Facebook? Home to cutting-edge AI research, massively scalable services, some of the smartest software people in the world? Nope, spam appears to be beyond their capabilities.

I kid, I kid. Obviously Facebook could clean up comment spam if they really wanted to. (And, in fairness, Facebook Comments have always been terrible.) Maybe they even will, on some executive whim. But, really, who can blame them for not bothering? Facebook has become a business which focuses on things that affect billions of users, and/or bring in billions in revenue. Comments don’t come even close to moving the needle on that scale.

But Facebook Comments are an excellent object example of a curious tech paradox: the bigger the business, the less you can rely on its new initiatives.

Consider the curious case of the Revolv, a home-automation controller which was bought by Nest, which in turn is part of mighty Alphabet … and is now deliberately being bricked by its makers. This is numerous things: an instance of Nest apparently being a slow-motion flaming trainwreck; a great reminder that the “Internet of Things” is actually the “Internet of Someone Else’s Things”: if you don’t have root on a device in your home, then it is not yours. It is also a reminder that the bigger the business, the more you should fear it if it acquires something you love.

Google, of course, was also home to much-beloved Google Reader, which it dispatched a few years ago as casually as if it were a mook in a martial-arts film:

On the one hand this was simply a bad strategic mistake on Google’s part, at a time when they thought that Google+ was the future. (Cue the flood of people trying to claim that Google+ was actually a success, in some bizarre war-is-peace kind of way. It was a debacle, folks.) But on the other hand Gmail actually matters to Google; it would far rather make its billion users feel a little better about the service than cater to Reader’s tens of millions of users.

Which in turn explains why the great Maciej Ceglowski aka Pinboard is not exactly shuddering in his proverbial boots now that Google has launched a new kind of bookmarking service itself:

It may seem that one would be better off relying on a service brought to you by one of the Stacks than something from a tiny, scrappy startup. But this is not so unless this service is crucial to one of their major business lines. (Unless it’s PayPal, in which case you shouldn’t rely on it at all, he said bitterly, fuelled by a painful recent dayjob project.)

Otherwise, any BigCo service can and will be victimized by the vagaries of internal politics. Bit rot affects us all; being left to languish is just a slow death sentence in its own right. You’re arguably better off relying on a failed startup that dies a quick death than being dragged down by an abandoned service that slowly circles the toilet. At least that way you’re granted the gift of a known outcome. For further evidence, I invite you: just scroll down.

Featured Image: Jon Evans/Flickr
Source: TechCrunch