Stand up against the stand-up

It’s custom; it’s tradition; it’s dogma; it’s a cargo cult. It is well-intentioned, but all too often, ill-advised. It’s done because it is the thing one does. It wastes your time, shackles your mind, kills your productivity. It is the ritual that so many software developers suffer silently through, every day. It is the daily stand-up. I say to you: no more!

Before I lay waste to my target, let me first say: it’s not always awful. If your team is all in one physical location; if they all start their work at exactly the same time; and if the stand-up takes fewer than ten minutes, with any issues raised immediately assigned to be dealt with later by ad-hoc groups for whom they are relevant — in that case a daily stand-up is, indeed, an excellent way to maintain momentum, identify problems as soon as possible, and foster team communication.

Which is why the daily stand-up became A Thing, back in the days of yesteryear when nobody on your team worked remotely, much less in entirely different time zones; when nobody arrived an hour earlier or an hour later than anybody else; when “agile development” was still about actually being agile. Those were the years. But this is today. To quote myself:

In many places, ‘agile development’ has become codified into a fixed, carefully specified process of “standups” and “scrums” and “sprints,” which is darkly ironic given that the key principles of the Agile Manifesto include “value individuals and interactions over processes and tools” and “value responding to change over following a plan.” So what did companies create and fervently follow? Agile processes, tools, and plans. Sigh. If you are a Certified Scrum Master, you are doing it wrong.

Similarly, if you have a daily stand-up, it should not be because you take it as received faith that this is a good idea; it should be because you have carefully interrogated its actual purpose and outcome, and concluded that it is worthwhile despite its significant costs.

I’ll get back to those costs. First let’s talk about a stand-up’s intended purpose: to maintain momentum, identify problems, and foster team communication. Back in the bad old days the only alternative to voice communication was email. Today, though, we have Slack and its ilk. If your team is in constant asynchronous communication via such tools, why do you need a synchronous stand-up? Conversely, if your team isn’t in constant asynchronous communication, do you really think your problems are so small that a mere daily synchronous stand-up will help?

Now let’s talk about the costs. Yes, there are costs. There are very significant costs, ones which are often essentially invisible to managers. For further explication let me refer you to Paul Graham’s excellent essay “Maker’s Schedule, Manager’s Schedule“:

there’s another way of using time that’s common among people who make things, like programmers and writers. They generally prefer to use time in units of half a day at least. You can’t write or program well in units of an hour. That’s barely enough time to get started. When you’re operating on the maker’s schedule, meetings are a disaster. A single meeting can blow a whole afternoon…

Worse yet: for a lot of people, their sharpest, most productive time is first thing in the morning. So standups face a catch-22: they either occupy the very best and most productive time of many developers’ days, or else they detonate in the middle of the day, generally requiring at least twenty minutes of context switching before and after.

Figure your stand-up lasts twenty minutes. Then the developer time it occupies, including context switching, is one hour per developer. Suppose your team has eight developers: then a single twenty-minute daily stand-up occupies eight hours a day of developer-time … in other words, it is equivalent to an entire person sitting around doing nothing, every single day.
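The arithmetic above can be sketched as a back-of-the-envelope calculation (the twenty-minute figures are this essay's assumptions, not measurements):

```python
def standup_cost_hours(meeting_minutes, context_switch_minutes, team_size):
    """Developer-hours a daily stand-up consumes, counting the context
    switch before *and* after the meeting for every attendee."""
    minutes_per_dev = meeting_minutes + 2 * context_switch_minutes
    return minutes_per_dev * team_size / 60

# A 20-minute stand-up plus ~20 minutes of context switching on each side,
# for a team of eight, costs a full 8-hour developer-day. Every day.
print(standup_cost_hours(20, 20, 8))  # → 8.0
```

Vary the parameters and the conclusion is robust: even a "quick" ten-minute meeting with modest switching costs consumes hours of maker time daily.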

Now, granted, some people may find greater cohesion, greater sense of purpose, a greater sense of belonging, from a twenty-minute meeting once a day. Sometimes you may have a specific need for the whole team to sync up (before meetings imposed by clients or executives, for instance.) Sometimes, again, you actually do have everyone in the same physical location starting at the same time every day, the circumstances for which the stand-up was originally proposed.

But for certain projects, and certain teams — I would suggest most of them — the stand-up can and should be replaced with the check-in: as soon as every team member comes online in their morning (or evening, if they’re like some night owls or faraway contractors with whom I’ve worked…) they contribute to a dedicated “check-in” Slack channel or equivalent, reporting what they did yesterday, what they’re doing today, what problems and unknowns they face.
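As a minimal sketch of what such a check-in might look like in practice (the message format here is an illustrative assumption, not a prescribed standard):

```python
def format_checkin(name, yesterday, today, blockers=None):
    """Compose the three-part daily check-in: done, doing, and blocked."""
    return "\n".join([
        f"*Check-in: {name}*",
        f"Yesterday: {yesterday}",
        f"Today: {today}",
        f"Blockers: {blockers or 'none'}",
    ])

# Posting it to a dedicated Slack channel is a single HTTP call to an
# incoming-webhook URL (placeholder shown; requires the `requests` package):
#   requests.post(WEBHOOK_URL, json={"text": format_checkin(...)})
print(format_checkin("Ada", "shipped the search fix", "profiling the API"))
```

The point is not the tooling but the shape: the same three questions as a stand-up, answered asynchronously whenever each person comes online.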

As others come online, they communicate to solve these issues, via text or voice or shared screen, coordinated of course by your friendly neighborhood project manager. But not all at the same time. In this glory era of asynchronous communication, synchronicity is highly overrated.

Source: TechCrunch

VR needs a hit

I believe virtual reality is going to be huge. Huge. …Eventually. But when? Are we talking years, or decades? I visited yet another VR festival this week, and I couldn’t shake the feeling that we’re still in the very early days of the medium. It’s amazing! It’s thrilling! And it’s still kind of the Stone Age. When do we get to discover bronze?

People talk about a “Cambrian explosion” of VR content over the next year or so. I sure hope so. Because there is a very plausible future in which a hard core of VR enthusiasts build systems and worlds that whet the appetite, tickle the interest, and fan the flame of true belief in a tiny minority — but spend a decade failing to break through to the larger population.

Most VR experiences so far are more curiosities than compelling experiences. One exception, for me, was Leap Motion’s no-controller motion-tracking experience, which simply tracks your hands in real time such that you almost seamlessly, almost perfectly, wield and use them in a virtual world. It doesn’t quite just work, but it’s close enough that the weightlessness of virtual objects is actually instinctively disconcerting.

Of course, on the other hand…

But write those caveats off as glitches fixed in the next generation, or the next. The question remains: what might the first real breakout, crossover VR hit be? When can we expect it? And how will it get to us?

The obvious answer is “games.” Unless you count Google’s Cardboard headsets — which on the one hand are popular, but on the other hand seem to tend to languish unused after brief experimentation — gamers are VR’s first major (consumer) target market. A compelling, you’ve-never-experienced-anything-like-this-before VR-only game / world would go a long way towards drawing in the proverbial masses.

VR gaming rigs are expensive, yes, but I predict capitalism will make its usual lemonade from that lemon, and we’ll soon see a rebirth of the arcade culture of the 1980s … except this time all the arcade gamers will be wearing headsets and gesticulating wildly at nothing in little pay-by-the-hour booths.

I suppose completeness compels me to mention another possibility for the VR killer app:

…and I suppose time will tell; but again, I’m talking about a crossover hit, one that stakes out territory in our collective cultural commons. (It helps that VR fiction like Ready Player One — a copy of which is issued to every Oculus employee — has already built a bit of a bridgehead there.) Until we see one of those, VR will be like nuclear fusion: it is the future, sure, but it has been the future for decades, and it often feels like it will always be the future, never the present.

That future will not happen in a culturally meaningful way until better hardware, better software, and more creativity come together to create something new. Not an adaptation; not just a new dimension; but a game or experience that can only work in VR.

One so compelling that it drives ordinary people to use and buy VR hardware, rather than merely preaching to the choir of early adopters searching for content to justify the hardware they bought out of habit. One that spreads by word-of-mouth, until middle-aged couples who wouldn’t normally be caught dead in VR arcades find themselves lining up to try this hot new thing. One that inspires a censorious moral panic — you really know you’ve made it when you trigger a moral panic.

It’s a mug’s game to try to predict exactly when that will happen, or what it will be. But I think it’s a fairly sound prediction that (consumer) VR will languish as a minor curiosity appealing largely to a few die-hards — think Ingress compared to Pokémon Go — until it does.

Source: TechCrunch

How much does it matter if your software quality sucks?

I ran across a fascinating piece by Leo Polovets of Susa Ventures this week, provocatively titled: “Why Startup Technical Diligence Is A Waste Of Time.” You should go read it, but its central thesis is simple: “in today’s world of SaaS tools, APIs, and cloud infrastructure … technical resources are rarely the cause of success or the reason for failure.” Is he right? Yes! But he is also wrong.

The most beautiful, elegant, powerful software in the world cannot save you if you fail to achieve “product-market fit.” (If you don’t like industry jargon, let’s use Paul Graham’s phrasing: “building something people want.”) Your software cannot save you if you have no viable business model (aka “building something people want so much that someone will pay for it.”) And it will not save you if nobody is ever offered your product, or ever hears of it (aka sales/marketing failures.)

But if and when you do get past those high hurdles — that’s where your software quality can make or break you. I see a lot of this in my day job: I’m an engineer (slash manager, slash principal, slash whatever) at HappyFunCorp, a software consultancy — and startups often come to us with the Startup Software Quality Problem.

The Startup Software Quality Problem is this: a startup has successfully built, and maybe even launched, a Minimum Viable Product, with software courtesy of their sole technical co-founder and/or a cheap dev shop somewhere. Now, having launched it and seen how real people actually use it, they want to quickly iterate on its strengths and fix its weaknesses, or perhaps pivot to focus on a new facet of what they’ve built — only to find that they can’t, because they’re stuck in quicksand.

That’s what poorly architected, dubiously written, high-technical-debt software is like. Quicksand. It’s buggy, too, usually, in an intermittent and hard-to-reproduce way, frustrating users, developers, and co-founders alike.

Bug fixes that should take hours take days; changes and feature requests which should take a few days occupy whole weeks; and you get into a vicious spiral where this slowdown makes everyone so desperate to iterate faster that you can no longer take any time to pay down your technical debt, so instead you just keep exacerbating it. Needless to say, this vicious spiral can and often does become a death spiral.

It’s true that Minimum Viable Product software is not, and should not be, built to be perfectly elegant and scalable. But if it’s quicksand software, and you just missed building something people want, then now it’s harder, slower, and more expensive to re-target, while competitors and newcomers with higher-quality software can iterate with speed and abandon.

Even if you have built something that people really want, the time spent hiring new engineers and rewriting your entire codebase is time that your higher-quality competitors can use to overtake you. Quicksand software is often so difficult to repurpose that a complete from-scratch rewrite is a better option than trying to reuse any of it at all. Needless to say, founders who have spent hundreds of hours and tens of thousands of dollars constructing this quicksand never want to hear this.

Software quality doesn’t dictate your success, that’s true. But it does dictate your speed. In the absence of any competition, this doesn’t matter, but if you think you live in a field without any competition, a very painful awakening awaits. The slower you can move and iterate, the faster your startup can and will die.

It’s true that, as Polovets points out, building a product that people want is the most important thing. (Which in turn can be partitioned into “your idea” and “your timing.”) And sales is probably second. But while your software may not be your startup’s heart or lungs, it’s still a vital organ that can and will kill you. Worse yet, it will do so slowly, even subtly, after long illness, possibly without you ever even recognizing that it was the proximate cause. Don’t handwave it off as something to worry about later. I assure you that you will regret that bitterly.

Featured Image: Janet McKnight/Flickr UNDER A CC BY 2.0 LICENSE
Source: TechCrunch

Immersion is going to be immense

Pokémon; Macbeth; the Illuminati. Those may not sound like they have a lot in common, but they exemplify the three whole new forms of technology-driven entertainment that have erupted in recent years. We’ll soon combine all three–and, eventually, use them to create whole new multi-faceted immersive worlds that will make today’s entertainment look like radio dramas.

Augmented Reality

At last! For years augmented reality has waited for its messiah, its killer app; and at last, indisputably, here it is. I mean, of course, Pokémon Go, yet another overnight hit 10 years in the making. Vernor Vinge basically predicted it a decade ago, while William Gibson was writing about augmented-reality “locative art.” In the years since, informative AR apps like Layar and Broadcastr rose, fell, and died unnoticed. A small but devoted hardcore has been playing Ingress, and thereby basically playtesting Pokémon Go, for years. And now, finally, we have a bona fide hit.

Why did we have to wait so long? Partly for sufficiently powerful hardware to become ubiquitous. (I still remember how Layar stuttered on my first Android phone.) And largely, as Darrell Etherington points out, because of Pokémon’s pre-existing “tremendous success as a media property.”

We will now doubtless see an explosion of failed copycats … and also major franchises looking for their own AR hit. (Marvel Universe? The Bourne Reality?) And we should also see new, more immersive AR hardware launch soon, such as much-awaited, billion-dollar-funded Magic Leap‘s debut product.

Immersive Reality

In case you haven’t noticed, immersive theater is huge right now. New York City, Orlando, Los Angeles — wherever you look, new immersive theatrical experiences are popping up. These are to traditional theater as open-world games like Grand Theft Auto are to linear games like Halo. And you’ll note that, once again, the most successful examples are the ones which build on a known franchise: Sleep No More, a (very loose) adaptation of Macbeth, and Then She Fell, inspired by Alice in Wonderland.

Theater is, of course, only one form of reality. Real-life immersive games like escape rooms have become wildly popular over the last decade as well, as have “haunted house experiences.”

I would argue that all of the above create the illusion of Temporary Autonomous Zones, a concept named and popularized by the anarchist writer Hakim Bey–and which in turn inspired Burning Man, itself an immersive reality, and its ilk.

The great virtue and tragic flaw of TAZs is that they don’t scale; they are experiences unique to the few who are physically there. Even Burning Man only has room for 70,000 people each year. They are ideal for intimate, personal experiences. But not for any kind of mass cultural commons–

–or at least they weren’t, until technology made it possible to have globally scalable immersive experiences. I refer, of course, to virtual reality — when it matures.

Alternate Reality

Older than AR and VR, and currently slightly out of favor, alternate reality games use the real world as a platform, and imbue ordinary, unaugmented existence with secret meaning and purpose, by sending cryptic messages, delivering mysterious packages, leaving hidden clues that only initiates will recognize, etcetera.

ARGs have often been used to promote more traditional media: The Beast for the movie AI, I Love Bees for Halo 2 — once again, leveraging an existing franchise. And of course the notion of hidden layers of esoteric meaning interwoven with ordinary reality is an insanely common fictional trope: see The X-Files, The Da Vinci Code, Foucault’s Pendulum, and every conspiracy theory ever told. But there are original examples as well, such as San Francisco’s late, lamented Latitude Society.

Three Legs of a New Storyteller’s Stool

I put it to you that we’re going to see more and more of all three of the above forms of storytelling and gaming. That’s pretty uncontroversial. More interestingly, I predict we’ll start to see combinations of them; experiences which start as augmented reality on your smartphone, progress to alternate-reality messages in real-world billboards and Instagram feeds, and encounters with paid actors, and eventually, if you’re willing to pay the subscription price to keep playing, lead you to a house or warehouse tricked out as an immersive theater, and/or a key to a secret VR landscape — while countless others are also playing, and collectively changing the ongoing shape of the story.

Is that a mixed-reality game? A role-playing game? Immersive theater? Any and all of the above, and something for which we need a new name? I look forward to finding out–and between Pokémon Go, Sleep No More, Oculus Rift, the Latitude Society, and other new shoots too numerous to cite, I suspect I won’t have to wait all that much longer.

Source: TechCrunch



As technology advances, a world partitioned into nation-states makes less and less sense. That may sound crazy, if you take it for granted that our world must be divided into nations. But the whole concept of a “country” is a weird 400-year-old hack, riddled with crippling bugs, plagued by contradictions that sharpen each year. It is unlikely to survive this century.

So what comes next?

In 1648, European nobility and aristocrats got together in Westphalia, northwestern Germany, and signed treaties which ended both the Thirty Years’ War and the Eighty Years’ War. They also, in passing, laid the basis for the world as we know it today: one in which almost every scrap of land has been allocated to a “nation-state,” groups of humans defined by geography of all things, membership in which is the fundamental defining factor in human identities around the globe.

If you don’t think that’s true, try being Haitian or Zimbabwean, and see how different that is from your (presumably) rich Western citizenship. Oh, you have a green card? A work permit? An entry visa? Those are just other, lesser but still privileged, forms of nation-state membership. That, more than anything else except perhaps your health, is what most describes, and circumscribes, your life.

Does this not all seem a little odd and antediluvian to you, in today’s modern, ultra-networked, densely intertwined, post-geographic world? If you were going to redesign the social architecture of our world from scratch, would you begin with the nation-state as your basic building block? Really? Think about it. Really? I didn’t think so.

It may be hard to imagine a post-Westphalian world, given how much of our assumptions are built on the deep foundation of that structure; but to paraphrase the great Ursula K. Le Guin, “We live in nation-states. Their power seems inescapable–but so did the divine right of kings.”

Le Guin was actually talking about capitalism: let’s detour to discuss that for a moment. People tend to assume that those who believe in the end of the nation-state must be hardcore anti-government libertarians. This in turn is just evidence of how deeply Westphalia has infected the zeitgeist; people assume that the only alternative is, basically, no government at all. What a sad paucity of imagination.

In fact the Westphalian consensus has been withering away for many years. The nations of Europe joined together long ago into a European Union; Brexit is essentially a reactionary backlash against the decline of Westphalia. Its victory was mildly disheartening, in that it exemplified the petty, jealous form of xenophobia incentivized by nation-states.

But it was also a mere blip in the face of the overwhelming larger trend. Africa is moving ever closer to free movement among its 54 nations. It is following the lead of South America, which in 2009 adopted Mercosur’s Residence Agreement.

This increasing transnationalism seems like an inevitable side effect of the switched networks of IP packets and shipping containers drawing the four corners of our world ever closer together. But the EU, the African Union, Mercosur, etc., are still just umbrella organizations of nation-states, still limited by mere geography. That’s so twentieth century. Let us dream a little bigger.

Balaji Srinivasan–co-founder of Counsyl, CEO of 21, and partner at Andreessen Horowitz–is doing just that, per this fascinating tweetstorm from February, sampled below and fully available here:

Consider that finest of human inventions: the city. Both within and across nations, cities (and city people) often have far more in common with each other than with the rest of their nation. London, Paris, Tokyo, Toronto, Shanghai, New York, Mumbai, Buenos Aires, Dubai, Cape Town, Sydney — it is easy to feel at home in any of these places, and far stranger to go from any of them to a small rural settlement. This is true of both “elites” (a word now invariably used as a pejorative, outside of military contexts) and the legions of impoverished young students and travel-hungry twentysomethings.

Does it really seem likely, given all the above, that land borders and map colors will set the course of all human behavior forever? It seems to me that technology, by shrinking our world and forming ever denser connections all across it, is inciting the growth, in both number and size, of loose-knit transnational organizations which–over decades–will rise in importance until they begin to usurp our notions of national identities.

Thus far almost the only such organizations of real scale and importance are, of course, corporations. I’m faintly surprised that Google and IBM don’t already issue passports that holders can use to travel to, and work in, Westphalian nation-states.

But of course the Westphalians are jealous of sharing their power, and people are, rightly, deeply mistrustful of transnational corporations. Even–or maybe especially–tech companies. (As a deeply admirable and highly successful tech executive mused to me over dinner the other day: “I really think, at some point, the pitchforks will be coming for us in tech.”)

Some other form of transnational organization will have to be first. Only Nixon could go to China. I predict that international groups which initially seem trivial, or even laughable, will slowly grow in stature and importance until they become, in many ways, distributed nations of their own…without the limitations of that ugly hack called a “state.” I don’t know which will be first, but I suspect it may already exist, in some nascent form. I also suspect that we will see a growing number of new city-states as this century progresses.

(Obviously I’m far from the first to predict this. Neal Stephenson did so more than twenty years ago with The Diamond Age, in which he described a world divided into dozens of different distributed nations, called “phyles,” each with its own scattered archipelago of territory around the world, along with cities and shared land where laws were dictated by the Common Economic Agreement among the phyles.)

Again, all the above may seem dubious, unlikely, or even completely insane, to anyone whose whole life has been steeped in a world defined by nation-states. But if you take a step back and look at that world, and how it’s changing, and the possibilities that new technology provides–I think you’ll find it’s hard not to see some livid writing on the Westphalian wall.

Source: TechCrunch

A brief history of cryptocurrency drama, or, what could possibly DAO wrong?

It makes SILICON VALLEY look like C-SPAN–and yet it’s a documentary. Yes, it’s Cryptocurrency!, the show! You already know it’s been the hit of the last half-decade in extreme-nerd, get-rich-quick, and/or libertarian-conspiracist circles. But the story so far may seem incredibly… well… cryptic. So if you’re just tuning in, here’s a timeline to catch you up before the new season begins:

October 2008: A pseudonymous figure or group called Satoshi Nakamoto publishes a white paper entitled “Bitcoin: A Peer-to-Peer Electronic Cash System.” This introduces a new data structure, the blockchain, which over the next eight years will create billions of dollars of value and cause intelligent people to seriously speculate that it could be used to replace the entire global financial system. The mystery of Satoshi Nakamoto’s identity is never solved. Compared to much of what follows, this all seems pretty reasonable and plausible.
May 2010: The first real-world Bitcoin purchase occurs: 10,000 btc, currently valued at ~$6.5 million, is used to purchase two large Papa John’s pepperoni pizzas in Jacksonville, Florida.
Early Spring 2011: The rest of the world (including me) discovers Bitcoin. Reactions range from “this is a giant scam” to “the most dangerous project we’ve ever seen” to “this is our greatest hope for liberty” to “OMG they’re all totally crazy.” Each of these contradictory viewpoints is surprisingly convincing.
Spring 2011: Bitcoin is widely castigated for being primarily used to purchase drugs on darknet sites such as Silk Road. The traditional reaction to moral censure ensues: the price of Bitcoin immediately rises almost forty-fold in ten weeks.
June 2011: Bitcoin promptly falls back from $32 to $10.
June 2011: Mt. Gox, a Bitcoin exchange originally set up to trade Magic: The Gathering cards, is hacked.
June 2011: People keep using Mt. Gox, because, you know, what the heck, why not? What could possibly go wrong?
Autumn 2011: The price of Bitcoin has fallen back down to $2.
March 12, 2013: Most Bitcoin aficionados will tell you, in a tone usually reserved for tales of deaths, wars, famines, and pestilences, that this is the day a “hard fork” occurred on the Bitcoin blockchain. In actual fact there has never been a Bitcoin hard fork.
April 2013: Bitcoin skyrockets back up to $100.
July 2013: The infamous Winklevii twins launch the Winklevoss Bitcoin Trust exchange-traded fund. They found out about Bitcoin at a party on Ibiza. I am not making this up.
November 2013: Bitcoin hits $250.
November 2013: Bitcoin hits $500.
November 2013: Bitcoin hits $1000.
December 2013: Bitcoin crashes back below $600.
January 2014: The Wall Street Journal publishes “Why Bitcoin Matters” by Marc Andreessen, who argues (correctly) that “Bitcoin offers a sweeping vista of opportunity to reimagine how the financial system can and should work in the Internet era, and a catalyst to reshape that system in ways that are more powerful for individuals and businesses alike.”

Meanwhile, however, much of the early Bitcoin community is still using Mt Gox, the Magic: the Gathering trading-card site turned Bitcoin exchange coded in PHP by developers of extremely dubious technical ability.

February 2014: Mt Gox shuts down and files for bankruptcy, reporting the loss of 850,000 bitcoin, or about 7% of all extant at the time. Some weeks later they rediscover ~200,000 of those bitcoin behind a couch cushion.
March 2014: A train set aficionado named Dorian Nakamoto is “outed” by Newsweek as the creator of Bitcoin. It is soon apparent that Newsweek is flagrantly, painfully, stupidly wrong.
January 2015: Bitcoin slips to $200 during a period of merciful boredom, during which serious people, genuinely interested in using its remarkable decentralized / permissionless technology to change the world for the better, pour an enormous amount of time, money, and resources into initiatives to do just that.
Basically all of 2015: The Bitcoin world is riven by a bitter, personal, and vicious debate over an arcane (but important) technical issue which hamstrings the network to no more than seven transactions per second, and will require a hard fork — and, arguably, greater network centralization — to completely fix. Its developer community fragments into two chief camps: Sharks and Jets, later renamed Core and Classic.

This division makes it impossible to ignore the fact that the supposedly permissionless and decentralized cryptocurrency is de facto controlled by a handful of mining pools and a tiny coterie of developers. Previously the community has dealt with this inconvenient truth by loudly singing “la la la la!” while studiously looking away.

April 2015: Mt Gox co-founder Jed McCaleb, who, to his credit, left Mt Gox long before it disintegrated in a bubble of recrimination and humiliation, and who went on to create two new (non-blockchain) cryptocurrencies, Ripple and Stellar, is sued by Ripple for selling his Ripple holdings, allegedly to support Stellar.
August 2015: Ethereum, a new cryptocurrency basically founded on the precept that Bitcoin has been timid and unambitious, or at least insufficiently brash, weird, and disruptive, launches its first phase into the world. Its distinguishing factor: the scripting language which controls its monetary transfers is Turing-complete, which means, per the halting problem, that it is impossible in general to tell whether a given program will run forever or eventually halt. Ethereum deals with this problem by charging “gas”, a unit of its own currency, for every computation.

To analogize, Bitcoin offers its developers a knife with which to stab themselves; Ethereum offers them the entire arsenal of the United States military with which to destroy everything that they have ever loved, but makes them pay by the second to use it. Like Bitcoin, Ethereum is both technically fascinating and generally awesome. And, again like Bitcoin, it promptly attracts a coterie of dollar-sign-eyed enthusiasts who are … shall we say … somewhat less awesome.
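The gas idea can be illustrated with a toy sketch (this shows the principle only; it is not Ethereum's actual virtual machine or fee schedule):

```python
import itertools

class OutOfGas(Exception):
    """Raised when a program's gas budget is exhausted."""

def run_with_gas(steps, gas, cost_per_step=1):
    """Run an iterable of zero-argument steps, charging gas for each one.
    A Turing-complete language can't promise that a program halts, but
    metering every step guarantees it stops when the budget runs out."""
    for step in steps:
        if gas < cost_per_step:
            raise OutOfGas("gas exhausted")
        gas -= cost_per_step
        step()
    return gas  # gas left over when the program halts on its own

# A finite program halts with change to spare...
print(run_with_gas([lambda: None] * 3, gas=10))  # → 7

# ...while an infinite loop is cut off rather than running forever.
try:
    run_with_gas(itertools.repeat(lambda: None), gas=1000)
except OutOfGas as e:
    print("halted:", e)
```

This is why "pay by the second" is the right analogy: the metering, not the language, is what bounds the damage a program can do.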

December 2015: An Australian man known as Craig Wright is “outed” as Satoshi Nakamoto. It soon becomes painfully apparent that either Wright is a con man, or “Satoshi” is doing absolutely everything he can to convince the world he is a con man. Occam’s Razor makes it pretty clear which way to bet.
January 2016: Mike Hearn, a senior Bitcoin developer, announces that Bitcoin has failed, and loudly and publicly quits the community (with a Medium post, of course.)
January 2016: Blocksize debate (correctly) resolved, Bitcoin embarks on a long bull run and continues to gain in value, without ever really resolving the dark fundamental self-contradiction at its heart.
April 2016: A German startup which creates “slocks” — smart locks that open when they receive money, just like the toll door in Philip K. Dick’s dystopian Ubik — announces a side project: the world’s first Distributed Autonomous Organization, built on Ethereum, a computer program which will raise ether and then invest it based on its funders’ votes, with no further human decisions or intervention required. Their announcement refers, in a soon-to-be-ironic self-congratulatory manner, to

irrefutable computer code … reviewed by the best security audit company in the world … self-governing and not influenced by outside forces: its software operates on its own, with its by-laws immutably written on the blockchain, not controlled by its creators …

May 2016 People claim, with an apparently straight face, “The DAO Will Soon Become The Greatest Threat Banks Have Ever Faced.” Optimistic investors pour a whopping $150 million (at then-current exchange rates) into the DAO, because, you know, what the heck, why not? What could possibly go wrong?

A few skeptical people like, er, me (and pretty much everyone I know) write things like “What most concerns me about the Ethereum project is security. … Ethereum offers a vastly larger attack surface than Bitcoin … this applies not just to the network itself, but to individual Ethereum contracts.” This angers DAO enthusiasts.

May 2016 Craig Wright, the aforementioned Australian sure-looks-like-a-con-man, outs himself as Satoshi Nakamoto, and even manages to convince Gavin Andresen, a major Bitcoin figure and developer. (Con men can be very convincing in person.) But after a flurry of media attention, it soon becomes painfully apparent, again, that either Wright is a con man, or “Satoshi” is doing absolutely everything he can to convince the world he is a con man. Occam’s Razor continues to make it pretty clear which way to bet.
May 2016 Some well-respected researchers identify fundamental flaws in the DAO’s voting protocol and call for a moratorium on DAO projects until these are resolved.
June 2016 A potential flaw in Ethereum smart-contract scripts is identified in the DAO and, everyone is assured, quickly fixed.
June 2016 The DAO is hacked; nearly a third of its money is siphoned away into a “split DAO.” Who could possibly have seen that one coming?
June 2016 The very same people who until recently were trumpeting “irrefutable computer code … not influenced by outside forces … not controlled by its creators” immediately call for the intervention of creators and outside forces to repair the damage caused by the computer code. To their credit, they seem fairly embarrassed about this.

A “soft fork” to ensure the attacker can’t make off with the drained funds is proposed, as is a much more drastic “hard fork” to return the funds to investors. Someone purporting to be the attacker appears in the DAO slack channel claiming they’ll bribe miners to oppose the soft fork. Needless to say, one way or another, the DAO will be DAOing no more.

June 2016 The august London Review of Books publishes a 35,000-word piece about Craig Wright (remember him?) which serves primarily as a vivid example of why technically incompetent writers should not attempt technically dense subjects. Sarah Jeong hilariously eviscerates it with one of the most caustic tweetstorms I have ever encountered. Meanwhile, Wright’s long game becomes clear when he files dozens of Bitcoin-related patents.
June 2016 Emin Gün Sirer — a Cornell professor, one of the researchers who identified flaws in the DAO’s voting protocol, and an early identifier (and explainer) of the exploit that led to the DAO hack, for which he was apparently falsely accused of being the attacker — points out that the proposed fix opens Ethereum up to a troubling Denial-of-Service attack. The soft fork is called off … for now.
Press time A rumor sweeps through the Bitcoin community that a supermajority of Chinese miners — which is to say, a majority of all miners — has suddenly decided to reject Bitcoin Core in favor of the previously spurned Bitcoin Classic. This rumor is, to put it mildly, thus far unsubstantiated.
July 2016 Will the DAO attacker yet abscond with their hacked ether? Will it be worth anything by then? Will Ethereum recover from this wound — and will they aggravate it themselves, by intervening too much — or will it cripple the credibility of their smart contracts forever? Will patent offices fall victim to Craig Wright’s claims that he is Satoshi Nakamoto? Will Ripple and Stellar ever find wide success? Will the reign of Bitcoin Core be overturned in favor of the higher-bandwidth — and more centralized — hard fork of Bitcoin Classic, or will sidechains and the Lightning Network usher in a second era of cryptocurrency? Your guess is as good as mine! Stay tuned to this exciting stream, and don’t touch that touchpad! Whoever’s writing this show, you have to give them this much: they’re never boring, and always unpredictable.

Source: TechCrunch

The dredge report: being an account of an expedition into the hyperreality of the California Delta


It has occurred to me that perhaps TechCrunch pays insufficient attention to slurry, sediment, silt, mud, and muck; to canals, earthworks, levees, dikes, dredges, and the Army Corps of Engineers; to the vast engineering works, with lifespans measured in decades, that literally reshape our world. So last weekend I boarded a bus hired by the Dredge Research Collaborative.

This wonderfully obsessive and obscure group of infrastructure aficionados, guided by Wired writer and Rhode Island School of Design professor Tim Maly, has held an annual “DredgeFest” event since 2013. This year it was a tour of that vast artificial hyperreality just upstream of San Francisco and the Valley — the California Delta.

“We own the only high-rate offloader west of the Mississippi,” Jim Levine says at our first stop, the colossal megaproject called the Montezuma Wetlands, with some justified smugness in his voice. “Slurry it up to 15% … pump the sediment 4 miles at 20,000 gallons a minute … last year we took in about a million [cubic] yards … probably half of all the dredging in the bay.”

“Wow,” an awed student breathes in reply, “that’s a lot of dredge.”

The entire Delta is, basically, a lot of dredge. Once it was a huge wetland carved by a network of intertwined, constantly shifting waterways. Then came the 19th-century gold rush. Miners upstream in the Sierra washed away entire rock faces with high-pressure water, creating a gargantuan “pulse” of sediment that filtered downstream, its traces still visible in the San Francisco Bay today. Meanwhile, settlers began to cultivate the Delta. Chinese laborers built the first delta megaproject: a colossal array of levees to (aspirationally) hold back floods.

The descendants of those levees wall the Sacramento and San Joaquin rivers today — and the peaty land of the islands behind them has subsided to a depth which in places hits 26ft/8m below sea level. On Sherman Island, where the Sacramento and San Joaquin meet, we stood in those depths, looking what felt like way up to the riverbanks, jokingly advised to look for cracks in the levees.

Of course what really worries Californians is earthquakes. Sherman Island, and its levees, keep the brackish Bay water out of what’s sent south by the California State Water Project. If it were to be inundated by a levee breach, salinity would increase so much that “we’d have to shut down the pumps” carrying water south, said an engineer from Ducks Unlimited (no, really) as he showed us around the site.

If this doesn’t seem like a tenable long-term solution to you, you’re not alone. Governor Brown and co. are currently promoting a $15 billion plan to carry water under the Delta to Southern California, courtesy of two giant tunnels. This is, to put it mildly, controversial. Others have proposed converting farmland back to natural wetlands. Which is of course even more controversial.

There is no such thing as a megaproject without fierce opposition. Even the Montezuma Wetlands project — which, elegantly, tops up subsided lowlands with dredged sediment, to regenerate the kind of wetlands which originally existed — attracted enormous hostility and resistance. The project was birthed in 1991: it did not launch until 2003, after 12 years of legal battles.

Now, in the shadow of a massive wind farm (its existence perhaps prompted by the project’s $150,000/month PG&E bill), a slow restoration of two thousand acres of wetlands is underway, and should continue for at least another decade. In an era when “tech” usually means something obsoleted within a few years, there is a certain grandeur to projects with this kind of scale and lifespan.

Even outside of cities, most of the population of California lives in a fundamentally artificial environment, courtesy of what can be considered an ongoing engineering project that dates back more than a century: the transformation of the Delta and the Central Valley into some of the world’s most productive farmland, while keeping the South alive with water from the North, the Owens Valley, and the Colorado River.

Some of this history has been captured at the Dutra Museum of Dredging (no, really) in Rio Vista in the Delta. (Sorry, hacker tourists; they’re open by appointment only, and prefer groups of 10 or more.) There I learned that the clamshell dredge was one of the cutting-edge world-changing technologies of its day. It’s still in wide use, but of course it’s not “tech” any more. As technology ages, it becomes infrastructure, and grows deeply boring to most.

The friend who invited me out to DredgeFest is a professional infrastructurist himself — except that his is in orbit. One day the descendants of his satellites will be both as unsexy, and as crucial, as the dredges you might glimpse from BART or the bayside from time to time, without really noticing. And why should you? Like most infrastructure, they are much too important to be allowed to be interesting.

Source: TechCrunch

Learn deeply, but baby, don’t fear the Skynet


Who’s afraid of AlphaGo? Everyone who’s anyone, you might think. Elon Musk, Bill Gates, and Stephen Hawking have all expressed concern about the “existential threat” of AI, just as “deep learning” neural networks are revolutionizing the AI field. Should we be scared for our jobs? Or even our species? Fear not, I have answers! They are, respectively, “maybe,” and “don’t be ridiculous.”

It is right to describe recent AI developments as a “breakthrough,” as Frank Chen of Andreessen Horowitz does in this excellent presentation which summarizes both the history and the state of the art. Chen ends with a bold call to action:

All the serious applications from here on out need to have deep learning and AI inside … it’s just going to be a fundamental technique that we expect to see in all serious applications moving forward … as fundamental as mobile and cloud were in the last 5-10 years.

And he’s right! But “deep learning” is not even remotely in the same galaxy as the “AI” that Musk, Hawking, and Gates are worried about. It may be — indeed, probably is — a step along that very long road. But what’s interesting and powerful about deep learning is not that it makes machines “smart” in the sense in which that word is used of people or animals.

What’s interesting about deep learning is that while there’s nothing magical or genie-like about it — as Geordie Wood points out in Wired, “it’s really just simple math executed on an enormous scale” — its “programs” are ultimately matrices of values that have been trained, rather than lines of code which are written (although many lines of traditional code go into the training, of course).
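To make that concrete, here is a minimal sketch in plain NumPy — no deep-learning framework, and emphatically not anyone’s production code. The entire learned “program” is the pair of weight matrices `W1` and `W2`; their values come from gradient descent on examples, not from hand-written branching logic. It learns XOR, a function no single linear rule can express:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

# The "program": two randomly initialized weight matrices.
W1 = rng.normal(0, 1, (2, 8))
W2 = rng.normal(0, 1, (8, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):            # train by gradient descent
    h = np.tanh(X @ W1)           # hidden layer
    p = sigmoid(h @ W2)           # predicted probability
    grad_p = p - y                # cross-entropy gradient at the output
    W2 -= 0.5 * h.T @ grad_p
    W1 -= 0.5 * X.T @ ((grad_p @ W2.T) * (1 - h ** 2))

# The same matrices now compute XOR: behavior was trained in, not written.
print(np.round(sigmoid(np.tanh(X @ W1) @ W2)).ravel())
```

Nobody wrote an XOR rule anywhere in that loop; it lives entirely in the trained numbers.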

What’s powerful about it, what’s so exciting, is that it excels at whole fields of problems that are extremely difficult to solve using traditional software techniques: categorization, pattern recognition, pattern generation, etcetera.

It’s not often that whole new kinds of problems suddenly become amenable to better solutions. The last time it happened was when the smartphone ate the world. So deep learning really is an awesome and exciting new development. (As are other forms of AI research, albeit to a lesser extent. The loose taxonomy is that “deep learning” is part of “machine learning” which is part of “AI.”)

Let me offer some primers, while I’m here:

But just as deep learning is a good tool for many problems for which traditional programming is bad, it is also a bad (and extremely unwieldy) tool for many problems which traditional programming can solve. I give you, as an amusing example, this hilarious piece from Joel Grus: “FizzBuzz In TensorFlow,” in which he mockingly tries to use TensorFlow to solve the famously trivial “FizzBuzz” interview problem. Spoiler alert: it does not go well.
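For contrast, the conventional solution that Grus’s network comically struggles to match is a handful of lines of ordinary branching logic (sketched here from the well-known problem statement, not taken from his post):

```python
# FizzBuzz the boring way: explicit rules, no training required.
def fizzbuzz(n):
    if n % 15 == 0:
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)

print([fizzbuzz(i) for i in range(1, 16)])
```

Four lines of rules, always correct; thousands of trained parameters, occasionally wrong. Right tool, right job.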

Similarly, neural networks are very far from infallible. My favorite example is “How to trick a neural network into thinking a panda is a vulture” by Julia Evans. And with machine learning come new concerns — such as all the complex tooling around training and using it, per this excellent Stephen Merity analysis of how “we’re still far away from a world where machine learning models can live without their human tutors,” and even the potential need for machine forgetting.


So: AI (the research field) has benefited from a huge breakthrough, which is awesome and exciting and opens up whole new realms of newly accessible solutions that incumbents and startups alike can explore! This also means that jobs which consist largely of pattern recognition and responding to those patterns in fairly simple and predictable ways — like, say, driving — may be obsoleted with remarkable speed.

But AI (as in the research field) is still nowhere near AI (as in artificial intelligence remotely comparable to ours) and this very cool breakthrough hasn’t really changed that. Worry not about Skynet.

But, similarly, let us treat with the appropriate skepticism the clamor for deep-learning solutions to all human problems. Deep learning appears to be the new blockchain, in that people who don’t understand it suddenly want to solve all problems with it. I give you, for example, this call from the White House for “technologists and innovators to help reform the criminal justice system with artificial intelligence algorithms and analytics.”

There is, already, “software used across the country to predict future criminals. And it’s biased against blacks.” Any “improved” machine-learning system is, similarly, extremely likely to inherit the biases of its human tutors — cultural biases which will remain, for the foreseeable future, a problem no neural network can solve. Modern AI is no demon, but it’s no panacea either.

Source: TechCrunch

FFS, Facebook


For your convenience. For your security. To better serve you. To offer you the best experience. To better fit our future plans. To comply with regulations. To optimize our resources. These are the blandly vicious lies that companies proffer when they want to take something away from you. I thought I was used to this game, but this week I was actually upset by it again. Et tu, Facebook?

As of this week, I can, by deliberate and malevolent design, no longer send or receive Facebook messages in the mobile browser on my phone. And neither can you. You must download and install the Messenger app instead — or, as I intend to, abandon that functionality entirely.

The advantages to Facebook are obvious: it’s much better for a company to have an installed app than a mere web page accessed via browser. And certainly it’s not the most questionable thing Facebook has ever done. It is, however, the most breathtakingly hypocritical.

Facebook’s stated mission, since its founding in 2004, is to “give people the power to share and make the world more open and connected.” Oh, the bleakly funny irony. This move flies directly in the face of that so-called mission statement. It quite literally strips us all of a currently existing way to share and connect, and drives us from the open interconnected Web into a walled — and locked — garden.

So if you ever believed in that mission statement, you can stop now. Facebook’s objective is to grow until it is globally ubiquitous. If it happens to accidentally make the world more shared, open, and connected while doing so, I’m sure Facebook’s braintrust will welcome this as a pleasant side effect, but it is hardly their mission. Their mission is to become wealthier and more powerful.

(In case anyone actually believes this change is to preserve the quality of the user experience: don’t be ridiculous. Our phones nowadays are pocket supercomputers. If it works on your widescreen browser, it can work just fine on your mobile browser too.)

Don’t get me wrong. I don’t blame Zuck. Every growing corporation reaches a point at which many of its internal factions and fiefdoms become, from an incentive perspective, parasites more interested in perpetuating themselves on the corpus of their host than in being part of a single entity with a coherent vision and policy. If you look carefully you can see second- and third-order symptoms of that kind of corporate decay. It seems clear to me that this is one of them.

This decision isn’t going to hurt Facebook in any meaningful way, of course. Network effects mean never having to say you’re sorry. But I strongly suspect that it’s a sign that bit rot is setting in at One Hacker Way. Remember how for most of a decade Google could do no wrong, and then, circa 2010, its halo finally began to slip? There were little signs that led up to that — signs like this one.

So keep a close eye on Facebook over the next year or two. I predict more misses, more missteps, more clumsy communication, more execution failures, more decisions that seem completely baffling to impartial observers not privy to the internal politics. I also predict falling morale and a creeping sense, among engineers, that it’s no longer quite the top-tier place to work that it once was.

In the interim, I for one will stubbornly and pointlessly resist this attempt to encroach on my phone. If you want to message me on Facebook, you’ll have to wait for a reply until I reach my laptop. Perhaps you can pass the time by contemplating how much more open and connected the world is becoming.

Featured Image: Peter Shanks/Flickr under a CC BY 2.0 license
Source: TechCrunch

Vive la France! Vive le Tech! …But do these two great tastes taste great together?


I spent last week on a junket in Paris, paid for by the French government, visiting the various hubs and spokes of the burgeoning French startup scene. I don’t think they’ll invite me back, after they read this. My considered assessment is that the French government, and to an extent the larger French tech scene, lacks ambition, boldness, and confidence, and their technology strategy is doomed to failure.

Among our many meetings was one with Axelle Lemaire, Minister of Digital Affairs for the (spectacularly unpopular) current French administration. She, personally, impressed me greatly; but her ministry’s policies did not. Their stated aim was to foster tech startups in France in order to support France’s largest companies. The idea is that the startup ecosystem will act as a kind of farm team, the best of which will be incorporated into the big businesses, to transform them into the kind of digital disruptors that can compete with the big American Internet companies.

This is, to put it mildly, quixotic. It seems far more likely that big French companies will only slow and stifle whichever startups they partner with, invest in, or acquire. This strategy stands in striking contrast to the Silicon Valley ethos that startups should devour and replace legacy dinosaurs, not prop them up. And it bespeaks a curious lack of ambition, as if the ultimate goal of any self-respecting tech startup is to become a division within LVMH or Total or Peugeot, rather than an Uber or Google or Tesla in their own right.

Unfortunately, with a few exceptions, the French government’s attitude was echoed by most of the French companies, and incubators, and hubs, and conference planners, and venture capitalists, with whom we met. The general attitude was that startups were minor-league players competing to be acquired by big-league multinationals. Almost nobody thought big enough to imagine startups so successful that they changed all the rules of the game.

Instead, almost everyone seemed to take it as a given that relationships and partnerships with big companies were, in and of themselves, extremely important and exciting. In fairness, the Valley makes a similar mistake when we celebrate funding rounds, rather than actual achievements — but still, can you imagine an early-stage AirBNB excitedly trumpeting a joint venture with Marriott, or a young Uber desperately seeking a partnership with Yellow Cab?

This French attempt to turn Paris into a startup hub is admirable, but treating a startup ecosystem as an end in and of itself smacks of cargo-cultism, even if your intent is to harvest them for big-company talent. World-beaters do not grow in a petri dish of low ambitions; with the wrong messages and incentives, you risk accidentally pushing your best people and companies towards excellence elsewhere, leaving stagnant mediocrity behind.

The good news is that, outside of the government, the French tech scene is decidedly in the middle of a sea change for the better. As a former Paris resident, I can assure you that the cultural attitude towards technology has been transformed. Ten years ago, tech was anything but cool. And we did encounter two extremely fast-growing French tech companies with admirably high and global ambitions, even if one of them — the long-distance ride-sharing platform BlaBlaCar — still claims, incomprehensibly, that the USA is not a good market for them. (The other is Sigfox, which I wrote about last week.)

Another beacon of francophone hope is Xavier Niel, the multibillionaire who last decade singlehandedly overturned the government-approved order of things in the French mobile-phone industry. Niel has since created and continued to fund the genuinely revolutionary engineering school 42, a branch of which will soon open in California, and is now constructing a vast complex in Paris to house 1,000 startups. Yes, you read that correctly. And yes, that too sounds a little cargo-cult. But it’s also an example of the expansive ambition that the French — and European — tech scene badly needs.

Source: TechCrunch