On the dark art of software estimation

“How long will it take?” demand managers, clients, and executives. “It takes as long as it takes,” retort irritated engineers. They counter: “Give us an estimate!” And the engineers gather their wits, call upon their experience, contemplate the entrails of farm animals, throw darts at a board adorned with client/manager/executive faces, and return — a random number. Or so it often seems.

It is well accepted that software estimates are frequently wrong, and all too often wildly wrong. There are many reasons for this. I am very fond of this analogy by Michael Wolfe on Quora:

Let’s take a hike on the coast from San Francisco to Los Angeles to visit our friends in Newport Beach. I’ll whip out my map and draw our route down the coast … The line is about 400 miles long; we can walk 4 miles per hour for 10 hours per day, so we’ll be there in 10 days. We call our friends and book dinner for next Sunday night, when we will roll in triumphantly at 6 p.m.

They can’t wait! We get up early the next day giddy with the excitement of fresh adventure … Wow, there are a million little twists and turns on this coast. A 40-mile day will barely get us past Half Moon Bay. This trip is at least 500, not 400 miles …

Writing software is rarely a matter of doing something you already know exactly how to do. More often, it involves figuring out how to use your available tools to do something new. If you already knew exactly how long that would take, it wouldn’t be new. Hence “it takes as long as it takes.” Non-developers often seem to think that we engineers just look at a proposed task and think “we shall implement A, B, and C, then do X, Y, and Z, and that should require N hours, plus or minus a couple!” Sometimes it is like that. But not often.

More typically, the thought process is more like: “I can see how I’d do it if I were rewriting that whole controller from scratch, but that would take days … is there an elegant hack where I can change the inputs to this function in such a way that I don’t have to rewrite its code? … what if I monkeypatch it at the class level? … wait, maybe there’s an API call that almost does what I want, then I can tweak the results — hang on, what if I outsource it via an asynchronous call to the external OS?”

In which case, the result is: “I can confidently estimate that this will require less than two hours of typing. However, working out what to type is going to take me/us anywhere from one hour to several days. Sorry.”

Another analogy, from my sideline in novel-writing: publishers always want a synopsis of your unwritten novel. This is insane, because, as an extremely successful author friend puts it, “writing a synopsis requires everything that actually writing the novel requires.” It’s like asking for a map of what is by definition terra incognita. So it often is with software estimates.

That said: if you’ve ventured into a lot of terra incognita in the past, and you’ve heard legends and tales of this new territory, and how similar it is to your past ventures, you can at least offer up an educated guess. (People seem to like the word “estimate” better than “educated guess.”) There are better and worse ways to estimate, and to structure estimation, and you can’t give up on the task just because software is hard.

What’s more, the form in which an estimate is requested is crucially important — and that’s often what managers, clients, and executives get so terribly wrong. More often than not, when a low-ball estimate balloons into a terrible reality, it’s their fault, not that of the engineers.

First, some managers/clients/executives (hereafter referred to as MCEs) still think that task performance can be measured in fungible engineer-hours. Not so. It was never so. We’ve known for decades that, for instance, “adding engineers to a late software product makes it later.”

Second, some MCEs tend to think that they can increase the accuracy of estimation by adding precision and granularity — by breaking down large projects into as many small tasks as possible, maybe even hundreds of them, and asking for individual estimates of each small task. This is a catastrophically terrible idea. Increasing precision does not increase estimate accuracy. In fact it does the exact opposite.

Let me explain. Software estimates are always wrong, because the tasks being estimated are always, to some extent, terra incognita, new and unknown. However, sometimes the errors are in your favor; an obscure API, a third-party library, or an elegant hack condenses what you expected to be a week’s worth of work into a single day or less. Your hope, when estimating a project, is not that all of your estimates will be 100% dead accurate. That never happens. Your hope is that your overestimates and underestimates mostly offset each other, so that your total estimate is roughly correct.

But the more precise and granular you get, the more likely you are to reduce your overestimates — which is disastrous for your overall accuracy. What’s more, the estimation of a hundred-plus features quickly leads to “estimation fatigue.” Estimation is important. It is the basis for your schedule for the next weeks or months. So, like all important things, you want to minimize, not maximize, its cognitive load.
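
To put rough numbers on that intuition, here is a minimal toy simulation (mine, with invented task sizes and error distributions, not anything measured): when per-task errors are symmetric they largely cancel in the total, but once the slack in the overestimates is shaved away, every error pushes in the same direction and the total estimate lands systematically low.

```python
import random

random.seed(42)

def project_ratio(n_tasks: int, bias: float = 0.0) -> float:
    """Total estimated hours divided by total actual hours for one
    simulated project. Each task's actual duration is its estimate
    times a noisy error factor; bias > 0 means tasks tend to run
    longer than estimated."""
    total_est = total_actual = 0.0
    for _ in range(n_tasks):
        est = random.uniform(4, 40)  # invented task size, in hours
        error = max(random.gauss(1.0 + bias, 0.35), 0.1)
        total_est += est
        total_actual += est * error
    return total_est / total_actual

trials = 1000
# Symmetric per-task errors: individual estimates are way off, but
# over- and under-estimates mostly cancel at the project level.
print(sum(project_ratio(20) for _ in range(trials)) / trials)   # ~1.0
# Shave away the overestimates (the granularity trap): every task now
# skews long, nothing cancels, and the total is systematically too low.
print(sum(project_ratio(200, bias=0.15) for _ in range(trials)) / trials)  # ~0.87
```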

Furthermore, a feature is not the work that goes into it, and not the unit that should be estimated. What engineers actually estimate is the size of the technical foundation building block that makes a feature possible, which is often, if not always, shared among other features at that scale. It is very difficult — and doesn’t really make sense — to try to work out exactly how much of the work going into such a building block will be apportioned to individual features.

This is all pretty abstract. Consider a concrete example. Suppose you’re building an app that logs in to a web service. Don’t have individual server-side estimates for “user can create account,” “account email address can be confirmed,” “user can log in,” “user can sign out,” and “user can reset password.” Have a single “user authentication” task, and estimate that.

In hours? …Maybe, maybe not. I’m fond of using “T-shirt sizes” instead: S (an hour or three), M (a day or two), and L (a week or so). Each project has its own pace, depending on tools, technical debt, the developers working on it, etc., so after a week or few you should be able to start figuring out how your hours map to your T-shirt sizes, rather than pretending to know that a priori.

T-shirt sizing is especially good because it comes with built-in warning signs. If you find yourself with all Ls and some XLs, it’s a very visible sign that you probably do need to deconstruct your project a little further; if you have mostly Ss, it means you’ve fallen into the granularity trap and need to abstract your tasks out a little. If you have a roughly even mix of S, M, and L, you’ve probably structured things so that you’ll have pretty good — well, least bad — estimates.
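
For what it’s worth, here’s a sketch of how such a scheme can be mechanized; the hour ranges and warning thresholds below are placeholders of mine, which any real team would calibrate from its own history rather than take a priori:

```python
# Placeholder hour ranges per size; calibrate these to your own
# project's pace after a few weeks of real data.
SIZES = {"S": (1, 3), "M": (8, 16), "L": (30, 50)}

def estimate_range(plan: dict[str, str]) -> tuple[int, int]:
    """Sum T-shirt-sized tasks into a (low, high) total-hours range."""
    low = sum(SIZES[size][0] for size in plan.values())
    high = sum(SIZES[size][1] for size in plan.values())
    return low, high

def warning_sign(plan: dict[str, str]) -> str | None:
    """Flag the two failure modes described above."""
    sizes = list(plan.values())
    if sizes.count("L") / len(sizes) > 0.7:
        return "mostly L: break the project down further"
    if sizes.count("S") / len(sizes) > 0.7:
        return "mostly S: granularity trap -- group tasks by feature area"
    return None

plan = {
    "user authentication": "M",
    "billing": "L",
    "admin dashboard": "M",
    "email notifications": "S",
}
print(estimate_range(plan))  # (47, 85) hours
print(warning_sign(plan))    # None, i.e. a healthy mix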

Which are, to belabor a point that needs belaboring, only estimates. The sad ultimate truth is that it takes as long as it takes, and sometimes the only way to find out how long that is is to actually do it. MCEs hate this, because they value predictability over almost all else; thus, they nearly always, on some level, treat estimates as commitments. This is the most disastrous mistake yet because it incentivizes developers to lie to them, in the form of padding their estimates — which in turn inevitably slows down progress far more than a few erroneous task estimates might.

So: accept that estimates are always wrong, and your hope/goal/aim is not for them to be correct, but for their errors to cancel out; estimate by feature group, not by feature; and never even seem like you treat estimates as commitments. Do all these things, and while estimation will always remain a gauntlet, it will at least cease to be the lethal minefield it still is for so many.

Featured Image: NY Photographic/PicServer UNDER A CC BY-SA 3.0 LICENSE
Source: TechCrunch

Watch SpaceX land a rocket in this awesome 360 video

Have you ever wished you could stand on deck as a 160-foot rocket lands an arm’s length away from you, after a short trip into space? Yeah, me too, but given the rockets’ tendency to fall over and dramatically explode in a giant ball of fire, perhaps watching it with a VR headset is the slightly safer option.

SpaceX just shared a gorgeous 360-degree video with the world, and if you happen to have a VR headset, now would be a good time to dig it out and strap it to your noggin; this video is definitely best experienced up close and personal. It’s quite the spectacle.

If you’re on mobile, the Facebook player does a pretty good job of showing it off even without a headset, but it’s available on YouTube as well, if you’re so inclined.

[embedded content]

Landing a rocket on a barge in the ocean is such a tremendous technological success that one can’t help but be agape and amazed at the feat. If SpaceX is able to consistently land all — or even most — of its first-stage rockets, it will mean a tremendous reduction in the cost of sending stuff up to the great, star-speckled darkness beyond.

As SpaceX puts it: “The Space Shuttle was technically reusable, but its giant fuel tank was discarded after each launch, and its side boosters parachuted into corrosive salt water every flight, beginning a long and involved process of retrieval and reprocessing. So, what if we could mitigate those factors by landing rockets gently and precisely on land? Refurbishment time and cost would be dramatically reduced.”

It’s a fantastically exciting time to be a space fan, for sure. And with that in mind, why not enjoy another view in glorious 4k resolution below, of the same triumphant victory of engineering, ingenuity, and perhaps just a tiny little bit of good fortune thrown into the mix as well.

[embedded content]

Source: TechCrunch

Welcome to the post-app world?

We’ve fallen in love with apps. It’s hard to see something so popular fading into the past, but what if that happened? What if apps were simply an iteration of the mobile web, before something better came along?

With the flurry of announcements that occurred around Facebook’s F8 conference, perhaps that time has finally come.

Are we witnessing the rise of the Bots?

Since 2010, “there has been an app for that.” However, such excitement appears to be diminishing. On your own homescreen (the most valuable real estate on earth), what new apps are using that space?

For every iPhone sold, 119 apps have been downloaded. However, we use fewer than a quarter of those apps in any given month. The average app loses 77% of its users within three days after being downloaded. The five apps that we love the most take up 80% of our session time.

Over the past few years, “click to download” display ads have littered feeds. It seemed like every retailer’s mCommerce strategy was to make an iPhone or Watch app. For many who were first to market, it seemed like a gold rush. On reflection, maybe our belief was more about the power of technology to change habits than about empathizing with consumer needs.

Of course, apps have limitations. Probably the best marketing Apple has ever done is turning the boring notion of a “computer program” into the excitement of an app. They work brilliantly for many uses. Banking or airline apps do a superb job of bubbling up personal, secure information rapidly, a sort of micro-portal to what matters.

That said, as instant messaging and voice control take off, it increasingly seems that apps are not the solution for everything.

Forcing a terrible choice

Apps provide an increasingly lousy way to get to what we want. Apple confidently says the future of TV is apps, which is stupid in the extreme. They merely replicate the false, anachronistic structure of the world of the TV channel.

Do I watch AMC or Breaking Bad? Do I want to see the Super Bowl or tune in to CBS? Modern relationships are with the content, not the curator or the pipe. Can you imagine downloading 20 record label apps in order to get your music, and then needing to switch between them?

It’s the same with communications. Back in 2006, I could either call or text anyone in my contact list: a simple choice to make. Now with Viber, Line, WeChat, Instagram, Facebook Messenger, Twitter and a million other ways to reach someone, I first need to pick the app before I select the person that I want to reach… and then hope that it’s the one which they’re on.

When I want to travel to the airport, I unfailingly want a car to take me there. I don’t want to choose the provider, then the size of the car, then its destination.

This brings up the first mechanism for the future: aggregation.

Aggregation

My TV is set up with four remotes, and watching John Oliver after broadcast requires me to press nine buttons and navigate Kafkaesque menus.

I long for a day when a single app can be the gateway to a content type. It would be similar to how I use Spotify to listen to all of the music that I care about, or Twitter to consume all the news I’m likely to need. I see apps that bring what we care about to the front.

Apple TV wants to do this, but Apple doesn’t have the partnerships in place to draw upon all the content providers in the world. If business models can be developed, I see the rise of megaportals with maps as the primary access point for all travel, hotels, events, and places. This would be alongside another portal to access all of my networks and ways to reach them, and additional megaportals for retail and content.

The Internet of apps

I then see our journeys within apps as a matter of linkage. A consumer may start a conversation on Facebook Messenger discussing movie times, before seeing suggested movies in the feed based on their location.

Buying and downloading tickets would happen straight to Passbook, with Touch ID and Apple Pay. They may then be offered dinner reservations on OpenTable, book a time via in-app IM, and finally order an Uber to take them there. Each of these experiences will be within an app environment: seamless, secure and personalized.

The Internet could soon become totally personalized through apps.

Streaming

We may want to order items from retailers that we don’t care about enough to use frequently, or from a hotel site that only exists abroad. For this we’ll see app streaming as an emulator of an experience. Our Internet of apps may be formed of those that exist on our phone; otherwise, we’ll use temporary apps that are streamed to provide such an interface.

New navigation

This is where artificial intelligence and bots come together.

The notification layer might become a key way for us to skim the surface of the web, rather than deeply experiencing and discovering it. Combinations of technology like AI, machine learning, shared data and rich analysis from Google Now, Cortana and Siri could automate many parts of our lives.

We need to think less of the Internet as something we go to, and more of the notification layer as a place for suggested prompts about what our phones think is relevant to us.

The cue to book an Uber as it’s raining and we’re running late; the weather forecast on our notification screen as we wake up; the rarely available 8 o’clock dinner slot from a late cancellation; the late flight departure… all best served to us as a notification.

Voice

We finally need to consider a screenless world. The Echo is an example of such a device, as are some wearables. How do we use sound, haptic feedback or other mechanisms to impart knowledge without large real estate? It’s this challenge that a whole new group of UI designers need to consider.

Apps are here to stay. Bots won’t kill them, but we’re about to move to a hybrid time where some apps become gateways to everything: they will allow us to choose what we want, rather than who gives it to us.

Featured Image: Bryce Durbin
Source: TechCrunch

Gunning for Google, Comcast Preps Gigabit Internet That Works With Regular Cable

Comcast, the Internet provider everyone loves to hate, is gearing up to offer one-gigabit-per-second Internet service in five U.S. cities this year. The first five cities to see the blazing speed are Nashville, Atlanta, Chicago, Detroit, and Miami. In line with Google Fiber, Verizon FiOS, and municipal offerings at one-gigabit speeds to the home, the new Comcast service will dramatically increase download speeds. Most subscribers currently receive download speeds of 25 to 100 megabits per second. For customers with a 100Mbps connection, the increase boosts their speed 10 times over. For customers with 25Mbps connections, it’s 40 times faster. At that rate, one could download a full-length HD movie in around seven seconds. Not bad.
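
A quick back-of-the-envelope check on those claims (the movie file size is an assumption on my part; Comcast doesn’t cite one):

```python
# Sanity-checking the stated speed-ups and download time.
line_mbps = 1_000            # one gigabit per second, in megabits
movie_megabytes = 900        # hypothetical full-length HD movie file
movie_megabits = movie_megabytes * 8

print(line_mbps / 25)                # 40.0 -> 40x a 25 Mbps connection
print(line_mbps / 100)               # 10.0 -> 10x a 100 Mbps connection
print(movie_megabits / line_mbps)    # 7.2  -> about seven seconds per movie
```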

What sets Comcast’s gigabit service apart is the fact that the Internet provider is not using fiber optic lines to achieve the mega-fast speeds. Instead the company is using the existing coaxial cable lines that are already piped into people’s homes, giving Comcast a potentially huge advantage over a project like Google Fiber—which requires digging costly trenches through cities to lay fiber cables.

Hardware Boost

Comcast’s gigabit-over-coax Internet requires a new kind of cable modem. That device is charmingly classified under a new DOCSIS 3.1 standard, an acronym for Data Over Cable Service Interface Specification. And while it looks like any other black box, this new standard is capable of pumping data at 10 Gbps over existing coaxial cable. Still, Comcast is ushering in its new service with only a tenth of that power—currently offering one gigabit per second downstream speeds with 35Mbps upstream.

A number of modem manufacturers have created hardware that works with the DOCSIS 3.1 standard, including Technicolor, CastleNet, Netgear, Askey, Ubee Interactive, and Sagemcom; but Comcast’s first consumer premises equipment partnership for the Atlanta rollout is with Technicolor, the company announced earlier this week. (Comcast actually telegraphed this move to residential 1Gbps last year.)

Fiber Speeds Without Fiber

According to CableLabs, the non-profit research group that makes interoperable hardware standards for the cable industry, the new 3.1 version DOCSIS modems are 40 percent faster than previous data-over-cable gear.

“With 3.1 technology, we define new types of channels that use more advanced encoding, different ways of transmitting signals, so that we can pack more data into the same amount of spectrum over coaxial cable, but also use bigger pieces of the coax spectrum,” says Matt Schmidt, VP of lab services at CableLabs. “We wrote the specs for 3.1 not just thinking about what cable operators need today, but what we expect them to need several years out. The devices can grow with the service needs without having to be replaced over and over.”

To get the new hardware, potential subscribers in the rollout cities will have to sign up. The first lucky city is Atlanta, where some neighborhoods can already order the 1Gbps service. Nashville is expected to be next.

Google has been using this model for a while to spread its fiber service, rolling out within cities a few neighborhoods at a time. But, unlike Comcast’s new plan, Google’s requires intensive home installation. Comcast is only asking current subscribers to change their modem.

“There will be some updates that we’ll be making at the neighborhood level,” Comcast spokesperson Joel Shadle told WIRED in an earlier conversation. “The cable modem termination system boxes in neighborhoods will need to be updated, and that’s the work that’s happening over the next couple years.” But in terms of the actual communications lines that run into people’s homes, none of that needs to be swapped out. “The digging up roads and laying new communications lines is not necessary with a DOCSIS-based Internet service,” Shadle said.

It’s notable that Comcast is introducing its new fiber-speeds-over-coax service in cities where Google Fiber already plans to build a footprint. If new subscribers to Comcast’s 1-gig service sign a three-year contract, there’s only a slim chance they’ll migrate to Google Fiber. That’s something Comcast wouldn’t mind one gigabit.

Source: WIRED

Digital magazine company Issuu is now a collaboration platform, too

Digital media company Issuu has been trying to offer a better way to present content online. Now it’s promising a better way for teams to work together on creating that content, too, with the launch of a new product called Collaborate.

Issuu, for those of you who don’t know, allows publishers to create digital publications. They may resemble glossy magazines, except freed from the limitations of print, with support for multimedia and interactive content.

The company said that in a survey of more than 1,300 publishers, it found that 64 percent of publishing teams work in different locations, and they rely on everything from email to spreadsheets to Google Docs to InDesign to coordinate. Collaborate is meant to replace many of those tools, creating a central location where a team (particularly a small to medium-sized media team) can create a digital publication together.

When you build something in Collaborate, you start with a flatplan, where you can create the layout of your publication and move different pages around. You can also pull layouts from InDesign, if you prefer.

Naturally, you can invite other users to participate, so they can upload images and add their own content. There’s an approval system, as well as the ability to track the status of each piece of the publication as it inches towards completion. You can even place ads.

Once you’re done, you can just publish directly to the Issuu platform, which the company says reaches 100 million readers each month.

Collaborate is currently available to customers of Issuu’s Optimum subscription plan. Oh, and you don’t even need to be on a desktop computer to use it; it works on tablets, too.

[embedded content]

Source: TechCrunch

Windows 95 on the Apple Watch features the world’s most twee Start button

Big, complex things running on tiny things is a common theme this week. Earlier we had a hack that put Counter-Strike on Android Wear, and today some maniac has installed Windows 95 on his Apple Watch. At last it’ll do something worthwhile! That is, of course, if you can find the Start button.

Nick Lee of Tendigi Insights is behind this absurd and hilarious endeavor. He appears to be a natural joker: it was he who snuck a flashlight app into the App Store with a hidden tethering tool. And amazingly, it was I who wrote that up six years ago.

When you think about it, the Apple Watch is massively more powerful than pretty much any computer that was running 95 back in the day. So it should be able to handle the classic OS with ease, right? Well, it’s not that simple.

Apple Watch isn’t exactly an open system. It’s not like you can boot into the command line, format, and pop a new OS on there. That would be way too easy. But the difficulty of a thing is often positively correlated with the desire of developers to achieve it — with a scalar modifier based on stubbornness and an exponential multiplier for nostalgia.

It seems there’s a way to get a WatchKit app to load arbitrary code, even if that code happens to be a port of a port of an x86 emulator apparently held together with chewing gum and a desperate prayer. (It’s on GitHub.)

[embedded content]

To Windows 95, 8 GB of storage and half a gig of RAM is an embarrassment of riches. The only problem is, you’re not going to get the cycles you’d like out of that 520MHz processor, since it’s an emulator, not a virtual machine.

Result: Lee had to affix a tiny motor to the crown to spin it constantly during the hour-long boot process.

But once that’s done, you’ve got a Windows 95 machine on your wrist! If you don’t mind it running at approximately 2% speed and controlling the cursor with dozens of tiny finger movements, you can play Minesweeper on the subway — ad-free, and you don’t even need your iPhone around!

Congratulations to Nick Lee for making my Friday — this is magnificently dumb.

[embedded content]

Featured Image: Nick Lee UNDER A CC BY-ND 4.0 LICENSE
Source: TechCrunch

Gadget Lab Podcast: How About That Pink MacBook?

Source: WIRED

Puma’s got a tiny racing robot that can move as fast as Usain Bolt

Its story is that of an epic battle. Mankind versus machine in a race for dominance. Only one can win.

In practice, thankfully, it’s much, much more adorable. A four-wheeled robot that looks remarkably like an RC car crossbred with a shoebox, programmed to give athletes something to race against. A sort of free-roaming robotic rabbit to their inner greyhound.

BeatBot was created for Puma by the J. Walter Thompson ad agency — and a bunch of MIT engineers. The sporty little robot has nine built-in infrared sensors designed to follow the lines of a race track as it zips along at a pre-determined pace.

[embedded content]

The ‘bot is capable of hitting Usain Bolt levels of speed, but runners can slow it down via a mobile app in hopes of actually having a chance against the little box, which monitors the revolutions of its wheels to figure out how fast it’s going. The BeatBot also sports front- and rear-facing GoPro cameras and LED lights on its tail for easier viewing after it leaves a runner in the dust.
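
The wheel-revolution trick is plain odometry: revolutions times wheel circumference, divided by elapsed time. A rough sketch of the idea, with hypothetical wheel size and encoder resolution, since Puma hasn’t published the BeatBot’s specs:

```python
import math

# Assumed hardware parameters; the real BeatBot's specs are unpublished.
WHEEL_DIAMETER_M = 0.09   # a 9 cm wheel, hypothetical
TICKS_PER_REV = 360       # hypothetical encoder resolution

def speed_mps(ticks: int, dt: float) -> float:
    """Ground speed, in m/s, from wheel-encoder ticks counted over dt seconds."""
    revolutions = ticks / TICKS_PER_REV
    return revolutions * math.pi * WHEEL_DIAMETER_M / dt

# Usain Bolt's world-record 100 m averages roughly 10.4 m/s; with the
# assumed wheel, that works out to about 37 revolutions per second.
print(speed_mps(ticks=13_248, dt=1.0))  # ~10.4 m/s
```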

For now, the shoe company is only making the BeatBot available to its own athletes (such as the aforementioned Bolt), but there are plans to introduce it to a number of athletic programs later this year.

Source: TechCrunch

Kentucky Derby attendees can now order food, place bets from their seats

Getting around, drinking and dining at the Kentucky Derby this year should prove a lot easier for fans and employees. According to Churchill Downs’ General Manager Ryan Jordan, the famed horse racing venue on Friday launched a Churchill Downs Racetrack app, powered by VenueNext, to give attendees a better experience on-site.

The new app, available for iOS and Android devices, will let users buy and split up a group’s tickets by phone, navigate the venue, “pin” their parking spot on a map or find the nearest restroom or concession stand.

It will also allow users to order their mint juleps, hot dogs and other concessions from their seats, either for delivery or pickup, without waiting in line.

Churchill Downs installed 1,600 beacons around the venue in preparation for the app’s launch and its biggest week of the year, including Derby Week and the Kentucky Derby races, Jordan said.

The iOS version of the app is also integrated with Churchill Downs’ affiliated TwinSpires, which lets users wager on horse races and collect their winnings remotely. Google Play doesn’t allow betting apps, so the feature is not included for Android users.

Finding your way proves challenging for first-timers at the Louisville, Kentucky venue because Churchill Downs lacks the standard bowl shape of modern stadiums, Jordan noted. It is sprawling, with a 1-mile racetrack and 1.6 million square feet of covered indoor hospitality and dining space.

The CEO and co-founder of VenueNext, John “JP” Paul, told TechCrunch that Churchill Downs is the largest sports venue to adopt his company’s technology to-date.

VenueNext is also behind the mobile apps used to buy tickets, navigate and order concessions within the San Francisco 49ers’ Levi’s Stadium, Yankee Stadium, the Dallas Cowboys’ AT&T Stadium and the Orlando Magic’s Amway Center, and its technology will soon be available at the Minnesota Vikings’ new stadium.

But even the largest NFL stadiums have a capacity around 90,000, while Churchill Downs last year saw 170,500 attendees at the Kentucky Derby.

VenueNext aims to eventually expand use of its tech to campuses of every kind — from college to corporate, hospitals to hotels.

Besides giving attendees and staff a bit of help getting where they need to go on-site, VenueNext also gives its customers detailed data in real time and other reports about how people use their venue, and where there may be room for operational improvements and different uses of their space.

While Churchill Downs doesn’t report total concessions and merchandise sales publicly, Jordan said, last year the venue served 127,000 of its signature mint juleps during the Kentucky Derby, as well as 163,000 hot dogs.

Offering navigational help, express delivery and pick-up may help increase those sales. But the company is mostly seeking to make repeat customers of all ticket holders with the launch of its mobile app, Jordan said.

Featured Image: Churchill Downs
Source: TechCrunch

Supreme Court grants FBI decentralized warrants, power to hack suspects anywhere

The US Supreme Court has passed a proposed change to Rule 41 of the Federal Rules of Criminal Procedure, one of the main bodies of law that governs the powers and behavior of the FBI. Previously, Rule 41 stated that a judge may only hand out a warrant to be issued within the district they represent — but how do you work within that system when you’re tracking someone whose location has been technologically obscured? The new version of Rule 41, approved on Thursday, removes the requirement in cases where the suspect’s location cannot be realistically obtained. In practice, this means the FBI can ask for, and receive, warrants to hack suspects anywhere in the world.

This comes in the wake of a number of legal decisions against the FBI, stemming from the jurisdictional issue presented by the former version of Rule 41. The US Congress may intervene to stop this rule change, but it’s doubtful that it will choose to do so, especially in an election year. The Supreme Court also changed Rules 4 and 45 in the same decision, but they’re not considered as centrally important to the FBI’s cyber powers.

Until now, it’s been difficult to get authorization to directly hack anonymous users of the TOR Network and other anonymity regimes. In many cases the FBI has had to confirm a user’s rough location before they could ask the appropriate judge for a warrant to conduct further, directed attacks against a known criminal personality. That takes time and, in some extreme cases, may simply be impossible. The Supreme Court decision means that in cases where the location of a target computer has been “concealed through technological means,” jurisdiction essentially does not apply at the investigatory phase.

Here’s the most relevant part of the full ruling:

[Image: excerpt of the amended Rule 41]

These warrants would still have to meet the normal standards of evidence for a warrant of the type requested, and would have to show that the location of the suspect could not be reasonably attained by other means. In practice, fulfilling this second requirement could be as simple as demonstrating that a suspect uses the TOR Network at all.

Obviously, deep web investigation is possible without these powers. It’s just much harder.

To an extent, the FBI’s concerns are unquestionably real — we can’t, as a society, let crime go on simply because technology has been specifically created to run afoul of a rule even The Intercept calls “a technicality” in many situations. The concern is not so much that the FBI will be able to push forward with these sorts of cyber investigations more efficiently, but that the powers will be subject to little oversight.

In particular, privacy advocates worry that this could turn into a meta-warrant issued to give the FBI jurisdiction to attack entire anonymity networks like the TOR Network and, potentially, the entire user base of such programs.

In addition, a large proportion of the suspects investigated by the FBI will be found to be outside the FBI’s ability to prosecute — the criminals will turn out to be in Russia, China, Iran, or just plain old Europe. As UC Hastings professor of law Ahmed Ghappour said in a recent paper, the FBI’s increasingly aggressive tactics in pursuing cyber criminals have the potential to set off real international strife, if the recipient nation decides to take it the wrong way. In many cases, the FBI is already conducting cyber operations of one kind or another against suspects whose physical location is unknown — with this rule change, that activity is expected to become totally routine.

As of right now, the FBI has a real sense of entitlement to try any case in which it has done the lion’s share of the investigation — check out the case of Eric Eoin Marques, who will soon be transferred to the US despite never having set foot in the country or having hosted a single server there. Since the crime was online, it affected America and can thus motivate an extradition request — the wide-open nature of international law has allowed novel modes of cyber crime to reshape the standards for investigation and prosecution more quickly than in the US. For better or worse, America is just now starting to replicate some of the same aggression toward jurisdictional restrictions at home.

Source: ExtremeTech