Not all startups need the bright lights of the big city

In an office park overlooking a lake in southern New Hampshire, Rajesh Mishra is working to change how cell networks are built. Mishra and his company, Parallel Wireless, along with a dozen or so other nearby startups, are taking on challenges in tech infrastructure that most people never notice but that affect our daily lives.

These startups are working on problems like how to load websites faster, how to improve the security of storage systems and how to move more information across cell networks. They are also disproving one of the most strongly held beliefs to emerge in tech over the past decade: that startups need to be located in cities and must rely on millennials.

Over the past two decades, the center of gravity in tech has shifted from the suburbs to the city, as millennials flock to urban centers and startups and tech firms follow. San Francisco, New York City and Boston have experienced a tremendous upswing in tech as firms once located in nearby suburbs have moved into the city.

Despite this trend, or more likely because of it, Southern New Hampshire has benefited: the remaining suburban startups have a clearer pitch to job seekers and certain heightened advantages over urban tech firms. While the suburbs are not the right place for all startups, they are the perfect place for some.

Dyn, an Internet performance management company that handles the infrastructure-related decisions websites have to make, was started in Manchester, NH, because the founders liked the freedom that came with the suburb’s low costs. Instead of depending on outside funding to pay expensive rent, companies like Dyn have been able to develop on their own schedule, free of the metrics VCs set as conditions for the successive funding rounds many startups need to stay open.

Being located in Manchester, which can be 75 percent less expensive than Boston, the nearest major city, has let the Dyn team decide when to roll out new products, when to expand internationally and how to run the company: a level of control a company cannot have if it must appeal to investors in order to survive, as many urban startups must.

While the suburbs of Southern New Hampshire are an hour away from the nearest major city, tech firms and startups there have not struggled to attract the talent they need to innovate and grow. In fact, the suburban location is a boon to recruiting; even though millennials may prefer to live in cities, older workers prefer the suburbs, and suburban startups offer an easier, rush-hour-free commute compared to heading into the city.

For a certain group of startups, being in the suburbs is the best option.

DataGravity, a Nashua-based startup whose data-aware storage aims to turn storage from a dumb container into a trusted advisor, has needed to attract a wide range of people for the range of challenges it is tackling: engineers to work on storage and security issues, and designers and data scientists to develop analytics and visualization tools. Like nearby companies such as Plexxi, DataGravity has an easy time attracting suburban talent. While these companies may need to make an extra effort to convince people in Boston to join, they have been able to hire the people they need as they grow into companies worth hundreds of millions.

The suburban startups of Southern New Hampshire have capitalized on their location and wisely use it as a test ground for their products before trying to sell to large companies or attract large user bases. Being located outside of a buzzy tech scene not only enables these companies to focus on their products, it lets startups stand out in their communities, making it much easier to interact with users and run pilot programs.

Adored, a Manchester-based app that alerts users to daily specials, worked with a group of local merchants to test different approaches before settling on the app’s current structure, which emphasizes daily photos of chalkboards. Adored was able to iterate and gather significant feedback only because merchants in towns like Manchester and Nashua were excited to try a new marketing solution and were willing to stick with the app as it developed. Adored has now launched in large cities, but its success will be due in large part to all the feedback it received in New Hampshire’s suburbs.

The recent revitalization of Southern New Hampshire towns like Nashua and Manchester shows that even though it is popular for startups to locate in a city, being based in the suburbs may be the best decision a tech firm can make. The chance to build and grow at your own pace, without being beholden to VCs, can be incredibly freeing.

Similarly, being outside the tech echo chamber that ensnares many startups and causes them to lose touch with their customers can be highly beneficial. At a time when startups are desperately searching for space in cities, Nashua and Manchester, New Hampshire reveal an alternative.

Rather than just deciding to be located in a city because it is the hot thing to do, startups should think through that decision. For a certain group of startups, being in the suburbs is the best option.

Featured Image: Joseph Sohm/Shutterstock
Source: TechCrunch

Facebook spares humans by fighting offensive photos with AI

Facebook’s artificial intelligence systems now report more offensive photos than humans do, marking a major milestone in the social network’s battle against abuse, the company tells me. AI could quarantine obscene content before it ever hurts the psyches of real people.

Facebook’s success in ads has fueled investments in the science of AI and machine vision that could give it an advantage in stopping offensive content. Creating a civil place to share without fear of bullying is critical to getting users to post the personal content that draws in friends’ attention.

Twitter has been widely criticized for failing to adequately prevent or respond to claims of harassment on its platform; last year, former CEO Dick Costolo admitted, “We suck at dealing with abuse.” Twitter has yet to turn a profit and doesn’t have the resources to match Facebook’s investments in AI, but it has still been making a valiant effort.

To fuel the fight, Twitter acquired a visual intelligence startup called Madbits, and Whetlab, an AI neural networks startup. Together, their AI can identify offensive images, incorrectly flagging harmless images just 7 percent of the time as of a year ago, according to Wired. This reduces the number of humans needed to do the tough job, though Twitter still requires a human to give the go-ahead before it suspends an account for offensive images.

[Video: Facebook shows off its AI vision technologies]

A Brutal Job

When malicious users upload something offensive to torment or disturb people, it traditionally has to be seen and flagged by at least one human, either a user or a paid worker. Offensive posts that violate Facebook’s or Twitter’s terms of service can include content that is hate speech, threatening or pornographic; that incites violence; or that contains nudity or graphic or gratuitous violence.

For example, a bully, jilted ex-lover, stalker, terrorist or troll could post offensive photos to someone’s wall, a Group, Event, or the feed. They might upload revenge porn, disgusting gory images, or sexist or racist memes. By the time someone flags the content as offensive so Facebook reviews it and might take it down, the damage is partially done.

Previously, Twitter and Facebook had relied extensively on outside human contractors from startups like Crowdflower, or companies in the Philippines. As of 2014, Wired reported that estimates pegged the number of human content moderators at around 100,000, with many making paltry salaries around $500 a month.

The occupation is notoriously terrible, psychologically injuring workers who have to comb through the depths of depravity, from child porn to beheadings. Burnout happens quickly, workers cite symptoms similar to post-traumatic stress disorder, and whole health consultancies like Workplace Wellbeing have sprung up to assist scarred moderators.

Facebook’s Joaquin Candela presents on AI at the MIT Technology Review’s Emtech Digital conference

But AI is helping Facebook avoid having to subject humans to such a terrible job. Instead of making contractors the first line of defense, or resorting to reactive moderation where unsuspecting users must first flag an offensive image, AI could unlock active moderation at scale by having computers scan every image uploaded before anyone sees it.
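
To make that pipeline concrete, here is a minimal sketch of proactive moderation, assuming a hypothetical scoring model. The thresholds and function names are invented for illustration and are not Facebook’s actual systems:

```python
# Hypothetical sketch of proactive image moderation: every upload is scored
# by a vision model before anyone sees it. All names and thresholds invented.

REVIEW_THRESHOLD = 0.6   # ambiguous images get routed to a human moderator
BLOCK_THRESHOLD = 0.95   # high-confidence offensive images are quarantined

def offensive_score(image_bytes: bytes) -> float:
    """Placeholder for a trained vision model returning P(offensive)."""
    return 0.0

def send_to_human_review(image_bytes: bytes) -> None:
    """Placeholder for a human-review queue."""

def moderate_upload(image_bytes: bytes) -> str:
    score = offensive_score(image_bytes)
    if score >= BLOCK_THRESHOLD:
        return "quarantined"               # never shown to users or moderators
    if score >= REVIEW_THRESHOLD:
        send_to_human_review(image_bytes)  # humans see only the ambiguous slice
        return "pending"
    return "published"

print(moderate_upload(b"\x89PNG..."))  # -> "published" with the stub model
```

The more confident the model becomes, the narrower the ambiguous slice that humans ever have to look at.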

Today we have more offensive photos being reported by AI algorithms than by people

— Facebook’s Joaquin Candela

Following his talk at the MIT Technology Review’s Emtech Digital conference in San Francisco this week, I sat down with Facebook’s Director of Engineering for Applied Machine Learning Joaquin Candela.

He spoke about the practical uses of AI for Facebook, where 25 percent of engineers now regularly use its internal AI platform to build features and do business. With 40 petaflops of compute power, Facebook analyzes trillions of data samples along billions of parameters. This AI helps rank News Feed stories, read aloud the content of photos for the visually impaired and automatically write closed captions for video ads that increase view time by 12 percent.

Facebook’s Joaquin Candela shows off a research prototype of AI tagging of friends in videos

Candela revealed that Facebook is in the research stages of using AI to build out automatic tagging of faces in videos, and an option to instantly fast-forward to when a tagged person appears in the video. Facebook has also built a system for categorizing videos by topic. Candela demoed a tool on stage that could show video collections by category, such as cats, food, or fireworks.

But a promising application of AI is rescuing humans from horrific content moderation jobs. Candela told me that “One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people. The higher we push that to 100%, the fewer offensive photos have actually been seen by a human.”

Facebook, Twitter, and others must simultaneously make sure their automated systems don’t slip into becoming draconian thought police. Built wrong, or taught with overly conservative rules, AI could censor art and free expression that might be productive or beautiful even if it’s controversial. And as with most forms of AI, it could take jobs from people in need.

Sharing The Shield

Defending Facebook is an enormous job. After his own speaking gig at the Applied AI conference in San Francisco this week, I spoke with Facebook’s director of core machine learning Hussein Mehanna about Facebook’s artificial intelligence platform Facebook Learner.

Mehanna tells me 400,000 new posts are published on Facebook every minute, and 180 million comments are left on public posts by celebrities and brands. That’s why beyond images, Mehanna tells me “What we’re trying to do is build a system to understand text at near-human accuracy across 40 languages.” It’s called ‘Deep Text’.
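
Deep Text itself is proprietary, but as a rough, hedged stand-in, a language-agnostic text classifier often starts from character n-grams, which don’t depend on any single language’s word list. The toy data and model below are my own illustration, not Facebook’s:

```python
# Toy sketch of a language-agnostic abusive-text classifier using scikit-learn.
# Character n-grams work across many languages without per-language word lists.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["have a great day", "thanks for sharing",
         "I will hurt you", "go away, nobody wants you"]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = abusive; real systems train on far more data

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)
print(model.predict(["you are wonderful"]))  # expected: [0]
```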

This technology could help Facebook combat hate speech. Today Facebook, along with Twitter, YouTube and Microsoft, agreed to new hate speech rules. They’ll work to remove hate speech within 24 hours if it violates a unified definition for all EU countries. That time limit seems a lot more feasible with computers shouldering the effort.

Facebook’s Hussein Mehanna speaks at the Applied AI conference

That same AI platform could protect more than just Facebook, and thwart more than just problematic images.

“Instagram is completely on top of the platform. I’ve heard they like it very much,” Mehanna tells me. “WhatsApp uses parts of the platform…Oculus use some aspects of the platform.”

The application for content moderation on Instagram is obvious, though WhatsApp sees a tremendous amount of images shared too. One day, our experiences in Oculus virtual reality could be safeguarded against the nightmare of not just being shown offensive content, but being forced to live through the scenes depicted.

We don’t see AI as our secret weapon

— Facebook’s Hussein Mehanna

But to wage war on the human suffering caused by offensive content on social networks, and the moderators who sell their own sanity to block it, Facebook is building bridges beyond its own family of companies.

“We share our research openly,” Mehanna explains. “Deep Text is based on research that was out there [including papers published as far back as 2011]. These are the crown jewels,” yet Facebook is sharing its findings and open-sourcing its AI technologies. “We don’t see AI as our secret weapon just to compete with other companies.”

In fact, a year ago Facebook began inviting teams from Netflix, Google, Uber, Twitter, and other significant tech companies to discuss the applications of AI. Mehanna says Facebook’s now doing its fourth or fifth round of periodic meetups where “we literally share with them the design details” of its AI systems, teach the teams of its neighboring tech companies, and receive feedback.

Mark Zuckerberg cites AI vision and languages as part of Facebook’s 10 year roadmap at F8 2016

“Advancing AI is something you want to do for the rest of the community and the world because it’s going to touch the lives of many more people,” Mehanna reinforces. At first glance, it might seem a strategic misstep to aid companies that Facebook competes with for time spent and ad dollars.

But Mehanna echoes the sentiment of Candela and others at Facebook when he talks about open sourcing. “I personally believe it’s not a win-lose situation, it’s a win-win situation. If we improve the state of AI in the world, we will definitely eventually benefit. But I don’t see people nickel and diming it.”

Sure, if Facebook kept its technology to itself, it could hold on to an advantage while others spend on human content moderation and other toil avoided with AI. But by building and offering up its underlying technologies, Facebook could make sure it’s computers, not people, doing the dirty work.

Robots date, mate, and procreate 3D printed offspring in ‘Robot Baby’ project

Researchers in the Netherlands claim to have created the world’s first “robots that procreate.” What does that mean exactly? Well, child, when two robots’ fitness evaluation algorithms come to a successful conclusion, something beautiful happens. You’ll know when you’re older — or if you scroll down.

“This breakthrough is a significant first step in the Industrial Evolution and can play an important role in, for instance, the colonization of Mars,” reads the press release for the “Robot Baby” project. Well, from the humble acorn grows the mighty oak and all that, but these claims should be taken with a fistful of salt.

“Mating” and “evolving” robots appear now and then in research, from self-reproducing “molecubes,” to a robot “mother” selecting the best of its brood, to robo-fish competing and sharing their “genes.” (Sorry, I have a quotation marks “quota” I’m trying to fill in this “article.”) So far, no gray goo or robot armies.

But the new project still offers something new, if only as a proof of concept. Two robots (or conceivably more in the future), made of semi-random configurations of motorized blocks and capable of lurching movement, are given the impetus to make their way toward a bright light (in biology this behavior is called phototaxis). Those that arrive quickly, proving their locomotive merit, can contact one another and evaluate whether they’re suitable mates.

There is precious little detail on what that process consists of, but one can imagine: similar block count and leg length, comparable times in the half-meter dash to the light.

At any rate, having met, they go on a few dates (to the router) and, having fallen in love at first byte, they submit their genetic material — that is, the code and hardware they are running — to be mixed and synthesized into a new robot. That’s the sex part, in case you were wondering.
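
The “mixed and synthesized” step is, in effect, genetic crossover. Here’s a hedged sketch with an invented genome encoding; the lab’s actual representation isn’t described here:

```python
import random

def crossover(parent_a: dict, parent_b: dict) -> dict:
    """Pick each body part from one parent at random to form the child genome."""
    return {part: random.choice([parent_a[part], parent_b[part]])
            for part in parent_a.keys() & parent_b.keys()}

mom = {"left_leg": "short", "right_leg": "short", "tail": "stabilizer"}
dad = {"left_leg": "long", "right_leg": "long", "tail": "none"}
print(crossover(mom, dad))  # e.g. dad's right leg, mom's left leg and tail
```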

The resulting robaby, in this case a hideous chimera consisting of dad’s right leg, mom’s left leg and tail/stabilizer, and god knows what babbling, buzzing confusion in its newborn silicon brain, is printed piece by piece and assembled by the lab techs.

While the claims of the researchers are something of a reach, they aren’t absurd. Self-modifying robots can adapt to situations and environments on their own rather than waiting on instructions from human monitors.

And natural selection algorithms can produce unique solutions that people, with their puny, fleshy brains, might never hit on. If someone just straight up proposed a giraffe, for instance, would you approve? Yet they seem to be doing just fine. (This programme investigating their ludicrous anatomy was very interesting.)

If you doubt the evolutionary ingenuity of computer-controlled natural selection, look up “evolved virtual creatures” or find a way to run the supremely entertaining BreveCreatures.

The Robot Baby project is the robot baby of Guszti Eiben, professor of AI at Vrije Universiteit in Amsterdam. It was presented as part of the Campus Party traveling tech fair.

Featured Image: VU Amsterdam
Source: TechCrunch

Periscope introduces real-time comment moderation

Live-streaming app Periscope is rolling out a new experiment with real-time comment moderation, the company announced today. While its parent company Twitter has struggled over the years with spam and abuse – without much success, let’s be honest – Periscope is aiming to go a different route with the introduction of a community-policed system where users can report and moderate comments as soon as they appear on the screen.

Until today, viewers have been able to type in a text entry box in the Periscope app and then see their comments overlaid on the live video stream during the broadcast. As others added their comments, the older ones would float off the screen.

However, in terms of managing harassment and abuse, Periscope only offered a set of tools similar to Twitter – that is, users could report abuse via in-app mechanisms or block individual users. You could also restrict comments only to people you know, but this is less desirable for those interested in engaging with a wider, more public community on the app.

With the new system, Periscope viewers can report a comment as spam or abuse, which causes that comment to disappear from their screen immediately and prevents them from seeing other messages from the same person during the broadcast. Once a comment is flagged, Periscope randomly selects a few other viewers to vote on whether they agree it is spam or abuse.

If the majority of viewers indicate the comment is spam or abuse, the commenter is notified their ability to chat is being disabled temporarily. If they again have a comment flagged during the broadcast, they lose the ability to chat for the duration of the live stream.
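
The flow is simple enough to sketch in code. The jury size, names and strike rules below are an illustrative reconstruction of what the article describes, not Periscope’s implementation:

```python
import random

JURY_SIZE = 3
strikes = {}  # commenter -> number of majority-confirmed flags this broadcast

def handle_report(comment: str, commenter: str, viewers: list, ask_vote) -> str:
    """ask_vote(viewer, comment) returns True if that viewer deems it abusive."""
    jury = random.sample(viewers, JURY_SIZE)        # a few random viewers vote
    guilty = sum(ask_vote(v, comment) for v in jury)
    if guilty <= JURY_SIZE // 2:                    # no majority: no action
        return "no action"
    strikes[commenter] = strikes.get(commenter, 0) + 1
    if strikes[commenter] == 1:
        return "chat disabled temporarily"
    return "chat disabled for rest of broadcast"

viewers = ["ana", "ben", "cara", "dev", "eli"]
always_yes = lambda viewer, comment: True
print(handle_report("buy followers now!!", "spammer42", viewers, always_yes))
# -> "chat disabled temporarily"
```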

By asking a random group of online users to confirm whether a comment is spam or abusive, Periscope could potentially cut down on community-based attempts at censoring unwelcome viewpoints. That is, if a broadcast is on a divisive topic and community voting were not involved, someone could target users with differing ideas by flagging their comments. But with voting in place, if the online community doesn’t agree that a comment is spam or abuse, the comment’s poster remains unaffected.

Whether or not comments are moderated is up to the broadcaster, and viewers can also choose to opt out of voting via their Settings, the company notes.

What’s interesting about this launch is that it introduces a fairly simple system for managing bad actors on the service – something that Twitter itself could learn a thing or two from, in fact. Twitter is also a real-time platform, and has often been the first source for a number of breaking news stories over the years.

But when events are shared instantaneously, that leaves little time for the company itself to react to reports of abuse or obscene or graphic content. Automated tools like this new voting system in Periscope could help.

The launch also comes at a time when the app has been under fire for serving up livestreams of extremely disturbing content, including one woman’s stream of her suicide by train, the premeditated assault of a man by two teens in France, and the livestream of an Ohio teen’s rape. Meanwhile, a video posted on Twitter showed a gang rape victim in Brazil lying naked and unconscious, something that has now led to protests in the country as citizens marched on the Supreme Court.

While graphic incidents like these are not common, they pose a much greater challenge for platforms like Periscope, where users don’t have to sign up using real names. Instead, Periscope users can choose to sign up with a phone number or a Twitter account, and the latter is already fairly anonymous, as it only requires an email or phone number itself. By obfuscating a user’s true identity, these public platforms invite bad behavior.

That being said, we understand that Twitter is not considering implementing this same system on its site, as tweets are not the same as Periscope comments – that is, they’re not real-time and ephemeral. Twitter doesn’t believe that tweets require immediate and actionable moderation. We’re not so sure about that.

The new comment moderation system is rolling out now, via an app update.

Source: TechCrunch

Y Combinator announces basic income pilot experiment in Oakland

Y Combinator announced today that it would launch its first basic income experiment in Oakland, CA. The startup accelerator began researching the concept of basic income last fall and will soon start making payments.

Y Combinator initially said that it wanted to pay basic income to a group of people over a five-year period and study the effects, but now the company has changed course. It will begin the research with a short-term study in Oakland, Y Combinator announced in a blog post: “Our goal will be to prepare for the longer-term study by working on our methods — how to pay people, how to collect data, how to randomly choose a sample, etc.” Depending on how the pilot goes, Y Combinator may continue with the long-term study.

The idea of basic income, which would guarantee a base level of financial support for every person, has gained steam recently. In just a few days, the Swiss will vote on a referendum for basic income. Basic income also has its champions in tech. Y Combinator president Sam Altman has argued that, as technology usurps jobs, the need for a universal basic income will become more pressing.

“In a world where technology eliminates jobs, it will mean that the cost of having a great life goes down a lot,” Altman tweeted today. “And I think we need something like basic income to have a cushion and a smooth transition to the jobs of the future.”

But the concept also has its detractors. One of the biggest questions about basic income is where the money will come from. Y Combinator, with its roster of wealthy investors, may not have to worry about funding its basic income project, but funding is a more pressing concern for governments. The Center on Budget and Policy Priorities (CBPP) has argued that a government-funded basic income would increase poverty by stripping funding from federal programs supporting the poor and instead inject that money into the middle and upper classes.

“Suppose UBI [universal basic income] provided everyone with $10,000 a year,” CBPP’s Robert Greenstein wrote today. “That would cost more than $3 trillion a year — and $30 trillion to $40 trillion over ten years.” The Swiss government has urged voters to reject the basic income referendum, citing its cost.
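
Greenstein’s figure is straightforward arithmetic against a US population of roughly 320 million at the time:

```latex
\[
3.2 \times 10^{8}\ \text{people} \times \$10{,}000/\text{year} \approx \$3.2\ \text{trillion per year}
\]
```

Ten years of that lands squarely in the $30 trillion to $40 trillion range he cites.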

But Y Combinator sees the pilot program as a way to model basic income for the future, saying government funding may not be the right approach. Its Oakland research will be led by Elizabeth Rhodes, a recent PhD graduate from the University of Michigan.

“In our pilot, the income will be unconditional; we’re going to give it to participants for the duration of the study, no matter what. People will be able to volunteer, work, not work, move to another country—anything. We hope basic income promotes freedom, and we want to see how people experience that freedom,” Altman said.

The accelerator says it is already working with Oakland city officials and community groups to plan the pilot, which does not yet have an official launch date.

Featured Image: Jesse Richmond/Flickr UNDER A CC BY 2.0 LICENSE
Source: TechCrunch

Asus reveals revamped water-cooled gaming laptop with dual Nvidia GPUs

The trend in mainstream laptop design is to make machines as unrealistically thin as possible, even if that means sacrificing battery life and performance. Improvements in mobile CPUs have made super-thin laptops much faster than they once were, but what if that’s not enough? For the discerning on-the-go gamer, Asus has announced a new version of its monstrous GX700 gaming laptop, called the GX800, at Computex 2016. It has all the latest hardware plus a giant liquid-cooling docking station. It takes the phrase “desktop replacement” seriously.

This is a big, big laptop, even without the liquid-cooling dock. The display is 18.4 inches diagonally, with 4K resolution and support for Nvidia G-Sync. To power that display, the GX800 will have a pair of Nvidia GPUs configured in SLI, but Asus has said only that they are unannounced GPUs — it was the same deal last time with the GX700. The version of the GX800 on display at Computex had dual GTX 980 GPUs, and last year’s GX700 also turned out to ship with a GTX 980. Either the production version of this laptop will come with a newer GTX 1000-series GPU, or the “unannounced” aspect is simply that there are now two GPUs in SLI.

The GX800 will include the latest Intel Core i7 chip clocked at 4.4GHz and memory clocked at 3.8GHz. In combination with the GPUs, not only will you be able to play games at an incredibly high native resolution, but the G-Sync display’s refresh rate will be synchronized to the GPU to reduce tearing and lag. This laptop needs two 330W power supplies to keep all that hardware running, and it gets even more powerful when plugged into that massive water-cooling dock.

You can’t usually overclock laptops very much, but the GX800 is designed for that very thing. When you connect the laptop to the water cooling dock, it is capable of overclocking the GPUs as much as 236%. The docked GX800 can also push the CPU to its maximum clock without fear of overheating. If it’s anything like the GX700, the liquid from the cooling dock doesn’t actually circulate through the entire laptop when you plug it in. That would interfere with cooling when the dock wasn’t attached. Instead, the compressor moves liquid through a smaller loop that pulls heat away from the components via a more conventional heat pipe. It’ll run fine without the dock, but not as fast.

The larger frame of the GX800 has also allowed Asus to upgrade this year’s gaming powerhouse with a custom mechanical keyboard. Laptops are plagued by flimsy, low-travel keys, but Asus says this one will make gamers quite happy. It uses switches Asus designed in-house, called MechTAG (Mechanical Tactile Advanced Gaming). The company didn’t go into detail, but the keys are raised from the surface of the laptop, indicating they have a good amount of travel. The “tactile” part of the name suggests the MechTAG switches will have a slight tactile bump, like Cherry MX Browns on full-size keyboards. The GX800 also has full RGB backlighting in the keyboard.

The GX800 is expected to launch in August, but the price is unknown. Prepare yourself for sticker shock, though. The GX700 retailed for over $3,000 when it came out.

Source: ExtremeTech

Energysquare is a wireless phone charging pad that doesn’t use induction

Meet Energysquare, a thin charging pad designed so that you never have to plug in your phone charger again. Energysquare doesn’t rely on induction like most wireless chargers out there. Instead, it uses a conductive surface paired with a sticker on the back of your phone. The company is currently running a Kickstarter campaign.

I met the team in Paris and saw a working prototype of Energysquare. The main unit is a mousepad-sized pad with a grid of 25 metal squares. Metal squares are nice, but on their own, putting your phone on the pad doesn’t do much.

You’ll also have to put a sticker on the back of your phone. The sticker has two tiny metal dots, one at each end, as well as a USB or Lightning connector so you can plug it into your phone. After that, you’re set.

While I’m not a fan of putting stickers on the back of your phone, I’m just tired of plugging and unplugging my phone multiple times a day. The idea behind Energysquare is that you have a charging pad at work and one at home so that you can just put your phone on the table to charge it.

Compared to inductive chargers, Energysquare can charge at full speed and you can put multiple devices on one pad. As long as the two ends of the stickers are on two different squares, your phone will start charging.
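
The charging rule reduces to a simple geometric check: the sticker’s two dots must land on two different squares so the pad can drive current between them. This sketch is my own model of that rule; the square pitch and function names are assumptions:

```python
# The pad charges when the sticker's two contact dots land on two different
# squares of the 5x5 grid, so the pad can route power across them.

SQUARE_SIZE_CM = 4.0  # assumed square pitch, for illustration only

def square_at(x_cm: float, y_cm: float) -> tuple:
    """Map a contact dot's position on the pad to a grid square (col, row)."""
    return (int(x_cm // SQUARE_SIZE_CM), int(y_cm // SQUARE_SIZE_CM))

def can_charge(dot_a: tuple, dot_b: tuple) -> bool:
    return square_at(*dot_a) != square_at(*dot_b)  # same square = no circuit

print(can_charge((1.0, 1.0), (9.0, 1.0)))  # True: dots span two squares
print(can_charge((1.0, 1.0), (2.0, 2.0)))  # False: both dots on one square
```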

The good thing is that the stickers are quite inexpensive: you can get a bunch of them for $10, so you can replace your phone without having to think about it. An Energysquare charging pad costs $65 on Kickstarter (€59).

Eventually, the startup wants to sell charging pads to train stations, airports and bars. This seems like a long shot, but it would certainly be quite useful to have charging tables everywhere you go.

Source: TechCrunch

Inventory shortages may herald rebirth of Apple’s aging Thunderbolt Display

Apple’s Thunderbolt Display, first introduced in the summer of 2011, may be getting a refresh at WWDC, if inventory shortages at retail stores are any indication.

Just try getting one at your local Apple store — dozens of locations near me are “ship to store” only, suggesting (as is often the case before a refresh) that stock has not been replenished or even, as a MacRumors source has it, sent back to the warehouse.

The 2560×1440 display was a nice, if expensive, option when it made its debut, and five years later… well, now it’s just expensive. Needless to say it is no longer a recommended product — but the good news is that a replacement may be on the way.

It would, of course, be the panel from the (again) nice, if expensive, 5K iMac Apple put out in late 2014. Thinner, higher resolution, better color, modern ports — all valuable things in a monitor. The problem is that few devices can process the 5K resolution — that’s four times the pixels found in a MacBook Pro — and push it over the Thunderbolt interface. The ones that can require two cables to do so, and it’s hard to imagine Apple allowing that to be the standard connection.
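
A back-of-envelope calculation shows the problem. Uncompressed 5K at 60Hz with 24-bit color needs roughly:

```latex
\[
5120 \times 2880 \times 60\,\mathrm{Hz} \times 24\,\mathrm{bit} \approx 21.2\ \mathrm{Gbit/s}
\]
```

That is before blanking overhead, and it already exceeds the roughly 17.3 Gbit/s of video payload a single DisplayPort 1.2 stream can carry, and crowds Thunderbolt 2’s 20 Gbit/s total; hence the two-cable workaround.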

Updates to the DisplayPort protocol might allow it, but it probably won’t happen in time for WWDC and any hardware that could reasonably be expected to be announced there. It’s hard to think of a workable alternative — a combination of wireless and wired display driving is conceivable, but only just. Apple has never shied away from leaving standards behind, however, so something new and strange may in fact be on the horizon.

My guess is we’ll find out more at WWDC, since a big new display and potentially a new method of driving it would be something on which developers will want a heads up. But it also depends on the company’s ability to get the necessary hardware lined up by then.

We’ll know one way or the other on June 13. Watch for our live coverage two weeks from now.

Source: TechCrunch

Upcoming Very Bad Idea #992: Using VR in the courtroom

Virtual reality is about to make its way into all sorts of places where it doesn’t belong, from the bathroom to the bedroom — and now to the courtroom. Industry knows no restraint unless it is imposed through the will of consumers, and in the case of the evidence-analysis industry, their consumers are our elected and appointed representatives. Whether these reps will encourage this ill-advised attempt to use technology to augment the justice system, we’ll have to wait and see.

Using VR to represent evidence is a natural idea. Throughout the history of Western law, lawyers have been using drawings, dioramas and, eventually, 3D renders of crime scenes and relevant locations to show a jury the nuanced relationships between pieces of evidence. Recreating a car crash, for instance, can make it far clearer who is at fault than simple eyewitness testimony can. People can lie, or be flustered and mistaken, but the science, they say, doesn’t lie.

“Ohhhh, so THAT’S why Google beat Oracle!”

But VR has a unique psychological impact that even high-quality 3D renders do not: interactivity. A render leads you through the crime scene by the nose; it’s a presentation of evidence. In a true virtual space, it’s less about showing jurors the evidence they are supposed to take into account and more about letting them see and understand the implications of that evidence for themselves. Just one problem: they’re not looking at the evidence but at a recreation, so any insight they derive on their own comes not from the crime scene, nor from the eyewitnesses who were there, but from a layman’s analysis of a metaphor for a crime scene, created by an interested party.

In a drawing or rendering, we can see the placement of objects and the overall shape of the scene, but the really nuanced stuff is too small and detailed for the most part. We can see that the knife was there on the floor, but the precise relationship of the knife to the entryway — that’s hard to tell without a big, dramatic zoom orchestrated by one or the other of the legal teams. If there is some tiny detail that needs to be highlighted, it is very clearly being framed that way by someone with a vested interest in the outcome.

VR recreations let jurors put on their detective hats — whether or not lawyers want them to.

A freely navigable VR environment, however, can be freely navigated and investigated by jurors. That puts incredible power in the hands of the creator of this environment — now the precise placement, color, size, etc of the virtual knife becomes crucially important. How could the cops not have seen it, the jurors might ask themselves. It’s bright red and, regardless, it’s right there! Objects in a VR environment feel very much like real objects, inviting jurors to take their every physical attribute as part of the scene — but they aren’t real objects, and this perception of extra insight doesn’t necessarily correspond to a reality.

There are all sorts of cases where this kind of insight could stop an overzealous prosecution, or nail a particularly clever criminal. But just as likely, it will give juries an inaccurate feeling of competence to do their own sleuthing, to connect dots in ways that seem to make sense based on how they see the situation, not how a cop did while on the scene.

Everyone deserves the chance to challenge any claim made against them, and certainly the physical relationship between objects is important to many, many prosecutions. But the introduction of virtual reality will be most impactful in the area not currently served by 2D renders. British barrister Jason Holt said that he wonders “how much difference going to a crime scene in 3D will make, compared to a standard DVD and video cameras, which are used at the moment to record similar information.”

That is to say, existing technologies are more than capable of illustrating the basic facts about a scene. The only applications that need VR for a particular piece of communication are those that are going to use its unique attribute (interactivity) to its benefit. That likely means quite a few cases where openly showing your interest in highlighting a fact could damage that fact’s credibility with the jury — so you just leave the fact out, and let the jury notice it for themselves.

Thus, putting jurors “into” a crime scene is in effect doubling down on the human element in criminal conviction, not eliminating it.

Source: ExtremeTech

Xbox One price drops to $299 just in time before unveiling slim Xbox One at E3

Microsoft just dropped the price of its gaming console. The Xbox One now costs $299 in the U.S. with a 500GB hard drive, or €299 in Europe. For $319, you get a 1TB hard drive. And bundles also get a $50 price cut.

Interestingly, this news comes right before E3. Microsoft is set to announce a new, slimmer Xbox One. Rumor has it that it should be 40 percent smaller and come with a 2TB hard drive. (Maybe Microsoft is going to drop the Blu-ray player?)

Next year, Microsoft could also announce a more powerful Xbox One, so the slim model is just an intermediate step to keep things fresh ahead of this year’s holiday season. Sony is also set to release a more powerful PlayStation 4 ‘Neo’ later this year.

Today’s price drop could be a way to clear out inventory and make room for the slim Xbox One. Or maybe Microsoft wants to be more aggressive on pricing, as it is lagging behind Sony in console sales.

Sony recently announced 40 million sales for the PlayStation 4. Microsoft has stopped reporting unit sales, but VGChartz estimates that Microsoft has sold around 21 million Xbox One units so far.

Either way, Microsoft is going to tell us more about the new Xbox One at its E3 conference on June 13th.

Source: TechCrunch