CounterCraft bags $1.1M to fire up a security decoy play

Spain-based security startup CounterCraft, founded last September, has closed a €1 million (~$1.1M) seed round to accelerate development of a B2B security decoy technology designed to engage hackers and keep them harmlessly occupied while also providing tools to analyze what’s going on.

The startup is working on “a variety of techniques, including deception, to lure adversaries into exposing themselves,” as co-founder and CEO David Barroso puts it.

“We see counter-intelligence as a necessary stance in cyber security independent of the technology that supports it,” adds co-founder and CMO Dan Brett. “Until now it has been frustratingly manual to design and deploy campaigns to detect, study and engage with your adversaries. We offer a product to automate and manage this process.”

Barroso argues there’s a step-change in security thinking underway, shifting from a ‘defend against all threats’ mindset towards permitting some attack vectors “in a controlled environment” — with CounterCraft’s particular sales pitch being the ability “to decide to eliminate them, study their movement and behaviour or even manipulate them”.

The team points to investor dollars pouring into a slew of security startups also plying a nascent decoy furrow for tackling attacks — name-checking the likes of TopSpin, Attivo, Guardicore, illusive Networks, YC-backed Cymmetria and TrapX, and noting they have collectively bagged more than $100 million in VC funding since late 2014.

CounterCraft is based out of Telefónica’s Wayra Madrid accelerator, and its funding comes via this program — with specific investors in the seed round including Adara Ventures, Orza Investments and Telefónica Open Future_.

Its early focus for expansion is Europe and the Middle East, with Barroso noting that many of its rivals hail from Israel but are running out of the US — leaving room, he reckons, for a European team to address a less well served region with its own flavor of decoy tech.

Commenting on the funding in a statement, Adara Ventures’ managing partner Alberto Gómez adds that the EMEA market “appreciates vendors with cutting-edge vision and technology, together with the focus and proximity to serve their needs”.

While CounterCraft is not saying very much about what its core tech actually is (security startups do usually err on the side of disclosure caution), the team is claiming some early traction from a handful of Fortune Global 2000 clients in the financial and retail industries, and in government.

“Such companies are looking for other ways to defend their organisations, along with being more active in their security posture,” says Barroso, adding: “We do not agree with the idea of ‘hacking back’ but there is a broad spectrum between being passive and ‘hacking back’.”

Right now they’re running a closed beta with their early users, with a plan to open that up to more interested third parties this September. A full commercial launch is pegged for January 2017, assuming all goes to plan, as product development continues with the beta users.

“Since we are still in an early stage of the product, we want to work closely with those early adopters and build a robust product that can help them when proposing active defence strategies,” Barroso adds.

The subscription business model allows companies to run CounterCraft’s so-called “counter-intelligence campaigns” internally and externally. Those wanting to run internal campaigns need to install its tech inside their network, but for those wanting to run external campaigns a SaaS version is also available.

Source: TechCrunch

Learn-to-code startup CK targets the Minecraft modding craze

The learn-to-code space is packed with ideas. But when it comes to what children want to do online, towering over everything else is the build-it-yourself Microsoft-owned Minecraft platform. All lesser virtual, creative playgrounds are doomed to be overshadowed by this pixellated silhouette.

Minecraft is clearly great fun for kids, but less great if you’re a startup trying to build your own learn-to-code gaming environment… But, as the saying goes, if you can’t beat them, join them. So that’s exactly what London-based Code Kingdoms has been doing.

The startup launched its own learn-to-code game out of beta back in April 2015. And while it’s still working on that platform, it’s also since shifted efforts to focus on serving the Minecraft modding community with a subscription service to help kids learn JavaScript.

It’s also placing its eggs in the BBC micro:bit basket — aka the programmable single board computer gifted to one million UK schoolkids by public service broadcaster the BBC earlier this year.

At this point Code Kingdoms estimates some 4,900 UK schools are using its micro:bit software, and a further 30 are using its software for Minecraft modding, while the original Code Kingdoms game is being used by around 1,200 schools.

“Making an entertaining game alongside offering an intensive coding environment for kids and making strong video content was/is a real challenge,” admits co-founder and CEO Ross Targett. “Now we focus on teaching kids to design and code things using existing IP i.e. Minecraft, micro:bit, rather than making our own from scratch. It has made the business easier to manage and allowed us to focus on our strengths.”

Targett says the startup sees its core strength as being its educational content, along with the code editor it has designed to get kids writing code. This JavaScript teaching platform targets the six- to 13-year-old age range.

“We place great emphasis on educational value and this is what we’re good at,” he adds.

Code Kingdoms closed a $1.4 million seed round at the end of 2015, led by Initial Capital and Blenheim Chalcot — which it’s only just announcing now, having been focused on stealthy product dev, according to Targett. 

It had previously raised around $410,000 in pre-seed funding, including from U.S.-based seed fund SparkLabs Global Ventures. As part of the most recent funding round Initial partner Tarek Abuzayyad has joined its board. 

The seed is primarily for growing the team and focusing on “product/market fit within our current model”, says Targett.

Code Kingdoms’ newest product — CK for Modding — is a Minecraft-popularity-piggybacking subscription service that teaches Java through interactive videos and its online code editor. Pricing is £9.99 ($14.99) a month, for which kids get access to 40+ hours of coding course materials, a fully customisable Minecraft server and access to the web-based code editor.

“We’re already close to achieving [product/market fit] and thus the focus has shifted into expanding into new regions, offering B2B services to expand revenue and soon offer a wider product range,” adds Targett.

Despite its London base and an initial focus on UK schools, Targett says the majority of Code Kingdoms’ business is outside the UK, with the US and Canada being its biggest markets — so it’s planning a rapid expansion of its direct-to-consumer subscription modding product there.

For its B2B camps product, Targett says it’s been seeing “significantly more traction” in Asia, including in Malaysia, Singapore and Hong Kong.

“Our assumption is that these economies are investing heavily both financially and culturally into STEM subjects and particularly Computer Science in order to transition their economies as modern and highly productive service exporters. This coupled with a highly competitive generation amongst parents when it comes to education has meant the opportunity is both highly lucrative and growing,” he adds.

Source: TechCrunch

SwiftKey officially unwraps its emoji prediction app 🎁😆💃🔛

Keyboard app maker SwiftKey, which was acquired by Microsoft for $250 million in February, has officially launched its first product since that acquisition — and it’s an emoji-predicting keyboard app, called Swiftmoji.

We spotted the company’s foray into emoji predictions back in May, when it was running Swiftmoji as a closed beta. The free app is now available for general download on iOS and Android. It only supports English language use for now.

Emoji predictions are crowdsourced, based on SwiftKey’s keyboard usage data, but will also draw on each user’s own emoji preferences over time.

Swiftmoji offers emoji suggestions based on what the user has just typed, with the idea being to speed up the hunt for the perfect visual punctuation to your text — so less swiping through ever-expanding screens of smilies, objects and symbols looking for that elusive French flag, for instance.
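
The mechanic being described is essentially a ranking problem: score candidate emoji against the words just typed, blending crowd-level usage statistics with the individual user’s own history. A minimal sketch of that blending logic, in Python, might look like the following; the keyword table, weights and function name are hypothetical illustrations, not SwiftKey’s actual implementation.

```python
from collections import Counter

# Hypothetical crowd-sourced keyword -> emoji usage counts (illustrative only,
# not SwiftKey data).
CROWD_COUNTS = {
    "france": Counter({"🇫🇷": 900, "🍷": 300, "🇮🇹": 120}),
    "barcelona": Counter({"⚽": 800, "☀️": 450, "💃": 400, "❤️": 350}),
}

def predict_emoji(text, user_counts, top_n=5, personal_weight=2.0):
    """Rank candidate emoji for the text just typed.

    Blends crowd-level keyword statistics with the user's own emoji history,
    so emoji a person uses often float up their personal predictions.
    """
    scores = Counter()
    for word in text.lower().split():
        for emoji, count in CROWD_COUNTS.get(word, {}).items():
            scores[emoji] += count
    for emoji in scores:
        # Personal preference nudges the crowd ranking rather than replacing it.
        scores[emoji] += personal_weight * user_counts.get(emoji, 0)
    return [emoji for emoji, _ in scores.most_common(top_n)]

# A user who types about France and personally favours the wine glass emoji.
print(predict_emoji("viva la France", user_counts={"🍷": 400}))
```

In practice the crowd data would be far richer and continuously updated, but the principle of combining global and personal signals is the same.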

The app offers a range of potential emojis, based on your recent text — tapping on one of these will add it to the text. There’s also a feature called ‘emoji storm’ which, if you hold down on the ‘Swiftmoji add emoticon key’, will add the entire stack of emoji predictions in one long emotive line. Which is basically the emoticon equivalent of overusing exclamation marks. So it will probably give you a very quick way to annoy your friends (depending on your friends).

On the iOS app there’s also a frequently used emoji feature that puts your favorite emoticons one tap away. And a popular screen to surface the trendiest emoji, based on SwiftKey’s usage data. A third screen organizes all available emoji into easy-to-navigate categories such as people, food & drink, sports and so on.

The Android app is different, as we’ve previously noted, offering a full keyboard replacement (with other SwiftKey features) — with the emoji predictions positioned in a line above the keyboard for quick access. The company says the differences are down to the different frameworks each platform has for managing keyboards.

The iOS app interface is definitely less streamlined/more clumsy, requiring users to tap the Globe key to toggle between whatever text input keyboard they prefer (which might be SwiftKey’s keyboard app or not) and Swiftmoji in order to generate and view the emoji predictions. So it’s more akin to a keyboard plugin.

As with SwiftKey’s other keyboard apps, iOS users are also required to grant ‘full access’ to use the app — which means the company pulls data about emoji usage to feed its understanding of emoji trends (and to power predictions). (For more on iOS keyboard app permissions read my earlier primer here.)

Android users won’t be contributing any intel to SwiftKey’s data banks unless they sign in and opt into additional SwiftKey services, like backup & sync.

Hit and miss suggestions

Testing the app out ahead of launch, the predictions seemed a tad tenuous and/or hit and miss at times. For example, typing ‘viva la France’ did indeed yield the French flag emoji as the first prediction. However the second prediction was the Italian flag. Which it’s hard to imagine being useful.

Typing ‘David Cameron’, the name of the former UK Prime Minister, yielded the crying tears of joy emoji first, followed by rolling eyes emoji, and an unsure face — so was arguably rather more accurate. (Also being predicted here: two pig emojis — click here if you’re wondering why.)

While typing the Spanish city of ‘Barcelona’ included a football among the predictions, as well as the sun emoji, a flamenco dancer and lots of heart/love emojis. So nothing too off-piste there.

However the app also threw up some rather skewed suggestions for certain controversial keywords. For example, typing the word ‘nazi’ included some pretty tone-deaf suggestions — such as the kissing face emoji (errr), a thumbs up sign (hmm), two hands high fiving (ummm) and an American flag (… ).

While typing the word ‘feminists’ included the crying tears of laughter face, the sleeping face, the unimpressed face, the rolling eyes face, the hmm/thinking emoji and the medical mask face among the predictions. So, on aggregate, a rather negative visual assessment.

Yet emoji predictions for various religions appeared far less negative on aggregate, as if some tweaking of crowd-powered suggestions was going on behind the scenes.

A spokeswoman for SwiftKey’s owner Microsoft confirmed the company has “worked to reduce the chances of anyone using Swiftmoji to be caused offence from the emoji predictions suggested”, but she added that it would not be editing people’s own use of emoji — so any enforced tweaks will likely mainly apply to initial predictions. Usage of the app over time will feed it with your own emoticon preferences/prejudices.

“We have a responsibility to our users but are still giving people the option to use whichever emoji they like and in whatever way they like,” said the spokeswoman, adding: “Swiftmoji is meant to be a fun and easy way of using emoji — we don’t want to cause unnecessary offence. If you do come across something that you find offensive, please report it to us.”

Evidently SwiftKey has done more work in certain potentially sensitive areas (e.g. religions) than others to ensure its emoji prediction algorithms don’t surface controversial/unpleasant suggestions. So, as ever, the shiny algorithm can absolutely reflect existing social prejudices, including positive discrimination.

Out of curiosity I also tried some real names, and this resulted in a mixed bag of suggestions. TechCrunch editor Matthew Panzarino’s name yielded a string of emoji hearts, a stack of cash, the mischievous ghost emoji and a couple of flags (neither of them American). While my own name included a gun, a pink flower and two girls holding hands — none of which were emoji I’d have suggested for myself. Meanwhile my French colleague Romain Dillet’s emoji predictions ran the gamut of facial emoticons and ended with a lipstick (French?) kiss. So plenty of randomness across the board there, albeit as you’d expect given these are all relatively obscure real names.

The app’s emoji suggestions for famous names came across as more logical, with ‘Taylor Swift’ including various musical note emoji, for example. And ‘Donald Trump’ including the American flag and the poo emoji, as well as a skull, a train (making America great again?), and dual exclamation marks. While ‘Kim Kardashian’ surfaced a bunch of make-up emoticons and a princess crown.

Obviously the logic of suggesting emoji is never going to be an exact science. But there are always questions to be asked about the underlying workings of algorithms that will, ultimately, be feeding, shaping and reinforcing opinions.

After all, the design of emoticons themselves has not-so-subtly reinforced social stereotypes — e.g. of gender roles by, for instance, depicting female emoji as princesses, brides and having their hair done vs male emoji being detectives, policemen and doctors. So pushing emoticon choices at users at least requires forethought about the perspectives (good and bad) that your technology might be encouraging.

In terms of early learnings from what the spokeswoman described as “a very limited” Swiftmoji beta, she said nearly 60 per cent of emoji entered came via predictions.

The most popular emoji was the crying laughter face, although she added that the company found people using a wider variety of emoji when using Swiftmoji vs using SwiftKey.

“We put this down to emoji predictions surfacing emoji users hadn’t come across before,” she added.

Source: TechCrunch

DeepMind’s first NHS health app faces more regulatory bumps

It’s fair to say that Google-owned AI company DeepMind’s big push into the health space via data-access collaborations with the UK’s National Health Service — announced with much fanfare in February this year — has not been running entirely smoothly so far.

But there are more regulatory bumps in the road ahead for DeepMind Health.

TechCrunch has learned the company won’t continue using one of the apps it co-designed with the NHS until the software has been registered as a medical device with the relevant regulatory body, the MHRA.

That’s especially interesting given that this app, called Streams, has already been used for patient care in multiple NHS hospitals. The Royal Free NHS Trust previously told TechCrunch the app had been used by up to six of its clinicians in three “user tests” in its London hospitals.

Which, put another way, means a profit-driven commercial entity has been involved in a real-world test of an unregistered medical device on actual hospital patients.

As TechCrunch previously reported, the Royal Free Trust stopped using the app in question in May, in the wake of another controversy pertaining to its DeepMind collaboration. At that point we learned the pair were contacted by the MHRA to discuss whether the app should be registered as a medical device.

A spokesman for the MHRA told TechCrunch at the time that regulatory compliance is helped by parties pro-actively having “a discussion” before going ahead with tests — something which did not happen in the DeepMind/Royal Free case.

It has now emerged that the upshot of those after-the-fact discussions is DeepMind believes it does need to register the app.

A spokesman for the regulator told TechCrunch: “MHRA understands that the Streams software is based on the NHS England AKI algorithm and is likely to be a class I medical device.

“As a class 1 device it is subject to self-declaration. The manufacturer of the device must hold all relevant test and validation data [including for software]  to support the intended purpose for the device. The Competent Authorities within the EU (MHRA in the UK) can request to review all documentation pertaining to the device at any time.”

Announcing a second collaboration with another NHS Trust earlier this month, DeepMind’s co-founder Mustafa Suleyman took to Medium to write how treating NHS health data with respect “really matters”.

He went on to note: “There are different authorities that give different types of approvals and oversight for NHS data use: HSCIC, HRA, MHRA, ICO, Caldicott Guardians, and many, many more. We’re committed to working with all these groups, and making sure with their help that we get it right.”

Evidently DeepMind is going to need to rethink its modus operandi vis-a-vis pro-actively contacting the relevant healthcare regulators before it accelerates ahead with any more “user tests” powered by NHS data-sets.

Commenting on the latest development in its discussions with the MHRA, a DeepMind spokesperson told TechCrunch: “We’re still developing the prototype for the Streams app at this stage. We will of course ensure that it complies with all the applicable EU and UK medical device legislation before it is finalised, and we’re currently working with the MHRA on that basis. We would only place the Streams app on the market as a medical device after it had been fully certified, CE-marked and registered.”

“DeepMind is currently working with the MHRA to ensure that the device complies with all relevant medical device legislation before it is placed on the market,” a Royal Free spokesman added.

He confirmed the Trust remains “committed” to the app, although there is no word on when it might be used next.

At the time of publication the spokesman had also declined to confirm to TechCrunch how many patients the app was used on during those three “user tests” — so it’s impossible to verify the scope of the tests.

Both the Royal Free and DeepMind have maintained they have not yet run a clinical trial of the app in question, nor conducted a full on-the-market deployment — both of which would likely require them to first gain additional regulatory approvals.

But the question remains: at what point does a user test become a clinical trial? And without knowing the number of patients involved in these “user tests” how can we judge? The onus is therefore on the Royal Free to disclose that figure.

DeepMind’s first NHS collaboration, also with the Royal Free, garnered early controversy when it emerged how extensive the data-sharing agreement between the two parties is.

Under a very broad data-sharing agreement, which includes five years of historical hospital inpatient data, DeepMind gains access to potentially millions of NHS patients’ identifiable medical records — and all without asking for patient consent to use their data.

The scope of the data-sharing arrangement has been criticized by health data privacy groups such as MedConfidential, which has questioned why DeepMind is being provided with so much identifiable patient data.

The app in question aims to speed up the identification of a condition called acute kidney injury, and the pair claim they need a wide range of patient data for it to function for this. Yet Caldicott Guidelines do appear to suggest that targeting a condition affecting groups of patients would be considered ‘indirect care’, rather than the ‘direct care’ relationship needed to rely on implied consent to access patient identifiable data.

Another DeepMind NHS collaboration, with Moorfields Eye Hospital in London, does not involve patient identifiable data — although the nature of the data being shared (detailed biometric eye scans) means it might not be impossible to link scans back to individuals should there be a data leak.

In that instance around one million eye scans are being shared with DeepMind which will use the data to feed machine learning models in the hopes of being able to develop algorithms that can accelerate the identification of particular eye conditions.

While DeepMind is not currently charging the NHS for the work it’s doing in any of its publicly announced collaborations, it has confirmed it does intend to monetize the tools and systems it is building in future — and has also said it is exploring possible payment models based on performance outcomes.

Suleyman recently told the BBC:  “Right now it is about building the tools and systems that are useful and once users are engaged then we can figure out how to monetise them.

“The vast majority of payments made to suppliers in healthcare systems are not often as connected to outcomes as we would like.

“Ultimately we want to get paid when we deliver concrete clinical benefits. We want to get paid to change the system and improve patient outcomes.”

So the reality is that the publicly-funded NHS is freely providing health data-sets to a company that is using them to train machine learning models which, should they prove successful, could be used to bill the NHS in future — based on these trained models being more effective than alternative care options.

At present the Royal Free collaboration does not involve DeepMind applying AI to the data it is getting. But the pair have a wide-ranging memorandum of understanding in which DeepMind states its ambition to do so within the five-year initial timespan of the partnership.

Clearly the company has been moving very quickly in the health sector — and rather more quickly than certain NHS regulatory bodies would prefer.

(An ICO probe following criticism of the data-sharing arrangement between DeepMind and the Royal Free also remains “ongoing”, according to a spokeswoman.)

Given the accelerating pace of AI here, as I’ve said before, speed really is of the essence for the public to have a robust discussion about the rights and wrongs of handing profit-driven entities free access to publicly-funded data-sets.

The reality is that very valuable publicly funded data-sets are being freely handed over to train AI models that might well in future be charging the same NHS for their services.

So the question remains whether we are comfortable with a freemium commercial model being leveraged to acquire advantageous access to taxpayer-funded data-sets.

And, ultimately, what is the true cost of free?

Source: TechCrunch

Bulk data collection only lawful for fighting serious crime, says Europe’s top court

The European Court of Justice has issued a preliminary opinion on a data retention case brought by UK MPs and privacy rights groups seeking to challenge the government’s data retention regime under DRIPA.

The advocate-general’s opinion, published today, suggests governments may be able to apply general metadata retention obligations without falling foul of EU law — but it sets the bar for doing so at combating serious crime, and places renewed emphasis on respecting fundamental privacy rights.

The AG’s opinion is not legally binding but is highly influential, feeding into the deliberations of the ECJ judges who will pass final judgement — and whose opinion will undoubtedly influence and shape European legislation in this area.

DRIPA challenge

The UK’s much criticized Data Retention and Investigatory Powers Act was passed as emergency legislation back in 2014 by the then coalition government, following the ECJ striking down European data retention powers earlier that year. It includes a stipulation that telecoms companies retain their customers’ communications metadata for up to a year.

UK MPs including Labour’s Tom Watson and the Conservatives’ David Davis successfully challenged DRIPA in the High Court, which last summer ruled the rushed legislation was unlawful under European law. The Home Office appealed that ruling, however, and the case was referred to the ECJ for a judgement on whether DRIPA’s data retention regime is compatible with European law.

It’s worth noting that Davis has since withdrawn his name from the challenge — unsurprisingly so, given he’s since been appointed to a cabinet position under new Prime Minister Theresa May. (May was Home Secretary at the time of DRIPA, leading to the unusual situation of one of her new cabinet appointees having an active European legal challenge to her Home Office policies… A situation that clearly wasn’t compatible with Davis’ new role as Brexit Minister in May’s government.)

In his opinion today, the ECJ’s advocate general Henrik Saugmandsgaard Øe writes that:

a general obligation to retain data may be compatible with EU law. The option for Member States to impose such an obligation is, however, subject to satisfying strict requirements. It is for the national courts to determine, in the light of all the relevant characteristics of the national regimes, whether those requirements are satisfied.

He goes on to detail what would be necessary in order to meet his test of “strict requirements” — including that a general obligation to retain metadata “must be laid down by legislative or regulatory measures possessing the characteristics of accessibility, foreseeability and adequate protection against arbitrary interference”; and that it must “respect the essence of the right to respect for private life and the right to the protection of personal data” laid down by the European Charter of Fundamental Rights.

Any data retention legislation must also be “in the pursuit of an objective in the general interest”, he writes.

However he specifies that combating any crime would not be a good enough justification in his view; rather the bar is set at “serious crime”:

…solely the fight against serious crime is an objective in the general interest that is capable of justifying a general obligation to retain data, whereas combating ordinary offences and the smooth conduct of proceedings other than criminal proceedings are not.

“[T]he general obligation to retain data must be strictly necessary to the fight against serious crime, which means that no other measure or combination of measures could be as effective while at the same time interfering to a lesser extent with fundamental rights,” he adds.

Other stipulations include a set of conditions regarding access to data, the period of retention, and the protection and security of the data — as set out in an earlier judgement (Digital Rights Ireland) — “in order to limit the interference with the fundamental rights to what is strictly necessary”.

He also notes that a general obligation to retain metadata must be proportionate — weighed against the privacy risks posed by such an obligation to the democratic rights of citizens.

“[T]he serious risks engendered by that obligation within a democratic society must not be disproportionate to the advantages it offers in the fight against serious crime,” he adds.

Proportionality and privacy

What’s most obvious here is the emphasis on oversight and proportionality for legitimizing data retention powers. The 2013 Snowden revelations shone a light on how Western governments, including the UK, had been making shadowy landgrabs of data behind the scenes. There’s no doubt that such expansive and secretive surveillance regimes would not now stand up to legal scrutiny.

However there are still tricky judgements to be made on determining what is proportionate data retention, and assessing and managing how data retention impacts fundamental privacy rights — and how the “essence” of those rights can be protected when bulk collection is occurring.

It remains to be seen how the ECJ will rule on the DRIPA challenge, and it’s possible the court will elaborate on exactly these sorts of tricky points.

Giving some early thoughts on the AG’s opinion, University of East Anglia law lecturer and Internet privacy rights researcher, Paul Bernal, suggested the opinion is taking a very neutral stance — “leaving it to the Member states” to make judgements of proportionality and determine how they are respecting the essence of European fundamental rights.

“For the UK, that isn’t likely to be good news at all, particularly with Theresa May as PM,” he adds. “Unless there’s more in the detail, or the court rules very differently, I suspect this means the IP Bill will be acceptable within EU law (so long as that law applies!).”

As Bernal notes, the UK government is in the process of updating domestic surveillance legislation to replace DRIPA — which has a sunset clause — with the aim of passing the Investigatory Powers bill before the end of this year. The new UK Home Secretary is Amber Rudd.

The IP bill includes powers that require comms companies to capture and retain even more data than DRIPA, with a stipulation that ISPs harvest and store so-called Internet Connection Records, detailing the websites and services accessed by users for the past 12 months.

It also aims to enshrine various bulk capabilities in UK law, although these powers have faced opposition, including from the official opposition Labour party, and are the subject of an outside review — due to report later this summer. So it remains to be seen whether the government will make any amendments there.

The bill is continuing its passage through the House of Lords, with peers most recently voicing concern about the implications for encryption — and the government explicitly confirming the bill would grant powers to limit companies’ use of end-to-end encryption. Peers’ attempts to amend this portion of the bill were rebuffed by the government.

Last month the UK also voted in a public referendum to leave the European Union — casting further doubt on whether European laws, such as the charter of fundamental rights, will apply domestically in future, once (or if) the UK does end up leaving the EU.

Source: TechCrunch

Bubble wants to tap users’ social graphs for an on-demand babysitting app

Who would you trust more with your kids: a person sent by a babysitting agency or the sitter who watched your friends’ kids last weekend?

Bubble, an on-demand babysitting app (iOS and Android) launching in London today, reckons most people would plump for the second option. So it’s built an app for booking a babysitter that’s powered by recommendations pulled from its users’ social graphs — such as their Facebook friends and phone contacts.

“Users have the option of integrating their Facebook account with the app and allowing it access to their phone contacts. There is also a feature in the app which allows them to stamp their profiles with the schools and nurseries their kids go to. They also have unique referral codes which they can use to refer friends  (parents or sitters) to the app,” says co-founder Ari Last, explaining how Bubble is intending to tap into users’ social graphs to power trusted recommendations.

“We’ve built (and continue to iterate on) a social graph in the back-end that maps all of this data together to show for example a parent, how they might know a sitter via mutual friends. It could be that they have a mutual friend on Facebook, or it could be that the sitter was used by another parent whose kids go to the same school as our parent user.

“We show the user the entire chain of connection between them and the sitter and we then allow the user to message each of those connections for a ‘vouch’ if they want.”
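
Conceptually, that “chain of connection” is a shortest path through a graph whose nodes are users and whose edges are relationships (Facebook friendship, phone contact, shared school or nursery, a previous booking). A minimal Python sketch of how such a chain could be found with a breadth-first search is below; the example data and function names are assumptions for illustration, not Bubble’s actual back-end.

```python
from collections import deque

# Illustrative relationship edges: each pair is a mutual connection of some
# kind (Facebook friend, phone contact, same school, previous booking).
EDGES = [
    ("parent_anna", "friend_ben"),
    ("friend_ben", "sitter_chloe"),
    ("parent_anna", "parent_dev"),
    ("parent_dev", "sitter_chloe"),
]

def build_graph(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def connection_chain(graph, parent, sitter):
    """Return the shortest chain of mutual connections linking parent to sitter."""
    queue = deque([[parent]])
    seen = {parent}
    while queue:
        path = queue.popleft()
        if path[-1] == sitter:
            return path
        for neighbour in graph.get(path[-1], ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no chain of connection found

graph = build_graph(EDGES)
print(connection_chain(graph, "parent_anna", "sitter_chloe"))
# e.g. ['parent_anna', 'friend_ben', 'sitter_chloe'] — each intermediate
# contact is someone the parent could message for a 'vouch'.
```

Each hop in the returned path corresponds to one of the intermediaries Bubble says a parent can message for a vouch.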

So, in short, here’s (yet) another startup gunning to replace an old school agency model with a messaging platform plus user rating system. (We’re seeing something very similar quickly spinning up in the blue collar recruitment space, for example.)

Does Bubble do any vetting of sitters itself? Last says it runs an automated ID check on all users at sign up (both parents and sitters), adding that no one can use the app until their identity has been verified. After that, the vetting is crowdsourced — via users’ social graph contacts, users’ intuition when they message other users to do their own due diligence and, ultimately, the two-way rating process generated as a by-product of usage of the platform.

“On every sitter profile, the parent can read more about them and what certifications they have. The app is providing the parent with the info on the relevant sitter and they choose what’s best for them based on their own individual needs,” says Last, who previously chalked up time in roles such as biz dev and commercial partnerships for online marketplaces including Betfair and MarketInvoice.

“I saw from my time in fintech that there’s so much innovation happening in how platforms can smartly validate its user base instantly/automatically so we’re excited about leveraging some of these in the future — the social validation and identity checking is a good start,” he adds.

Users are not required to connect their Facebook profiles to use the app, but Last says it is “strongly” encouraged so they get the best experience by being as “well-connected in the app as possible”.

“As a parent, the more friends or sitters you have on the app the easier it will be for you to find a trusted sitter. And for a sitter, the more connections you have in the app, the more local families are going to want to book you. So it’s a win-win which we hope will encourage virality,” he adds.

At this point Bubble is seed funded to the tune of £175,000 by angel investors, including Mark Davies, ex of Betfair — so Last and co have clearly been tapping their own social graph to get this startup off the ground.

They have around 150 London-based babysitters signed up to the platform at launch. And some 300 users have signed up during an open beta running for the past 10 days.

Bubble is not taking a commission from sitters for using its (free) platform, but parents are charged a £3.50 fee for every sitting session booked via the app (though they are not charged any other fees for using the platform). All payment is handled in app, via Braintree.

In terms of competition, Bubble is aiming to disrupt traditional babysitting agencies, some of which Last notes are moving online and going mobile. He says the latter element is key to its strategy as the hope is to help time-strapped parents be more spontaneous about their social lives, rather than having to schedule a babysitter well in advance. (Last and his co-founder are both dads — touting their own experience of trying to pin down babysitting services as the inspiration for launching the app).

“The concept of a purely mobile, transactional, on-demand app in this space is new in the UK — particularly one using social data to build trust as we intend to,” he says, discussing the competitive landscape. “There is care.com and other smaller, similar type websites but that is more for longer term needs and it covers a lot of care verticals.

“You can book a sitter in advance on our app but we’re placing a lot of emphasis on creating an engaging on-demand experience – giving parents that ability to be spontaneous again.”

So what happens if something goes wrong? Who is liable? As with the vast majority of online marketplaces, the risks are in the hands of the users who have to judge whether they are comfortable with a particular transaction themselves.

“We’re a marketplace platform and it’s up to both the parents and sitters on bubble to choose who they interact with on the app,” says Last.

“It’s important for us to stress that our ambition here is not to be ‘safe enough’ but actually be the safest way for someone to find a sitter. Social validation — using the people who your friends vouch for and use themselves is a great way to layer transparency and security into the process of finding someone to look after your kids.”

He also points out that the transparency being two-way helps babysitters too.

“Speaking to them it was really interesting to hear how lots of them often show up to people’s houses not knowing much about the parent or what to expect. After all they’ve often got the job via a text message from someone they don’t know saying “you sat for my friend Sarah last week so can you sit for me tonight”. And most of them would take that job,” he notes.

“With bubble — the transparency is two-way. They get to view a parent’s profile and any mutual contacts before accepting a sit. Sitters are also rating parents at the end of the night (not just the parents rating the sitter) so they get to view that rating too. We’re encouraging the best behaviour all round.”

Encouraging best behavior is one thing, but guaranteeing safety is quite another. So it remains to be seen how parents’ trust instincts vis-a-vis their kids’ safety will play out in the crowd-rating arena.

Source: TechCrunch

UK government warned to act fast on Brexit risks

A UK parliamentary committee looking at issues pertaining to the digital economy has warned of multiple risks in the wake of last month’s referendum vote to leave the European Union.

The committee is also urging the government to set out key objectives for regulating what it dubs “disruptive change” — urging a focus on promoting productivity, innovation, and customer choice and protection, and suggesting that users of online platforms should be more involved in solutions to improve compliance with existing regulations.

Brexit risks

In a report published today, the Business Innovation and Skills committee urges the government to act quickly in the wake of the referendum vote — citing multiple risks to the domestic tech industry, including ongoing access to skills and the risk of a post-Brexit brain drain; the UK’s fintech sector dominance being eroded if it loses access to the single market in financial services; and investor confidence ebbing and businesses relocating to other European countries if the UK is outside the European digital single market.

“The digital sector relies on skilled workforce from the European Union, and those individuals’ rights to remain in the country must be addressed, and at the earliest opportunity,” the committee notes in its report.

“The Government needs to provide clarity surrounding skills, post referendum, otherwise skills and talent will be lost to other countries,” it warns.

“We could have led on the Digital Single Market, but instead we will be having to follow,” the committee writes, recommending that the government address the issue of whether businesses will be able to access the European Digital Single Market, “if they want to do so”.

“The Government must address this situation as soon as possible, to stop investor confidence further draining away, with firms relocating into other countries in Europe, to take advantage of the Digital Single Market,” it adds.

The committee also urges the government to explain how its forthcoming Digital Strategy will be affected by the referendum result, and suggests the document sets out a list of “specific, current EU negotiations relating to the digital economy”.

The Digital Strategy has been delayed for more than six months, with the government also putting out a call for public and industry contributions in December — a move the committee queries, asking the government to explain why it did this.

Following that call the minister for digital issues, Ed Vaizey, said the government would delay the publication of the strategy until after the EU referendum vote. In the event, Vaizey himself is now a casualty of Brexit, with the new Prime Minister, Theresa May, reshuffling her top team and replacing him with Matt Hancock.

The committee is also asking for clarity on how the strategy has changed since it was originally drafted. (It previously took evidence from Vaizey on this but appears to have been given only a rather sparse explanation of the strategy’s contents at that point.)

“While the Government is supporting the digital economy, including support of Innovate UK, Tech City and Tech North, there is no overall strategy for this support,” the committee writes. “We hope that the digital strategy will provide an overview of present and future Government policy on the digital economy, which will be published as soon as possible, and in its reply the Government must provide us with an update of any changes made to the strategy since it was originally written.”

“The Digital Strategy must address head on the status of digitally-skilled workers from the European Union who currently work in the UK. The digital sector relies on skilled workforce from the European Union, and those individuals’ rights to remain in the country must be addressed, and at the earliest opportunity,” it adds.

However, given the ongoing fallout from Brexit, it seems safe to expect further delays to the publication of the government’s Digital Strategy. After all, the new minister for digital issues has only been in post since Friday.

Regulating disruption

On the challenge of regulating disruptive businesses, the committee writes that public policy needs to be future proofed “as far as possible, to ensure that the need for constant regulatory reform is minimised”.

Specifically it recommends regulation based on “agreed principles”, with a focus on consumer interests of quality, choice, cost and safety.

The committee notes the UK government has generally, up to now, taken a hands-off approach to regulating tech platforms — an approach it says it supports, noting this is also in line with an earlier House of Lords report on online platforms (which judged no new regulation is needed to govern the operation of online platforms).

It further suggests that regulation to ensure “reasonable protection” for workers supplying labour to online platforms should be “either given or offered” (emphasis theirs) — suggesting tacit support for some form of opt-out options for employment rights where there is operational pressure stemming from online platform business models.

The committee’s preferred phrase here is “reasonable employment conditions”.

“We agree that regulation should ensure that reasonable protection is either given or offered to individuals working in or using business models based on digital or disruptive technologies,” it writes, adding: “It is right, for example, that customers have clear evidence and reassurance that Uber drivers and their cars have been checked fully, and that accommodation booked through Airbnb has adequate insurance.”

On regulatory compliance, the committee suggests online platform feedback mechanisms could be explored as a route for ensuring businesses and their users comply with existing regulations — noting for example that some Airbnb hosts are flouting planning rules in London that restrict the renting of homes for more than 90 days per year, and also pointing to the problem with professional landlords operating multiple properties on Airbnb.

It also believes this route could help regulate conditions for workers supplying labour to online platforms.

“A more collaborative approach to regulation, involving users, should be explored by the Government. Digital platforms (the software or hardware of a site) could themselves become key players in the regulatory framework, required to ensure that users are complying with current regulations, and that workers using the platforms have reasonable employment conditions and are not vulnerable to exploitation,” says the committee.

“We believe that platforms should have greater responsibility in ensuring that regulatory requirements are adhered to. Given that they have the technology at their disposal, this should not be an onerous responsibility,” it adds.

Featured Image: Evgeny Gromov/Getty Images
Source: TechCrunch

Pokemon Go T&Cs strip users of legal rights

Players of Pokemon Go are not only giving up their right to act like sane human beings in public, as they walk around, zombie-esque, staring into the phones held in front of their faces, they are also likely to be waiving legal rights if they don’t take a very close look at Niantic Labs’ Terms of Service for the game.

As spotted earlier by The Consumerist, an arbitration notice states that Pokemon Go users automatically agree to waive their rights to any future trial by jury or class action lawsuit unless they opt out of a binding clause in the T&Cs…

ARBITRATION NOTICE: EXCEPT IF YOU OPT OUT AND EXCEPT FOR CERTAIN TYPES OF DISPUTES DESCRIBED IN THE “AGREEMENT TO ARBITRATE” SECTION BELOW, YOU AGREE THAT DISPUTES BETWEEN YOU AND NIANTIC WILL BE RESOLVED BY BINDING, INDIVIDUAL ARBITRATION, AND YOU ARE WAIVING YOUR RIGHT TO A TRIAL BY JURY OR TO PARTICIPATE AS A PLAINTIFF OR CLASS MEMBER IN ANY PURPORTED CLASS ACTION OR REPRESENTATIVE PROCEEDING.

To opt out of the legal rights waiver, users need to email termsofservice@nianticlabs.com or can send regular mail to 2 Bryant St., Ste. 220, San Francisco, CA 94105.

But the opt-out process is only valid if exercised within 30 days following the date a user first accepted the T&Cs.

Having a short opt-out window for legal rights embedded within T&Cs which the vast majority of users won’t read before clicking ‘I agree’ and rushing into their neighbor’s garden to try to catch a pikachu is a very aggressive stance.

Binding arbitration means a private dispute resolution process, heard outside a courtroom, with individual users having to mount their own cases — rather than having the ability to band together in a class action, for example, if there is a data breach which affects multiple users in the same way.

Only individual actions brought to small claims courts and actions seeking injunctive or equitable relief pertaining to IP infringement rights are unaffected.

The rules under which any arbitration would take place are specified as those of the American Arbitration Association — “in accordance with the Commercial Arbitration Rules and the Supplementary Procedures for Consumer Related Disputes”, albeit with some Niantic specific modifications. Safe to say, private arbitration is a restrictive route for redress that clearly disadvantages consumers.

We’ve asked Niantic Labs for comment on the arbitration clause and will update this post with any response.

As previously noted, Pokemon Go also requires extensive app permissions to run. And the Pokemon Go privacy policy states the company may share aggregated data with third parties, and identifiable user data with law enforcement agencies and other parties for a range of reasons it deems appropriate.

The privacy policy further notes that in the event of a sale of Niantic users would need to opt out of having their data disclosed/transferred to the third party acquirer — again with only a 30 day window to do so:

Information that we collect from our users, including PII [personally identifiable information], is considered to be a business asset. Thus, if we are acquired by a third party as a result of a transaction such as a merger, acquisition, or asset sale or if our assets are acquired by a third party in the event we go out of business or enter bankruptcy, some or all of our assets, including your (or your authorized child’s) PII, may be disclosed or transferred to a third party acquirer in connection with the transaction. In the event of such a transaction, we will give you notice of the transaction and the opportunity for a period of 30 days to refuse disclosure or transfer of your (or your authorized child’s) PII to the third party acquirer in connection with the transaction.

So, as ever when it comes to T&Cs, the devil is in the overlooked detail.

Gotta catch all those catches!

Featured Image: Hitoshi Yamada/NurPhoto/Getty Images
Source: TechCrunch

Encrypted comms company Silent Circle closes $50M Series C

Encrypted comms company Silent Circle, which also makes a security-focused Android smartphone called the Blackphone, has announced it’s closed a $50 million Series C round of financing, led by Santander Bank.

The company was in the news recently on account of a lawsuit brought against it by former business partner Geeksphone for non-payment of an outstanding $5 million, relating to the sale of the latter’s half of the joint venture.

Court documents pertaining to that litigation suggested Silent Circle closed an unannounced $25 million Series C round earlier this year. Letters from the court case also reveal it had been hoping to raise an additional $20 million to pay down some of its debt at the time but had been unable to raise that additional funding.

It’s unclear whether the $50 million Series C announced today is all new money for Silent Circle, or whether it includes the earlier $25 million Series C detailed in the court documents. We’ve asked Silent Circle to clarify and will update this post with any response. Either way it looks like they were ultimately able to secure the larger Series C round they had been hoping for.

Since buying out its hardware partner in the Blackphone joint venture, Silent Circle has shifted from the prosumer focus of the original Blackphone sales pitch to pushing what it dubs an enterprise privacy platform business. This brings together a suite of software services — such as its Silent Phone secure calling, messaging and file-sharing software, encrypted calling plans that extend secure calling to non-subscribers, and user management via a web-based admin console — targeting businesses that need to manage mixed estates of iOS and Android handsets in a BYOD world.

Silent Circle’s hardware foray with the Blackphone was supposed to complement this software platform strategy but court documents from the litigation reveal that the first generation device in fact failed to deliver the expected sales, forcing Silent Circle to fall back on its software business and cut operational costs, including making significant staff reductions.

Key co-founder Jon Callas recently departed the company for a role at Apple, while another senior employee who joined from Geeksphone is also no longer at the company. Various other more junior employees have apparently been let go.

Silent Circle said today that the new financing will go towards “eliminating its debts”, as well as further investment in product development, customer service, business development and marketing activities. As part of the funding round cyber security investor Bob Ackerman is also joining the board.

It’s not clear if Silent Circle intends to settle the litigation with Geeksphone now it’s closed a full Series C — again we’ve asked, and will update with any response.

The company’s general counsel Matt Neiderman is now listed as its interim CEO, although co-founder Mike Janke remains as chairman. Another former CEO, Bill Conner, departed last month according to his LinkedIn profile.

On Neiderman temporarily occupying the CEO role, a spokesman for Silent Circle told TechCrunch: “Matt has been with Silent Circle since its formative days and has worked closely with the other company leaders on the executive team. As Chief Legal Officer he brings a valuable skillset, both as the firm moves through its current litigation and positions itself to experience even more growth and momentum in the enterprise privacy market. He has been a part of all the key aspects of the business. So, he’s a good fit for the role.”

Source: TechCrunch

UK surveillance bill includes powers to limit end-to-end encryption

The UK government has explicitly confirmed that a surveillance bill now making its way through the second chamber could be used to require a company to remove encryption. And even, in some circumstances, to force a comms service provider not to use end-to-end encryption to secure a future service they are developing. The details were revealed during debate of the Investigatory Powers Bill at a committee session in the House of Lords this week.

This cements concerns over the phrasing of a clause in the bill that refers to the ‘removal of electronic protection’, which critics, including from the technology and security industries, have long been warning risks outlawing the use of strong encryption in the UK.

The government’s counter argument has been that there should be no safe spaces for terrorists and criminals to operate online, i.e. where their communications are definitively out of the reach of security and law enforcement agencies.

Speaking for the government during a bill committee session on Wednesday evening, Lord Howe reiterated that view, going on to reject a series of proposed amendments aiming to clarify what the government can and can’t request of companies under the bill’s Technical Capability Notices.

“This is a vital power,” said Howe of the ability to require the removal of electronic protection. “Without which the ability of the police and intelligence agencies to intercept communications in an intelligible form would be considerably diluted.

“Law enforcement and the intelligence agencies must retain the ability to require telecommunications operators to remove encryption in limited circumstances. Subject to strong controls and safeguards to address the increasing technical sophistication of those who would seek to do us harm.”

“Encryption is now almost ubiquitous and is the default setting for most IT products and online services. If we do not provide for access to encrypted communications when it is necessary and proportionate to do so then we must simply accept that there can be areas online beyond the reach of the law,” he added.

Technical Capability Notices are a very wide-ranging provision within the IP bill which can impose requirements on companies to assist state agent investigations, such as by providing access to a communications service. Or even a requirement they maintain a permanent capability to provide access if/when needed.

The oversight process for Technical Capability Notices has been improved since the original draft of the bill, with Lord Howe noting that judicial authorization is now required in addition to senior ministerial sign-off for these notices. He also pointed to the bill’s new privacy clause which requires the Secretary of State to “give regard to the public interest in the integrity and security of telecommunications systems” when making a decision on whether or not to issue a notice.

The new Investigatory Powers Commissioner will also be required to approve requests for Technical Capability Notices — which is a step up from the prior route for UK state agents to impose technical obligations on companies, via section 94 of the Telecommunications Act (which will be repealed in favor of the IP bill).

Howe also claimed the IP bill does not expand on existing state agency capabilities vis-a-vis removing encryption, emphasizing that it can only be used to require a company to remove encryption where it is “reasonably practicable” for them to do so.

He went on to note that any encryption a CSP has not applied themselves would “almost inevitably fall outside these provisions because it would not be reasonably practicable for a company to de-encrypt”. The implication being that CSPs would not be asked to remove end-to-end encryption since they do not have the technical capability to decrypt the data.
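
The practical distinction being drawn is between encryption a provider applies itself (which it can strip off when served a notice) and end-to-end encryption, where only the communicating devices ever hold the private keys. A minimal Python sketch using the PyNaCl library’s public-key boxes illustrates why a relay server in the middle has nothing intelligible to hand over; this is an assumed illustration of the general technique, not any particular provider’s design.

```python
from nacl.public import PrivateKey, Box

# Each device generates its own keypair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts directly to Bob's public key on her own device.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at 7?")

# The service provider only ever relays `ciphertext`. Without either private
# key it cannot recover the plaintext, so there is nothing intelligible to
# hand over in response to a notice.

# Bob decrypts on his own device with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b"meet at 7?"
```

By contrast, a provider that applies encryption server-side holds the keys itself, which is the “reasonably practicable” case Howe describes.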

Although he noted that the IP bill’s applied standard — of what is “reasonably practicable” — could vary from one CSP to another.

“This isn’t, in many cases, asking companies to do something that they would not do in the normal course of their business,” Howe added, noting how many companies do not use end-to-end encryption in order to afford themselves access to user data for their own business imperatives. (The government clearly wants the power to be able to tap into those data-mining business models for investigatory intel.)

However other peers speaking during the committee session expressed continued concern that the bill as currently couched still poses a risk to the use of strong encryption.

“Once encryption is weakened, it’s weakened for everyone. And once it’s weakened at the request of the government that weakness is available to all the people who would do us harm,” warned Lord Strasburger.

During the debate, Howe was specifically pressed to specify whether Technical Capability Notices would allow for the government to require companies not to use end-to-end encryption on future services in order to afford state agents access to decrypted communications data if/when served a warrant.

“Is there an expectation in this bill, in these clauses, that where a service provider is developing a new service they must ensure in that development that they have the facility to access what the user would assume is encrypted data?” asked Lord Harris of Haringey.

“It depends on what is reasonably practicable for the communications service provider to do,” replied Howe. “Usually this power will apply to encryption that the provider has applied itself or which has been applied on their behalf. If there are other circumstances where it would apply I will take advice and write to the noble Lord, but we come back to what is reasonably practicable for the company to do.

“And this is why the government maintains a dialogue with communications service providers to ascertain what is practicable and what isn’t and what would be cost effective and what would not be.”

Pressed a second time by Harris to clarify whether the bill sets up an “an expectation” that CSPs be required to avoid using end-to-end encryption for future services, Howe again gave no definitive answer.

“Are they required to make it technically practicable for future services for this to be allowed?” asked Harris.

“It might be,” responded Howe. “But they might not be. Again it depends on what is reasonably practicable in the particular circumstances and those circumstances might vary from provider to provider and from situation to situation so I don’t think it’s possible for me to generalize about this.”

“I fear that the noble Earl is taking us up quite a long cul-de-sac here,” added Strasburger. “Because the implication of what he’s saying is that no one might develop end-to-end encryption — and one of the features of end-to-end encryption is that the provider cannot break it himself… So he seems to be implying that providers can only provide encryption which can be broken and therefore can’t be end-to-end encryption.”

Strasburger suggested the government’s position could, “in theory” make the next version of the Apple iPhone illegal in the UK, adding that in his view there is still “quite a lot of work to be done” to shore up this aspect of the bill to avoid compromising data security and risking the trusted reputations of UK technology companies.

With the iPhone example Howe did at least provide a modicum of clarity.

“The Apple case… is not one that I’m advised could occur in this country in the same way,” he said.

“I was certainly not implying in any way that the government wished to ban end-to-end encryption,” he added, although given his other open-ended statements there’s very little comfort to be drawn from the phrasing of that sentence.

“The bill is clear that any attempt to obtain communications data must be necessary and must be proportionate or it will not be permitted. It is crucial that the bill provides a robust, legal framework which means that the law is consistently applied correctly,” Howe added.

Another contribution came from Lord Paddick, who pointed to targeted Equipment Interference (aka state hacking powers, also sanctioned by the IP bill) as a potentially more useful and less invasive route for government agents to obtain the sought-for comms data, i.e. rather than resorting to overly wide-ranging Technical Capability Notices.

“Certainly targeted Equipment Interference is, if you like, the next step if interception should not be possible for any reason,” said Howe.

The debate concluded with various amendments that had sought to tighten the bill’s scope for removing encryption being rejected by the government.

The committee stage of the bill continues on July 19 when further amendments will be discussed in the Lords.

An independent review of the various bulk investigatory powers contained in the bill — such as the ability to hack into devices or intercept communications en masse — is also ongoing, with QC David Anderson due to report on that matter this summer.

Featured Image: Intel Free Press/Flickr UNDER A CC BY 2.0 LICENSE
Source: TechCrunch