How fog computing pushes IoT intelligence to the edge

As the Internet of Things evolves into the Internet of Everything and expands into virtually every domain, high-speed data processing, analytics and short response times are more necessary than ever. The current centralized, cloud-based model powering IoT systems struggles to meet these requirements, but fog computing can: a decentralized architectural pattern that brings computing resources and application services closer to the edge, the most logical and efficient spot in the continuum between the data source and the cloud.

The term fog computing, coined by Cisco, refers to the need for bringing the advantages and power of cloud computing closer to where the data is being generated and acted upon. Fog computing reduces the amount of data that is transferred to the cloud for processing and analysis, while also improving security, a major concern in the IoT industry.

Here is how transitioning from the cloud to the fog can help deal with the current and future challenges of the IoT industry.

The problem with the cloud

The IoT owes its explosive growth to the connection of physical things and operational technology (OT) to analytics and machine learning applications, which can glean insights from device-generated data and enable devices to make “smart” decisions without human intervention. Currently, such resources are mostly provided by cloud service providers, where the computation and storage capacity exists.

However, despite its power, the cloud model is not applicable to environments where operations are time-critical or internet connectivity is poor. This is especially true in scenarios such as telemedicine and patient care, where milliseconds can have fatal consequences. The same can be said about vehicle-to-vehicle communications, where the prevention of collisions and accidents can’t afford the latency caused by the round trip to the cloud server. The cloud paradigm is like having your brain command your limbs from miles away — it won’t help you where you need quick reflexes.

Moreover, having every device connected to the cloud and sending raw data over the internet can have privacy, security and legal implications, especially when dealing with sensitive data that is subject to separate regulations in different countries.

The fog placed at the perfect position

IoT nodes are closer to the action, but for the moment, they do not have the computing and storage resources to perform analytics and machine learning tasks. Cloud servers, on the other hand, have the horsepower, but are too far away to process data and respond in time.

The fog layer is the perfect junction where there are enough compute, storage and networking resources to mimic cloud capabilities at the edge and support the local ingestion of data and the quick turnaround of results.
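
As a concrete illustration of that local ingestion, a fog node might aggregate raw sensor readings on-site and forward only a compact summary, plus any out-of-range values, to the cloud. The Python sketch below is purely illustrative; the function names and thresholds are assumptions, not a reference to any particular fog platform.

```python
from statistics import mean

def summarize_readings(readings, low=10.0, high=90.0):
    """Reduce a batch of raw readings to a compact summary plus alerts.

    Only the summary and the anomalous raw values leave the edge;
    the bulk of the raw data never crosses the network.
    """
    alerts = [r for r in readings if r < low or r > high]
    summary = {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "min": min(readings),
        "max": max(readings),
    }
    return summary, alerts

readings = [42.0, 44.5, 41.8, 97.3, 43.1]
summary, alerts = summarize_readings(readings)
print(summary)  # compact payload sent upstream to the cloud
print(alerts)   # only out-of-range raw values are forwarded
```

Under this pattern, a batch of raw readings collapses into one small payload, which is precisely how fog nodes cut the volume of data transferred to the cloud.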

A study by IDC estimates that by 2020, 10 percent of the world’s data will be produced by edge devices. This will further drive the need for fog computing solutions that can provide low latency and holistic intelligence simultaneously.

Fog computing has its own supporting body, the OpenFog Consortium, founded in November 2015, whose mission is to drive industry and academic leadership in fog computing architecture. The consortium offers reference architectures, guides, samples and SDKs that help developers and IT teams understand the true value of fog computing.

Already, mainstream hardware manufacturers such as Cisco, Dell and Intel are teaming up with IoT analytics and machine learning vendors to deliver IoT gateways and routers that can support fog computing. An example is Cisco’s recent acquisition of IoT analytics company ParStream and IoT platform provider Jasper, which will enable the network giant to embed better computing capabilities into its networking gear and grab a bigger share of the enterprise IoT market, where fog computing is most crucial.

Analytics software companies are also scaling products and developing new tools for edge computing. Apache Spark is one example: a data processing framework that integrates with the Hadoop ecosystem and is well suited to near-real-time processing of edge-generated data.
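
To give a flavor of the windowed, micro-batch style of computation such streaming frameworks perform, here is a dependency-free Python sketch of a sliding-window running mean over arriving events. It stands in for what an engine like Spark would do at much larger scale; all names here are illustrative.

```python
from collections import deque

class SlidingWindow:
    """Keep only the most recent `size` events and emit a running mean,
    mimicking (in miniature) a streaming engine's windowed aggregation."""

    def __init__(self, size):
        self.events = deque(maxlen=size)  # old events fall off automatically

    def add(self, value):
        self.events.append(value)
        return sum(self.events) / len(self.events)

window = SlidingWindow(size=3)
means = [window.add(v) for v in [10, 20, 30, 40]]
print(means)  # [10.0, 15.0, 20.0, 30.0]
```

Once the window is full, the oldest reading is evicted, so the aggregate always reflects the freshest edge data.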

Other major players in the IoT industry are also placing their bets on the growth of fog computing. Microsoft, whose Azure IoT is one of the leading enterprise IoT cloud platforms, is aiming to secure its dominance over fog computing by pushing its Windows 10 IoT to become the OS of choice for IoT gateways and other high-end edge devices that will be the central focus of fog computing.

Does the fog eliminate the cloud?

Fog computing improves efficiency and reduces the amount of data that needs to be sent to the cloud for processing. But it’s here to complement the cloud, not replace it.

The cloud will continue to have a pertinent role in the IoT cycle. In fact, with fog computing shouldering the burden of short-term analytics at the edge, cloud resources will be freed to take on the heavier tasks, especially where the analysis of historical data and large datasets is concerned. Insights obtained by the cloud can help update and tweak policies and functionality at the fog layer.

And there are still many cases where the centralized, highly efficient computing infrastructure of the cloud will outperform decentralized systems on performance, scalability and cost. This includes environments where data needs to be analyzed from widely dispersed sources.

It is the combination of fog and cloud computing that will accelerate the adoption of IoT, especially for the enterprise.

What are the use cases of fog computing?

The applications of fog computing are many, and it is powering crucial parts of IoT ecosystems, especially in industrial environments.

Thanks to the power of fog computing, New York-based renewable energy company Envision has been able to obtain a 15 percent productivity improvement from the vast network of wind turbines it operates.

The company is processing as much as 20 terabytes of data at a time, generated by 3 million sensors installed on the 20,000 turbines it manages. Moving computation to the edge has enabled Envision to cut down data analysis time from 10 minutes to mere seconds, providing them with actionable insights and significant business benefits.

IoT company Plat One is another firm using fog computing to improve data processing for the more than 1 million sensors it manages. The company uses the ParStream platform to publish real-time sensor measurements for hundreds of thousands of devices, including smart lighting and parking, port and transportation management and a network of 50,000 coffee machines.

Fog computing also has several use cases in smart cities. In Palo Alto, California, a $3 million project will enable traffic lights to integrate with connected vehicles, hopefully creating a future in which people won’t be waiting in their cars at empty intersections for no reason.

In transportation, it’s helping semi-autonomous cars assist drivers in avoiding distraction and veering off the road by providing real-time analytics and decisions on driving patterns.

It also can help reduce the transfer of gigantic volumes of audio and video recordings generated by police dashboard and video cameras. Cameras equipped with edge computing capabilities could analyze video feeds in real time and only send relevant data to the cloud when necessary.
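
A hedged sketch of that idea: compare successive frames and keep only those whose pixel-level change exceeds a threshold, so idle footage never leaves the camera. Frames are modeled as flat lists of grayscale values; the threshold and function names are assumptions for illustration only.

```python
def frame_delta(prev, curr):
    """Mean absolute pixel difference between two frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def frames_to_upload(frames, threshold=10.0):
    """Keep the first frame, then only frames that changed significantly
    relative to the last kept frame; static footage stays on the device."""
    kept = [frames[0]]
    for frame in frames[1:]:
        if frame_delta(kept[-1], frame) > threshold:
            kept.append(frame)
    return kept

static = [50, 50, 50, 50]   # tiny 4-pixel "frames" for illustration
moved  = [90, 90, 90, 90]
stream = [static, static, moved, moved]
print(len(frames_to_upload(stream)))  # 2: the idle duplicates stay local
```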

What is the future of fog computing?

The current trend shows that fog computing will continue to grow in usage and importance as the Internet of Things expands and conquers new grounds. With inexpensive, low-power processing and storage becoming more available, we can expect computation to move even closer to the edge and become ingrained in the same devices that are generating the data, creating even greater possibilities for inter-device intelligence and interactions. Sensors that only log data might one day become a thing of the past.

Featured Image: Omelchenko/Shutterstock
Source: TechCrunch

How predictive analytics discovers a data breach before it happens

Cybersecurity experts and analysts are constantly trying to keep pace with changes and trends in the volatile and ever-shifting landscape of IT security.

Despite sophisticated tools and solutions that are being rolled out by cybersecurity vendors, every IT security officer knows that data breaches eventually happen — it’s not about the if but the when — and they usually go undetected for a long time.

Machine-learning-powered solutions have somewhat remedied the situation by enabling organizations to cut down the time it takes to detect attacks. But we’re still talking about attacks that have already happened.

What if we could stay ahead of threat actors and predict their next attack before they take their first destructive step? It might sound like a crazy idea out of Spielberg’s Minority Report, but thanks to the power of predictive analytics, it might become a reality.

Predictive analytics is a science that is gaining momentum in virtually every industry, enabling organizations to modernize and reinvent the way they do business by looking into the future and obtaining foresight they previously lacked.

This rising trend is now finding its way into the domain of cybersecurity, helping to determine the probability of attacks against organizations and agencies and set up defenses before cybercriminals reach their perimeters. Already, several cybersecurity vendors are embracing this technology as the core of their security offering. Here’s how predictive analytics is changing the cybersecurity industry.

Moving beyond signatures

The traditional approach to fighting cyberattacks involves gathering data about malware, data breaches, phishing campaigns, etc., and extracting relevant data into signatures, i.e. the digital fingerprint of the attack. These signatures will then be compared against files, network traffic and emails that flow in and out of a corporate network in order to detect potential threats.
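
In its simplest form, signature matching reduces to comparing a fingerprint (for example, a cryptographic hash) of incoming content against a database of known-bad fingerprints. The toy Python sketch below assumes a hash-based signature set; real products match far richer byte and behavior patterns.

```python
import hashlib

# Toy signature database: SHA-256 fingerprints of known-bad payloads.
KNOWN_BAD = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def is_known_threat(payload: bytes) -> bool:
    """Flag content whose fingerprint matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

print(is_known_threat(b"malicious payload"))  # True
print(is_known_threat(b"benign traffic"))     # False
```

The weakness of the approach is also visible here: flipping a single byte of the payload changes the hash entirely, which is why attackers who mutate each sample evade signature databases.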

While signature-based solutions will continue to remain a prevalent form of protection, they do not suffice to deal with the advanced and increasingly sophisticated cybercriminals who threaten organizations.

“In the past decade or so, the landscape of cyber security threats has changed dramatically,” explains Amir Orad, CEO of analytics company Sisense. “The bad actors have transitioned from ‘script kiddies’ to organized crime and state actors, which direct highly sophisticated attacks against specific targets, for example via APTs — agents that infiltrate your IT systems and surreptitiously trickle minute amounts of data outwards.”

A Verizon Data Breach Investigations Report reveals that more than 50 percent of data breaches remain undiscovered for months. In contrast, thanks to the array of innovative malware, botnets and other advanced data-theft tools at their disposal, attackers only need minutes to gain access to the critical data they seek after they compromise a target.

Moreover, threat signatures are gradually becoming a thing of the past. “The most significant change in the cyberthreat landscape is the rise of point-and-click exploit kits,” says Dr. Anup Ghosh, founder and CEO of cybersecurity firm Invincea. These exploit kits enable attackers to create unique signatures for each attack. “This approach breaks most traditional security systems because the products haven’t seen the attack before in order to detect it,” explains Ghosh, who’s done a stint as cybersecurity expert at the Defense Advanced Research Projects Agency (DARPA).

“Current cybersecurity solutions leave a wide gap in coverage,” says Doug Clare, vice president for cyber security solutions at analytics software company FICO. “It’s like having a burglar alarm that doesn’t go off until after the burglar’s done his work, left the premises and crossed the county line.”

FICO’s solution, dubbed Cyber Security Analytics, utilizes self-learning analytics and anomaly detection techniques that monitor activity across multiple network assets and real-time data streams in order to identify threats as they occur without having specific knowledge of the exact signature. These analytics immediately detect anomalies in network traffic and data flows, while also quickly recognizing new “normal” activity, thus minimizing false-positive alerts. FICO also takes advantage of threat intelligence sharing in order to continually enhance its model with insights gained from data contributed by a consortium of users.
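
The general shape of such self-learning anomaly detection can be sketched with a rolling statistical baseline: flag values that sit several standard deviations away from recent history, and let the baseline absorb a new “normal” as the window slides, which keeps false positives down. This is a generic z-score illustration under stated assumptions, not FICO’s actual model.

```python
from statistics import mean, stdev

class AnomalyDetector:
    """Rolling-baseline detector: flag values far from recent history."""

    def __init__(self, window=20, threshold=3.0):
        self.window = window        # how much recent history defines "normal"
        self.threshold = threshold  # how many standard deviations is anomalous
        self.history = []

    def observe(self, value):
        anomalous = False
        if len(self.history) >= 5:  # wait for a minimal baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        # The value joins the window either way, so a sustained shift
        # eventually becomes the new "normal" instead of endless alerts.
        self.history.append(value)
        self.history = self.history[-self.window:]
        return anomalous

detector = AnomalyDetector()
traffic = [100, 102, 99, 101, 100, 98, 500]  # last value is a spike
flags = [detector.observe(v) for v in traffic]
print(flags[-1])  # True: the spike stands out against the baseline
```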

Finding the needle in the haystack

Though a very promising trend, predictive analytics has some hefty requirements when applied to cybersecurity use cases. For one thing, the variety and volume of data involved in identifying and predicting security threats are overwhelming. This necessitates the use of analytics solutions that can scale to the huge storage, memory and computation requirements.

“Organizations today work with large volumes of data from multiple disparate sources, which makes it difficult to trace the signals of a cyber-attack as it is happening due to the need to quickly analyze this data and perform advanced calculations on it in near real-time,” says Sisense’s Orad.

“The challenges are the same, yet amplified, as those encountered when applying analytics in general,” says Lucas McLane (CISSP), Director of Security Technology at machine learning startup SparkCognition. “This is because predictive analytic processing requires a lot more computing resources (i.e. CPU, memory, disk I/O throughput, etc.). This is especially true when the algorithms are operating on large-scale data sets. Predictive analytics engines need to be paired with computing resources that are designed to scale with the volume of data targeted for analysis.”

Further complicating the situation, Orad explains, is “the fact that the cyber-attack’s signal is often very weak and obstructed by a lot of organizational noise, i.e. there will only be a very slight change in patterns recognizable.” This in turn means that using the wrong algorithms can easily create a lot of false positives, Orad warns.

That is why cybersecurity companies are teaming up with analytics firms, such as Orad’s own startup. Sisense provides a set of proprietary tools and features that enables cybersecurity companies to quickly analyze huge sets of scattered data. They leverage the platform to identify suspicious patterns, then they can open a Sisense dashboard that lets them query terabyte-scale datasets, investigate a potential attack and drill into the data to see whether further security measures are necessary.

Forging alliances across industries certainly has its benefits. As Orad explains, advanced analytics platforms such as Sisense enable cybersecurity firms to obtain “an end-to-end solution for modeling, analyzing and visualizing data, without investing vast resources into building a data warehouse as traditional tools would necessitate.”

Predictive analytics and machine learning

“Predictive analytics in security provide a forecast for potential attacks — but no guarantees,” says McLane from SparkCognition. That’s why he believes it has to be coupled with the right machine learning solution in order to be able to harness its full potential.

SparkCognition’s platform, SparkSecure, uses “cognitive pipelining,” a technique that involves the combination of machine-learning-based predictive analytics with the company’s own patented and proprietary static and dynamic natural language processing engine, called DeepNLP.

According to McLane, cognitive pipelining automates the tedious research steps that descriptive and predictive analytics require, which results in “an acceleration of the analyst’s ability to discover the real malicious traffic from the anomalous outliers and forecasting provided by ML.”

The use of predictive analytics coupled with machine learning and natural language processing allows cybersecurity to move beyond the cumbersome strategy of maintaining blacklists. “Signature-free security allows us to detect, with high confidence, new threats that have never been seen before,” says McLane.

Predictive analytics is not a panacea

Not everyone believes that predictive analytics is the ultimate solution to deal with advanced threats. Arijit Sengupta, CEO of business analysis company BeyondCore, suggests that we look at the problem from a different perspective.

According to Sengupta, cybersecurity challenges stem from two factors. First, the value and volume of online assets are exploding at an exponential rate. Second, hackers are growing increasingly sophisticated thanks to easy, inexpensive access to large compute resources through cloud computing.

While predictive analytics can help deal with today’s challenges, as both data and computing resources continue to expand, we’ll be facing a problem, Sengupta believes. “If the surface area of your data is growing exponentially and the resources accessible to your attacker is growing, then even predictive analytics is no longer good enough because you simply don’t have the resources to react,” he says.

The correct approach, Sengupta believes, is to “rethink why and how we store valuable data in the first place.”

We also have to consider that the tools and tactics of our adversaries will evolve and change in parallel with ours, warns Olivier Tavakoli, CTO of cybersecurity startup Vectra Networks. “After several years spent trying to perfect predictive analytics, attackers will counter with feints and pattern randomization,” he predicts.

The future of predictive analytics

Nonetheless, with big data and machine learning starting to take a decisive role in every industry, it is only fair to estimate that predictive analytics will have a pivotal role in shaping the future of cybersecurity.

“In the near future, and even today, there will be no cyber security without predictive analytics,” says Orad from Sisense. “Threats have become so sophisticated, and they evolve and change so rapidly, that the only way to identify them on time is via advanced statistical analysis of big data.”

Invincea’s Ghosh believes it is inevitable that the security industry will need to re-tool to address an ever-changing threat. “We are making our bet that artificial intelligence is the solution to predict our adversaries’ next moves,” he says.

Source: TechCrunch

Your website may be engaged in secret criminal activity

Most of us think of website hacks as illicit activities aimed at siphoning critical information or disrupting the business of website owners. But what happens when your site becomes hacked, not for the purpose of harming you but rather to further the ends of other parties? Most likely, the attackers would manage to feed off your resources and reputation for months or years without being discovered, because it’s hard to take note of something that isn’t directly affecting you.

This is what a recent report from cybersecurity firm Imperva shows, and it is a reminder that you should harden your website not only to protect yourself, but also to protect others and prevent your online assets from being exploited for illicit activities.

Piggybacking vulnerable websites for malicious purposes

Compiled by researchers at Imperva Defense Center, the report unveils a long-running blackhat SEO campaign in which hackers are exploiting vulnerabilities in thousands of legitimate websites in order to promote the search engine ranking of their clients’ websites.

The hackers are using botnets (networks of remotely hijacked computers) in order to amplify their campaigns and are using known hacking techniques such as SQL injection and comment spam in order to inconspicuously insert backlinks to their clients in the targeted websites. The attackers use CSS and HTML tricks to hide the inserted snippets from the eyes of visitors and site administrators while keeping them visible to web crawlers.
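
From the defender’s side, one simple check in this spirit is scanning rendered pages for links hidden with inline CSS. The sketch below uses Python’s standard `html.parser` and covers only the two most obvious hiding styles; real campaigns use many more obfuscation tricks, so treat this as illustrative.

```python
from html.parser import HTMLParser

class HiddenLinkScanner(HTMLParser):
    """Collect hrefs of <a> tags hidden via inline CSS."""

    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        # Normalize the inline style so "display: none" and
        # "DISPLAY:NONE" are both caught.
        style = attrs.get("style", "").replace(" ", "").lower()
        if "display:none" in style or "visibility:hidden" in style:
            self.suspicious.append(attrs.get("href"))

page = '<p>Hello</p><a style="display: none" href="http://spam.example">x</a>'
scanner = HiddenLinkScanner()
scanner.feed(page)
print(scanner.suspicious)  # ['http://spam.example']
```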

The fact that the targeted websites are not directly affected by the attacks (aside from SEO penalties) makes the attacks much harder to detect and notice. In fact, according to Imperva, the campaign is still ongoing and the hackers continue to seek out and target vulnerable sites.

Although the Imperva report is the most recent and expansive case of websites being piggybacked for malicious purposes, it is far from the only instance. There is a long precedent of websites being hacked and used as a beachhead for activities that in most cases are far more damaging than blackhat SEO.

In February, hackers broke into the official Linux Mint website and surreptitiously distributed their own backdoored version of the operating system to thousands of oblivious users. In October last year, hackers breached thousands of websites powered by eBay’s Magento e-commerce platform through a zero-day exploit and abused them to deliver malware to visitors.

Joint research led by experts from Katholieke Universiteit Leuven in Belgium and Stony Brook University in the U.S. showed how hackers were compromising advertisements on illegal livestreaming websites to infect visitors with malware.

But websites of questionable nature aren’t the only ones hackers exploit to deal their damage. According to Cisco’s 2015 Annual Security Report, the aviation, agriculture, mining and insurance industries top the list of sectors whose websites pose the greatest risk of harming visitors.

And a rash of malicious ads turning up on sites such as The New York Times, BBC and MSN earlier this year showed that even the big-name sites can unwittingly become complicit in the crimes of cyber-evildoers.

Source code flaws are at the heart of website hacks

Not all website-related hacks are carried out by compromising the server. Many of them use malvertising, a hacking technique that takes advantage of ad delivery networks and leverages vulnerabilities on client machines such as bugs in Adobe Flash and Microsoft Silverlight.

But where web servers are concerned, source code flaws are the main reason websites are compromised. “Today we see that a major number of attacks against websites are based on vulnerabilities which have not been properly addressed at the code level of the web application,” says Amit Ashbel, CEO of cybersecurity firm Checkmarx.

While developers usually do test the code of their websites, security flaws aren’t necessarily what they are looking for. “Unfortunately it is not always common practice to have developers identify and address the vulnerabilities just like they would address functionality bugs triggered by their code,” Ashbel elaborates.

Organizations are starting to understand the importance of rooting out security flaws from their applications, but there’s only so much you can do when dealing with hundreds of thousands of lines of code.

This is a challenge that, according to Ashbel, can be overcome with the use of static application security testing (SAST) tools, solutions that help spot security bugs in software as you code. “Source code analysis can be implemented in a very efficient and effective manner if organizations adopt the idea of introducing security,” he says.
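
At its core, a SAST rule flags source patterns associated with vulnerabilities. The toy scanner below uses two regex rules purely for illustration; production SAST tools parse the code and track data flow rather than pattern-matching individual lines.

```python
import re

# Two illustrative rules: dynamic code evaluation and string-built SQL.
RULES = [
    ("use of eval()", re.compile(r"\beval\s*\(")),
    ("string-built SQL", re.compile(r"execute\(\s*['\"].*\+")),
]

def scan_source(source):
    """Return (line number, rule name) for every rule hit in the source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES:
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

snippet = 'query = eval(user_input)\ncur.execute("SELECT * FROM t WHERE id=" + uid)'
print(scan_source(snippet))
```

Because the scan runs on plain source text, a check like this can sit directly in the development workflow, which is the integration advantage Ashbel describes.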

The advantage of SASTs, Ashbel says, is that they become integrated into the development lifecycle of web applications and reduce the cost and time required to fix bugs.

“While this may not provide 100% protection, it is a key step which should become part of every organization’s SDLC (Software Development Lifecycle),” he stresses. “Making sure that code is analyzed for vulnerabilities as part of the SDLC is just like analyzing code for functionality bugs.”

Checkmarx has designed its tools to help developers quickly mitigate vulnerabilities in their code while also improving their secure coding skills, via a set of features designed to deliver education as part of the mitigation.

Other viable initiatives in this regard include efforts led by several security startups to leverage artificial intelligence in hunting software bugs. These innovations have been put to the test in the Cyber Grand Challenge competition hosted by DARPA. Among the tasks given to participants is designing tools that can disassemble software, analyze it and plug any potential security holes.

DARPA’s vision is to have AI that complements the work humans do in finding bugs — and, of course, exploiting them.

A small team from the University of Idaho’s Center for Secure and Dependable Systems is among the competition’s finalists. Their goal is to make tools and methodologies available to developers that will make it easier and cheaper to build secure code. Jim Alves-Foss, who leads the two-person team, says they have opted for a combination of algorithms and heuristics to root out bugs that have been known to researchers for decades but pop up in newly written code, which he describes as “low-hanging fruit for attackers.”

Another team, from software security firm GrammaTech and the University of Virginia, is developing an AI-powered task master that can determine which parts of a piece of software are more likely to contain security bugs and optimize computation resources to analyze those sections.

The efforts are still far from being deliverable to consumers, but the challenge environment is showing promise and should yield some interesting results.

What if you can’t fix your web application’s source code?

Not every organization has the know-how and resources to fix security bugs in the source code of their web applications and make sure they don’t expose their visitors to harm. In fact, for the most part, organizations rely on popular CMS and blog engines such as WordPress, which let you power up your website with little or no coding skills.

This can itself become a security hole because, lacking the relevant expertise, site administrators in many cases remain oblivious to hacks.

As it happens, a huge number of website hacks are made possible through zero-day flaws in these engines, or known flaws in unpatched instances installed on web servers. And as most of these engines are open to third-party extension development, many data breaches take place through badly coded plug-ins installed by careless site administrators who only wish to access the added functionality.

But this problem isn’t without a solution. Firms with little or no security staffing and web application experience can invest in the use of cloud-based security services, which are easy to integrate with different forms of IT infrastructure.

For instance, cloud-based Web Application Firewalls (WAF) add a layer of security to web applications, and their installation is often as simple as a redirection of a website’s traffic through the WAF provider. WAFs function by monitoring website traffic at the application layer, which basically means they are much more effective than traditional security tools in discovering and blocking known attacks and zero-day exploits on web applications.
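
The application-layer inspection a WAF performs can be caricatured as matching request parameters against attack-pattern rules before they reach the application. The two regexes below are illustrative stand-ins for real rule sets such as the OWASP Core Rule Set; a sketch, not a deployable firewall.

```python
import re

# Illustrative attack patterns inspected at the application layer.
ATTACK_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # classic SQL injection probe
    re.compile(r"(?i)<script\b"),              # reflected XSS attempt
]

def allow_request(params: dict) -> bool:
    """Return False for requests whose parameters match a known pattern.

    A real WAF would also log the hit and consult far richer rules.
    """
    for value in params.values():
        if any(p.search(value) for p in ATTACK_PATTERNS):
            return False
    return True

print(allow_request({"q": "fog computing"}))                       # True
print(allow_request({"q": "1 UNION SELECT password FROM users"}))  # False
```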

According to Gartner’s Magic Quadrant 2015, WAFs are one of the most popular tools for securing websites and can act as an alternative to vulnerability scanning tools and processes for organizations that don’t have the necessary resources.

Most major cybersecurity vendors and hosting services such as Amazon and Microsoft Azure offer some kind of WAF protection to their clients, but there are also many startups and mid-sized companies that are carving out a position for themselves in the cloud-based WAF industry, including Imperva, DenyAll and Positive Technologies (ranked as Leaders and Visionaries in Gartner’s MQ).

WAFs do come with their own caveats, and they can require in-house cybersecurity talent. They also have shortcomings when dealing with the complexity and diversity that characterize web applications. However, cloud-based security solutions often remedy the situation somewhat by requiring minimal involvement from the client and deferring the bulk of the work to the WAF provider and its teams of experts.

Recent hacks serve as a reminder that more than our own data and security is at stake when we’re operating websites. It’s hard to call any single tool a panacea that will plug all the holes and prevent your website from becoming a vehicle for cybercrime, which is why we’re still seeing websites getting hacked on a large scale. But that doesn’t mean you shouldn’t try your best to protect your website (and, of course, its visitors) with as many tools as you can lay your hands on. After all, as the saying goes, you only need a stronger lock than your neighbor’s.

Featured Image: Bryce Durbin
Source: TechCrunch

How IoT and machine learning can make our roads safer

The transportation industry is associated with high maintenance costs, disasters, accidents, injuries and loss of life. Hundreds of thousands of people across the world are losing their lives to car accidents and road disasters every year. According to the National Safety Council, 38,300 people were killed and 4.4 million injured on U.S. roads alone in 2015.

The related costs — including medical expenses, wage and productivity losses and property damage — were estimated at $152 billion. And this doesn’t account for general maintenance and repairs costs of the road and highway systems, which earmark billions of dollars of public funds every year — and are still underfunded.

Hopefully, this problem can be addressed by the Internet of Things and machine learning, two technologies that are taking the world by storm and will someday become an inherent part of every aspect of our lives.

With the right implementation of IoT technology, we can mitigate risks, prevent damage and reduce costs. The deployment of smart, connected sensors, combined with machine-learning-powered analytics tools, can enable us to gather information, make predictions and reach decisions that will make our roads safer.

Improving driver behavior and reducing accidents

The human element continues to be the main contributing factor to road fatalities. Reckless driving, distracted driving, DUI, speeding and other bad habits increase the likelihood of road accidents.

In every country, there are rules and regulations that enforce safe driving. But while rules are rather reactive in nature and force drivers to drive safely to avoid punishment, IoT can play a more proactive role in helping drivers adopt safe habits.

Telematics company Geotab is using IoT to help fleet management companies significantly reduce accidents. “Until autonomous vehicles are fully rolled out, we have to employ technology to help manage the human factor in driving,” says Neil Cawse, the company’s CEO. “Data collection is the first step. With telematics, you can know an infinite number of things about the vehicle and what the driver is doing.”

Telematics and Onboard Diagnostics (OBD) are helping fleet management companies and insurance firms collect a wealth of information about vehicles and drivers, including measurable events such as speeding, seatbelt usage, sharp cornering or over-acceleration. This helps them in promoting and incentivizing safe driving by using a number of measures, such as scorecarding, which rates drivers based on their driving data.
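
A scorecard of this kind can be as simple as weighted deductions from a perfect score per recorded event. The event names and weights below are hypothetical assumptions for illustration, not Geotab’s actual methodology.

```python
# Hypothetical per-event penalty weights for a driver scorecard.
PENALTIES = {
    "speeding": 5,
    "harsh_braking": 3,
    "sharp_cornering": 2,
    "seatbelt_unbuckled": 4,
}

def driver_score(events, base=100):
    """Score a driver from telematics event counts, floored at zero.

    Unknown event types contribute no penalty.
    """
    deductions = sum(PENALTIES.get(name, 0) * count
                     for name, count in events.items())
    return max(0, base - deductions)

trip = {"speeding": 2, "harsh_braking": 1, "sharp_cornering": 3}
print(driver_score(trip))  # 100 - (10 + 3 + 6) = 81
```

A fleet manager could then rank drivers by score or trigger coaching below a threshold, which is the incentive loop the article describes.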

“The second step is driver coaching — using the data to help the driver learn to drive safer,” says Cawse. “In-vehicle driver feedback tools are an effective way to change driver behavior.” Geotab users leverage vehicle-in-reverse detection, collision avoidance systems, mobile cameras and video and spoken word notifications to detect risks and receive live in-vehicle feedback and warnings. “Not only can it keep people safe, but it is a major benefit to companies looking to manage risk and control costs related to accidents,” adds Cawse.

IoT and telematics are also reducing the overhead in ensuring compliance with regulations such as hours of service, which require drivers to record such details as when they start driving, when they stop and other important trip details. “This safety measure combats accidents due to lack of sleep,” Cawse explains.

In the event of an accident, analysis of data collected through IoT technology will help reduce risks in the future. “Accident data collected by telematics could be used to identify the most dangerous intersections in our cities,” says Cawse. “With that kind of information, municipalities and departments of transportation can put their dollars exactly where it’s needed the most, and where it will make the greatest impact. Geotab collects over 900 million points of data each day. The potential for how this data could be used is limitless.”

Preventing disasters

Providing timely maintenance for the network of bridges, roads and highways of a country is a challenging task, and often the mismanagement of transportation infrastructure leads to collapsed bridges, potholes in roads, unwanted congestion and fatalities.

The 2007 collapse of the I-35W Mississippi Bridge in Minneapolis, which cost the state millions of dollars in repairs and resulted in 13 deaths, is one of the starkest examples of the consequences of failing to address infrastructural problems in time. Presently, of the 607,000 bridges across the U.S., roughly 65,000 are classified as “structurally deficient” and 20,000 as “fracture critical,” problems that will take years and billions of dollars to fix. This means another tragedy like I-35W could happen at any time, because, with limited public funding, it’s hard to keep track of and prioritize repairs and maintenance.

IoT can help remedy the situation. IoT sensors and smart cement (cement equipped with sensors) can monitor the structural status of roads and bridges under dynamic conditions and alert us about deficiencies before they turn into catastrophes.

Professor Jerry Lynch of the Center for Wireless Integrated MicroSensing and Systems (WIMS2) has instrumented the New Carquinez Bridge with IoT sensors. Supported by the California Department of Transportation, the IoT ecosystem includes tri-axis accelerometers, strain gauges, and wind velocity, temperature and potentiometer displacement sensors, all read through a proprietary Narada circuit board. The data collected by the system is used to better understand the response of the bridge under conditions such as high wind loading and earthquakes, and to determine in real time when the structure is at risk or in need of repairs.
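A minimal sketch of the real-time side of such monitoring, assuming a single strain gauge and a simple rolling statistical baseline (real structural-health systems use far richer physical models):

```python
from collections import deque
from statistics import mean, stdev

class StrainMonitor:
    """Flag strain-gauge readings that deviate sharply from a rolling baseline.
    A toy illustration; actual bridge monitoring combines many sensor types."""
    def __init__(self, window=20, n_sigmas=4.0):
        self.readings = deque(maxlen=window)
        self.n_sigmas = n_sigmas

    def observe(self, microstrain):
        alert = False
        if len(self.readings) >= 10:  # need a baseline before judging
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(microstrain - mu) > self.n_sigmas * sigma:
                alert = True  # possible high wind loading, impact or damage
        self.readings.append(microstrain)
        return alert

monitor = StrainMonitor()
normal = [100 + (i % 5) for i in range(15)]   # quiet baseline readings
alerts = [monitor.observe(x) for x in normal + [450]]
print(alerts[-1])  # True: the 450-microstrain spike trips the alarm
```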

The replacement of the I-35W Mississippi Bridge cost the state $234 million. Today, that sum can be used to instrument thousands of bridges and prevent tragedies in the future.

Detecting road conditions

Controlling traffic and keeping roads clear can help immensely in reducing accidents and incidents that occur because of poor road and weather conditions. Driving safety, in particular, is dependent on being able to monitor road surfaces and identify road hazards.

IoT road sensors can provide real-time data from roads to help divert the flow of traffic away from areas of hazard. French IoT startup HIKOB is exploring the possibilities in several French cities. “Road sensors are going to be one of the most crucial developments that will take place in the world of transportation with the introduction of the Internet of Things technology,” says Ludovic Broquereau, VP of marketing and business development at HIKOB. “Road sensors can be easily embedded under the roads so that they can effectively measure the changes in temperature, traffic volume and humidity, among other weather and traffic constraints.”

The data collected by the sensors is gathered in servers, where it is analyzed to provide concerned authorities with real-time information about traffic and road conditions in the IoT-equipped regions. The gleaned insights can help in a number of scenarios, including optimizing the use of limited maintenance resources and equipment, as well as predicting and alerting about possible hazards and accidents that may take place because of poor road and weather conditions.

HIKOB is testing its solution in the city of Troyes, where it is using a host of smart sensors and IoT gateways to monitor traffic and weather conditions to assist in improving road safety.

The future of IoT in roads

IoT is already helping make our roads much safer. But this is just the beginning. The true power of IoT in ensuring safe driving continues to be unleashed as cars move toward becoming fully autonomous and start interacting with their environment and making decisions on their own. This could unlock new possibilities, such as preventing drivers from entering hazardous areas, assisting in avoiding collisions, selecting detours and avoiding traffic jams and many other scenarios where the power of IoT and machine learning combine to create new opportunities. We still have a long way to go, but maybe one day accidents will become a thing of the past.

Featured Image: Bartolomiej Pietrzyk/Shutterstock
Source: TechCrunch

Exploiting machine learning in cybersecurity

Thanks to technologies that generate, store and analyze huge sets of data, companies are able to perform tasks that previously were impossible. But the added benefit does come with its own setbacks, specifically from a security standpoint.

With reams of data being generated and transferred over networks, cybersecurity experts will have a hard time monitoring everything that gets exchanged — potential threats can easily go unnoticed. Hiring more security experts would offer a temporary reprieve, but the cybersecurity industry is already dealing with a widening talent gap, and organizations and firms are hard-pressed to fill vacant security posts.

The solution might lie in machine learning, the phenomenon that is transforming an increasing number of industries and has become the buzzword in Silicon Valley. But while more and more jobs are being handed over to robots and artificial intelligence, can machines be entrusted with a responsibility as complicated as cybersecurity? The topic is being hotly debated by security professionals, with strong arguments on both sides. In the meantime, tech firms and security vendors are looking for ways to add this hot technology to their cybersecurity arsenal.

Pipe dream or reality?

Simon Crosby, CTO at Bromium, calls machine learning the pipe dream of cybersecurity, arguing that “there’s no silver bullet in security.” What backs up this argument is the fact that in cybersecurity, you’re always up against some of the most devious minds, people who already know very well how machines and machine learning work and how to circumvent their capabilities. Many attacks are carried out through minuscule and inconspicuous steps, often concealed in the guise of legitimate requests and commands.

Others, like Mike Paquette, VP of Products at Prelert, argue that machine learning is cybersecurity’s answer to detecting advanced breaches, and it will shine in securing IT environments as they “grow increasingly complex” and “more data is being produced than the human brain has the capacity to monitor” and it becomes nearly impossible “to gauge whether activity is normal or malicious.”

Stephan Jou, CTO at Interset, is a proponent of machine-learning-powered cybersecurity. He acknowledges that AI is not yet ready to replace humans, but says it can boost human efforts by automating the process of recognizing patterns.

What’s undeniably true is that machine learning has very distinct use cases in the realm of cybersecurity, and even if it’s not a perfect solution, it is helping improve the fight against cybercrime.

Attended machine learning

The main argument against security solutions powered by unsupervised machine learning is that they churn out too many false positives and alerts, effectively resulting in alert fatigue and a decrease in sensitivity. On the other hand, the volume of data and events generated in corporate networks is beyond the capacity of human experts. The fact that neither can shoulder the burden of fighting cyberthreats alone has led to the development of solutions where AI and human experts join forces instead of competing with each other.

MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has led one of the most notable efforts in this regard, developing a system called AI2, an adaptive cybersecurity platform that uses machine learning and the assistance of expert analysts to adapt and improve over time.

The system, which takes its name from the combination of artificial intelligence and analyst intuition, reviews data from tens of millions of log lines each day and singles out anything it finds suspicious. The filtered data is then passed on to a human analyst, who provides feedback to AI2 by tagging legitimate threats. Over time, the system fine-tunes its monitoring and learns from its mistakes and successes, eventually becoming better at finding real breaches and reducing false positives.
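The feedback loop can be sketched in a few lines. The features, weights and update rule below are invented for illustration and are not AI2’s actual algorithm:

```python
# Sketch of an AI2-style loop: the model surfaces its top-scoring events,
# an analyst labels them, and the labels feed back into the model.

def score(event, weights):
    return sum(weights.get(f, 0.0) * v for f, v in event["features"].items())

def top_suspicious(events, weights, k=2):
    return sorted(events, key=lambda e: score(e, weights), reverse=True)[:k]

def analyst_feedback(event):
    return event["is_attack"]  # stand-in for a human analyst's verdict

weights = {"failed_logins": 1.0, "bytes_out": 1.0}
events = [
    {"features": {"failed_logins": 30, "bytes_out": 1}, "is_attack": True},
    {"features": {"failed_logins": 0, "bytes_out": 2}, "is_attack": False},
    {"features": {"failed_logins": 25, "bytes_out": 0}, "is_attack": True},
]

for _ in range(3):  # each "day", review a small batch and learn from it
    for event in top_suspicious(events, weights):
        label = analyst_feedback(event)
        # Perceptron-style update: reinforce features of confirmed attacks,
        # discount features of false positives.
        sign = 1.0 if label else -1.0
        for f, v in event["features"].items():
            weights[f] = weights.get(f, 0.0) + 0.1 * sign * v

print(score(events[0], weights) > score(events[1], weights))  # True
```

The key property is the division of labor: the model only has to rank events well enough that the analyst’s limited attention lands on the ones worth labeling.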

Research lead Kalyan Veeramachaneni says, “Essentially, the biggest savings here is that we’re able to show the analyst only up to 200 or even 100 events per day,” which is considerably less than the tens of thousands of security events that cybersecurity experts have to deal with every day.

The platform was tested during a 90-day period, crunching a daily dose of 40 million log lines generated from an e-commerce website. After the training, AI2 was able to detect 85 percent of the attacks without human assistance.

Finnish security vendor F-Secure is another firm that has placed its bets on the combination of human and machine intelligence, aiming to reduce the time it takes to detect and respond to cyberattacks. On average, it takes organizations several months to discover a breach. F-Secure wants to cut that time frame down to 30 minutes with its Rapid Detection Service.

The system gathers data from a combination of software installed on customer workstations and sensors placed in network segments. The data is fed to threat intelligence and behavioral analytics engines, which use machine learning to classify incoming samples, establish a baseline of normal behavior and identify outliers and anomalies. The system uses near-real-time analytics to identify known security threats, stored data analytics to compare samples against historical data and big data analytics to identify evolving threats through anonymized datasets gathered from a vast number of clients.
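A toy version of that layered approach, assuming a simple hash blocklist for known threats and a statistical baseline for anomalies (F-Secure’s actual engines are proprietary and far more elaborate):

```python
from statistics import mean, stdev

# Illustrative blocklist entry (this happens to be the MD5 of an empty file).
KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}

def classify(sample_hash, bytes_sent, history):
    """Two-stage check echoing the near-real-time vs. stored-data split:
    match against known threats first, then compare to a historical baseline."""
    if sample_hash in KNOWN_BAD_HASHES:
        return "known-threat"
    mu, sigma = mean(history), stdev(history)
    if sigma > 0 and (bytes_sent - mu) / sigma > 3:  # 3-sigma outlier
        return "anomaly"
    return "normal"

history = [200, 220, 210, 190, 205, 215, 195, 208]  # past daily byte counts
print(classify("d41d8cd98f00b204e9800998ecf8427e", 210, history))  # known-threat
print(classify("0" * 32, 5000, history))                           # anomaly
print(classify("0" * 32, 212, history))                            # normal
```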

At the heart of the system is a team of cybersecurity experts who will go through the results of the machine learning analysis and ultimately identify and handle security incidents. With the bulk of the work being carried out by machine learning, the experts and software engineers can become much more productive and focus on more advanced concepts, such as identifying relationships between threats, reverse engineering attacks and enhancing the overall system.

“The human component is an important factor,” says Erka Koivunen, cybersecurity advisor at F-Secure. “Attackers are human, so to detect them you can’t rely on machines alone. Our experts know how attackers think, the very tactics they use to hide their presence from standard means of detection.”

Sifting through unstructured data

While data gathered from endpoints and network traffic helps identify threats, it accounts for only a small part of the cybersecurity picture. A lot of the intelligence and information required to detect and protect enterprises from emerging threats lies in unstructured data such as blog posts, research papers, news stories and social media posts. Being able to make sense of these resources is what gives cybersecurity experts the edge over machines.

Tech giant IBM wants to bridge this gap by taking advantage of the natural language processing capabilities of its flagship artificial intelligence platform Watson. The company intends to take advantage of Watson’s unique capabilities in sifting through unstructured data to read and learn from thousands of cybersecurity documents per month, and apply that knowledge to analyze, identify and prevent cybersecurity threats.

“The fascinating difference between teaching Watson and teaching one of my children,” Caleb Barlow, vice president at IBM Security, told Wired, “is that Watson never forgets.”

Combining this capability with the data already being gathered by IBM’s threat intelligence platform, X-Force Exchange, the company wants to address the shortage of talent in the industry by raising Watson’s level of efficiency to that of an expert assistant and help reduce the rate of false positives.

However, Barlow doesn’t believe that Watson is here to replace humans. “It’s not about replacing humans, but about making them superhumans,” he said in an interview with Fortune.

If the experiment is successful, Watson should be deployed to enterprise customers later this year as a cloud service named Watson for Cyber Security. Until then, it has a lot to learn about how cybersecurity works, which is no easy feat.

Cybersecurity startup Massive Alliance uses a slightly different approach to glean information from unstructured data. Its cybersecurity platform Strixus uses a set of sophisticated proprietary tools that anonymously gather data related to its customers from the surface web (public search engines), deep web (non-indexed pages) and dark web (TOR-based networks).

The collected data is analyzed by a sentiment-based machine learning engine that discerns the general emotion of content. The mechanics behind the technology include mathematical engines that produce adaptive models of behavior of threat actors and determine the danger they pose against the client. The results are finally submitted to analysts who process the information and spot potential risks.
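As a rough illustration of sentiment-based filtering, here is a toy lexicon scorer; the word lists and scoring rule are invented, and Strixus’s actual models are far more sophisticated:

```python
# Toy lexicon-based "threat sentiment" scorer. Positive scores suggest hostile
# intent worth escalating to a human analyst; the lexicons are illustrative.
HOSTILE = {"leak", "dump", "hack", "expose", "attack"}
BENIGN = {"release", "update", "partnership", "award"}

def threat_sentiment(text):
    words = [w.strip(".,!") for w in text.lower().split()]
    hostile = sum(w in HOSTILE for w in words)
    benign = sum(w in BENIGN for w in words)
    return hostile - benign  # positive => escalate to a human analyst

print(threat_sentiment("Planning to dump and expose the client database"))  # 2
print(threat_sentiment("New partnership award announced"))                  # -2
```

In practice such a score would be one feature among many feeding the behavioral models described above, not a verdict on its own.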

This technique gives the cybersecurity firm the unique ability to monitor billions of results on a daily basis, identify and alert about the publication of potentially brand-damaging information and proactively detect and prevent attacks and data loss before they happen.

“To date, human intelligence is still the most pointed form of intelligence and can be the most effective in a specific operation or crisis,” says Brook Zimmatore, the company’s CEO. “However, focus on Machine Learning technology across any industry is vital as human efforts have their limitations.”

Will artificial intelligence replace cybersecurity experts?

It’s still too early to determine whether any of these efforts will result in cybersecurity experts being totally replaced by machine-learning-based solutions. Maybe the balance will shift in the future, but, for the moment, humans and robots have no other choice than to unite against the ever-increasing threats that lurk in cyberspace.

Featured Image: Omelchenko/Shutterstock
Source: TechCrunch

Decentralizing IoT networks through blockchain

Imagine a washer that autonomously contacts suppliers and places orders when it’s low on detergent, performs self-service and maintenance, downloads new washing programs from outside sources, schedules its cycles to take advantage of electricity prices and negotiates with peer devices to optimize its environment; a connected car, smart enough to find and choose the best deal for parts and services; a manufacturing plant where the machinery knows when to order repairs for some of its parts without the need of human intervention.

All these scenarios — and many more — will be realized thanks to the Internet of Things (IoT). Already, many of the industries that historically didn’t fit well with computers have been transformed by the billions of IoT devices connected to the internet; other industries will follow suit as billions more enter the fray.

The possibilities are virtually countless, especially when the power of IoT is combined with that of other technologies, such as machine learning. But some major hurdles will surface as billions of smart devices will want to interact among themselves and with their owners. While these challenges cannot be met with the current models that are supporting IoT communications, tech firms and researchers are hoping to deal with them through blockchain, the technology that constitutes the backbone of the famous bitcoin.

The problem with the centralized model

Current IoT ecosystems rely on centralized, brokered communication models, otherwise known as the server/client paradigm. All devices are identified, authenticated and connected through cloud servers that sport huge processing and storage capacities. Connections between devices have to go exclusively through the internet, even if the devices happen to be a few feet apart.

While this model has connected generic computing devices for decades, and will continue to support small-scale IoT networks as we see them today, it will not be able to respond to the growing needs of the huge IoT ecosystems of tomorrow.

Existing IoT solutions are expensive because of the high infrastructure and maintenance cost associated with centralized clouds, large server farms and networking equipment. The sheer amount of communications that will have to be handled when IoT devices grow to the tens of billions will increase those costs substantially.

Even if these unprecedented economic and engineering challenges are overcome, cloud servers will remain a bottleneck and a point of failure that can disrupt the entire network. This is especially important as critical tasks involving human health and life become dependent on IoT.

Moreover, the diversity of ownership between devices and their supporting cloud infrastructure makes machine-to-machine (M2M) communications difficult. There’s no single platform that connects all devices and no guarantee that cloud services offered by different manufacturers are interoperable and compatible.

Decentralizing IoT networks

A decentralized approach to IoT networking would address many of these issues. Adopting a standardized peer-to-peer communication model to process the hundreds of billions of transactions between devices will significantly reduce the costs associated with installing and maintaining large centralized data centers and will distribute computation and storage needs across the billions of devices that form IoT networks. This will prevent failure in any single node from bringing the entire network to a halt.

However, establishing peer-to-peer communications will present its own set of challenges, chief among them the issue of security. And as we all know, IoT security is about much more than just protecting sensitive data. The proposed solution will have to maintain privacy and security in huge IoT networks and offer some form of validation and consensus for transactions to prevent spoofing and theft.

The blockchain approach

Blockchain offers an elegant solution to the peer-to-peer communication platform problem. It is a technology that allows the creation of a distributed digital ledger of transactions that is shared among the nodes of a network instead of being stored on a central server. Participants are registered with blockchains to be able to record transactions. The technology uses cryptography to authenticate and identify participating nodes and allow them to securely add transactions to the ledger. Transactions are verified and confirmed by other nodes participating in the network, thus eliminating the need for a central authority.

The ledger is tamper-proof and cannot be manipulated by malicious actors because it doesn’t exist in any single location, and man-in-the-middle attacks cannot be staged because there is no single thread of communication that can be intercepted. Blockchain makes trustless, peer-to-peer messaging possible and has already proven its worth in the world of financial services through cryptocurrencies such as Bitcoin, providing guaranteed peer-to-peer payment services without the need for third-party brokers.
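The core idea, hash-linked blocks in which tampering with any past transaction invalidates every later link, can be sketched in a few dozen lines. A real blockchain adds digital signatures, a consensus mechanism and network distribution on top of this:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form, excluding its own hash field.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    block = {
        "index": len(chain),
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
        "transactions": transactions,
    }
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    """Each block's hash covers its predecessor's hash, so editing any past
    transaction breaks every later link in the chain."""
    return all(
        b["hash"] == block_hash(b)
        and b["prev_hash"] == (chain[i - 1]["hash"] if i else "0" * 64)
        for i, b in enumerate(chain)
    )

chain = []
add_block(chain, [{"from": "washer-01", "to": "supplier", "order": "detergent"}])
add_block(chain, [{"from": "car-17", "to": "garage", "order": "brake pads"}])
print(verify(chain))  # True
chain[0]["transactions"][0]["order"] = "caviar"  # tamper with history
print(verify(chain))  # False
```

Because every node holds a copy of the ledger and can run this verification independently, no central authority is needed to detect the manipulation.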

Tech firms are now mulling over porting the usability of blockchain to the realm of IoT.

The concept can be ported directly to IoT networks to deal with the issue of scale, allowing billions of devices to share the same network without the need for additional resources. Blockchain also addresses the issue of conflict of authority between different vendors by providing a standard in which everyone has equal stakes and benefits.

This helps unlock M2M communications that were practically impossible under previous models, and allows for the realization of totally new use cases.

Concrete uses of blockchain in IoT

The IoT and blockchain combination is already gaining momentum, and is being endorsed by both startups and tech giants. IBM and Samsung introduced their proof-of-concept system, ADEPT, which uses blockchain to support next-generation IoT ecosystems that will generate hundreds of billions of transactions per day.

In one of the first papers to describe the use of blockchain in IoT, IBM’s Paul Brody describes how new devices can be initially registered in a universal blockchain when assembled by the manufacturer, and later transferred to regional blockchains after being sold to dealers or customers, where they can autonomously interact with other devices that share the blockchain.

The combination of IoT and blockchain is also creating the possibility of a circular economy and liquefying the capacity of assets, where resources can be shared and reused instead of being purchased once and disposed of after use. An IoT hackathon hosted by blockchain platform leader Ethereum put the concept of blockchain-powered IoT to the test, and some interesting ideas were presented, including in the domain of energy sharing and electricity and gas billing.

Filament is another startup that is investing in IoT and blockchain with a focus on industrial applications such as agriculture, manufacturing and oil and gas. Filament uses wireless sensors, called Taps, to create low-power autonomous mesh networks for data collection and asset monitoring, without requiring a cloud or central network authority. The firm uses blockchain technology to identify and authenticate devices and also to charge for network and data services through bitcoin.

Chain of Things is a consortium that is exploring the role of blockchain in dealing with scale and security issues in IoT. In a recent hackathon held in London, the group demonstrated the use of blockchain and IoT in a case study involving a solar energy stack designed to provide reliable and verifiable renewable data, speeding up incentive settlements and reducing opportunities for fraud. The system facilitates the process in which a solar panel connects to a data logger, tracks the amount of solar energy produced, securely delivers that data to a node and records it on a distributed ledger that is synced across a broader global network of nodes.

Caveats and challenges

The application of blockchain to IoT isn’t without flaws and shortcomings, and there are a few hurdles that need to be overcome. For one thing, there’s dispute among bitcoin developers over the architecture of the underlying blockchain technology, which has its roots in problems stemming from the growth of the network and the rise in the number of transactions. Some of these issues will inevitably apply to the extension of blockchain to IoT. These challenges have been acknowledged by tech firms, and several solutions, including side-chains, tree-chains and mini-blockchains, are being tested to fix the problem.

Processing power and energy consumption are also points of concern. Encryption and verification of blockchain transactions are computationally intensive operations and require considerable horsepower to carry out, which is lacking in many IoT devices. The same goes for storage, as ledgers start to grow in size and need to be redundantly stored in network nodes.

And, as Machina Research analyst Jeremy Green explains, autonomous IoT networks powered by blockchain will pose challenges to the business models that manufacturers are seeking, which include long-term subscription relationships with continuing revenue streams, and a big shift in business and economic models will be required.

It’s still too early to say whether blockchain will be the definitive answer to the problems of the fast-evolving IoT industry. It’s not yet perfect; nonetheless, it’s a very promising combination for the future of IoT, where decentralized, autonomous networks will have a decisive role.

Featured Image: Morrowind/Shutterstock
Source: TechCrunch

The implications of large IoT ecosystems

The Internet of Things genie is out of the bottle and growing at an accelerating pace. According to Gartner, 6.4 billion connected things will be in use worldwide in 2016, up 30 percent from 2015. This number will soar to more than 20 billion by 2020.

Others present even higher estimates.

The opportunities in improved utility, energy-saving, efficiency and safety lying in the data gathered by such immense numbers of connected sensors and smart devices are huge and without precedent.

However, the challenges that come with the chaotic growth of IoT are also new and, in some cases, unfamiliar, and not knowing and anticipating them can slow down the process — if not halt it altogether.

Here are some of the changes we’ll face as IoT becomes more ingrained in our lives — and how the tech community is getting prepared to deal with them.

Device authentication and authorization

Identifying devices within a network is key to securing IoT ecosystems and preventing the infiltration of intruders. In current solutions, device authentication and authorization is mostly carried out through centralized cloud-based servers, which is perfectly viable in small-scale IoT networks where dozens of nodes are involved. But as ecosystems start to grow and thousands and millions of sensors and gadgets enter the fray, authentication can become a bottleneck, especially if the network loses internet connection for any amount of time.

“Most people don’t understand the notion of scale,” says Ken Tola, CEO of IoT security startup Phantom. “Effective security needs to provide a realistic mechanism to control millions of devices.” That becomes a nightmare with current solutions. “Current options rely on internet connections which kill batteries, overwhelm the extremely fragile mesh networks on which most IoT systems rely and fail completely when the internet goes down,” Tola explains.

According to Tola, the solution is to move much of the functionality to the edge, between devices themselves. “Working in a peer-based manner makes it much easier to handle scale,” he says. “No matter how big a system is, when authentication/authorization takes place between devices, it can happen simultaneously across millions of devices without requiring internet access, heavy network loads or any other burdensome features.”
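One common way to authenticate peers without a cloud round trip is a challenge-response handshake over a pre-shared key. Phantom’s actual protocol is proprietary; this sketch only illustrates the general edge-authentication idea:

```python
import hashlib
import hmac
import secrets

class Device:
    """Peer that can prove knowledge of a shared key without revealing it
    and without any internet connectivity."""
    def __init__(self, name, shared_key):
        self.name = name
        self.key = shared_key

    def challenge(self):
        self.nonce = secrets.token_bytes(16)  # fresh nonce defeats replay
        return self.nonce

    def respond(self, nonce):
        return hmac.new(self.key, nonce + self.name.encode(),
                        hashlib.sha256).digest()

    def verify(self, peer_name, response):
        expected = hmac.new(self.key, self.nonce + peer_name.encode(),
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)
lock, phone = Device("lock", key), Device("phone", key)
print(lock.verify("phone", phone.respond(lock.challenge())))     # True
imposter = Device("phone", secrets.token_bytes(32))              # wrong key
print(lock.verify("phone", imposter.respond(lock.challenge())))  # False
```

Because each handshake is purely local, millions of such exchanges can run in parallel across a mesh without any central server in the loop.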

Tola’s startup has worked on a lightweight, scalable solution for M2M (machine-to-machine) connections with minimal internet connectivity required. Phantom is an invisible (thus the name) smart security layer that sits between a device and any connected network; it is able to securely identify two devices in a peer-based relationship, authorize the levels and types of communication and secure the conversations between those two devices.

Local networks are secured through policies stored in nodes. Policies can be distributed from the cloud or delivered through Bluetooth or direct USB sticks. Strong hash chaining techniques ensure the safe transmission of new policies. “Leveraging this local validation system, we can finally provide truly effective security at scale, and securely control millions of devices,” says Tola.
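One plausible reading of “hash chaining” here is a Lamport-style one-way chain: the node is provisioned with the chain’s final hash, and each policy update ships with the next preimage, which the node can verify but an attacker cannot forge without the secret seed. The details below are illustrative, not Phantom’s actual scheme:

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def make_chain(seed, length):
    """Publisher builds the full chain and keeps it; the node is provisioned
    with only the final hash (the anchor)."""
    chain = [seed]
    for _ in range(length):
        chain.append(H(chain[-1]))
    return chain

def accept_update(anchor, token):
    """Node accepts a policy update if its token hashes to the stored anchor.
    The token then becomes the anchor for verifying the next update."""
    if H(token) == anchor:
        return token, True
    return anchor, False

chain = make_chain(b"secret-seed", 5)
node_anchor = chain[-1]                                  # provisioned anchor
node_anchor, ok = accept_update(node_anchor, chain[-2])  # valid update token
print(ok)  # True
_, ok = accept_update(node_anchor, b"forged-token")      # forgery fails
print(ok)  # False
```

The node never needs internet access to verify an update; a policy can arrive over Bluetooth or a USB stick and still be authenticated locally.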

Wireless communications

At its core, the Internet of Things is the extension of internet connectivity from our computers to the devices and sensors that surround us. And for the most part, that connectivity has to be wireless, especially in the personal area network, i.e. the ensemble of wearable, portable and implanted devices we carry on our person.

Most IoT devices rely on radio frequency (RF) technology such as Bluetooth, ZigBee and Wi-Fi for communications. Otherwise known as far-field transmission, RF is great when communicating over long distances, but becomes problematic when applied to short-range, isolated IoT ecosystems, like the wireless personal area network.

“Link and network security become increasingly difficult as the number of any RF devices increases,” says Dr. Michael Abrams, CEO of FreeLinc, a research and development corporation. “Handshake and encryption protocols are projected into free space, and the requirement for decreasing power consumption in Wearables translates to less room for encryption protocols. These issues are clearly reflected by Bluetooth’s increasingly poor reliability and security record.”

Already, RF-based devices are shutting each other down due to interference, a situation that will grow worse when the IoT industry grows by the billions.

One solution would be “to get the FCC to allot additional bandwidth — such as 5GHz for Wi-Fi, but with trillions of projected devices to eventually connect, the problems will recur,” says Dr. Abrams.

Another problem in RF-based wireless communications is power consumption, which becomes a growing issue as more devices are added to IoT space, especially as many are powered by batteries and will be deployed in unattended environments.

An alternative, says Abrams, is to substitute Near Field Magnetic Induction (NFMI) for RF. NFMI uses the modulations of magnetic fields to transfer data wirelessly between two points. Its main strength is its attenuation. It decays a thousand times faster than RF signals, which eliminates much of the interference and security issues that are attributed to technologies such as Bluetooth.
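The difference is easy to quantify with the textbook approximations: far-field RF power falls off as 1/r² (20 dB per decade of distance), while the near-field magnetic component falls off roughly as 1/r⁶ (60 dB per decade):

```python
import math

def path_loss_db(r1, r2, exponent):
    """Extra loss in dB when moving from distance r1 to r2, for a signal
    whose power falls off as 1/r**exponent."""
    return 10 * exponent * math.log10(r2 / r1)

print(path_loss_db(1, 10, exponent=2))  # RF far field:   20.0 dB per decade
print(path_loss_db(1, 10, exponent=6))  # NFMI near field: 60.0 dB per decade
```

At ten times the distance, the magnetic field amplitude is a thousand times weaker, versus only ten times weaker for an RF field, which is the “thousand times faster” decay Abrams describes.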

“NFMI solves many of these problems,” Abrams says. “It creates a wireless ‘bubble’ around the user, in which multiple devices can connect, and outside the signals cannot be seen. Security protocols exist only within the bubble and are not projected into free space. The rapid signal decay allows the same frequency to be used just a short distance away, practically eliminating the possibility of spectrum contention.”

NFMI is based on the same principle as Near Field Communication (NFC), which is found in all new-generation smartphones, but it extends the reading distance from 1-4 inches up to 9 feet and offers a 400 Kbps data transfer rate.

NFMI has been used in hearing aids, pacemakers and mission critical communications for more than a decade. Abrams believes it will prove its worth in a new way in the age of IoT.

Device and traffic administration

The increased number of IoT devices can quickly turn into a management nightmare. In order to make the best use of the increased utility provided by the millions of smart meters, parking and lighting sensors, traffic controls, crowd movement detection sensors and many other gadgets that are scattered across smart homes and cities, you need to be able to efficiently control their traffic and functionality. Administration, integration and connectivity should be as simple as possible and require the least amount of human intervention.

“Until now the IoT networks were not management or capacity constrained,” says Yegor Popov, co-founder and CEO of WAVIoT, an IoT networking startup. “However, one of the biggest challenges in IoT networks today is the number of different smart devices and sensors transmitting and receiving data at the same time. This challenge needs to be addressed in order to keep the network under control.”

Popov specifically alludes to Low-Power Wide Area Networks (LPWAN), where huge numbers of devices will soon share the same network in cities. Increased numbers of nodes will require real-time re-allocation of resources in order to improve efficiency and prevent interference.

“Based on analysts’ predictions, by 2023 we will connect to the internet an additional 40 billion devices,” says Marat Zaripov, WAVIoT’s CMO. “15 billion of those are estimated to be LPWAN. It is 6 million new LPWAN devices per day. And if these predictions are even remotely correct, then network management will become one of the key issues.”

“A human cannot control a billion nodes connected in a wide-area,” Popov explains. He plans on solving the problem through a technology that has become the buzzword in many industries: artificial intelligence.

WAVIoT is using machine learning for the development of a dynamic, automated network management framework. Their proprietary algorithm, dubbed Albert, provides real-time distributed system control and self-management capabilities for huge long-range IoT networks consisting of billions of smart devices and sprawling across millions of square miles. The system uses trained neural networks and Bayesian methods to optimize the interaction of nodes and IoT gateways on the network.

Popov calls it “a decentralized living organism which can adapt itself based on machine learning algorithms to provide optimal operation for the entire network.”

Albert will meet its first challenge in a real-life project in Sofia, the Bulgarian capital, where it will handle more than a million different smart city devices.

Embracing the change overtaking IoT

In many ways, the transformations overtaking the Internet of Things will have huge implications. Many of the technologies we use will evolve and adapt to support the needs of an increasingly connected world. Those that don't will be buried under the weight of billions of connected devices and replaced by new technologies ready to take on the challenges introduced by the explosion of IoT. Scalability will be the key to winning this game of survival.

Featured Image: faithie/Shutterstock
Source: TechCrunch

The gaming industry can become the next big target of cybercrime

Video-game-related crime is almost as old as the industry itself. But while illegal copies and pirated versions of games were the previously dominant forms of illicit activity related to games, recent developments and trends in online gaming platforms have created new possibilities for cybercriminals to swindle huge amounts of money from an industry that is worth nearly $100 billion. And what's worrisome is that publishers are not the only targets; the players themselves are becoming victims of this new form of crime.

Recent trends prove just how attractive the gaming community has become for cybercriminals and how lucrative the game-hacking business is becoming, which underlines the importance for developers, manufacturers and gamers alike to take game security more seriously.

New features breed new hacking possibilities

The recent wave of malware attacks against Steam, the leading digital entertainment distribution platform, is a perfect example of how game-related crime has changed in recent years.

For those who are unfamiliar, Steam is a multi-OS platform owned by gaming company Valve, which acts as an e-store for video games. But what started as a basic delivery and patching network eventually grew into a fully featured gaming market that counts more than 125 million members, 12 million concurrent users and thousands of games. Aside from the online purchase of games, the platform offers features for game inventories, trading cards and other valuable goods to be purchased and attached to users’ accounts.

The transformation that has swept the gaming industry, or more specifically the shift toward the purchase and storage of in-game assets, has created new motives for malicious actors to try to break into user accounts. Aside from sensitive financial information, which all online retail platforms contain, the Steam platform now provides attackers with many other items that can be turned into money-making opportunities.

This has fueled the development of Steam Stealer, a new breed of malware that is responsible for the hijacking of millions of user accounts. According to official data recently published by Steam, credentials for about 77,000 Steam accounts are stolen every month. Research led by cybersecurity firm Kaspersky Lab has identified more than 1,200 specimens of the malware. Santiago Pontiroli and Bart P, the researchers who authored the report, maintain that Steam Stealer has “turned the threat landscape for the entertainment ecosystem into a devil’s playground.”

The malware is delivered through run-of-the-mill phishing campaigns, infected clones of gaming sites such as RazerComms and TeamSpeak or through fake versions of the Steam extension developed for the Chrome browser.

Once intruders gain access to a victim's credentials, they not only siphon the financial data related to the account, but also take advantage of any assets stored in the account, selling them via Steam trading for extra cash. Inventory items fetch several hundred dollars in some cases. According to the Steam website, "enough money now moves around the system that stealing virtual Steam goods has become a real business for skilled hackers."

Steam Stealer is being made available on malware black markets at prices as low as $3, which means “a staggering number of script-kiddies and technically-challenged individuals resort to this type of threat as their malware of choice to enter the cybercrime scene,” the Kaspersky report states. The malware-as-a-service trend is being observed elsewhere, including in the ransomware business, which, at present, is one of the most popular types of money-making malware being used by cybercriminals.

What makes the attacks successful?

A number of factors have contributed to the success of the attacks against the Steam platform, but paramount among them is the outdated perception of security in games. Developers and publishers are still focused on hardening their code against reverse engineering and piracy, while the rising threat of data breaches against games and gamers isn't getting enough attention.

“I think it’s because in the gaming world as well as in the security industry, we haven’t paid much attention to this issue in the past,” says Pontiroli, the researcher from Kaspersky, referring to the malware attacks against games.

Gamers are also to blame for security incidents, Pontiroli believes. “There’s this view from the other side of the table — from gamers — that antivirus apps slow down their machines, or cause them to lose frame rate,” he explains, which leads them to disable antiviruses or uninstall them altogether. “Nowadays you just need to realize that you can lose your account and your information.”

A separate report by video-game security startup Panopticon Labs about cyberattacks against the gaming industry maintains that in comparison to financial services and retail, the video-game industry is new and highly vulnerable to cyberattacks. “Whereas other industries now have cybersecurity rules, regulations and standards to adhere to, online video games are just now recognizing that in-game cyberattacks exist and are harmful to both revenue and reputation,” writes the report.

Matthew Cook, co-founder of Panopticon, believes that publishers are putting up with the unwanted behavior of bad actors, accepting it as a cost of doing business. "So often, the publishers we talk to refer to fighting back against these unwanted players as a game of 'whack a mole' that they can never win," he says.

In contrast, he believes, publishers can fight back and eliminate fraudulent or harmful activities, provided they get a head start in securing their games and are dedicated to keeping bad players out after they’re gone. “Unfortunately, slow, manual processes like combing through suspected bad actor reports, or performing half-hearted quarterly ban activities just won’t cut it anymore,” Cook stresses. “The bad guys have gotten too good, and there’s simply too much financial opportunity for them to be dissuaded by reactive rules and reports.”

What’s being done to deal with the threats?

Efforts are being made to improve security in software, but there's still a long way to go. For its part, Steam has rolled out Steam Guard functionality to help block account hijacking, and it is also offering two-factor and risk-based authentication through the Steam Guard Mobile Authenticator. The company is also toughening up the marketplace and recently added new restrictions that use email confirmation and put a 15-day hold on traded items in order to mitigate the risks of fraud.

However, lack of awareness and focus on gaming experience leads many users to forgo activating these features. “While [the security features] do provide a certain level of safety to their users, not all of them are aware of their existence or know how to properly configure them,” says Pontiroli. “Even with all the solutions in the world you still need to create awareness among the gaming crowd.”

Security vendors are also taking strides to provide security for gamers without disrupting the gaming experience. Most security products now offer a “gaming mode” that allows players to keep their antivirus software active but avoid receiving notifications until the end of their session.

Other firms, such as Panopticon, are working on specialized in-game security solutions that distinguish suspicious in-game activities from normal player behavior through anomaly detection and analytics. The model takes after techniques used by fraud detection tools in banking and financial platforms. This approach also helps deal with other fraudulent activities such as "gold farming," the use of botnets to generate in-game assets that are later sold on grey markets, an activity that rakes in billions of dollars of revenue every year.
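
In its simplest form, this kind of behavioral anomaly detection flags accounts whose activity sits far outside the population norm. A toy sketch (hypothetical telemetry and thresholds; real products use far richer models):

```python
import statistics

def anomalous_players(trades_per_day, threshold=3.0):
    """Flag players whose daily trade volume sits more than `threshold`
    standard deviations above the population mean -- a toy stand-in for
    the anomaly-detection models used against gold farming."""
    mean = statistics.mean(trades_per_day.values())
    stdev = statistics.pstdev(trades_per_day.values())
    return [p for p, n in trades_per_day.items()
            if stdev and (n - mean) / stdev > threshold]

# Hypothetical telemetry: one botnet-driven account among normal players.
activity = {f"player{i}": 5 for i in range(50)}
activity["farm_bot"] = 900
print(anomalous_players(activity))  # ['farm_bot']
```

The banking analogy holds: just as a card suddenly making purchases on another continent trips a fraud rule, an account trading hundreds of items a day stands out against ordinary play.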

No one is safe

The attacks against Steam are dwarfed by some of the bigger data breaches we've seen in the past year. Nonetheless, they are a stark indication of the transformation and shift that online gaming security is undergoing. Moreover, Steam isn't the only platform that has suffered data breaches in recent months and years.

A similar attack, though at a much smaller scale, was observed against Electronic Arts' gaming platform, Origin, late last year (the gaming giant never confirmed the attacks, however). Several other gaming consoles and networks have been targeted in recent years, and the plague of ransomware has already found its way into the gaming industry. This shows that every online game and platform can become the target of cyberattacks.

Nowadays, online games contain a wealth of financial and sensitive information about users, along with other valuable assets. And as is their wont, online fraudsters and cybercriminals will be following the money and aim for the weaker targets. So why bother taking the pains of hacking a banking network when there’s easier cash to be made in the gaming industry?

Securing the games requires the collective effort of security vendors and publishers. As Kaspersky’s Pontiroli puts it, “Security should not be something developers think about afterwards but at an early stage of the game development process. We believe that cross-industry cooperation can help to improve this situation.”

Featured Image: Victor Moussa/Shutterstock
Source: TechCrunch

How do you outsmart malware?

The growth of data breaches in recent months and years is in large part due to the new generation of smart malware being developed on a daily basis. Malicious actors are constantly taking advantage of technological innovations and breakthroughs to devise new ways to flood the Internet with malware that circumvents security tools, propagates within networks and siphons critical data for months without being discovered.

Traditional security tools and solutions are having a hard time protecting clients against the constantly changing landscape of security threats and malware. No matter how large, virus definition databases can't keep up with the growing number of new malware species and variants, especially when they're smart enough to evade discovery. More devious genera of malware are even succeeding in duping advanced security tools that detect threats based on behavior analysis.

Sophisticated, multi-layered security solutions are predicated on enterprise-level budgets and resources, and deploying them isn't possible for small businesses or individuals at home, who are no less likely to fall victim to malware and cyberattacks.

At present, the question is: Will security solutions keep up with the growing trend of smart malware?

The answer to that question might lie in new approaches to cybersecurity that defy the long-established reactive paradigm, which is to try spotting malware based on previously known data. Scientists and cybersecurity firms are now developing and employing new techniques based on our understanding of the mentality behind malware development, and are helping block unknown malware by manipulating the conditions and targets it seeks.

This new shift in malware detection is helping tech firms develop solutions that are smart enough to detect and block unknown viruses while being lightweight and deployable in varying execution environments.

Moving target

Both antivirus and security solutions based on behavior analysis are reactive in nature and need previous knowledge regarding the attack or vulnerable system in order to provide adequate protection. This provides attackers with an exceptional opportunity to target systems through unknown vulnerabilities.

Cybercriminals depend largely on zero days and unpatched vulnerabilities in operating systems and installed software to gain a foothold in the target computer and stage their attacks. The Symantec 2016 Internet Security Threat Report shows that cybercriminals are getting much better at discovering zero-day vulnerabilities in software.

Reducing the attack surface requires a considerable effort on the defender’s part. All the patching and updating that go into making your system immune against known attacks can be for naught if you don’t take one of the actions in time. Even with your entire system up-to-date, you have no idea of the unknown vulnerabilities that are lurking outside or even inside your network.

Experts at cybersecurity tech firm Morphisec intend to tackle this issue with a concept they call Moving Target Defense, a technique that prevents malware from finding the sought vulnerability in the first place.

“The attacker has to be stopped at the first step, before gaining an initial foothold in the system,” says Mordechai Guri, Chief Science Officer at Morphisec.

The technology suggested by Morphisec achieves this goal by concealing vulnerabilities in applications and web browsers, through a polymorphic engine that randomly scrambles the memory surface of processes at run time, making them unpredictable and indecipherable to attackers and malware. In this manner, any zero-day loophole or unpatched vulnerability will be concealed from prying eyes. “Each time an application or browser is loaded in memory,” says Guri, “we randomly change its memory structure.”

The moving target concept effectively turns the tables on the attacker: Instead of having security solutions chase malware, it is now the malware that is futilely chasing its target vulnerabilities.

“Like this, nothing is known or predictable to the attacker anymore,” Guri explains. “The attacker fails right at the beginning during the exploitation phase and is stopped before having a chance to inject malware into the target system.”
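
A toy illustration of the moving-target idea (this is not Morphisec's engine, just the underlying principle, similar in spirit to address-space layout randomization): if every load places routines at unpredictable locations, an exploit that hard-codes an address from one run will usually miss in the next.

```python
import random

def load_process(routines):
    """Toy polymorphic loader: assign each routine a random slot on
    every load, so no address is predictable across runs."""
    slots = list(range(len(routines)))
    random.shuffle(slots)
    return dict(zip(slots, routines))

routines = ["parse_input", "alloc_buffer", "useful_gadget"]
run1 = load_process(routines)
run2 = load_process(routines)

# An exploit built against run1's layout targets this slot...
gadget_slot = next(s for s, r in run1.items() if r == "useful_gadget")
# ...but in a fresh load the same slot usually holds something else,
# so the exploit fails at the exploitation phase.
print(run2[gadget_slot])
```

With a real memory space rather than three slots, the odds of an attacker's hard-coded offset landing on the intended target become vanishingly small.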

The firm has embedded the Moving Target Defense idea into a lightweight 1 MB endpoint threat prevention solution called Protector, which currently runs on Windows-based workstations and servers.

Keeping the environment hostile for the malware

Some of the more advanced security solutions use a “sandbox,” an isolated and ultra-secure environment in which executables are launched and scrutinized for the manifestation of malicious behavior before being given access to system resources. This technique helps detect and block some of the stealthier malware without allowing them to deal any damage.

In response, malware developers have learned to develop new specimens that remain dormant and refrain from executing until released from the restricted confines of the sandbox, after which they activate their payload and wreak havoc on the target system.

Minerva Labs, a cybersecurity startup that came out of stealth in January, has presented a technique that dupes malware into thinking it is constantly in a hostile environment, thus convincing it to avoid unpacking and executing its malicious payload for fear of being detected and blocked.

Minerva achieves this by simulating the constant presence of different sophisticated cybersecurity tools, such as sandboxes and Intrusion Prevention Systems (IPS), trapping the malware in a situation that prevents it from knowing where it is. Not being able to differentiate between the simulated environment and real security environment that it tries to evade, the malware will continue to remain inactive, waiting for conditions that will never materialize.
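
The deception can be modeled in a few lines (a toy sketch of the concept, not Minerva's product; the artifact names are made up): evasive malware probes for signs of analysis tooling and stays dormant if it finds any, so the defender simply ensures such signs always appear.

```python
# Hypothetical "analysis artifacts" an evasive sample might look for.
ANALYSIS_ARTIFACTS = {"sandbox_agent.dll", "procmon.exe", "vbox_guest.sys"}

def plant_decoys(environment):
    """Defender: simulate the constant presence of analysis tooling."""
    return environment | ANALYSIS_ARTIFACTS

def evasive_malware(environment):
    """Malware: detonate only when no analysis tooling is visible."""
    if environment & ANALYSIS_ARTIFACTS:
        return "dormant"          # thinks it's being watched
    return "payload executed"

victim = {"notepad.exe", "chrome.exe"}
print(evasive_malware(victim))                # payload executed
print(evasive_malware(plant_decoys(victim)))  # dormant
```

The malware's own evasion logic becomes the defense: the same check that lets it slip past a real sandbox keeps it permanently inert on a decoy-laden endpoint.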

Since the entire concept is based on deception and decoys, its implementation has been made possible through a passive endpoint protection tool with a low memory footprint, which integrates and complements other security solutions installed on user devices.

Predicting the future

The traditional process in dealing with malware is to discover the threat, register the signature and subsequently deliver a definition update to endpoint protection tools. For attackers, breaking through this loop is as easy as modifying the malware code and recompiling it to create a totally new threat that has to be reprocessed in the discovery, definition and update delivery cycle. This is what MIT Technology Review’s David Cowan compares to antibiotic-resistant bacteria, which adapt to our defenses and render them obsolete.

The Symantec report counted more than 430 million new and unique pieces of malware in 2015, a 36 percent increase over the previous year. That's more than a million new pieces of malware written each day. However, what's worth noting is that more than 90 percent of new malware are in fact modified variants of old specimens, and even new, zero-day malware reuses elements and components of previous ones.

This shows that while recycling malware is easy, developing new malware from scratch is extremely difficult.

Cybersecurity startup CyActive took advantage of this fact to develop a solution that thwarts the plans of malware developers by preventing them from reusing previous code.

The solution relies on a predictive engine that uses bio-inspired algorithms and a deep understanding of hacker behavior to automatically forecast how current malware will be manipulated and modified in the future. This way, hundreds of thousands of malware derivatives are predicted and fed into detector systems that anticipate and prevent future attacks on network and endpoint devices. The only option left to cybercriminals is to create new malware, which is a painful and lengthy process.
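
The exploitable weakness here is code reuse itself. A toy stand-in for the idea (CyActive's actual engine is proprietary; the byte strings below are fabricated stand-ins for malware features): mutate a known sample the way a lazy attacker might, then show that n-gram similarity still links the variant back to the original family.

```python
def ngrams(data, n=4):
    """All contiguous n-byte substrings of a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a, b, n=4):
    """Jaccard similarity over byte n-grams: 1.0 = identical feature
    sets, 0.0 = nothing shared."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb)

known   = b"decrypt_stub|inject_proc|beacon_c2|persist_runkey"
variant = b"decrypt_stub|inject_proc|beacon_c2|persist_service"
fresh   = b"completely different tooling written from scratch!!"

print(round(similarity(known, variant), 2))  # high: recycled code
print(round(similarity(known, fresh), 2))    # near zero: genuinely new
```

A detector pre-seeded with predicted derivatives catches the cheap 90 percent of "new" malware, leaving attackers only the expensive path of writing everything from scratch.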

CyActive was acquired by PayPal in 2015 and is now using its security solution to secure PayPal networks and clients.

Fighting malware proactively

Malware is growing in number and sophistication. Our security solutions need to evolve in tandem in order to respond to the threats of the future. Proactive tools and approaches can help complement and strengthen current security solutions. They can also fill a large part of the gap left by human error, which accounts for the success of a large number of security incidents. As long as we're running behind malware, we'll never be safe.

Featured Image: Juan Gaertner/Shutterstock
Source: TechCrunch

How threat intelligence sharing can help deal with cybersecurity challenges

In the ever-shifting landscape of cyberthreats and attacks, having access to timely information and intelligence is vital and can make a big difference in protecting organizations and firms against data breaches and security incidents.

Malicious actors are getting organized, growing smarter and becoming more sophisticated, which effectively makes traditional defense methods and tools significantly less effective in dealing with new threats constantly appearing on the horizon.

One solution to this seemingly unsolvable problem is the sharing of threat intelligence in order to raise awareness and sound the alarm about new attacks and data breaches as they happen. This way we can prevent major security incidents from recurring and stop emerging threats from claiming more victims.

Threat intelligence sharing has risen in prominence, giving birth to initiatives such as the Cyber Threat Alliance, a conglomeration of security solution vendors and researchers that have joined forces to collectively share information and protect their customers. We’ve also seen government-led efforts, such as the Cybersecurity Information Sharing Act (CISA), which is meant to ease the way for businesses to join the threat information sharing movement.

The evolution of cyberthreat intelligence sharing is culminating in the development of platforms and standards that help organizations gather, organize, share and identify sources of threat intelligence. Cyberthreat intelligence is also shortening the useful lives of attacks and is putting a heavier burden on attackers who want to stay in business.

There’s still a long way to go, but the inroads made are already showing promising signs.

Dealing with constant changes in the threat landscape

Information gleaned from internal networks and virus definition repositories can serve as sources of threat intelligence, but much more needs to be done to deal with the constant stream of malicious IPs and domains, hacked and hijacked websites, infected files and phishing campaigns that are being spotted on the Internet.

“Today’s cyber threat landscape is polymorphic in nature — constantly changing and making it nearly impossible to detect with traditional security approaches,” says Grayson Milbourne, Security Intelligence Director at cybersecurity firm Webroot. The company’s 2016 Threat Brief found that 97 percent of the malware seen in 2015 was observed on only a single endpoint, and more than 100,000 new malicious IP addresses are launched every day.

“Given the evolution of malicious code and constantly changing environments, it’s critical that security controls adapt quickly and dependably,” Milbourne says, and he underlines the need to stay ahead of current threats and be able to predict future attacks, which can be achieved through the use of a collective threat intelligence ecosystem.

Many tech firms are now offering security solutions founded on the cyberthreat intelligence sharing concept. Webroot’s own proprietary intelligence sharing platform, BrightCloud, gleans threat intelligence from endpoints and combines it with input from security vendors to provide valuable real-time insights into threats and greater visibility into the behavior of an attack.

The threat intelligence sharing trend has led other leaders in the tech industry to adopt similar initiatives. Last year, IBM announced its own threat intelligence sharing initiative, X-Force Exchange, a cloud-based platform that extends the tech giant's decades-old security efforts and allows clients to share their own intelligence in order to accelerate the formation of the networks and relationships needed to fight hackers.

“This community-based approach enables security teams to associate and uniquely protect one another from threats in real-time,” Milbourne explains. “As soon as a threat is detected on one endpoint, all other endpoints using the platform are immediately protected through this collective approach to threat intelligence.”
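
The collective-protection mechanic reduces to a publish/subscribe pattern. A minimal sketch (hypothetical classes, not BrightCloud's or X-Force Exchange's API): when any endpoint reports an indicator, every subscribed endpoint starts blocking it.

```python
class ThreatFeed:
    """Shared intelligence channel all endpoints subscribe to."""
    def __init__(self):
        self.endpoints = []

    def subscribe(self, endpoint):
        self.endpoints.append(endpoint)

    def publish(self, indicator):
        # Fan the indicator out to every subscriber immediately.
        for ep in self.endpoints:
            ep.blocklist.add(indicator)

class Endpoint:
    def __init__(self, feed):
        self.blocklist = set()
        feed.subscribe(self)

    def detect(self, feed, indicator):
        feed.publish(indicator)   # one detection protects everyone

feed = ThreatFeed()
laptops = [Endpoint(feed) for _ in range(3)]
laptops[0].detect(feed, "198.51.100.7")   # a reserved documentation IP
print(all("198.51.100.7" in ep.blocklist for ep in laptops))  # True
```

The value scales with membership: each new subscriber both contributes detections and benefits from everyone else's, which is what makes the community approach hard for attackers to outrun.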

Overcoming the challenges of threat intelligence sharing

Threat intelligence sharing comes with its own caveats and presents a few challenges. “In many cases,” says Jens Monrad, Consulting System Engineer at cybersecurity firm FireEye, “organizations end up with a lot of data, sometimes just raw, unevaluated data, which end up adding an extra burden to their security team, increasing the number of events and alerts rather than decreasing it.”

Collaboration between industry peers can help improve the relevance and quality of the shared intelligence, because threats and attacks are often targeted at specific sectors such as finance, banking or retail. This way, industry leaders can better understand the threat landscape and gain insights into practices deployed by others in the industry to better safeguard their own organizations.

Instances of industry-level threat sharing efforts include the recent launch of a portal for ICS/SCADA threat sharing among nations, which took place in the aftermath of the unprecedented cyberattack against Ukraine’s power grid.

FireEye has implemented this model with its Advanced Threat Intelligence Plus platform, which enables clients to develop threat sharing communities with trusted partners. The cybersecurity firm recently partnered with Visa to develop a joint threat intelligence initiative for Visa’s customers, which focuses on cyberthreats toward Visa and its customers.

Business, privacy and legal concerns are also proving to be barriers to threat information sharing efforts. As Scott Simkin, Senior Threat Intelligence Manager at Palo Alto Networks, points out in an op-ed, security vendors have previously been loath to share information for fear of losing their competitive edge, private companies fear inadvertently sharing sensitive customer information and government agencies have strict controls on the information they share.

Some of these issues can be dealt with through the use of standards such as STIX, TAXII and CybOX, a set of freely available specifications that standardize threat information and help automate the exchange of indicators of compromise (IOCs) and other relevant data without leaking personally identifiable information (PII).

The CISA legislation has also helped overcome challenges by lifting some of the liabilities firms and organizations would otherwise be exposed to if they shared data about security incidents.

As for the business side of things, the sheer number of new threats that are being identified on a daily basis is slowly convincing vendors that sharing threat intelligence may prove to be the only way they can protect their interests.

Beyond threat intelligence sharing

The evolution of the cyberthreat landscape has reached a point where it is beyond any individual or organization to defend themselves and their interests against the ever-shifting array of threats. “It is only a matter of when they will become victims of cyber attacks — not if,” says Chris Doggett, SVP of Global Sales at Carbonite.

This issue can only be addressed through a pooling of efforts that expands beyond the disciplines involved in dealing with cyberthreats, Doggett suggests, which should include “sharing cyber threat intelligence, collaborating to minimize vulnerabilities, gaining consensus on global standards for acceptable conduct in cyberspace, and international cooperation to enforce local laws and international standards.”

This is an approach that has recently been put to the test in fighting the rising threat of ransomware, which has been growing at an explosive rate and is causing millions of dollars in damage to victims. A collective effort is being led by government agencies, cybersecurity firms and law enforcement to provide effective protection from ransomware, offer recovery solutions, and disarm and apprehend the criminals behind the attacks.

On the protection level, tech companies are constantly sharing information about ransomware attacks to better understand how to avoid it and improve the efficacy of security and anti-malware tools. In tandem, efforts are being led to improve data protection and recovery solutions, such as cloud backups and data integrity tools, and security firms are working on solutions to crack the encryption algorithms of specific types of ransomware and disarm them for good.

Security researchers are also collaborating with regional and national law enforcement agencies to track and arrest the cybercriminals involved. An example of such efforts is Kaspersky Lab’s cooperation with the Netherlands Tech Crime Unit to apprehend the individuals behind the CoinVault and BitCryptor campaigns.

Carbonite is working to develop its own proprietary tools to help track malware attacks and respond to them faster and more effectively. “Based on the data we have gleaned, research, and the information sharing with others in this space,” says Doggett, “we are now in a position to participate actively from a thought leadership perspective and do our part to arm all users and organizations with knowledge and tools which we believe will allow them to avoid becoming victims of ransomware attacks in the future.”

Sharing is caring

Cybercriminals have been sharing knowledge, tools and experience for a long time, which has contributed to their success in staging major data breaches over the past months and years. It's long past time for the tech community to follow suit and team up to improve general security and mitigate threats to individuals and organizations.

Threat intelligence sharing is already helping detect threats in real time and protect users from malicious encounters. It should become an essential aspect of any organization’s security program if we are to deal with the threats of the future.

Featured Image: Bryce Durbin
Source: TechCrunch