Put to the test: Why vendors shouldn’t shy away from attack testing

IoT testing can be a complex process and as a result many vendors aren’t yet onboard with it. Concerns over their intellectual property, the level of commitment required and how to interpret and act upon the results deter many from embarking upon breakpoint testing.

But, as Andrew Tierney, consultant at Pen Test Partners, says, in the long run the process is beneficial, giving the vendor the opportunity to correct issues that could compromise the brand.

Unique infrastructure

Nearly all of the published research on IoT vulnerabilities focuses on the device itself and on techniques for attacking it. But a real-world IoT system is far more complex. There are the devices, the operating system and software that run on them, the mobile application, the servers and the build on those servers, to name but a few components. Compounding this, the devices can be placed in physically exposed locations and on potentially hostile networks that you have no control over. They are installed by people with no networking knowledge. And the painful fact is that you have placed your system directly in the hands of the attacker. This is very, very different to normal infrastructure IT.

There are three methodologies that can be used to test IoT systems, each with their own advantages. Black box testing sees the testers approach the system as real-world attackers. The only knowledge they have is what is publicly available. Often, the testing will focus on recovering firmware or rooting the device to obtain information about how the system operates, including APIs. This can be crucial in finding serious systemic issues. It tends to be time-boxed rather than task-driven and the testing will flow in an organic manner, following paths most likely to yield vulnerabilities.
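To make the firmware-recovery step concrete, here is a minimal Python sketch of the kind of triage a black box tester might run on a dumped firmware image: scanning the raw binary for hardcoded credentials, embedded private keys and API endpoints. The filename and search patterns are illustrative assumptions, not anything specified in the article.

```python
import re
from pathlib import Path

# Illustrative things a tester might look for in a recovered image.
PATTERNS = {
    "url": rb"https?://[\x21-\x7e]+",
    "private_key": rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----",
    "password": rb"(?i)passw(?:or)?d\s*[=:]\s*[\x21-\x7e]{4,32}",
}

def triage_firmware(path: str) -> dict:
    """Scan a raw firmware blob for strings that hint at systemic issues."""
    blob = Path(path).read_bytes()
    return {name: re.findall(pattern, blob) for name, pattern in PATTERNS.items()}

if __name__ == "__main__":
    # "firmware.bin" is a placeholder for an image dumped from the device.
    for category, hits in triage_firmware("firmware.bin").items():
        print(f"{category}: {len(hits)} hit(s)")
        for hit in hits[:5]:  # show a small sample per category
            print("  ", hit[:80])
```

Findings such as a hardcoded API endpoint or a shared private key are exactly the systemic issues described above, because they tend to affect every deployed device at once.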

Alternatively, white box testing sees the testers given access to design documentation, specifications, data sheets, schematics, architectural diagrams, firmware and even, potentially, source code. Using this knowledge, they attack the system. Unlike black box testing, it can be task-driven, as the open access to documentation allows the tester to develop a plan before testing starts.

Between the two is grey box testing, in which some information is provided, avoiding unnecessary time being wasted on reverse engineering. A typical scenario might involve a period of black box testing which, if it fails to yield access to the device or firmware, leads to “break glass” access, at which point grey box testing continues. Grey box testing often offers some of the best results, providing confidence that a device built with defence-in-depth will withstand real-world attackers.

Debunking myths

Concerns over testing expressed by vendors include whether the test will lead to a compromise so extreme that their product is pushed back to the drawing board. In reality, tests tend to discover vulnerabilities which, once fixed, prevent mass compromise, stopping the kind of take-down achieved by proof-of-concept hacks such as the Miller and Valasek Jeep attack.

Will testing find all the issues? That’s unlikely, but white box testing will nearly always find more issues than black box testing. Should you fix even low […]


Gesture Control Wants to Move Us Away from Our Keyboards

Anyone who likes to binge-watch TV while cooking knows the pain of having to stop kneading dough to pause a show or move to the next episode. Enter Bixi, a device from French startup Bluemint Labs that connects with iOS or Android phones and tablets via Bluetooth LE.

However, Bixi does more than just control smartphones. It can operate a GoPro camera and adjust connected lighting or other smart home devices through gesture control. There’s also a built-in microphone and support for Amazon’s Alexa, meaning it can accept voice commands too. Bixi currently supports eight gestures, with the intention of adding more as people get comfortable with the device. Its sensors easily differentiate between horizontal, diagonal, and vertical swipes.
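The article doesn’t publish Bixi’s Bluetooth profile, but as a rough illustration of how a companion app typically discovers a Bluetooth LE accessory like this, here is a hedged Python sketch using the cross-platform bleak library. The “Bixi” advertised-name filter is an assumption for illustration only.

```python
import asyncio

from bleak import BleakScanner  # pip install bleak

async def find_gesture_device(name_hint: str = "Bixi"):
    """Scan nearby BLE advertisements and return a likely match, if any.

    The name hint is an illustrative assumption; a real app would match
    on the vendor's documented advertised name or service UUIDs.
    """
    devices = await BleakScanner.discover(timeout=5.0)
    for device in devices:
        if device.name and name_hint.lower() in device.name.lower():
            return device
    return None

if __name__ == "__main__":
    device = asyncio.run(find_gesture_device())
    print(f"Found {device.name} at {device.address}" if device else "No match")
```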

I spoke to Chief Marketing Officer Pierre-Hughes Davoine to find out more at IFA 2017. He explained that the company is in conversation with several automotive companies and OEMs about integrating its technology into their products. Bluemint Labs also intends to release the Bixi app’s API to developers, who will be able to add new use cases to the main app via in-app purchases.

It’s not the first time this technology has been proposed. Car insurer Ingenie released research this year predicting the functionality of the cars of the future. It forecast that keys will be eschewed in favour of a fingerprint sensor, iris scanner or other biometric system that identifies you as you walk up and opens the door. The windows will have AR capabilities and embedded touchscreens. Some driving functions will be carried out through gesture controls and voice activation instead of buttons and a steering wheel.

Kinemic brings writing to the air

Founded in March 2016, German startup Kinemic takes Bixi’s ideas several steps further, offering not only gesture control but also the ability to write in the air (as if signing your name, perhaps) and click with an ‘air mouse.’ I’ve seen a demonstration of the writing capability in person and it’s something quite wonderful. Kinemic enables gesture control of digital devices such as PCs, smartphones, wearables and AR glasses. Its focus is on industrial customers, who can use the technology to make their processes safer, more ergonomic and faster. The company has run pilots in the pharmaceutical and automotive sectors and won a place in the Deutsche Bahn (Germany’s national railway) MindBox accelerator in July this year, giving it hands-on access to the railway sector.

MYO Armbands

A slightly earlier application is the MYO armband from Canada-based Thalmic Labs, which uses, as the name implies, electromyography, a sensor technology typically used in medicine, to pick up electrical impulses from muscles. These impulses allow users to control computers, toys, and other devices. The armband communicates with whatever it’s paired with over Bluetooth 4.0 Low Energy.
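As a rough sketch of how EMG-based gesture detection can work, the Python snippet below thresholds the root-mean-square energy of an eight-channel signal window to flag muscle activation. The channel count mirrors the MYO’s eight sensor pods, but the simulated signal, window size and threshold are invented for illustration and are not Thalmic Labs’ actual algorithm.

```python
import numpy as np

CHANNELS = 8      # the MYO armband has eight EMG sensor pods
WINDOW = 50       # samples per analysis window (illustrative)
THRESHOLD = 0.3   # activation level, tuned per user (illustrative)

def rms(window: np.ndarray) -> np.ndarray:
    """Root-mean-square energy per channel over one window."""
    return np.sqrt(np.mean(np.square(window), axis=0))

def detect_activation(window: np.ndarray) -> bool:
    """Flag a gesture when several channels show strong muscle activity."""
    return int(np.sum(rms(window) > THRESHOLD)) >= 3

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rest = rng.normal(0.0, 0.05, size=(WINDOW, CHANNELS))  # quiet muscles
    fist = rng.normal(0.0, 0.50, size=(WINDOW, CHANNELS))  # strong activity
    print("rest:", detect_activation(rest))  # expected: False
    print("fist:", detect_activation(fist))  # expected: True
```

A real system would go further, classifying the per-channel activation pattern into specific gestures; the point here is simply that the raw material is electrical muscle activity rather than camera input.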

The company offers SDKs for Windows, Mac, iOS, and Android. Keen developers have already built a plethora of use cases, ranging from surgical applications to controlling drones, and there’s even a marketplace for the resulting apps. Amazon’s Alexa Fund invested in Thalmic Labs’ Series B round last fall, although it’s unclear what the company will focus on next.

As companies work to move us beyond our smartphones, they are fundamentally changing the way we interact with devices. With voice activation becoming mainstream, it’s only a matter of time before gesture control makes its big splash.


Does big data today keep the doctor away?

[Image: DNA sequence in blue]

There’s a small cadre of highly skilled big data professionals and doctors who are leveraging technology to help you live a longer, healthier life. Armed with mountains of government-funded genomic data sets along with mature and easily accessible analytics tools, these technicians and doctors are building apps, tools, and systems which can help you diagnose and treat illnesses ranging from common to catastrophic.

Leading that charge is Dexter Hadley, unique in that he is both an engineer and a doctor. Dexter runs the Hadley Lab, a big data laboratory at UCSF Health that develops tech to fight disease and promote health, with a mandate to derive value from the mountains of clinical data that UCSF continually generates. With a research background in genomics and clinical training in pathology, Dexter likes to quip that he “uses big data to practice medicine.”

We got a chance to ask Dexter about the innovations born at the intersection of technology and medicine, and about how the democratization of technology is impacting people’s lives.

So first off, people are probably wondering why and how you became both a doctor AND an engineer?

I have always wanted to be a doctor, but my trajectory changed dramatically when I taught myself to program computers at the age of 10. Since then, I have been obsessed with how to leverage computation to better facilitate medicine. That journey took me from an undergraduate education focused on computer programming to medical school at the University of Pennsylvania, where I earned a master’s degree in engineering, a Ph.D. in genomics, and an MD for good measure.

Through stints practicing medicine, first as an intern in general surgery at Penn and later as a resident in pathology at Stanford, I developed a passion as a physician-scientist for integrating medicine and software engineering in order to improve the delivery of healthcare for doctors and their patients.

So, what does the Hadley Lab do and how do you contribute?

The Hadley Laboratory leverages big data to improve the practice of medicine and the delivery of healthcare.  Our work generates, annotates, and ultimately reasons over large and diverse data stores to better characterize disease. We develop state-of-the-art data-driven models of clinical intelligence that drive clinical applications to more precisely screen, diagnose, and manage disease. We integrate multiple large data stores to identify novel biomarkers and potential therapeutics for disease.

The end point of our work is rapid proof-of-concept clinical trials in humans that translate into better patient outcomes and reduced morbidity and mortality across the disease spectrum. I’m an equal-opportunity scientist: I care less about the best disease I can study and more about what disease I can study best; it’s all driven by the data.

And what would you say is the present, future, and ideal state of R&D in this area?

At present, I think we are experiencing a continued renaissance of medicine that started with the initial sequencing of the human genome well over a decade ago. Now, we are finally in a position to actually quantify human health and disease in “precision medicine,” a fundamentally different approach to healthcare research and its delivery where our focus is on identifying and correcting individual patient differences rather than making broader generalizations. 

While genomics allows us to quantify our molecular self, I think the future is in leveraging all the technology at our fingertips today to better quantify our physical self. As the power of genomics lies in its objective ability to correlate with physical manifestations in the patient, the ideal state of R&D must involve data collection and analysis at both the molecular genotypic level and the more clinical phenotypic level of the patient. 

For instance, in the context of a health system, my research integrates large clinical data stores with state-of-the-art big data algorithms, smartphones, web and mobile applications, etc. to first discover and then deliver precision medicine to patients.

Sounds like a big part of that future is genomics?

Genomics is indeed the future, except it’s clearly more complicated than we initially thought. Most doctors don’t sit around looking at their patients’ genomic data to develop treatment plans. However, some specialists, such as radiologists and pathologists, look at images all day long. We have technology and algorithms today that allow us to build ‘apps’ that can help these specialists.

For instance, we are working on a mobile medical app for doctors and their patients to use smartphones to better screen for skin cancer. However, while digital health apps on smartphones represent a convenient screen for skin cancer, the actual diagnosis and subsequent management of skin cancer remains within the genomics realm.

So, diagnosis is where the need is right now?

The practice of medicine involves screening a general population and diagnosis of suspected cases before intervention on a specific patient. Much of precision medicine research has focused on diagnosis and intervention phases, with less focus on screening. My focus currently is using powerful big data algorithms for population screening of healthy individuals through digital apps. While “anybody” can build an app these days, not everybody has the knowledge, data, and access to the clinical infrastructure to develop clinical-grade algorithms for doctors and their patients.

How big of an impact is the “democratization of technology” having on this space?

About six years ago, Marc Andreessen penned a WSJ editorial laying out the case for “Why Software Is Eating The World.” How does the average person shop today? Or bank? Or trade stocks? Or find a taxi? Mainly through innovative “apps” that we have come to depend on. I think this phenomenon will inevitably percolate into our medical world, where we now have all the ingredients to do magical things with tech: cheap computation, awesome algorithms, and tons of big data that we continue to generate at breakneck speed in clinical medicine.

For instance, at UCSF Health we have literally billions of clinical records covering almost a million patients, and those records must hold the keys to practicing better medicine. If you think about it, the average clinical trial to prove the efficacy of an intervention is practically limited to the order of hundreds of patients because of time and monetary constraints.

Our modern health systems therefore allow for the largest, most appropriately powered clinical trials for rapid discovery of novel medical interventions. I think that building clinical-grade apps on this big data allows us to put the innovative discovery power of our health systems directly into the hands of physicians and their patients.

What would that involve, “building a clinical-grade app”?

Building the app is actually the least rigorous part of the process, as the ‘clinical-grade’ performance comes from the algorithms we develop that underlie the app’s interface. The magic of what we are doing lies in learning patterns from the big data we generate in healthcare. Deep learning is one such method, a paradigm shift towards ‘cognitive computing’ in which computers are essentially trained to think like humans.

Deep learning on big data represents state-of-the-art machine learning today and repeatedly outperforms other more traditional methods. Data is the key piece of this process because these deep learning algorithms are incredibly complex. While much of statistics is based on linear models whose parameters can be accurately estimated with only a few data points, some of the most sophisticated deep learning algorithms have more parameters to estimate than there are atoms in the universe. 

Therefore, useful deep learning requires big data to accurately estimate parameters that are most predictive.
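To ground the parameters-versus-data-points argument, here is a small Python sketch, using PyTorch and torchvision (a framework choice of ours, since the interview names none), that counts the trainable parameters of a plain linear classifier against those of ResNet-18, a comparatively modest modern deep network. The 224×224 image size and two-class benign/malignant setup are illustrative.

```python
import torch.nn as nn
from torchvision import models

def count_parameters(model: nn.Module) -> int:
    """Total trainable parameters, each of which must be estimated from data."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Logistic-regression-style classifier over raw 224x224 RGB pixels, two classes.
linear = nn.Linear(3 * 224 * 224, 2)

# ResNet-18 with randomly initialised weights, a small modern deep network.
resnet = models.resnet18(weights=None)

print(f"linear classifier: {count_parameters(linear):,} parameters")  # ~301k
print(f"ResNet-18:         {count_parameters(resnet):,} parameters")  # ~11.7M
```

Every one of those millions of parameters has to be estimated from labeled examples, which is why clinical-grade deep learning leans so heavily on the big data that health systems already generate.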

Let’s say one of our readers is interested and wants to develop this app for you. What would you share with them to help get them started?

I would definitely encourage them to reach out directly to me through my website. I’m also a member of the Institute for Computational Health Sciences at UCSF, which is dedicated to advancing computational health sciences in research, practice, and education in support of Precision Medicine for all.

If any readers are interested in contributing to the project, you can reach Dexter at dexter.hadley@ucsf.edu.

This article was produced in partnership with Western Digital.
