Let’s talk about machine learning at the edge

ARM believes its architecture for object detection could find its way into everything from cameras to dive masks. Slide courtesy of ARM.

You can’t hop on an earnings call or pick up a connected product these days without hearing something about AI or machine learning. But for all the hype, we are also on the verge of a change in computing as profound as the shift to mobile a little over a decade ago. In the last few years, the early results of that change have started to emerge.

In 2015, I started writing about how graphics cores—like the ones Nvidia and AMD make—were changing the way companies were training neural networks for machine learning. A huge component of the improvements in computer vision, natural language processing, and real-time translation has been the massively parallel processing that graphics processors offer.

Even before that, however, I was asking the folks at Qualcomm, Intel, and ARM how they planned to handle the move toward machine learning, both in the cloud and at the edge. For Intel, this conversation felt especially relevant, since it had completely missed the transition to mobile computing and had also failed to develop a new GPU that could handle massively parallel workloads.

Some of these conversations were held in 2013 and 2014. That’s how long the chip vendors have been thinking about the computing needs of machine learning. Yet it took ARM until 2016 to purchase a company with expertise in computer vision, Apical, and only this week did it deliver a brand-new low-power architecture for machine learning.

Intel bought its way into this space with the acquisition of Movidius and Nervana Systems in 2016. I still don’t know what Qualcomm is doing, but executives there have told me that its experience in mobile means it has an advantage in the internet of things. Separately, in a conference call dedicated to talking about the new Trillium architecture, an ARM executive said that part of the reason for the wait was a need to see which workloads people wanted to run on these machine learning chips.

The jobs that have emerged in this space appear to focus on computer vision, object recognition and detection, natural language processing, and hierarchical activation. Hierarchical activation is where a low-power chip recognizes that a condition has been met and then wakes a more powerful chip to respond to it.
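
To make that concrete, here’s a rough sketch of the hierarchical activation pattern in Python. Everything in it is illustrative: the frame-differencing check, the threshold, and the stand-in detector are my own placeholders, not any vendor’s actual pipeline.

```python
import numpy as np

MOTION_THRESHOLD = 12.0  # tuning parameter; purely illustrative

def cheap_motion_check(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
    """Low-power stage: mean absolute pixel difference between frames."""
    return np.abs(frame.astype(float) - prev_frame.astype(float)).mean() > MOTION_THRESHOLD

def expensive_object_detector(frame: np.ndarray) -> list[str]:
    """High-power stage: stand-in for a real detector on a bigger core."""
    return ["person"]  # placeholder result

def process_stream(frames):
    prev = next(frames)
    for frame in frames:
        if cheap_motion_check(prev, frame):            # stage 1: always on, sips power
            labels = expensive_object_detector(frame)  # stage 2: woken only on demand
            print("detected:", labels)
        prev = frame
```

The point of the pattern is the gate: the expensive stage runs only when the cheap stage says something interesting is happening, which is what keeps the average power draw tiny.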

But while the traditional chip vendors were waiting for the market to tell them what it wanted, the big consumer hardware vendors, including Google, Apple, Samsung—and even Amazon—were building their own chip design teams with an eye to machine learning. Google has focused primarily on the cloud with its Tensor Processing Units, although it did develop a special chip for image processing for its Pixel phones. Amazon is building a chip for its consumer hardware using tech from its 2015 acquisition of Annapurna Labs and its purchase of Blink’s low-power video processing chips back in December.

Some of this technology is designed for smartphones, such as Google’s visual processing core. Even Apple’s chips are finding their way into new devices (the HomePod carries an Apple A8 chip, which first appeared in the iPhone 6). But others, like the Movidius silicon, use a design that’s made for connected devices like drones or cameras.

The next step in machine learning at the edge will be silicon built specifically for the internet of things. These devices, like ARM’s, will focus on machine learning at drastically reduced power consumption. Right now, the training of neural networks happens mostly in the cloud and requires massively parallel processing as well as super-fast I/O. Think of I/O as how quickly the chip can move data between its memory and its processing cores.

But all of that is an expensive power proposition at the edge, which is why most edge machine learning jobs are just the execution of an already-trained model, or what is called inference. Even in inference, power consumption can be reduced with careful design. Qualcomm makes an image sensor that requires less than 2 milliwatts of power and can run roughly three to five computer vision models for object detection.
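
For a sense of what that inference step looks like in software, here’s a minimal sketch using TensorFlow Lite, the kind of runtime that executes pre-trained models on constrained devices. The model file name is a placeholder; any quantized image-classification model would slot in.

```python
import numpy as np
import tensorflow as tf

# Load a pre-trained model; "model.tflite" is a placeholder path.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fabricate an input tensor of the right shape and dtype for the demo;
# on a real device this would be a camera frame.
shape = input_details[0]["shape"]
dummy_input = np.zeros(shape, dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()  # run inference: no training, just executing the model
scores = interpreter.get_tensor(output_details[0]["index"])
print("top class:", int(np.argmax(scores)))
```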

But edge workloads might also come to include some training, thanks to improved silicon and even better machine learning models. Movidius and ARM are both aiming to let some of their chips actually train at the edge. This could help devices in the home learn new wake words for voice control or, in an industrial setting, build models for anomalous event detection.
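
Here’s an illustrative sketch of what lightweight on-device learning can look like: a streaming anomaly detector that learns a sensor’s normal range as data arrives, using Welford’s online algorithm, without ever shipping raw readings to the cloud. The class and its threshold are my own toy construction, not Movidius’s or ARM’s method.

```python
class StreamingAnomalyDetector:
    """Learns the normal range of a sensor online, flags outliers."""

    def __init__(self, sigmas: float = 4.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.sigmas = sigmas  # how many standard deviations counts as anomalous

    def update(self, x: float) -> bool:
        """Fold in a new reading; return True if it looks anomalous."""
        if self.n > 10:  # only judge once we have a baseline
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) > self.sigmas * std:
                return True  # anomalies are not folded into the baseline
        # Welford's online update of mean and variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return False
```

The model here never leaves the device, which is exactly the privacy property discussed below.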

All of which could have a tremendous impact on privacy and the speed of improvement in connected devices. If a machine can learn without sending data to the cloud, then that data could stay resident on the device itself, under user control. For Apple, this could be a game-changing improvement to its phones and its devices, such as the HomePod. For Amazon, it could lead to a host of new features that are hard-coded in the silicon itself.

For Amazon in particular, this could even raise a question about its future business opportunities. If Amazon produces a good machine learning chip for its Alexa-powered devices, would it share it with other hardware makers seeking to embrace its voice ecosystem, in effect turning Amazon into a chip provider? Apple and Google likely won’t share. And Samsung’s chip business already serves both its own gear and outside customers, so I’d expect its edge machine learning chips to find their way into the world of non-Samsung devices.

For the last decade, custom silicon has been a competitive differentiator for tech giants. What if, thanks to machine learning and the internet of things, it becomes a foothold for a developing ecosystem of smart devices?


JV Utilises Deep Learning & Edge Computing To Make Factories Smart

Hitachi is aiming to use deep learning and edge computing technologies to make machines intelligent and improve productivity. In pursuit of this, it has formed an automation joint venture with industrial robot and factory automation company Fanuc and AI startup Preferred Networks.

The joint venture, Intelligent Edge System, will apply AI technologies in the social and industrial infrastructure field. It will develop fast, real-time control systems for network-connected industrial robots and machine tools. These control systems will leverage deep learning to become smarter over time as linked machines manufacture products.

Preferred Networks will use its deep learning technology to process information more efficiently and speed up data analysis. The hope is that this will boost production line productivity and allow robots to recognize objects and adjust their movements accordingly. Robots will also be able to automatically take over the task of an adjacent robot on the production line if it breaks down.

Edge computing will help by handling tasks at the edge of the network instead of processing data centrally. This will let machines on the production line process huge amounts of data, such as the movements of mechanical arms, on the spot.

Preferred Networks has already applied its AI expertise in work for Toyota Motor and Nippon Telegraph & Telephone. Toyota invested in the startup to develop autonomous vehicles that can learn various driving conditions by processing data themselves rather than relying on cloud computing.

http://www.hitachi.com/New/cnews/month/2018/01/180131f.pdf


AI and Machine Learning will Facilitate Monitoring of Blood Pressure

ROHM and Equate Health have partnered to launch a cuffless optical blood pressure monitor that integrates ROHM’s second-generation heart rate sensor with Equate Health’s AI and machine learning tools. Traditional sphygmomanometers block the artery with an inflated cuff and take their measurement as the pressure is released.

This solution instead monitors blood pressure using ROHM’s BH1792GLC optical heart rate sensor, which consumes just 0.44mA of current during measurement. The sensor offers a 1024Hz sampling rate and an integrated optical filter, which help maintain accuracy while enabling low-power pulse detection. The sensor data is then processed by Equate Health’s targeted algorithms, which make use of AI and machine learning tools.

The companies believe this solution will solve the issue of non-compliance among patients and simplify measurement. It will also help patients who are unable to access clinical providers or equipment for determining their blood pressure, while those suffering from labile hypertension will be able to benefit from ambulatory blood pressure monitoring.

The platform employs data analytics and machine learning to deliver health-related insights by analyzing key trends, patterns, and correlations, drawing on clinical evidence, behavioral science, and user experience research. Machine learning and predictive analytics help the mobile- and cloud-based platform deliver optimal information about a patient’s health.



Analysis: Machine learning has much to teach utilities companies

Machine learning techniques look set to transform the way that utilities companies predict customer usage and production capacity in the years ahead. 

Utilities take note: when it comes to analyzing data, machine learning could be your best bet for achieving new insights, far outstripping other methods in effectiveness, according to Machine Learning for the Digital Utility, a new report published by analyst company Navigant Research.

While machine learning has existed in parts of the ‘utility value chain’ for years, various drivers are expected to increase its use in other parts of the business, the report says. In particular, it has several advantages over other approaches when it comes to customer segmentation, pricing forecasts, anomaly detection, fraud detection, and predictive maintenance. Basically, it’s about jobs that use the analytic processes of clustering, regression, and classification.
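
As a concrete example of the clustering job, here’s a short scikit-learn sketch that segments customers by the shape of their daily load profile. The data is synthetic and the two customer archetypes are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
hours = np.arange(24)

# Two made-up usage archetypes: evening-peak and office-hours customers.
night_owls = 1.0 + np.sin((hours - 18) / 24 * 2 * np.pi) + rng.normal(0, 0.1, (50, 24))
nine_to_five = 1.0 + np.sin((hours - 9) / 24 * 2 * np.pi) + rng.normal(0, 0.1, (50, 24))
profiles = np.vstack([night_owls, nine_to_five])  # 100 customers x 24 hourly readings

# Cluster customers by load-profile shape; k=2 because we built two groups.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print("customers per segment:", np.bincount(segments))
```

With real smart-meter data, the same handful of lines gives a utility segments it can price and market to differently.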

“The utilities industry is already using self-learning algorithms, particularly in the field of asset monitoring and predictive maintenance, and several reasons suggest the use of machine learning will expand to many more use cases and its adoption will accelerate,” says Stuart Ravens, principal research analyst at Navigant.


Learning how to integrate renewables

As we see it at Internet of Business, this could be of particular interest to utilities struggling to integrate renewable energy with more traditional sources of power supply. Wind and solar output is erratic: it’s hard to predict how much energy a utility can harness this way unless it knows exactly how long and how hard the wind will blow or the sun will shine.

Here, machine learning could provide some insight, enabling utilities to better predict renewable production and integrate it with other forms of supply. That’s the theory, at least, and it’s one that software company Powel has tested with a large utility in Norway, along with analytics specialist Swhere, on a project that applied machine learning to wind forecasting.

This involved answering three questions: When does wind occur, how powerful is it and in what direction does it blow? According to Swhere founder Dr Ernst van Duijn, the goal was to use machine learning algorithms to detect patterns in the wind that could lead to deeper insight into production capability. The results suggest it’s possible to reduce uncertainty in wind power production by more than 45% and, as a result, cut the costs of penalties that are applied when a utility is unable to meet its commitment to provide a certain amount of renewable energy.
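
The Powel/Swhere models themselves aren’t public, but the general shape of the forecasting task looks something like this sketch: fit a regression model that maps wind speed and direction to power output. The data and the cube-law power curve here are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
speed = rng.uniform(0, 25, 2000)       # wind speed, m/s
direction = rng.uniform(0, 360, 2000)  # wind direction, degrees

# Synthetic target: power rises with speed cubed, capped at turbine rating.
power = np.clip(0.5 * speed**3, 0, 2000) + rng.normal(0, 50, 2000)  # kW

# Encode direction as sin/cos so 359 degrees sits next to 1 degree.
X = np.column_stack([speed, np.sin(np.radians(direction)), np.cos(np.radians(direction))])
X_train, X_test, y_train, y_test = train_test_split(X, power, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```

Tighter versions of this kind of forecast are what let a utility shrink the uncertainty band on its production commitments, and the penalties that come with missing them.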


Learning how to digitally disrupt

Machine learning may also be an enabler of entirely new utilities companies: ‘disrupters’ looking to topple incumbent providers. Take Drift, for example, a startup power utility based in Seattle that is using machine learning, among other technologies, to offer customers cheaper wholesale energy prices by predicting their consumption more accurately.

Fortunately, machine learning technologies have never been more accessible to utilities of all types and sizes. “During the past decade, it has become easier for companies to deploy machine learning thanks to falling costs, new technological advancements, a softening of conservative attitudes and a fresh approach to analytics procurement,” Ravens adds.

That’s good news because, for utilities, machine learning may not just be what they need to thrive in the digital age. It may even be what they need to survive.




Google unveils Cloud AutoML for easy-to-use machine learning

Tech giant Google has unveiled Cloud AutoML, a tool aimed at businesses that want to tap into the capabilities of machine learning but do not have the required expertise to do so.

Following on from Google’s move to let customers use image and language recognition tools that were initially exclusive to its own engineers, the company is now opening up its machine learning capabilities to end users. Unlike those earlier tools, however, Cloud AutoML lets companies upload their own industry-specific data.

The data can be uploaded to Google’s servers, and Cloud AutoML will then help end users create their own custom vision model. The initial data could be just a few dozen photos; even if these images have no labels, Google will review the custom instructions given by the customer and classify the images accordingly.
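
Google hasn’t published AutoML’s internals, and the sketch below is not its API, but the transfer-learning recipe a service like it automates looks roughly like this in Keras: freeze a backbone pre-trained on ImageNet and fit a small classification head on a handful of labeled images.

```python
import tensorflow as tf

# Pre-trained vision backbone; we keep its learned features fixed.
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

# Small trainable head for the customer's own categories (3 is arbitrary).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_images: (N, 224, 224, 3) floats in [0, 1]; train_labels: (N,) ints.
# With only a few dozen images per class, a handful of epochs is enough:
# model.fit(train_images, train_labels, epochs=5)
```

Because the backbone already knows generic visual features, the customer only supplies the small, domain-specific part, which is why a few dozen photos can be enough.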

Read more: Rolls-Royce navigates Google deal on machine learning for shipping

Quality and throughput

According to Google, the customer “will get training data with the same quality and throughput Google gets for its own products, while your data remains private”. The custom model can then be applied to understanding a business’s new data.

The key benefit for customers is that they don’t require data science or deep learning expertise – two notoriously difficult skillsets to acquire – in order to exploit machine learning capabilities.

“Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI,” said Fei-Fei Li, chief scientist for Google Cloud AI.

“There’s a very limited number of people that can create advanced machine learning models. And if you’re one of the companies that has access to ML/AI engineers, you still have to manage the time-intensive and complicated process of building your own custom ML model,” she said.

Companies that have already piloted the technology include Disney Consumer Products and Interactive Media, and Urban Outfitters.


The benefits of pre-trained models

According to Rob Bamforth, analyst at IT advisory company Quocirca, approaches like Cloud AutoML are making it easier to employ pragmatic elements of AI with pre-trained models.

“It’s a slightly templated approach that works well in many areas of tech, whether it’s code fragments or pre-built document layouts. It helps get something working quickly, and often well-defined templates will be good enough to do the job, so it should be productive and useful,” he said.

However, while this can be picked up and used quickly, it still requires the appropriate skills to make the models really effective. Data scientists are still needed, but their skills will be applied differently, Bamforth explained.

