AI-powered smartphones will be a critical differentiator, says Gartner

A recent Gartner study finds that on-device artificial intelligence (AI) features will become a critical product differentiator for smartphone vendors, helping them acquire new customers while retaining current users. Over the next two years, AI solutions running on the smartphone will become an essential part of vendor roadmaps.

Gartner predicts that by 2022, 80 percent of smartphones shipped will have on-device AI capabilities, up from 10 percent in 2017. On-device AI is currently limited to premium devices; it provides better data protection and power management than fully cloud-based AI, since data is processed and stored locally.

In the report, Gartner has identified 10 high-impact uses for AI-powered smartphones to enable vendors to provide more value to their customers.

1)    “Digital Me” Sitting on the Device

Smartphones will be an extension of the user, capable of recognizing them and predicting their next move.

2)    User Authentication

Security technology combined with machine learning, biometrics and user behavior will improve usability and self-service capabilities.

3)    Emotion Recognition

The proliferation of virtual personal assistants and other AI-based technology for conversational systems is driving the need to add emotional intelligence for better context and an enhanced service experience.

4)    Natural-Language Understanding

Continuous training and deep learning on smartphones will improve the accuracy of speech recognition, while better understanding the user’s specific intentions.

5)    Augmented Reality (AR) and AI Vision

With the release of iOS 11, Apple included an ARKit feature that provides new tools to developers to make adding Augmented Reality (AR) to apps easier. One example of how AR can be used is in apps that help to collect user data and detect illnesses such as skin cancer or pancreatic cancer.

6)    Device Management

Machine learning will improve device performance and standby time. For example, with their many sensors, smartphones can better understand and learn users' behavior, such as when to use which app.

7)    Personal Profiling

Smartphones can collect data for behavioral and personal profiling. Users can receive protection and assistance dynamically, depending on the activity that is being carried out and the environments they are in (e.g., home, vehicle, office, or leisure activities).

8)    Content Censorship/Detection

Restricted content can be automatically detected. Objectionable images, videos or text can be flagged, and various notification alarms can be enabled.

9)    Personal Photographing

Smartphones will be capable of automatically producing beautified photos based on a user's individual aesthetic preferences.

10)    Audio Analytics

The smartphone's microphone can continuously listen to real-world sounds. On-device AI capabilities enable the phone to identify those sounds, then prompt users or trigger events.
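The last use case above boils down to classifying incoming audio and mapping recognized sounds to actions. A minimal sketch of that loop, with entirely hypothetical sound labels and a placeholder in lieu of a real on-device classifier:

```python
# Hypothetical sketch of an on-device audio-event loop: a classifier
# assigns a label to each audio frame, and a dispatch table maps
# labels to user-facing actions. Labels and handlers are illustrative,
# not taken from any real product.

SOUND_ACTIONS = {
    "glass_breaking": "send security alert",
    "baby_crying": "notify parent",
    "doorbell": "show camera feed",
}

def classify_frame(frame):
    """Stand-in for an on-device ML classifier.

    A real implementation would run a small neural network over audio
    features; here we fake it with a lookup on the frame's dominant
    tag, purely for illustration.
    """
    return frame.get("dominant_sound", "unknown")

def handle_audio(frames):
    """Map each classified sound to a triggered event, if any."""
    events = []
    for frame in frames:
        label = classify_frame(frame)
        action = SOUND_ACTIONS.get(label)
        if action is not None:
            events.append((label, action))
    return events

# Example: two frames, one of which matches a known sound.
stream = [{"dominant_sound": "doorbell"}, {"dominant_sound": "traffic"}]
print(handle_audio(stream))  # [('doorbell', 'show camera feed')]
```

The key property for battery life and privacy is that both the classifier and the dispatch table live on the device; no audio leaves the phone.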

The post AI-Powered Smartphones will be critical differentiator, says Gartner appeared first on Internet Of Things | IoT India.

Google leads in AI-powered smartphones, says analyst firm

One in three smartphones sold this year uses artificial intelligence (AI) in the form of virtual assistant applications, such as Apple's Siri or Google Assistant, according to a new report from Strategy Analytics.

In Global Artificial Intelligence Technologies Forecast for Smartphones: 2010 to 2022, the research company's analysts point out that these are common in most high-end smartphones today. This year, some 93 percent of premium models priced over $300 come with a virtual assistant out of the box.

This penetration will quickly extend to lower-cost, even budget, smartphones, according to the market research firm’s analysts – largely down to the rise of Google Assistant. 

“Google has a narrow lead in total smartphones sold with onboard virtual assistants in 2017,” said Ville Ukonaho, senior analyst at Strategy Analytics. “That lead will only grow as Android smartphone sales, with Google Assistant onboard, continue to expand into lower price tiers.”

This has relevance to the IoT in general, as voice interfaces will increasingly become the way we interact with a vast range of smart devices, from our own connected cars, to smart home and smart building platforms, to machinery on manufacturing plant floors. In other words, where smartphones lead, other connected devices are likely to follow.

Read more: The voice of the warehouse worker

Speed of performance

As the technology becomes more widespread, 80 percent of smartphones costing over $100 will have artificial intelligence and virtual assistant technologies built in by 2020.

Increasingly, the speed at which they can interpret requests, accomplish tasks and return results will become key differentiators.

Right now, very little actual computation is conducted on the phone itself. Instead, it's cloud-based, which can result in slower response times, says Ville Ukonaho: "This requires a solid data connection, which isn't always available."

But smartphone technology evolves quickly, and advances in smartphone CPUs suggest that speeds and accuracy will better support AI-based virtual assistants in the near future. “A number of vendors have created more advanced processing engines or are combining the power from the CPU, GPU and DSP to form a subsystem capable of handling complex machine learning and other computational AI tasks,” said Strategy Analytics director Ken Hyers.

Software enhancements, too, will play their part, added Neil Mawston, executive director at the firm: “By combining software enhancements through machine learning and hardware enhancements in the form of AI engines, we can expect the abilities of virtual assistants to improve significantly over the next several years.

“This will result in increasingly responsive virtual assistants and more interactive experiences from the devices,” he said. 

Read more: Ocado launches Alexa app for voice-activated online shopping

The post Google leads in AI-powered smartphones, says analyst firm appeared first on Internet of Business.

In an AI-powered world, what are potential jobs of the future?

With virtual assistants answering our emails and robots replacing humans on manufacturing assembly lines, mass unemployment due to widespread automation seems imminent. But it is easy to forget amid our growing unease that these systems are not “all-knowing” and fully competent.

As many of us have observed in our interactions with artificial intelligence, these systems perform repetitive, narrowly defined tasks very well but are quickly stymied when asked to go off script — often to great comical effect. As technological advances eliminate historic roles, previously unimaginable jobs will arise in the new economic reality. We combine these two ideas to map out potential new jobs that may arise in the highly automated economy of 2030.

Training, supervising and assisting robots

As robots take on increasingly complex functions, more humans will be needed to teach robots how to correctly accomplish these jobs. Human Intelligence Task (HIT) marketplaces like MTurk and Crowdflower already use humans to train AI to recognize objects in images or videos. New AI companies, like Lola, a personal travel service, are expanding HIT with specialized workers to train AI for complex tasks. 

See also: How autonomous vehicles could lead to more jobs in Detroit

Microsoft’s Tay bot, which quickly devolved into tweeting offensive and obscene comments after interacting with users on the internet, caused significant embarrassment to its creators. Given how quickly Tay went off the rails, it is easy to imagine how dangerous a bot trusted with maintaining our physical safety could become if it is fed the wrong information or learns the wrong things from a poorly designed training set. Because the real world is ever-changing, AI must continuously train and improve even after it achieves workable domain expertise, which means expert human supervision remains critical.

Integrating jobs for people into the design of semi-autonomous systems has enabled some companies to achieve greater performance despite current technological limitations.

BestMile, a driverless vehicle deployed to transport luggage at airports, has successfully integrated human supervision into its design. Instead of engineering for every edge case in the complex and dangerous environment of an airport tarmac, the BestMile vehicle stops when it senses an obstacle in its path and waits for its human controller to decide what to do. This approach has enabled the company to enter the market much more quickly than competitors, who must refine their sensing algorithms before their robots can operate independently without incident.
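The "stop and defer to a human" pattern described above is simple enough to sketch. This is an illustrative toy, not BestMile's actual control code; the function names and the operator callback are assumptions for the example:

```python
# Illustrative sketch of the semi-autonomous pattern: rather than
# handle every edge case, the vehicle halts on any detected obstacle
# and asks a remote human operator what to do.

def drive_step(obstacle, ask_operator):
    """Decide the vehicle's action for one sensing cycle.

    obstacle: None, or a string describing what the sensors detected.
    ask_operator: callback returning a human decision such as
    "proceed" or "reroute"; stands in for a remote-operator link.
    """
    if obstacle is None:
        return "continue"
    # Unclassified situation: stop immediately and wait for a human.
    return ask_operator(obstacle)

# Clear path: keep driving with no human involvement.
print(drive_step(None, ask_operator=lambda o: "proceed"))   # continue
# Obstacle detected: the human operator chooses the response.
print(drive_step("baggage cart", ask_operator=lambda o: "reroute"))
```

The design trade-off is explicit: the autonomy stack only needs to detect obstacles reliably, not interpret them, so the hard perception problems are deferred to a human at the cost of occasional pauses.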

Frontier explorers: Outward and upward

When Mars One, a Dutch startup whose goal is to send people to Mars, called for four volunteers for its first Mars mission, more than 200,000 people applied.

Regardless of whether automation leads to increased poverty, its threat of displacing people from their current jobs, and with them part of their sense of self-worth, could drive many to turn to an exploration of our final frontiers. An old joke holds that there are more astronauts from Ohio than from any other state because something about the state makes people want to leave this planet.

One risk to human involvement in exploration is that exploration itself is already being automated. Relatively few recent space exploration missions have been manned. Humans have not ventured beyond Earth's orbit since the Apollo era; all our exploration of other planets and the outer solar system has been through unmanned probes.

Artificial personality designers

As AI creeps into our world, we'll build more intimate relationships with it, and the technology will need to get to know us better. Some AI personalities may not suit some people, and different brands may want to be represented by distinct, well-defined personalities. The effective human-facing AI designer will therefore need to be mindful of subtle differences within AI to make AI interactions enjoyable and productive. This is where the personality designer or personality scientist comes in.

While Siri can tell a joke or two, humans crave more, so we will have to train our devices to provide for our emotional needs. In order to create a stellar user experience, AI personality designers or scientists are essential — to research and to build meaningful frameworks with which to design AI personalities. These people will be responsible for studying and preserving brand and culture, then injecting that information meaningfully into the things we love, like our cars, media, and electronics.

See also: How to avoid losing in the competitive “future of work”

Chatbot builders are also hiring writers to produce lines of dialogue and scripts that inject personality into their bots. Cortana, Microsoft's virtual assistant, employs an editorial team of 22. Creative agencies specializing in writing these scripts have also found success in the last year.

Startups like Affectiva and Beyond Verbal are building technology that assists in recognizing and analyzing emotions, enabling AI to react and adjust its interactions with us to make the experience more enjoyable or efficient. A team from the Massachusetts Institute of Technology and Boston University is teaching robots to read human brain signals to determine when they have committed a fault without active human correction and monitoring. Google has also recently filed patents for robot personalities and has designed a system to store and distribute personalities to robots.


As automated systems become better at doing most jobs humans perform today, the jobs that remain monopolized by humans will be defined by one important characteristic: the fact that a human is doing them. Of these jobs, social interaction is one area where humans may continue to desire specifically the intangible, instinctive difference that only interactions and friendships with other real humans provide.

We are already seeing profound shifts toward “human-centric” jobs in markets that have experienced significant automation. A recent Deloitte analysis of the British workforce over the last two decades found massive growth in “caring” jobs: the number of nursing assistants increased by 909% and care workers by 168%.

The positive health effects of touch have been well documented and may provide valuable psychological boosts to users, patients, or clients. In San Francisco, companies are even offering professional cuddling services. Whereas today such services are stigmatized, “affection as a service” may one day be viewed on par with cognitive behavioral therapy or other treatments for mental health.

Likewise, friendship is a role that automated systems will not be able to fully fill. Certain activities that are generally combined with some level of social interaction, like eating a meal, are already seeing a trend towards “paid friends.” Thousands of internet viewers are already paying to watch mukbang, or live video streams of people eating meals, a practice which originated in Korea to remedy the feeling of living alone. In the future, it is possible to imagine people whose entire jobs are to eat meals and engage in polite conversation with clients.

More practical social jobs in an automated economy may include professional networkers. Just as people have not trusted online services fully, it is likely that people will not trust more advanced matching algorithms and may defer to professional human networkers who can properly arrange introductions to the right people to help us reach our goals. Despite the proliferation of startup investing platforms, for example, we continue to see startups and VC firms engage placement agents in order to successfully fundraise.

Despite many claims to the contrary, designing a fully autonomous system is incredibly complex and remains far out of reach. For now, training a human is still much cheaper than developing a robot replacement.

TRIF is the venture capital team of Tiffine Wang, Ivy Nguyen, Ryan Morgan, and Freddy Dopfel

The post In an AI-powered world, what are potential jobs of the future? appeared first on ReadWrite.


AI-powered voice assistant platform Snips closes $13M Series A

Snips, an AI-powered voice assistant platform for connected products, announced last week that it has raised a $13M Series A round.

The round was led by Korelya Capital and MAIF Avenir, with participation from BPI France and existing investor Eniac Ventures.

The startup previously raised a $6.3M seed round in June 2015 and obtained a $2M grant in September 2016, bringing its total equity funding to $21M.

Snips' voice platform for connected devices is an end-to-end solution with features such as hotword detection, speech recognition, natural-language understanding and dialog management.
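The four stages named above form a classic voice-assistant pipeline. A minimal sketch of how they chain together, with toy placeholder implementations for every stage (these are not Snips' actual APIs; the wake word, intent name and responses are invented for illustration):

```python
# Toy end-to-end voice pipeline: hotword detection -> speech
# recognition -> natural-language understanding -> dialog management.
# Every stage is a placeholder; a real system would run trained
# models at each step.

def hotword_detected(audio):
    # Stage 1: only wake up on the (hypothetical) wake word.
    return audio.startswith("hey snips")

def speech_to_text(audio):
    # Stage 2: stand-in for on-device speech recognition.
    return audio.removeprefix("hey snips").strip()

def understand(text):
    # Stage 3: toy NLU mapping an utterance to an intent.
    if "lights" in text:
        return {"intent": "turnOnLights", "slots": {}}
    return {"intent": "unknown", "slots": {}}

def dialog_manager(intent):
    # Stage 4: pick a response for the recognized intent.
    responses = {"turnOnLights": "Turning on the lights."}
    return responses.get(intent["intent"], "Sorry, I didn't get that.")

def assistant(audio):
    """Run the full pipeline; on-device, nothing leaves the phone."""
    if not hotword_detected(audio):
        return None  # stay silent until the wake word is heard
    return dialog_manager(understand(speech_to_text(audio)))

print(assistant("hey snips turn on the lights"))  # Turning on the lights.
```

Keeping all four stages on the device is what lets such a platform avoid shipping any audio or transcripts to the cloud.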

A differentiating capability of Snips is that it's an on-device voice assistant: everything runs on the device, eliminating the need to send user data to cloud servers. “A company that says they need your data is lying, or has no clue how to build technology,” said Rand Hindi, co-founder and CEO of Snips. The voice assistant can be used to turn on the lights, check weather reports, play music albums, and even brew coffee.

Unlike Google Home and Amazon Alexa, Snips doesn't ship the user's data to the cloud. Startups like Snips might be at an advantage when European digital privacy laws come into force in 2018, whereby companies will have to seek explicit permission to collect user data. Since its voice platform competes with the likes of offerings from Facebook, Google, and Microsoft's LUIS, Snips might comfortably raise bigger investment rounds in the future.

Additional activity in the space includes Rokid, a smart-home and AI company that raised $50M in January, and PeoplePower, an AI-based smart-home company that raised $3.2M in May 2017.

Postscapes: Tracking the Internet of Things

Bosch and Nvidia to develop AI-powered autonomous vehicle system

German engineering and electronics company Bosch has teamed up with US semiconductor company Nvidia to develop an artificial intelligence (AI) system for autonomous vehicles.

Speaking at Bosch Connected World in Berlin – the company’s annual Internet of Things (IoT) event – Bosch CEO Dr Volkmar Denner and Nvidia CEO Jen-Hsun Huang announced the product, which they say will be available to the mass market.

The agreement will see the companies develop an AI self-driving car computer built on Nvidia’s deep learning software and hardware, meaning vehicles can be trained remotely, operated autonomously and updated via the cloud.

The power behind the machine

The Bosch AI car computer system will be based on Nvidia’s Drive PX technology, an open AI car computing platform that should enable automakers and suppliers to accelerate the production of autonomous vehicles.

The platform will come with the recently announced Xavier AI supercomputer, the world’s first single-chip processor designed to achieve Level 4 autonomous driving – the level at which a car can drive itself without any human intervention. Nvidia claims the chip can process up to 30 trillion deep learning operations a second while drawing just 30 watts of power.
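Nvidia's two headline numbers imply a striking efficiency figure. A quick back-of-the-envelope check of the claim:

```python
# Sanity-check the claimed efficiency: 30 trillion deep learning
# operations per second at 30 watts works out to one trillion
# operations per second per watt. Figures are Nvidia's claims as
# reported above, not independent measurements.

ops_per_second = 30e12   # 30 trillion deep learning ops/s
power_watts = 30.0       # claimed power draw

ops_per_watt = ops_per_second / power_watts
print(f"{ops_per_watt:.0e} ops per second per watt")  # 1e+12
```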

This technology combines deep learning, sensor fusion and surround vision to enable the vehicle to understand its surroundings, locate itself precisely on an HD map, and plan a safe route forward, all in real time.

Read more: Hackers could use mobile apps to steal connected cars, says Kaspersky

Training a car to drive

Nvidia’s Huang said his company will deliver technology enabling Level 3 autonomous capabilities (the level at which a car can drive itself but still requires driver intervention in some situations) by the end of this year, and Level 4 capabilities by the end of 2018.

Huang noted that while many brands (Ford, Audi, Tesla) are working on autonomous vehicle technology, such vehicles will require unprecedented levels of computing power that advanced driver assistance systems (ADAS) cannot provide.

Huang continued by saying that software can't possibly be hand-coded to anticipate the almost infinite number of things that can happen on a road, such as cars straying from their lanes, shifts in weather conditions, or animals wandering into the roadway.

“Self-driving cars is a challenge that can finally be solved with recent breakthroughs in deep learning and artificial intelligence,” Huang noted.

Read more: Amazon Alexa comes to cars courtesy of new Logitech Car Assistant app

Nick Reed, academy director at the Transport Research Laboratory, clearly shares this view, as he told Internet of Business recently.

“The key aspect to consider with AI is that driving is infinitely variable – you never know what combination of pedestrians, traffic and weather you might encounter,” Reed said.

“You need the vehicle to have sufficient learned capabilities to know how to handle unexpected situations.

“It is possible to achieve this with the use of AI. However, it needs to be trained on masses of data to enable the AI to learn how it should react to ensure the vehicle makes safe progress in the widest range of potential situations that it might encounter. This is a key challenge for the industry at the moment.”

With Nvidia providing the computing know-how and Bosch opening the right doors through its extensive automotive network, the two companies are well-placed to tackle this challenge head-on.

Read more: Cloudera: Connected car data is a safety issue

The post Bosch and Nvidia to develop AI-powered autonomous vehicle system appeared first on Internet of Business.
