Outsourced AI and deep learning in the healthcare industry – Is data privacy at risk?

As emerging technologies, artificial intelligence (AI) and deep learning have proven capable of delivering powerful business insights. This is especially true for the healthcare industry, says Jonathan Martin, EMEA operations director at Anomali, where freemium AI and machine learning software packages such as Theano, Torch, CNTK, and TensorFlow can effectively predict medical conditions such as cancer, […]

The post Outsourced AI and deep learning in the healthcare industry – Is data privacy at risk? appeared first on IoT Now – How to run an IoT enabled business.

Qualcomm acquihires deep learning startup Scyfer B.V.

Qualcomm, a high-tech semiconductor and telecommunications equipment company, has acquired Scyfer B.V., a spin-off of the University of Amsterdam specializing in machine learning. Terms of the deal were not disclosed.

Scyfer B.V. develops AI solutions for companies across manufacturing, medical, and industrial applications. One of its most notable customers is India's Tata Steel, for which the startup developed a customized machine learning platform powering a steel surface quality inspection system; the system produces images daily to help train the software. The company has also developed medical image analysis applications for health companies.

The deal appears to be an acquihire, as Scyfer boasts a strong team of machine learning experts. Scyfer B.V.’s co-founder Dr. Max Welling will continue in his role at the University of Amsterdam.

Qualcomm’s previous high-profile acquisition was NXP Semiconductors, for $47 billion in cash, in October 2016. It also acquired Euvision, a Dutch image and video recognition startup spun out of the University of Amsterdam. The acquisition of Scyfer B.V. will further strengthen Qualcomm’s AI portfolio, and it signals that Qualcomm is moving its AI capabilities towards the ‘edge’, closer to end users’ devices and products.

The California-based company also released its Qualcomm Snapdragon Neural Processing Engine software development kit last month.


Postscapes: Tracking the Internet of Things

Ultrahaptics finds deep pockets for their groundbreaking AR/VR tech

With the rapid development of VR and AR technologies, we have experienced fascinating virtual environments, but in terms of haptic feedback and lifelike sensation, the market still lacks practical, mature solutions.

Ultrahaptics, which was founded in 2013 based on technology originally developed at the University of Bristol, has developed technology that uses ultrasound to create virtual 3D objects and real-life sensations, so that users can feel feedback from touchless buttons simply by gesturing in midair. Its technology promises a user experience that makes VR and AR products work like what we have seen in sci-fi movies for years.

Truly unique technology

Ultrahaptics launched a development kit called UHDK5 TOUCH last year, a complete hardware and software package that can be used to design and test products. According to TechCrunch writer Lucas Matney, who tried the kit a couple of months ago, despite Ultrahaptics still being “in the early stages of finding use cases for its tech,” the technology it’s offering is “definitely an interesting solution to some tired problems.”

With such unique, sci-fi-like technology, Ultrahaptics has drawn attention from many investors in related fields, and on May 4 it announced on its official blog that it had raised $23 million in its Series B round of funding. Dolby Family Ventures and Cornes both participated in this round, along with Woodford Investment Management and IP Group, which had already invested in Ultrahaptics’ $15.6 million Series A round.

This recent funding marks a large milestone not only for the firm but also for the landscape of haptics-based user input. According to data provided by VB Profiles for its “User Input – Haptics” market, Ultrahaptics’ Series B makes up almost a fourth of the $101 million invested in this market to date. In fact, the money raised across all of Ultrahaptics’ funding rounds accounts for 41% of all User Input – Haptics funding, with the heavy majority of the remainder raised by Immersion.

Immersion, a publicly traded company, is one of Ultrahaptics’ strongest competitors in the User Input – Haptics market; it has raised the most money in the market thus far, over $50 million. However, Immersion has seen almost no headcount growth in the last two years, while Ultrahaptics has grown nearly 300%, according to LinkedIn data.

Shifting to smart vehicles

With its gains in virtual technology, fresh funding from investors, and rapid expansion, Ultrahaptics is now shifting its focus toward smart vehicles, hoping to implement its ultrasound technology and midair gesture feedback system in drivers’ dashboards. With this technology, drivers can control infotainment and audio systems with midair gestures instead of physical controls, keeping more of their attention on the road.

At CES in Las Vegas earlier this year, it was revealed that the firm’s solution was used in BMW’s new virtual touchscreen system, HoloActive Touch. In February, Harman, which was acquired last year by the Samsung Group, partnered with Ultrahaptics to develop a better midair haptic feedback system and enhance the driving experience in smart cars.

Ultrahaptics’ new funding should make its development process run more smoothly, and as a result we can expect more practical and useful solutions for VR- and AR-related fields in the near future.

The post Ultrahaptics finds deep pockets for their groundbreaking AR/VR tech appeared first on ReadWrite.

What makes Deep Learning deep….and world-changing?

Remember how, from childhood, you started recognizing fruits, animals, cars, and just about any other object simply by looking at them?

Our brains get trained over the years to recognize these images and then classify them as apple, orange, banana, cat, dog, or horse. Then it gets even more interesting: aside from figuring out what to eat and what to avoid, we learn brands and their differences, such as Toyota, Honda, BMW, and so on.

See also: How to use machine learning in today’s enterprise environment

Artificial neural networks (ANNs) were developed, inspired by these biological processes of the human brain. “Deep learning” refers to artificial neural networks composed of many layers, and it is the fastest-growing field in machine learning. It uses many-layered deep neural networks (DNNs) to learn levels of representation and abstraction that make sense of data such as images, sound, and text.

So what makes it deep?

Why is deep learning called deep? It is because of the structure of those ANNs. Four decades back, neural networks were only two layers deep, as it was not computationally feasible to build larger networks. Now it is common to have neural networks with 10+ layers, and ANNs with 100+ layers are being explored.

Using multiple levels of neural networks, deep learning now gives computers the capacity to see, learn, and react to complex situations as well as or better than humans.

Normally, data scientists spend a lot of time on data preparation: feature extraction, or selecting the variables that are actually useful for predictive analytics. Deep learning does this job automatically, making life easier.
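To make the idea of depth concrete, here is a minimal sketch in plain NumPy. The layer count and sizes are arbitrary illustrations, not any particular production network; the point is that each hidden layer re-represents the output of the layer before it, which is the automatic feature extraction described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A toy "deep" network: an input layer, 10 hidden layers, an output layer.
layer_sizes = [64] + [32] * 10 + [2]
weights = [rng.normal(0.0, 0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Each hidden layer transforms the previous layer's representation,
    # so features are built up automatically rather than hand-crafted.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]  # linear output layer, e.g. class scores

x = rng.normal(size=(1, 64))   # one dummy 64-dimensional input
print(forward(x).shape)        # -> (1, 2)
```

With random weights the output is meaningless, of course; training is what tunes each layer so that the learned representations become useful.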

To spur this development, many technology companies have open-sourced their deep learning libraries, such as Google’s TensorFlow and Facebook’s open source modules for Torch. Amazon released DSSTNE on GitHub, while Microsoft also released CNTK, its open source deep learning toolkit, on GitHub.
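As a rough illustration of what these libraries buy you, here is a minimal sketch using TensorFlow’s bundled Keras API (this assumes a TensorFlow 2.x installation; the layer sizes and the single training epoch are arbitrary choices for brevity):

```python
import tensorflow as tf

# A small multi-layer classifier: the network learns its own
# features from raw pixels rather than from hand-crafted inputs.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# MNIST ships with Keras, which keeps the example self-contained.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=1, batch_size=128)
```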

And so, today we see a lot of examples of deep learning around, including:

  • Google Translate uses deep learning and image recognition to translate speech as well as written text.
  • With the CamFind app, simply take a picture of any object and its mobile visual search technology tells you what it is. It provides fast, accurate results with no typing necessary. Snap a picture, learn more. That’s it.
  • Digital assistants like Siri, Cortana, Alexa, and Google Now use deep learning for natural language processing and speech recognition.
  • Amazon, Netflix, and Spotify use deep-learning-based recommendation engines to suggest the next best offers, movies, or music.
  • Google’s PlaNet can look at a photo and tell where it was taken.
  • DCGANs are used for enhancing and completing images of human faces.
  • DeepStereo turns images from Street View into a 3D space that shows unseen views from different angles by figuring out the depth and color of each pixel.
  • DeepMind’s WaveNet generates speech that mimics human voices and sounds more natural than the best existing text-to-speech systems.
  • PayPal uses deep learning to prevent payment fraud.

So far, deep learning has advanced image classification, language translation, and speech recognition, and it can be applied to virtually any pattern recognition problem, largely without human intervention in feature design.

This is without a doubt a disruptive digital technology that is being used by more and more companies to create new business models.

The post What makes Deep Learning deep….and world-changing? appeared first on ReadWrite.

“Deep Learning”: Deep Dive Into Large Data

Understand why we need deep learning and how it is relevant to current engineering problems. Get introduced to application areas of deep learning in signal processing and computer vision. Kick-start your journey with deep learning and the Caffe framework.

Machine learning uses data to find an equivalent model underlying a physical process. This is usually achieved by hand-crafting features and training a learning algorithm on top of them. In deep learning, however, the features are learned along with the model, making the entire learning process largely data dependent. Deep learning is currently the most popular machine learning approach for large sets of “unstructured data” such as images, speech, and video.

Consider a ‘visual servoing system’, in which an algorithm extracts information from a visual sensor to control a robot. An intelligent control algorithm will try to learn various attributes of its surroundings and generate a response based on the collective information. Traditionally, engineers and programmers attempt to program the robot for every scenario it may or may not encounter; through deep learning, the robot instead learns the desired behaviour. Deep learning with large data is interesting because it finds the underlying pattern in a process, and it is worth exploring some of the fields that have been immensely impacted by deep learning.
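Since the talk kick-starts the Caffe framework, here is a minimal inference sketch using Caffe’s Python bindings (pycaffe). The network definition and weights files are hypothetical placeholders; this assumes a working Caffe installation and a trained model at hand.

```python
import numpy as np
import caffe  # Caffe's Python bindings (pycaffe)

# Hypothetical files: a network definition and pretrained weights.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# Fill the input blob (commonly named 'data') with one dummy sample.
net.blobs['data'].data[...] = np.random.rand(
    *net.blobs['data'].data.shape).astype(np.float32)

# One forward pass runs the learned feature hierarchy end to end;
# no hand-crafted features are supplied anywhere.
outputs = net.forward()
print({name: blob.shape for name, blob in outputs.items()})
```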

A BREAKDOWN OF THE TALK:
• Learning and its importance
• Deep learning and ‘shallow learning’: what do they imply?
• Cognitive processes and concepts of deep architecture
• Application: deep learning in the field of robotics
• Open source libraries available for deep learning
• The Caffe framework

Presented by: Pallab Maji, Senior Engineer, Continental Automotive

Video: “Deep Learning” – Deep Dive Into Large Data, from EFY on Vimeo.

The post “Deep Learning”: Deep Dive Into Large Data appeared first on Internet Of Things | IoT India.
