Building the Best Autonomous Brain

When I’m bumper-to-bumper in a sea of exhaust fumes and distracted drivers, it seems like autonomous driving can’t get here fast enough. Nor can the potential rewards that come along with fully autonomous vehicles, like far fewer accidents and mobility for people who struggle to get around on their own. To do my part, I’m focusing on how building the best autonomous brain for a car will get us there faster.

5 Things to Know About Autonomous Vehicles

Every day, we’re getting closer to the technology needed to power self-driving cars. But in-vehicle compute needs are complex, and autonomous driving algorithms are changing rapidly. So, the question is: What is the best long-term path to fast, safe decision-making? It all begins with the right compute for the right task. Here are five things you should know about the complex compute for autonomous driving.


It Takes More Than Deep Learning

Artificial intelligence is just one part of the story. And beyond that, AI is more than just deep learning. Yes, deep learning is key in teaching a car how to drive, especially when it comes to computer vision. But there will be several other types of AI at work in the fully autonomous vehicle, from traditional machine learning to memory- and logic-based AI. The fully autonomous vehicle will need a wide range of computing to support three intertwined stages of self-driving: sense, fuse and decide. Each stage requires different types of compute. In the first stage, the vehicle collects data from dozens of sensors to visualize its surroundings. During the second stage, data is correlated and fused to create a model of the environment. Finally, the vehicle must decide how to proceed. System designers need a flexible architecture to support all three stages, with an optimized combination of power efficiency and performance.
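The three-stage loop above can be sketched in a few lines of code. This is a purely illustrative toy, assuming nothing about any real automotive SDK; all names (`sense`, `fuse`, `decide`, `EnvironmentModel`) are hypothetical, and a real system would run specialized workloads for each stage on different compute elements.

```python
from dataclasses import dataclass

# Hypothetical minimal sketch of the sense / fuse / decide loop.
# All class and function names are illustrative, not from any vendor SDK.

@dataclass
class EnvironmentModel:
    obstacles: list  # fused detections from all sensors

def sense(sensors):
    """Stage 1: collect raw readings from every sensor."""
    return [s() for s in sensors]

def fuse(readings):
    """Stage 2: correlate readings into one model of the surroundings."""
    return EnvironmentModel(obstacles=[r for r in readings if r is not None])

def decide(model):
    """Stage 3: choose an action based on the fused model."""
    return "brake" if model.obstacles else "cruise"

# Two toy sensors: a radar that reports an obstacle, a camera that sees none.
sensors = [lambda: "pedestrian_ahead", lambda: None]
print(decide(fuse(sense(sensors))))  # prints "brake"
```

Even in this toy form, the sketch shows why each stage favors different hardware: sensing is I/O-bound, fusion is data-parallel, and deciding is latency-critical logic.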

With a flexible, scalable architecture of CPUs, Intel Arria 10 FPGAs and other accelerators, our Intel GO automotive solutions portfolio leads the industry with a diverse range of computing elements that support all three stages of driving. But autonomous driving is much more than just in-vehicle compute; that’s why we offer a full car-to-cloud solution including 5G connectivity, data center technologies and software development tools to accelerate autonomous driving.
Smart AI consists of sensing, fusing and deciding.


No Fixed Architecture Can Keep Pace

Before system designers can achieve level four and five driving automation, they must determine how to best use different compute elements within the system to support each type of workload.

No fixed architecture can keep pace with the speed of innovation in AI and system design. Automakers and suppliers will need to be ready to change system designs down the road. Whether it’s to incorporate new algorithms or completely rethink compute to accommodate new workloads, system designers will need a flexible, scalable architecture. Simply put, they need interoperable and even programmable compute elements that don’t require them to start from the ground up every time they want to incorporate a new feature. With a flexible architecture of CPUs, FPGAs and other accelerators, future-ready solutions offer a diverse range of computing elements that can accommodate designs that may change long after hardware and vehicle design decisions have been made.


Driving the Future

Now is a time of tremendous opportunity as we continue to research and respond to the transformational changes before us. From powering Stanford University’s robotic car to serving as a premier board member of the University of Michigan Mobility Transformation Center’s Mcity, Intel is working alongside world-renowned research teams to understand the way people interact with connected cars. Intel has built autonomous vehicle labs in Arizona, California, Germany and Oregon. Here, we’re working hand in hand with our ecosystem partners to optimize customized solutions, road-test autonomous vehicles, and work toward common platforms that will speed broad industry innovation for the promising road ahead.

Learn more about the road to autonomous driving at intel.com/automotive. To stay informed about Intel IoT developments, subscribe to our RSS feed for email notifications of blog updates, or visit intel.com/IoT and follow us on LinkedIn, Facebook and Twitter.

The post Building the Best Autonomous Brain appeared first on IoT@Intel.



Nokia OZO: Where virtual reality and brain surgery meet


Virtual reality gets most coverage from its use in gaming, but it has many more applications, including some with the potential to save or enhance lives. Nokia’s OZO virtual reality camera is proving its worth in this respect, with a recent outing as a training aid for brain surgeons.

Surgeons undergo a long and complex training period before they are free to treat patients independently. Even once they are practicing, education and learning are ongoing: working with and observing more experienced colleagues in their chosen fields is a key part of the process. So surgeons are used to the idea of learning through observation.

But how is this achieved?

Surgeons aren’t able to travel constantly to be part of groundbreaking surgery, or to observe complex procedures up close. But modern virtual reality (VR) technologies could help them achieve a near-presence experience of operating theatres, without the need to travel to them.

Proof of concept

Nokia brought an experiment in using VR to train surgeons to the 17th annual Live Demonstration Course in Operative Microneurosurgery, held at Helsinki University Hospital this June. At this annual event, surgeons watch operations take place and learn from the experience.

This year, attendees were offered a totally new way to experience and learn surgical techniques via VR live streaming. The Nokia team developed a solution in which video from the surgical microscope and brain-imaging pictures were captured in a real-time live stream.

The experience delivers a stereoscopic 360-degree OZO camera live stream with spatial audio, complemented by interactive microscope and graphics overlays.

Mass benefits

Normally, live operations can be observed first-hand by a maximum of around 15 people, with others watching on TV screens. But with access to VR live streaming, an operation can be shared with as many people as necessary, and they don’t have to be nearby.

Moreover, those with access to the VR streams can observe more than just the surgery itself. They can see how the patient is prepared for surgery, and observe the work of assisting nurses and anesthesiologists. This opens up opportunities for learning about what makes teams work well together, and understanding all the different roles played in complex surgery.

“Helsinki University Hospital wants to be a forerunner in exploring, identifying and demonstrating novel opportunities in the virtual, augmented and mixed reality domains, and drive concept creation for future virtual and augmented reality in medical context,” said Miikka Korja, a neurosurgeon and chief innovation officer at the hospital. “We are really happy that we can cooperate with the Nokia team, who are pioneers in this area.”

Read more: In headsets battle, augmented reality for business to dominate, says IDC

Taking Ozo beyond surgery

Nokia says the potential benefits of this use of VR go beyond surgery itself. Technologies such as its Nokia OZO camera could bring doctors together in virtual worlds to help find solutions to complex cases. They might be used in patient care as part of the communications between patients, family and medical practitioners. And they could allow operating teams to review surgery and learn from what has been done.

Read more: Lloyds is banking on Virtual Reality to attract top grads

The post Nokia OZO: Where virtual reality and brain surgery meet appeared first on Internet of Business.


How ‘brain wearables’ can address 21st century needs

The human brain is the most complex system in the known universe. It is imbued with enormous potential that we have yet to fully understand or to harness. But we’re making progress, for many good reasons.

By studying how the human brain functions and how it responds to stimuli, we can potentially train our minds for optimal performance and, perhaps, overcome physical disabilities or detect neurological abnormalities for treatment. We stand now on the cusp of what has been called ‘The Fourth Industrial Revolution’, a revolution that is growing out of the integration of the physical, digital and biological realms. The ability to directly connect electronic devices to the human organism in order to affect physical objects around us has the potential to drive change forward at an exponentially increasing pace. Our understanding of our limitations will be shattered, and new vistas will open up, as we explore the possibilities that arise when we bring minds, machines, and the material world together.

Put simply, we stand to reap enormous benefits if we can enlighten ourselves as to why and how we think and feel – to improve how we interact with and experience the world around us.

Today, innumerable such efforts proceed in specialised laboratories around the world, with a rather limited number of research subjects. But every brain is unique and changes in unique ways: neuroplasticity means that our brains change shape and function based on personal biological factors as well as our individual experiences in life. So we are likely to gain commensurately greater insights from broader participation in such studies.

And that’s where brain wearables come into the picture.

Enter: brain wearables

A market for brain wearables promises to put neurotechnology into the hands of ordinary people. This matters because of the uniqueness of every brain: the greater the sample, the more robust the insights it yields.

Today these devices fall into two main categories. One uses electroencephalography (EEG) – essentially, readings of surface brain wave activity – in a non-invasive, read-only mode, which can provide data on the wearer’s mental and emotional state. The other relies on transcranial direct current stimulation (tDCS), which sends weak electrical currents to the brain for neuro-priming, intended to promote “hyper-learning.”
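The read-only EEG category typically reduces raw voltage traces to power in named frequency bands (alpha, beta, and so on). A minimal sketch of that computation, using a synthetic signal rather than real headset data (the sampling rate and band edges are common conventions, not any particular device's specification):

```python
import numpy as np

# Illustrative sketch: estimating EEG frequency-band power, the kind of
# read-only measurement consumer EEG headsets expose. The signal is synthetic.

fs = 256  # sampling rate in Hz, a common choice for consumer EEG hardware
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic trace: a 10 Hz alpha rhythm plus broadband noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

def band_power(x, fs, lo, hi):
    """Mean spectral power of x between lo and hi Hz (simple periodogram)."""
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

alpha = band_power(signal, fs, 8, 13)   # band associated with relaxed wakefulness
beta = band_power(signal, fs, 13, 30)   # band associated with active concentration
print(alpha > beta)  # prints True: the 10 Hz rhythm dominates the alpha band
```

Comparing band powers like this is the basic building block behind the "mental and emotional state" readouts such devices report, though commercial products layer far more signal cleaning and modelling on top.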

I work in the EEG-related field of brain wearables, which offers a means to further our understanding of the human brain in a useful form factor and at a reasonable price point.

Potential benefits

We are using brain wearables to conduct longitudinal studies in more than 120 countries to discern how different stimuli and situations affect different brains, helping us understand, for instance, how different people handle stress or how we can assist them in achieving optimal performance.

In practical terms, understanding and encouraging high performance is one focus of our work, which would have obvious benefits for athletes, soldiers, professionals, artists – nearly everyone, really. And the broadest possible application would be to gain a better understanding of how various stimuli – and our own, often very individual responses – affect our thoughts and feelings. The end result could be to inform an improved self-awareness and a better understanding of ourselves to mitigate irrational or unproductive behaviour.

Early detection possible

Ultimately, those of us in the brain wearables field would like to make progress on the early detection of neurological issues and overall brain health.

One in three of the more than seven billion people on Earth is affected by a brain-related illness, including depression, anxiety, dementia, autism, attention-deficit disorder (ADD), attention-deficit hyperactivity disorder (ADHD), stroke or trauma. Apart from widespread human suffering, these disorders are estimated to cost the global economy some $2 trillion per year. In the U.S. specifically, an aging population means longer lives, and quality of life in those added years will depend on healthy brains.

Brain health is also considered a key factor in many other bioinformatics advances. I think of it as a quintessential 21st century issue.

Staying ahead of potential pitfalls

Though I’m positively buoyant about the known and potential benefits of brain wearables, it is also our duty to be vigilant about the potential risks.

Data privacy and security are perennial concerns for everyone. These concerns are heightened when personal health-related matters are at stake. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) provides legal protections, but it is up to technologists to ensure that data privacy and security protections are state of the art.

Currently we apply significant effort and care to user consent for participation in the studies we conduct. The critical issue, in my view, is preserving the choice and personal integrity of every individual.

I have few real concerns at this stage, because wearables are just that: you can put them on or take them off, and anonymising data in studies is standard practice. But if brain wearables or related technologies were to become embedded in the human body, there would be an obvious risk of abuse. Today, arguably, our thoughts and feelings are our own, but we know that chemical reactions govern them, so they could be manipulated, leading to a loss of individuality.

Democratisation of technology

Our approach is the opposite of a dystopian use of brain-monitoring technology. Our philosophy is to democratise technology and make tools such as brain wearables more affordable and easier to use. Our technology platform is built on open-access software (e.g., extensible APIs), aimed both at broad uptake (if the market finds the tools useful) and at the broadest possible base of innovation to benefit all. We want to avoid creating another aspect of a digital divide, with brain wearables available only to the few who can afford them. We believe this approach is in step with society’s shared values.

We work with partners across many domains and more than 120 countries, an open acknowledgement that we don’t have all the answers. The direction that brain wearables take is not up to us as pioneers in the field; it’s an open conversation. We simply want to position the technology and raise awareness for the greatest breadth and depth of potential contributions to the field. The more participants there are in brain wearable trials, the more we learn about the behaviour of the human brain and the ways in which its health and optimal use can be encouraged.

Widespread adoption is the crux of our success. A broad and diverse dialogue on the issues of brain health and technology will enable us to enhance healthy brains and to detect early signs of cognitive decline and disorders.

iottechnews.com