The organisers of the Bolton Food and Drink Festival have used technology from Innotech Concepts and Libelium to keep attendees moving freely and safely around the event.
Held annually in the Greater Manchester town, the Bolton Food and Drink Festival gives attendees the chance to taste cuisine from all over the world. Last summer, over the August Bank Holiday, it broke all previous records, attracting 267,000 visitors. But pulling in such numbers comes with its own challenges in terms of keeping attendees moving around, freely and safely.
Looking for a better way to understand and manage the crowds, the festival organisers enlisted the support of Innotech Concepts on a project focused on visitor data monitoring. The Castleford, West Yorkshire-based start-up specialises in data collection for the transport and events sectors, with an emphasis on areas including connectivity, public safety and evacuation.
Innotech provided the event’s management team with sensor specialist Libelium’s Meshlium IoT platform, to study the behaviour and activities of visitors in real time, based on detection of smartphones via the Meshlium Scanner.
Two connected scanners were installed at the main entrance of the event venue and a third was deployed in the car park, as a means of monitoring the location, length of stay and individual journey routes of visitors. These scanned for smartphones every 15 minutes via Wi-Fi and Bluetooth.
To protect customers’ identities, all information was kept anonymous and sent over a secure 4G network to Innotech’s proprietary analysis platform, Innotech Insights Crowded. Here, data can be transformed into visual charts to convey key findings more easily, but the platform also offers a raw data download option, enabling users to slice and dice the data in whatever ways interest them most.
From this, the event’s organisers were able to establish a number of metrics: duration of stay; visitor volume per location; visitor volume per day; most popular locations; most popular individual and group routes; and total visitors.
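To give a sense of how metrics like these fall out of periodic scan data, here is a minimal Python sketch. The record layout, device hashes and timestamps are invented for the example, and Innotech’s actual pipeline is proprietary; the point is only that dwell time and per-location volume derive directly from repeated anonymised sightings.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical scan records: (anonymised device hash, scanner location, timestamp).
# Real Meshlium output differs; this only illustrates the derivation.
scans = [
    ("a1f3", "main_entrance", datetime(2017, 8, 27, 11, 0)),
    ("a1f3", "car_park", datetime(2017, 8, 27, 13, 30)),
    ("b7c2", "main_entrance", datetime(2017, 8, 27, 11, 15)),
    ("b7c2", "main_entrance", datetime(2017, 8, 27, 12, 45)),
]

# Group sightings by device to reconstruct individual journeys.
by_device = defaultdict(list)
for device, location, ts in scans:
    by_device[device].append(ts)

# Duration of stay: first sighting to last sighting per device.
durations = {dev: max(times) - min(times) for dev, times in by_device.items()}

# Visitor volume per location: distinct devices seen at each scanner.
volume = defaultdict(set)
for device, location, _ in scans:
    volume[location].add(device)

print({dev: str(d) for dev, d in durations.items()})
print({loc: len(devs) for loc, devs in volume.items()})
```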
MediaTek Sensio, an advanced health monitoring solution, makes it easy to track heart rate information, blood pressure trends, peripheral oxygen saturation levels and more.
The advent of IoT in healthcare is bringing daily monitoring of our vital signs to everyday devices. We reported recently on the release of the Apple Watch KardiaBand accessory, which can provide easy EKG readings on the go, without the need for standalone, impractical hardware.
Global fabless semiconductor company MediaTek is entering the fray with a personal health companion solution of its own, timetabled for release in early 2018. Its MediaTek Sensio is a six-in-one biosensor module designed to be integrated into smartphones – enabling it to deliver the following key health data points:
Heart-rate: heart beats per minute
Heart-Rate Variability: variation in the time between heartbeats
Blood Pressure Trends: blood pressure data charted over time to show trends
Peripheral Oxygen Saturation (SpO2): the amount of oxygen in the blood
Electrocardiography (EKG/ECG): the electrical activity of the heart over a period of time, displayed as a graph
Photoplethysmography (PPG): the change in volume of blood
The module employs LEDs and a light sensor on the smartphone’s body to measure the absorption of red and infrared light by the user’s fingertips. Touching the sensors and electrodes completes the device’s circuit, allowing the biosensor to measure EKG and PPG waveforms.
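The PPG principle described above means heart rate can be estimated simply by counting pulses in the absorption waveform. The sketch below illustrates this with a synthetic 75 bpm signal; the sample rate, threshold and peak-counting approach are assumptions for the example, not MediaTek’s actual processing.

```python
import math

SAMPLE_RATE = 50  # samples per second (assumed)
DURATION = 10     # seconds of signal in the window

# Synthetic PPG stand-in: a 1.25 Hz sinusoid, i.e. 75 beats per minute.
signal = [math.sin(2 * math.pi * 1.25 * n / SAMPLE_RATE)
          for n in range(SAMPLE_RATE * DURATION)]

# Count local maxima above a threshold as heartbeats.
peaks = sum(
    1
    for i in range(1, len(signal) - 1)
    if signal[i] > 0.5 and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]
)

bpm = peaks * 60 / DURATION
print(round(bpm))  # prints 78: counting whole peaks in a finite window overshoots the true 75 slightly
```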
Smartphone manufacturers who wish to integrate the Sensio can develop proprietary applications or use third-party solutions and developer add-ons.
“Giving people the power to access their health information with a smartphone is a major step in making the world a healthier place,” said Dr. Yenchi Lee, senior director of product marketing for MediaTek’s wireless business. “With our MediaTek Sensio biosensor module and software, developers and device makers have a powerful, embedded health monitoring solution that delivers heart and fitness information in around 60 seconds.”
There’s no doubt that the MediaTek Sensio offers a neat solution for those with existing health conditions that require close monitoring. However, the challenge for MediaTek will be persuading smartphone manufacturers to integrate its product into future devices.
Historically, the likes of Samsung, Apple and LG have sought to engineer ever thinner, faster phones. While the MediaTek Sensio package is compact (6.8mm x 4.9mm x 1.2mm), it may add to the thickness or limit the battery size of any device that utilises it.
However, with top-of-the-range smartphones now at a point where they are almost bezel-less and as thin as we could want them, health monitoring features could help to differentiate a device in what is an increasingly homogenous market.
When your competitors are using the same SoC, display and other specifications, it takes something extra to distinguish a new release. Tapping into the burgeoning popularity of fitness and health tracking solutions could well be the answer. MediaTek products, from processors to network chipsets, are currently found in nearly one in three mobile phones globally, so the company seems well-placed to disrupt the status quo.
With more and more smart home devices controllable via smartphone apps, hackers are focusing on exploiting software flaws in the apps that control these devices. Recently, a vulnerability in LG’s SmartThinQ app was found that could let hackers take control of your costly home appliances. Walmart has deployed new shelf-scanning robots across its stores, which it says will boost both the customer shopping experience and store sales. And Renesas is furthering its autonomous-driving endeavours with a new vehicle solution that will be leveraged by Toyota’s autonomous vehicles, scheduled for commercial launch in 2020.
Bug In LG Home Appliance Login App Could Let Hackers Take Control Of Your Home
Recently, Check Point researchers discovered a vulnerability, dubbed HomeHack, in LG’s smart home software that exposes it to critical user account takeover. They claim this vulnerability could let hackers take remote control of internet-connected devices such as refrigerators, ovens, dishwashers, air conditioners, dryers and washing machines. The flaw lies in the process by which users sign into their accounts on the LG SmartThinQ app: an attacker could create a fake LG account to initiate the login process and take over a victim’s account. Attackers could then switch dishwashers or washing machines on or off, or even spy on users’ home activities via the Hom-Bot robot vacuum cleaner’s video camera, which sends live video to the associated LG SmartThinQ app. Read more.
Walmart Rolls Out Shelf-Scanning Robots To Over 50 US Stores
Walmart has decided to roll out autonomous shelf-scanning robots to over 50 US stores to replenish inventory faster and save employees time when products run out. The robots handle tasks such as checking stock, identifying mislabelled or misplaced items, spotting incorrect prices, and helping employees find items for online orders. Each robot, approximately two feet tall, carries a tower fitted with cameras that scan the store shelves to perform these tasks. Once a robot completes its rounds, the results are forwarded to Walmart employees, who can analyse the data to reduce inefficiencies in the stores. The company emphasises that having robots perform these vital but repetitive tasks frees store employees to better assist customers and sell merchandise. It will also help online customers and personal shoppers to fulfil their orders. Read more.
Autonomous-driving Vehicle Solution For Toyota’s Vehicles
Renesas stated that its autonomous-driving vehicle solution will be leveraged by Toyota’s autonomous vehicles, which are presently under development and scheduled for commercial launch in 2020. Selected by Toyota and Denso Corporation, the solution combines the R-Car system-on-chip (SoC), which serves as an electronic brain for in-vehicle infotainment and advanced driver-assistance systems (ADAS), and the RH850 microcontroller (MCU) for automotive control. Renesas boasts that this combination delivers a comprehensive semiconductor solution that covers peripheral recognition, driving judgements, and body control. Read more.
IoB Insiders: Rob Bamforth, analyst at Quocirca, asks: is it time to contemplate the end of the smartphone era and, if so, what will its replacement look like?
The smartphone has become a default item to carry, absorbing the diverse functionality of cameras, wallets and fitness trackers, as well as being a communications device and pocketable networked computer. Growth in raw compute power and storage, coupled with mostly reliable internet access to further services in the cloud, have allowed an app and services ecosystem to flourish. This has been accelerated by a shift in revenue-sharing from giant mobile network operators to small software developers.
But despite continued improvements in apps and the development process, much of the recent innovation in the hardware platform itself has been incremental rather than revolutionary. There is increasing cynicism among those following or attending major hardware launches, whether by Apple, Samsung or anyone else, about the lack of novelty. It is true that certain incremental improvements, especially those that relate to security, lead to new application opportunities, but this often fails to sufficiently enthuse industry watchers.
Many have looked to wearable devices for excitement. The smartwatch was once thought by some (including Dick Tracy fans, perhaps?) to be the next generation of universal device. However, most smartwatches have been companion devices, attempting to add value via a symbiotic relationship with a smartphone. Finding value has proved elusive; specific use cases are there, but most are too narrow. Perhaps as a result, a number of smartwatch ventures have folded or development effort has shifted elsewhere.
Although desktop and mobile screens have grown over the last couple of decades (or shrunk in size but grown in resolution), demands on screen real estate have grown or at least remained steady. Switching to a tiny screen on the wrist is a challenge, and without a great deal of app developer effort and user experience expertise, wrist-worn app usability is typically not great.
Beyond the smartphone
Recent advances in immersive screen technology offer a different way of thinking about what we are trying to do with digital information. After all, why do we assume that data must always be presented in a single, rectangular image? Perhaps the smartwatch is not the killer replacement for the smartphone after all. There are other technologies worth a closer look, and Google’s return to its Glass technology (perhaps that’s a bit of ‘double-gazing’?), along with Microsoft’s HoloLens and Epson’s Moverio, might signal an interesting new phase.
Fully immersive virtual reality (VR) systems have been around for quite a while, but only recently has technology innovation made the experience sufficiently high-definition, responsive and affordable. There are a number of compelling applications beyond gaming and entertainment, such as pre-experiencing reality (looking around a car or apartment before buying, or configuring a super yacht) or collaborating with others in a virtual working environment. There are also tasks that involve hazardous or difficult-to-reach working environments, during manufacturing or maintenance, where immersive simulation could have significant business benefit.
However, the need to wear something that encloses the eyes means that VR is not something to use without a degree of physical protection. An important area of VR development concerns the physical space around the user: harnesses, omni-directional treadmills and walking platforms that safeguard the wearer and make the experience ‘feel’ real. This is fine when the work is specific or justifies the need for immersion, but it is not likely to suit everyday, casual access to information.
Augmented reality (AR), on the other hand, might be. Heads-up and projected displays of simple data are not recent inventions either, and sci-fi and the movies have offered plenty of pop-up data examples, but AR probably gained most public awareness from the game Pokémon Go. Overlaying graphics on mobile screens for entertainment is one thing, but there is now sufficient capability to create more complex visualisations that overlay and complement the real world, and to project them onto highly wearable devices (such as smart glasses) without obscuring reality.
This does not need to involve an entire screenful of data, but simply the most relevant and timely information, related to context and the need of the individual at a given moment in time. It can be delivered so that those needing to access and interact with information, in order to do another important or even critical task, can operate hands-free, with overlay, not overload.
There are some interesting concepts being explored using screen projection, which then pick up gestures for user interaction by camera or radar. But the challenges, such as keeping a device steady if worn on the wrist, combined with the increased investment elsewhere, put the advantage in the VR/AR ‘eyewear’ court.
Glasses, not watches or goggles, are more likely to fit general-purpose use cases and scenarios, but they will require a different way of thinking about applications. For the sector to grow, the broadest possible development community will be needed. Not everyone can afford the most sophisticated new products from industry giants, but a number of low-cost AR tools are appearing, such as Zappar, Layar, Blippar and Aurasma.
The smartphone and other mobile devices might not be declining yet, but it is time to think about how to interact differently with IT going forward. We used to be tied to our desks, but we no longer are. Perhaps we no longer need to be tied to the machine in our hands, either?
EarFieldSensing, or EarFS for short, described in an academic paper by the researchers, relies on detecting changes to the shape of the ear canal and other effects caused by facial movement. Data received by the earbud are converted into instructions which are delivered to a smartphone.
For example, when we smile, it isn’t only the muscles around our mouth that move. Muscles in the ear move, too. A sensor attached to the earlobe detects these movements as electrical field changes. These can be interpreted as a specific instruction for a phone to carry out.
The developers say that the current version of the system, which is at the prototyping stage, can detect five expressions with 90 percent accuracy: smiling, winking, turning the head to the right, opening the mouth and saying ‘shh’.
“Something as simple as answering a call with a facial expression could be possible soon,” inventor Denis Matthies from the Fraunhofer Institute told the New Scientist.
The researchers acknowledge that in commercial use the system would need to be able to include other variables beyond just detecting expressions themselves. For example, a smile could be interpreted as an instruction to answer a call only if the phone is actually ringing.
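That kind of contextual gating, where an expression triggers an action only when the phone’s state makes the action meaningful, can be sketched in a few lines of Python. The five expression names match those EarFS detects; the phone-state model and action names here are invented for illustration.

```python
# Map a detected expression plus phone state to an action, or to None when
# the expression should be ignored in the current context. Hypothetical
# state model and action names; only the five expressions come from EarFS.
def action_for(expression, phone_ringing=False, media_playing=False):
    if expression == "smile" and phone_ringing:
        return "answer_call"       # a smile answers only an incoming call
    if expression == "say_shh" and media_playing:
        return "pause_media"       # 'shh' silences playback only while playing
    if expression == "wink":
        return "dismiss_notification"
    return None                    # otherwise, do nothing

print(action_for("smile", phone_ringing=True))   # answer_call
print(action_for("smile"))                       # None: no call to answer
```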
There is potential for EarFS to be used across a range of different scenarios. With smartphone makers constantly looking for the next big thing to make their phones stand out, perhaps an intelligent earbud that shaves seconds off the time needed to check a text message, listen to a voicemail or answer a call without bothering to reach for the phone could be a selling point. In many respects, this is just a logical extension of the Bluetooth hands-free kits that have been around for years.
Indeed, Sony’s Xperia Ear is already on the market and offers a wide range of ‘remote control’ features for phones. Where it differs from EarFS is that its primary control mechanism is voice, though it does also understand when a user nods their head. Adding the ability to pick up facial expressions brings more potential than the spoken word alone.