Let’s Mobilize for Better Data Stewardship

If we want organizations like Equifax to be good data stewards, we, the users and consumers, must mobilize.

In October, the Internet Society explored why the dominant approach to data handling, based around the concepts of risk and compliance, does not work. To recap: “…data handlers try to adhere to regulatory requirements and minimize the risk to themselves – not necessarily to the individuals whose data they handle. For some data handlers, the risk that poor security creates may not extend to them.”

To put it mildly, Equifax has not been a model of openness, transparency, and accountability. Users can change this paradigm. Users can shift the cost of a data breach onto the data handler by holding it accountable for its actions, or lack thereof.

The key is to organize. For example, Consumer Reports is organizing a campaign calling on Equifax to take the next steps to address the fallout from the data breach. Their first step was to deliver a petition signed by over 180,000 individuals to Equifax’s headquarters.

To continue making sure Equifax does everything in its power to make things right for consumers at risk of identity theft, Consumer Reports is fundraising. The Internet Society has just pledged $10,000 to this cause, and we hope others will join us.

Other actions you can take:

  1. Sign the Consumer Reports Petition to Equifax.
  2. Prepare for a breach incident with the Online Trust Alliance’s 2017 Cyber Incident & Breach Response Guide.
  3. Read the Global Internet Report 2016 to take a close look at the economics of data breaches and consider five recommendations for a path forward.


Homehack: Smartphone App Lets Hackers Take Control Of Your Home Appliances!


As more and more smart home devices can be controlled by smartphone apps, hackers are focusing on exploiting flaws in the apps that control them. Recently, a vulnerability in LG’s SmartThinQ app could have let hackers take control of your costly home appliances. Walmart has deployed fancy new shelf-scanning robots across its stores, which it says will boost both the customer shopping experience and store sales. And Renesas is furthering its autonomous-driving endeavors with a new vehicle solution that will be leveraged by Toyota’s autonomous vehicles, which are scheduled for commercial launch in 2020.


Bug In LG Home Appliance Login App Could Let Hackers Take Control Of Your Home

Recently, Check Point researchers discovered a vulnerability, dubbed HomeHack, in LG’s smart home software that exposes it to critical user account takeover. They claim this vulnerability could let hackers take remote control of Internet-connected devices like refrigerators, ovens, dishwashers, air conditioners, dryers, and washing machines. The flaw lies in the process of users signing into their accounts on the LG SmartThinQ app: an attacker could create a fake LG account to initiate the login process and from there take over a victim’s account. Attackers could then switch dishwashers or washing machines on or off. They could even spy on users’ home activities via the Hom-Bot robot vacuum cleaner’s video camera, which sends live video to the associated LG SmartThinQ app. Read more.


Autonomous Shelf-scanning Robots Restock Items Faster

Walmart has decided to roll out autonomous shelf-scanning robots to over 50 US stores to replenish inventory faster and save employees time when products run out. The robots are tasked with checking stock, identifying mislabeled or misplaced items and incorrect prices, and helping employees find items for online orders. The robots, approximately two feet tall, carry a tower fitted with cameras that scan the shelves to perform their tasks. Once a robot completes its rounds, its results are forwarded to Walmart employees, who can analyze the data to reduce inefficiencies in the stores. The company emphasizes that having robots perform these vital but repetitive tasks frees store employees to better assist customers and sell merchandise. In addition, this will help online customers and personal shoppers fulfill their orders. Read more.


Autonomous-driving Vehicle Solution For Toyota’s Vehicles

Renesas stated that its autonomous-driving vehicle solution will be leveraged by Toyota’s autonomous vehicles, which are presently under development and scheduled for commercial launch in 2020. Selected by Toyota and Denso Corporation, the solution combines the R-Car system-on-chip (SoC), which serves as an electronic brain for in-vehicle infotainment and advanced driver-assistance systems (ADAS), and the RH850 microcontroller (MCU) for automotive control. Renesas boasts that this combination delivers a comprehensive semiconductor solution that covers peripheral recognition, driving judgements, and body control. Read more.




Raspberry Pi-powered Inventor’s Laptop Lets You Start With Amazing DIY Projects

If you are a maker or a budding coder and want to create something unique and exciting, here’s how you can get hands-on with your computer science and electronics skills. A new Raspberry Pi-based laptop includes everything you need to get started on amazing projects. Gigabit Ethernet in the car is getting into gear with KDPOF’s new transceiver for car makers. Finally, Laird is helping OEM customers leverage the enhanced throughput and security benefits of Bluetooth v4.2 in their end devices with new Class 1 HCI modules.


A New Raspberry Pi Laptop For Budding Makers

To let Raspberry Pi tinkerers and budding coders experiment with a variety of interesting projects, a new version of the modular Raspberry Pi laptop, the Pi-top, has been revealed. This Raspberry Pi 3-based laptop has everything you need to invent new things, including an impressive sliding keyboard panel, a 14-inch 1080p display, a power source, a battery rated for up to eight hours of use between charges, and an 8GB SD card. Furthermore, the Pi-top includes an Inventor’s Kit to inspire inventors and young learners through STEAM-based learning. Unlike other laptops, students can access the internals and play with them, enabling them to explore computer science and basic electronics. The price is $319.99 including a Raspberry Pi 3, or $284.99 without. Read more.


HCI Modules Updated With Bluetooth v4.2 Dual-mode Connectivity

Laird has announced Bluetooth-qualified Class 1 HCI modules for rapidly adding Bluetooth technology to OEM devices. The BT850, BT851, and BT860 series add support for the Bluetooth v4.2 BR/EDR/LE core specification in both Classic Bluetooth and Bluetooth Low Energy (BLE). The BT850 and BT860 series give OEM customers more options through the enhanced throughput and security benefits of the Bluetooth v4.2 specification. Read more.


Gigabit Ethernet Connectivity In Cars Gets Into Gear

Making automotive gigabit Ethernet over POF (plastic optical fiber) a reality, KDPOF is shipping samples of the first automotive-grade Gigabit Ethernet over Plastic Optical Fiber (GEPOF) transceiver to car makers. Automotive applications of the KD1053 include 100Mbps and 1Gbps Ethernet links such as battery management systems (BMS), inter-domain communication backbones, antenna hubs, autonomous driving, and ADAS (advanced driver assistance systems) with surround view. To help users start designing quickly and easily, the firm also offers comprehensive support, including application notes, a reference design, and evaluation boards and kits. Read more.




The next step in IoT is vision, so let’s give computers depth perception

The computer-generated map of an environment from stereo cameras on a drone. Taken at Qualcomm’s robotics lab last week. Yes, that is me taking the picture.

I talk a lot about computer vision because I think it’s a core enabling technology for a vastly more efficient understanding and use of the world around us. When a computer can see, it can apply its intense analytical powers to images and offer insights humans can’t always match. Plus, when combined with actuators, computers can direct things in the real world to respond immediately to the data they “see.”

Thus, computer vision is a huge stepping stone to the promise of the internet of things. John Deere’s purchase this week of Blue River Technology, a company that makes a computer vision system to identify weeds on farms, is an excellent example of this in action.

John Deere is no stranger to connected tractors. It’s one of the early adopters of the internet of things and was implementing IoT before the phrase was even popular. It has been using GPS data, connectivity and sensors in fields to gather all kinds of data about land conditions and crops, and to make driving such bulky equipment more autonomous.

With this acquisition, it’s adding what Willy Pell, director of new technology at Blue River, calls “real-time perception” to the reams of data the ag firm already provides. This perception comes in the form of computer vision. The tractors can now pull a trailer that snaps a picture of each plant and prescribes an action, such as dropping pesticide on it. By automating the task, John Deere can offer farms a weed-killing solution that scales cheaply and performs the same way every time, while treating each plant individually.
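In essence, this is a per-plant see-classify-act loop. Here’s a minimal sketch of that pattern; classify_plant and Sprayer are invented placeholders for illustration, not anything from Blue River or John Deere.

```python
# A toy see-classify-act loop, the pattern described above.
# classify_plant() and Sprayer are invented placeholders, not real APIs.

class Sprayer:
    def spray(self, plant_id: int) -> None:
        print(f"spraying plant {plant_id}")

def classify_plant(image: bytes) -> str:
    # Stand-in for a trained vision model that labels each plant.
    return "weed" if image[0] % 2 else "crop"

def process_row(images: list[bytes], sprayer: Sprayer) -> None:
    for plant_id, image in enumerate(images):
        # Each plant gets its own decision, which is the point:
        # the treatment is individual, not field-wide.
        if classify_plant(image) == "weed":
            sprayer.spray(plant_id)

process_row([b"\x01...", b"\x02..."], Sprayer())
```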

Computer vision is going to pop up everywhere, in part because as humans we are incredibly visual. If dogs were building the internet of things, I bet they’d build sensors that could detect the chemicals that comprise various scents and then translate that back into code a computer could read. While dogs would likely focus on pheromones, we focus on pixels.

And this is an important thing to remember: computers don’t see like we do. Every image is translated into pixels, each with data associated with it. The computer then applies math to figure out the distances between feature points and determines what it is seeing. Right now, a lot of the focus is on teaching computers to use video, which a computer reads as “flat.” While we can look at a video of an office and estimate the building’s depth, or at least infer that it has depth, a computer doesn’t necessarily do that. That’s why facial recognition using cameras can be spoofed by a photo or by makeup that disguises contours.

Computers need depth perception to see as well as humans. With self-driving cars, consumer products like the LightHouse personal assistant, some drones and even the anticipated 3-D sensor on the iPhone, computer vision with depth is hitting the mainstream. So I thought I’d show the picture above, which is a drone mapping out the world using double cameras in stereo, and explain the different ways we’re giving computers depth perception.

Old-school depth perception is basically a moving version of the Viewmaster. It requires two cameras set slightly apart, plus the processing power and algorithms to do the math that turns the two offset images into a sense of depth. When the result is rendered on a monitor, the edges of things are softer and less defined. In some use cases, especially as cameras decrease in cost and processing power requirements, this can suffice. For example, some drones could use this.
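To make the two-camera math concrete: the horizontal offset (the disparity) of a feature between the left and right images is inversely proportional to its distance, Z = f × B / d. Here’s a minimal sketch using OpenCV’s block matcher; the image files, focal length, and baseline are assumed example values, not parameters from any system mentioned here.

```python
# A minimal stereo-depth sketch using OpenCV's block matcher.
# File names, focal length, and baseline are assumed example values.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# For each patch in the left image, block matching finds the horizontal
# offset (disparity) of the best-matching patch in the right image.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Similar triangles give depth: Z = f * B / d, where f is the focal length
# in pixels, B the distance between the cameras, and d the disparity.
focal_px = 700.0     # assumed focal length, in pixels
baseline_m = 0.12    # assumed 12 cm separation between the cameras
depth_m = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)
```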

For everything else, there are 3-D depth sensors, which come in three types. A familiar one is the laser range finder, which shoots out calibrated laser beams and records what they bounce off of. It’s like sonar for light. This is the type of sensor found in LIDAR. Laser range finders are extremely accurate at most tasks, but they are also expensive and require moving parts.

The other two types use light. One, which generated the image above, is called a structured light camera. It works by sending out a known pattern of light, usually in infrared. The camera then “sees” by figuring out how the pattern was disrupted. The first well-known structured light 3-D sensor was probably the Microsoft Kinect, which launched in 2010. These sensors are cheaper, but they don’t work well outdoors, where bright sunlight washes out the infrared pattern.
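Conceptually, structured light is still triangulation: the projector plays the role of a second camera, and the shift between where a pattern element was emitted and where the camera observes it encodes depth through the same Z = f × B / d relationship. A toy one-dimensional illustration, with every number invented:

```python
# A toy illustration of the structured-light idea: project a known
# pattern, observe where it lands, and infer depth from the shift.
# Geometry is flattened to one dimension; all values are invented.
import numpy as np

focal_px = 700.0    # assumed focal length, in pixels
baseline_m = 0.08   # assumed projector-to-camera offset

projected_cols = np.array([100, 200, 300])   # where pattern stripes are emitted
observed_cols = np.array([110, 215, 325])    # where the camera sees them land

# As with stereo, the shift between emitted and observed positions
# encodes distance: Z = f * B / shift.
shift = observed_cols - projected_cols
depth_m = focal_px * baseline_m / shift
print(depth_m)  # farther surfaces produce smaller shifts, hence larger depths
```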

The other light-based sensor is the time-of-flight camera, which shoots out precisely timed bursts of light and then measures how long they take to come back. It calculates the differences between returning pulses to build up a sense of the shape of the object in front of it. Sensors like these are what might be used in the next-generation iPhone, because they work well in a variety of lighting situations but aren’t as expensive as a laser range finder.
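The underlying arithmetic is simple: the pulse travels out and back, so distance is the speed of light times the round-trip time, halved. A minimal example, with the pulse timing as an invented value:

```python
# A toy time-of-flight calculation: distance from a round-trip light pulse.
# The pulse timing below is an invented example value.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    # The pulse travels out and back, so halve the round trip.
    return C * round_trip_s / 2.0

# A ~6.67 nanosecond round trip corresponds to an object about 1 meter away.
print(tof_distance_m(6.67e-9))
```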

As computers gain depth perception, they can become more accurate at a variety of tasks, from robots that can better manipulate objects to perform complicated tasks, to high-quality biometric security systems.

And what is the IoT really, except the search for better data and ways to manipulate it?


Forget Elon Musk’s ban — let’s put our energy into building safe AI


Elon Musk recently commented on the need to regulate AI, citing it as an existential risk for humanity. As with any human creation, the increasing leverage technology affords humans can certainly be used for good or evil, but the premise that we need to fear AI and regulate it this early in its development is not well founded. The first question we might consider is whether what we fear is apathy or malevolence evolving in AI.

I bring this up because Musk himself has previously referred to the development of AI as “summoning the demon,” associating the imagery of evil with it. Any honest assessment of the history of mankind shows us that the most shockingly malevolent intent can arise from human hearts and minds.

See also: Elon Musk calls on government to begin regulating AI

History also shows, however, that technology overwhelmingly advances our shared human experience for good. From the printing press to the Internet, there have always been naysayers who evangelize fear of new technology. Yet when channeled by leaders for the collective good, these technologies, although disruptive to the known way of life, create a positive evolution in our human experience. AI is no different.

Technology is always neutral by itself

In the hands of responsible, moral leaders, the technology promises to augment human capacities in a manner that could unlock unimagined human potential. AI, like any technology, is neutral. The morality of the technology is a reflection of our collective morality, determined by how we choose to use it.

Imagine any one of history’s dictators with a large nuclear arsenal. If their vengeance weapons had been nuclear-tipped and able to reach all points of the earth, how would they have shaped the rest of history? Consider what Vlad the Impaler, Ivan the Terrible, or Genghis Khan would have done. Not only were these malevolent humans; they actually rose to be leaders and kings of men. Has technology already developed to a point where a madman can lay waste to the planet? With nuclear, biological, and chemical weapons, the answer is, sadly, yes. We already live with the existential risk that comes from our own malevolence and the multiplicative effect of technology. We don’t need AI for that.

Falling prey to fear at this stage will harm constructive AI development. It has been argued that technology drives history, and that if there is a human purpose, it is to be found in learning, evolving, progressing, and building: exercising our creative potential to free ourselves from the resource limitations that plague us and the scarcity that brings out the worst in us. In this way, artificial intelligence – technology that may mimic the most wondrous human quality, the quality of thought – can be a liberating force and our ultimate achievement. There is far more to gain from AI at this stage.

If that weren’t enough, take a minute to ponder the irreversibility of innovation. No meaningful technology has been developed and then put back in the bottle, so to speak. When the world was fragmented and disconnected, some knowledge was lost from time to time, but it was almost always rediscovered in a distant corner of the globe by some independent thinker with no connection to the original discovery. That is the nature of technology and knowledge: it yearns to be discovered. If we think that regulation and controls will prevent the development of artificial intelligence, we are mistaken. What they might do is prevent those who have good intentions from developing it. They will not stop the rest.

How would a ban work?

When contemplating bans, it is important to consider whether they can be enforced, and how all parties overtly impacted by the ban will actually behave. Game theory, a branch of mathematics concerned with decision-making under conflict and cooperation, poses a famous problem called the Prisoner’s Dilemma.

The dilemma goes something like this: two members of a gang, A and B, are arrested and locked up separately. If they both betray each other, each serves two years in prison. If A betrays B but B stays silent, A goes free and B serves three years (and vice versa). If both stay silent, they serve a year each. It would seem that the “honorable” thing to do is to stay silent and serve a year, so that the punishment is equal and minimal, but neither party can trust the other to take that honorable course. By betraying the other, a dishonorable actor stands to go scot-free; and if the betrayed party stayed silent, they would suffer the maximum damage of three years in prison. Therefore, the rational course of action available to both parties is to betray each other and “settle” for two years in prison, as the sketch below makes explicit.
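To make the reasoning concrete, here is a minimal sketch that enumerates the payoffs described above and checks that betrayal is the dominant strategy for each prisoner:

```python
# The Prisoner's Dilemma payoffs from the paragraph above, as years in
# prison (lower is better), keyed by (A's choice, B's choice).
YEARS = {
    ("silent", "silent"): (1, 1),
    ("silent", "betray"): (3, 0),
    ("betray", "silent"): (0, 3),
    ("betray", "betray"): (2, 2),
}

for b_choice in ("silent", "betray"):
    # Whatever B does, compare A's sentence for each of A's own choices.
    stay = YEARS[("silent", b_choice)][0]
    betray = YEARS[("betray", b_choice)][0]
    print(f"If B plays {b_choice}: A gets {stay}y silent, {betray}y betraying")

# The output shows betraying is strictly better for A in both cases
# (0 < 1 and 2 < 3), and the game is symmetric, so both rational
# prisoners betray and serve two years each.
```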

The author is a serial entrepreneur and inventor based in Austin, Texas. He is the Founder & CEO of SparkCognition, Inc., an award-winning machine learning/AI-driven cognitive analytics company; a member of the Board of Advisors for IBM Watson; a member of the Forbes Technology Council; and a member of the Board of Advisors for The University of Texas at Austin, Department of Computer Science.
