As the Internet Society celebrates 25 years of advocacy for an open, globally-connected, and secure Internet, we are honored to recognize some of the trailblazers who have fueled the Internet’s historic growth.
Representing 10 countries, the 14 individuals who comprise the 2017 inductee class are computer scientists, academics, inventors and authors who have advanced the Internet with key technical contributions, fostered its global reach and increased the general public’s understanding of how it works—in turn accelerating global accessibility and usage among us all.
Ultimately, the success of the Internet depends on the people behind it, and these inductees personify the pioneering spirit of the ‘Innovators’ and ‘Global Connectors’ who have been so instrumental in bringing us this unprecedented technology. They are some of the earliest Internet evangelists, and their work has been the foundation for so many of the digital innovations we see today and will see for generations to come.
Whether they were instrumental in the Internet’s early design, promoting its use, or expanding its global reach, we all benefit from their commitment and foresight.
Did someone feed the dog? This question gets asked in my home at least a dozen times a week. It’s enough of a pain point that we’ve tried a chart, the Amazon Alexa Dog Feeder skill, and a jerry-rigged motion sensor combined with text alerts from IFTTT to track whether the dog has indeed been fed. But none of these options are perfect.
Enter YaDoggie, which aims to build a subscription pet food business around a connected dog food scoop. Founder Sol Lipman had the same issue my family did, and to solve it he envisioned a Bluetooth food scoop that would track if the dog has been fed and how much. But YaDoggie isn’t just a pet food startup. It’s an example of how the internet of things can change how we determine what to pay for goods.
But building such a device is expensive (the company estimates it will sell for $40 to $50 later this year), and Lipman decided it made more sense to solve the problem of knowing whether the dog has been fed by providing dog food as a service. With a connected scoop, he’d know when and how much a dog is fed. From there, he could extrapolate when to send more food. So he got into making dog food.
Thus YaDoggie went from a hardware product to dog food delivered as a service. As far as dog food goes, I spend about $2.83 per pound buying a 12-pound bag of a pricey premium brand for my 18-pound mutt. The cost of the YaDoggie food is $3.29 per pound for a 14-pound bag, but that also includes delivery and free poop-scooping bags.
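For readers who want to sanity-check that comparison, here is a minimal sketch of the arithmetic. The per-pound prices and bag sizes are the ones quoted above; everything else is just multiplication.

```python
# Compare the total and per-pound cost of the two bags quoted above:
# a 12-lb premium bag at $2.83/lb vs. a 14-lb YaDoggie bag at $3.29/lb
# (the YaDoggie price includes delivery and poop bags).

def bag_cost(price_per_lb: float, bag_lbs: float) -> float:
    """Total cost of one bag of food."""
    return price_per_lb * bag_lbs

premium = bag_cost(2.83, 12)    # 12-lb premium brand bag
yadoggie = bag_cost(3.29, 14)   # 14-lb YaDoggie bag, delivery included

print(f"Premium bag:  ${premium:.2f}")                    # $33.96
print(f"YaDoggie bag: ${yadoggie:.2f}")                   # $46.06
print(f"Service premium per pound: ${3.29 - 2.83:.2f}")   # $0.46
```

At $0.46 extra per pound, the question in the next paragraph is whether that surcharge is worth the convenience.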
The quality of the food is similar based on the ingredient lists. What I’d be paying extra for is freedom from thinking about feeding my dog and buying food. Is that tradeoff worth it? The closest I could come without YaDoggie would be a Subscribe & Save option from Amazon plus a Dash button dedicated to my dog food (or the Dash wand). But then I lose the automation of the food being ordered on its own and the benefit of knowing whether anyone fed the dog.
As a consumer, what I’m noticing as products become more connected is that weighing the costs and value of services, or of products disguised as services, is becoming more complicated. For another example of this, check out the comments in our story this week about Chamberlain charging for its IFTTT integration.
There’s also an emerging theme around the value of connected products and what it costs to make them. Look at the backlash from Tesla’s seemingly helpful decision to add battery range to cars ahead of Hurricane Irma. Customers who purchased a car that was able to travel roughly 210 miles on a single charge got an over-the-air update that boosted their capacity to 249 miles on a single charge.
That boost was something buyers of the higher-cost cars had paid $8,500 more per vehicle for. But it was simply a software-based distinction. Consumers looked at that and cried foul, because the premium Tesla charged for the longer-range car didn’t reflect a physical limitation driven by cost, but an economic limitation imposed with a few lines of code.
The very value of the things we buy is shifting from something linked to the cost of making a good or delivering a service to the perceived value that good or service provides.
Before, you could look at a dog food and say that it’s worth a bit more because the quality of ingredients is higher. Now you can also weigh the value of the convenience of stocking the food, or even of scooping it out every day for your pet.
Whether it’s dog food or a Tesla, in a connected, service-based world consumers will see a pricing revolution.
In both the US and Europe, there’s still a distinct lack of clarity around who owns industrial IoT data – customer or vendor? – as well as what that data might be worth, as David Meyer reports.
Companies using IoT technologies are sometimes unaware of the value of the data those systems create. As a result, they’re inadvertently signing lopsided agreements with system vendors.
That’s the view of Giulio Coraggio, a partner at law firm DLA Piper in Milan, who warns of a current lack of clarity when it comes to the question: Who owns industrial IoT data – the vendor or its customer?
That uncertainty is a problem, says Coraggio, since this data can have a transformative impact on companies, giving them better insights into the functioning of their products and services, and importantly, providing them with a potentially saleable asset.
“The issue is whether the company receiving the service has any intellectual property right on such data,” he tells Internet of Business. “The question has no straightforward answer, since it depends on the type of IoT technology and the type of data. There might be either a copyright or a database sui generis right, or no intellectual property right on IoT data.”
Data itself is not generally protected by intellectual property rights in either the US or the EU. However, the same doesn’t necessarily apply to the databases in which the data is held.
In Europe, there has since 1996 been a database right under EU law, with three major conditions: the data must be collected and arranged in a systematic way; the holder of the rights must have made a “substantial investment in either the obtaining, verification or presentation of the contents”; and they must have a strong connection with a state in the European Economic Area.
(There is also, potentially, a copyright for databases, but there must be a creative element to them that is unlikely to apply in most industrial IoT scenarios.)
There is currently no straightforward database right in the US, where data is generally assumed to belong to whoever has title to the device that collected it. However, in both legal landscapes, it’s sometimes unclear who owns the device, or who made that “substantial investment”. Is it the enterprise customer, or the vendor that’s gone and deployed its solution across the customer’s facilities?
In the absence of clarity, control over the data will by default go to whoever controls the data collection system. In many cases, that’s the vendor, even though both vendors and their customers have a lot to gain through the canny exploitation of the data that is generated through their arrangement.
“At the moment, IoT data risks remaining ‘locked’ into suppliers’ technologies which have the sole ability to control it,” says Coraggio. “But there is an increasing awareness of the potentials and value of IoT data by entities using internet of things technologies.”
“Therefore, I expect interesting negotiations between suppliers and customers, which will have to decide whether it is more valuable for them to either keep full control of their data, or get a better price because of the supplier’s ability to use that data for future projects or improvements of their products and services,” he says.
Are these negotiations commonplace yet, though? According to Coraggio, they are “starting to arise”, but many companies are not yet fully educated about the value of their data. “Therefore, there are opportunities for suppliers in this sense,” he says.
Coraggio noted that the European Commission is currently contemplating the introduction of new ad hoc database rights, which could help make the question of IoT data ownership somewhat clearer and eliminate what he calls the risk of a “short blanket” regulatory regime, where some data may not have clear intellectual property protections.
The Commission launched a review of the EU database directive earlier this year, and its consultation closed on Wednesday (30 August). When the directive was last evaluated back in 2005, there was no IoT, and the Commission’s next steps, if any, remain to be seen.
However, Coraggio warns that any move by the Commission to introduce new rights over IoT data could have negative repercussions if it is not carefully calibrated.
“The challenge is that if new ownership rights are created to control data, this might increase the number of entities whose permission is required to exploit data, representing more a restriction than an incentive to exploit data,” Coraggio says.
“A most prudent approach is to contractually regulate the matter in the relevant contracts, leaving the issue to a commercial negotiation which might become very complex once entities become aware of the value of their data to their suppliers.
“A potential compromise that is also being reviewed by the European Commission is to ban unfair clauses also from B2B contracts, but this might represent a disincentive for non-European companies providing IoT technologies. Therefore, I believe that it is necessary to see the dynamics of the market before taking any regulatory initiative.”
Whatever the future might hold, it’s clear that certainty is best established through the careful drafting of contracts, with both vendors and their customers going into negotiations with a clear idea of the value all that data might hold for them – not just in the present, but down the line, when there is a risk that serious disagreements could arise.
AT&T, China Mobile, China Unicom, China Telecom, Deutsche Telekom, Verizon and Vodafone Launch Mobile IoT (LPWA) Networks.
The GSMA today announced that its Mobile IoT Initiative¹ has taken off with the launch of multiple commercial rollouts of Low Power, Wide Area (LPWA) solutions by several of the world’s leading mobile operators including AT&T, China Mobile, China Unicom, China Telecom, Deutsche Telekom (DT), Verizon and Vodafone.
China Mobile and China Unicom have launched NB-IoT across several key cities with China Telecom launching NB-IoT networks across the country. Vodafone has also launched NB-IoT in Spain and the Netherlands. DT has launched in several cities in Germany and nationwide in the Netherlands. AT&T and Verizon have previously announced nationwide launches of LTE-M technology. In addition to these deployments, the GSMA also announced that its Mobile IoT Innovators programme, which is designed to encourage the development of new LPWA solutions, has reached over 500 members, underscoring the growth of the wider IoT ecosystem.
Alex Sinclair, Chief Technology Officer, GSMA, commented:
“The Mobile IoT initiative encouraged the market to adopt licensed LPWA networks and we are now seeing this work come to fruition with multiple commercial deployments around the world, as well as the availability of hundreds of different applications and solutions.”
“It is clear that the market sees the benefit of adopting solutions that offer flexibility, security, lower costs, and cover all use cases, and we look forward to seeing other operators follow in the near future.”
Mobile operators are enhancing their licensed cellular networks with NB-IoT and LTE-M technologies, which utilise globally agreed 3GPP standards to scale the Internet of Things. These new Mobile IoT networks are designed to support mass-market IoT applications such as smart meters, environmental sensors and consumer electronics, that are low cost, use low data rates, require long battery lives and often operate in remote locations. Both technologies will be further evolved in 3GPP’s Release 15.
China at Forefront in Development of LPWA Market
According to analyst house Gartner, China is set to be one of the leading LPWA markets, accounting for 486 million of the estimated 3.1 billion connections globally by 2025. It is also at the forefront of the global development of Mobile IoT in terms of both network launches and a record number of ecosystem developer partners. China Mobile has launched NB-IoT networks in several key cities including Yingtan, and China Unicom has rolled out NB-IoT networks in Shanghai, as well as the main urban areas of Guangzhou, Shenzhen and Fuzhou, supporting a number of different solutions across smart parking, smart fire sensors and smart meters. China Telecom has announced the roll-out of nationwide NB-IoT networks.
China is also leading in the development of new innovative solutions based on Mobile IoT technology as a part of the GSMA’s Mobile IoT Innovator community. Of the 546 global companies currently developing new solutions based on Mobile IoT technology, over 215 are from China. Solutions include smart parking, pet tracking, asset tracking and smart agriculture amongst many others.
¹ The GSMA Mobile IoT Initiative: The GSMA’s Mobile IoT Initiative is helping the industry deliver commercial LPWA solutions in licensed spectrum. It is currently backed by 74 global mobile operators, device makers and chipset, module and infrastructure companies worldwide. In the space of a year it has helped to establish market standards for LPWA, published by 3GPP, that will play a fundamental role in the growth, development and adoption of the technology, as well as securely and cost-effectively connecting the billions of new devices making up the IoT. LPWA networks will be used for a wide variety of applications such as industrial asset tracking, safety monitoring, water and gas metering, smart grids, city parking, vending machines and city lighting.
With all the attention Artificial Intelligence (AI) attracts these days, a backlash is inevitable – and could even be constructive. Any technology advancing at a fast pace and with such breathless enthusiasm could use a reality check. But for a corrective to be useful, it must be fair and accurate.
The industry has been hit with a wave of AI hype remediation in recent weeks. Opinions are surfacing that label recent AI examples so mundane that they render the term AI practically “meaningless,” while others claim AI is an “empty buzzword.” Some have even gone so far as to label AI with that most damning of tags – “fake news.”
Part of the problem with these opinions is the set of expectations around what counts as “AI.” The question of how best to define AI has always existed, but skeptics argue that overly broad definitions, and too-willing corporate claims of AI adoption, characterize AI as something we do not actually have. We have yet to see self-aware machines like 2001’s HAL or Star Wars’ R2-D2, but demanding them as the bar for “real” AI is simply over-reach.
Today’s AI programs may be ‘mere’ computer programs – lacking sentience, volition, and self-awareness – but that does not negate their ability to serve as intelligent assistants for humans.
The highest aspirations for AI – that it should reveal and exploit, or even transcend, deep understandings of how the mind works – are undoubtedly what ignited our initial excitement in the field. We should not lose sight of that goal. But existing AI programs that serve more modest human ends provide great utility and also bring us closer to that goal.
For instance, many seemingly mundane human activities look simple but aren’t straightforward at all. A Google system that ferrets out toxic online comments, a Netflix video optimizer based on feedback gathered from viewers, and a Facebook effort to detect suicidal thoughts posted to its platform may all seem like simple human tasks.
Critics may disparage these examples as activities which are performed by non-cognitive machines, but they nonetheless represent technically interesting solutions that leverage computer processing and massive amounts of data to solve real and interesting human problems. Identify and help a potential suicide victim just by scanning their online posts. What could be more laudable – and what might have seemed more unlikely to be achieved via any mere “computation?”
Consider one of the simplest approaches to machine learning applied to today’s easily relatable problem of movie recommendations. The algorithm works by recommending movies to someone that other similar people – their nearest neighbors – also enjoyed.
No real mystery
Is it mysterious? Not particularly.
It’s conceptually a simple algorithm, but it often works. And by the way, it’s actually not so simple to understand when it works and when it doesn’t, and why, or how to make it work well. You could make the model underlying it more complex or feed it more data – for example, all of Netflix’s subscribers’ viewing habits – but in the end, it’s understandable. It’s distinctly not a ‘black box’ that learns in ways we can’t comprehend. And that’s a good thing. We should want to have some idea how AI works, how it attains and uses its ‘expert’ knowledge.
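To make the nearest-neighbor idea above concrete, here is a minimal sketch of a user-based recommender. All of the ratings and user names are invented for illustration; a real system like Netflix’s would use far richer similarity measures and vastly more data, but the core logic is the same: find the most similar user, then suggest what they liked.

```python
# Toy user-based nearest-neighbor recommender: find the user most similar
# to you, then recommend movies that user liked and you haven't seen.
# Ratings are 1 (liked) or 0 (disliked); all data here is invented.

def similarity(a: dict, b: dict) -> int:
    """A toy similarity score: how many movies two users rated the same way."""
    return sum(1 for movie in a if movie in b and a[movie] == b[movie])

def recommend(target: dict, others: dict) -> list:
    """Recommend unseen movies liked by the target's nearest neighbor."""
    neighbor = max(others, key=lambda user: similarity(target, others[user]))
    return [movie for movie, liked in others[neighbor].items()
            if liked == 1 and movie not in target]

ratings = {
    "ann":  {"Alien": 1, "Heat": 0},
    "bob":  {"Alien": 1, "Up": 1, "Drive": 1},
    "carl": {"Heat": 1, "Drive": 0},
}
me = {"Alien": 1, "Up": 1}      # movies I liked

print(recommend(me, ratings))   # nearest neighbor is "bob" -> ['Drive']
```

Even this toy version surfaces the subtleties mentioned above: ties between neighbors, users with too few shared ratings, and the choice of similarity function all change the output, which is exactly why making it “work well” is harder than the algorithm looks.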
To further illustrate, envision that interesting moment in therapy when a patient realizes his doctor looks bored – the doctor has heard this story a hundred times before. In the context of AI, it illuminates an important truth: it’s a good thing when an expert – in this case, our hypothetical therapist – has seen something before and knows what to do with it. That’s what makes the doctor an expert. What the expert does is not mundane, and neither is replicating that type of expertise in a machine via software.
Which leads to another problem hiding in these recent critiques: that once we understand how something works – regardless of how big a challenge it initially presented – its mystique is lost. A previously exciting thing – a complex computer program doing something that previously only a person exercising intelligence could do – suddenly seems a lot less interesting.
But is it really? One looks at AI and realizes it turns out to be just programs – but of course it is just “programs.” That’s the whole point of AI.
To be disappointed that an AI program is not more complicated, or that its results aren’t more elaborate – even cosmic – is to misstate the problem that AI is trying to address in the first place. It also threatens to derail the real progress that continues to accumulate and may eventually enable machines to possess the very things that those criticizing real-world AI as too simplistic pine for: volition, self-awareness, and cognition.
Take genetics, for example. The field didn’t start with a full understanding or even theory of DNA, but rather with a humbler question: why are some eyes blue and some eyes brown? The answer to that question required knowledge of, and step-by-step advancements in, biology, chemistry, microscopy, and a multitude of other disciplines. The notion that the science of genetics should have started with its endgame of sequencing the human genome – or, in our case, that AI must begin by working on its endgame of computer sentience – is as overly romantic as it is misguided.
In the end, all scientific endeavors, including AI, make big leaps by working on more basic – and perhaps, only in hindsight, easier – problems. We don’t solve the ultimate challenges by jumping right to working on them. The steps along the way are just as important – and often yield incredibly useful results of their own. That’s where AI stands right now. Solving seemingly simple yet fundamental challenges – and making real progress in the process.
There’s no need to debunk or apologize for it. This work is required to advance the field and move closer to the more fanciful AI end-goal – making computers act like they do in the movies – toward which our AI critics, and indeed all of us in the field, ultimately strive.
Larry Birnbaum, Co-founder and Chief Scientific Advisor, Narrative Science
Larry Birnbaum is a co-founder of Narrative Science and the company’s Chief Scientific Advisor, where he focuses on next-generation architecture, advanced applications, and IP. In addition, Larry is Professor of Computer Science and of Journalism at Northwestern University, where he also serves as the Head of the Computer Science Division/EECS Department. He received his BS and Ph.D. from Yale.