The future of IoT spells blockchain and AI – the reality of blockchain today mainly spells cryptocurrency and hype

An article by Nicolas Windpassinger**, Global Vice President of Schneider Electric’s EcoXpert Partner Program and the author of the IoT book Digitize or Die*.

The future of the Internet of Things (IoT) is an integrated one on many levels. In particular, the mix of IoT, blockchain and cognitive computing will lead to a myriad of new outcomes.

In my digital transformation and IoT book ‘Digitize or Die’, blockchain technologies and artificial intelligence are mentioned several times. The standardization of the IoT, for instance, will create the conditions for more interoperability, connectivity, “digital trust” through blockchain technologies, and the distribution of artificial intelligence (AI) across heterogeneous components and systems.

Blockchain technology, in many cases combined with AI, doesn’t just promise to be the missing link enabling peer-to-peer contractual behavior without any third party to “certify” the IoT transaction. It also answers the challenges of scalability, single points of failure, time stamping, record keeping, privacy, trust and reliability in a very consistent way.
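To make the time-stamping and tamper-evidence part of that promise concrete, here is a minimal sketch, in Python, of the hash-chained ledger idea at the heart of blockchain. The device names and meter readings are hypothetical, and a real blockchain adds peer-to-peer replication, consensus and digital signatures on top of this basic structure.

```python
import hashlib
import json
import time

def make_block(records, prev_hash):
    """Bundle records with a timestamp and a link to the previous block."""
    block = {
        "timestamp": time.time(),
        "records": records,        # e.g. sensor readings or device transactions
        "prev_hash": prev_hash,    # hash of the preceding block
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != digest:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Two blocks of hypothetical smart-meter readings.
genesis = make_block([{"device": "meter-1", "kwh": 4.2}], prev_hash="0" * 64)
chain = [genesis, make_block([{"device": "meter-1", "kwh": 4.4}], genesis["hash"])]
print(verify(chain))  # True; altering any past reading makes this False
```

Because each block embeds the hash of its predecessor, changing any historical record invalidates every later hash, which is what gives IoT transactions recorded this way their verifiable history.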

How the mix of IoT, AI and blockchain is shaping the future of business

I am convinced that blockchain and AI will unlock the promises of IoT. In fact, we already see a lot of this happening in the EcoXpert environment of smart building and energy systems where convergence is the name of the game and existing silos are being replaced by new solutions. IoT ecosystems take center stage in these solutions and inevitably come with advanced data analysis, using AI, to meet changing customer expectations.

In a broader IoT perspective, blockchain is increasingly joining that mix of technologies as well. One example of where it fits, on top of the areas already mentioned, and where it is gaining attention, is the combination of IoT, blockchain and security.

Obviously, I don’t stand alone in seeing that convergence of IoT, blockchain and AI becoming ever more important and shaping the foundations of the future of business. According to research firm IDC, the future of Enterprise Resource Planning (ERP) software, which it calls Intelligent ERP, even largely consists of a mix of IoT, blockchain and cognitive computing (among other factors). IDC predicts that by 2021 a fifth of the largest manufacturers will depend on a secure backbone of embedded intelligence, using IoT, blockchain and cognitive technologies, to automate large-scale processes and speed execution times by up to 25%.

Blockchain and cryptocurrencies: diverging views on the enabling technology of Bitcoin versus cryptocurrencies at the World Economic Forum 2018

Many people confuse blockchain with Bitcoin and other cryptocurrencies. While blockchain, as an enabler, is indeed the technological foundation of these cryptocurrencies, its application areas stretch far beyond them. This does not mean that cryptocurrencies have no future. However, that future is not easy to predict and opinions vary widely.

At the World Economic Forum 2018, cryptocurrencies were a hotly discussed topic. In the wake of high fluctuations across virtually all cryptocurrencies, opinions on Bitcoin and other cryptocurrencies ranged from deep skepticism, calls for regulation and statements that it is really the blockchain technology that matters, to concerns about illicit activity in the crypto world. Yet there were also forecasts that, after the current state of concern, volatility and, indeed, hype, cryptocurrencies would enter a far more stable stage and be here to stay.

The craze and hype around cryptocurrencies and ICOs (Initial Coin Offerings) is indeed not helping. January 2018 probably broke all records for the volume of opinions and news defending cryptocurrencies, skeptical and fear-mongering commentary, and launches of new initiatives.

The cryptocraze: KODAKCoin and how to boost your stock price

Image: Kodak, the bitcoin effect
One of the most covered and surprising moves was that of Kodak. In my book I describe how Kodak missed the boat of digitalization, while its main contender, Japan’s Fujifilm, entered Kodak’s home market and rapidly grabbed an increasing share of the market for digital cameras, which were invented at Kodak.

Yet, Fujifilm ended up making the right decisions, while Kodak wasn’t ready to embrace digital photography and, soon after, the smartphone with its built-in camera; two deadly technological disruptions that the two companies handled in entirely different ways.

Kodak is now also – indirectly – entering the crypto-world. With an ICO on January 31st, a company that licensed the Kodak brand is kicking off a blockchain platform for digital rights management for photographers under the Kodak brand (KODAKOne), along with a cryptocurrency called KODAKCoin. Moreover, a second brand license partner is launching a Kodak-branded bitcoin miner, the Kodak KashMiner, with a flawed ROI model, since Bitcoin mining returns are tightly linked to time and computing power.

While the initiative cannot be compared with that of ‘open cryptocurrencies’ and there is a lot more to it, as mentioned in my blog post on the Kodak initiative, the reaction to the news was a clear indication of the current cryptocraze.

After the announcement, at the Consumer Electronics Show 2018, the stock price of Kodak skyrocketed, and it has remained relatively steady since, with occasional spikes. Whether this is a deliberate strategy of Kodak or a side effect of one of its several brand licensing partnerships is, for now, hard to tell. Whether the initiative(s) will work is even harder to tell.
Chart: Kodak stock price
What is for sure, though, is that it seems to suffice to announce a cryptocurrency and an ICO to see your stock price go up in a way that clearly points to overvaluation. Creating a cryptocurrency is not that difficult; just have a look at the list of all the recent ICOs: https://coinmarketcap.com/all/views/all/.

The difficulty is to create an ecosystem that drives value into your currency, making it unique and trustworthy. A cryptocurrency is only worth the trust people put into it. And as far as IoT is concerned, the technology enabling it all, blockchain, looks far more promising.
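To underline how little code a bare-bones ‘coin’ actually requires, here is a toy sketch in Python, with hypothetical account names: a centralized ledger with minting and transfers. Everything that makes a real cryptocurrency valuable – decentralized consensus, cryptographic signatures and, above all, the trust and ecosystem described above – is precisely what this sketch lacks.

```python
class ToyCoin:
    """A toy, fully centralized token ledger (hypothetical names throughout)."""

    def __init__(self):
        self.balances = {}

    def mint(self, account, amount):
        """Create tokens out of thin air; real coins gate this via mining or staking."""
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, receiver, amount):
        """Move tokens between accounts, refusing overdrafts."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = ToyCoin()
ledger.mint("alice", 100)
ledger.transfer("alice", "bob", 40)
print(ledger.balances)  # {'alice': 60, 'bob': 40}
```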

* About the Book
IoT book: Digitize or Die by Nicolas Windpassinger
Understand, master, and survive the Internet of Things with one simple and pragmatic methodology broken down into four steps. Digitize or Die is used by front-line business decision makers to digitize their strategy, portfolio, business model, and organization. This book describes what the IoT is, its impacts and consequences, as well as how to leverage the digital transformation to your benefit.


Inside these pages, you will learn:

  • What the IoT means to all businesses
  • Why the IoT and the digital revolution are a threat to your business model and survival
  • What you need to understand to better grasp the problem
  • The four steps your company needs to follow to transform its operations to survive

** About the Author
With 15+ years of computer networking industry experience, Nicolas Windpassinger is the Global Vice President of Schneider Electric’s EcoXpert™ Partner Program, whose mission is to connect the technologies and expertise of the world’s leading technology providers, pioneer the future of intelligent buildings and the Internet of Things, and deliver smarter, integrated and more efficient services and solutions to customers.
As a result of his work, Schneider Electric’s EcoXpert™ Partner Program has been granted a 5-Star rating in the 2017 Partner Program Guide by CRN®, which is part of The Channel Company group. The 5-Star Partner Program Guide rating recognizes an elite subset of companies that offer solution providers the best partnering elements in their channel programs.
Nicolas has been recognized in The Channel Company’s Top Midmarket IT Executives list. This annual list honors influential vendor and solution provider executives who have demonstrated an exceptionally strong commitment to the midmarket. The Channel Company has also recognized Nicolas as one of the 100 People You Don’t Know But Should in the IT channel for 2017.


Are New Advances in AI Worth the Hype?

Almost daily, we’re hit with another breathless news report of the (potential) glories of artificial intelligence in business. Rather than excitement, the fervor can instead kindle a Scrooge-like attitude, tempting executives to grumble “bah humbug” and move on to the next news item.

Why exactly is the “bah humbug” temptation so strong? Perhaps because…

  1. News reports naturally gravitate toward sensational examples of AI. We collectively seem to like it when science fiction becomes science fact. But it may not be clear how humanoids and self-driving cars are actually relevant to most businesses.
  2. Reports tilt toward stories of extreme cases of success. Those managers who have found some aspects of AI that are relevant to their business may be frustrated with the differences between their experiences and the (purported) experiences of others. They may feel that AI is immature and the equivalent of a young child, far from ready for the workplace.

As a result, managers may perceive AI as yet another in a long list of touted technologies that are more fervor than substance. Certainly, the information technology sector is far from immune to getting intoxicated with promising “new” technologies. Still under the influence of the intensity from prior technological shifts (digitization, analytics, big data, e-commerce, etc.), managers may struggle to determine what exactly is new about AI that may be relevant now. After all, AI has been around for decades and is not, actually, new.

Why the attention to AI now? Is there anything new in AI worthy of the hype? Is this vintage of AI just “old wine in new bottles”?

When the web began to garner interest, it was hard to argue that distributed computing was new. We started with centralized processing with mainframes and terminals rendering output and collecting input. Yes, the web promised prettier output than green characters on cathode ray screens, but did the newfangled web differ from prior distributed computing other than cosmetically? In retrospect, it seems silly to ask; it would be hard to argue that the internet didn’t fundamentally change business. Something was new about the web, even if it didn’t look different at first.

More recently, analytics has also seen its fair share of hype. However, statistical analysis, optimization, regression, machine learning, etc., all existed long before attention coalesced around the term “analytics.” Airlines in particular have long used data for revenue management. Yet something was also new about the potential for analytics, starting about a decade ago, that is now affecting businesses everywhere.

Underappreciating the differences between the old periods and new in each of these examples would have been a mistake. Managers who had unfavorable responses to either of these are probably no longer managers. What is different about AI now?

Unlike in earlier incarnations, we now have access to the processing power these AI developments require. What could once be done in theory can now be done in practice. Furthermore, the required processing power is affordable to most organizations. The leap from fervor to value to the business can happen — with investment, experimentation, and tolerance for failure.

Prior vintages of AI also emphasized rules, such as expert systems, formulated to automate reasoning. Now AI relies more often on data-based approaches rather than rule-based approaches. And the analytics boom has stocked the cellars full of the data that AI requires.
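A toy contrast may help illustrate that shift; the messages and labels below are fabricated, and the sketch assumes scikit-learn is available. The first classifier encodes a hand-written rule in the expert-system spirit, while the second learns a similar decision from labeled data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Rule-based, expert-system style: a human writes the knowledge down.
def spam_rule(message):
    return "free money" in message.lower()

# Data-based: a similar decision is learned from labeled examples.
messages = ["free money now", "meeting at noon", "claim free money", "lunch today?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (fabricated toy data)

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)   # bag-of-words counts
model = MultinomialNB().fit(features, labels)

print(spam_rule("free money offer"))                              # True
print(model.predict(vectorizer.transform(["free money offer"])))  # [1]
```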

Additionally, business ecosystems are now more digital. Previously, an AI system would certainly have had to connect with humans, who would then connect with other humans in other organizations. But now with increasing digitization, links between and within organizations are increasingly digitally native, allowing AI systems to speak directly with digital systems. Network effects will help accelerate AI adoption as the increasing number of potential machine-to-machine interfaces offer opportunities for value creation (and destruction).

While there is AI hype, there is also substance behind recent interest in AI. Both can be true simultaneously. To navigate the drunken AI mob, managers need to embrace a bicameral mindset that allows them to listen to two voices simultaneously.

One voice speaks to the potential that AI now offers. By listening to this voice, managers can investigate promising technologies so that they and their organization do not miss out. They may pursue pilot projects to gain organizational experience, even if the results are not revolutionary. They can test AI approaches along with traditional ones to find their strengths and weaknesses for a particular organization.

The other voice says to question the promises that seem too good to be true, to taste first before swilling. In this way, managers can avoid regrets from overindulging in AI investments and bypass the repercussions from AI investments gone wrong. This voice cautions against binging on AI at the expense of other investments.

Is there anything new in AI worthy of the hype? To find out, managers will have to listen to both voices.



U.S. Cellular Operators’ Success in Key M2M Markets Could Be Jeopardized by Distraction and IoT Hype

ABI Research identifies the real challenges and opportunities for U.S. network operators.

A new report by ABI Research forecasts that the U.S. cellular M2M market will grow beyond 300 million connections by 2022.

As U.S. network operators rush to deploy the latest LTE technologies, new competition is emerging. In this report, ABI Research identifies the key strategies and challenges facing the top four mobile operators: Verizon, AT&T, Sprint, and T-Mobile.

Total M2M connections at the end of 2016 were as follows:

  • Verizon: 23.85 million
  • AT&T: 30.32 million
  • Sprint: 14.90 million
  • T-Mobile: 13.58 million

The latest distractions facing the U.S. cellular M2M market include disruptive new LPWA technologies that are being positioned as competitive to cellular, increased interest in private network opportunities, and the ongoing debate on the merits of licensed and unlicensed spectrum. “Carrier grade support and service is essential for mission-critical enterprise applications,” says Kevin McDermott, Principal Analyst at ABI Research.

“Some of the biggest opportunities and drivers for cellular operators are in telematics and asset tracking; coverage and low latency are the essential requirements for these fast-growing segments.”

In fact, U.S. market adoption of connected car technologies is the main driver for the new applications that will benefit U.S. cellular operators, provided they don’t get distracted by other technologies. Of the 82.65 million cellular-connected M2M devices at the end of 2016, 68.5% were related to telematics and other transportation applications.

LTE, which includes LTE Cat-M and Narrowband IoT (NB-IoT), is expected to become the largest network standard for the IoT in the United States, offering a range of options for data rates, range, and node power efficiency.

Competitive announcements from Comcast supporting long-range wireless (LoRa) are a first for a mainstream provider in the U.S. market and a huge distraction for the top four U.S. mobile operators. “Comcast is building on the back of its two main network assets: backhaul infrastructure and successful Wi-Fi hotspot deployments. Comcast has reported its hotspot program has surpassed 16 million, and they are actively developing an M2M strategy,” McDermott points out.

These findings are from ABI Research’s U.S. Network Operator M2M Market Analysis report.


Don’t believe the hype: Why the IoT is stalling

If hype cycles are anything to go by, we’re almost certainly teetering on the brink of the trough of disillusionment with the IoT. Recent research claims that 60% of test deployments have failed. But while some pilot projects may have faltered, there isn’t yet the weight of devices in deployment to suggest we’ve peaked. Rather, adoption itself seems to be stalling.

IoT adoption hasn’t been the runaway success envisaged, and this is down to a number of issues. The first is the IoT-enablement of devices that don’t necessarily warrant the additional connectivity. Take the IoT fish tank that regulated feeding, water temperature and quality, says Ken Munro, partner at Pen Test Partners.

That may sound like a labour-saving device, but it ultimately saw a casino where the tank was installed lose 10GB of data to a device in Finland. IoT security doesn’t stop at the device. What this illustrates is that such devices are being used to create exploitable backdoors onto networks, allowing the theft of credentials and the exfiltration of data.

Longevity concerns are also hampering adoption: devices are often swapped out rather than updated, causing users to question the viability of the investment. Then there’s the question of whether the manufacturer has the resources to fix any issues that come to light.

Security vulnerabilities emerge over time, and if your devices then need to be patched, will the manufacturer invest the time needed to oversee this, deny culpability, or even withdraw support?

Divulged details

Protecting the existing installed base is a real headache for manufacturers. If you peruse the Shodan website you’ll see hosts of deployed IoT equipment (and, worryingly, even Industrial Control Systems) listed with the IP address, the operating system and the version of software in use. And the FCC helpfully publishes the schematics of soon-to-be-released IoT devices, complete with circuit diagrams, for those wanting some up-close detail.

In many cases, an attacker can identify devices from Shodan and simply obtain the default credentials to gain access. We once found a handy ‘super password’ list of daily log-on credentials for the entire year published on a social media site by a technician. Clearly, supply chain security tends to be lax, and this case illustrates just how easy it is to get hold of ‘confidential’ data.

Struggling in the face of these odds, manufacturers are now looking to sell on data to third parties. That may make sense commercially, but it further compromises the security and privacy of the user base. It could see the floor plans of your office sold on, for instance, if you use an IoT vacuum cleaner. And what if that information makes its way onto the black market? Building Management System (BMS) data could be particularly useful. Think of the extortion that would be possible if an attacker were to put ransomware on smart thermostats.

Solving the problem

The cynical among you will be questioning whether the end user shouldn’t also shoulder some of the responsibility, and it’s true that few reconfigure devices upon set-up. That’s […]


Understanding the hype vs. reality around artificial intelligence


With all the attention Artificial Intelligence (AI) attracts these days, a backlash is inevitable – and could even be constructive. Any technology advancing at a fast pace and with such breathless enthusiasm could use a reality check. But for a corrective to be useful, it must be fair and accurate.

The industry has been hit with a wave of AI hype remediation in recent weeks. Opinions are surfacing that label recent AI examples so mundane that they render the term AI practically “meaningless,” while others claim AI to be an “empty buzzword.” Some have even gone so far as to label AI with that most damning of tags: “fake news.”


Part of the problem with these opinions is the expectations around what is meant by “AI.” The problem of how best to define AI has always existed, and skeptics argue that overly broad definitions and too-willing corporate claims of AI adoption characterize AI as something we do not yet have. We have yet to see self-aware machines like 2001’s HAL or Star Wars’ R2-D2, but holding AI to that standard is simply over-reach.

Today’s AI programs may be just ‘mere’ computer programs, lacking sentience, volition, and self-awareness, but that does not negate their ability to serve as intelligent assistants for humans.

The highest aspirations for AI – that it should reveal and exploit, or even transcend, deep understandings of how the mind works – are undoubtedly what ignited our initial excitement in the field. We should not lose sight of that goal. But existing AI programs that serve more modest ends provide great utility as well and bring us closer to that goal.

For instance, many seemingly mundane human activities look simple but aren’t straightforward at all. A Google system that ferrets out toxic online comments, a Netflix video optimizer based on feedback gathered from viewers, and a Facebook effort to detect suicidal thoughts posted to its platform may all seem like simple human tasks.

Critics may disparage these examples as activities performed by non-cognitive machines, but they nonetheless represent technically interesting solutions that leverage computer processing and massive amounts of data to solve real and interesting human problems. Identifying and helping a potential suicide victim just by scanning their online posts: what could be more laudable, and what might have seemed more unlikely to be achieved via any mere “computation”?

Consider one of the simplest approaches to machine learning applied to today’s easily relatable problem of movie recommendations. The algorithm works by recommending movies to someone that other similar people – their nearest neighbors – also enjoyed.

No real mystery

Is it mysterious? Not particularly.

It’s conceptually a simple algorithm, but it often works. And by the way, it’s actually not so simple to understand when it works and when it doesn’t, and why, or how to make it work well. You could make the model underlying it more complex or feed it more data – for example, all of Netflix’s subscribers’ viewing habits – but in the end, it’s understandable. It’s distinctly not a ‘black box’ that learns in ways we can’t comprehend. And that’s a good thing. We should want to have some idea how AI works, how it attains and uses its ‘expert’ knowledge.
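As a concrete illustration, here is a minimal sketch of such a neighbor-based recommender, using a fabricated four-user, four-movie rating matrix; a production system like Netflix’s is vastly larger and more refined, but the underlying idea is the same.

```python
import numpy as np

# Fabricated user-by-movie rating matrix (0 = unrated); rows are users.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def recommend(user, k=2):
    """Score the user's unrated movies from their k most similar users."""
    norms = np.linalg.norm(ratings, axis=1)
    sims = ratings @ ratings[user] / (norms * norms[user] + 1e-9)  # cosine similarity
    sims[user] = -np.inf                       # exclude the user themself
    neighbors = np.argsort(sims)[-k:]          # the k nearest neighbors
    scores = ratings[neighbors].mean(axis=0)   # average what the neighbors rated
    scores[ratings[user] > 0] = -np.inf        # only suggest unseen movies
    return int(np.argmax(scores))

print(recommend(0))  # 2: the movie this user hasn't rated but similar users liked
```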

To further illustrate, envision that interesting moment in therapy when a patient realizes his doctor looks bored – the doctor has heard this story a hundred times before. In the context of AI, it illuminates an important truth: it’s a good thing when an expert – in this case, our hypothetical therapist – has seen something before and knows what to do with it. That’s what makes the doctor an expert. What the expert does is not mundane, and neither is replicating that type of expertise in a machine via software.

Which leads to another problem hiding in these recent critiques: that once we understand how something works – regardless of how big a challenge it initially presented – its mystique is lost. A previously exciting thing – a complex computer program doing something that previously only a person exercising intelligence could do – suddenly seems a lot less interesting.

But is it really? When one looks at AI and realizes it turns out to be ‘just programs’, the right response is: of course it is just programs; that’s the whole point of AI.

To be disappointed that an AI program is not more complicated, or that its results aren’t more elaborate – even cosmic – is to misstate the problem that AI is trying to address in the first place. It also threatens to derail the real progress that continues to accumulate and that may enable machines to possess the very things those criticizing real-world AI as too simplistic pine for: volition, self-awareness, and cognition.

Take genetics, for example. The field didn’t start with a full understanding or even a theory of DNA, but rather with a humbler question: why are some eyes blue and some eyes brown? The answer to that question required knowledge of, and step-by-step advancements in, biology, chemistry, microscopy, and a multitude of other disciplines. That the science of genetics should have started with its endgame of sequencing the human genome – or, in our case, that AI must begin by working on its endgame of computer sentience – is as overly romantic as it is misguided.

In the end, all scientific endeavors, including AI, make big leaps by working on more basic – and perhaps, only in hindsight, easier – problems. We don’t solve the ultimate challenges by jumping right to working on them. The steps along the way are just as important – and often yield incredibly useful results of their own. That’s where AI stands right now. Solving seemingly simple yet fundamental challenges – and making real progress in the process.

There’s no need to debunk or apologize for it. It is required to advance the field and move closer to the more fanciful AI end-goal: making computers act like they do in the movies, toward which our AI critics — and indeed all of us in the field — strive as our ultimate ambition.

Larry Birnbaum, Co-founder and Chief Scientific Advisor, Narrative Science

Larry Birnbaum is a co-founder of Narrative Science and the company’s Chief Scientific Advisor, where he focuses on next-generation architecture, advanced applications, and IP. In addition, Larry is Professor of Computer Science and of Journalism at Northwestern University, where he also serves as the Head of the Computer Science Division/EECS Department. He received his BS and Ph.D. from Yale.
