Organizations today are using IoT to boost operational performance, enhance customer experience, lead industry transformation, advance environmental sustainability and scale institutional expertise. Asset-intensive companies must constantly track, assess and manage the reliability of a wide range of physical, technological and human assets, but doing so at an enterprise level is a complex undertaking. Challenges include aging physical assets that require ongoing maintenance and repair, ensuring asset performance, global commoditization and competition, regulatory compliance, health and safety protocols, and more.
How can organizations control assets and remain profitable?
That’s where Enterprise Asset Management (EAM) comes in. EAM solutions, such as Watson IoT for Asset Management, provide an integrated approach to managing physical and human assets. EAM can support long- and short-term planning, such as controlling inventory and outside service providers, to better meet demands, and can enable preventive and condition-based asset maintenance.
End-to-end solutions for asset-intensive organizations
EAM can provide real-time insight and visibility into virtually all physical assets, and across the maintenance, repair and overhaul supply chain. In this new era of mobile, cloud and analytics technologies, there are more opportunities than ever to collect, consolidate and analyze information about assets to help fine-tune performance.
Beyond asset management, core capabilities of EAM also include work management, planning and scheduling, supply chain management, and health and safety.
Better visibility, control and automation
To manage the full asset lifecycle and better address business imperatives, asset-intensive organizations require integrated visibility, control and automation across their business and technology assets.
Visibility provides an enterprise-wide view of asset details and processes from across the organization for better decision making. With control of their assets, businesses can better manage and secure their investments and reduce inventory costs. And greater automation helps companies build more agility and flexibility into their operations.
Increasing revenues, decreasing costs
With EAM, businesses can meet today’s business, operational and technology challenges, and efficiently address the lifecycle of resources.
EAM allows for better management of aging infrastructure by implementing and enforcing standard asset management processes, improving overall maintenance practices, and controlling operational risk by embedding risk management into everyday business processes.
It also allows for a lower total cost of ownership, by consolidating operational applications. By using one global enterprise application instance, consistent metrics, and asset management processes that are enforced and managed on the same platform at all of your sites, you’ll be able to standardize best practices across virtually all asset types.
To succeed in the long run, businesses need to create and leverage some kind of sustainable competitive edge. This advantage can still derive from such traditional sources as scale-driven lower cost, proprietary intellectual property, highly motivated employees, or farsighted strategic leaders. But in the knowledge economy, strategic advantages will increasingly depend on a shared capacity to make superior judgments and choices.
Intelligent enterprises today are being shaped by two distinct forces. The first is the growing power of computers and big data, which provide the foundation for operations research, forecasting models, and artificial intelligence (AI). The second is our growing understanding of human judgment, reasoning, and choice. Decades of research have yielded deep insights into what humans do well or poorly.1 (See “About the Research.”)
In this article, we will examine how managers can combine human intelligence with technology-enabled insights to make smarter choices in the face of uncertainty and complexity. Integrating the two streams of knowledge is not easy, but once management teams learn how to blend them, the advantages can be substantial. A company that can make the right decision three times out of five as opposed to 2.8 out of five can gain an upper hand over its competitors. Although this performance gap may seem trivial, small differences can lead to big statistical advantages over time. In tennis, for example, if a player has a 55% versus 45% edge on winning points throughout the match, he or she will have a greater than 90% chance of winning the best of three sets.2
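The tennis claim can be checked with a quick Monte Carlo simulation. The sketch below makes a simplifying assumption that one player wins every point with probability 0.55 regardless of who is serving, and applies standard scoring: games to four points win by two, sets to six games with a tiebreak at 6-6, match best of three sets.

```python
import random

random.seed(42)

P_POINT = 0.55  # assumed chance of winning any point, server ignored for simplicity

def win_game(p):
    """Play one game: first to 4 points, win by 2."""
    a = b = 0
    while True:
        if random.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:
            return True
        if b >= 4 and b - a >= 2:
            return False

def win_tiebreak(p):
    """Play a tiebreak: first to 7 points, win by 2."""
    a = b = 0
    while True:
        if random.random() < p:
            a += 1
        else:
            b += 1
        if a >= 7 and a - b >= 2:
            return True
        if b >= 7 and b - a >= 2:
            return False

def win_set(p):
    """Play one set: first to 6 games, win by 2, tiebreak at 6-6."""
    a = b = 0
    while True:
        if win_game(p):
            a += 1
        else:
            b += 1
        if a == 6 and b == 6:
            return win_tiebreak(p)
        if a >= 6 and a - b >= 2:
            return True
        if b >= 6 and b - a >= 2:
            return False

def win_match(p, sets_to_win=2):
    """Best of three sets."""
    a = b = 0
    while a < sets_to_win and b < sets_to_win:
        if win_set(p):
            a += 1
        else:
            b += 1
    return a == sets_to_win

trials = 20_000
wins = sum(win_match(P_POINT) for _ in range(trials))
print(f"Estimated match-win probability: {wins / trials:.3f}")
```

Under these assumptions, a 55% point-win edge compounds into a match-win probability of well over 90%, which is the cumulative-advantage effect the text describes.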
To help your company gain such a cumulative advantage in business, we have identified five strategic capabilities that intelligent enterprises can use to outsmart the competition through better judgments and wise choices. Thanks to their use of big data and predictive analytics, many companies have begun cultivating some of these capabilities already.3 But few have systematically integrated the power of computers with the latest understanding of the human mind. For managers looking to gain an advantage on competitors, we see opportunities today to do the following:
Find the strategic edge. In assessing past organizational forecasts, home in on areas where improving subjective predictions can really move the needle.
Run prediction tournaments. Discover the best forecasting methods by encouraging competition, experimentation, and innovation among teams.
Model the experts in your midst. Identify the people internally who have demonstrated superior insights into key business areas, and leverage their wisdom using simple linear models.
Experiment with artificial intelligence. Go beyond simple linear models. Use deep neural nets in limited task domains to outperform human experts.
Change the way the organization operates. Promote an exploratory culture that continually looks for better ways to combine the capabilities of humans and machines.
1. Find the Strategic Edge
The starting point for becoming an intelligent enterprise is learning to allocate analytical effort where it will most pay off — in other words, being strategic about which problems you decide to tackle head-on. The sweet spot for intelligent enterprises is where hard data and soft judgment can be productively combined. On one side, this zone is bounded by problems that philosopher Karl Popper dubbed “clocklike” because of their deterministic regularities; on the other side, it is bounded by problems he dubbed “cloudlike” because of their uncertainty.4
Clocklike problems are tractable and stable, and they can be defined by past experience (as in actuarial tables or credit reports). Statistical prediction models can shine here. Human judgment operates on the sidelines, although it still plays a role under unusual conditions (such as assessing the impact of new medical advances on life expectancies). Cloudlike problems (for example, assigning probabilities to global warming causing mega-floods in Miami in 2025 or ascertaining whether intelligent life exists on other planets) are far murkier. However, what’s most critical in such cases is the knowledge base of experts and, more importantly, their nuanced appreciation of what they do and don’t know. The sweet spot for managers lies in combining the strengths of computers and algorithms with seasoned human judgment and judicious questioning. (See “Finding the Sweet Spot.”) By avoiding judgmental biases that often distort human information processing and by recognizing the precarious assumptions on which statistical models sometimes rest, the analytical whole can occasionally become more than the sum of its parts.
Creating a truly intelligent enterprise is neither quick nor simple. Some of what we recommend will seem counterintuitive and will require training. Breakthroughs in cognitive psychology over the past few decades have attuned many sophisticated leaders to the biases and traps of undisciplined thinking.5 However, few companies have been able to transform these insights into game-changing practices that make their business much smarter. Many companies that perform data mining remain blissfully unaware of the quirks and foibles that shape their analysts’ hunches. At the same time, executive teams advancing opinions are seldom asked to defend their views in depth. Outcomes of judgments or decisions are rarely reviewed against the starting assumptions. There is a clear opportunity to raise a company’s IQ by both improving corporate decision-making processes and leveraging data and technology tools.
2. Run Prediction Tournaments
One promising method for creating better corporate forecasts involves using what are known as prediction tournaments to surface the people and approaches that generate the best judgments in a given domain. The idea of a prediction tournament is to incentivize participants to predict what they think will happen, translate their assessments into probabilities, and then track which predictions proved most accurate. In a prediction tournament, there is no benefit in being overly positive or overly negative, or in engaging in strategic gaming against rivals. The job of tournament organizers is to develop a set of relevant questions and then attract participants to provide answers.
One organization that has used prediction tournaments effectively is the Intelligence Advanced Research Projects Activity (IARPA). It operates within the U.S. Office of the Director of National Intelligence and is responsible for running high-risk, high-return research on how to improve intelligence analysis. In 2011, IARPA invited five research teams to compete to develop the best methods of boosting the accuracy of human probability judgments of geopolitical events. The topics covered the gamut, from possible Eurozone exits to the direction of the North Korean nuclear program. One of the authors (Phil Tetlock) co-led a team known as the Good Judgment Project,6 which won this tournament by ignoring folklore and conducting field experiments to discover what really drives forecasting accuracy. Four key factors emerged as critical to successful predictions:7
Identifying the attributes of consistently superior forecasters, including their greater curiosity, open-mindedness, and willingness to test the idea that forecasting might be a skill that can be cultivated and is worth cultivating;
Training people in techniques for avoiding common cognitive biases such as overconfidence and overweighting evidence that reinforces their preconceptions;
Creating stimulating work environments that encourage the best performers to engage in collaborative teamwork and offer guidance on how to avoid groupthink by practicing techniques like precision questioning and constructive confrontation;
Devising better statistical methods to extract wisdom from crowds by, for example, giving more weight to forecasters with better track records and more diverse viewpoints.8
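The last factor, weighting forecasters by track record, can be illustrated with a toy aggregation scheme. The sketch below weights each forecaster's probability estimate by the inverse of a hypothetical historical Brier score (lower scores mean better past accuracy); this is an illustrative scheme with invented numbers, not the Good Judgment Project's actual algorithm.

```python
# Hypothetical forecasters: (historical Brier score, current probability estimate).
# A lower Brier score means a better track record, so weight by its inverse.
forecasters = [
    (0.10, 0.80),  # strong track record
    (0.25, 0.60),
    (0.40, 0.30),  # weak track record
]

weights = [1.0 / brier for brier, _ in forecasters]
total = sum(weights)
aggregate = sum(w * p for w, (_, p) in zip(weights, forecasters)) / total
print(f"Weighted crowd estimate: {aggregate:.3f}")  # pulled toward the best forecaster
```

The aggregate lands near the best-calibrated forecaster's estimate rather than at the simple average, which is the point of track-record weighting.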
Based on our experience, the biggest benefit of prediction tournaments within organizations is their power to accelerate learning cycles. Companies can accelerate learning by adhering to several principles.
The first principle involves careful record keeping. Accurate records make it harder to misremember earlier forecasts, both one’s own and those of others. This is a critical counterweight to the self-serving tendency to say “I knew it all along,” as well as the inclination to deny credit to rivals “who didn’t have a clue.”
Second, by making it difficult for contestants to misremember, tournaments force people to confront their failures and the other side’s successes. Typically, one’s first response to failure is denial. Tournaments prompt people to become more reflective, to engage in a pattern of thinking known as preemptive self-criticism; they encourage participants to consider ways in which they might have been deeply wrong.
Third, tournaments produce winners, which naturally awakens curiosity in others about how the superior results were achieved. Teams are encouraged to experiment and improve their methods throughout the competition.
Fourth, the scoring in prediction tournaments is clear to all involved up front.9 This creates a sense of fair competition among all.
Until recently, there was little published evidence that training in probabilistic reasoning and cognitive debiasing could improve forecasting of complex real-world events.10 Academics felt that eliminating cognitive illusions was nearly impossible for people to achieve on their own.11 The IARPA tournaments revealed, however, that customized training of only a few hours can deliver benefits. Specifically, training exercises involving behavioral decision theory, from statistical reasoning to scenario planning and group dynamics, hold great promise for improving managers’ decision-making skills. At companies we have worked with, the training typically involves individual and group exercises to demonstrate cognitive biases, video tutorials on topics such as scenario planning, and customized business simulations.
3. Model the Experts in Your Midst
Another way to create a more intelligent enterprise is to model the knowledge of expert employees so it can be leveraged more effectively and objectively. This can be done using a technique known in decision-making research as bootstrapping.12 An early example of bootstrapping research in decision psychology involved a study that explored what was on the minds of agricultural experts who were judging the quality of corn at a wholesale auction where farmers brought their crops.13 The researchers asked the corn judges to rate 500 ears of corn to predict their eventual prices in the marketplace. These expert judges considered a variety of factors, including the length and circumference of each ear, the weight of the kernels, the filling of the kernels at the tip, the blistering, and the starchiness. The researchers then created a simple scoring model based on cues that judges claimed were most important in driving their own predictions. Both the judges and the researchers expected the simple additive models to do much worse than the predictions of seasoned experts. But to everyone’s surprise, the models that mimicked the judges’ strategies nearly always performed better than the judges themselves.
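A bootstrapping model of this kind can be built with ordinary least squares: regress the expert's own ratings on the cues, then use the fitted weights in place of the expert. The sketch below uses invented data in the spirit of the corn study; the cues, weights, and noise level are assumptions for illustration, not figures from the original research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented cues for 200 ears of corn: length, circumference, kernel weight.
cues = rng.normal(size=(200, 3))

# Simulated expert: a consistent internal policy plus random "noise"
# (fatigue, mood) that varies from judgment to judgment.
true_weights = np.array([0.6, 0.3, 0.1])
expert_ratings = cues @ true_weights + rng.normal(scale=0.5, size=200)

# Bootstrapping: fit a linear model to the expert's OWN ratings.
X = np.column_stack([cues, np.ones(200)])  # add an intercept column
coef, *_ = np.linalg.lstsq(X, expert_ratings, rcond=None)

# The fitted model applies the expert's cue weights without the noise.
model_scores = X @ coef
print("Recovered cue weights:", coef[:3].round(2))
```

Because the regression averages away the judgment-to-judgment noise while keeping the expert's cue weights, the model's scores track the underlying policy more consistently than the expert's own ratings do, which is the surprise the corn study documented.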
Similar surprises occurred when banks introduced computer models several decades ago to assist in making loan decisions. Few loan officers believed that a simplified model of their professional judgments could make better predictions than experienced loan officers could make. The sense was that consumer loans contained many subjective factors that only savvy loan officers could properly assess, so there was skepticism about whether distilling intuitive expertise into a simple formula could help new loan officers learn faster. But here, too, the models performed better than most loan experts.14 In other fields, from predicting the performance of newly hired salespeople to the bankruptcy risks of companies to the life expectancies of terminally ill cancer patients, the experience has been essentially the same.15 Even though experts usually possess deep knowledge, they often do not make good predictions.16
When humans make predictions, wisdom gets mixed with “random noise.” By noise, we mean the inconsistencies that creep into human judgments due to fatigue, boredom, and other vagaries of being human.17 Bootstrapping, which incorporates expert judgment into a decision-making model, eliminates such inconsistencies while preserving the expert’s insights.18 No such consistency is possible when human judgment is employed on its own. In a classic medical study, for instance, nine radiologists were presented with information from 96 cases of suspected stomach ulcers and asked to evaluate them for the likelihood of a malignancy.19 A week later, the radiologists were shown the same information, this time in a different order. In 23% of the cases, their second assessments differed from their first.20 None of the radiologists was completely consistent across the two rounds, and some were inconsistent nearly half of the time.
In fields ranging from medicine to finance, scores of studies have shown that replacing experts with models of experts produces superior judgments.21 In most cases, the bootstrapping model performed better than experts on their own.22 Nonetheless, bootstrapping models tend to be rather rudimentary in that human experts are usually needed to identify the factors that matter most in making predictions. Humans are also instrumental in assigning scores to the predictor variables (such as judging the strength of recommendation letters for college applications or the overall health of patients in medical cases). What’s more, humans are good at spotting when the model is getting out of date and needs updating.
Bootstrapping lacks the high-tech pizzazz of deep neural nets in artificial intelligence. However, it remains one of the most compelling demonstrations of the potential benefits of combining the powers of models and humans, including the value of expert intuition.23 It also raises the question of whether permitting more human intervention (for example, when a doctor has information that goes beyond the model) can yield further benefit. In such circumstances, there is the risk that humans want to override the model too often since they will deem too many cases as special or unique.24 One way to incorporate additional expert perspective is to allow the expert (for example, a loan officer or a doctor) a limited number of overrides to the model’s recommendation.
A field study by marketing scholars tested the effects of combining humans and models in the retail sector.25 The researchers studied two different situations: (1) predictions by professional buyers of catalog sales for fashion merchandise, and (2) brand managers’ predictions for coupon-redemption rates. Once the researchers had the actual results in hand, they compared the results to the forecasts. Then they tested how different combinations of humans and models might perform the same tasks. The researchers found that in both the catalog sales and coupon-redemption settings, an even balance between the human and the model yielded the best predictions.
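An even human-model balance of this kind is simply an average of the two forecasts. The sketch below, with invented numbers, shows how a 50/50 blend can beat both sources when their errors partially cancel.

```python
# Hypothetical forecasts of catalog sales (units) for five items.
human_forecast = [120, 80, 200, 150, 60]
model_forecast = [100, 95, 180, 170, 55]
actual         = [105, 90, 190, 160, 58]

def mae(pred, truth):
    """Mean absolute error: average size of the forecast miss."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)

# 50/50 blend, the best-performing combination in the field study.
blend = [(h + m) / 2 for h, m in zip(human_forecast, model_forecast)]

print(f"Human MAE: {mae(human_forecast, actual):.1f}")  # Human MAE: 9.4
print(f"Model MAE: {mae(model_forecast, actual):.1f}")  # Model MAE: 6.6
print(f"Blend MAE: {mae(blend, actual):.1f}")           # Blend MAE: 1.6
```

In this toy example the human overshoots where the model undershoots (and vice versa), so averaging cancels much of the error, the same mechanism that made the balanced combination win in the catalog and coupon settings.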
4. Experiment With Artificial Intelligence
Bootstrapping uses a simple input-output approach to modeling expertise without delving into process models of human reasoning. Accordingly, bootstrapping can be augmented by AI technologies that allow for more complex relationships among variables drawn from human insights or from mining big data sets.
Deeper cognitive insights drove computer modeling of master chess players back in the early days of AI. But modeling human thinking — with all its biases — has its limits; often, computers are able to develop an edge simply by using superior computing power to study old data. This is how IBM Corp.’s Deep Blue supercomputer managed to beat the world chess champion Garry Kasparov in 1997. Today AI covers various types of machine intelligence, including computer vision, natural language comprehension, robotics, and machine learning. However, AI still lacks a broad intelligence of the kind humans have that can cut across domains. Human experts thus remain important whenever contextual intelligence, creativity, or broad knowledge of the world is needed.
Humans simplify the complex world around them by using various cognitive mechanisms, including pattern matching and storytelling, to connect new stimuli to the mental models in their heads.26 When psychologists studied jurors in mock murder trials, for example, they found that jurors built stories from the limited data available and then processed new information to reinforce the initial storyline.27 The risk is that humans get trapped in their own initial stories and then start to weigh confirming evidence more heavily than information that doesn’t fit their internal narratives.28 People often see patterns that are not really there, or they fail to see that new data requires changing the storyline.29
Human experts typically provide signal, noise, and bias in unknown proportions, which makes it difficult to disentangle these three components in field settings.30 Whether humans or computers have the upper hand depends on many factors, including whether the tasks being undertaken are familiar or unique. When tasks are familiar and much data is available, computers will likely beat humans by being data-driven and highly consistent from one case to the next. But when tasks are unique (where creativity may matter more) and when data overload is not a problem for humans, humans will likely have an advantage. (See “The Comparative Advantages of Humans and Computers.”)
One might think that humans have an advantage over models in understanding dynamically complex domains, with feedback loops, delays, and instability. But psychologists have examined how people learn about complex relationships in simulated dynamic environments (for example, a computer game modeling an airline’s strategic decisions or those of an electronics company managing a new product).31 Even after receiving extensive feedback after each round of play, the human subjects improved only slowly over time and failed to beat simple computer models. This raises questions about how much human expertise is desirable when building models for complex dynamic environments. The best way to find out is to compare how well humans and models do in specific domains and perhaps develop hybrid models that integrate different approaches.
AI systems have been rapidly improving in recent years. Traditional expert systems used rule-based models that mimicked human expertise by employing if-then rules (for example, “If symptoms X, Y, and Z are present, then try solution #5 first.”).32 Most AI applications today, however, use network structures, which search for new linkages between input variables and output results. In deep neural nets used in AI applications, the aim is to analyze very large data sets so that the system can discover complex relationships and refine them whenever more feedback is provided. AI is thriving thanks to deep neural nets developed for particular tasks, including playing games like chess and Go, driving cars, synthesizing speech, and translating language.33
Companies should be closely tracking the development of AI applications to determine which aspects are worthiest of adoption and adaptation in their industry. Bridgewater Associates LP, a hedge fund firm based in Westport, Connecticut, is an example of a company already experimenting with AI. Bridgewater Associates is developing various algorithmic models designed to automate much of the management of the firm by capturing insights from the best minds in the organization.34
Artificial general intelligence of the kind that most humans exhibit is emerging more slowly than targeted AI applications. Artificial general intelligence remains a rather small portion of current AI research, with the high-commercial-value work focused on narrow domains such as speech recognition, object classification in photographs, or handwriting analysis.35 Still, the idea of artificial general intelligence has captured the popular imagination, with movies depicting humanlike robots capable of performing a broad range of complex tasks. In the near term, the best predictive business systems will likely deploy a complex layering of humans and machines in order to garner the comparative advantages of each. Unlike machines, human experts possess general intelligence that is naturally sensitive to real-world contexts and is capable of deep self-reflection and moral judgments.
5. Change the Way the Organization Operates
In our view, the most powerful decision-support systems are hybrids that fuse multiple technologies together. Such decision aids will become increasingly common, expanding beyond narrow applications such as sales forecasting to providing a foundation for broader systems such as IBM’s Watson, which, among other things, helps doctors make complex medical diagnoses. Over time, we expect the underlying technologies to become more and more sophisticated, eventually reaching the point where decision-support devices will be on par with, or better than, most human advisers.
As machines become more sophisticated, humans and organizations will advance as well. To eliminate the excessive noise that often undermines human judgments in many organizations and to amplify the signals that truly matter, we recommend two strategies. First, organizations can record people’s judgments in “prediction banks” to monitor their accuracy over time.36 Rather than being overly general, predictions should be clear and crisp so they can be unambiguously scored ex post (without any wiggle room). Second, once managers accumulate personal performance scores in the prediction bank, their track record can help determine their “reputational capital” (which might determine how much weight their view gets in future decisions). Ray Dalio, founder of Bridgewater Associates, has been moving in this direction. He has developed a set of rules and management principles to create a culture that records, scores, and evaluates judgments on an ongoing basis, with high transparency and incentives for personal improvement.37
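Unambiguous ex post scoring in a prediction bank typically relies on a proper scoring rule such as the Brier score, the mean squared difference between a stated probability and the 0/1 outcome (lower is better). A minimal sketch with hypothetical managers and events:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical prediction bank: each manager's probabilities for the same
# five events, scored once the outcomes (1 = it happened) are known.
outcomes = [1, 0, 1, 1, 0]
bank = {
    "alice": [0.9, 0.2, 0.7, 0.8, 0.1],   # confident and well calibrated
    "bob":   [0.6, 0.5, 0.5, 0.6, 0.5],   # hedges everything toward 50%
}

for manager, probs in bank.items():
    print(f"{manager}: Brier = {brier_score(probs, outcomes):.3f}")
```

Because the Brier score is a proper scoring rule, the best strategy is to report one's honest probability; hedging toward 50% to avoid looking wrong, as "bob" does here, is penalized, which is what makes such scores a reasonable basis for reputational capital.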
Truly intelligent enterprises will blend the soft side of human judgment, including its known frailties and biases, with the hard side of big data and business analytics to create competitive advantages for companies competing in knowledge economies. From an organizational perspective, the type of transformation we envision will require focusing on three factors. The first involves strategic focus. Leaders will need to determine what kind of intelligence edge they want to develop. For example, do they want to develop superior human judgment under uncertainty, or do they want to push the frontiers of automation? Second, companies will need to focus on building the mindsets, skills, habits, and rewards that can convert judgmental acumen into better calibrated subjective probabilities. Third, organizations will need to promote cultural and process transformations to give employees the confidence to speak truth to power, since the overall aim is to experiment with approaches that challenge conventional wisdom.38 All this will require changing incentives and, where necessary, breaking down silos so that information can easily flow to where it is most needed.
Having discussed how to improve the science of prediction, it seems fitting to examine the future of forecasting itself. For the sake of comparison, it’s worth noting that medicine emerged very rapidly from the time when bloodletting was common to a more scientific approach based on control groups, placebos, and evidence-based research. Currently, the field of subjective prediction is moving beyond its own black magic, thanks to advances in cognitive science. Given how often forecasting methods still fail, we will need to pay attention to outcome-based approaches that rely on experiments and field studies to unearth the best strategies.
Despite ongoing challenges, the science of subjective forecasting has been steadily getting better, even as the external world has become more complex. From wisdom-of-crowd approaches and prediction markets to forecasting tournaments, big data and business analytics, and artificial intelligence, there is much hope about identifying the best approaches.39 However, there is confusion about how to improve subjective prediction. For example, insurance underwriters are still struggling to properly price risks posed by terrorism, global warming, and geopolitical turmoil.40
The cognitive-science revolution holds both promise and challenge for business leaders. For most companies, the devil will be in the details: which human versus machine approaches to apply to which topics and how to combine the various approaches. Sorting all this out will not be easy, because people and machines think in such different ways. But there is often a common analytical goal and point of comparison when dealing with tasks where foresight matters: assigning well-calibrated probability judgments to events of commercial or political significance. We have focused on real-world forecasting expressed in terms of subjective probabilities because such judgments can be objectively scored later once the outcomes are known. Scoring is more complicated with other important tasks where humans and models can be symbiotically combined, such as making strategic choices. However, once an organization starts to embrace hybrid approaches for making subjective probability estimates and keeps improving them, it can develop a sustainable strategic intelligence advantage over rivals.
The oil and gas industry is changing quickly, and it can be hard to keep up with and stay in front of the changes. The Internet of Things is driving a digital transformation across our world, and companies that adopt these changes early are seeing remarkable gains. More conservative companies are being left behind.
By its nature, the oil and gas industry is more technological than many others, and it has been digital for decades. But companies need to adapt even further to take full advantage of the digital environment. Your company must build a seamless digital core to avoid being left in the dust, and that core needs to encompass the five key areas of the digital economy to keep your business competitive. Here’s why it’s vital to your success.
Why your oil and gas company needs a digital core
We hear stories every day of market disruptions in a wide range of industries. Some new companies are quickly gaining ground, while others that have been in business for decades are being left behind. The oil and gas industry is ripe for market disruption. How do you keep your company ahead of these changes? How do you keep your business functioning and adapting through the process? What adaptations do you need to embrace to stay ahead of the game?
You may just need a small amount of work to adapt and integrate your systems. On the other hand, you may need to make extensive changes. You may need to adapt everything from your business’ work management solutions and processes to your existing business model. Part of the answer involves what you’ve already done and what still needs to happen. This answer can be different for every company in the industry.
An effective digital core needs to cover your entire oil and gas business from one end to the other. It helps your company operate effectively by using real-time information to improve performance. Do you need a view from plant-level performance all the way to enterprise-level performance, in real time and with in-depth analysis? That’s the level of detail you should be able to analyze instantly with a solid digital core for your business. Running your business on siloed information is yesterday’s approach and can’t compete in today’s market. Instead, you can now run your business in real time to ensure you’re making smart decisions based on the current conditions of your business, your productivity, and the market.
Having this depth of detail available makes it easier to project expected outcomes from your existing information. These projections make your business decisions easier and faster to implement. Having your entire enterprise on a digital energy network helps make this happen. A digital core also makes it easier for you to adapt your business model or enter and exit markets quickly, in as little as a tenth of the time most current systems require.
Another area where a digital core provides strong benefits is daily and task-related decision making. Asset maintenance, production performance, employee well-being and safety, risk management, logistics optimization, and process optimization can all be handled through a digital core. By creating a dedicated core, you can optimize and automate many tasks that would otherwise consume many hours and allocated assets, raising your overhead.
Cloud computing allows you to quickly deploy because you don’t have to maintain expensive systems and software onsite. Instead of buying a piece of software with a particular number of licenses, you can now give your employees cloud access to services through their mobile devices. As the line between product and service continues to blur, the digital core becomes more important than ever. Inexpensive technology provides access to otherwise complex software. This lowers your opportunity and investment costs. At the same time, it allows you to use the excess funds you would have spent on software to explore new finds or invest in your infrastructure.
By combining your systems into a digital core, you can change how your company functions on a daily basis. You can quickly integrate Big Data into your projections and simulations to ensure a good outcome. You can gain detailed insights into what is making money and what is losing money for your company. You can change business models to adapt to changing market conditions. When your enterprise is invested in a digital core, it’s much easier to compete in the digital economy.
Having a solid digital core for your business platform is a great way to remain agile and flexible in the face of current challenges and future changes. But to take advantage of the changes brought about by the digital economy, you need to transform your enterprise to digital quickly or risk being left behind.
If you’re ready to invest in creating a strong digital core for your company, check here to find out how 48% of oil and gas companies keep their businesses flexible for future changes.
Here are some key concepts for the future of IoT security in the enterprise.
First, IoT is going to save a lot of lives
It’s worth pointing out up front that the most direct result of IoT is much better physical security. Cheap, easy-to-install sensors mean fewer surveillance vulnerabilities in critical infrastructure.
For example, Gooee provides intelligent sensors integrated with lighting systems to monitor activity, temperature, and more. When people break in, or there’s a fire, or an earthquake is on its way, IoT means we can take action faster, saving assets and lives.
For example, as part of a Smart Cities initiative, SAP has been working with the city of Buenos Aires on a centralized city-wide dashboard showing real-time information from more than 700,000 different city assets. This includes flow sensors on the city’s water systems that proactively alert against floods that could endanger lives.
Almost every potential security threat can be minimized with the appropriate sensors. For example, gunfire locators can help alert authorities to crimes in progress: during the 2003-2004 Ohio highway sniper attacks, the FBI successfully used a ShotSpotter gunshot location system to find the shooter.
So if we’re worried about keeping people safe, and detecting toxins slipped into the drinking water, then IoT is a great answer.
But when everything is networked, everything is hackable
While physical security is improving rapidly, cybersecurity is a big and growing threat. IoT compounds all the security problems of traditional networks. There are many more potential points of entry, the tradeoff between security and ease-of-use/cost is more severe, and the devices themselves aren’t easy to patch when security flaws are discovered.
There’s no easy solution to these problems–the right approach is to double down on traditional security measures. Securing connected IoT devices is like trying to seal your house against insects. You have to take the usual measures such as blocking the biggest cracks and cleaning regularly–but some bugs are always going to get through.
Companies must continue to implement “basic digital hygiene”–the equivalent of locking the door twice and not leaving the keys around. But then they should expect to get hacked anyway.
To combat the inevitable hacks, there has to be a multi-layered approach to security. IoT security is like an onion–the more layers you have, the more you’ll make the hackers cry…
Don’t stint on security investments: get secure sensors from reputable companies, use isolated systems wherever possible, minimize data traffic and storage, use effective trusted certificates, employ tokenization, and adopt end-to-end encryption.
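To make one of these measures concrete, here is a minimal Python sketch of tokenization: sensitive device identifiers are swapped for opaque random tokens before data leaves the network, so intercepted traffic reveals nothing about the underlying assets. The class, method names, and device identifier are hypothetical, and a real vault would persist its mappings in hardened storage with strict access control.

```python
import secrets


class TokenVault:
    """Maps sensitive device identifiers to opaque tokens.

    A minimal in-memory sketch for illustration only; a production
    vault would use hardened, persistent storage.
    """

    def __init__(self):
        self._token_to_id = {}
        self._id_to_token = {}

    def tokenize(self, device_id: str) -> str:
        # Reuse the existing token so the same device always maps
        # to the same opaque value.
        if device_id in self._id_to_token:
            return self._id_to_token[device_id]
        token = secrets.token_urlsafe(16)  # unguessable random token
        self._id_to_token[device_id] = token
        self._token_to_id[token] = device_id
        return token

    def detokenize(self, token: str) -> str:
        # Only systems with access to the vault can recover the real ID.
        return self._token_to_id[token]


vault = TokenVault()
t = vault.tokenize("pump-station-7/sensor-42")
assert vault.detokenize(t) == "pump-station-7/sensor-42"
```

The token carries no information about the device it stands for, so it can travel through less-trusted systems without exposing the asset behind it.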
And perhaps most importantly of all: Employ people who know how to put all this in place, and work with organizations that understand enterprise security and have been doing it a long time.
The future is about algorithmic security
New technology brings new opportunities–it’s time to take advantage of Big Data technology to improve IoT security.
Simple security is when an alarm is triggered and a guard intervenes. More complex security is more context-aware. For example, an alarm is triggered when the same personnel badge has been used simultaneously in two different electrical power stations. Or a badge has been used by somebody who is supposed to be on holiday.
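Rules like these can be expressed in a few lines of code. Below is a minimal Python sketch that flags a badge seen at two different sites within a short time window; the event schema (badge ID, site, timestamp) and all identifiers are hypothetical, and a real system would pull these events from live enterprise access-control feeds.

```python
from datetime import datetime, timedelta


def simultaneous_use(events, window=timedelta(minutes=10)):
    """Flag badge IDs seen at two different sites within `window`.

    `events` is a list of (badge_id, site, timestamp) tuples --
    a hypothetical schema used for illustration.
    """
    alerts = []
    by_badge = {}
    # Process events in time order so each event is compared
    # only against earlier ones.
    for badge, site, ts in sorted(events, key=lambda e: e[2]):
        for prev_site, prev_ts in by_badge.get(badge, []):
            if prev_site != site and ts - prev_ts <= window:
                alerts.append((badge, prev_site, site))
        by_badge.setdefault(badge, []).append((site, ts))
    return alerts


events = [
    ("B-100", "station-A", datetime(2016, 5, 1, 9, 0)),
    ("B-100", "station-B", datetime(2016, 5, 1, 9, 4)),  # 4 minutes later, far away
    ("B-200", "station-A", datetime(2016, 5, 1, 9, 5)),
]
print(simultaneous_use(events))  # [('B-100', 'station-A', 'station-B')]
```

The same pattern extends naturally to the holiday case: join each event against an HR calendar and flag badges used while their owner is marked as away.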
This kind of security requires real-time access to enterprise systems to augment the sensor data. For example, AlertEnterprise, part of the SAP Startup program, uses the power of an in-memory platform to provide real-time security analysis, awareness, and prediction:
“Attacks are getting more frequent and more damaging. Key pieces of information lie in different systems and by the time the security teams piece together the puzzle, it’s too late. Enterprise Sentry consolidates critical information from underlying security tools and combines it with operational information to deliver a real view of what’s happening right now.”
Algorithmic security is the next level, and involves using Big Data analysis techniques on the millions of data points that can be collected from, say, an airport’s IT systems: door sensors, employee badges, flight rosters, cleaning schedules, luggage systems and more.
Using predictive algorithms, the system can learn what a “normal” day at the airport looks like, and then sound the alert whenever conditions differ from the expected pattern. These are the kinds of techniques that are already used to detect suspicious financial transactions using fraud management solutions.
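As a toy illustration of the learned-baseline idea, the Python sketch below fits a simple mean and standard deviation from historical sensor counts and flags readings that fall well outside that band. The sensor names and numbers are hypothetical; a production system would learn far richer, multi-signal models.

```python
import statistics


def fit_baseline(history):
    """Learn a simple 'normal' pattern from historical counts."""
    return statistics.mean(history), statistics.stdev(history)


def is_anomalous(value, mean, stdev, threshold=3.0):
    # Flag readings more than `threshold` standard deviations
    # from the learned norm.
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold


# Hypothetical door-sensor counts for the same hour on past days.
history = [102, 98, 105, 100, 97, 103, 99]
mean, stdev = fit_baseline(history)
print(is_anomalous(101, mean, stdev))  # False: within the normal band
print(is_anomalous(240, mean, stdev))  # True: unexpected surge in activity
```

The second reading triggers an alert not because anyone wrote a rule about it, but simply because it deviates from what the system has learned to expect; that is the essence of algorithmic security.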
Algorithmic security applies to IoT, too. There are many different ways systems can be hacked, and real-time anomaly detection is the ideal way of dealing with unknown new threats.
For example, trials have shown that the traffic lights in major cities can be manipulated, leading to traffic jams and worse. With algorithmic security, these sensor patterns would immediately show up as highly unusual and suspicious anomalies.
Cybersecurity is about people
It’s a cliché, but that doesn’t make it any less true: robust cybersecurity is much more about people and processes than technology.
Organizations need to concentrate on the most vulnerable part of any network: the people using it. The easiest and most effective way to improve cybersecurity is having the right processes and training in place.
Predictive analytics / machine learning / artificial intelligence is a hot topic – what’s it about?
Using algorithms to help make better decisions has been the “next big thing in analytics” for over 25 years, and it has been used in key areas such as fraud detection the entire time. But it has now become a full-throated mainstream business meme that features in every enterprise software keynote, although the industry is battling with what to call it.
It appears that terms like data mining, predictive analytics, and advanced analytics are considered too geeky or old for industry marketers and headline writers. The term cognitive computing seemed to be poised to win, but IBM’s strong association with the term may have backfired — journalists and analysts want to use language that is independent of any particular company. Currently, the growing consensus seems to be to use machine learning when talking about the technology and artificial intelligence when talking about the business uses.
Artificial intelligence is now taking off because there’s a lot more data available and affordable, powerful systems to crunch through it all. It’s also much easier to get access to powerful algorithm-based software in the form of open-source products or embedded as a service in enterprise platforms.
Organizations today have also become more comfortable with manipulating business data, with a new generation of business analysts aspiring to become “citizen data scientists.” Enterprises can take their traditional analytics to the next level using these new tools.
However, we’re now at the “peak of inflated expectations” for these technologies, according to Gartner’s Hype Cycle — we will soon see articles pushing back on the more exaggerated claims. Over the next few years, we will find out the limitations of these technologies even as they start bringing real-world benefits.
What are the longer-term implications?
First, easier-to-use predictive analytics engines are blurring the gap between “everyday analytics” and the data science team. A “factory” approach to creating, deploying, and maintaining predictive models means data scientists can have greater impact. And sophisticated business users can now access some of the power of these algorithms without having to become data scientists themselves.
Second, every business application will include some predictive functionality, automating any areas where there are “repeatable decisions.” It is hard to think of a business process that could not be improved in this way, with big implications in terms of both efficiency and white-collar employment.
Third, applications will use these algorithms on themselves to create “self-improving” platforms that get easier to use and more powerful over time (akin to how each new semi-autonomous-driving Tesla car can learn something new and pass it on to the rest of the fleet).
Fourth, over time, business processes, applications, and workflows may need to be rethought. If algorithms are available as a core part of business platforms, we can provide people with new paths through typical business questions such as “What’s happening now? What do I need to know? What do you recommend? What should I always do? What can I expect to happen? What can I avoid? What do I need to do right now?”
Fifth, implementing all the above will involve deep and worrying moral questions in terms of data privacy and allowing algorithms to make decisions that affect people and society. There will undoubtedly be many scandals and missteps before the right rules and practices are in place.
What first steps should companies be taking in this area?
As usual, the barriers to business benefit are more likely to be cultural than technical.
Above all, organizations need to make sure they have the right technical expertise to be able to navigate the confusion of new vendors’ offers, the right business knowledge to know where best to apply them, and the awareness that their technology choices may have unforeseen moral implications.