A New Approach to Designing Work

You can hardly pick up a business publication without reading about the ever-increasing pace of change in technologies and markets and the consequent need for more adaptable organizations. Given the imperative of adaptability, it is not surprising that few words have received more attention in recent conversations about management and leadership than “agile.”1 Organizations ranging from large corporations like General Electric Co. to tiny startups are trying to be both flexible and fast in the ways that they react to new technology and changing market conditions.2

The word “agile” appears to have been first applied to thinking about software by 17 developers in 2001.3 Having experimented with more iterative, less process-laden approaches to developing new applications for several decades, the group codified its experience in an agile manifesto. “We are uncovering better ways of developing software by doing it and helping others do it,” they wrote. In software development, agile now has a variety of manifestations, including scrum, extreme programming, and feature-driven development.4 The results have been significant. A variety of studies show that agile software development methods can generate a significant improvement over their more traditional predecessors.5

But what does this mean outside of software? Can agile methods be successfully applied to other types of work? Many proponents (a number of whom started in the software industry) argue that the answer is yes, and a growing collection of books, papers, and blog posts suggests how it might be done.6 The evidence, however, remains limited to date, and a recent article by two of agile’s founders cautions against applying agile indiscriminately.7 The blogosphere is also replete with discussions of an ongoing agile backlash.

To provide some practical advice to business leaders trying to understand what agile might mean for their organizations, we take a different approach. Our research suggests that in applying agile methods from the software industry to other domains, managers often confuse practices and principles. When agile methods work, they do so because the associated practices manifest key behavioral principles in the context of software development. But, successful as those practices can be when developing software, there is no guarantee that they will work in other contexts. The key to transferring a set of practices from one domain to another is to first understand why they work and then to modify them in ways that both match the new context and preserve the underlying principles.

The goal of this article is to help you understand several key work design principles that undergird not only agile practices in software but also Toyota Motor Corp.’s well-known production system in manufacturing. Once you understand these underlying work design principles — through a framework we call dynamic work design — you can create work processes in your own organization that are both more flexible and more efficient. (See “About the Research.”)

Stability vs. Uncertainty

Academics and managers alike long believed that organizations had to make trade-offs between flexibility and efficiency. A central notion in the academic theory on organizational design is contingency, the idea that organizations and their associated processes need to be designed to match the nature of the work they do. One of the most common variables in contingency theory is the degree of uncertainty in the surrounding environment (often also conceptualized as the need for innovation). When both the competitive environment and the associated work are stable and well understood, contingency theory suggests that organizations will do best with highly structured, mechanistic designs. In contrast, when facing highly uncertain situations that require ongoing adaptation, the theory suggests that organizations will do better with more flexible, organic designs.8

An early advocate of the mechanistic approach to work design was Frederick Winslow Taylor, author of the 1911 book The Principles of Scientific Management.9 Taylor’s essential insight was simply that if work is regularly repeated, it can also be studied and improved. In stable, well-understood environments, it is thus often best to organize work in ways that leverage the efficiency that comes with repetition. For example, in a modern factory, well-defined tasks are specified, and the work proceeds serially, moving from one carefully constructed and defined set of activities to the next. There is little need for collaboration in these settings, and the organizational structure that surrounds stable and repeatable work tends to be hierarchical to ensure that everybody follows the prescribed work design. The cost of such efficiency is adaptability. Due to the high degree of routinization and formalization, mechanistic process designs are difficult to change in response to new requirements. Though efficient, a mechanistic design is not agile.

When, however, the environment is unstable and uncertain, discrete tasks are harder to define, and therefore organizations cannot rely on a sequence of clearly defined steps. For example, product development teams often face challenges for which there is little precedent. Contingency theory holds that in unpredictable environments like new product development, organizations rely more on things like training and collaboration and less on routinization and careful specification. Developing a breakthrough product or service usually can’t be organized like a factory assembly line. Marketing experts may develop a set of initial requirements, which are then passed on to designers and engineers, but the requirements often evolve through multiple iterations as designers and engineers determine what is technically feasible. Consequently, effective development processes often require ongoing real-time collaboration, rather than rote adherence to a set of sequentially organized steps.

Though contingency theory was first developed more than 50 years ago, its basic insights reappear frequently in contemporary management thinking. Many flavors of process-focused improvement, such as total quality management, Six Sigma, and business process reengineering, are extensions of Taylor’s fundamental insight that work that is repeated can also be improved. More recently, the increasingly popular design thinking approach can be thought of as a charge to tackle ambiguous, uncertain tasks with a more collaborative, less hierarchical work design.10 In general, contingency theory gives managers a straightforward approach to designing work: Assess the stability of the competitive environment and the resulting work, and then pick the best mix of defined tasks and collaboration to fit the challenge at hand. (See “A Traditional Approach to Work Design.”) If the work being designed consists of well-defined tasks (for example, assembling components), then it is best to organize it serially, or, as we label the cell on the bottom left, using the “factory” mode. Conversely, if the work is highly ambiguous and requires ongoing interaction (for example, designing new products), then the work is best organized collaboratively, or, as we label the cell on the top right, in “studio” mode.

Though powerful, this approach to work design is not entirely satisfying for two reasons. First, it describes an unpalatable trade-off: Work done using the serial factory design isn’t very flexible, making it hard to adapt to changes in external conditions, and work done using the collaborative studio approach often isn’t very efficient. Second, few types of work perfectly fit the archetype of well-defined or ambiguous work. Even the most routine work has the occasional moment of surprise, and conversely, even the most novel work, such as designing a new product or service, often requires executing routine analysis and testing activities that support each creative iteration. Academic theory notwithstanding, real work is a constantly evolving mix of routine and uncertainty.

At first glance, agile methods appear to fall more toward the collaborative side of the work spectrum. However, our research suggests a different interpretation. The conventional approach to process and organizational design is almost entirely static, implicitly presuming that once a piece of work has been designed, everything will go as planned. In contrast, a dynamic approach to work design suggests viewing work as an ever-evolving response to the hiccups and shortfalls that are inevitable in real organizations. As we will describe later in this article, agile methods actually transcend the traditional serial vs. collaborative work framework by creating better mechanisms for moving between the two basic ways of organizing work. By identifying mechanisms to cycle back and forth between well-defined factory-style tasks and collaborative studio modes when appropriate, an agile approach can considerably reduce the trade-off between efficiency and adaptability.

Dynamic Work Design at Toyota

What does this look like in practice? Consider a well-known example of work and organizational design, Toyota’s Andon cord. Work on Toyota assembly lines is the epitome of the serial, mechanistic design. Tasks are precisely specified, often detailing specific arm and hand movements and the time that each action should take. In a plant we visited recently, training for a specific role began with the trainee learning to pick up four bolts at a time — not three and not five. Only when the trainee could pick up four bolts regularly was she allowed to learn the next motion. But, despite an attention to detail that would have made Taylor proud, sometimes things go awry. In the Toyota scheme, a worker noticing such an issue is supposed to pull what’s known as the Andon cord (or push a button) to stop the production line and fix the problem.

While the management literature has correctly highlighted the importance of allowing employees to stop the line,11 what happens after the cord is pulled might be more important. During a recent visit to a Toyota supplier in Toyota City, Japan, we observed one operator on the factory floor struggling to complete her task in the allotted time; she hit a yellow button, causing an alarm to sound and a light to flash. (This factory has replaced the Andon cord with a yellow button at each operator’s station.) Within seconds, the line’s supervisor arrived and assisted the operator in resolving the issue that was preventing her from following the prescribed process. In less than a minute, the operator, now able to hit her target, returned to her normal routine, and the supervisor went back to other activities.

What, from a work design perspective, happened in this short episode? Initially, the operator was working in the “factory” mode, executing well-defined work to a clearly specified time target. (See the box on the lower left in the exhibit “Dynamic Work Design at a Toyota Supplier.”) But when something in that careful design broke down, the operator couldn’t complete her task in the allotted time. Once the problem occurred, the operator had two options for responding. She could have found an ad hoc adjustment, a workaround or shortcut that would allow her to keep working. But this choice often leads to highly dysfunctional outcomes.12 Alternatively, as we observed, she could push the button, stop the work, and ask for help. By summoning the supervisor to help, pushing the button temporarily changed the work design. The system briefly left the mechanistic, serial mode in favor of a more organic, collaborative approach focused on problem resolution. Once the problem was resolved, the operator returned to her normal task and to the serial work design.

The Toyota production system might at first appear to be the ultimate in mechanistic design, but a closer look suggests something far more dynamic. When a worker pulls the Andon cord, the system actually moves between two modes based on the state of the work. Though the nature of the work couldn’t be more different, such movement between the two modes is also the key to understanding the success of agile software development.

Agile as Dynamic Work Design

As we discussed earlier, the last two decades have witnessed a significant change in the conduct of software development. Whereas software was once largely developed using what is known as the waterfall approach, agile methods have become increasingly popular. From a dynamic work design perspective, the waterfall and agile approaches differ significantly.

In the waterfall approach, the software development cycle is typically divided into a few major phases. A project might include a requirements phase, an architecture development phase, a detailed coding phase, and a testing and installation phase. A waterfall project typically cycles between three basic modes of work. First, the bulk of the time is spent by software architects and engineers working individually or in small groups, completing whatever the specific phase requires. Second, typically on a weekly basis, those people leave their individual work to come together for a project meeting, where they report on their progress, check to ensure mutual compatibility, and adapt to any changes in direction provided by leadership. Third, at the end of each phase, there is a more significant review, often known as a “phase-gate review,” in which senior leaders do a detailed check to determine whether the project is ready to exit that phase and move to the next. Development cycles for other types of non-software projects often work similarly.13

Agile development processes organize the work differently. For example, in the scrum approach14 (one version of agile), the work is not divided into a few major phases but rather into multiple short “sprints” (often one to two weeks in length) focused on completing all of the work necessary to deliver a small but working piece of software. At the end of each sprint, the end user tests the new functionality to determine whether or not it meets the specified need.

Like the waterfall method, the agile approach to software development also has three basic work modes — individual work, team meetings, and customer reviews — but it cycles among them very differently. First, proponents of agile suggest meeting daily — thus moving from individual work to teamwork and back every day — in the form of a stand-up or scrum meeting, where team members report on the day’s progress, their plans for the next day, and perceived impediments to progress. Second, agile recommends that at the end of each sprint, the team lets the customer test the newly added functionality. Finally, in something akin to the Andon cord, some versions of agile also include an immediate escalation to the entire team when a piece of code does not pass the appropriate automated testing, effectively again moving the system from individual work to the team collaboration mode.

Viewed from a dynamic work design perspective, agile offers two potential benefits over waterfall. First, in waterfall development, the frequency of collaborative episodes is usually too low, both among the team members and between the team and its customers. A developer working for a week or two without a check-in could waste considerable effort before it’s clear that he or she has made a mistake or gone off course. In practice, developers often do not wait this long and informally check in with supervisors or teammates. While seemingly functional, these check-ins can lead to a situation in which the entire team is not working from a common base of information about the state of the project. In such cases, the operating mode starts to migrate from the box on the lower left, the “factory” mode, to the one on the lower right, where ambiguous work is organized serially. This results in costly and slow iteration, which we call ineffective iteration. (See “Dysfunctional Dynamics.”) Research suggests that in R&D processes, this mode can be highly inefficient.15 Similarly, checking in with more senior leadership only in the form of periodic phase-gate reviews means that the entire team could work for months before realizing that it is not meeting management’s expectations, thus also potentially causing rework.

The agile approach to software development also improves the quality of the time that developers spend working alone. The focus on developing pieces of functionality means that both the team and the customer are never more than a few weeks away from a piece of software that can be used, making it far easier to assess whether it meets the customer’s need. In contrast, in waterfall, the early phases are characterized by long lists of requirements and features, but there is nothing to try or test. It’s not surprising that waterfall methods often lead to projects in which major defects and other shortfalls are discovered very late in the development cycle and require costly rework.16

Applying Dynamic Work Design

Both the Toyota production system and agile-based software methods are thus examples of what we call good dynamic work design. In contrast to traditional static approaches, dynamic work design recognizes the inevitability of change and builds in mechanisms to respond to it. Once managers recognize the necessity of moving between more individual and more collaborative modes of work, they can build on four principles to create shifting mechanisms that are well matched to the work of their organization.

1. Separate well-defined and ambiguous work. Begin by clearly separating well-defined and ambiguous tasks. Trying to handle both types of work in the same process often leads to trouble. (See “Dysfunctional Dynamics.”) Often, the two types can be separated by inspection, but if not, then look for the signature element of ambiguous work, iteration. When work is well defined, it can be moved to the next stage like the baton a relay runner hands off. When done correctly, it doesn’t need to come back. In contrast, when work is ambiguous, even the best effort often needs to be revisited. If you find that a particular task often requires multiple iterations through the same set of steps, that’s a good sign that you are confronting ambiguity inefficiently.

2. Break processes into smaller units of work that are more frequently checked. If you strip away all the hype, the agility of any work process — meaning its ability both to adjust to changing external conditions and to resolve defects — boils down to the frequency and effectiveness with which the output is assessed. In both traditional, pre-Toyota manufacturing and waterfall software development, assessments are infrequent and not particularly effective. Consequently, both approaches tend to be slow to adjust to changes in the external environment, and quality is achieved only through slow and costly rework cycles. In contrast, when assessments are frequent and effective, the process is highly adaptable and quality improves rapidly. The fundamental recipe for improved process agility is this: smaller units of work, more frequently checked.

3. Identify the chain of individuals who support those doing the work. It is also important to identify the help chain — the sequence of people who support those doing the work. In manufacturing, the help chain starts with a machine operator and extends from foremen to supervisors all the way up to the plant manager. In software, the help chain often begins with an engineer and moves through the team leader to more senior managers, ultimately ending with the customer. It is critical, in our experience, that you identify the chains of individuals who do and support the work, not their roles, departments, or functions. Increasing agility requires knowing whom to call when there is a problem or feedback is needed.

4. Introduce triggers and checks that move work into a collaborative mode. Once you understand the help chain, you have two basic mechanisms for activating it: triggers and checks. A trigger is a test that reveals defects or misalignment and then moves the work from a factory mode to a more collaborative mode. In our opening example, the Toyota operator’s inability to complete the assembly task on time triggered her pushing a button and then receiving help from a supervisor. A check involves a prescheduled point when the work is moved to a more collaborative environment for assessment. In agile software development, this shift happens daily in stand-up meetings where the team quickly assesses the current state of the project. Completing a sprint creates a second opportunity, this time to check in with the customer.
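
These mechanisms are simple enough to sketch in code. The following C++ fragment is purely illustrative (the names, the simulated defect, and the five-item check interval are our own assumptions, not part of any prescribed framework): a trigger or a prescheduled check moves work into the collaborative mode, and the work returns to serial mode once the issue is resolved.

#include <iostream>

enum class Mode { Factory, Studio };

// A hypothetical unit of work, processed serially unless a defect appears.
struct WorkItem { int id; bool defective; };

int main() {
  Mode mode = Mode::Factory;
  const int checkInterval = 5;  // prescheduled check every fifth item

  for (int i = 1; i <= 12; ++i) {
    WorkItem item{i, i == 7};  // item 7 simulates a defect

    if (item.defective) {
      mode = Mode::Studio;  // trigger: a defect forces collaboration
    } else if (i % checkInterval == 0) {
      mode = Mode::Studio;  // check: a scheduled collaborative review
    }

    if (mode == Mode::Studio) {
      std::cout << "Item " << item.id << ": reviewed collaboratively\n";
      mode = Mode::Factory;  // return to serial work once resolved
    } else {
      std::cout << "Item " << item.id << ": processed serially\n";
    }
  }
  return 0;
}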

Improving Procurement Performance

Using this dynamic work design framework within a company can lead to significant improvements in both efficiency and adaptability. Consider the case of a company we’ll call “RefineCo,” which owns several oil refineries and distribution terminals in the United States. The company had a procurement organization that was uncompetitive by almost any benchmark. RefineCo paid more for similar parts and services than its competitors, and the procurement group’s overhead costs were higher than the industry average. Even more troubling, when critical parts were not delivered to a refinery, it often turned out that the location was on “credit hold” due to an inability to pay the supplier in a timely fashion. Every participant in the system, from senior management down to the shipping and receiving clerks, was frustrated.

The procurement system at each of RefineCo’s sites worked roughly as follows. To purchase an item or service from an outside vendor, an employee would enter the requirements into the electronic procurement system, which would then appear as a request to the central procurement function. The staff in the procurement office would then review the request and issue a purchase order. That order would go to the supplier. When the product arrived at the refinery or the service was completed, a packing slip or service order verification slip would be generated, which would also be entered into the procurement system. Later, the supplier would generate an invoice that was also entered into the system. The electronic system would then perform a three-way match to verify that everything was done correctly: The purchase order should match the verification receipt, which, in turn, should match the invoice. If there was not a three-way match, the invoice would be “kicked out” of the system and the supplier would not get paid until the discrepancy was resolved.
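
The matching rule itself is mechanical, which is what made it suitable for automated processing. Here is a minimal sketch in C++ of the three-way match just described, using hypothetical record types (RefineCo’s actual system is not public):

#include <iostream>
#include <string>

struct PurchaseOrder { std::string id;   std::string item; double amount; };
struct ReceiptSlip   { std::string poId; std::string item; double amount; };
struct Invoice       { std::string poId; std::string item; double amount; };

// The purchase order must match the verification slip, which in turn
// must match the supplier's invoice; any mismatch kicks the invoice out.
bool threeWayMatch(const PurchaseOrder& po, const ReceiptSlip& slip,
                   const Invoice& inv) {
  bool orderMatchesReceipt =
      slip.poId == po.id && slip.item == po.item && slip.amount == po.amount;
  bool receiptMatchesInvoice =
      inv.poId == slip.poId && inv.item == slip.item && inv.amount == slip.amount;
  return orderMatchesReceipt && receiptMatchesInvoice;
}

int main() {
  PurchaseOrder po{"PO-1001", "gasket set", 450.00};
  ReceiptSlip slip{"PO-1001", "gasket set", 450.00};
  Invoice inv{"PO-1001", "gasket set", 475.00};  // supplier billed a different amount

  if (!threeWayMatch(po, slip, inv)) {
    std::cout << "Invoice kicked out: payment held until the discrepancy is resolved\n";
  }
  return 0;
}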

The job of resolving those discrepancies fell to the staff in the refinery’s purchasing office. Unfortunately, the products and services procured frequently failed the three-way match, leading to both an overburdened purchasing department and frustrated suppliers. Though the refinery was part of a large and successful company, it was frequently on credit hold with its suppliers for failure to pay invoices on time, making it difficult for the staff to do their jobs and run the plant safely. The dedicated procurement staff worked 10-plus hours per day and had hired temporary workers to help manage the backlog, but they were still falling behind.

Most of the members of the procurement team complained bitterly about being “overworked” and how “screwed up the system was.” Nobody saw any opportunity for improvement beyond adding what appeared to be much-needed staff. For us, the critical moment in our work with the procurement staff came when one of the longtime team members explained that a good purchase request contained “all the information I need” and could be turned into an official purchase order in “five to 10 minutes.” A difficult one, however, lacked key pieces of information and might require one to two hours to process as the purchasing staff traded emails with both the requesting unit and the supplier. Despite this effort, difficult purchase orders were usually the ones that failed the three-way matching process and got kicked out of the system. Further investigation revealed that the purchase order system was completely gridlocked with the kicked-out orders, and the team spent much of its time trying to clear the backlog. The system had descended into the classic “expediting” or “firefighting” trap: There were so many purchase orders in process that the turnaround time for any given one was very long. But long turnaround times created unhappy customers and suppliers who constantly called to complain and ask about their particular order or payment. Consequently, the procurement team was constantly reprioritizing its work and reacting to whichever customer or supplier was most unhappy.

Our first insight came in recognizing that the procurement team was engaged in two different types of work that corresponded to what we call serial “factory” work and collaborative “studio” work. When the requested item was standard and all the needed information was provided, a single person could easily process the request without collaboration; then, once the purchase order was entered, it would easily flow through the system, just like an item on an assembly line. However, standard requests flowed easily through the system only if the request came with the correct information. If it did not, then it could require several rounds of iteration, usually via email, to issue the purchase order. So the purchasing function created a simple checklist that described a good purchase request. The idea was to ensure that standard orders would always arrive with the correct information. To give the various departments an incentive to use the checklist, the purchasing function promised that any request received by 7 a.m. with the proper information would result in a purchase order being issued by 2 p.m. that day. At the time, a one-day turnaround was unheard of because every order simply went into the “to do” pile. The purchasing department also created a simple trigger to improve productivity: Purchase requests that were missing items on the checklist would be immediately returned to the requesting unit.

The second part of the intervention came in recognizing that not every request could be supported in factory mode. In the existing system, neither the requesters nor the purchasing staff distinguished between a standard request and a novel one. Thus, when a request for a new product or service showed up, the agent would do his or her best to process it, typically requiring multiple emails with the requester, often over several days, to nail down all the relevant information. In many cases, when the agents couldn’t get the information they needed, they would make their best guess and then submit an incomplete or incorrect purchase order. This, too, created additional iteration, as the supplier, unsure of what was being requested, would call or email the agent. The purchasing process was thus living in the lower right-hand box of our matrix, attempting to accomplish ambiguous work in a serial fashion and thereby creating slow and expensive iteration.

Creating an effective collaborative studio mode to handle the complex purchase orders required two changes in work design. First, the team created a clear trigger: If a request was nonstandard, then it was moved into a separate pile and not dealt with immediately. Second, each day at 2 p.m., the team would work together to process the more complex cases. By working collaboratively (in studio mode), they were able to resolve many of the more complex cases without additional intervention — somebody on the team might have seen a similar order before. Also, having a face-to-face meeting was far more efficient than the endless chain of email that it replaced. And, if additional information was needed, the team could schedule a phone call in the time window after 2 p.m., rather than send an email, again reducing the number of expensive iterations.
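
Taken together, the redesigned intake reduces to a simple routing rule. The sketch below (again in C++, with hypothetical types and names) captures both triggers: incomplete requests bounce back to the requester immediately, nonstandard ones wait for the 2 p.m. studio session, and everything else flows through factory mode.

#include <iostream>
#include <queue>
#include <string>
#include <vector>

struct Request {
  std::string id;
  bool isStandard;         // a routine item or service?
  bool checklistComplete;  // all required information supplied?
};

int main() {
  std::queue<Request> studioPile;  // worked jointly at the 2 p.m. session

  std::vector<Request> morningIntake{
      {"REQ-1", true, true},    // standard, complete
      {"REQ-2", true, false},   // standard, missing information
      {"REQ-3", false, true}};  // novel request

  for (const Request& req : morningIntake) {
    if (!req.checklistComplete) {
      // Trigger 1: incomplete request is returned immediately.
      std::cout << req.id << ": returned to requesting unit\n";
    } else if (!req.isStandard) {
      // Trigger 2: novel request waits for the collaborative session.
      studioPile.push(req);
      std::cout << req.id << ": set aside for the 2 p.m. studio session\n";
    } else {
      // Factory mode: in by 7 a.m., purchase order out by 2 p.m.
      std::cout << req.id << ": purchase order issued same day\n";
    }
  }
  std::cout << studioPile.size() << " request(s) queued for studio review\n";
  return 0;
}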

The results of these two changes were significant. Creating a factory mode for the standard orders allowed the team to make good on its “in by 7, out by 2” promise almost immediately, generating an immense amount of goodwill with the requesters. Spending the afternoon in studio mode also sped the processing of the complex orders. The two changes created enough space that the team was able to use studio time not only to process the more complex requests but also to work through the backlog of unresolved older orders. In the end, due to the efficiency improvements, the procurement team reduced its staffing by the equivalent of two full-time employees while providing far faster and more reliable service. These process improvement insights were then applied to the company’s other U.S. sites and, as of this writing, RefineCo pays more than 90% of its invoices on time, resulting in a far happier collection of suppliers.

Look for Best Principles

Managers and consultants are often obsessed with the search for best practices — those activities that appear to separate leading organizations from the rest of the pack. The idea behind this search is that once identified, best practices can be adopted by other organizations, which will then experience similar gains in performance. While there is certainly some truth to this idea, the supporting evidence is decidedly mixed. Organizations frequently struggle to implement new tools and practices and rarely experience similar gains in performance. In many industries, the performance gap between the top and middle performers remains stubbornly difficult to close. A key reason for these failures is simply that organizations are complex configurations of people and technology, and a set of tools or practices that works well in one context might not be equally effective for a major competitor — even if that competitor is located just down the street.

Best practices are “best” when they manifest an underlying behavioral principle in a way that is well matched to the organization that uses them. Toyota’s famed Andon cord and the localized problem-solving it catalyzes work by capitalizing on the efficiency that comes from individual repetition and the innovation that comes with collaborative problem-solving. Similarly, agile development methods work by channeling the creativity of software engineers through frequent team meetings and customer interactions. More generally, organizations become more adaptable when they find defects and misalignments sooner. A dynamic approach to contingency, supported by triggers and checks, can open the path to creating practices that support increased agility in the work of your organization.


MIT Sloan Management Review

Tesla Powerpack begins work on powering South Australia

Tesla Powerpack forms world’s largest lithium ion battery to help power South Australia

The 100MW Tesla Powerpack system has now been activated, allowing it to store energy produced by a nearby wind farm and stabilize South Australia’s electrical grid.

In September 2016, a massive storm caused an unprecedented state-wide blackout in South Australia, with 1.7 million people spending the night without power and questions raised about the stability of the region’s renewable energy supply.

The event led to the coupling of the Hornsdale Wind Farm with the world’s largest lithium ion battery. The set-up can power 30,000 homes for an hour (approximately the number of properties that lost power during the blackout) and otherwise support the region’s electricity supply.

The historic deal was struck between electric carmaker Tesla and French energy company Neoen, with government backing. Tesla boss Elon Musk famously promised that his company would get the Tesla Powerpack system installed and working within 100 days – or he would do it for free.

Building the world’s largest lithium ion battery

Musk went on to quote $250 per kilowatt hour for 100-megawatt-hour systems, saying that Tesla was moving to fixed and open pricing across the board. The project moved surprisingly quickly in the storm’s aftermath, with the South Australian government proving its reputation as a serious advocate of renewable energy.
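
Taken at face value, and assuming the quoted price applies to a system’s full storage capacity, that pricing implies a rough cost of:

100 MWh × 1,000 kWh/MWh × $250/kWh ≈ $25 million

for a 100-megawatt-hour installation.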

Tesla completed the project in around 60 days from the contract being signed, though the company reportedly got a head start on construction.

For all their environmental benefits, wind and solar energy are less predictable sources of power than fossil fuel or nuclear alternatives. The coupling of renewable technology with batteries is seen as a key way to prevent the kinds of widespread blackouts that South Australia experienced.

Tesla Powerpack: Unlocking the potential of renewables

“Tesla Powerpack will charge using renewable energy from the Hornsdale Wind Farm and then deliver electricity during peak hours to help maintain the reliable operation of South Australia’s electrical infrastructure,” announced Tesla. “The Tesla Powerpack system will further transform the state’s movement towards renewable energy and see an advancement of a resilient and modern grid.”

Musk claims that the 100MW battery is three times as powerful as the next largest in the world. As Australia grows more reliant on renewables, the project should help to pacify the political opposition cultivated by the state’s aggressive move to wind and solar energy and the blackouts that followed.

“While others are just talking, we are delivering our energy plan, making South Australia more self-sufficient, and providing back up power and more affordable energy for South Australians,” said State Premier Jay Weatherill, who flicked the switch to activate the Tesla Powerpacks.

Rechargeable lithium batteries have been used since the 1970s, but recent large-scale deployments in electric vehicles and the energy sector have seen demand escalate, threatening a shortfall of battery materials by 2020.

Tesla’s Powerwall residential battery is being installed across homes in Australia, too. The same technology used to stabilize the South Australian grid is allowing homeowners to collect energy during the day, via photovoltaic panels, and supply it at night, even if the grid goes down.


Internet of Business

Planning for the Future of Work

Digital technologies are poised to disrupt how work is done. Consider the popular example of the impending arrival of autonomous vehicles. When self-driving vehicles are mainstream — within the next decade or two (or less) — the impact on work in the United States alone will be massive.

According to the United States Bureau of Labor Statistics, 1.5 million people in the U.S. are commercial truck drivers, 800,000 work as delivery drivers, and another 1 million make a living as other types of transportation professionals — including bus drivers, taxi drivers, and Uber drivers. The secondary effects of self-driving cars and trucks would be far-reaching as well: By significantly reducing accidents, they would also affect auto-body shop workers, insurers, hospital emergency room workers, and a number of others.

Autonomous vehicles are only one maturing digital technology that will disrupt work. Add artificial intelligence, blockchain, additive manufacturing, and virtual and augmented reality to the disruptive mix, and the impact these technologies will have on work will be staggering. Many companies and executives are not planning for this future, and while some employees and leaders are considering how these technologies will affect their careers or their organizations, they may be doing it wrong.

The common approach, which focuses on identifying types of work that only humans can do, is an unproductive way to plan for the future of work. If one primarily fits human work into the gaps left by what computers cannot do, people will increasingly be squeezed out as technology becomes more advanced. As a general rule, computers have become capable of most things that we once thought outside the realm of computer expertise, such as facial recognition and language translation. This raises the question: How are people truly better than computers?

The Rise of Emotional Robots

We don’t know exactly how people will adapt or what the majority of jobs will look like in the future, but several pundits have attempted to identify the areas in which humans are superior to computers. Columnist and author Tom Friedman suggests that caring is a trait distinguishing people from machines, noting:

“We used to work with our hands for many centuries; then we worked with our heads, and now we’re going to have to work with our hearts, because there’s one thing machines cannot, do not, and never will have, and that’s a heart. I think we’re going from hands to heads to hearts.”

Anthony Goldbloom, the founder and CEO of Kaggle Inc., suggests that making decisions from incomplete data is another. This insight is reminiscent of what Pablo Picasso once said of computers: “But they are useless. They can only give you answers.”

But what happens when we create caring robots? Research has shown that people are more likely to open up to robots than to humans, because the fear of judgment is significantly diminished. To that end, Cynthia Breazeal of the MIT Media Lab is designing so-called “sociable robots” that can approximate empathetic connections.

Simulations can also allow AI to generate novel insights from past data that humans cannot. For example, when the AI system AlphaGo competed solely against itself to learn the game Go, instead of using data from human players, it was able to create insights and strategies that human players had not developed over centuries of play.

Seeking the Right New Opportunities

If, as Picasso implied, people are good at asking questions, what questions should we be asking? In the near term, one certainly might be, “What are the new opportunities that arise as technology takes over certain aspects of work?” MIT economist David Autor notes that there are actually more bank tellers in the U.S. today than there were before the advent of the ATM; they are just doing different work today than they did before.

At first, autonomous vehicles will certainly give rise to different types of work. Doctors, nurses, lawyers, and other professionals may be more apt to conduct house calls, as they’ll be able to use the travel time productively. People may be able to use their kitchens to start restaurants that rely on self-driving vans for food delivery. Certainly, still other new jobs are possible. Autor reminds us that just because we cannot envision these jobs now doesn’t mean they won’t emerge. The farmer disrupted by changes in agricultural technologies in the 1900s probably did not envision the future job of the data analyst predicting yield.

We must not ignore the fact that technology is likely to evolve to take over those new roles eventually as well — the sympathetic robot may one day replace the traveling human doctor. But we expect these changes to take place over time. Marco Iansiti and Karim Lakhani argue that it will likely be 20 years or more, for example, before blockchain becomes mainstream. Even if technologies evolve more quickly, societies and institutions often change more slowly.

Ironically, asking questions about new opportunities for work in light of technological disruption may be the one task for which humans are inherently superior to computers. In many ways, the ability to ask these questions combines the earlier examples of tasks in which humans outperform machines. It is part empathy, since it involves identifying unmet human needs and desires in this new environment. It is part decision-making based on incomplete data, since it means identifying needs in a new environment created by technological evolution.

Implications for Work

This perspective suggests that work will still exist in a digital future, but it will be different — and shifts will be unpredictable. This demands that people be prepared to be lifelong learners. Successful employees will pivot to new careers as their skill sets become undervalued in one job or sector, repurposing those skills in new roles or industries.

Companies should seek to support this need for lifelong learning in order to retain employees and guide their development. As we learned from our 2017 research on digital maturity, a few organizations employ this practice today. Some companies allow their employees to spend a certain portion of their workweek contributing to open-source software communities. Insurer Cigna Corp. conducted a strategic analysis of its future talent needs and now reimburses employees at a higher rate if they pursue degrees in those strategic areas. Employees value their organizations’ investment in their future; we saw that talent is up to 15 times more likely to stay with a current employer if that company provides opportunities to continue developing skills.

Organizations that want to stay ahead of the talent curve will help their employees develop new skills appropriate for a digital business environment. We have argued that this may mean organizations manage talent differently, building a workforce of long-term employees who assemble teams of on-demand workers, allowing the organization to nimbly adapt to change as well. Although these two types of workers will be managed differently, both should be developed and evaluated in ways that maximize the opportunity to adapt their skill sets to changes in the technology landscape. This is the future of work.


MIT Sloan Management Review

The Imperializer makes quick work of metric conversions

When you work in a machine shop, you often need to convert numbers from metric to imperial. If you have to do this on a regular basis, why not build a tool that makes it easy?

Instead of pulling out a phone or taping a calculator to their CNC machinery, NYC CNC came up with an Arduino Nano-based device that does this conversion in style. “The Imperializer” features a beautifully milled enclosure that magnetically sticks onto a machine, a backlit LCD, and a toggle switch to flip between metric and imperial units.

The Imperializer is a desktop or machine-mountable device that does one thing: converts inches to millimeters (and millimeters to inches)! It uses an Arduino Nano and is powered by a lithium battery that can be recharged with a Micro-B USB cable!
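
The published code lives on the project page, but the core logic is compact enough to sketch. The Arduino fragment below is illustrative only: the pin assignments and the potentiometer standing in for the input are our assumptions, not NYC CNC’s actual design.

#include <LiquidCrystal.h>

// Assumed wiring: standard 16x2 LCD on pins 12, 11, 5, 4, 3, 2;
// metric/imperial toggle switch on pin 7; a potentiometer on A0
// stands in for whatever input the real device uses.
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
const int TOGGLE_PIN = 7;

void setup() {
  lcd.begin(16, 2);
  pinMode(TOGGLE_PIN, INPUT_PULLUP);
}

void loop() {
  float value = analogRead(A0) / 10.0;  // stand-in value to convert
  bool toMillimeters = (digitalRead(TOGGLE_PIN) == LOW);

  lcd.clear();
  lcd.print(value);
  if (toMillimeters) {
    lcd.print(" in =");
    lcd.setCursor(0, 1);
    lcd.print(value * 25.4);  // 1 inch is exactly 25.4 mm
    lcd.print(" mm");
  } else {
    lcd.print(" mm =");
    lcd.setCursor(0, 1);
    lcd.print(value / 25.4);
    lcd.print(" in");
  }
  delay(250);
}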

If you’d like to have your own for your shop, the bill of materials and Arduino code can be found on the project page. The housing, and even a fully-assembled version, can be purchased here.

Arduino Blog

Connected Environments Will Not Work Without Accurate Asset Master Data

Manufacturers are beginning to use machine intelligence, smart sensors, and the Internet of Things (IoT) to create connected environments. There is broad consensus that transitioning to this type of advanced digital infrastructure will help improve visibility into process functions and allow algorithms and processing power to play bigger roles in optimizing the real-time health of critical assets.

“We are at the beginning of this smart machine journey,” says Dean Fitt, SAP solutions manager for enterprise asset management and plant maintenance. “People want to move from reactive maintenance to predictive maintenance. Sensors and other maintenance technologies have been around awhile, but they are being put together in new ways to transform how we maintain these environments.”

Some companies are tackling these challenges by using software, sensors, drives, and controllers to automate existing assets. This approach allows them to extend the useful life of 50-year-old hydraulic presses and hundred-year-old steam engines, for example. It also preserves more funds for situations where buying new assets is the best or only option for adding needed capabilities.

Master data management is essential for real process improvement

Being able to predict when asset maintenance is required is one of the biggest advantages offered by connected environments and IoT. But predictive analytics require both real-time data and detailed records of each facility’s as-built assets.

Ideally, this information, which includes a number of data types, would be defined as master data objects to ensure consistency across enterprise systems and processes. But capturing and standardizing data from disparate systems, digital formats, and hardcopy documents is often a low priority for project teams when they are focused on bringing new assets online.
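
To make the idea of a master data object concrete, here is one illustrative shape such a record might take, sketched in C++ (the fields are hypothetical; a real implementation would follow the organization’s own data model and governance rules):

#include <string>
#include <vector>

// One authoritative record per physical asset, referenced by every
// enterprise system and process that touches it.
struct AssetMasterRecord {
  std::string assetId;                 // unique enterprise-wide identifier
  std::string description;            // e.g., "hydraulic press, line 3"
  std::string facility;               // where the as-built asset sits
  std::string manufacturer;
  std::string model;
  int yearInstalled;
  std::vector<std::string> sensorIds; // IoT sensors feeding real-time data
  std::string sparePartsListId;       // link to the maintenance bill of materials
};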

“The master data is crucial,” says Fitt. “It is the foundation for everything. If you do not have a good foundation, you are building on quicksand.”

That is why organizations should treat master data management as a core function whenever they adopt, maintain, or automate any new or existing assets. Governance, controls, and workflows are essential for using asset data to minimize downtime, enable real-time decision-making, and increase process and worker productivity.

“Technology alone will not ensure accurate data,” says Peter Aynsley-Hartwell, chief technology officer for Utopia Global, Inc., a global data solutions company that focuses on information management. “A lot of people have information they do not trust. As soon as that happens, they begin making incorrect or poor decisions or no decisions at all. And they lose the opportunity to achieve a huge benefit from the information they have.”

Connected environments require a consistent and proactive strategy

As technology continues to evolve, manufacturing processes are likely to become more reliant on machine learning and artificial intelligence. Some manufacturers, distributors, and service companies will probably use processing, logic, and networking to continuously monitor and improve the quality and reliability of their assets.

“We may see some of these concepts make their way into our day-to-day manufacturing operations,” says Aynsley-Hartwell. “Perhaps when we have self-driving cars, they will diagnose and drive themselves to the service provider on their own initiative.”

A simple self-driving system is already in service in Australia, Aynsley-Hartwell notes. Rio Tinto, a British mining company, uses a fleet of 73 trucks, each weighing 416 tons, to haul ore along a fixed route. The vehicles are driverless and use GPS units, radar, and sensors to work 24 hours a day while saving the company 15% on overhead costs.

These technologies are evolving quickly, and numerous companies are working on making their assets more autonomous and “smart.” But none of these optimistic visions of the future will be realized without an effective strategy for acquiring and managing vast amounts of data.


Internet of Things – Digitalist Magazine