How People Analytics Improves Employee Performance
MIT Sloan Management Review: We’ve heard about big performance gains companies were getting as a result of data and analytics. What sort of exciting things do you have going on? What recent advances has Humanyze made? Where are you now?
Ben Waber: There are many changes on both the technology and business model fronts. We analyze data on how people interact and collaborate at work in two ways. One is these next-generation ID badges, which are about the size of those plastic ID badge holders. They have RFID and NFC (near field communication) so you don’t need a separate badge; they can completely act as your ID. They transmit data wirelessly, so they’re much easier for people to use. We also have digital data — email, chat, meeting data, and phone call data — because that’s also an important part of how people work.
In the past — say, two years ago — when we would bring people on, we’d have them start using the badges, and if they had digital data, we would use that. We had a general-purpose people analytics platform that could solve lots of different problems. Now we’ve done a 180. We start out looking at only digital data: email, chat, meeting data, who communicates with whom, when they communicate, no content, that sort of thing.
We provide dashboards that highlight specific business problems, including collaboration on project delivery, workload assessment, diversity and inclusion, workplace planning, and risk assessment. This direct insight into where businesses need to be looking is why, over the last year, we’ve at least doubled our business and our number of users every quarter.
We went that way because it allows companies to take the data they already have and apply it to specific business problems they can easily identify. We did a lot of work developing those dashboards to make sure these metrics are predictive across a wide variety of companies and that they actually solve these business problems.
For example, take diversity and inclusion. Lots of companies have diversity and inclusion initiatives, but fundamentally, they don’t know what real impact those initiatives have. A couple of months ago, Google LLC announced that it had spent $217 million on such programs over four years, and they had no effect. It took that long to find out because the company was looking at lagging indicators: promotions, retention, that sort of thing. And, of course, those things take years to change.
Now you roll out a program and ask: Does it change how much managers communicate with the women on their team? How often are women invited to meetings? How broad are the networks women have in the company? Those are the metrics we provide, and they enable companies to do rapid A/B testing with programs they’re already planning to roll out. That’s orders of magnitude faster. If you roll something out, and after a month or two it hasn’t changed how people work, it’s not going to have those longer-term impacts either.
Can you share any specific examples of companies you worked with and how they’ve applied this?
Broadly speaking, with workload assessment, which comes up especially with our Japanese customers, we partner with organizations experiencing problems with overwork. You might have heard of Dentsu Inc.
At the end of 2017, it was a 67,000-person advertising company, the second largest in the world. It owns 80% of the Japanese market. In 2016, a 24-year-old employee committed suicide. She had been working more than 120 hours a week every week for over two years. As a result, the CEO resigned and Dentsu is being fined millions of dollars monthly by the Japanese government for not addressing the problem. Every major Japanese company is under immense public and governmental pressure to show that they’re working on this problem.
But the issue is similar to D&I: the current state-of-the-art solution is to roll out a workload reduction program, and if in the next year no one kills themselves, you say it worked. Which is awful, but that’s what it is. What we provide is a picture of how much people work. It’s built from individual data, but what you see is, across the company, what percentage of people work more than 14 hours a day, 12 to 14 hours, 10 to 12 hours, and less than 10 hours. And then you can drill in. You can ask: Which teams are responsible for that? Why is it happening? Is it managers driving it? Is it coworkers? Is it customers?
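The banding described here is straightforward to compute once you can infer a workday length per person from digital activity. A minimal sketch, assuming that inference has already been done; the band edges come from the interview, while the function names and sample data below are hypothetical.

```python
from collections import Counter

def hours_band(daily_hours: float) -> str:
    """Map an inferred average workday length to the reporting bands above."""
    if daily_hours > 14:
        return ">14h"
    if daily_hours > 12:
        return "12-14h"
    if daily_hours > 10:
        return "10-12h"
    return "<10h"

# Hypothetical per-employee average workday lengths, e.g. inferred from the
# span between first and last digital activity each day.
workdays = {"emp1": 9.5, "emp2": 13.1, "emp3": 15.0, "emp4": 11.2}
counts = Counter(hours_band(h) for h in workdays.values())
shares = {band: n / len(workdays) for band, n in counts.items()}
print(shares)  # each band's share of the workforce
```

Drilling in by team would then just mean grouping the same per-person values by team before counting.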
But the sophistication of companies using these metrics is still quite low. Our customers use this essentially as a diagnostic and assessment tool: you need to know the state of the world. Without that, it’s hard to know which division you should roll something out in first or what the biggest problem areas actually are.
The next step is: now that I know what the problem is, I have programs I’ve been planning to run — for example, to address overwork or to improve diversity and inclusion. I have a toolbox, and I want to know whether one of those tools works, so I try it first in a few teams. If it doesn’t work in a small part of the organization, I’m unlikely to invest the money to roll it out everywhere. Testing programs this way gives you insight into their efficacy before a larger-scale rollout to the entire company.
How do you distinguish 14 hours of straight work from somebody who checks email once an hour but is not working nonstop that whole time?
People who check email only once an hour are quite rare. Still, even with the complete data we have for thousands of people, which we used to develop this, there are things you miss. Imagine you’re working on a PowerPoint deck for two hours. You didn’t put anything in your calendar about it, you don’t chat with anybody, and you don’t answer email. We won’t capture that completely. What we’ve found in the cases where we do have complete data, where we know literally what people are doing every day, is that our accuracy using just digital communication is between 80% and 85%, well above the accuracy of other methods such as surveys or management observation.
We try to be conservative with that estimate. Someone working on a deck for two hours without checking email or chat is rare, but it does happen. Those two hours won’t show up, so when we say that, let’s say, 4% of employees on average are working more than 14 hours a day, it’s at least 4%; the real number is certainly higher. Our customers will look at the data, and when they find a really big issue, they’ll roll out the badges, because those give you a lot more context.
We eventually want to get to the point where these badges replace all company ID badges, and not just through our own work. We’re starting to talk with other electronics manufacturers, and you’ll soon see Humanyze-compatible smart badge platforms that aren’t from us, because we’re focused on understanding the data and are moving away from building the hardware ourselves. We want to create an ecosystem where you can continuously identify big problems with digital data and then really dive into the important areas with sensors as well.
I’m sure there are some privacy concerns. What pushback have you gotten on that, and how have you managed it?
We process digital data in such a way that we’re not collecting personally identifiable information. On the customer’s servers, we extract metadata from the communication systems that are used. But before we even see it, all the IDs — whether that’s an email address, a name, whatever — are hashed and salted. I don’t see “Ben at Humanyze.com”; I see “x123.”
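The salted hashing described here can be sketched in a few lines. This is a generic illustration, not Humanyze’s actual scheme: the salt value, token format, and function name below are all assumptions.

```python
import hashlib

# Illustrative only; a real deployment would keep the salt secret on the
# customer's own servers so the vendor can never reverse the tokens.
SALT = b"per-deployment-secret"

def pseudonymize(identifier: str) -> str:
    """Replace an email address or name with an opaque token before analysis."""
    digest = hashlib.sha256(SALT + identifier.lower().encode("utf-8")).hexdigest()
    return "x" + digest[:8]  # short opaque ID, like the "x123" in the interview
```

Because the same input always maps to the same token, the communication network’s structure survives the transformation, but the analyst sees only opaque IDs. (For a keyed construction with stronger guarantees, `hmac` would be the more standard choice.)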
That means for our customers, which are pretty much all multinationals, we can deploy technology in the European Union and go above and beyond EU privacy standards in terms of the digital side. We don’t give individual data to the companies. Everything has to be aggregated. You need at least three people grouped together for us to show it. With the badges as well, you can’t just show up one day in people’s offices and say, “Hey, we’re going to analyze you.”
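The three-person floor mentioned here amounts to a simple suppression rule before anything is reported. The threshold of three comes from the interview; everything else in this sketch is illustrative.

```python
def aggregate_metric(values, min_group=3):
    """Report a group average only when the group is large enough
    that no individual's value can be singled out."""
    if len(values) < min_group:
        return None  # suppressed: fewer than three people in the group
    return sum(values) / len(values)

print(aggregate_metric([10.2, 11.5]))        # suppressed
print(aggregate_metric([10.0, 11.0, 12.0]))  # reported
```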
We defuse it. We deploy on an opt-in basis. We give people consent forms that show the actual database tables we collect: “We will not share your individual data with your employer.” “We don’t report what you say.” “We don’t count how many times you go to the bathroom.” That’s a legal contract with the opt-in users.
But even with that, with the first group we roll out to at a company, it’s a four-to-six-week process. We show them articles about the technology and some of our results. Then we meet with the managers and show them what we’re planning to do. At that point, we can pull results from their digital data and say, “Listen, here’s the specific problem you’re trying to address.” Then you meet with the frontline employees and say, “Here’s what we’re trying to do. Here’s what we measure.” And you answer all their questions and hand out the consent forms. We get more than 90% participation on pretty much every rollout we do, which is quite good.
But the key is once you spend a lot of time with that first group at a company, which is typically a couple hundred people, it’s really easy to roll out in other divisions, because then it’s not me or someone in management saying, “Hey, this is great.” Instead, it’s them talking to their coworkers.
You get to see your own data, which is essentially a Fitbit for your career. You can compare yourself to the team average. And not just averages — let’s say I’m a salesperson, and I want to be the best salesperson. Well, do I know what the best salespeople in the organization do? And where are there significant differences?
Do they talk more to customers? Do they dominate conversations with customers? Do they listen more? Do they spend more time with their team, or do they interact more with other teams? These are basic questions, but they have a powerful impact on outcomes. These are things that a really good mentor will tell you. But even then, you can’t see quantitatively if you are moving in the right direction day by day. And beyond that, if your company rolls out a new product or you’re working on a new initiative, what worked up to now might not work anymore.
But now you’re able to see that. You can see where you are versus where that optimum is, and you can change yourself. We try to make it really valuable not just for management, not just for mid-level managers, but also for individual employees. That drives up adoption. Across the board, being very transparent about what we do is really important.
In general, do you find that the individual employees like this data, that it helps them? Do they respond positively?
Yeah. I think there’s a real curiosity about the data. A lot of us probably have an intuition about our behavior. You might think that you listen a lot. Or that you’re a talker. But then you actually see your data. In my case, I’ve been able to steadily reduce the amount I talk on average over time. But still, in the beginning of this year, my running average was a little bit above 60% in an average conversation. You might say that 50/50 is OK. Except the average conversation has more than three participants, so being completely balanced would mean you’re talking about 25% of the time. I’ve gotten it down — I’m at 53% now — but I’ve got to work on that.
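The arithmetic behind this is simple: with n participants, a perfectly balanced conversation gives each person 1/n of the talking time, which is why 25% rather than 50% is the benchmark once a conversation has four people. A small sketch; the speaker names and durations below are made up.

```python
def talk_shares(seconds_by_speaker):
    """Each participant's fraction of total talking time in one conversation."""
    total = sum(seconds_by_speaker.values())
    return {name: t / total for name, t in seconds_by_speaker.items()}

# Hypothetical four-person meeting: a balanced share would be 1/4 = 25%.
shares = talk_shares({"me": 360, "a": 100, "b": 80, "c": 60})
print(shares["me"])  # 0.6 -> dominating the conversation
```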
You’re moving in the right direction.
That’s right. For a lot of employees, the system as it exists is interesting, but not critical, for their work. We are working to make it more critical. People who have hard key performance indicators, such as salespeople or people in quality control, make changes quickly because their salary depends on how much they sell or the number of bugs they fix. And if they change their behavior in the way the data says they should, more likely than not, they’re going to make more money that month. It’s a pretty strong, immediate motivator.
For other people, the situation is closer to: If you do this, you are 10% more likely to get promoted in the next year. That’s not nothing, but I think humans respond to near-term things. We’ve at least made it interesting enough for individuals that they will keep using it, and the large benefit they get in the near term is that their workplace and their company dramatically improve. And moving forward, we’re going to be doing some interesting things soon to help with career progression and get more explicit about that.
You said these companies are not very sophisticated in their use of the analytics. Are they moving in the right direction, and what are the keys that you’ve seen in the companies that are sophisticated?
When it comes to analytics in particular, I can count on one hand the number of organizations globally that I feel are on the path to doing it right. There are roughly four or five stages of sophistication around this.
The first is you’re applying people analytics to a decision you have already made, and you want to see the impact “before and after.” It’s a low-impact way to get started using this technology. Let’s say I’m going to change our office layout or the organizational chart. What does it actually do? A pretty easy, good place to start.
The next step is to proactively use these analytics to plan an intervention with a test group and a control group. And that requires not just the ability to analyze the data or use metrics in the dashboard but also the ability to culturally execute something like that, to say I’m going to change something for some people and not for others. That’s culturally difficult and takes time.
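Comparing a test group against a control group this way is, in essence, a difference-in-differences calculation on a behavioral metric. A minimal sketch; the metric name and numbers below are made up for illustration, not taken from the interview.

```python
def diff_in_diff(test_before, test_after, control_before, control_after):
    """Change attributable to the intervention, net of whatever trend
    the control group experienced over the same period."""
    return (test_after - test_before) - (control_after - control_before)

# Hypothetical metric: average weekly manager-to-report interactions.
effect = diff_in_diff(test_before=4.0, test_after=5.5,
                      control_before=4.1, control_after=4.3)
print(effect)  # about 1.3 extra interactions per week for the test group
```

The hard part the interview points to is not this arithmetic but the organizational discipline of actually holding back a control group.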
The next step is to test multiple things at once. That’s really hard, because you’re not just changing where someone sits. You could be changing who their boss is or how they get paid. And the last stage is running tests across every single people decision you make. The furthest along any company has gotten right now is single tests.
It’s not a technology problem, in the sense that our technology could run tests across everything. There’s a lot of work you have to do both analytically and, more importantly, culturally. That’s why this is an interesting time: in 10 or so years, this sort of thing is going to be table stakes. The performance benefits you get from using this technology will dwarf those of the data-driven decision-making you see in marketing, which today you simply don’t exist without. This is going to be an order of magnitude beyond that.
In 10 years, if all you’ve done is install a technology and buy some dashboards, it’s not going to improve your performance, and you’re not going to exist, because you haven’t invested in changing the way you make decisions. That takes years. We’re seeing some companies either just starting that transition or a couple of years in.
For example, one of our customers is a Fortune 100 energy company. They’ve been using our technology for three and a half years now continuously. At first, they were doing “before and after.” Now they are running seven tests across their business. And they have well over 100,000 employees. They could be doing hundreds of tests. And granted, they’re a very large organization. Things move slowly. But I think that illustrates that even with continued application over the years, building the cultural sophistication and the internal team to make this happen takes time. It’s really important to invest in it.
There’s a lot of research about the importance of breaking down silos, being more collaborative, and trying to create more cross-functional teaming. Are you seeing any hard data that supports the value and benefits of cross-functional teaming?
Absolutely. What’s fascinating is, depending on the type of work that teams are doing, the degree to which that is important changes. In general, if you are in a more straight execution mode in a much tighter group, limited interaction with other teams tends to be more effective. If you’re trying to come up with new product ideas or doing research and development, then talking to other teams is a lot more important.
But it’s a whole gradation. It’s almost never good to be completely isolated as a single team; even teams in execution mode need to be able to very quickly work with other teams to be effective. As an example, there’s an IT firm configuring multimillion-dollar data servers, and the biggest long-term predictor of team performance is how tightly connected they are as a team. But for specific high-complexity tasks, if you communicate more with teams that are further removed from you socially, you can typically complete that task in about a third of the time it would normally take. So, huge impact.
You see the same thing in pharma R&D, where cross-functional team interaction is hugely predictive, at least of subjective performance; hard numbers are more difficult to get there, but there’s lots of data on it. As with anything, cross-functional interaction is broadly important, but the degree to which you should engage in it depends on what you and your organization are doing at any given time.