Beware the Biased Bot as Boss: Reclaiming Human Influence in the Age of the Algorithm

This blog was co-authored with former colleague Evan Sinar and originally published on DDI LeaderPulse (2018).

One of 2018’s hottest topics—and heaviest business investments—is the use of artificial intelligence (AI) to make decisions once made by people. The promise is clear: by replacing human decision-makers with models built on large-scale datasets and algorithms, AI can dramatically reduce decision time and improve predictive accuracy.

Frontline business leaders, those who provide direct supervision to large swaths of the workforce, make tactical and operational decisions every day about people, customers, and the business. As a result, they find themselves at the center of this trend.

This growing reliance on technologies like AI to make decisions raises key questions about the true role of a leader in the age of the algorithm, and about which leadership skills and attributes can't, or shouldn't, be displaced.

Alongside disruption of the leader's role, AI's impact on HR processes and the workplace is also rapidly escalating. Traditional HR systems with long histories of leader involvement, such as interviews to hire new employees, are being updated to remove humans from the equation. The efficiency and predictive-accuracy rationale for these changes is sound. Yet while it is true that well-crafted AI models (for hiring, among other purposes) can often outpredict human judgment, accuracy isn't the only measure of success, and that forces a harder look at the tradeoffs of AI-driven decisions.

Big data. Big assumptions.

As organizations strive for greater diversity and inclusion in the workplace, the pursuit of objectivity in decision-making is important for individuals and organizations alike. The emergence of big data, analytics, and AI has fueled hope that decisions will be driven by greater objectivity and fairness. However, Nate Silver, an American statistician and writer who spends much of his time studying and writing about prediction, is far more circumspect. In his book, The Signal and the Noise: Why So Many Predictions Fail--but Some Don't, Silver argues, “Data-driven predictions can succeed–and they can fail. It is when we deny our role in the process that the odds of failure rise… Unless we work actively to become aware of the biases we introduce, the returns to additional information may be minimal–or diminishing.”

Silver’s position touches on a phenomenon known as algorithmic bias, which is getting more attention as technologies like AI rapidly expand. Algorithmic bias is rooted in the way algorithms are built and trained, and it becomes more problematic as software grows more prominent in every decision we make. The core problem is that the algorithms used in AI and other software can manifest and reinforce the same biases as humans.

So how does this occur?

First, all algorithm-based tools are built using historical data. Regardless of how much data is used to build a new algorithm, it rests on the core assumption that historical data will predict future outcomes. This is a dangerous assumption for two reasons: the rapid pace of job change means that older data is increasingly irrelevant to future business realities, and it risks calcifying past prejudices and poor diversity records.

From an HR perspective, this means instruments commonly used in key people decisions, like employment tests, high-potential assessments, and automated interviews, may favor individuals who bring qualities and attributes suited to a past context and miss individuals who may be well suited to a future one.
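
To make this concrete, here is a minimal sketch in Python, using synthetic data and hypothetical variable names rather than any vendor's actual model, of how a hiring model trained on biased historical decisions learns to reproduce that bias rather than correct it:

```python
# A minimal sketch with synthetic data: a hiring model trained on biased
# historical decisions learns to reproduce the bias, not correct it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

# Two candidate groups with identical underlying skill distributions.
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)     # true ability, same for both groups

# Historical hiring decisions: skill mattered, but group B also faced a penalty.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# Train on the biased historical labels; group membership (or any proxy
# for it) is available to the model as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Score two equally skilled candidates who differ only in group membership.
p_a = model.predict_proba([[1.0, 0]])[0, 1]
p_b = model.predict_proba([[1.0, 1]])[0, 1]
print(f"P(hire | skill=1.0, group A) = {p_a:.2f}")
print(f"P(hire | skill=1.0, group B) = {p_b:.2f}")  # noticeably lower
```

The model isn't malicious; it is simply optimizing to predict the past, penalty included. And removing the group column wouldn't necessarily help: any proxy for group membership in the data, such as school or zip code, produces the same effect.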

Second, the people who write the algorithms that underpin technologies like AI can unconsciously (and sometimes consciously) incorporate their own biases through the choices and assumptions they make when selecting data to build their models. In 2017, Apple was accused of racism amid reports that its face recognition software was not able to distinguish between Chinese users. And early voice recognition programs failed to recognize female voices and accents because they were built and tested largely by men and native English speakers.

With AI, the risk becomes even greater as algorithms continue to learn from human behavior. They can't correct for past bias; instead, they learn from it and quickly perpetuate discriminatory patterns.

Predicting the past: implications for HR

What are the implications of algorithmic bias for HR practitioners? 

  • HR practitioners must recognize that the payoff for algorithm-based decisions often lies far in the future. The return on a leader selection or high-potential decision is typically realized at some future point in time, often within a business context far removed from the one that produced the data used to build the algorithm.

  • HR needs to understand the strengths and limitations of data, analytics, and new technologies such as AI. However, DDI’s Global Leadership Forecast 2018 found that HR leaders felt significantly less prepared than their peers to confront the challenges of big data, analytics and the digital environment.

  • As the guardians of objectivity and fairness in people decisions, HR should exercise caution when considering any tool or instrument that claims to make concrete predictions about future performance. Performance is less portable than we think, and as companies continue to change, so will the context against which success is defined.

The appeal of AI is strong, particularly when so many other data points suggest leaders struggle to eliminate their own bias and subjectivity in people decisions. But HR needs to facilitate the role of leaders, not remove them. Leaders, too, especially those overseeing the day-to-day activities of their employees, need to recognize and embrace their role in building algorithmic models and in the decisions those models eventually drive.

How leaders need to step up, and step in

With HR as a partner to engage and leverage business leaders' expertise, what actions should leaders take now to restore their critical role in decisions made by or alongside AI systems? We recommend three key responsibilities that can't be overlooked or deprioritized:

  1. Crack open the black box—More than ever, leaders must demand explainability for new AI models that affect employees. It's not enough to know that an algorithm's decision is accurate; it must be understandable, too. This forces a level of transparency that is critical to maintaining employee trust.

  2. Confirm the right data's being used—A common saying among data scientists is, “garbage in, garbage out.” That is, no amount of sophistication in an algorithm can make up for poor-quality data sources used to build the model. Business leaders are experts not only on the data that should—and shouldn't—be gathered from and about employees, but also on which data points are actually meaningful rather than mere noise or random variation.

  3. Advocate for employees—AI-driven decisions affecting the workforce aren't just technology issues; they're people issues. Leaders must represent employee interests such as privacy and equity by putting themselves in the shoes of a valued employee denied a long-awaited promotion because of an opaque algorithm. One simple equity check leaders can request is sketched after this list.
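
As one illustration of the third point, here is a minimal sketch in Python, with hypothetical column names and made-up numbers, of the kind of adverse-impact audit a leader could ask for before an AI screening model goes live. It applies the EEOC's four-fifths rule of thumb for comparing selection rates across groups:

```python
# A minimal sketch of an adverse-impact audit using the four-fifths rule:
# compare the model's selection rates across groups before it goes live.
import pandas as pd

# Hypothetical log of model decisions; in practice this would be exported
# from the screening or promotion system under review.
decisions = pd.DataFrame({
    "group":    ["A"] * 200 + ["B"] * 200,
    "selected": [True] * 120 + [False] * 80 + [True] * 70 + [False] * 130,
})

rates = decisions.groupby("group")["selected"].mean()
impact_ratio = rates.min() / rates.max()

print(rates)                       # selection rate per group
print(f"impact ratio = {impact_ratio:.2f}")
if impact_ratio < 0.8:             # the four-fifths rule of thumb
    print("Potential adverse impact: demand an explanation before deploying.")
```

A check like this doesn't replace a proper validation study, but it gives leaders a concrete, explainable question to put to a vendor or data science team, which is exactly the kind of influence this post argues they should reclaim.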

With these considerations in place, and supported by well-established analytical HR practices, blended AI-leader decision models can begin to surpass the returns promised by AI alone, and can do so more fairly than algorithms by themselves. AI models can also be aimed at long-standing talent challenges, such as recognizing the contributions of long-overlooked employee groups. As a recent example, an AI startup named Primer created a model to identify prominent women researchers whose work nonetheless hasn't been featured in Wikipedia, a key limiter on the visibility of their work to the general public.

Ultimately, the solution to algorithmic bias in people decisions won’t be based on more complex models or larger datasets. Instead, it will be driven by HR and leadership recognizing their role and taking ownership of the issue. When these groups exert a stronger influence on how new AI models are created and monitored, the results will better balance prediction and fairness.
