Marcus Buckingham, best-selling author and founder of TMBC, outlined the three seismic shifts in talent management that will take an organization’s focus down to the local level, upset the traditional performance review process, and up-end traditional competency models. During his keynote at Achievers Customer Experience (ACE) 2015, Buckingham explained that organizations will need to move from big data to real-time, reliable data.
According to him, performance ratings data is typically “garbage” because it is generated only once or twice per year, and it’s based on the fallacy that human beings can be reliable raters of other human beings. In fact, he says that humans are horribly unreliable raters, and research has recognized this for years.
Enterprises invest billions in the traditional performance reviews that take place each year. After the reviews are completed, data has to be compiled, reviewed, and analyzed by human resources and then packaged up and sent back to leaders before anyone can get a raise, promotion, or learning development plan (or termination). But these workforce analytics are based on obsolete data. It would be better, as noted in the previous post, to upend the process and make performance reviews an ongoing activity in which managers ask real questions about their employees.
In traditional performance reviews, more than half of the rating reflects the manager’s own rating patterns rather than the employee’s performance. For instance, if a manager has given a 4 to two employees in a row, they’ll likely be more inclined to give a 3 or a 5 to the next person in line. According to the study Understanding the Latent Structure of Job Performance Ratings, “Our results show that a greater proportion of variance in ratings is associated with biases of the rater than with the performance of the ratee.”
Companies have known about – and have been trying to remove – these idiosyncratic rater effects (IRE) for decades. Recently, some companies have decided to stop doing performance reviews altogether. But Buckingham says that reviewing isn’t the problem; it’s the ratings and the IRE that lead to bad data.
Companies actually need a range of data and a differential between people in order to determine how to pay and promote them. He says companies should be asking: How do we capture good data about our employees?
This takes us back to the team lead. Buckingham says instead of asking the leader to rate his or her employees objectively (which is rarely possible), you should turn the questions around so that the rater is asked to record their own feelings and actions:
- I always go to Jane when I need extraordinary results. (1-5)
- I choose to work with Jane as much as I possibly can. (1-5)
- Would I promote him/her today if I could? (Y/N)
- Does he/she have a performance problem that I need to address immediately? (Y/N)
Those answers can then be compared with data about each team leader’s intentions for the team, weighted by how long he/she has worked with each employee, and adjusted for the fact that each leader has innate rating tendencies (more critical or more lenient, for example). The result is a set of natural, real performance ranges that can be used to make solid decisions about pay, promotions, and training.
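One way to picture the adjustment for innate rating tendencies is to center each employee’s score on their leader’s own average. The sketch below is purely illustrative – the names, data, and the simple mean-centering approach are assumptions for demonstration, not Buckingham’s or TMBC’s actual method:

```python
# Hypothetical sketch: adjust rater-centric survey scores for each
# leader's innate rating tendency by centering on that leader's mean.
from statistics import mean

# Hypothetical 1-5 responses from two team leads to items like
# "I always go to this person when I need extraordinary results."
ratings = {
    "lead_a": {"jane": [5, 5], "sam": [3, 4], "ravi": [4, 4]},
    "lead_b": {"kim": [3, 3], "lee": [2, 3], "pat": [4, 4]},
}

def adjusted_scores(ratings):
    """Center each employee's average score on the leader's own mean,
    so a habitually critical or lenient rater doesn't skew the range."""
    out = {}
    for lead, people in ratings.items():
        lead_mean = mean(s for scores in people.values() for s in scores)
        for person, scores in people.items():
            out[(lead, person)] = round(mean(scores) - lead_mean, 2)
    return out

print(adjusted_scores(ratings))
```

Under this toy adjustment, lead_b’s top pick (pat) scores as highly relative to lead_b’s baseline as lead_a’s top pick (jane) does relative to lead_a’s, even though lead_b gives lower raw numbers across the board.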