Originally posted August 12, 2013. Updated October 20, 2015 with newer data.
Corporations adopting Agile practices on their way towards being Agile often struggle with legacy operational policies and procedures. One question that always comes up: how do you conduct performance appraisals with employees when Agile Teams are supposed to be Self-Directed, Self-Managed, and mostly autonomous? Adobe and Motorola have given us two good examples of successful transformations, but most companies not ready to jump that far just yet don’t know where to start. This article covers one of many paths we are exploring and piloting with some of our clients.
The typical scenario we find in organizations is a line manager responsible for managing a team or group of teams who is also responsible for the career development plans and performance appraisals of the people on those teams. This has always proven to be a fool’s errand for managers. As New York Times columnist Phyllis Korkki notes:
Many businesses feel that they must use formal reviews and rankings to create an objective measurement of performance and goals, so that managers can reward and promote good employees, and give poorly performing ones a chance to improve (while creating a paper trail in case they must be dismissed).
Making matters worse, in the mid-90s, a popular system for front-line employees emerged from GE’s Jack Welch, which
stated that employees should be lined up along a three-piece bell curve: the top 20% would get rewarded, the middle 70% would be told how to improve, and the bottom 10% would be discarded. This is called forced or stack ranking; according to an in-depth Vanity Fair report, it’s the system that “crippled” Microsoft’s ability to innovate.
The system at Microsoft pitted employees against one another in an attempt to reward the best and weed out the rest. However, the system backfired. Former employees have described feeling helpless and being rewarded to “backstab their co-workers.” Bill Hill, a former manager, is quoted in Fast Company Magazine as saying, “I wanted to build a team of people who would work together and whose only focus would be on making great software. But you can’t do that at Microsoft.”
It is also the system still used by many of our clients and the reason why we find their cultures lacking innovation, trust, and employee engagement. We are finding the very behavior illustrated in the 2006 MIT study that stated, “… the rigid distribution of the bell curve forces managers to label a high performer as a mediocre. A high performer, unmotivated by such artificial demotion, behaves like a mediocre.”
Motorola had a similar stack-ranking system, which it dropped in 2013. Then-CEO Greg Brown noted in Crain’s Chicago Business that, “People had an unbelievable focus on their rating. So we decided to forget the rating and just link performance to pay more directly. You no longer have a forced bell curve, which can be demoralizing and can create a culture of infighting.”
Fortunately for Microsoft, stack ranking was dismantled in late 2013/early 2014. Not so for Yahoo, which decided in the same period to adopt it. Yahoo later backed off of stack ranking, but it appears to have had a lasting effect, one of many bad decisions that did. Motorola made a number of mistakes too: it linked performance directly to pay, which the Federal Reserve study quoted in Dan Pink’s book Drive showed is largely a disincentive for most knowledge workers.
Which of these companies is doing well? Microsoft is turning around. Yahoo is on life support as of this writing. Motorola has had to sell off most of its business units at fire-sale prices, leaving a shell of its former self struggling to compete in the commercial and defense communications markets.
At face value, the annual performance review itself was never the problem. The problem was that the line manager didn’t do a great job of ensuring the context, contents, and resulting rewards matched reality. Very few line managers have line-of-sight into knowledge workers’ daily lives. Complicating the matter is the very definition of a knowledge worker: someone who knows more about a domain than their manager does. Now ask that very same line manager to stack-rank their reports; the results have largely been disastrous. We took a system meant for judging the performance of factory work in the early 1900s and attempted to adapt it to services organizations and knowledge work, when we should have replaced it entirely.
Some companies attempted to do just that in the 1990s, recognizing the new era of knowledge work. That is when the 360-degree review emerged as a method for reviewing and improving the performance of managers; it has since been extended to employees at all levels.
My favorite commentary on the 360 Review comes from PerformanceAppraisals.org. In their article on the “Strengths of 360-Degree Feedback Schemes” they state:
The 360-degree feedback process involves collecting information about performance from multiple sources or multiple raters. For example, a review of a manager’s performance might involve collecting data, opinions, and observations from his or her employees, immediate supervisor, colleagues, and even customers. A review of an employee without supervisory responsibilities might entail eliciting the perceptions of his or her supervisor, customers, and colleagues. Typically those perceptions are collected using a rating system, so in a sense 360-degree feedback is a subset of the ratings method, with all the advantages and drawbacks of any rating system.
The theory makes sense. If you want to improve performance, you can learn more by taking into account the perspectives of a number of “involved parties,” rather than only the perspective of the employee’s immediate supervisor. The implementation, however, is problematic.
Clouding the issue considerably is that the sale of 360-degree feedback instruments, particularly computer-based tools to make the process easier, has become a huge and very lucrative business. Because of the amount of money involved in the industry, there’s a huge level of hyperbole and a lot of exaggerated success stories out there. The 360 method has become one of the more common “management fads.” That’s not to say it can’t be useful, but often the problems associated with it are ignored in favor of an unbalanced focus on its strengths.
So what is a company to do when transforming itself?
The goal should be to get to a system more like the one Adobe Systems adopted in late 2012, when it replaced annual reviews with “check-ins.” Some companies choose to jump straight to that once they have the basic organizational structures to support agility in place (e.g., Scrum or Kanban with a scaled framework such as SAFe around it). Others choose the path I’ll outline here.
One option we are seeing happen a lot is the 4-part Performance Review. The review is broken down as follows:
- Part One: 1/3 of the score is a 360-Review from the team.
- Part Two: 1/3 of the score is a more traditional review of career development goals
- Part Three: 1/6 of the score is the Team’s performance against all of the delivery metrics (build the thing right, at the right cadence)
- Part Four: 1/6 of the score is the Team’s Performance to all of the Pirate metrics (build the right thing)
The critical piece to make this work is ensuring all of the instrumentation is in place to make quantitative judgments.
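To make the arithmetic of the four parts concrete, here is a minimal sketch of how the weighted score could be combined. The function name, the 0–100 scale, and the sample inputs are all illustrative assumptions; only the 1/3, 1/3, 1/6, 1/6 weights come from the breakdown above.

```python
# Hypothetical sketch of the 4-part review score described above.
# The 0-100 scale and names are assumptions; the weights are from the article.

def review_score(peer_360, career_goals, delivery, pirate):
    """Combine the four parts into a single 0-100 score.

    peer_360     -- team 360-review score (0-100), weight 1/3
    career_goals -- career-development goals score (0-100), weight 1/3
    delivery     -- team delivery-metric score (0-100), weight 1/6
    pirate       -- team Pirate-metric score (0-100), weight 1/6
    """
    return (peer_360 / 3
            + career_goals / 3
            + delivery / 6
            + pirate / 6)

# Example: strong peer feedback, on-track goals, solid team metrics.
print(round(review_score(90, 80, 75, 60), 1))  # 79.2
```

Note that half of the score is shared across the whole team, which is exactly the point: it rewards collective delivery, not just individual visibility.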
Deeper Dive into the Performance Appraisal
To ensure that the performance appraisal is fair and an accurate representation of individual performance to plan, let’s first discuss what a Performance Appraisal is and isn’t.
Again, from Performance-appraisals.org, there is a difference between a Performance Appraisal and Performance Management. The Performance Appraisal is part of an overall Performance Management program. Where Performance Management is about the entire system of managing the performance of the organization, the performance appraisal is the natural end point for assessing how an individual did during the performance period. It started with the development of a strategic plan that became an operational plan, which became a tactical plan, which became an individual development plan (sometimes called a Personal Development Plan or Professional Development Plan).
The performance management program is “an ongoing communication process, undertaken in partnership, between an employee and his or her immediate supervisor that involves establishing clear expectations and understanding.” Topics of collaboration should include:
- the essential responsibilities of the employee
- how the employee’s job contributes to the goals of the organization
- what “doing the job well” means in concrete qualitative and quantitative terms, such as specific markers for skills mastery, using frameworks like the Dreyfus Model of Skill Acquisition and Bloom’s Taxonomy to inform goals in Hard Skills (content or technical skills) and Core Skills (soft skills)
- how employee and supervisor will work together to sustain, improve, or build on existing employee performance including professional continuing education goals
- how job performance will be measured (What does below expectations, meets expectations and exceeds expectations really mean?)
- identifying impediments to performance and removing them
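One way to make “doing the job well” and the expectation bands above concrete is to map each skill’s agreed Dreyfus stage against where the employee actually is. This is an illustrative sketch only; the function, the comparison rule, and the example skills are my assumptions, not a prescribed method (the five Dreyfus stages themselves are standard).

```python
# Illustrative sketch: translating Dreyfus stages into the
# below/meets/exceeds-expectations language used in appraisals.
# The comparison rule here is a hypothetical simplification.

DREYFUS_STAGES = ["Novice", "Advanced Beginner", "Competent",
                  "Proficient", "Expert"]

def rate_skill(current_stage, expected_stage):
    """Compare a skill's current Dreyfus stage to the agreed expectation."""
    current = DREYFUS_STAGES.index(current_stage)
    expected = DREYFUS_STAGES.index(expected_stage)
    if current < expected:
        return "below expectations"
    if current == expected:
        return "meets expectations"
    return "exceeds expectations"

print(rate_skill("Proficient", "Competent"))  # exceeds expectations
```

The value of writing it down this explicitly is that employee and supervisor agree on the markers up front, rather than discovering the bar at review time.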
Every work written about Performance Management agrees that it requires regular, two-way dialogue between the performance manager (line manager) and the employee. The emphasis should be on learning and improving.
Note that Performance Management isn’t:
- something that happens to an employee without their input
- a means to dictate how a person is to work
- used only for performance remediation
- checking the box once a year
If I get some time, I’ll walk through one transition plan to the type of Performance Management program Adobe is using.
HINT: They call them “Check-ins,” and they happen almost every week. It kinda sounds like a Retrospective. The 360-degree appraisals should happen once a quarter and be like a private retrospective. If using SAFe, consider using the Program Increment (PI) Inspect and Adapt (I&A) event as the point for the 360-degree review and for resetting goals.
- D. Baer, “Why Adobe Abolished The Annual Performance Review And You Should, Too,” Business Insider, 10 Apr. 2014.
- J. Pletz, “The end of ‘valued performers’ at Motorola,” Crain’s Chicago Business, 2 Nov. 2013.
- P. Korkki, “Invasion of the Annual Reviews,” The New York Times, Job Market, 23 Nov. 2013.
- K. Eichenwald, “How Microsoft Lost Its Mojo: Steve Ballmer and Corporate America’s Most Spectacular Decline,” Vanity Fair, Aug. 2012.
- J. Brustein, “Microsoft Kills Its Hated Stack Rankings. Does Anyone Do Employee Reviews Right?,” Bloomberg Businessweek, 13 Nov. 2013.
- D. Baer, “Performance Reviews Don’t Have To Be Absolutely Awful,” Fast Company, 2 Dec. 2013.
- C. Vaishnav, A. Khakifirooz, and M. Devos, “Punishing by Rewards: When the Performance Bell-curve Stops Working For You,” master’s thesis, Massachusetts Institute of Technology, Cambridge, MA, 2006.
- J. Pletz, “The end of ‘valued performers’ at Motorola,” Crain’s Chicago Business, 2 Nov. 2013.
- Bacal & Associates, “Strengths Of 360-Degree Feedback Schemes,” The Performance Management & Appraisal Resource Center.
- Bacal & Associates, “What Performance Management ‘Is’ And ‘Isn’t,’” The Performance Management & Appraisal Resource Center.