Agile Professional Development

A strategy for organizational success

Professional development is a journey, a journey that takes an individual from point A (current state) to point B (desired state). I am a Product Manager who manages product road maps that look to turn existing situations into preferred ones with the innovative use of technology. My teams typically embark on this product development journey using agile methodologies to produce desirable business outcomes, building agility by regularly measuring success. Yet when it comes to professional development, organizations often practice ineffective methods to measure and nurture the growth of their people, which often leaves employees feeling frustrated or dreading the impending review cycle.

When it comes to annual review time, I work very hard at creating my own professional development presentations, summing up my year with interactive timelines, personal goals met, and inspiring stretch goals for my future aspirations. The reality is that, while this is a great opportunity to showcase my initiative and drive, corporations most likely have standards and protocols in place requesting specific information. So I often find myself adhering to those systems rather than branching out, to avoid the stressors that can come with thinking outside the performance review box. Many innovative companies have been rethinking their approach to professional development because the historical performance review hasn’t adequately evolved as companies, procedures, and systems have. However, as the employee experience has become one of the biggest drivers of success, and employee performance is a measurable outcome of the employee experience lifecycle, organizations can look at this as an opportunity.

It is the end of the year, and many of us will be assessing our performance over the past year and preparing for a year-end review conversation with our managers. What if we were to use the same methods of successful product development for our professional development?

The following shows what performance reviews could look like if we were to rethink the process to innovate fast and maximize learning. First, some context on the methodologies I regularly use as a product manager:

Innovate fast with lean experimentation

Lean methodology focuses on small hypothesis-driven experiments to drive the innovation of a product road map.

Experiments are built to fail. They are often repeated many times, in large volume, for the results to be valid, which means they must be cheap. Applying this methodology to startups was obvious; most startups don’t have a lot of cash when they begin their journey. However, applying it at scale has become quite fashionable. Low investment, high reward? Sign me up! Eric Ries set the stage when he wrote The Lean Startup, and nearly every body of knowledge now has its own Lean revolution.

Maximize learning with an iterative approach

Agility allows teams to get products into the hands of users as quickly as possible. Short iterative cycles with feedback loops allow agile teams to continuously learn. Teams use various methodologies and frameworks to build agility, such as Scrum, a software delivery lifecycle framework. 

Another popular method is Design Thinking, which focuses on rapid prototyping and testing with the end user at the center of the problem. Frameworks like these build agility through an iterative approach and shorter feedback loops.

Add value by incorporating the desirable business outcome

Professional development is a journey, and the performance review is the feedback on that journey. Like the user journey of a software product, the journey should take people from an existing situation to a preferred one, and preferred situations should positively impact business outcomes. These business outcomes are the north star of product development; the features in the product road map are the map to that north star.

Let’s get tactical:

Hypothesis-driven design is a tactical approach to problem solving that includes experimentation, short feedback loops and desirable business outcomes. A hypothesis can be created at all levels and stages of the product development lifecycle, from the discovery/visionary stage to the actual user story describing a feature to be built. We create hypotheses to test.

We believe, in order to… desired outcome,

We will… take this action.

This will result in… assumption.

We know we have succeeded when… a metric that validates the assumption as true or false.

If we are to apply this to professional development, we need to ask ourselves three important questions that can turn a hypothesis into a performance review. With the hypothesis in mind, we ask ourselves:

  1. How did we do as a team?
  2. How could we improve?
  3. How could I improve?

Lean, iterative experimentation that focuses on business outcomes could result in a meaningful approach to professional development. It can be woven into your software delivery lifecycle with ownership given to the teams to track success, reducing the potential waste of annual review time and increasing ownership and autonomy.

For example: A company is trying to launch a new product in time for an important event, but sprints are not being completed and deadlines are at risk. The team’s hypothesis on how to solve this problem (current state to preferred state) could look something like this:

Hypothesis:

We believe, in order to meet our business deadlines,

We will release software incrementally.

This will result in more achievable delivery dates and reduced risk of large product launches.

We know we have succeeded when we have *this* viable feature set in the hands of *this* early adopter user group by *this* date.

Performance Review:

How did we do as a team?

  • Successful incremental releases
  • We did not complete all the work in the sprint

How could we have done better?

  • We need to establish a shared vision with our stakeholders of what is considered a ‘viable’ feature set; we tried to deliver too much in the sprint.
  • We need agreed, achievable sprint goals that align to a viable feature set.
  • We need smaller, more achievable success measures as part of our hypothesis.

How could I have done better?

  • Earlier facilitation with *these* stakeholders to achieve a shared vision of what is viable.
  • Get earlier sign-off on sprint goals and requirements.

New Hypothesis:

We believe, in order to meet our business deadlines,

We will get buy-in from our stakeholders on what is viable, and create sprint goals that align to our definition of the MVP.

This will result in more achievable delivery dates and reduced risk of large product launches.

We know we have succeeded when we have agreed, achievable sprint goals for the next two sprints, and we successfully close the sprints.
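The iteration above can be sketched as a small data structure. This is a hypothetical Python illustration (the `Hypothesis` class and `iterate` helper are my own, not an existing tool): the desired outcome stays fixed as the north star, while the review answers produce a revised action and success metric.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Hypothesis:
    desired_outcome: str   # We believe, in order to ...
    action: str            # We will ...
    assumption: str        # This will result in ...
    success_metric: str    # We know we have succeeded when ...

def iterate(previous: Hypothesis, new_action: str, new_metric: str) -> Hypothesis:
    # The desired outcome (the north star) never changes between iterations;
    # the answers to the three review questions feed a revised action
    # and a revised, smaller success metric.
    return replace(previous, action=new_action, success_metric=new_metric)

v1 = Hypothesis(
    desired_outcome="meet our business deadlines",
    action="release software incrementally",
    assumption="more achievable delivery dates and reduced risk of large launches",
    success_metric="a viable feature set in early adopters' hands by the target date",
)

v2 = iterate(
    v1,
    new_action="get stakeholder buy-in on what is viable; align sprint goals to the MVP",
    new_metric="agreed, achievable goals for the next two sprints, both closed successfully",
)
```

Keeping the hypothesis immutable (`frozen=True`) means each review yields a new version, so the full history of experiments stays available for the next retrospective.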

In this example, the goal of the hypothesis is the desirable business outcome: to meet the business deadline. We generated actionable insights via our “how could we have done better” questions. We iterated based on what we learned, and we can run the experiment again and observe the results. When running the experiment again, the team and individuals should be executing, or improving on, their KSAs (Knowledge, Skills, and Abilities) in order to bring more value.

This can go both ways if the team also provides feedback to their stakeholders on how the organization can have an impact on the desired outcome. Internal tools, practices, processes, etc. are all potential blockers or drivers of success. Over time, the experiment continues to change as the team gathers feedback and iterates accordingly.

The performance review becomes a holistic two-way feedback loop that can take place at any strategic point in the lifecycle of the employee, the team, the product, and/or the organization. The conversations taking place drive outcomes, outcomes that are strategically aligned to business goals. We have linked employee performance to organizational success!

How and when you gather these insights might depend on the software delivery framework you follow. Scrum rituals, Kanban, and Design Thinking practices all offer opportunities to incorporate a professional development process like this. Using the methods you already practice, such as sprint retrospectives, service delivery reviews, and discovery sessions, allows the process to be tailored to what the team already knows.

What can go wrong?

This model works at the individual and team level. However, for true business impact it must move up the chain. A lack of accountability in leadership, whether of the team or of the company, could undermine any measure of performance.

Furthermore, if your leadership team is not fluent in, or at least inquisitive about and supportive of, the agile process, then the iterative approach will be lost or misunderstood. Worse, it could be used against the team: leadership could turn it back on the team, imposing unrealistic goals that align to broken methods and do not support incremental successes and improvements.

Lastly, honest feedback. Teams and organizations must be comfortable giving and receiving it; dishonest feedback will undermine the entire feedback loop.

Honest feedback = better results. 

The culture should allow for safety, trust, and respect up and down the chain of command. No matter how flat a company is, someone is usually a little more equal than everyone else. If that more-equal CEO is very comfortable sharing their feedback but not so comfortable receiving it, then you have a bad CEO and likely far more problems ahead.

There are many examples of how teams are experimenting by applying software delivery methodologies to other departments not necessarily involved in the development of software. Experimentation is taking place at many points in the employee lifecycle. Professional development is catching up and I look forward to the steady decline of annual performance reviews.

 
