Facilitating Innovation in a Distributed Workforce

Previously in this section, we looked at some of the difficulties involved in organising communities of people into optimal innovation ‘crowds’ who are best placed to solve challenges posed by a business. 

The considerations were numerous, covering many aspects of a typical innovation ecosystem: designing effective idea evaluation criteria that align with existing, and evolving, organisational objectives; striking a balance between increased communication and participation and a workforce that could drown in too much information; choosing an audience with the optimal blend of experience, skills and viewpoints for each innovation initiative; and building innovation confidence and capability across the organisation to relieve pressure on the central Innovation Team.

In this article, we pay particular attention to the first of the areas noted above: with ever-increasing numbers of employees and innovation team members working ‘out of the office’, how does an organisation create an innovation design that effectively combines the weight of opinion from the crowd with a structured evaluation framework that continues to adhere to the organisation’s current and evolving priorities and strategy?

In particular, how can the design help build momentum, consistency, and innovation capability within the organisation, when opportunities for traditional workgroup events that supported these aspects are likely to be much more limited?

 

A Question of Use Case

When onboarding new smartcrowds® clients, an early question we are regularly asked as we assist with the design of their innovation ecosystem is ‘How do you recommend that we assess each submitted idea?’ This is a difficult question to answer without first delving into the specific ‘use-cases’ that each organisation intends to support within its ecosystem.

Take the use-case of a Continuous Improvement programme, for example, in which a series of ‘broad’ improvement or innovation areas (which some refer to as ‘idea buckets’) are created for employees to deposit their ideas. Given the lack of specificity inherent in this broad innovation approach (e.g. “We need your ideas to improve Operations”), the people tasked with selecting the most promising ideas need clear guidance during idea evaluation to help determine whether each of the wide-ranging ideas actually brings benefit to the organisation in some meaningful way. That guidance is often best provided by assessment criteria that are closely aligned to the priorities and KPIs of the organisation – and potentially those of the division or department that is seeking ideas. (Note – we cover the complexity of an organisational-design approach to continuous improvement in the next article in this section.)

Contrast this with the use-case of an Innovation Challenge, which is where we will focus in this article. Defined properly, an innovation challenge will set out a clear and specific problem, issue or opportunity for which the organisation is seeking solutions – for example, “We’re looking for ideas that will make our playgrounds safer for children”. With that specificity, the task of selecting ideas that best meet the challenge becomes immeasurably easier – it should be obvious which idea(s) could help solve the problem. As a result, the evaluation process could arguably be greatly simplified – i.e. “Will the idea help us solve this problem?”

Unfortunately, many organisations encounter the common pitfall of criteria overcomplication at this point. There are a number of approaches that an organisation could take to determine how best to evaluate submitted ideas for this very specific challenge. 

One approach, on the assumption that the challenge owner or sponsor knows best how to evaluate ideas for their own challenge, might be simply to leave it to that person to devise the most suitable evaluation process – in effect creating a bespoke set of criteria that, in their eyes, best support the goals of that challenge: for example, cost to implement, likelihood of obtaining planning permission, citizen/parent desirability, and probability of achieving the stated outcome.

In practice, the myriad avenues for evaluation are likely to create a criteria-setting headache for the challenge owner that could lead to a crisis of confidence, and any momentum that was being built can quickly erode. This crisis is only likely to be compounded by the post-Covid-19 business environment, in which greater numbers of challenge owners find themselves operating in a more distributed, remote setting where group decision-making – often helped by face-to-face discussion and the consensus that follows – is harder to come by.

An alternative approach is to address the task of setting evaluation criteria by examining the hierarchical objectives and KPIs that the organisation has already set and shared with the workforce. Since these have been mulled over and agreed by the organisation’s executives and senior management team, and form the basis of everything the organisation is seeking to do, they must surely make a good starting point as evaluation criteria for the Innovation Challenge?

Whilst this might seem like a sensible approach, it creates a different set of headaches for challenge owners:

  1. From the long list of organisational objectives and KPIs that flow down through the organisation, how do they choose the relevant ones with any confidence that they are actually the best ones?
  2. Having identified some KPIs that seem appropriate, how do they confidently translate these into evaluation criteria or measures that make sense in an innovation setting?
  3. Given that organisational priorities change from year to year, how do all would-be challenge owners (if we assume the organisation is aiming to democratise this activity to any degree) stay abreast, in their remote place of work, of the organisation’s current strategic priorities and ensure that what they are doing actually aligns with where the business is trying to go?

Neither of these approaches seems satisfactory, particularly when the difficulty that one challenge owner is wrestling with is likely to be the same for the next challenge owner, and the next – which doesn’t sit well with an innovation ecosystem that is designed to make innovation faster, easier and better.

 

Designing for Speed, Momentum & Capability

In our experience, a much better design for approaching challenge-based idea evaluation is to consider 3 simple questions when looking at an idea. 

  1. How well does it align with the stated challenge mission?
  2. What level of positive impact will it deliver?
  3. How much uncertainty is involved in delivering it AND achieving the potential impact?

By answering these 3 simple questions, an organisation can quickly learn 3 very important things. 

First, it will inform the challenge team whether the idea is, or isn’t, going to solve the problem that the leadership team has already prioritised for organisational focus. If an idea is aligned, it should be considered ‘in-play’ for potential shortlisting. If an idea is not aligned, it is not going to solve the challenge that has been set, and MUST either be closed or moved elsewhere for another team to consider. By keeping an idea that isn’t aligned ‘in-play’, the organisation risks losing focus on the specific challenge at hand.

Second, it enables the challenge team to quickly and easily prioritise the ideas that are aligned to the challenge when the idea ‘selection’ process begins. Ideas that indicate high potential impact are likely to be very ‘interesting’ to the challenge sponsor(s) and the leaders of the organisation.

Third, it highlights where genuinely new, breakthrough thinking and solutions might lie. Ideas that exhibit high uncertainty do so because they haven’t been done before – either by the organisation itself, within the sector in which the organisation operates or, wider still, nationally or globally. Irrespective of scope, by virtue of not having been done before, a demonstrable level of innovation has likely been put forward for consideration. These high-uncertainty ideas are likely to be the most ‘exciting’ for the challenge sponsor(s) and the leaders of the organisation.

With these 3 pieces of information, the organisation can confidently move on to the shortlisting and selection process.
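
For readers who like to see the framework made concrete, the sketch below shows one way the 3 questions might be captured in a simple triage routine. It is a minimal illustration only: the field names, rating scales and thresholds are assumptions made for the example, not part of the smartcrowds® platform or a prescribed scoring model.

    from dataclasses import dataclass

    # Illustrative 3-question triage for a submitted idea.
    # Rating scales and thresholds are assumptions made for this sketch.

    @dataclass
    class IdeaAssessment:
        idea_id: str
        aligned_with_mission: bool   # Q1: does it address the stated challenge mission?
        impact: int                  # Q2: expected positive impact, 1 (low) to 5 (high)
        uncertainty: int             # Q3: uncertainty in delivering it and achieving the impact, 1 to 5

    def triage(a: IdeaAssessment) -> dict:
        """Answer the 3 questions and return a simple triage outcome."""
        if not a.aligned_with_mission:
            # Unaligned ideas leave this challenge: close them or route them to another team.
            return {"idea": a.idea_id, "in_play": False, "action": "close or move elsewhere"}
        flags = []
        if a.impact >= 4:
            flags.append("high impact: prioritise for shortlisting")
        if a.uncertainty >= 4:
            flags.append("high uncertainty: potential breakthrough, explore before judging")
        return {"idea": a.idea_id, "in_play": True, "flags": flags}

    # Example usage
    print(triage(IdeaAssessment("soft-surfacing", aligned_with_mission=True, impact=5, uncertainty=4)))
    print(triage(IdeaAssessment("new-canteen-menu", aligned_with_mission=False, impact=3, uncertainty=2)))

However the answers are recorded, the point of the design is the same: alignment acts as a gate, while impact and uncertainty are simply signals that help the team prioritise and spot breakthrough thinking.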

This then raises the obvious question: what about the other areas that need to be considered before moving forward with an idea – the cost, timescales, customer need, and everything else we would surely want to examine before progressing it?

In fact, these criteria are what we normally refer to as ‘uncertainties’. They are a set of as-yet unknowns that must be explored and validated to keep a shortlisted idea ‘in-play’, or to remove it from play (a ‘smart kill’) because an uncertainty cannot be successfully resolved – for example, confirming through exploration that the cost to implement would be prohibitive. (Note – the topic of ‘uncertainty exploration’ using fail-fast-fail-cheap methods is a theme that we cover elsewhere.)

Fundamentally, an exciting idea is an exciting idea, and through exploration of its uncertainties even an idea that might initially seem ‘undeliverable’ can be morphed into a version of itself that might just make it.
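
To illustrate how such uncertainties might be tracked, the sketch below extends the earlier example with a minimal record of the open questions attached to a shortlisted idea. Again, the status values and the simple ‘smart kill’ rule are assumptions made for the example rather than a prescribed workflow.

    from dataclasses import dataclass, field

    @dataclass
    class Uncertainty:
        question: str                 # e.g. "Is the cost to implement affordable?"
        status: str = "unresolved"    # assumed statuses: "unresolved", "resolved", "blocked"

    @dataclass
    class ShortlistedIdea:
        title: str
        uncertainties: list = field(default_factory=list)

        def review(self) -> str:
            """Keep the idea in play unless an uncertainty has proven impossible to resolve."""
            if any(u.status == "blocked" for u in self.uncertainties):
                return "smart kill"            # e.g. cost to implement confirmed as prohibitive
            if all(u.status == "resolved" for u in self.uncertainties):
                return "ready to progress"
            return "in play - keep exploring"

    idea = ShortlistedIdea("soft surfacing for playgrounds", [
        Uncertainty("Is the cost to implement affordable?"),
        Uncertainty("Will planning permission be granted?"),
    ])
    print(idea.review())   # prints "in play - keep exploring"

The point is simply that cost, timescales and similar questions are tracked and explored after shortlisting, rather than being used to screen ideas out at the first pass.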

 

The Importance of Patience in Decision Making

In other words, organisations should at all costs resist the temptation to design an innovation system that requires too much decision-making at the early idea evaluation and vetting stages, or that allows potential future uncertainties to pre-empt the initial evaluation. The inevitable consequence is a loss of creativity and of meaningfully unique ideas in the final shortlist, leading to a challenge outcome that never quite meets the high expectations of the Senior Leadership Team.

With more teams working remotely from one another, and access to the attention of the organisation’s leaders likely to be more restricted than ever, simplification of any overcomplicated decision-making process must surely be an organisational goal. The 3-question approach that we recommend simplifies decision-making on 3 levels:

First, it creates an approach to criteria setting that can be applied consistently across all innovation challenges. With consistency comes ease of adoption. We can confidently state that in our 20+ years of delivering innovation projects across almost every business sector, we have yet to come across a single project that couldn’t follow this approach.

Second, it increases the confidence of would-be challenge owners to bring their own challenges forward, safe in the knowledge that the definition and application of evaluation criteria follow well-trodden ground. With more challenges being brought forward by confident, motivated employees, more innovation will flow.

Third, it significantly simplifies the process by which innovation capability can be nurtured and extended across the organisation – when capability becomes part of the fabric of the organisation, it is no longer bound by the bricks and mortar of the organisation’s offices, but by the capability of its teams, wherever they happen to be.

Whilst the advice above presents a 3-question approach to idea evaluation that, in our experience, can be applied to most (if not all) situations, many organisations will have their own view on what questions should be posed at this point – a perfectly valid position.

Crucially, though, in light of the wider use of distributed working that many organisations are likely to embrace, the most important consideration is the adoption of a consistent framework that can be applied across the organisation. By taking this approach, the organisation will create a pathway to an innovation system that supports and accelerates the growth of innovation capability across the business.

 

Getting the Message Out

Having established an effective, consistent framework to help distributed teams carry out early evaluation of ideas, the next task is to get the message out to the workforce (and wider stakeholders if appropriate) via a well-framed innovation challenge. This brings into play additional elements of the innovation ecosystem that must be considered to enable democratisation of innovation activities across the organisation.

As we touched on at the start of this article, choosing the appropriate audience for an innovation challenge is a more difficult task than might be imagined. Launching too many challenges to too many people will leave the workforce drowning in innovation information – they will simply switch off and lose focus. At the other end of the scale, limiting each challenge to people with expertise in that area acts as a barrier to innovation, stopping alternative viewpoints and inspiration from entering the thinking of the innovation team.

The difficulties associated with optimal audience selection can create such a headache for the challenge owner that it can significantly delay, or even permanently block, publication of the challenge in the first place.

Taking an organisational-design approach to the development of the innovation ecosystem, therefore, can act as a key enabler of momentum and effective idea generation, creating a hierarchical structure with organisationally-aligned containers into which innovation challenges can be launched by experienced and first-time challenge sponsors alike – safe in the knowledge that issues over audience selection, along with other important ecosystem considerations, have already been dealt with.

In the final article in this section, we address this area by moving on to the subject of the organisationally-designed innovation system, focusing on the “Continuous Improvement / Innovation” use-case to describe how such an approach can very effectively align the strategic priorities of the organisation with the innovation initiatives that will help achieve them.

 
