Wednesday, 30 July 2014

Fine-Tune Your Voice of Customer with ODI

Applying Outcome Driven Innovation (ODI) Techniques to Quantify Voice of Customer and Translate it into Customer Value

Voice of the Customer and ODI Article Image

Outcome Driven Innovation (ODI) is a methodology for harvesting customer insights in a way that distills what matters most to customers. Its original purpose was to provide a basis of evidence to drive product innovation, but I'd argue it has the potential for broader application. In this article I share how ODI is invaluable as a mechanism for Organizational Designers to find new Customer Value by quantifying what the Voice of the Customer MEANS to the enterprise.

Are you familiar with Outcome Driven Innovation (ODI)?  I won't blab on about the detail of how ODI came about, the people behind it - or the low-down on how the method works in this article because you can always google it - but I did want to explain why it matters to every organization.

If you've read any of the volumes of work authored by Peter Drucker then you’ll know we share a similar instinct when it comes to customer value.  Simply put, most businesses are about finding a source of customer value and then translating that value into a repeatable source of return for stakeholders. It makes sense therefore that businesses should be good at identifying and qualifying customer value and turning it into cashable returns - right?

If you follow this logic then it will probably surprise you how many organizations are 'poor' if not truly terrible at understanding and valuing what matters to customers.  It's not that organizations don't invest good money into listening to and understanding customers: Few would claim not to. The problem is HOW they listen and HOW they apply this knowledge.

I'd also say it's not the case that organizations that are poor at listening to customers automatically fail.  Far from it.  There are many operationally poor enterprises that continue to make money for shareholders and grow in size and scale even though they ignore their customers.  Take British Telecom's Openreach business for example.  The company has a monopoly in the UK on telecoms infrastructure - so if you're not happy as a consumer customer, tough! Of course, not every company has the advantage of a monopoly, so if you want to grow and be successful IN MOST CASES listening to customers and sourcing customer value is important, if not essential.

Here's a summary of the challenges of gathering and applying the voice of the customer to organizational design and performance improvement:

1. Not listening to customers
Some organizations spend very little time listening to customers.  It could be they are operating as a monopoly and don't care, or they have a pretty solid economic engine and very little competition, so they don't care.  Other organizations lose contact with their customers as the enterprise grows and key managers and relationship holders leave, never to be replaced. But this still represents the minority of companies in my experience.

2. Listening, but not well
Organizations will often rely on information gathering mechanisms like customer surveys, forums, complaints and suggestion systems (etc.) to source their feedback. Often, these mechanisms distort the voice of the customer by their design or in the way they are implemented.  Other times they employ closed questions (i.e. 'Are you happy or not?') and this doesn't provide the opportunity for customers to 'vent' and really say what they WANT to say.  Poorly formalized listening mechanisms can deliver a shallow, hugely inaccurate perspective of the customer voice.  I would say something like 80% of companies fall into this category.

3. Failing to route customer insights to the right parts of the enterprise
Attempts to listen to customers generally happen in small silos around the enterprise.  While front-line customer service staff, sales people, line managers and others will interact with customers frequently, rarely are these insights harvested and shared.  I often see first-hand organizations investing thousands of dollars in listening to customers and then the resulting outputs end up in a file somewhere and never get used.  Sometimes surveys and research projects funded by one department can have more value to other parts of the organization but because they’re funded by one silo, other silos fail to access the insight (sad but true and surprisingly common).

4. Interpreting Voice of Customer Poorly
As organizations grow they develop their own culture and 'biosphere'.  They start to apply vocabulary, understanding, norms of behavior and ways of making sense of their market that skew HOW managers interpret feedback from customers and markets.  Unless care is taken, a management team can cocoon itself from customers by assuming it knows what customers care about, because the feedback it gets only seems to reinforce the assumptions it has already built up about its customers and markets.  This can result in poor interpretation of the voice of the customer - driven mostly by opinions and false perspectives rather than evidence and meaningful quantitative measures.

5. Ignoring the learning lessons
Organizations that pass through all of these hoops sometimes fall at the last hurdle. When focused on ‘the day job’ and departmental priorities it can be difficult to break out of the cycle and look up for a moment to consider something new.  Very often the inflexibility of budgets, organizational designs, norms-of-behavior (etc.) can build up a huge barrier of resistance against considering new ideas and fresh perspectives – the sort of things that come from listening to customers.

Where ODI Comes In
Let’s face it, the picture painted by customer feedback can look unclear.  There’s no silver-bullet solution to gathering insights or weighing their importance.  For this reason, organizational priorities and interpretations of voice of customer tend to be driven by management opinions. Outcome Driven Innovation (‘ODI’) helps this process by translating the voice of the customer in a more meaningful, measurable way.

The underpinning common sense behind ODI is that people 'hire' products and services to get a job done better.  Appreciating 'what the job is' and then assessing 'what unmet needs exist' helps to frame customer feedback into manageable chunks.  Of course, when assessing what unmet needs exist, organizations must also qualify 'how much pain' the unmet need creates and 'what's in it for them?' - so it's important to understand to what extent the unmet need is already being served by another vehicle, mechanism, supplier etc.

The ODI methodology constructs a formalized process to churn the voice of the customer into a measurable list of wants, enabling organizations to appreciate what they can do to deliver new or so-far-untapped customer value.
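To make 'measurable' concrete: ODI practitioners commonly rank desired outcomes with an opportunity score - importance plus the shortfall between importance and satisfaction. The sketch below is a minimal illustration of that scoring; the outcome names and survey ratings are invented for the example.

```python
# Hedged sketch of the widely cited ODI "opportunity score":
#   Opportunity = Importance + max(Importance - Satisfaction, 0)
# where customers rate each desired outcome for importance and
# satisfaction on a 0-10 scale.

def opportunity_score(importance, satisfaction):
    """Important but poorly satisfied outcomes score highest,
    signalling untapped customer value."""
    return importance + max(importance - satisfaction, 0)

# Hypothetical survey averages: (importance, satisfaction)
outcomes = {
    "Minimize time to resolve a fault": (9.1, 3.2),
    "Minimize cost of installation":    (7.4, 6.8),
    "Maximize clarity of the invoice":  (5.0, 8.0),
}

# Rank outcomes by opportunity, highest first
ranked = sorted(
    ((name, opportunity_score(i, s)) for name, (i, s) in outcomes.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{score:5.1f}  {name}")
```

The outcomes that are important but poorly served float to the top of the ranking, which is exactly the evidence base the methodology feeds into innovation decisions.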

The Relevance of ODI in Organizational Design
My philosophy behind OD is that it should create better organizations by design - better meaning able to produce more stakeholder value, higher levels of workforce productivity, improved customer journeys and a generally more efficient and effective economic engine for churning out customer value and producing the wonderful by-product that is profitability.

For me, ODI is the essential ingredient to that mix:  No strategy should be formed or fashioned into a balanced scorecard until leadership teams KNOW what the voice of the customer is and how their enterprise can translate it into value.

So let me ask you, what matters most to your customers?


Tuesday, 20 May 2014

From Agile Scrums to Codeless Workshops - Tracking the evolution of enterprise software development

Will the transition from Agile Scrum development to Codeless Real-Time Development Workshops be as high impact as the move from Waterfall to Scrum?

In the Beginning

For companies that were accustomed to adopting a waterfall project approach for their software developments, the move to agile scrum methods has been dramatic, both in terms of its results and its impact on the size and skills of project teams.
Waterfall describes a traditional software development approach (dating back well into the 1970s) where different threads of development are performed by different people or teams.

When development teams adopt the waterfall method, projects are scoped up front and then components are developed in parallel by different people - often employing specialist skills and tools.   The number of people involved and the many threads of activity create a complex project model that requires expert project management tools, methods and skills to orchestrate (PRINCE2 eventually became the answer to this, leading to a whole new set of skills for companies to get to grips with).

Following development of the various streams of the waterfall, these are bound together and formed into a final 'end-product'.  This approach has been fraught with project management issues. Delays in any of the parallel developments cause the entire project to over-run.  Often, 'next steps' of integration with third party tools and data, and activities such as testing and tuning, can't be progressed until the last thread of development has been done. This can result in huge project over-runs and costs.
The article 'Why Your IT Project May Be Riskier Than You Think' in HBR (November 2011), which drew on a survey of 1,471 IT projects with an average spend of $167m, found that the average overrun was 27% and that one in six of the projects studied was a 'black swan', with a cost overrun of 200%. Almost 70% of black swan projects also overran their schedules.
Even when projects do get delivered on-time, the consequences of using one or more specialist programming tools are that documentation is more complex, testing and tuning take longer, and more bugs are likely to be embedded in the code. EVEN should everything go according to plan, the remoteness of intended system users and stakeholders from the back-room development process means that resulting systems are difficult to get right as a perfect fit for the user community.  It's quite an art to write a paper specification for a software application and produce a perfect system right-first-time.  More like impossible, in fact.

A major challenge all development teams face is application users and stakeholders rarely know PRECISELY what they need in terms of database management, reports, process workflows and user interfacing requirements out-of-the-box.  Normal domain expert business users find it hard to visualize end-applications and often struggle to formalize their end-product in a format that programmers can use. 

Setting aside the high resourcing costs, the need for costly specialist best-of-breed tools, and the constraints of coding with its impact on testing, tuning and adaptability, perhaps the biggest issue with traditional waterfall development tools and methods lies in what gets left behind.  Programmed systems are inherently difficult to maintain and adapt. Companies don't particularly want to be left with systems they can't change over time, or that require a team similar in size to the team that built the application to maintain it into the future (mainly because of the dependencies created on a smorgasbord of best-of-breed tools and skills).

The step-up to AGILE SCRUM

Advances in Integrated Development Environments (IDEs) since the turn of the century, aided by the timely emergence of standards around enterprise web server computing platforms, have produced a number of capable tool-kits that enable software development teams to work more closely together with fewer tools and fewer expert skills.  This reduction in tool-kit complexity has meant the project approach has been able to transition more towards a team-based culture where individuals working towards a shared goal can work together in a collegiate way.

AGILE tools are so-called because they provide development teams with more adaptive tools that produce results faster.  One aspect of the 'agility' comes from using bigger building blocks of code that can be shaped to fit a variety of needs without requiring programmers to start with a blank page.
The SCRUM model has evolved around the need for members of development teams to keep in close contact with one another and stay on the same page.  More frequent reviews of progress - perhaps on a weekly, even daily basis - ensure that the streams of development stay on-track and can be checked regularly, both to confirm the quality of the end-product is as it needs to be and so that project managers can verify time and again that 'what is being produced' is 'what the internal client wants'.

AGILE tooling does not provide developers with everything they need in one box.  To improve the integration and use of third party expert tools, software providers today normally provide an Applications Programming Interface (API) so the various tools are EASIER to integrate with one another.  While this is EASIER it remains a major overhead on developments as each API tends to be different and developments inherit any limitations presented by the ‘black box’ of functionality provided by the best-of-breed software they are integrating with.  This inability to interplay one application with another using the same properties means that developers find themselves constantly having to come up with work-arounds to produce the best end-product.

SCRUM has its limitations too. While it’s great for developer teams to work towards a common goal and take more of an active interest in making sure their contribution is producing the best ‘total’ outcome, developers continue to work in back-rooms and continue to lack an appreciation of the domain area they are building applications for.  Any developer knows the ‘last mile’ (i.e. managing the interaction with the user and stakeholder communities) is the most difficult part of an application to get right as different user groups will have very different expectations and needs from the applications they use. 
Like the traditional waterfall approach, AGILE SCRUM still relies on coding and that's a problem.  Not only does it create more post-development overhead - more bugs are built into the end-product, and more testing, tuning and integration work is necessary - it also means that resulting applications continue to be expensive to maintain and adapt.

The New Age of Codeless Development

So-called ‘codeless’ development software describes Integrated Development Environments that displace the need for applications authors to code.  The leading enterprise platform for this approach is Encanvas.  All of the building blocks required to develop enterprise applications are built into a common IDE. ‘Design Elements’ - as they are known - adopt similar properties and have been designed to provide a sufficient level of tailoring to fit the majority of needs. 

At one time, it was thought that codeless platforms would only be good enough for piloting projects or serving the 'little application' needs of the enterprise where larger platforms were too costly to use.  The last decade has shown, however, that IDEs like Encanvas are capable of delivering pan-regional and cross-industry applications at enterprise scale and levels of performance.

In the same way that a step-change in enterprise web platform standards at the turn of the century led to the evolution of SCRUM methods for software applications authoring, use of codeless tools has led to the formation of near-real-time applications authoring methods.

With codeless software, the removal of the programming activity from projects (and the removal of best-of-breed tools and the need for APIs) has meant that users and stakeholders can be engaged in development workshops where application designs are iterated in concert with the users and stakeholders, who are able to see and understand what's being authored because they see the 'end-product' being formed in front of their eyes.

Codeless software has dramatically reduced the number of tools and the variety of skills needed to author applications, but its biggest impact is found in the leave-behind: resulting systems can be easily adapted as business requirements and user needs change.  Neither are there any upgrade costs associated with the software; platforms like Encanvas enable hundreds of applications to be authored and do not charge for any new improvements to the IDE.  This means IT leaders seeking to reduce the number of technologies they support in their enterprise IT architecture can shrink their footprint from hundreds of tools to fewer than ten.

The by-product of this change in technology tooling and methods makes a big impact on IT budgets. The expectation of IT leaders now is that IT operations should cost no more than 1% of operating profits. Deployments delivered through codeless workshop development methods demonstrate a tenfold improvement in time-to-market and a similar dynamic in the lowering of onward IT costs.

Will the move from SCRUM to CODELESS be bigger or smaller than the move from WATERFALL to SCRUM?

I would argue the move from SCRUM to CODELESS has a greater bottom-line impact but a lesser change impact compared to the original transition from WATERFALL to SCRUM for the following reasons:

  1. IT departments have already shrunk their internal IT teams and now use contractors for much of the heavy-lifting in software development and support. What the move to codeless does is internalize the support of IT systems – making IT more adaptive and better able to support the long-tail of demand for new or better enterprise applications - while displacing the use and cost of contractors. 
  2. IT departments are already using IDEs and reducing their use of specialist tooling.  Platforms like Microsoft .NET and Ruby-on-Rails have done much to consolidate tooling for developers.  What Codeless platforms do is FURTHER consolidate tooling.
  3. IT leaders are already finding ways to embed IT development skills ever closer to the domain expert communities that use applications.  What codeless software does is take development out of the back-room into the front-office workshop so that developments produce faster, better results that the users and stakeholders WANT to use because they were instrumental in the design of applications they use.
  4. IT leaders are already rationalizing the numbers of technologies they support in the enterprise technology stack.  What codeless software does is further reduce the footprint of technologies employed across the enterprise to install more commonality in development methods and tools.
  5. Arguably the biggest change factor with codeless workshop development lies in the impact of an IT department being able to keep up with the ideas and process changes of the enterprise.  This has surprisingly big organizational design implications.  For decades people in the business have been able to BLAME IT for failing to support their domain needs.  Consider what happens when suddenly it does:  No longer is IT the scapegoat.  Also consider the impact of embedding software development into improvement teams, as some innovative companies like AUDI GROUP already do.  The analysts that scope and build the applications are PART of the improvement team, incentivized by the impact of the process changes they manifest rather than being paid for 'writing code'.  Departmental and operational managers will inevitably be more responsible and accountable for IT as part of their function.  They have no choice but to consider the performance of the IT they use as being part of their portfolio of responsibility.  The role of the CIO becomes one of guardianship of IT resources rather than the deliverer of every single IT initiative.

  Keywords:  Waterfall, Agile, Codeless Software Development, CAAD, Encanvas, Software Development Methods, ALM, Application Life-cycle Management

Monday, 19 May 2014

Secrets of Going Codeless

How to Build Enterprise Apps without Coding

Codeless Software Development

Building applications without code still requires methods and skills. In this article I describe the methods that Encanvas has used for well over a decade to successfully deliver situational applications developments.

The first application created on Encanvas in a codeless form was a silly form for appraisals.  That was in 2002 I seem to recall.  Since then, applications authored 'codeless' have grown in size and scale to include several enterprise-wide systems, a few pan-regional systems and even industry-wide systems.

In the early years of Encanvas, most of the applications were created by Andrew Lawrie (the creator of Encanvas) himself.  It was impressive enough that Andy could take an enterprise requirement and just 'make it happen' all by himself, but since then the community of Encanvas authors and architects has grown exponentially.

The backgrounds of authors, too, are wide and varied.  I've produced a few systems and my background is generally sales and marketing.  A couple of the guys are 'proper developers' that have sold their soul to the codeless god, while many are domain expert consultants, entrepreneurs, project managers and analysts.  What do they have in common?  They can all use a mouse I guess!!

As the scale of applications gets bigger, and the variety of author skills broadens, it becomes more important to adopt methods to optimize the authoring process. No question, it is possible to produce results with codeless software without a method, but that places a 100% reliance on the talents of the practitioner.

CAAD came out of a series of 'interventions' to streamline the codeless authoring process; they hang together pretty well as a whole and handle certain key tasks (like requirements gathering) particularly well.

Hopefully you find the linked document interesting and easy enough to absorb.  If you have any questions, I'd be happy to field them.



Enterprise Applications, Codeless Software Development, CAAD, Encanvas, Situational Applications, IT Strategy, Application Life-Cycle Management

Friday, 16 May 2014

CAAD Codeless Situational Applications Development Methods for Analysts

CAAD Methods

I produced this article to describe how analysts can build enterprise-scale situational applications time and again without coding based on the methods adopted by the team at Encanvas over the past decade.  The methodology hasn't changed too much over the past few years but we continue to learn from customers and practitioners.

It's only in the last three or four years that we've honed the use of Outcome Driven Innovation (ODI) and Blue Ocean methods as part of this approach.

One of the factors that we've found critical to success is to specify the requirements, outcomes, user group structures, reporting needs (RPRS Analysis) etc. BEFORE jumping into a meeting room and workshopping solutions.

It's great to work in a collaborative way, but having the structures and outcomes buttoned down means that analysts can get to work on the boring 'basic modelling' of a solution and have something to walk around before users and stakeholders start iterating their design.

I hope you find it helpful!


Codeless software development, CAAD, Software Development Methods, Agile Software Development Methods, Encanvas, Enterprise Situational Applications Development Methods

Friday, 9 May 2014

Mining Sub-Surface Spend Economies - The role of Federated Spend Analytics Platforms

Sub Surface Spend

Slideshare presentation on sub-surface spend

In this presentation I argue the case for federated spend management software tools as a vehicle to affordably source sub-surface spend economies in organizations where procurement teams have harvested the low-hanging fruit and now need to dig deeper into spend behaviors to achieve the next stage of procurement economies.

Key words:
Sub-Surface Spend, federated spend analysis platforms, encanvas, situational applications, data mashups, actionable insights, procurement strategies

Tuesday, 6 May 2014

$ub-$urface $pend - Mining the Untapped Spend Economies

All Seeing Eye Image for Sub-Surface Spend Economies Article

How Federated Spend Analytics Software taps into hard-to-reach sub-surface spend economies with its ‘all seeing eye’ and fine-grained analysis

Federated Spend Analytics Software is cloud-based business software that places a lens across back-office administrative computer systems to expose areas of potential spend economy. In this article I explore how the latest generation of cloud-based applications platforms are facilitating fine-grained analysis of spending behaviors to expose new opportunities for cost reduction and efficiencies in the procurement function.

People not acquainted with procurement so often see the discipline as a reactive back-office function, but anyone who's dealt with high performing procurement teams knows that it's one of the few discipline areas tasked with looking ACROSS the silos of operation that tend to build up in organizations; giving procurers the opportunity to invest time in seeking better ways of working and finding cost reductions through more thoughtful buying approaches.

Although there is no single journey to better ways of buying, procurement teams will quite sensibly start most cost reduction initiatives by tackling the highest spend areas first.  Next come the poorly performing or poorly controlled areas of spend. A third tranche of cost economies is then gained through rationalization and compressing the supply chain (these are typically more complex 'root and branch' changes that demand more transformation in the way the enterprise works, and so quite sensibly fall further down the transitional scale even though the rewards can sometimes amount to a step-change).

As procurement teams venture through this life-cycle, they change their own DNA and move from being reactive to proactive.  Mature procurement teams might well have delivered ALL of these initiatives and are now on the march to find new sources of stakeholder value, cost reduction and improvement (think of a mining company that initially searches out mineral reserves on the surface, then moves deeper and deeper through the strata in search of new reserves).  Contemplating this next-stage sub-surface procurement initiative is where many procurement leaders now find themselves - but they also know that to achieve the best results whilst keeping their own operational costs down, they need better exploratory tools for their teams.

But for what benefit?  The obvious areas of cost economy in sub-surface procurement initiatives come from better understanding the 'when, where, hows and whys' of procurement behavior, benchmarking approaches against 'best practice' (sometimes found in a different industry sector) and then installing new behaviors.  Unfortunately, the cost of sub-surface procurement changes can rack up unless procurement professionals gain tools that make it easier to speculate on areas that look efficient on the surface.  Time-to-value - turning potential weaknesses in procurement into rewards in terms of cost savings - becomes a key measure of success.

One hurdle that most procurement teams encounter when transitioning from a 'reactive' to a 'proactive' procurement stance is the need to source richer spend analytics across all disciplines.  This is the role of spend analytics software.  But where spend analytics tools generally fall down is not in their scope of use, or their ability to present insights, but in harvesting existing data for analysis.

Few organizations truly operate ONE computer system across their business.  Even companies that have committed to a core technology stack - such as SAP, ORACLE, IBM or MICROSOFT - will often have multiple ‘systems’ of their chosen platform to cater for disparate operating divisions, geographical variances, group companies (etc).  This means SOMEHOW data needs to be harvested, mashed together and normalized, then presented in a data-mart that allows it to be interrogated.
Like most systems that revolve around 'people being curious', good spend management systems don't simply regurgitate data in the form of charts and dashboards; they provide a platform for users to ask new questions about what, where, when, how, why and with whom the enterprise spends money.
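As a minimal sketch of that harvest-transform-normalize pipeline, the idea in pandas might look like the following. The source systems, column names and fixed exchange rate are all invented for the example.

```python
# Hedged sketch: harvesting spend records from two back-office
# systems and normalizing them into one queryable data-mart.
import pandas as pd

# Extract: two divisions export spend in different shapes
sap_export = pd.DataFrame({
    "vendor": ["Acme Ltd", "Acme Ltd", "Globex"],
    "amount_gbp": [1200.0, 800.0, 450.0],
    "category": ["IT Hardware", "IT Hardware", "Stationery"],
})
oracle_export = pd.DataFrame({
    "supplier_name": ["ACME LIMITED", "Initech"],
    "spend_usd": [1500.0, 300.0],
    "spend_category": ["it hardware", "office supplies"],
})

# Transform: map each source onto a common schema
USD_TO_GBP = 0.8  # illustrative fixed rate
a = sap_export.rename(columns={"vendor": "supplier",
                               "amount_gbp": "spend_gbp"})
b = oracle_export.rename(columns={"supplier_name": "supplier",
                                  "spend_category": "category"})
b["spend_gbp"] = b.pop("spend_usd") * USD_TO_GBP

# Normalize supplier names so "Acme Ltd" and "ACME LIMITED" match
def canonical(name):
    return name.upper().replace("LIMITED", "LTD").strip()

mart = pd.concat([a, b], ignore_index=True)
mart["supplier"] = mart["supplier"].map(canonical)
mart["category"] = mart["category"].str.title()

# Load: the unified mart can now answer cross-system questions
print(mart.groupby("supplier")["spend_gbp"].sum())
```

The normalization step is where most real effort goes; without it, the same supplier appears several times under different spellings and the 'all seeing eye' is blind to concentration of spend.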

Federated Spend Analytics platforms are the newest breed of performance tooling for procurement professionals operating in large and complex organizations. Federated platforms should as a minimum provide the following capabilities:
  • Harvest data from existing applications without requiring manual interventions
  • Transform and normalize data to produce coherent data perspectives
  • Form a unifying data-mart that auto-refreshes with the latest data from back-office systems
  • Present user interfaces supporting interactive charts, maps, dashboards, side-by-side comparison tables and other visualization tools to distill key facts
  • Support the ability for users to create their own KPIs, alerts, chart and map views
  • Install RASCI role accountability into the spend analysis process
  • Provide an always-on (secure) platform accessible via browser-based devices
  • Scale to meet enterprise needs

By implementing a Federated Spend Analysis platform, what should procurement teams expect?  While each organization will benefit differently from adopting a federated system here’s a quick run-down of some of the more common ROI outcomes:

  • Multi-threaded sourcing and multi-linking of data: tooling to harvest data from multiple sources and automate the information flows that route data into a data-mart
  • Ready-to-use design elements: tooling that provides self-service creation of situational applications and analytical dashboards, interactive charts, maps etc. without coding
  • The means to be curious: to ask 'when, where, how and why' questions without hugely increasing the frictional costs associated with the enquiry process itself (such as time spent authoring reports, sourcing data, manual data entry and data crunching, or working spreadsheets)
  • The means to expose fine-grained economies which, by minimizing enquiry and analytical costs, become economic to act on
  • Opportunities for sub-surface procurement economies by distilling fine-grained operational behaviors (made affordable by reducing enquiry/reporting/analytics unit costs)
  • Faster time-to-value: expose weak points in procurement sooner by interrogating data rather than creating and running ad-hoc reports time and again in the hope that a sub-optimal behavior will be uncovered
  • Reduced reporting and analytical overheads
  • Lower software change/upgrade costs
  • Reduced technology complexity: savings on the number of tools needed to harvest, analyze, report and share insights, and the related cost of authoring situational applications to ACT on learning lessons
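To illustrate the 'means to be curious' point: once spend data sits in one normalized mart, when/where/how questions become one-line queries rather than report-writing projects. A hypothetical sketch, with data invented for the example:

```python
# Hedged sketch: interrogating a unified spend data-mart with
# ad-hoc questions instead of commissioning one-off reports.
import pandas as pd

mart = pd.DataFrame({
    "month":      ["2014-01", "2014-01", "2014-02", "2014-02"],
    "department": ["Ops", "Sales", "Ops", "Sales"],
    "supplier":   ["Acme", "Globex", "Acme", "Acme"],
    "spend_gbp":  [900.0, 400.0, 1100.0, 350.0],
})

# "Where is spend concentrated, month by month?"
by_dept = mart.pivot_table(index="month", columns="department",
                           values="spend_gbp", aggfunc="sum")

# "Which suppliers are we becoming dependent on?"
by_supplier = (mart.groupby("supplier")["spend_gbp"]
                   .sum()
                   .sort_values(ascending=False))

print(by_dept)
print(by_supplier)
```

Each new question is a few seconds of query time against the same mart, which is what drives the enquiry unit costs low enough for fine-grained, sub-surface economies to be worth chasing.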

If you have implemented projects/tools to source sub-surface spend economies it would be great to hear about them.  Otherwise, if you’ve yet to explore the potential for sub-surface spend projects I would be happy to talk over the subject with you.