Tuesday 20 May 2014

From Agile Scrums to Codeless Workshops - Tracking the evolution of enterprise software development



Will the transition from Agile Scrum development to Codeless Real-Time Development Workshops be as high impact as the move from Waterfall to Scrum?


In the Beginning

For companies that were accustomed to adopting a waterfall project approach for their software developments, the move to agile scrum methods has been dramatic, both in terms of its results and its impact on the size and skills of project teams.
Waterfall describes a traditional software development approach (dating back to the 1970s) where different threads of development are performed by different people or teams.

When development teams adopt the waterfall method, projects are scoped up front and then components are developed in parallel by different people – often employing specialist skills and tools.   The number of people involved and the many threads of activity create a complex project model that requires expert project management tools, methods and skills to orchestrate (PRINCE2 eventually became the answer to this, and led to a whole new set of skills requirements for companies to master).

Following development of the various streams of the waterfall, these are bound together and formed into a final ‘end-product’.  This approach has been fraught with project management issues. A delay in any of the parallel developments causes the entire project to over-run.  Often, ‘next steps’ such as integration with third-party tools and data, testing and tuning can’t be progressed until the last thread of development has been completed. This can result in huge project over-runs and costs.  
The article ‘Why Your IT Project May Be Riskier Than You Think’ by HBR (November 2011), which reported on a study of 1,471 IT projects with an average spend of $167m, found that the average cost overrun was 27%; one in six of the projects studied was a ‘black swan’, with a cost overrun of 200%, and almost 70% of those black swan projects also overran their schedules.
Even when projects do get delivered on time, the consequence of using one or many expert programmatic tools is that documentation is more complex, testing and tuning take longer, and more bugs are likely to be embedded in the code. EVEN when everything goes according to plan, the remoteness of intended system users and stakeholders from the back-room development process means that resulting systems are difficult to get right as a perfect fit for the user community.  It’s quite an art to write a paper specification for a software application and produce a perfect system right-first-time.  More like impossible, in fact.

A major challenge all development teams face is that application users and stakeholders rarely know PRECISELY what they need in terms of database management, reports, process workflows and user interfacing requirements out-of-the-box.  Domain-expert business users find it hard to visualize end-applications and often struggle to formalize their end-product in a format that programmers can use. 

Even accepting the high resourcing costs, the need for costly specialist best-of-breed tools, and the constraints of coding with its impact on testing, tuning and adaptability, perhaps the biggest issue with traditional waterfall development tools and methods lies in what gets left behind.  Programmed systems are inherently difficult to maintain and adapt. Companies don’t particularly want to be left with systems they can’t change over time, or that require a team similar in size to the one that built the application to maintain it into the future (mainly because of the dependencies created on a smorgasbord of best-of-breed tools and skills).

The step-up to AGILE SCRUM

Advances in Integrated Development Environments (IDEs) since the turn of the century, aided by the timely emergence of standards around enterprise web server computing platforms, have produced a number of capable tool-kits that enable software development teams to work more closely together with fewer tools and fewer expert skills.  This reduction in tool-kit complexity has allowed the project approach to transition towards a team-based culture where individuals working towards a shared goal can collaborate in a collegiate way.

AGILE tools are so-called because they give development teams more adaptive ways of working that produce results faster.  One aspect of the ‘agility’ comes from using bigger building blocks of code that can be shaped to fit a variety of needs and don’t require programmers to start with a blank page. 
The SCRUM model has evolved around the need for members of development teams to keep in close contact with one another and stay on the same page.  More frequent reviews of progress – perhaps on a weekly, even daily basis – ensure that the streams of development stay on track and can be regularly checked, so that the quality of the end-product is as it needs to be and project managers can verify time and again that ‘what is being produced’ is ‘what the internal client wants’.

AGILE tooling does not provide developers with everything they need in one box.  To improve the integration and use of third-party expert tools, software providers today normally provide an Application Programming Interface (API) so the various tools are EASIER to integrate with one another.  While this is EASIER, it remains a major overhead on developments, as each API tends to be different and developments inherit any limitations presented by the ‘black box’ of functionality provided by the best-of-breed software they are integrating with.  This inability to make one application interplay with another using the same properties means that developers find themselves constantly having to come up with work-arounds to produce the best end-product.

SCRUM has its limitations too. While it’s great for developer teams to work towards a common goal and take more of an active interest in making sure their contribution is producing the best ‘total’ outcome, developers continue to work in back-rooms and continue to lack an appreciation of the domain area they are building applications for.  Any developer knows the ‘last mile’ (i.e. managing the interaction with the user and stakeholder communities) is the most difficult part of an application to get right as different user groups will have very different expectations and needs from the applications they use. 
Like the traditional waterfall approach, AGILE SCRUM still relies on coding, and that’s a problem.  Not only does it create more post-development overhead, with more bugs built into the end-product and more testing, tuning and integration necessary; it also means that resulting applications continue to be expensive to maintain and adapt.

The New Age of Codeless Development

So-called ‘codeless’ development software describes Integrated Development Environments that displace the need for applications authors to code.  The leading enterprise platform for this approach is Encanvas.  All of the building blocks required to develop enterprise applications are built into a common IDE. ‘Design Elements’ - as they are known - adopt similar properties and have been designed to provide a sufficient level of tailoring to fit the majority of needs. 

At one time, it was thought that codeless platforms would only be good enough for piloting projects or serving the ‘little application’ needs of the enterprise where larger platforms were too costly to use.  The last decade has shown, however, that IDEs like Encanvas are capable of delivering enterprise-wide, pan-regional and cross-industry applications at enterprise scale and levels of performance.

In the same way that a step-change in enterprise web platform standards at the turn of the century led to the evolution of SCRUM methods for software applications authoring, use of codeless tools has led to the formation of near-real-time applications authoring methods.

With codeless software, the removal of the programming activity from projects (and the removal of best-of-breed tools and the need for APIs) means that users and stakeholders can be engaged in development workshops where application designs are iterated in concert with them; they are able to see and understand what’s being authored because the ‘end-product’ is formed in front of their eyes.

Codeless software has dramatically reduced the number of tools and the variety of skills needed to author applications, but its biggest impact is found in the leave-behind: resulting systems can be easily adapted as business requirements and user needs change.  Neither are there any upgrade costs associated with the software; platforms like Encanvas enable hundreds of applications to be authored and do not charge for new improvements to the IDE.  This means IT leaders seeking to reduce the number of technologies they support in their enterprise IT architecture can shrink their footprint from hundreds of tools to fewer than ten.

The by-product of this change in technology tooling and methods makes a big impact on IT budgets. The expectation of IT leaders now is that IT operations should cost no more than 1% of operating profits. Deployments delivered through codeless workshop development methods demonstrate a tenfold reduction in time-to-market and a similar reduction in onward IT costs.

Will the move from SCRUM to CODELESS be bigger or smaller than the move from WATERFALL to SCRUM?

I would argue the move from SCRUM to CODELESS has a greater bottom-line impact but a lesser change impact compared to the original transition from WATERFALL to SCRUM for the following reasons:

  1. IT departments have already shrunk their internal IT teams and now use contractors for much of the heavy lifting in software development and support. What the move to codeless does is internalize the support of IT systems – making IT more adaptive and better able to support the long tail of demand for new or better enterprise applications – while displacing the use and cost of contractors. 
  2. IT departments are already using IDEs and reducing their use of specialist tooling.  Platforms like Microsoft .NET and Ruby-on-Rails have done much to consolidate tooling for developers.  What Codeless platforms do is FURTHER consolidate tooling.
  3. IT leaders are already finding ways to embed IT development skills ever closer to the domain expert communities that use applications.  What codeless software does is take development out of the back-room into the front-office workshop so that developments produce faster, better results that the users and stakeholders WANT to use because they were instrumental in the design of applications they use.
  4. IT leaders are already rationalizing the numbers of technologies they support in the enterprise technology stack.  What codeless software does is further reduce the footprint of technologies employed across the enterprise to install more commonality in development methods and tools.
  5. Arguably the biggest change factor with codeless workshop development lies in the impact of an IT department being able to keep up with the ideas and process changes of the enterprise.  This has surprisingly big organizational design implications.  For decades people in the business have been able to BLAME IT for failing to support their domain needs.  Consider what happens when suddenly it does: no longer is IT the scapegoat.  Also consider the impact of embedding software development into improvement teams, as some innovative companies like AUDI GROUP already do.  The analysts that scope and build the applications are PART of the improvement team, incentivized by the impact of the process changes they manifest rather than being paid for ‘writing code’. Departmental and operational managers will inevitably become more responsible and accountable for IT as part of their function.  They have no choice but to consider the performance of the IT they use as part of their portfolio of responsibility.  The role of the CIO becomes one of guardianship of IT resources rather than the deliverer of every single IT initiative.

  Keywords:  Waterfall, Agile, Codeless Software Development, CAAD, Encanvas, Software Development Methods, ALM, Application Life-cycle Management

Monday 19 May 2014

Secrets of Going Codeless

How to Build Enterprise Apps without Coding

Codeless Software Development


Building applications without code still requires methods and skills. In this article I describe the methods that Encanvas has used for well over a decade to successfully deliver situational application developments.


The first application created on Encanvas in a codeless form was a simple appraisals form.  That was in 2002, I seem to recall.  Since then, applications authored ‘codeless’ have grown in size and scale to include several enterprise-wide systems, a few pan-regional systems and even industry-wide systems.

In the early years of Encanvas, most of the applications were created by Andrew Lawrie (the creator of Encanvas) himself.  It was impressive enough that Andy could take an enterprise requirement and just 'make it happen' all by himself, but since then the community of Encanvas authors and architects has grown exponentially.

The backgrounds of authors, too, are wide and varied.  I've produced a few systems and my background is generally sales and marketing.  A couple of the guys are 'proper developers' who have sold their soul to the codeless god, while many are domain expert consultants, entrepreneurs, project managers and analysts.  What do they have in common?  They can all use a mouse, I guess!

As the scale of applications gets bigger, and the variety of author skills broadens, it becomes more important to adopt methods to optimize the authoring process. No question, it is possible to produce results with codeless software without method, but that places 100% reliance on the talents of the practitioner.

CAAD came out of a series of 'interventions' to streamline the codeless authoring process; together they hang together pretty well and perform certain key tasks well (like requirements gathering).

Hopefully you find the linked document interesting and easy enough to absorb.  If you have any questions, I'd be happy to field them.

Ian.

Keywords:

Enterprise Applications, Codeless Software Development, CAAD, Encanvas, Situational Applications, IT Strategy, Application Life-Cycle Management

Friday 16 May 2014

CAAD Codeless Situational Applications Development Methods for Analysts

CAAD Methods

I produced this article to describe how analysts can build enterprise-scale situational applications time and again without coding based on the methods adopted by the team at Encanvas over the past decade.  The methodology hasn't changed too much over the past few years but we continue to learn from customers and practitioners.

It's only in the last three or four years that we've honed the use of Outcome Driven Innovation (ODI) and Blue Ocean methods as part of this approach.

One of the factors that we've found critical to success is to specify the requirements, outcomes, user group structures, reporting needs (RPRS Analysis) etc. BEFORE jumping into a meeting room and workshopping solutions.

It's great to work in a collaborative way, but having the structures and outcomes buttoned down means that analysts can get to work on the boring 'basic modelling' of a solution and have something to walk through before users and stakeholders start iterating their design.

I hope you find it helpful!

I.

Keywords:
Codeless software development, CAAD, Software Development Methods, Agile Software Development Methods, Encanvas, Enterprise Situational Applications Development Methods

Friday 9 May 2014

Mining Sub-Surface Spend Economies - The role of Federated Spend Analytics Platforms

Sub Surface Spend

Slideshare presentation on sub-surface spend




In this presentation I argue the case for federated spend management software tools as a vehicle to affordably source sub-surface spend economies in organizations where procurement teams have harvested the low-hanging fruit and now need to consider digging deeper into spend behaviors to achieve the next stage of procurement economies.

Key words:
Sub-Surface Spend, federated spend analysis platforms, encanvas, situational applications, data mashups, actionable insights, procurement strategies

Tuesday 6 May 2014

$ub-$urface $pend - Mining the Untapped Spend Economies

All Seeing Eye Image for Sub-Surface Spend Economies Article


How Federated Spend Analytics Software taps into hard-to-reach sub-surface spend economies with its ‘all seeing eye’ and fine-grained analysis

Federated Spend Analytics Software is cloud-based business software that places a lens across back-office administrative computer systems to expose areas of potential spend economy. In this article I explore how the latest generation of cloud-based applications platforms are facilitating fine-grained analysis of spending behaviors to expose new opportunities for cost reduction and efficiencies in the procurement function.

People not acquainted with procurement so often see the discipline as a reactive back-office function, but anyone who’s dealt with high-performing procurement teams knows that it’s one of the few discipline areas tasked with looking ACROSS the silos of operation that tend to build up in organizations, giving procurers the opportunity to invest time in seeking better ways of working and finding cost reductions through more thoughtful buying approaches.

Although there is no single journey to better ways of buying, procurement teams will quite sensibly start most cost reduction initiatives by tackling the highest spend areas first.  Next come the poorly performing or poorly controlled areas of spend. A third tranche of cost economies is then gained through rationalization and compressing the supply chain (these are typically more complex ‘root and branch’ changes that demand more transformation and change in the way the enterprise works, and so quite sensibly fall further down the transitional scale even though the rewards can sometimes amount to a step-change). 

As procurement teams venture through this life-cycle, they change their own DNA and move from being reactive to proactive.  Mature procurement teams might well have delivered ALL of these initiatives and are now on the march to find new sources of stakeholder value, cost reduction and improvement (think of a mining company that initially searches out mineral reserves on the surface, then moves deeper and deeper through the strata in search of new reserves).  Contemplating this next-stage sub-surface procurement initiative is where many procurement leaders now find themselves – but they also know that, to achieve the best results whilst keeping their own operational costs down, they need better exploratory tools for their teams.

But for what benefit?  The obvious areas of cost economy in sub-surface procurement initiatives come from better understanding the ‘when, where, hows and whys’ of procurement behavior, benchmarking approaches against ‘best practice’ (sometimes found in a different industry sector), and then installing new behaviors.  Unfortunately, the cost of sub-surface procurement changes can rack up unless procurement professionals gain the tools to more easily speculate on areas that look, on the surface, to be efficient.  Time-to-value in turning potential weaknesses in procurement into rewards in terms of cost savings becomes a key measure of success.

One hurdle that most procurement teams encounter when transitioning from a ‘reactive’ to a ‘proactive’ procurement stance is the need to source richer spend insights across all disciplines.  This is the role of spend analytics.  But where spend analytics software tools generally fall down is not in their scope of use, or ability to present insights, but in harvesting existing data for analysis. 

Few organizations truly operate ONE computer system across their business.  Even companies that have committed to a core technology stack - such as SAP, ORACLE, IBM or MICROSOFT - will often have multiple ‘systems’ of their chosen platform to cater for disparate operating divisions, geographical variances, group companies (etc).  This means SOMEHOW data needs to be harvested, mashed together and normalized, then presented in a data-mart that allows it to be interrogated.
Like most systems that revolve around ‘people being curious’, good Spend Management systems don’t simply regurgitate data in the form of charts and dashboards; they provide a platform for users to ask new questions about what, where, when, how, why and with whom the enterprise spends money.
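
To make the ‘harvest, mash together and normalize’ step concrete, here is a minimal sketch in Python using the pandas library. The system names, column labels and exchange rate are invented purely for illustration, and this is not a description of how Encanvas works internally:

```python
# Illustrative sketch (not Encanvas itself): normalizing spend records from two
# back-office systems into a single data-mart table that can be interrogated.
import pandas as pd

# Hypothetical extracts from two divisional ERP systems, each with its own labels.
erp_a = pd.DataFrame({
    "Supplier": ["Acme Ltd", "Brightway"],
    "Cost Centre": ["Logistics", "IT"],
    "Amount GBP": [12500.00, 8300.00],
})
erp_b = pd.DataFrame({
    "vendor_name": ["ACME LIMITED", "Clearline"],
    "dept": ["Logistics", "Facilities"],
    "spend_eur": [4100.00, 950.00],
})

EUR_TO_GBP = 0.85  # assumed exchange rate for the example

# Transform each extract to a common schema before loading it into the data-mart.
mart = pd.concat([
    erp_a.rename(columns={"Supplier": "supplier", "Cost Centre": "cost_centre",
                          "Amount GBP": "amount_gbp"}),
    erp_b.assign(amount_gbp=erp_b["spend_eur"] * EUR_TO_GBP)
         .rename(columns={"vendor_name": "supplier", "dept": "cost_centre"})
         [["supplier", "cost_centre", "amount_gbp"]],
], ignore_index=True)

# Light normalization so the same supplier isn't counted twice under different spellings.
mart["supplier"] = (mart["supplier"].str.upper()
                                    .str.replace(r"\s+(LTD|LIMITED)$", "", regex=True)
                                    .str.strip())

# Now 'what, where and with whom' questions can be asked of one coherent table.
print(mart.groupby(["supplier", "cost_centre"])["amount_gbp"].sum())
```

The point of the sketch is simply that once records from different systems share one schema and one set of supplier labels, curiosity becomes cheap: a new ‘what, where, when’ question is a one-line query against the data-mart rather than a new reporting project.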

Federated Spend Analytics platforms are the newest breed of performance tooling for procurement professionals operating in large and complex organizations. Federated platforms should as a minimum provide the following capabilities:
  • Harvest data from existing applications without requiring manual interventions
  • Transform and normalize data to produce coherent data perspectives
  • Form a unifying data-mart that auto-refreshes with the latest data from back-office systems
  • Present user interfaces supporting interactive charts, maps, dashboards, side-by-side comparison tables and other visualization tools to distill key facts
  • Support the ability for users to create their own KPIs, alerts, chart and map views
  • Install RASCI role accountability to the spend analysis process
  • Provide an always-on (secure) platform accessible via browser-based devices
  • Scale to meet enterprise needs

By implementing a Federated Spend Analysis platform, what should procurement teams expect?  While each organization will benefit differently from adopting a federated system here’s a quick run-down of some of the more common ROI outcomes:

Add
  • Multi-threaded sourcing and multi-linking of data; Tooling to harvest data from multiple sources and augment automation of information flows to route data and place it into a data-mart
  • Ready-to-use design elements; Tooling that provides self-service creation of situational applications and analytical dashboards, interactive charts, maps etc. without coding.
  • The means to be curious; to ask ‘when, where, how and why’ questions without hugely increasing frictional costs associated with the enquiry process itself (such as time spent authoring reports, sourcing data, manual data entry and data crunching, time spent working spreadsheets etc.).
  • The means to expose fine-grained economies; which, by minimizing enquiry and analytical costs, become economic to act on.

Grow
  • Opportunities for sub-surface procurement economies by distilling fine-grained operational behaviors (made affordable by reducing enquiry/reporting/analytics unit costs).

Reduce
  • Time-to-value; expose weak-points in procurement faster by having the ability to interrogate data rather than creating and running ad-hoc reports time and again in the hope that a sub-optimal behavior will be uncovered.

Remove
  • Reporting and analytical overheads
  • Software change/upgrade costs
  •  Technology complexity; save on the number of tools needed to harvest, analyze, report, share insights and related cost of authoring situational applications to ACT on learning lessons.


If you have implemented projects/tools to source sub-surface spend economies it would be great to hear about them.  Otherwise, if you’ve yet to explore the potential for sub-surface spend projects I would be happy to talk over the subject with you.


I.

Thursday 1 May 2014

The fastest route to achieving operational excellence lies in re-mastering your enterprise DNA 'digitally' as a 3D data cube

Managers driving HR, compliance and performance initiatives need 3D (CUBED) federated data perspectives – but how do you get clarity when data is siloed in hybrid enterprise software applications?  And why does it matter?

As an organizational designer I spend much of my week working with business owners and line-of-business managers on leveraging people assets, reducing costs – and, more recently, installing more robust policy governance and growing performance.  One of the more challenging aspects of my role is helping non-IT people get their heads around the importance and value of looking at data in multi-layered perspectives rather than in ONE flat view.

Consider process accountability, for example: all organizations have capabilities (things they do to create customer value and sustain their business continuity) supported by a cascading tree of mission-critical processes, then sub-processes, then process steps.  Each process step SHOULD have a matrix of role accountability, responsibility, things to be done and so on.  From an organizational design perspective I need to know EVERYTHING about the fine-grained action that happens right at the bottom in order to make sure it happens, and that it happens right.  That means I want to look at each action from the perspective of being a customer (customer science), being a shareholder, its contribution to strategy (performance), how often it doesn’t happen and the consequence for customer value produced (Six Sigma), the impact on policy governance and risk (compliance)…

…and today, in most businesses, you can’t do that. 

So what? Why is it so important to digitize the DNA of an enterprise?

With a 3D digitized view of your enterprise, decisions can be made through more useful insights.  Leaders and managers can ‘ask their own questions’ and be curious about how the business works. When things don’t happen as they should, it’s easier to spot.  NEW LEARNING that results from customer and stakeholder feedback can be directed to the right places in the enterprise where these lessons can be applied. Digitizing the enterprise is fundamental to understanding, is fundamental to operational excellence, compliance, performance, success!

All too often, business leaders want the outcome of projects ‘by tomorrow’ (quite understandable!) and so the last thing they ordinarily want to do is start at the very top of the tree formalizing a data model of the enterprise.  They’d rather pick the low-hanging fruit.  They want to see the swishy dashboards – and they don’t care how the data gets there.  But if you want accurate perspectives on data, and you want to be able to examine topics (and answer new questions), then SOMEONE has to build up the DNA of the enterprise into a reliable data structure.  When this preparatory work isn’t done, consumers of the data are likely to misinterpret what they’re seeing.  This can lead to all sorts of bad outcomes and consequences. 
Invariably, taking these short-cuts can move things forward quickly to begin with, but every successive step takes longer, and the absence of a unified data-mart to interrogate means that every question requires a new report, a new spreadsheet, another new process… aaargggghh!

One time my satnav took me the ‘quick way’ to Edinburgh late one weekday as I made my way to a conference from my home in the Midlands, England.  A little after 11pm I found myself crossing a cattle grid on top of a wind-swept hill with cows watching my progress on all sides!  Some shortcuts are not worth taking.

What people do ‘get’

Appreciating how data works in cubes can be thoroughly mind-blowing, but what people do ‘get’ are spreadsheets.  Most business people work with spreadsheets – and, as everyone knows, spreadsheets are great to use and very versatile for looking at data.  Whilst they can also be exploited and multi-layered using pivot-table features, most folk will use spreadsheets in a flat format where each spreadsheet compares ‘x’ data by ‘y’ data.  That’s great – to a level, but when you want to examine ‘x’ by ‘y’ from the perspective of ‘z’, spreadsheet skills start to separate the experts from the rest.
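
For readers who think in spreadsheets, here is a small, hypothetical illustration of that ‘x by y from the perspective of z’ idea, written in Python with the pandas library; the process, region and role data are made up purely to show the shape of the problem:

```python
# Illustrative sketch: flat view versus cubed view of the same records.
import pandas as pd

actions = pd.DataFrame({
    "process":    ["Order to Cash", "Order to Cash", "Procure to Pay", "Procure to Pay"],
    "region":     ["UK", "DE", "UK", "DE"],
    "role":       ["Finance", "Sales", "Procurement", "Procurement"],
    "exceptions": [3, 7, 2, 9],   # e.g. how often a process step didn't happen as it should
})

# Flat spreadsheet view: 'x' (process) by 'y' (region).
flat = actions.pivot_table(index="process", columns="region",
                           values="exceptions", aggfunc="sum")

# Cubed view: the same 'x' by 'y', now sliced by 'z' (the accountable role).
cube = actions.pivot_table(index=["role", "process"], columns="region",
                           values="exceptions", aggfunc="sum")

print(flat)
print(cube)
```

The flat view answers one question; the cubed view lets you re-slice the same records by a third dimension without building a new spreadsheet each time.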

Organizational Designers need software too!

I find that most enterprises don’t hold data in very useful ways.  IT systems are purchased by each department or discipline – and every application manages its own data in its own way, under different labels.  That’s a problem for me as an organizational designer.

It’s not until you have a 6,000-column by 8,000-row spreadsheet that you realize, as an organizational designer, you haven’t got the tools you need to become ‘operationally excellent’.  Many stakeholders have a deep interest in data that describes an enterprise: what it does, how it delivers what it does, what it produces. Fundamental to Compliance, Governance, Risk, Performance, Human Resources Management and many other disciplines is knowledge of:

How is the organization structured (in terms of legal entities, organizational units, roles…)?
What is its strategy?
How is the strategy executed?
What are the processes that are critical to its success?
What ACTIONS must be done to achieve the plan and run the processes?
What is the stakeholder value (particularly the customer value) produced?

All of the above requires landscape views of data that aren’t normally accessible to organizational designers, because the data is not held in a form that lets you slice and dice it.
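
As a thought experiment, here is a minimal sketch of what such a slice-and-dice-able structure might look like, written in Python. It is an assumed, illustrative model (capability, process and action records carrying RASCI-style accountability), not the actual ENCANVAS schema or any particular product’s data model:

```python
# Illustrative sketch: a tiny 'enterprise DNA' record structure that can be
# re-sliced by role, process or capability without building a new report.
from dataclasses import dataclass

@dataclass
class Action:
    capability: str
    process: str
    name: str
    accountable_role: str   # the 'A' in RASCI
    responsible_role: str   # the 'R' in RASCI

actions = [
    Action("Customer Service", "Handle Complaint", "Log complaint", "Service Manager", "Agent"),
    Action("Customer Service", "Handle Complaint", "Issue refund", "Finance Manager", "Agent"),
    Action("Fulfilment", "Dispatch Order", "Book courier", "Logistics Manager", "Warehouse Op"),
]

# One landscape question: which actions is each role accountable for, across all silos?
by_role: dict[str, list[str]] = {}
for a in actions:
    by_role.setdefault(a.accountable_role, []).append(f"{a.process} / {a.name}")

for role, owned in by_role.items():
    print(role, "->", owned)
```

The same list of action records could just as easily be grouped by capability or by process, which is the point: once the structure exists, each new question is a query, not a project.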

The answer IS NOT Business Intelligence 

Scale data analysis projects to enterprise proportions and we have to deal with the thorny issue of data being held in operational silos – ‘data silos’.  The last few years have seen many different forms of so-called ‘business intelligence’ software platforms enter the market to solve the issue of helping leaders and managers understand what’s happening in their business by interrogating back-office data silos.  The problem with all of these systems is that they consistently fail to answer the ‘so what?’ question.

You see, most things that leaders and managers want to know they don’t measure, because they’re new questions (were they old questions, they’d already have systems and people in place to manage the process!). That means most actionable insights are answers to new questions. The problem for Business Intelligence tools is that it takes SO LONG to formalize the root and branch data structures needed to reliably get at data from wherever it’s being held that the approach becomes inflexible.

Moving forward

To make organizations BETTER BY DESIGN, business leaders and organizational designers MUST be able to understand how their organization works in 3D, gaining the perspectives on operational activity needed to deliver optimal customer value, meet compliance expectations and drive performance.

It’s said ‘there are many ways to skin a cat’, but the solution we’ve developed here is to use a private-cloud platform called ENCANVAS to produce an adaptive lens across existing data repositories that removes the challenges of ‘getting hold of data’ and ‘organizing data into cubes’ through its built-in data connectors and transformation tools.  What gets produced is a personalized data cube of ‘always purified data’.  ENCANVAS also offers a suite of compliance, performance and operational analytics (reporting) tools that allow senior managers and organizational designers to construct their own applications and views on data.

We’ve used the ENCANVAS platform to construct our own Organizational Design tools to facilitate the production of 3D cubed views of data for Human Resources Management, Performance and Compliance.
The reason ENCANVAS works when other systems don’t is because it works like LEGO® bricks and removes the need for IT projects or coding to produce the required outcomes.  It humanizes the process of shaping technology to do our will rather than the other way around.  You can ‘shape the bricks’ in workshops with stakeholders to make sure the outcomes work for the organization and the users.

Of course, a few rules apply:

Garbage in, garbage out – If your operational systems produce trashy data, ENCANVAS can purify and treat most of it, but if the data isn’t being captured (or captured correctly) that can become a problem.

Technology is never the answer – ENCANVAS, and platforms like it, are just tools; by themselves they never solve anything.  What’s needed are domain experts that fully know their processes and how the processes need to work.

Ian Tomlin is European Regional Manager for US Tech Solutions Inc. and advises on federated solutions for compliance and performance, customer science, actionable insights and organizational design.