How To Build A Successful Innovation Function

Digital and innovation are getting lots of attention from public servants for good reason. Public servants work within structures and institutions that were designed years or decades or even longer ago, and which have often not evolved as the world around them has changed.

While the potential for transformation is large – there are many areas in which public services haven’t been able to deliver optimal outcomes – some of the reasons the potential exists are quite subtle.

One example: the economics of the digital era differ from those of the industrial era. In the industrial era, cost-effective delivery of services to large numbers of users was based on economies of scale. The economics of the digital era make mass personalisation possible – the Amazon bookstore I see is not the same as the Amazon bookstore you see – and social media is based on communities taking a single construct and turning it into something meaningful for themselves (my Facebook community is different from the Facebook communities created by the NSW Police).

The opportunity is for public servants to create new public services that deliver better outcomes either to existing users or beneficiaries, or users who previously couldn’t or didn’t access the services.

A starting point is to create a digital innovation capability, and below is a checklist of the ingredients of such a capability.

1. Define the goal. Why are you innovating? What is the service or policy outcome you want to improve? An approach which starts from ‘we want to be innovative’ or ‘we need a more innovative culture’ is too vague. Better goals offer a clearer picture of the future. Some say the starting point is culture (‘culture eats strategy for breakfast’). Actually, the starting point is purpose. John F. Kennedy didn’t declare he wanted to develop a space-exploration-oriented culture; he declared he wanted to send an American safely to the Moon before the end of the 1960s. A purpose provides the necessary focus for efforts, and enables measurement of whether progress is being made (and learning and iterating).

2. Define the remit. How radical is innovation allowed to be? The choices are: (a) sustaining innovation, making existing services more convenient, faster, or cheaper by focussing on process – how the services are delivered, or by whom; (b) breakthrough innovation, delivering better outcomes that are potentially radically cheaper by redesigning the services themselves; or (c) transformational innovation, which involves redesigning services plus the supporting business model.

3. Senior level sponsorship and leadership. Innovation requires the support to do things differently, which may not be welcomed by all stakeholders, and ‘air cover’ to allow a process of discovery-testing-iterating (experimentation) where things will not always work or may happen more slowly (require more cycles) than expected.

4. Money or a mandate. There needs to be a reason (and permission) for people to want to work with you. One reason is an ‘authorising environment’ that allows the work to happen. The other is money – seed funding – for public servants to come forward with problems they want to see solved.

5. Process and metrics. Innovation doesn’t happen because of good intentions, and it isn’t just about producing ideas. Innovation happens only when an idea (actually a hypothesis) has been through a process of testing, iterating, and scaling, can be used by the target audience, and delivers outcomes that can be measured and compared against the previous service. This requires a process and metrics.

6. Capability. There will be many people in your organisation who can become skilled at digital innovation. As that suggests, however, digital innovation is a skill requiring experience and expertise. A good approach to capability building is a ‘master-apprentice’ model: hire people with digital innovation expertise and experience who can work collaboratively with subject matter experts and others. That process not only enables innovation, but trains the next group of innovators. It also enables digital innovation efforts to scale across your organisation, rather than relying on a single team whose expertise will necessarily need to be rationed.

Now go forth and innovate. And remember you have a responsibility to those who come after. We are all at the early stages of this exciting journey. Share your learnings.

Disrupting Financial Services

Digital technology and business models are widely expected to disrupt financial services, although even after two phases of digital innovation this hasn’t yet happened. In fact the real opportunity for disruption is to reinvent what the ‘service’ in financial services means – for example, by reducing the worry and drudgery involved in managing money. What if a bank offered a service that didn’t just help individuals become financially secure, but actually did the job for them?

This doesn’t mean a service that makes you rich, but one which provides financial security. For most people financial security is a long way from rich – it’s having enough money to eat, pay bills, deal with emergencies, go on holiday, educate the children, and retire comfortably.

So far banks have treated digital technology as a way to improve convenience: first by offering online banking, and then with some admittedly pretty neat apps that help with things like buying property or managing share portfolios. In this first phase of innovation, the same old banking products and services got an online front-end.

Innovation is now moving to a second phase based on removing ‘friction points’ – anything that creates effort and delay without adding value. Payments – or to be really pedantic, the actions required to pay – are friction points that stand between a consumer and a purchase. Making purchasing faster and easier is a logical evolution in convenience from the first phase. And the significant payoff is that companies get to accumulate huge data sets (‘big data’) about customers and their purchasing behaviour, which enable them to sell more of their own products or sell advertising.

There have been signs future innovation will be social – peer-to-peer lending is already big business – but an alternative is to reinvent the ‘service’ in financial services.

Today’s retail banks are little changed from fifty years ago. Banks keep your money safe, enable you to access it on demand, lend you money when you need it, and provide buy-now-pay-later services (credit cards). These are structured as discrete products that are often complex and need to be separately bought, or applied for – with the real possibility a customer will not be allowed to purchase what they want.

In terms of service, banks generally think it means ‘customer service’ – being friendly and getting it right first time. A higher-value definition of ‘service’ is doing something on your customers’ behalf so they don’t have to do it themselves.

The starting point for reinventing ‘service’ is to look at how people live and what ‘job’ a financial service might do for a customer. Surveys consistently show most people are worried about money some or a lot of the time. 30% of Australian households live paycheque to paycheque. 17% of Australians could not find $500-$1000 if needed in an emergency. 43% of women and 38% of men do not consider themselves financially secure. Compulsory superannuation is intended to ensure individuals have sufficient retirement savings and works well for those in permanent jobs, but excludes others, like those who work for themselves. And the future looks increasingly uncertain, even for those in permanent jobs, with less employment security and more casual and freelance jobs.

This totally new financial service would, regardless of how much I earn, manage my money for me and make me financially secure. It would mean I never had to pay or even look at a bill yet knew I was not being overcharged; had money available as and when I needed it; and knew I was going to have enough money for retirement. In other words, the financial services the wealthy take for granted, with their teams of personal accountants and financial advisors, would be democratised and available to everyone.

This service would genuinely disrupt the existing financial services industry whose organisational structures, profitability, IT systems, and processes are hard-wired around existing products.

Apple Pay may start this disruption. Apple are not known for simply following trends, so it’s possible Apple Pay will start by making payments insanely easy, then, say, launch a banking platform – an App Store where this totally new financial service would be available via apps which tuned the service to the needs of different customer segments.

Trust is critical in financial services, but customers already trust Apple enough to have the details of 800m credit cards on file (a number growing rapidly), and Apple already position themselves as a trusted provider of apps.

Of course I might be quite wrong about the future direction of Apple Pay but unmet needs and changing circumstances have a habit of attracting innovation and disrupting existing competitors….


Two Problems With Core Banking Projects

The big deal in IT for Australian banks is core banking. ‘Core banking’ is shorthand for replacing old product systems with more modern and flexible versions. Most Australian banks are doing this – CBA spent $2bn over five years – but the ones who reap the benefits over the long term will be those who address two strategic problems.

The premise for core banking projects is simple. Each bank has dozens of different product systems, many over 15 years old, each hard-coded to support a different product or a group of products. This is expensive to manage and makes it difficult to innovate across product sets. And the world has changed. Rapid innovation is now essential rather than optional.

To successfully upgrade product systems, two strategic problems need to be addressed: firstly, that the system design will make assumptions about the future competitive landscape; and secondly, that enabling product flexibility can lead to management complexity.

The problems are based on learning from the experience of telcos who upgraded their billing systems in the early 1990s. When telcos were deregulated, competition was based on long-distance call pricing. Regulated telcos had made vast profits from long-distance calling, and competitors rapidly went after those margins as soon as markets were deregulated.

To compete, incumbent telcos upgraded their billing systems to enable flexible pricing. You want to offer cheap calling to China after 7pm? Sure. Once the new systems were in place, setting up such a plan took about 15 minutes and required no programming expertise. This was fantastic for competing with long-distance calling competitors.
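The step-change the telcos achieved came from expressing pricing as configuration rather than code. A minimal Python sketch of that pattern (the rule table, rates, and function names are all hypothetical, purely to illustrate the idea):

```python
from datetime import time

# Hypothetical rule table: each rule is data, so launching a new plan
# (e.g. cheap calling to China after 7pm) means adding a row, not writing code.
RULES = [
    # (destination, active from, active to, cents per minute)
    ("China", time(19, 0), time(23, 59), 5),    # discounted evening rate
    ("China", time(0, 0),  time(19, 0),  40),
    ("*",     time(0, 0),  time(23, 59), 60),   # default long-distance rate
]

def rate_call(destination, start, minutes):
    """Price a call, in cents, using the first rule that matches."""
    for dest, lo, hi, cents in RULES:
        if dest in (destination, "*") and lo <= start <= hi:
            return minutes * cents
    raise ValueError("no matching rule")
```

A product manager edits the table; the rating engine never changes – which is exactly why such systems were so flexible within the products they were designed for, and so rigid outside them.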

It was not so fantastic when fixed line and broadband competition emerged. The new billing systems proved totally inflexible in enabling new plans and pricing for non-calling products. This was because they assumed fixed lines would continue to be non-competitive, and broadband simply didn’t exist when the systems were designed. When I managed fixed line and broadband products in the late 1990s, it took 12 months and over $1m (a lot of money in the 1990s) to upgrade the new billing system to support new products and pricing.

Core banking systems run the same risk. If they focus on enabling flexibility in deposits and lending pricing – which, don’t get me wrong, is a useful thing – they create an assumption that the basis of competition will be the pricing of existing products.

What happens if the basis of competition changes – for example, moves to the structure of the products themselves? Banks with new systems may be no more able to innovate and compete than those with 20-year-old systems. And both will be at a disadvantage to new competitors with systems built around the new product or service.

The other problem telcos found is that pricing flexibility quickly becomes difficult to manage. We had 14,000 calling plans for 1m customers, so it was difficult to work out quickly whether an innovation was good enough for enough customers to be worth pursuing. And if it was, and we launched a new plan, sales and service staff needed to be able to answer the ‘will I be better off on this new deal’ question for any given customer who contacted them. (Remember, too, that flexible pricing is a two-edged sword: customers perceived telcos were deliberately making pricing complex to make more money, which created a high level of distrust.)
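The ‘will I be better off?’ question is, at heart, a small computation: price the customer’s actual usage under every plan and compare. A hypothetical sketch (plan names, fees, and rates invented for illustration):

```python
# Hypothetical plan book: name -> (monthly fee, included minutes, $ per extra minute)
PLANS = {
    "Standard":   (20.00, 0,   0.40),
    "Talk-Heavy": (45.00, 300, 0.25),
    "New-Deal":   (35.00, 200, 0.30),
}

def monthly_cost(plan, minutes_used):
    """What this customer's usage would have cost on a given plan."""
    fee, included, per_minute = PLANS[plan]
    return fee + max(0, minutes_used - included) * per_minute

def better_off(current_plan, new_plan, minutes_used):
    """Positive means the customer saves by switching to new_plan."""
    return monthly_cost(current_plan, minutes_used) - monthly_cost(new_plan, minutes_used)

def best_plan(minutes_used):
    """The answer front-line staff need on the phone: cheapest plan for this usage."""
    return min(PLANS, key=lambda p: monthly_cost(p, minutes_used))
```

The hard part at telco scale wasn’t this arithmetic but running it across 14,000 plans and 1m customers, with staff answering live – which is why supporting analytics tooling matters as much as the pricing flexibility itself.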

So is it a case of ‘be careful what you wish for’?

Not quite. The good news is that IT systems in 2014 can be architected very differently from those of the 1990s. Banks who recognise these challenges can build much more flexible systems – not by anticipating the nature of future competition (not possible), but by creating open architectures that make future development simpler, faster, and less expensive, and by adding sophisticated analytics tools that make the task of managing complex pricing much easier.

New IT systems need to enable rather than constrain innovation. The key is to avoid monolithic systems, and create open architectures, with supporting analytical tools, that enable flexibility and innovation across any product.
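One way to picture the difference between a monolith and an open architecture is a core that knows nothing about individual products, with each product plugging in its own pricing logic. A deliberately simplified, hypothetical Python sketch (all names invented):

```python
# Hypothetical plug-in registry: the core system carries no product knowledge.
PRICERS = {}

def register(product_type):
    """Decorator that plugs a product-specific pricer into the core."""
    def wrap(fn):
        PRICERS[product_type] = fn
        return fn
    return wrap

@register("deposit")
def price_deposit(balance, rate):
    # Product teams own this logic; the core never changes.
    return balance * rate

@register("loan")
def price_loan(principal, rate, margin):
    return principal * (rate + margin)

def price(product_type, **terms):
    """Core entry point: a brand-new product is a new module, not a core rewrite."""
    return PRICERS[product_type](**terms)
```

The design choice is to replace the 1990s assumption (‘competition is about call pricing’) with no assumption at all: when the basis of competition moves to a product that doesn’t exist yet, it arrives as one more plug-in.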

Did Big Data Just Win The America’s Cup?

The astounding comeback by Oracle USA to win the 34th America’s Cup against Emirates Team New Zealand last week has created an intriguing mystery. Oracle came back from 8-1 down in a first-to-nine-points competition to win 9-8. Since then, media and online forums have buzzed with theories, ranging from fair to foul, as to how Oracle USA pulled off a victory described even by the official America’s Cup website as “one of the most improbable in the history of sport”.

The competition started on 7 September and was scheduled for two races per day, provided winds were not too strong for safe racing. By 10 September, Oracle was 4-1 down. They promptly postponed the next race, allowable under the rules, and fired their tactician, replacing him with the most successful sailor in Olympic history, Ben Ainslie (link).

Nine of the subsequent races were postponed, mostly due to winds being too strong. Adding in scheduled rest days, this meant that from 10 Sept, when Oracle were 4-1 down, to the close of 21 Sept, when the score was 8-5 still in ETNZ’s favour, there were 4 days with only one race and, crucially, 5 days with no racing at all when Oracle (and ETNZ) could practise and improve their performance. Oracle then totally dominated the last four days of sailing, winning all six races to take the Cup 9-8.

After the initial races, it was widely acknowledged, including by Oracle (link), that ETNZ had a major upwind advantage due to their superior tacking and foiling skills. Yachts can’t sail directly into the wind so on the upwind leg of each race it is necessary for boats to take a zigzag path changing direction by ‘tacking’. ETNZ had also spotted a loophole in the rules allowing the catamarans to sail up on the foils attached to each hull, transforming a water-bound catamaran into a much faster hydrofoil. This is not an easy thing to do, or do consistently, so spotting this opportunity before the Cup started was a serious advantage.

So how did Oracle go from clear disadvantage to romping home in just 9 days? There are three prevailing theories:

  1. Ben Ainslie plus tweaking the boat was the key. Given Ainslie is English, this is naturally the version of events favoured by the UK media (link). However, the scale of improvement is too large for this to be plausible. Grant Dalton, CEO of ETNZ, estimates Oracle improved by over 1.5 minutes in those 9 days, from 50 seconds slower to 50 seconds faster. Based on an average race time of 25 minutes, this is roughly a 7% performance increase – and this from a team of already world-class sailors. It would be a truly remarkable feat of leadership to increase team performance by this much this quickly. And any tweaking of the boat could have been copied by ETNZ. Likelihood: low.
  2. Team NZ choked. This emerged from the NZ sports media (link) during Oracle’s eight straight wins in the second half of the competition. The NZ sports media have been jittery and quite frankly neurotic ever since New Zealand failed to win five Rugby World Cups in a row, until the All Blacks finally succeeded in 2011. The trouble with this theory is that the problem wasn’t that ETNZ’s performance got worse; it was that Oracle’s got so much better. Likelihood: low.
  3. Oracle cheated. This theory runs along the lines that Oracle installed a “stability-assistance system” that enabled the boat to go up on its foils at the touch of a button. Automation is forbidden under the rules which do not allow any equipment powered by stored sources of energy (except for human arms and legs). The source for this appears to be a single NZ journalist although it was widely picked up by other media, and has been hotly denied by Oracle (link). Likelihood: unknown / unproven.

There is a fourth and much more plausible theory based on the use of Big Data, which I like to call the ‘bionic sailor’ theory. The idea is that Oracle gathered and analysed massive amounts of data on their own and ETNZ’s actions and performance during the races, and on atmospheric, wind, and weather conditions, enabling Oracle to reverse-engineer ETNZ’s advantages (which Oracle admitted early in the competition they didn’t understand (link)) as well as provide pre-race analysis. This, combined with a sailor of Ainslie’s calibre who could use the insights to direct the team during the race, the tweaks to the boat, and the time for extra practice, added up to the 1.5-minute improvement.

This theory is based on four factors. First, sailing is a sport based on complex decision-making in a fast-changing environment with multiple variables, so it is highly susceptible to computer analysis. Second, Oracle built “elaborate television production facilities” (link) for the race that could have been used to track and monitor Oracle and ETNZ, as well as provide broadcast coverage. Third, Oracle has ready access to the world’s best database software, data scientists, and software programmers, who could use video and sensor feeds to manipulate immense volumes of data, run major simulations, and constantly improve things for Oracle. And finally, Oracle had much greater funds during the competition than ETNZ, meaning ETNZ had neither the money nor the on-tap computing capability to come back against Oracle. Likelihood: unknown, but the most plausible.

The upshot is that it appears no accident a team sponsored by one of the world’s largest database and software companies pulled off one of the great upsets. And you can guarantee Oracle will have already started improving their capability for the next America’s Cup.

It may be the start of a new frontier where Big Data is used much more extensively to improve sporting performance. And with wearable computing emerging, Big Data may end up turning a number of sports on their heads.

Either way, my advice to Team NZ, and to the Australian Bob Oatley, who has just announced he is challenging for the next America’s Cup, is: don’t even think about competing without a seriously heavyweight team of software engineers and data scientists. Oh, and you’d better throw in some military-grade internet security to avoid any unsportsmanlike hacking.

A great sponsorship opportunity for Microsoft or SAP?