Our Approach

Current Challenge

Our experience of tier 1 financial services organisations is that the way infrastructure has evolved has resulted in complex, tightly-coupled architectures with significant duplication and data quality issues that cannot easily be changed. This has created multiple business problems whose root cause is large amounts of disparate data spread across highly specialised systems, with no ability to view it holistically across system, organisational or geographic boundaries. This is exacerbated by an uncertain regulatory and market environment subjecting these systems to enormous change, and by an increasing need for faster response times.

Traditional Approach

Historically the approach adopted to manage data has been to standardise processes and centralise data, with strategic architecture developed within business silos and technology used as a driver to take advantage of the efficiencies of economies of scale.

Our Approach

We believe that the best approach is the exact opposite: better information and control is achieved through federated data and localised processes adapted to specific end-user situations. The system's functionality should be suitably partitioned into a de-coupled architecture, allowing distinct functionality to be delivered flexibly into different business silos without duplication, using technology as an enabler.

Federated data ownership

A federated approach takes advantage of better local knowledge to enhance data quality.

It is widely recognised that there is real value in local knowledge. From a data perspective this local knowledge translates into better quality data, with quality highest where the data is closest to the end-users who interact with it on a daily basis. If this data is centralised, quality is compromised because the owner no longer feels accountable. This has led us to believe that federated data ownership is the optimal model.

Localised processes

A federated approach takes advantage of better local knowledge to enhance the control environment. In large organisations operating across many different business lines, geographies and regulatory regimes, it is necessary to have sufficient local knowledge and flexibility to adapt processes to meet these local requirements.

Local empowerment also drives more business value, since people locally have the power and flexibility to try something different. The closer you are to the end user, the more innovation there is and the better the customer experience.

De-coupled architecture

Many IT systems have been written and designed to automate business processes. This is a primary cause behind the widely held belief that business benefit is derived from standardising processes to take advantage of the efficiencies provided by the technology designed to automate them. The unintended consequence is to tightly couple the system's functionality, underlying data and business processes, resulting in business processes which cannot be changed without wholesale re-design of the systems. This is compounded by “daisy-chaining” systems, copying data that needs to be used by multiple functions, in a way that makes them highly interdependent and difficult to change without significant co-ordination.

We believe that systems should be designed so that functional components are appropriately “partitioned”: each can be re-used multiple times and provides sufficient flexibility to cater for the specific business problem or task at hand.

Data should be de-coupled from the business processes which operate on it. This allows business processes to be adapted to localised situations and changed easily, without losing the efficiency of being supported by a well-controlled IT system.
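As a minimal sketch of this decoupling in Python (the TradeStore interface, the desk and field names, and the process rule are illustrative assumptions, not a reference to any real system), a stable data-access interface sits beneath a localised process rule, so the rule can change without touching the data layer:

    from typing import Iterable, Protocol

    class TradeStore(Protocol):
        """Stable data-access interface, owned and controlled centrally."""
        def trades_for_desk(self, desk: str) -> Iterable[dict]: ...

    class InMemoryTradeStore:
        """Toy implementation standing in for a governed data source."""
        def __init__(self, trades: list) -> None:
            self._trades = trades
        def trades_for_desk(self, desk: str) -> Iterable[dict]:
            return (t for t in self._trades if t["desk"] == desk)

    def london_approval_process(store: TradeStore) -> list:
        # Localised business rule: it can be adapted or replaced freely
        # because it depends only on the stable data interface above.
        return [t for t in store.trades_for_desk("london-rates")
                if t["notional"] > 10_000_000]

    store = InMemoryTradeStore([{"desk": "london-rates", "notional": 25_000_000}])
    print(london_approval_process(store))

The localised rule and the data layer can then evolve independently: the data remains well controlled while each location owns its own process logic.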

Technology as an enabler not a driver

We believe in focusing on solving the business problem and using technology in the background as an enabler. A good understanding of the business problem allows technology components to be used in a way that takes advantage of their core features and does not stretch them too far from their intended use.

Why does this work?

This approach works because it most closely aligns with human behaviours and utilises technology features which play to the characteristics of those behaviours.

Human Behaviour

People's default behaviour is to secure sufficient local control and flexibility to execute their assigned responsibilities first, with wider corporate responsibilities taking a secondary role, even where good corporate behaviour is highly valued. From an infrastructure perspective that often means building systems to suit local needs and ensuring good quality local data first, without sufficient regard to the implications for other corporate infrastructure or accountability for local data that may be copied into wider corporate warehouses or stores. Using a federated approach to data and allowing localised processes takes advantage of this default human behaviour.

In a large organisation, federated data and localised processes create inconsistencies and duplications, so we also need the ability to question (or query, in our terminology) specific data or activities at a local level, right down to the lowest level of granularity, through a lightweight centralised mechanism. This “trust but verify” approach also takes advantage of another default human behaviour: not wanting to be seen to be acting differently. Using this central mechanism to shine the light of transparency helps to drive out inconsistencies and bad behaviours. “Sunlight is the best disinfectant.”

Allowing federated data and localised processes may also encourage duplication in infrastructure. The key to avoiding this is to deploy technology components with key features that allow the flexibility of a local approach but can be re-used to create a holistic architecture across the organisation.

Technology Features

The key features of technology enablers that facilitate this approach are:

  • plug & play
  • scalability
  • performance
  • resilience
  • security
  • low cost

Plug & Play – We believe in building systems like Lego: the system can be viewed as an accumulation of small functional components, each of which performs a very specific function and specialises in performing it extremely well (a brief sketch follows the list below). The key benefits of this approach are:

  • each component is easy to understand on its own, so if the system “breaks” in that function it is easier to diagnose and fix
  • building an application is simply an accumulation of the components needed to solve a particular real-world problem, so new applications are easy to build from re-usable components
  • side-by-side plug-in deployment is supported (Production and Test plug-ins both accessing production data), enabling rapid user testing and change in days or weeks rather than months
  • an automated, test-based build allows full regression testing of all components from the full test suite on any change, without the need for significant manual User Acceptance Testing
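
To illustrate the “Lego” idea, here is a minimal Python sketch (the Application class, the register/run names and the FX example are assumptions for illustration, not a real framework): each plug-in is a small single-purpose function, an application is just an accumulation of plug-ins, and a Test plug-in can run side by side with a Production one against the same data:

    from dataclasses import dataclass, field
    from typing import Callable, Dict

    Plugin = Callable[[dict], dict]

    @dataclass
    class Application:
        plugins: Dict[str, Plugin] = field(default_factory=dict)
        def register(self, name: str, plugin: Plugin) -> None:
            self.plugins[name] = plugin
        def run(self, record: dict) -> dict:
            # An application is simply the accumulation of its components.
            for plugin in self.plugins.values():
                record = plugin(record)
            return record

    def enrich_with_fx(record: dict) -> dict:        # Production plug-in
        return {**record, "usd_value": record["value"] * 1.27}

    def enrich_with_fx_v2(record: dict) -> dict:     # Test plug-in, deployed side by side
        return {**record, "usd_value": record["value"] * 1.27, "version": "v2"}

    prod = Application(); prod.register("fx", enrich_with_fx)
    test = Application(); test.register("fx", enrich_with_fx_v2)
    record = {"value": 100.0}                        # both read the same production data
    print(prod.run(dict(record)), test.run(dict(record)))

Because each component is this small, it can be understood, tested and regression-tested automatically on its own.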

Scalability – Scalability has to be built in at the design stage of a system. There are constraints which need to be observed with any software or hardware used in building an IT system, but good upfront design and deep knowledge of those constraints enable you to find the best engineering compromise possible. This type of design allows expansion points to be built in around the constraints, for example by permitting dynamic addition or removal of nodes (servers) across locations where additional processing power is required, or by allowing storage to be added dynamically through Storage Area Networks. When dealing with data problems an important feature of scalability is the ability to handle significant volumes of data (“big data”, to use the in-vogue terminology). Our ability to handle big data leverages well known compressed column-store database technology used in astronomy as far back as the 1970s, and also takes advantage of newer approaches such as sharding and storage on NoSQL technologies such as Hadoop, Mongo, Riak etc., being careful to use each technology only where its core strength lies rather than stretching the boundaries of its capabilities.
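
As a hedged illustration of the dynamic-expansion and sharding ideas above (the ShardRouter class and node names are assumptions, not a real product; genuine NoSQL systems also rebalance existing data when nodes are added), a simple hash-based router shows how capacity can be added without a redesign:

    import hashlib

    class ShardRouter:
        """Routes each record to a storage node by a stable hash of its key."""
        def __init__(self, nodes: list) -> None:
            self.nodes = list(nodes)
        def add_node(self, node: str) -> None:
            # Expansion point designed in up front: add capacity as volumes grow
            # (a production system would also rebalance existing data).
            self.nodes.append(node)
        def node_for(self, key: str) -> str:
            digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
            return self.nodes[digest % len(self.nodes)]

    router = ShardRouter(["node-a", "node-b"])
    print(router.node_for("trade-42"))   # routed deterministically
    router.add_node("node-c")            # dynamic addition of a node
    print(router.node_for("trade-42"))   # may move once data is rebalanced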

Performance – When dealing with large volumes of data, the performance of data access is a critical feature. Understanding this constraint in the context of the business problem is essential; armed with that knowledge, a variety of techniques can be deployed to manage within it (a brief sketch follows the list below), including:

  • High performance caching
  • Distributing the performance load appropriately between the central platform and distributed data sources/systems
  • In-memory vector processing
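
A brief sketch of the first and last techniques, assuming NumPy is available (the function, desk and rate names are illustrative only): a cache sits in front of a slow federated source, and the calculation is vectorised in memory rather than looping row by row:

    from functools import lru_cache
    import numpy as np

    @lru_cache(maxsize=128)
    def load_positions(desk: str) -> tuple:
        # Stand-in for an expensive query against a remote, federated source;
        # repeated requests for the same desk are served from the cache.
        return tuple(float(i) for i in range(100_000))

    def total_exposure(desk: str, fx_rate: float) -> float:
        positions = np.asarray(load_positions(desk))
        return float((positions * fx_rate).sum())   # in-memory vector processing

    print(total_exposure("london-rates", 1.27))
    print(total_exposure("london-rates", 1.31))     # second call hits the cache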

Resilience & Lightweight Support – In most organisations today, typically 90% of the focus is on building the functional components and only 10% on meeting the non-functional requirements. The consequence is that a system built this way has extremely high running costs, both in terms of IT support and business operational support. We believe that allocating two-thirds of the initial development cost to non-functional requirements results in a significantly lower ongoing operational cost. However, since IT governance processes do not adequately capture these operational costs in the Return on Investment evaluation at project initiation, there is a skewed bias towards showing a lower initial cost of development that covers only the core functional requirements. Taking shortcuts rarely works: “if you want to get to the moon, climbing a tree gets you going in the right direction, but building a rocket is probably a better approach. A good implementation technology will enable rapid prototyping but won’t do so at the expense of production version development.” (Philip Brittan – http://www.zdnet.com/news/software-projects-roi-balancing-acts/132789)

Security – In large regulated financial services organisations security is a critical requirement, but it can be a minefield to navigate when working across many disparate geographical locations, legal entities and systems. This requires a mature security administration framework that can interface with standard permissioning systems such as Active Directory and that can (a brief sketch follows the list below):

  • adopt a simple approach which automatically inherits the security features and permissioning of the underlying systems or data sources
  • also overlay an additional permissioning layer that can restrict or allow access to data based on specific user access rights
  • apply dynamic data masking locally on federated data sources that are held in geographic locations where data privacy laws prohibit data from being moved
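
The three capabilities above could combine as in the following sketch (the field names, groups and jurisdictions are illustrative assumptions; a real deployment would resolve group membership from Active Directory rather than a dictionary):

    from typing import Optional

    def mask(value: str) -> str:
        return "***" + value[-2:] if len(value) > 2 else "***"

    def read_record(user: dict, source: dict, record: dict) -> Optional[dict]:
        # 1. Inherit the entitlements of the underlying system or data source.
        if source["required_group"] not in user["groups"]:
            return None
        # 2. Overlay layer: additional restrictions based on the user's own rights.
        if record["desk"] not in user["allowed_desks"]:
            return None
        # 3. Dynamic data masking where local privacy law keeps raw values in place.
        if source["jurisdiction"] in user["masked_jurisdictions"]:
            record = {**record, "client_name": mask(record["client_name"])}
        return record

    user = {"groups": {"risk-readers"}, "allowed_desks": {"london-rates"},
            "masked_jurisdictions": {"CH"}}
    source = {"required_group": "risk-readers", "jurisdiction": "CH"}
    print(read_record(user, source, {"desk": "london-rates", "client_name": "Acme AG"}))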

Low cost – Deploy on commodity hardware and ensure efficient storage of data, which significantly reduces the total cost of operation.