About Corporate Value Systems

First of all, I want to apologise for two things. The first is that I have not written much for a while. 2018 has been a tumultuous year on the family front so far, and with a big hairy project in full deployment at work, I have simply not had the time to write or work on my data ideas. Then, I have also been distracted by shiny things: building a robot and working on a simulation of an artificial mind. Be reassured – the former simply wanders about under my control and bumps into furniture, and the latter is still a mind map which I use to model human behaviours such as drive, sociopathy and depression. So Skynet is not happening at my house. Yet.

The second thing I want to apologise for is that this post is not about data at all; it is about corporate value systems. This is because I have recently seen some irritating, sycophantic posts on LinkedIn, with people sharing great words of wisdom from CEOs and other industry luminaries, such as:

“Always behave with integrity”

and

“Be true to your values”.

Great words indeed, and statements of the blinding obvious. I’m sure they tick the ‘I’ve given great leadership there’ box. I wish I had the attributes of a great CEO so that I could come out with a phrase like this and have it shared on social media, as if I were the Dalai Lama or whatever. I’d probably say ‘Try not to be an arsehole’; that’s advice more people should follow.

Anyway, back to corporate value systems. These have replaced the ‘Mission Statement’ that became prevalent in the 80s and 90s, whose purpose was to focus your workforce and your management on a single, pretty obvious goal.

You may be preparing yourself for a long cynical rant about value systems, but I actually think they are a great improvement on the mission statement: instead of a single goal, they provide a behavioural framework that should, if well designed, implemented and followed up, help an organisation adapt to rapid change (an evil of our times) whilst not losing sight of what is important in most human endeavours.

Did I mention the word ‘sycophantic’ above? Well, I’m afraid I could be accused of this myself, as I am about to praise the value system that my employers came up with. I won’t tell you what it is; it may be copyrighted, or it may give others a competitive edge.

But it is one that I remember well, and which I use to evaluate my decisions, communications and interactions with my managers, colleagues and customers. It doesn’t ask the impossible, it does not ask you to become a different person, and its tenets are attainable by all who participate.

What is this value system made of? It consists of five attributes (of course, we’re an Analytics company), each with one or two statements that give it context and guidance. In communications from the Executive team downwards, initiatives are qualified in terms of one or more of these core tenets. It’s interesting to note that in conversations with my colleagues we often intersperse these key words in what we say, and while we might find this amusing, it also influences how we say and do things.

Of course, value systems are nothing new – simply consider the 10 Commandments, or the Scout Promise, and you’ll agree that they’ve been around for a while. You will also agree that frequently, the tenets of these value systems are at best ignored and at worst violated or perverted to generate some pretty egregious behaviours. Thus, after a while these systems go stale until a fresh face or voice gives them a cleanup and reinvents them.

What does it matter if it’s not a new idea? It’s still a great one. Corporations frequently have innovation spasms, the outcomes of which are often dropped and forgotten, or replaced by the next shiny new concept.

But in my company’s case, I hope this particular initiative sticks around, and that our leaders do not tire of it. I think they’ve got the balance right there, and I thank them for it.

The Reality Gradient

Analytics is the particular branch of IT that I exist in. Here, change is the norm and technologies are rapidly evolving.

This means that a project may well start in a particular shape, and end up in a completely different one. And this may be perfectly acceptable to your client, which is an interesting paradox if you compare analytics projects to, say, a home improvement project. If I am having my kitchen changed, I’d quite like it to be a kitchen at the end of the project. But that is not necessarily the case with an analytics project.

Why is this? Well, to continue with the kitchen analogy, we’ve been cooking food for almost as long as we have existed as a species. Analytics, on the other hand, is forever new, with knowledge, skills and outcomes that still have to be discovered and implemented.

I have noticed, over the course of many – many – such projects, that the implemented vision is frequently different from the planned one. This is because, through the pretty amazing tools provided by my esteemed employers, new things about the enterprise are learned during the data discovery process, and business priorities may change over time. It then becomes important to seize opportunities and adjust the business case, and thus the outcome.

This difference between the vision that underpins a project at the start and the end result is what I call the Reality Gradient. It is a key concept from a consulting point of view, and it can be expressed as a measure. As an analytics consultant, you will need to be the lead or a key participant in steering the project and the client through uncharted waters. Defining the gradient as a measure can be as simple as a ratio of initial expectations matched against project outcomes: the fewer the matches, the steeper the gradient.

As with most measures, this one means little on its own. In this case, you also need a measure describing project success. This is more qualitative, in that it matches benefit expectations at the start (the business case) with benefit outcomes (if any).

With these two measures, you have the means to learn quite a bit about the people, the enterprise and the methods employed during the course of the project. Put simply, a high-gradient, high-success project indicates agility and opportunism, while a low-gradient, high-success outcome validates the estimating and project leadership, and indicates pre-existing knowledge (or immense luck).
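As a minimal sketch, assuming initial expectations and delivered outcomes can be listed as simple items and project success scored between 0 and 1 (the names, example data and 0.5 threshold below are purely illustrative assumptions, not part of any formal method), the gradient and its reading might look like this:

```python
# Illustrative sketch only: computes a Reality Gradient as the share of
# initial expectations that did not survive into the delivered outcome,
# then combines it with a success score as described above.

def reality_gradient(initial_expectations, delivered_outcomes):
    """0.0 = every expectation matched an outcome; 1.0 = none did."""
    matched = sum(1 for item in initial_expectations if item in delivered_outcomes)
    return 1 - matched / len(initial_expectations)

def read_project(gradient, success, threshold=0.5):
    """A rough reading of the two measures taken together."""
    if success >= threshold:
        if gradient >= threshold:
            return "high gradient, high success: agility and opportunism"
        return "low gradient, high success: sound estimating, leadership or prior knowledge"
    return "low success: worth a post-mortem, whatever the gradient"

# Hypothetical example data
expectations = ["single customer view", "daily sales dashboard", "churn model"]
outcomes = ["daily sales dashboard", "campaign attribution dashboard"]

g = reality_gradient(expectations, outcomes)
print(f"Reality Gradient: {g:.2f} -> {read_project(g, success=0.8)}")
```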

If we accept that a Reality Gradient is inevitable, then we need to make sure that the journey is survivable. It is another paradox of agile methods that they are supported by very basic skills, even if these are applied in new ways. It’s worth a brief excursion into one of these core skills: project management.

It is my experience that, across the many technologies and methodologies I have encountered, a key factor in low-outcome projects has been a project management deficit. This can take two forms: quantitative, where there is simply not enough project management, and qualitative, where there is project management but it is not the right kind.

This can be a difficult issue for a consultant. Project success is an important career element and being affected by project management shortfalls is simply not welcome.

My advice to consultants is to learn the key fundamentals of project management, and to assess where they are lacking on a project. These fundamentals are:

  • Business case
  • Communications
  • Planning
  • Status recording and reporting
  • Risks, Assumptions, Issues and Dependencies (RAID)

Each of these points deserves more description. But knowing whether they exist, and whether they are fit for purpose, is in my view a vital project survival strategy.
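As a minimal sketch of such a survival check, assuming each fundamental can be rated for presence and fitness for purpose (the structure and the example ratings are illustrative assumptions, not a prescribed method):

```python
# Illustrative sketch only: flags gaps in the five project management
# fundamentals listed above.

FUNDAMENTALS = [
    "Business case",
    "Communications",
    "Planning",
    "Status recording and reporting",
    "RAID (Risks, Assumptions, Issues, Dependencies)",
]

def assess(project_state):
    """project_state maps each fundamental to (exists, fit_for_purpose)."""
    gaps = []
    for item in FUNDAMENTALS:
        exists, fit = project_state.get(item, (False, False))
        if not exists:
            gaps.append(f"{item}: missing")
        elif not fit:
            gaps.append(f"{item}: present but not fit for purpose")
    return gaps or ["All fundamentals present and fit for purpose"]

# Hypothetical project snapshot
example = {
    "Business case": (True, True),
    "Communications": (True, False),
    "Planning": (True, True),
    "Status recording and reporting": (False, False),
    "RAID (Risks, Assumptions, Issues, Dependencies)": (True, True),
}
print("\n".join(assess(example)))
```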

Consulting organisations will have different ways of supplying these skills to colleagues.

Of all the points described above, the least important is planning. I can feel the sharp intake of breath, but you can plan a failure if you neglect any – any – of the other points.

Plans are key for estimating and mobilising and should be as detailed as possible. Once the project has kicked off, however, you should work to the project milestones, but you should simply record what is happening rather than spending ages keeping the plan consistent. I’ve found it acceptable to review and adjust the high-level plan at relevant intervals, providing I have a good record of events and decisions made (see status reporting).

There is a valid reason for this, and I am pleased to loop back to my Reality Gradient: the analytics projects you work on will yield new data and new ways of addressing the business case, requiring changes of course. Unless you have the luxury of a project administrator, you will not be able to revise the plan in detail every time an opportunity presents itself.

And in any case, unless this is your engagement role, you are a consultant and not a project manager. But you will be impacted if the fundamentals are not adequate.

Briefly, about status reporting: it is essential for all consultants on projects to have basic skills in this respect. Progress on tasks, issues and risks should be captured and aggregated in a regular report, which will then track progress against milestones and objectives and document changes in scope. It is also essential that this status report be reviewed with the client on a regular basis.

Communications are self-evident: regular standup meetings, clear communication lines, a defined escalation process and good status reporting are all aspects that should be examined.

The business case may change over time. When it does, the project must follow. Meeting business case expectations is of strategic importance, as it may well be the gateway to further projects and benefits. Understanding this, and knowing when and why it changes, means that delivery stays focused on the current goals.

And then there’s RAID. It’s worth spending time evaluating risks, as a few of them have a chance of turning into issues. Communicating these risks to the client is key, as you can then work cooperatively to mitigate the risk and resolve any resulting issue. Assumptions are rather harder to list because, being assumptions, you are tempted to take them for granted; here, experience is key, as you will remember the assumptions that bit you hard in previous projects. Issues we all know about – tracking and resolving them is a necessary, if burdensome, task. Dependencies, again, rely on experience. Frankly, I won’t go on too much about this; there’s plenty to read and learn in books and online.

All these are fine project management disciplines. Not rocket science, but the key is in the word discipline – these items must be present in any project, if not provided by you, the consultant, then by someone else in the team.

But above all, you must be able to understand, manage and communicate the Reality Gradient. Doing so requires transparency, trust and quality interactions with your customer, with regular evaluations of the direction a project is taking. You neglect this at your peril.


MicroStrategy World 2017 – Impressions

This week I had the privilege of travelling to Washington DC for MicroStrategy’s World 2017 conference, to present two sessions and talk to customers about pointy technical topics.

MicroStrategy fielded a vast programme, most of which will be available online shortly. I will not, in this post, go into any detail about the content of the conference. Rather, I will describe, aided by my own pictures, my impressions of the trip, the people I met and some of the conversations I had.

But the journey has to start somewhere:

Heathrow, Terminal 2 – 7 AM, Tuesday 18th April

Slightly wary of United Airlines, I board United 123 for Washington Dulles. The flight leaves on time, and proves to be relaxed and uneventful.

On my way

National Harbor

A nice day

They might know a thing or two about our software.

The Great Engineers

The conference gets under way

Anticipation buildup

Colleagues

European encounter

Safe harbor

We’re going to see new things!

Party Time!

Incredible

Between Sessions

Inspiring view

Arcology

The Gaylord – awe-inspiring

Presenting in the ballroom

Presenting is a performance art

And then home.

Sunset with thunderstorm at Dulles Airport

Of Consulting and Corridors

Belgo corridor in Montreal, courtesy of Kalina B.

This is not at all about consulting, beyond the fact that the key subject here – corridors – came up as a conversation topic during one of the all too rare gatherings of our consulting team.

As consultants, we tend to see a lot of corridors in our travels. I am sharing some pictures here, from my colleagues and myself, that we exchanged following that conversation.

Not all corridors are encountered in the course of our work. But the perspectives they offer are sometimes remarkable.

Enjoy!

The Shuttle, Channel Tunnel
Golden Jubilee Hospital, Glasgow
Offices, Milton Keynes
Corbie Hotel in Geel, Belgium, courtesy of John D.
Wedding Picture, courtesy of Kamil Z.
Lynebank Hospital, Dunfermline, Scotland. Great for wheelchair racing 🙂

More pictures to come… but on the subject of corridors, if you’re a science fiction fan you might want to read Eon, followed by Eternity, by Greg Bear. There’s a very interesting example of a corridor in there. Trust me.

From Exploration to Exploitation – 1: Investigating lifecycles for sustainable velocity

Information wants to be free – but an enterprise needs it to be a life-giving stream of sustaining insight, rather than a thin trickle of stale data or, worse, a tsunami of garbage. Maintaining a robust, innovative and flexible ecosystem in an ever-increasing whirlpool of data is a daunting challenge. Just as software development has moved away from waterfall methodologies to agile practices, an enterprise business intelligence system has to introduce velocity whilst preserving veracity. It is clear that the traditional DEV/TEST/PROD lifecycle is no longer the whole solution, but what does the alternative look like? This article describes the evolution of business intelligence lifecycles and prepares the ground for a further study of implemented practices.

Having your cake and eating it

System-of-record outputs must be truthful and resilient if they are used for regulatory purposes, or if they are part of mission-critical business processes. Yet timely, even volatile, insights are also key to fine-tuning the steering of a business process – think of a mobile phone company needing to know the take-up of a new tariff or device, or a bank modelling the impact of new risk regulations on its portfolio. This information is needed now, not three to six months down the line after a laborious waterfall process involving many separate, and thus siloed, teams.

[Diagram: velocity versus veracity]

That problem is solved with modern tools allowing governed data discovery and process differentiation between system-of-record outputs and ad-hoc or exploratory products. If your current system is not capable of doing this, you need to ask yourself why…

Yet, as always, the world does not stand still. You congratulate yourself on achieving a governed data discovery solution, and here comes the data lake!

This throws up a completely new challenge, because you want to avoid a proliferation of exploration and exploitation tools, and you also want to keep a grip on the potential explosion of new applications. From my perspective, I’ve heard about data lakes and Big Data for quite a few years now – but we’re now encountering them with increasing frequency. So how do we handle them?

What’s the data lake for?

I’m hoping that the data lake is a familiar concept for all – there’s been enough written about the subject. The best question to ask is: what is it used for? I’ve seen two broad use cases so far: genuine exploration of colossal amounts of unstructured data, and a replacement for data warehousing appliances.

The first case is about shoving pretty much any data into the lake and using tools and processes to make sense of it. The second proposes that storing colossal data warehouses is more cost-effective on Hadoop technologies than on more traditional large-scale solutions. You’d be correct in thinking that a data lake can address both use cases, but you’ll need to resolve the veracity and velocity gradients inherent in them: exploration is done by few, using unpredictable and intensive processes, yielding insights and results which may be volatile; exploitation, the second case, is used by many and requires resilience and veracity enforced by governance.

Where is it going?

I don’t have the answer – yet. The installations I have seen are still in their infancy, and exploration is not simply limited to the data, but also to the processes and governance that have to be developed if a smooth and repeatable transition from exploration to exploitation is to be achieved. What I will try to do in this article is to map the evolution from highly governed implementations to those I see emerging today, with governed discovery and data blending between systems of record and exploratory data lakes.

Business Intelligence system evolution

In the beginning: The traditional setup

[Diagram: the traditional DEV/TEST/PROD setup]

This environment provides consumers with highly governed outputs. Change control and governance are strongly enforced – new developments go through extensive testing before they reach end users. Robustness and resilience are the strong points of such environments; agility and velocity are the weak points. New data or functionality takes so long to arrive that end users declare independence and branch off onto tools outside the corporate-mandated toolset. Not surprisingly, such systems are getting increasingly rare these days.

Reluctantly, some freedom for privileged end users

[Diagram: analysts given freedom to build on central data]

Here analysts are given freedom to develop their own offerings but these are all based on central data. This allows for different versions of reports and dashboards, but does not address the need to rapidly model new data and exploit it. Resilience and robustness are preserved. Agility is introduced, with a small risk of divergence from the single version of the truth. Users will still work outside the system on new, volatile data. Governance becomes more complex as many new reports, and versions of those, proliferate within end-user folders.

Today: Freedom expands at a fast pace with new data and new challenges

[Diagram: super-users blending external data with system-of-record data]

Today we can add external data to the mix, and a new type of user (the Super-user) is empowered to import new data and blend this with system of record data. This brings into the enterprise solution the previously delinquent users who employed other tools to get results. This also increases velocity, but introduces a veracity gradient if the offerings from the super-users, based on blended data, start diverging from the governed and curated corporate data. Kite-marking ensures that outputs from the system-of-record process are recognised and differentiated from the ad-hoc, agile offerings.

Your strategic toolset should provide you with the necessary functions to identify ad-hoc, agile offerings that start to scale and become part of key business processes. These offerings are then prime candidates for being fed back into the system-of-record process loop, as they can be industrialised and made resilient for the larger consumer community. This also ensures that a solution does not become dependent on one individual – the creator – but can instead be supported and maintained by the developers and the admin teams that look after the system-of-record process stack.

Another feature of this environment is that the load on the production servers becomes less predictable. As development and application architecture become less centralised, your enterprise tool must have the capability to govern and scale up in a safe manner. Whilst the development bottleneck is reduced, the administrators will have new tasks in identifying and restricting the resources that these new users can employ. This restriction may be an issue, and still cause point solutions to be developed outside of the enterprise solution.

Such systems, rendered possible by advances in the best enterprise business intelligence tools, are becoming increasingly common.

And then: Big data happens

[Diagram: the data lake joins the environment]

As stated in the introduction to this article, in some cases the data lake is used as a traditional data source and thus becomes tied to the core system-of-record process. It’s when it’s used for exploration that yet another cohort of users, the data scientists, can use the enterprise tool to launch exploratory queries and gather new insights. What’s happening now is that we have, in addition to the traditional dev/test/prod lifecycle axis (the exploitation axis), another axis for exploration, as shown in the diagram below:

[Diagram: the exploitation and exploration axes]

Our consumers are now at the confluence of two types of output: system-of-record offerings, strong on veracity and engineered to scale, and exploration offerings, high in velocity but not necessarily scalable or resilient. This poses a challenge, because our consumers range from shop, branch and store users all the way to executives. Our offerings therefore need to come with a quality rating, so that the end user understands how an insight was produced and can tell the difference between high-velocity, high-volatility outputs and the system-of-record items that have been through full engineering and resilience rigours.

Volatility is the key concept here. It relates to the persistence of an offering. If it is transient, needed for a short period of time only, it should be treated as such and not much effort should be put into making it scalable and/or resilient. Conversely, if an exploration-originated offering starts to be used by many people, and becomes an essential part of key business processes, then it must be integrated in the system-of-record domain by shifting it from the exploration axis to the exploitation axis.

[Diagram: the persistence assessment process]

Your enterprise solution should provide you with the necessary tools to identify offerings that are used frequently and the relevant consumer cohorts. You should then set thresholds by which a decision is made to take the exploration offering and send it through the exploitation process to make the offering scalable and resilient. These actions are represented in the diagram above as the ‘persistence assessment process’.
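As a minimal sketch of such a threshold, assuming your platform can report distinct weekly users and how long an offering has been in use (the Offering structure, numbers and thresholds are illustrative assumptions, not a feature of any particular product):

```python
# Illustrative sketch only: a simple persistence assessment deciding whether
# an exploration offering should be promoted to the exploitation axis.

from dataclasses import dataclass

@dataclass
class Offering:
    name: str
    weekly_users: int      # distinct consumers over the last week
    weeks_in_use: int      # how long the offering has persisted
    in_key_process: bool   # does it feed a key business process?

def should_industrialise(o: Offering, min_users: int = 25, min_weeks: int = 8) -> bool:
    """Promote when the offering is no longer transient: widely and persistently
    used, or already embedded in a key business process."""
    widely_used = o.weekly_users >= min_users and o.weeks_in_use >= min_weeks
    return widely_used or o.in_key_process

candidates = [
    Offering("churn explorer", weekly_users=40, weeks_in_use=12, in_key_process=False),
    Offering("one-off tariff analysis", weekly_users=3, weeks_in_use=2, in_key_process=False),
]
for o in candidates:
    verdict = "industrialise" if should_industrialise(o) else "leave on the exploration axis"
    print(f"{o.name}: {verdict}")
```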

What’s next?

Observation and learning

The last diagram shows where some of our customers’ systems are today. This represents a rapid departure from lifecycle orthodoxy, and requires new processes for governing and administering the system. Administrators will need to monitor the load on the production systems and provide the information necessary to identify offerings to be industrialised. Development will be devolved, in that data scientists and super-users will create the first drafts of new applications based on production data. The task for traditional developers should be simplified, and they should become more productive as requirements are better understood.

New system topologies

The sacrosanct production environment will still exist, but it may be cloned to support the exploration process. This mitigates the risk that intensive exploration and implementation pose to the production environment’s stability. It may increase the administrative workload, so you will need to make good use of all the helper tools offered by your enterprise solution.

Free flow of ideas

As an interesting historical analogy, it is now widely accepted that the Industrial Revolution took place in the United Kingdom rather than, say, France not because of any lack of scientific and technical competence – France had, in the 18th century, a huge cohort of world-changing scientists and innovators – but because of the free flow of ideas and a loosening of central governance, supported by enlightened leadership. France centralised everything and took forever to release new knowledge, whereas British entrepreneurs simply got on with it.

It may be a bit of a stretch to compare an enterprise business intelligence system to a country – but you do notice the harm done by sclerotic processes to innovation and the sharing of information.

Keep the lid on

Conversely, you can also judge the effect of a proliferation of false information, as events around the US election and the EU referendum in 2016 have shown. This leads to uncertainty and mistrust, which are not desirable in an enterprise setting.

This highlights the importance of a good governance framework, and of educating the consumer to rate the veracity of system outputs based on their source.

And finally…

This is very tantalising, but you might well ask how this Business Intelligence utopia can be achieved. Right now, some of our customers are setting off on this journey – so over time I hope to be able to revisit the topics shown in this article, and maybe share some good practices and highlight some bad ones.

Until then, try not to drown in your data lake!