Building data-driven organisations: what can we learn from Tech companies?
Around two years ago, I left the World Bank, and with it a career in the public and development sectors focused on strengthening evidence-based policymaking, to move into Data/Research Science in the private technology sector. My aim was to better understand how these organisations collect, organise, and analyse data for decisions, and whether some of those lessons could be brought into the public sector.
During discussions at work and through my research, I often heard that there was a lot to learn about data-driven decision-making from organisations in the ‘tech’ industry, such as Amazon, Google, Facebook, and Apple. But these suggestions often lacked an in-depth understanding of what these technology firms actually did and how. I hoped to fill this gap.
So, what have I learnt?
1. Heavy investment in skills and infrastructure is necessary
Unfortunately, there is no getting away from the fact that building an infrastructure that collects and organises data into an analysable format requires significant investment in both systems and personnel, including:
Software engineers (to build the interfaces, apps, and event streams that eventually become recorded data)
Survey specialists (to design and coordinate the collection of survey data)
Data scientists/data engineers (to build data pipelines that take the vast streams of data and organise them into processable tables for analysis, in real time; a simplified sketch of what this can look like follows this list)
Data scientists/research scientists (to analyse the data, find patterns, build hypotheses, and design experiments appropriate for the measurement infrastructure)
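To make the pipeline idea a little more concrete, below is a minimal, hedged sketch of what turning a raw event stream into an analysable table can look like in its simplest form. The event schema (user_id, event_type, timestamp) and the use of pandas are illustrative assumptions on my part, not a description of any particular firm's infrastructure; real pipelines run on streaming systems at far larger scale.

```python
# Illustrative sketch: turning a raw event stream into an analysable table.
# The event schema (user_id, event_type, timestamp) is hypothetical.
import pandas as pd

raw_events = [
    {"user_id": "u1", "event_type": "view",     "timestamp": "2024-01-01T09:00:00"},
    {"user_id": "u1", "event_type": "purchase", "timestamp": "2024-01-01T09:05:00"},
    {"user_id": "u2", "event_type": "view",     "timestamp": "2024-01-01T10:00:00"},
]

events = pd.DataFrame(raw_events)
events["timestamp"] = pd.to_datetime(events["timestamp"])

# Aggregate the stream into a per-user table that analysts can query directly.
user_table = (
    events.groupby("user_id")
    .agg(
        n_events=("event_type", "size"),
        n_purchases=("event_type", lambda s: (s == "purchase").sum()),
        first_seen=("timestamp", "min"),
        last_seen=("timestamp", "max"),
    )
    .reset_index()
)

print(user_table)
```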
2. Organisational incentives can embed data-driven approaches into day-to-day operations
In most cases, to ‘ship’ a new feature or product in tech firms, teams have to set up a randomised experiment (e.g., an A/B test) and demonstrate that the new feature has a positive causal effect on key metrics (e.g., sales or customer engagement). It is also common to use the outcomes of such experiments to determine performance incentives and compensation for individuals and teams. These experiments are usually quality-checked by a central team that specialises in experiments or has designed the experimentation platform.
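To illustrate the mechanics, here is a minimal sketch of the statistical comparison behind a basic A/B test: a two-proportion z-test on a conversion metric. The numbers, group labels, and the 5% significance threshold are illustrative assumptions, not a description of any specific company's experimentation platform.

```python
# Minimal A/B test sketch: compare conversion rates between a control
# group (current product) and a treatment group (new feature).
# All numbers below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and sample sizes per group
conversions = [620, 680]    # [control, treatment]
visitors = [10_000, 10_000]

# Two-sided z-test for the difference in conversion rates
z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

control_rate = conversions[0] / visitors[0]
treatment_rate = conversions[1] / visitors[1]
lift = treatment_rate - control_rate

print(f"control: {control_rate:.2%}, treatment: {treatment_rate:.2%}")
print(f"absolute lift: {lift:.2%}, p-value: {p_value:.4f}")

# A team might only 'ship' the feature if the lift is positive and the
# result clears a pre-agreed significance threshold (e.g. p < 0.05).
if lift > 0 and p_value < 0.05:
    print("Ship: treatment shows a statistically significant improvement.")
else:
    print("Hold: no reliable evidence of improvement.")
```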
This provides a strong organisational incentive for teams to understand experimentation and the key data around their product (e.g., baseline performance, the distributions of key metrics, the locations that are performing best/worst, and the product components that are performing best/worst). Product teams are also incentivised to focus on features that will have a demonstrable impact on key metrics within the confines of an experiment.
This can be both good and bad. Good, because this will usually mean focusing on problems and changes with the biggest impact. Bad, because this can often mean neglecting problems that might be harder to measure or will not show up in a short-term experiment.
The ‘Bad’ is probably a bigger concern in situations where the big early gains from experimentation have been exhausted, where outcomes may take time to materialise after the treatment, and where there are important hard-to-measure outcomes.
2.1. How can we link this to the public sector (where measurement and incentives are completely different)?
Before moving on to the next item, it is worth acknowledging that this particular strategy is much more complicated in the public sector, where outcomes and productivity are often hard to measure accurately, where outcomes can take months or years to materialise, and where the production function from inputs to outputs is often highly complex (see this report).
While it might not be feasible or advisable to dive straight into data-driven and experimentation-based approaches in this environment, a lot can be done to start building the necessary systems, along with widespread understanding of and interest in these approaches.
Starting with some organisational incentives tied to using (quality) experimentation and data could help initiate a process of building the necessary blocks towards an effective data-driven decision-making infrastructure and culture. These incentives could be ‘soft’ to start and could include reports of the number of ‘quality’ experiments run by teams, the accuracy of the data collected and held by teams, and examples of high-quality experimentation where it is already possible.
3. Having independent data/experiment teams can mitigate against conflicting incentives
Product teams that design their own experiments and key metrics could be tempted to construct them in a way that maximises their performance incentives. This does not necessarily happen in a purposeful or malicious way; it can occur implicitly through the many small decisions made during the design process, where researchers have degrees of freedom.
An effective way to guard against this is to have independent teams that: (i) help design and sign off the experiments and metrics put forward by product teams; and (ii) are incentivised differently, i.e., not by the same metrics as product teams, such as the number of products shipped.
In the public sector, given the challenging measurement environment described above, it is likely to be even more important to have such centralised and independent teams so that more objective approaches to data-driven decision-making are established.
Wrap-up
These three factors are the first few major areas that stood out to me when reflecting on my experiences of data-driven decision-making in the public and private sectors. But I acknowledge that this is a highly complex space with many nuances and challenges! I hope to explore more of these in future posts…