Why does Bellrock Technology exist?

The world of data science, machine learning and artificial intelligence has advanced enormously. Models are being developed across a huge range of areas to help organisations make better decisions through intelligent insights. And yet deploying these models into your organisation is still extremely hard.

Data science delivery doesn’t need to be so difficult

The Challenge

You hire a data science team or consultancy to develop models for a range of use cases. You might even ask your existing teams to contribute their knowledge of how the data should be interpreted. Now what?

It is difficult to convert these data science models into business applications that drive better decision making. Worse still, this difficulty is often underestimated or even ignored. As a result, the majority of data science projects fail to deliver value, with 4 out of 5 or more never making it into production.

There are many reasons for this.


First, you need to get your models running. That means thinking about infrastructure.

  1. Will you run on-premise or in the cloud?
  2. On public cloud or private cloud?
  3. Whichever option you choose, how will you deploy new software in a reliable and repeatable way?


The models produced by your data science teams may even need to be re-written in a software language compatible with your infrastructure before they can be deployed. Does your team have the skills or time to achieve this?


Many platforms offer one-click deployment to the cloud, where models can be accessed via APIs (Application Programming Interfaces). But to provide insights, your models need to run constantly on streams of live data. Linking your data with these models has traditionally required significant software engineering before they can run for real.

Data Silos

What’s more, your data may be siloed across multiple systems, held in multiple formats and owned by multiple people. This complicates the software engineering task even further.


Then data security needs to be considered, particularly if the models are going to run in the cloud. How can you be certain that your data or systems won't be at risk if you expose them to new software? This is especially true when that software is still being validated, as is often the case with data science models that are being developed, trained, tested and improved.

Data pipelines

As models are improved and new stages are added to model pipelines, you also need to cope with change. This is a challenge for any software application, but is particularly true for data-driven systems. Changes and additions are regular events as greater understanding is gained of how to best interpret the data under investigation. But integrating models manually, even when using software development best practices such as APIs and continuous integration, can build up enormous technical debt. Each addition can make future changes exponentially more difficult and risky to the stability of the overall system.

Delivering impact

If, after all this, you can get models to run, consume streaming data and produce live insights, these still need to be delivered to business users if they are to have an impact. Who is the audience, and how do you deliver the results in a simple way that drives improved decisions?

Return on investment

Hiring your data science team may have represented a significant investment. But this could be dwarfed by the additional investment needed to support it. Further teams of Data Engineers, ModelOp Engineers, Software Engineers, Application Security Engineers, Solutions Architects and Business Intelligence Specialists could all be required if you are to see returns on your investment. And even with these teams, can they deliver quickly enough to meet your timescales?

This is where Lumen comes in.

4 out of 5 data science projects fail to make it into production

It takes on average 180 days to deliver a model into production

It takes on average a team of 6 people to deploy a model into production

The Solution

Lumen has provided a more complete and automated data science delivery solution since Bellrock Technology was founded in 2012.

It reduces the time and cost of monetising data science by making it possible to combine models and deliver them to business users as ready-to-use software applications, without the need for a software development team. Lumen deploys your models, finds and links them with relevant production data, and lets you configure apps and dashboards to share results.

Lightweight adapters can be provided to connect to any centralised or distributed sources of data, from legacy systems to modern data lakes. And APIs mean results can be fed into any pre-existing business intelligence solution.

All this means you can focus on improving your models and generating results.

Our Journey

Bellrock Technology’s story began in 2009 when a team at the University of Strathclyde began developing an artificial intelligence (AI) based solution to the challenges of delivering data science research. The University works with many industrial partners and the team became frustrated that its models, often commissioned at great expense, could not be more easily delivered to help improve day-to-day decision making.

That artificial intelligence based solution became Lumen. Bellrock Technology spun out of the University in 2012 and has been working with commercial organisations to deliver data science faster and more efficiently ever since.

Next Steps