The life of Brian: Measurelab’s generative AI journey

I recently had the pleasure of attending the AI Summit London 24, which was chock-full of fascinating talks, opinions and demos around AI. I heard from companies across many industries who were all facing the same question: “OK, it ain’t going anywhere, so how do we keep pace and adapt?”. It had me reminiscing about how we put our own structures and policies in place around generative AI, and how our exploration of the technology led us to create Brian, our very own generative AI assistant!

A realisation

Let’s take a trip back to the distant, distant past of November 2022, when GPT-3.5 appeared on the scene and captured the public imagination. We of course had a play, commented on how impressive it all was and got on with our lives. Little did we know that was just a jab; the right hook was heading our way in March 2023, when GPT-4 appeared and really blew us away. It could produce usable code and more compelling writing, hallucinated much less frequently, and started to steal a few questions away from our precious Google.

When, a few months later, the first signs of simple ‘agents’ surfaced with the introduction of GPTs, we descended into a full-on existential crisis. It wasn’t just what the models could do, it was the pace of change. In less than a year we had gone from technologically impressive novelty to the threat of autonomous AI ‘agents’.

How, we thought, do you safely keep up with all this, and more importantly, how does Measurelab? We all have plates full of impactful work we are delivering for clients; are we also supposed to be learning everything that is being fired at us point blank from the proverbial AI cannon? Can we ignore it? Can we afford not to be gobbling up every bit of information on the subject?

The genesis of Brian

All of these questions led us to write what I would describe as a bit of an internal ‘call to arms’ entitled ‘The AI Revolution & Measurelab’ (I have a flair for the dramatic). This was an honest and frank look at what we do, how we do it and how all of that could be affected by generative AI. There was a fair amount of crystal ball gazing but the conclusion was, not only could everything we do be touched by generative AI, it almost certainly would be in time. If we allowed ourselves to shut out the ‘noise’ because of overwhelm, we would almost certainly be left behind. The age of the pre-generative AI consultancy was waning.

So we began to think, what does consultancy with AI at its core look like? 

  • It would have clear guidelines on generative AI and its use
  • It would have an understanding of the nuts and bolts of the technology
  • Its employees would be empowered to augment themselves with new solutions
  • It would have buy-in from everyone involved.

Simple as that eh?

In pursuit of our ambitious goals, we decided that an internal platform for generative AI use, rules, automations, prompts and assistants was a good approach. It would not only give us a single hymn sheet from which we could all sing, but building it would force exploration and learning in an ever-advancing field. It would provide a central place for new ideas and automations and, most importantly, it would be cute as a button.

Enter Measurebot…

Later renamed Measurebrain…

Frequently misspelled by my dyslexia as Measurebrian

Finally and thankfully renamed Brian

Logo for Brian, Measurelab's internal generative AI assistant

Development Journey

Brian has made his way through a number of iterations, each stage reflecting new knowledge and understanding, and each unlocking more power and potential. As stated above, our aim here is not to stand still once we have a v1.0. We wanted to use the platform as a catalyst for learning and adoption, so it needed to stay up to date with the latest advances. We must never rest on our laurels.

Version 1: one-model Brian

The very first version of Brian was built with Python and Flask, and interacted directly with the OpenAI API. It was more complex in structure yet lighter on features than later versions, but that is learning for you!

Version 1 had a single model (gpt-4) and a single assistant (a prompted flavour of that model).

Very quickly it allowed us to centralise our use of generative AI models. This was critical if we wanted to move at pace while staying sure of governance and security. It also acted as an introduction to the technology for many within the company.
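
For a sense of how simple that first version was, here is a rough sketch of the pattern: a small Flask app passing chat messages straight to the OpenAI API with a single system prompt. This is an illustrative reconstruction rather than Brian's actual code; the endpoint, prompt and structure are assumptions.

```python
# Illustrative sketch only: a minimal Flask endpoint proxying chat requests to the
# OpenAI API, roughly the shape of a v1-style assistant. Names are assumptions.
import os

from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# A single 'assistant' in v1 terms: one system prompt wrapped around one model.
SYSTEM_PROMPT = "You are Brian, Measurelab's helpful internal assistant."

@app.post("/chat")
def chat():
    # Expects {"messages": [{"role": "user", "content": "..."}]}
    messages = request.get_json()["messages"]
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + messages,
    )
    return jsonify({"reply": response.choices[0].message.content})

if __name__ == "__main__":
    app.run(debug=True)
```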

Version 2: automation and a generative AI one-stop shop

As our use of the technology and our knowledge of its inner workings grew, so did the platform. Spaces were created for users to upload and share prompts and resources, and to set ‘quests’ for processes to be automated or augmented with generative AI.

The number of models grew, with gpt-3.5, gpt-4 and DALL-E making appearances. We also saw the birth of new assistants: pre-prompted flavours of Brian that help solve specific problems or work in specific ways (think GPTs).
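
In practice, an ‘assistant’ in this sense is little more than a system prompt layered over the same underlying model. A tiny, purely illustrative sketch (the assistant names and prompts below are invented, not Brian's real ones):

```python
# Illustrative only: 'assistants' as pre-prompted flavours of one base model.
ASSISTANTS = {
    "analyst": "You are a careful analytics consultant. Explain your reasoning.",
    "copywriter": "You write concise, friendly marketing copy in UK English.",
}

def build_messages(assistant: str, user_message: str) -> list[dict]:
    """Prepend the chosen assistant's system prompt to the user's message."""
    return [
        {"role": "system", "content": ASSISTANTS[assistant]},
        {"role": "user", "content": user_message},
    ]
```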

Here we hit our stride with what it could and could not do, unlocked new levels of collaboration and set out our ‘AI Redlines’. These redlines were guardrails designed to ensure governance and fair use while maintaining the freedom to explore.

Image of an early version of the Brian user interface, showing the chat box, conversation history and the Chat, Prompts, Quest board, Tools and Redlines sections

Version 3: rip it up and start again

By this time we had begun to really explore and deliver on some generative AI based projects that gave us new knowledge and insight. To that end, barely six months after Brian came to be, he was rebuilt from the ground up. Gone was Flask; in came Streamlit, a lightweight Python framework primarily used for data visualisation. Gone was direct communication with OpenAI; in came LangChain, a framework for interacting with generative AI that opened up mountains of new opportunities.

All of this meant we now had access to models from multiple companies: Anthropic, OpenAI and Google. It meant we could build our own tools and access a wealth of community-built tools, giving Brian new and exciting capabilities (search, visualisation, API connections). In came streaming of responses for the first time, making interaction with Brian much more conversational and satisfying.
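
To make the shape of v3 concrete, here is a minimal sketch of a Streamlit chat app wired to LangChain chat models from two providers, with streamed responses. The model names, layout and session handling are illustrative assumptions rather than Brian's actual implementation (and API keys are expected via environment variables).

```python
# Illustrative sketch of the v3 shape: Streamlit UI + LangChain chat models from
# more than one provider, with streamed responses. Model names are examples only.
import streamlit as st
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

MODELS = {
    "gpt-4o": ChatOpenAI(model="gpt-4o"),
    "claude-3.5-sonnet": ChatAnthropic(model="claude-3-5-sonnet-latest"),
}

st.title("Brian-style chat")
choice = st.selectbox("Model", list(MODELS))

if "history" not in st.session_state:
    st.session_state.history = []  # list of (role, content) tuples

# Replay the conversation so far.
for role, content in st.session_state.history:
    st.chat_message(role).write(content)

if prompt := st.chat_input("Ask away"):
    st.session_state.history.append(("user", prompt))
    st.chat_message("user").write(prompt)

    llm = MODELS[choice]
    with st.chat_message("assistant"):
        # Stream tokens to the UI as they arrive rather than waiting for the full reply.
        reply = st.write_stream(chunk.content for chunk in llm.stream(prompt))
    st.session_state.history.append(("assistant", reply))
```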

Through all of this development we flexed our GCP muscles, using Google Cloud services to manage much of the underlying infrastructure.

Image of the latest version of the Brian user interface, showing a chat with LLM-generated Python code for a Flask application

Beyond Brian

Aside from internal adoption and augmentation, Brian has helped us realise our goal of understanding how to apply the core technology. This has allowed us to deliver a number of projects, both internally and externally, that leveraged Brian itself and the underlying technology to solve problems in other ways. We have developed automated communication updates, connections to databases for analysis, visualisation generation engines, schema generation solutions, internal workflow automations, drafting assistants, transformation processes… I could go on and on.

One point I want to stress is that I am not advocating for wholesale automation without thought. While it is true that most things can and will be touched by the technology, it is up to you how you wield it. Decide what to augment, what to automate and what to leave alone. It’s crucial to remember that technology is a tool to complement human intelligence, not replace it. While Brian and its successors can handle a lot of heavy lifting, the strategic thinking, empathy, and ingenuity of our team remain irreplaceable.

It is also important to state that this has not been a flawless experiment. To say we have 100% adoption and augmentation would be a falsehood, but that is part of an ongoing, fast-paced journey and there is still lots to learn. We are of course excited to pass all of our learnings on to our clients and the community at large.

So, as we look beyond Brian, the journey is as much about evolving our understanding and capabilities as it is about the technology itself. We are excited for what the future holds and are confident that by embracing AI, we will continue to pioneer innovative solutions, stay ahead of the curve, and deliver unparalleled value to our clients.

If you have any questions around generative AI, its adoption, or anything else, please get in touch.

Written by Matthew, approved by Brian

Written by

Matthew is Head of Engineering and Technology at Measurelab and loves solving complex problems with code, cloud technology and data. Outside of analytics, he enjoys playing computer games, woodworking and spending time with his young family.
