My family and I have recently moved house. I’m at that classic time in life when needs are changing: a bigger garden for the kids, a spare room for the parents to stay in, and a larger lounge so the dog has his own sofa.
Posted 12th December 2023 | 7 minute read
When we moved in, my eldest son chose his bedroom – much bigger than in the old house – and I asked him what he was going to do with all the space. “This area is for my digital lab” was the response (and here’s me thinking he could have shelves for all his Lego). Intrigued, but secretly thinking I could catch him out, I asked: “what’s a digital lab?”. The look of disappointment on his face: “What!? Like machine learning, AI, digital twins, stuff like that. How old are you, dad?”. Truly beaten by an 8-year-old, I went back downstairs to my typewriter – you know, the thing where you hit letters and it prints them on this stuff called paper.
I put my son talking about “digital labs” down to just being a sign of the times. The ease and fluidity with which young people can navigate the digital world we live in is incredible. Scary, but incredible. The way we think, act and work is increasingly digital. From paperless utility bills to tools like ChatGPT, we can’t avoid interacting with something digital. This digital integration has been happening for many years and it doesn’t seem to be stopping anytime soon. I exist in a science research world and we are grappling with what this means for us; will it help? What skills will we need? What do we do with all this data? These are just a few of the questions we are continually asking ourselves. In particular, we focus on AI. AI is something that has huge potential for science, but it’s complicated stuff. I’m sure my son would instantly understand it all but for me, I’m still confused. Oh, and by the way, if you’re interested, I’m 38.
Let’s start with some context and definitions. AI-based approaches to tasks and activities aren’t new; they have been around for decades. Indeed, the birth of AI can arguably be traced back to the 1950s, with practical examples such as computer-aided vision appearing from the 1970s. What we are seeing now is the widespread impact of AI through the pace of its development, the scale it has achieved, and how it has permeated everyday life.
It is also necessary to at least try and define, or constrain, what we mean by AI. For me, it’s relatively simple: the term is used extremely broadly and covers a huge range of goals and techniques, but I think of AI as an artificial system that does tasks usually associated with an intelligent being (mostly, a human). Terms such as “digital twin” or “machine learning” describe ways to achieve artificial intelligence. In the case of machine learning, you don’t program the intelligent algorithm directly into the computer. Instead, you show it a large number of examples and (with some constraints) the computer learns the algorithm for itself (e.g. how to recognise faces). Not all AI applications use machine learning, though; think of traditional chess-playing computers that use brute-force computation but still behave “intelligently”.
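For the technically curious, here is a toy sketch of what “learning from examples” means in practice. The data, labels and thresholds below are entirely made up for illustration – this is a nearest-neighbour classifier, one of the simplest machine learning techniques, not a real face recogniser.

```python
# A toy illustration of "learning from examples" rather than programming rules.
# All data and labels here are made up for illustration.

import math

# Training examples: (feature_1, feature_2) -> label.
# Imagine the features are simple measurements of an animal in a photo.
training_data = [
    ((1.0, 1.2), "cat"),
    ((0.9, 1.0), "cat"),
    ((3.0, 3.5), "dog"),
    ((3.2, 3.1), "dog"),
]

def predict(point):
    """1-nearest-neighbour: label a new point with the label of the
    closest training example. Nobody programmed a 'cat rule' -- the
    behaviour comes entirely from the examples shown to it."""
    best_label, best_dist = None, math.inf
    for features, label in training_data:
        dist = math.dist(point, features)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

print(predict((1.1, 1.1)))  # close to the 'cat' examples -> "cat"
print(predict((3.1, 3.3)))  # close to the 'dog' examples -> "dog"
```

Add more examples and the behaviour changes, with no new code – that, in miniature, is the shift machine learning represents.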
So, it’s a complex space, with different types and layers of AI-based approaches and new approaches emerging whose capabilities have significant impact. Generative AI is something that has exploded more recently and is both exciting and concerning: ChatGPT and deepfakes fit into this category. I struggle to navigate the technical complexities of AI and keep pace with the rate of technological development, but I can see its increasing use in the scientific fields and, if I was to stick my neck out, I would say it is potentially game-changing.
Let me try and unpack this. As an ocean science research organisation, we rely on ocean data for what we do. Understanding the ocean depends on gaining data from it, and lots of it. At its most simplistic, AI can allow us to capture, process and interpret data more quickly and at a higher rate. Think of ocean science research as a journey to answer a question. Along that journey we undertake a range of activities: we define the data we need to capture and the tools we need to capture it – usually a research vessel, autonomous equipment, satellite data, etc – then we build them. We then go and collect the data, process it, and analyse and interpret it to draw scientific conclusions. What drives this journey forward are human-based activities and decisions; thousands of them. AI offers us the opportunity to swap human-led activities, and some of the limitations that come with them, for an AI-based approach. Think about the data collection stage of the journey: we could swap a human physically looking at the environment, piloting equipment, or working through thousands of images to identify specific species for computer vision, robots, or species-recognition algorithms. For me, the power and success of AI therefore comes down to three things:
- can it do the task as accurately as a human (or accurately enough)?
- can it do the task faster?
- can it do the task more cheaply?
I’d also argue there is a fourth point around whether undertaking the first three crosses an ethical boundary, but that is a conversation for a different day.
From a science perspective, these three rather obvious points open up a world of possibilities. Some of the reasons we are interested in AI include its ability to deliver consistent accuracy through robotic control, the fact that accepting some inaccuracy in exchange for speed can lower the cost of certain activities, and the fact that, from an ocean science perspective, AI is often computationally cheaper than a traditional physics-based approach. We have a large “Digital Ocean” function in our organisation whose members work closely with other scientists to develop AI and machine learning applications in a range of areas, such as:
- image processing from satellites and underwater vehicles;
- data curation and quality control;
- hybrid modelling, where we replace computationally expensive parts of a simulation with a cheaper statistical emulation; and
- the control of robots for use in autonomous data capture systems.
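The hybrid-modelling idea is worth a small sketch: run the expensive simulator at a handful of inputs up front, then answer most queries from a cheap interpolating emulator. The “simulator” below is a made-up stand-in function, not a real ocean model, and real emulators typically use statistical methods far more sophisticated than linear interpolation.

```python
# Minimal sketch of statistical emulation: pre-compute an expensive
# simulation at a coarse grid of inputs, then interpolate cheaply
# in between. The "simulator" here is a made-up stand-in.

def expensive_simulation(x):
    # Pretend this takes hours of supercomputer time per call.
    return x ** 3 - 2 * x  # an arbitrary smooth response

# Training stage: run the simulator once at each grid point.
grid = [i * 0.5 for i in range(-4, 5)]  # -2.0, -1.5, ..., 2.0
samples = sorted((x, expensive_simulation(x)) for x in grid)

def emulate(x):
    """Cheap emulator: linear interpolation between the two nearest
    pre-computed simulator runs. Approximate, but essentially free."""
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("outside the emulator's trained range")

# The emulator answers instantly; the simulator would take hours.
print(emulate(0.75), "vs true value", expensive_simulation(0.75))
```

The trade-off is exactly the one described above: you accept a small, quantifiable inaccuracy in exchange for an enormous saving in compute.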
When I talk to colleagues involved in this work, it feels like we are nearing the point where we can scale this up to achieve significant impact in our work. But I always return to the question of “so what? What does this give us?”. For me, AI has to have impact and transform the way we do something, so we can do more or make our science more accessible. If we can develop AI-based techniques that allow us to process more data, more accurately and more quickly, then theoretically this allows us to undertake more science. More science means more understanding of our ocean, and this is a good thing. When we step into the world of data visualisation through AI, this opens up a new channel for communicating data and science in a way people can interact with, through things such as augmented and virtual reality. This democratises complex science by translating it into something accessible to non-scientific audiences. These are just some examples of what is possible, but there are many more.
However, before we get carried away, a word of caution. AI can have some downsides, and in a scientific world where accuracy is king for drawing conclusions, this can be challenging. Machine learning in particular requires a large amount of “training” data or observations, which we often don’t have. The training step can be very laborious and is often where the main costs of AI lie. AI algorithms can sometimes produce wildly inaccurate results, and this is particularly dangerous when those results look sensible or reasonable. And finally, there is the big one: AI-based approaches are sometimes seen as “black boxes” and are therefore not trusted. People don’t understand what is going on in the box and can jump to nightmares of killer robots wandering around completely uncontrolled. The growth in “explainable AI” is trying to allay these fears by letting us probe how the AI inside the box is actually working.
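One simple explainable-AI technique gives a flavour of that probing: permutation importance. You shuffle a single input feature across your dataset and measure how much the model’s accuracy drops; a big drop means the model was leaning heavily on that feature. The data and “model” below are deliberately synthetic toys, just to show the mechanics.

```python
# Sketch of one explainable-AI technique: permutation importance.
# Shuffle one input feature; the bigger the accuracy drop, the more
# the model relied on that feature. Data and model are made up.

import random

random.seed(0)

# Synthetic dataset: the label depends only on feature 0;
# feature 1 is pure noise.
data = []
for _ in range(200):
    x = [random.random(), random.random()]
    data.append((x, 1 if x[0] > 0.5 else 0))

def model(x):
    # A stand-in "trained model" that happens to use only feature 0.
    return 1 if x[0] > 0.5 else 0

def accuracy(dataset):
    return sum(model(x) == y for x, y in dataset) / len(dataset)

def permutation_importance(dataset, feature):
    """Accuracy drop when one feature's values are shuffled across rows."""
    shuffled_vals = [x[feature] for x, _ in dataset]
    random.shuffle(shuffled_vals)
    permuted = [
        (x[:feature] + [v] + x[feature + 1:], y)
        for (x, y), v in zip(dataset, shuffled_vals)
    ]
    return accuracy(dataset) - accuracy(permuted)

# Feature 0 matters (large drop); feature 1 is noise (no drop).
print("feature 0 importance:", permutation_importance(data, 0))
print("feature 1 importance:", permutation_importance(data, 1))
```

The appeal is that this treats the model itself as a black box – you never open it up, you just watch how its behaviour changes – which is why techniques like this help build trust without requiring everyone to understand the internals.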
This all said, I think the science community will need to relax a bit more about the use of AI. People often pick on the above objections as a way to reject AI techniques, but the “traditional” techniques are sometimes no more accurate. The weather forecasting community is going through this challenge at the moment – it was assumed that the traditional physics-based models would always win, but some AI-based models are gaining serious traction for attacking particular problems (e.g. short-term forecasting).
I recognise I have given the characteristically “political” view here: the pros, the cons and no definitive answer. Let me try and give you a shorter answer to the question. Firstly, some context for that answer: we can’t see AI in the science world in the same way as we do in our everyday life, or through the same lens as it is often presented by the media. It’s much more about the technical mechanics of how it is built and used within the scientific process. Seen in this way, my personal opinion is that science will become increasingly enabled by AI approaches, but it requires close contact with the mathematical and computational experts to ensure that the right techniques are chosen and the right conclusions drawn. It’s very easy to get into a “Wild West” of people throwing around algorithms and data and coming up with nonsense results. If we do this, we undermine the potential of AI in the scientific methodology and fuel the scepticism of it as a tool that can fundamentally change how we “do science”.