A new year is an opportunity to look ahead. At Smith Institute we asked some of our experts what they thought would be key trends in artificial intelligence, machine learning and beyond this year.
Here is what they had to say.
Formal AI standards taking top billing
Melissa Tate, Commercial Director
According to the UK government, “AI may be one of the most important innovations in human history.” It has the potential to enable our societies to do more with less, creating jobs, improving lives and transforming systems along the way.
Nearly every industry is already taking advantage of AI tools and technologies – to discover new drugs and drive healthcare innovations; to provide driver assistance and map us from A to B; to power translation and search the world’s information; to enable low carbon electricity networks, transparent supply chains, secure banking, safer streets – and more.
As the race to capture the benefits of this technology intensifies, so too does the importance of establishing standards and regulations to mitigate the risks that it poses. 2023 will be a significant year for progress on these standards – from the UK to the EU to the USA and beyond. Some trends to watch:
- UK and other governments will continue to take a pro-innovation approach to AI – striving to be leaders in its development at the same time as they build new standards and requirements.
- Discussions will focus on privacy and civil liberty, reducing bias and discrimination, and defending against design and implementation flaws – all to support public safety and trust in AI systems.
- The EU’s risk-based approach to AI regulation will serve as a useful basis for other global standards.
- Policymakers, digital ethicists and data scientists will make progress on defining metrics to measure an AI’s accuracy and trustworthiness – and to flag when an AI is confidently inaccurate.
2023 will be a year of yet more technological leaps for AI, and a year of vital contributions to AI standards and assurance techniques. We look forward to collaborating in de-risking and building on this transformational tool.
Expanded development and adoption of causal AI
Dr Kieran Kalair, Mathematical Consultant
Throughout industry, we are constantly making decisions that impact future operations. Often, it can be useful to look back at past experiences and ask, “what would have happened if I’d acted differently?” The problem is, we never know exactly what would have happened – we took a particular action at the time because it appeared to be the optimal thing to do, and we observed its outcome. To better understand how our actions impacted the situation, or how alternatives might have performed better or worse, we need to use statistical and mathematical techniques.
These techniques are called causal inference. One branch of causal inference allows us to model scenarios and estimate ‘counterfactuals’: the outcome we were not able to observe because we didn’t take the corresponding action in the moment. Another branch considers ‘A/B testing’, asking “if we make an intervention on one group, and not the other, can we assess what impact this intervention had?”
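As an illustrative sketch (not tied to any particular causal AI framework, with all names and figures hypothetical), a randomised A/B test lets us estimate an average treatment effect as a simple difference in group means – here on simulated data with a true intervention effect of +2.0:

```python
import random

def ab_test_effect(control, treated):
    """Estimate the average treatment effect as the difference
    in mean outcomes between the treated and control groups."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)

# Simulate a randomised trial: every unit shares a baseline outcome
# of 10, and the (normally unknown) true intervention effect is +2.0.
random.seed(42)
true_effect = 2.0
control = [10 + random.gauss(0, 1) for _ in range(10_000)]
treated = [10 + true_effect + random.gauss(0, 1) for _ in range(10_000)]

estimate = ab_test_effect(control, treated)
print(round(estimate, 1))  # close to the true effect of 2.0
```

Randomisation is doing the heavy lifting here: the control group stands in for the counterfactual outcomes the treated group never revealed. Without it, confounding would bias this simple estimator – which is exactly where the more sophisticated machinery of causal inference earns its keep.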
In the past few years, there has been an enormous effort to incorporate these methods into AI frameworks, which has led to a broad area of work known as ‘causal AI’. In 2023, I expect that we’ll explore the following causal AI questions in greater detail:
- Ensuring explainability of AI through a causal approach. Will approaching explainable AI through a causal perspective be more beneficial to certain industrial applications than the widely used feature attribution methods?
- Counterfactual estimation. Will causal AI methods improve our assessments when carrying out “what if” analysis, and give more confidence when making changes to critical decision-making processes?
- Control trials through simulation. If conducting a controlled trial of an intervention is impossible, can causal AI offer a model-driven replacement to simulate highly complex practical scenarios?
- Data generation and privacy. Will basing generative models on methods from causal AI improve the quality of synthetic data and ensure data privacy is retained?
The level of academic research in causal AI is promising, in both fundamental methodology and practical application. It will be interesting to see how these academic advancements can be used to aid business decisions and actions throughout 2023 and beyond.
Advanced optimisation in decarbonising electricity systems
Dr Tom Dobra, Mathematical Consultant
Consumers were stung by high electricity prices in 2022. This is partly due to the ongoing need to switch to more sustainable generation, and partly due to heavy reliance on natural gas, which comprises over 40% of our energy mix. Throughout the 20th century, most of our power came from a few dozen large, reliable generators, the majority burning fossil fuels such as coal and gas. Skilled human operators, with the aid of linear optimisation, were able to use the markets to supply us with cost-effective electricity.
Sustainable decarbonisation vastly increases the number of smaller generators such as wind farms. Dependent on the weather, wind and solar are not only inherently volatile but also fail to replace the mechanical inertia – critical to keeping the grid stable – provided by spinning turbines in coal and gas plants. The net result is a system that is now too complicated for humans to balance cost-effectively without modern tools. Thus, we reach back to gas for stability.
2023 is the year to embrace state-of-the-art optimisation tools that solve Mixed Integer Programmes (MIPs) and nonlinear problems, empowering our greener electricity system to reach its full potential. MIPs suit problems that mix thousands of binary (on/off) decisions with continuous variables, such as output levels priced along a cost curve. For example, they can recommend to operators which generators, batteries and other services to deploy for the cheapest yet resilient electricity today, or to investors where the best returns lie tomorrow. Let’s optimise our way to affordable green energy.
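As a toy sketch of the structure such a solver handles – with hypothetical generator figures, and brute-force enumeration standing in for the branch-and-bound methods a real MIP solver uses – here is a tiny unit-commitment problem: a binary on/off decision per generator, with committed units then dispatched in marginal-cost order:

```python
from itertools import product

# Hypothetical generator data: name, fixed start-up cost (£),
# marginal cost (£/MWh), and maximum output (MWh).
generators = [
    ("coal", 500, 30, 300),
    ("gas",  200, 60, 200),
    ("wind",   0, 10, 120),
]
demand = 350  # MWh to supply

def cheapest_commitment(generators, demand):
    """Enumerate every on/off pattern (the binary variables of the MIP)
    and dispatch committed units cheapest-marginal-cost first (the
    continuous variables), keeping the cheapest feasible plan."""
    best = None
    for pattern in product([0, 1], repeat=len(generators)):
        committed = [g for g, on in zip(generators, pattern) if on]
        if sum(g[3] for g in committed) < demand:
            continue  # committed capacity cannot meet demand
        cost, remaining = 0, demand
        for name, start, marginal, cap in sorted(committed, key=lambda g: g[2]):
            output = min(cap, remaining)
            cost += start + marginal * output
            remaining -= output
        if best is None or cost < best[0]:
            best = (cost, pattern)
    return best

cost, pattern = cheapest_commitment(generators, demand)
print(cost, pattern)  # cheapest plan: commit coal and wind
```

Enumeration is fine for three generators but explodes exponentially with thousands of binary decisions – which is precisely why the grid needs dedicated MIP solvers rather than human intuition.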
Deeper adoption of transformers
Dr Alex Bowring, Mathematical Consultant
I think we will continue to see the relentless growth of the transformer architecture for solving problems in machine learning and AI, with transformers ultimately replacing architectures such as convolutional neural networks (CNNs) that have been the models of choice until now.
Transformers were introduced in 2017 by Google in the paper “Attention Is All You Need”. In the few years since, they have already become the front-running model for natural language processing (NLP) tasks. However, the ‘self-attention’ mechanism used by transformers to mimic cognitive attention for NLP tasks generalises readily to other AI domains, such as image generation or classification and computer vision. Self-attention allows transformers to take a holistic approach to AI problems that is simply not possible with CNNs. Comparing transformers and CNNs in the context of image processing, Maithra Raghu, a computer scientist at Google Brain, said: “If a CNN’s approach is like starting at a single pixel and zooming out, a transformer slowly brings the whole fuzzy image into focus” (Will Transformers Take Over Artificial Intelligence?). Andrej Karpathy, the former Director of AI at Tesla, recently said that the transformer architecture is the best idea in AI at the moment (Transformers: The best idea in AI – YouTube).
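To make the ‘holistic’ point concrete, here is a minimal sketch of the scaled dot-product attention at the heart of the transformer, in plain Python. It omits the learned projection matrices a real transformer applies to produce separate queries, keys and values – here the raw embeddings play all three roles:

```python
import math

def softmax(xs):
    """Turn raw scores into positive weights that sum to one."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(queries, keys, values):
    """Scaled dot-product attention: each output position is a weighted
    average over ALL values, with weights given by how strongly that
    position's query matches every key."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# Three toy token embeddings standing in for an input sequence.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(x, x, x)
print([[round(v, 2) for v in row] for row in out])
```

Note that every output row mixes information from every input position in a single step – the global, “whole fuzzy image” view Raghu describes – whereas a CNN layer only sees a local neighbourhood and must stack layers to widen its view.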
Since the transformer architecture can be applied across such a wide range of AI problems, a vast range of AI domains will be revolutionised as its full potential is realised: transformers will be used to predict protein structures, analyse legal documents, and train robots to complete complex tasks and deliver services.
Quantum technology adoption
Cameron Booker, Mathematical Consultant
Quantum computing has certainly been a buzzword for the last few years with numerous start-ups and university spin-offs appearing every year. That said, in terms of real-world impact, the industry is yet to really deliver on its promise of a new era of computation. This is largely a result of the challenges that come with trying to control large numbers of qubits for extended periods of time. Consequently, I expect 2023 to be the year when most of the industry turns its attention to fault-tolerant devices and invests in researching new avenues for error correction rather than the recent emphasis on noisy intermediate-scale devices.
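A classical analogue hints at why this shift towards error correction matters. The three-bit repetition code below captures the intuition behind the quantum bit-flip code – though this is only a sketch: real quantum error correction must also handle phase errors and cannot simply copy states. Encoding a bit three times and taking a majority vote drives the error rate from p down to roughly 3p²:

```python
import random

def encode(bit):
    """Protect one logical bit by repeating it across three physical
    bits (the classical analogue of the quantum bit-flip code)."""
    return [bit, bit, bit]

def noisy_channel(bits, flip_prob):
    """Flip each physical bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits):
    """Majority vote: recovers the logical bit whenever at most one
    of the three physical bits was flipped."""
    return int(sum(bits) >= 2)

random.seed(0)
flip_prob = 0.05
trials = 100_000
raw_errors = sum(noisy_channel([1], flip_prob)[0] != 1
                 for _ in range(trials))
coded_errors = sum(decode(noisy_channel(encode(1), flip_prob)) != 1
                   for _ in range(trials))
print(raw_errors / trials, coded_errors / trials)
```

With a 5% physical error rate, the encoded error rate falls below 1% – and concatenating or scaling up codes pushes it lower still, which is the road to fault tolerance.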
I am also looking forward to many new end users starting to investigate what quantum computers can do for their business. Currently, we have a limited selection of quantum algorithms, but if a wider range of end users start to take an active interest, then I expect this number to start growing significantly as we find new creative ways to solve problems with quantum technology.
Finally, I hope that 2023 is the year when the wider business community starts to appreciate the existence and benefits of all quantum technologies, not just quantum computing. For example, the field of quantum sensing is reaching a level of maturity at which it will soon be possible to use quantum data in quantum computers. There is reason to hope that this would make quantum computers even more beneficial in certain settings.
We’re looking forward to seeing what developments happen in AI, machine learning and beyond over the next 12 months. Our teams are working with clients across industries to put these technologies to work for the greater good – transitioning to a secure, low-carbon energy system, protecting assets and infrastructure in the face of climate change, keeping the UK safe and strong, and delivering the products and services that ease the journey through 2023.
What do you think will be big in computing technology over the next year? Let us know on LinkedIn.
If you would like to discuss an AI, machine learning or quantum computing opportunity or challenge, get in touch with us.