Every day, we see headlines about advances in artificial intelligence, but is the excitement masking the limits we’ve reached? What if the next leap in artificial intelligence, the creation of Artificial General Intelligence (AGI), a genuinely autonomous and independent reasoning mind, requires us to rethink the very nature of computing itself?
Today, we are reaching the limits of generative AI in terms of model efficiency and hardware constraints. Recent headlines about low-cost training and inference, and about new topological quantum architectures, have disrupted the investment trajectory and raised questions about how far we can go without a significant technological change. In this context, we might need to reflect on a profound insight from the renowned physicist Richard Feynman:
“Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical. And by golly it’s a wonderful problem, because it doesn’t look so easy.” (Richard Feynman, 1981 lecture)
This quote from Feynman underscores the idea that AI has been rooted in classical computing to date, and to advance further, we may need to embrace the principles of quantum computing sooner rather than later.
The convergence of Artificial Intelligence (AI) and Quantum Computing (QC) promises to revolutionize technological advancement. The breakthrough required may simply be the recognition that our intelligence is driven by quantum effects, not just the electrical pulses we simulate with classical computers. If we get this right, there is the potential to transform industries, solve complex problems, and drive unprecedented advancements, just as generative AI has suddenly led to a previously unimaginable new age of innovation.
Artificial Intelligence (AI) has undergone significant transformations since its inception, evolving from basic machine learning models to sophisticated generative AI systems. At its core, AI is software designed to run on traditional computer hardware, processing various forms of input—such as data, text, images, and other media—into meaningful outputs. Initially, AI models relied on classical computing methods, utilizing Central Processing Units (CPUs) and memory to perform operations in a single-threaded manner. These early models stored all parameters in memory, which limited their ability to handle complex computations and large datasets.
As the demand for more advanced AI capabilities grew, the limitations of CPUs became apparent. This led to the adoption of Graphics Processing Units (GPUs), which offered parallel processing capabilities. GPUs could handle multiple operations simultaneously, making them ideal for training and running neural networks. This shift was particularly crucial for developing deep learning models, which require extensive computational power to process vast amounts of data and perform numerous calculations concurrently.
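To make that shift concrete, here is a minimal sketch, assuming PyTorch is installed and, optionally, a CUDA-capable GPU is available; the matrix sizes are illustrative only. It runs the same kind of batched matrix multiplication that dominates neural-network training, first on the CPU and then on the GPU.

```python
# Minimal sketch (assumed setup: PyTorch installed, CUDA GPU optional).
# Batched matrix multiplication is the core workload of neural-network training.
import time

import torch

def timed_matmul(device: str, batch: int = 64, dim: int = 1024) -> float:
    """Multiply a batch of square matrices on the given device; return seconds."""
    a = torch.randn(batch, dim, dim, device=device)
    b = torch.randn(batch, dim, dim, device=device)
    start = time.perf_counter()
    _ = torch.bmm(a, b)               # many independent multiply-adds run in parallel
    if device == "cuda":
        torch.cuda.synchronize()      # wait for the asynchronous GPU kernels to finish
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul('cuda'):.3f} s")
```

On typical hardware, the GPU version completes the batch dramatically faster, because thousands of arithmetic units work on the matrices at once rather than one after another.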
The advent of GPUs marked a pivotal moment in AI development, enabling the creation of more complex and powerful models. One of the most significant advancements in this era was the rise of generative AI, particularly large language models. These models, such as OpenAI’s GPT series, leverage high-memory GPUs to store billions of parameters. This capability allows them to process and generate human-like text with remarkable accuracy and coherence. Generative AI models have revolutionized various fields, from natural language processing to creative industries, by enabling machines to produce content that closely mimics human output.
Generative AI represented a significant leap forward from standard machine learning. While traditional machine learning focuses on tasks like classification, regression, and prediction, generative AI is designed to create new content. This includes generating text, images, music, and even video. The ability to generate content opens up new possibilities for AI applications, such as automated content creation, advanced AI agents, and near-undetectable deepfakes.
The transition from CPUs to GPUs has been instrumental in driving these advancements. GPUs’ parallel processing capabilities have allowed AI researchers to train larger and more complex models, pushing the boundaries of what AI can achieve. As we look to the future, the potential of quantum computing looms on the horizon. Quantum computers promise massive parallelism, which is precisely what large AI models need, although, like those models, they still struggle with accuracy because of noise. Once this is overcome, quantum computing could be exponentially faster than traditional computing for certain problems, unlocking entirely new opportunities for AI development.
The journey of standard computing to today’s ubiquitous association with AI has been one of symbiotic evolution. Classical computing, rooted in over half a century of innovation, is undoubtedly the foundation of modern technology. The development of the first programmable computers in the 1940s and the subsequent invention of the transistor in 1947 were pivotal initial milestones, but they had to be supported by an ecosystem of software and solutions that created the internet and mobile generations. These advances in computing power paved the way for ever more powerful and efficient machines, to the point where we now hold more computing power in our hands than the billion-dollar programs that first carried humans into space.
AI, on the other hand, began its journey in the 1950s with the pioneering and evangelical work of researchers like Alan Turing and John McCarthy. The field has since evolved through various phases, from the early days of symbolic AI and expert systems to the current era of machine learning and deep learning. The advent of big data and the exponential growth of computational power have fueled AI’s rapid advancement, enabling it to tackle increasingly complex tasks. Unfortunately, this progress may be hitting a wall again: ever-larger LLMs are no longer delivering the steady gains that Moore’s Law once provided as a guide for computing.
Like every other aspect of society, AI is heavily intertwined with classical computing. As the timeline below suggests, this symbiosis may soon extend to quantum computing, or will we hit another AI winter if a lack of practical applications stymies progress?
Quantum computing, on the other hand, was until recently a purely theoretical concept, and only in the past few years has it made significant strides toward practical application. The theoretical foundations of quantum computing were laid in the 1980s by physicists like Richard Feynman. While waiting for the applied physics to catch up, researchers developed quantum algorithms, such as Shor’s algorithm for factoring large numbers and Grover’s algorithm for unstructured search, that demonstrated the potential of quantum computers to outperform classical computers on certain tasks, although those tasks remain quite niche, largely confined to finance and deep science.
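To give a feel for why these algorithms matter, the following is a toy, classical NumPy simulation of Grover’s search (not a hardware implementation): it amplifies the probability of one marked item among N using roughly sqrt(N) oracle calls, where a naive classical scan would need on the order of N. The marked index and search size are illustrative assumptions.

```python
# Toy, classical simulation of Grover's search: find one marked item among N
# with ~sqrt(N) oracle calls instead of the ~N a classical scan would need.
import numpy as np

N = 8                 # search space size (3 qubits' worth of states)
marked = 5            # index the oracle recognizes (illustrative)

state = np.full(N, 1 / np.sqrt(N))             # uniform superposition over all items
iterations = int(round(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    state[marked] *= -1                        # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state           # diffusion: reflect amplitudes about their mean

probabilities = state ** 2
print(f"after {iterations} iterations, P(marked) = {probabilities[marked]:.3f}")
```

For N = 8, the marked item is found with roughly 95% probability after only two iterations, which is the quadratic speed-up Grover’s algorithm offers for unstructured search.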
How do we measure this evolution? In qubits. A standard bit in computing is like a light switch: it is either on or off. A qubit, by contrast, is more like a dimmer switch: it can be partly on and partly off at the same time (a superposition), which, with the right algorithm, makes certain very hard calculations tractable. The challenge is getting the qubits to behave without too much noise.
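A rough sketch of that analogy, simulated classically with NumPy and using an arbitrary “dimmer” angle: a classical bit holds a definite 0 or 1, while a qubit holds two amplitudes whose squared magnitudes give the odds of reading 0 or 1 when it is measured.

```python
# A classical bit is definitely 0 or 1; a qubit holds two amplitudes whose squared
# magnitudes are the probabilities of measuring 0 or 1 (the Born rule).
import numpy as np

rng = np.random.default_rng(0)

classical_bit = 1                                 # simply on or off

theta = np.pi / 3                                 # arbitrary "dimmer" angle (assumption)
qubit = np.array([np.cos(theta / 2),              # amplitude of |0>
                  np.sin(theta / 2)])             # amplitude of |1>

probs = np.abs(qubit) ** 2                        # here: P(0) = 0.75, P(1) = 0.25
samples = rng.choice([0, 1], size=1000, p=probs)  # each measurement collapses to 0 or 1
print(f"P(0)={probs[0]:.2f}, P(1)={probs[1]:.2f}, fraction of 1s observed: {samples.mean():.2f}")
```

Real hardware adds noise to those amplitudes, which is why error correction dominates today’s quantum roadmaps.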
As of 2025, the largest quantum computers have surpassed the 1,000-qubit mark. For example, IBM’s Condor quantum processor boasts 1,121 qubits, making it one of the largest quantum processors built to date. This quantum processor is integrated into IBM’s Quantum System Two, a modular system designed to support even larger processors in the future. The development of such powerful quantum computers highlights the rapid advancements in quantum technology and its potential to revolutionize various fields, including AI.
Google has also made significant strides in quantum computing with its Willow chip, a 105-qubit processor. Willow has achieved two major milestones: reducing errors exponentially as it scales to more qubits, and performing a standard benchmark computation (random circuit sampling) in under five minutes that Google estimates would take a classical supercomputer roughly 10 septillion years. These achievements demonstrate the potential of quantum computing to solve certain problems far more efficiently than classical computers.
In recent years, huge breakthroughs have been witnessed in quantum computing. Companies like IBM and Google, and more recently Microsoft with its Majorana-based topological architecture, have developed quantum processors with increasing numbers of qubits, the fundamental units of quantum information. Google first claimed quantum supremacy in 2019, when its Sycamore processor solved in 200 seconds a problem estimated to take the world’s fastest supercomputer 10,000 years; the 2024 Willow results above renewed that claim with an even wider margin. These were, however, demonstrations of specialized quantum algorithms, and we have yet to see how less noisy quantum hardware will affect existing classical computing algorithms in practice.
This leads us to where we are today: approaching the convergence of AI and quantum computing and testing enhanced AI capabilities in new and profound ways. Quantum computing’s ability to process vast amounts of data simultaneously and solve complex optimization problems could significantly accelerate AI algorithms and push current technology toward AGI.
Combining the strengths of quantum computing with the parallel processing power of GPUs could lead to significant advancements in AI. Quantum processors could, in principle, run vast numbers of “train of thought” operations in parallel, allowing AI models to process and learn from data at unprecedented speeds. This synergy brings us closer to general AI: machines that can understand, learn, and perform tasks with human-like intelligence. Returning to Feynman’s quote, nature is not classical, so how do we realize natural intelligence without using the tools that nature provided for us in quantum mechanics, namely randomness, sparks of intelligence, and nature/nurture system memory?
The combination of Quantum Machine Learning (QML) and GPTs could lead to AI systems that are not only faster but also more capable of understanding and interacting with the world in a human-like manner. Practical applications are being formulated, such as Quantum Large Language Models (qLLMs), which could address the scalability issues that currently limit the development of larger and more complex AI models. This scalability is essential for creating AGI systems that can process and learn from the vast amounts of data required to achieve human-like intelligence.
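Because qLLMs are still largely conceptual, the following is only an illustrative NumPy toy of one ingredient commonly discussed in QML, a quantum feature map placed in front of a classical readout layer; the function names, dimensions, and weights are all hypothetical, and everything is simulated classically.

```python
# Illustrative toy only: a simulated "quantum feature map" in front of a classical
# readout layer. All names, sizes, and weights are hypothetical.
import numpy as np

def quantum_feature_map(x: np.ndarray) -> np.ndarray:
    """Map n input features to 2**n measurement probabilities via simulated RY rotations."""
    state = np.array([1.0])
    for angle in x:
        qubit = np.array([np.cos(angle / 2), np.sin(angle / 2)])  # RY(angle) applied to |0>
        state = np.kron(state, qubit)        # build the joint n-qubit state vector
    return state ** 2                        # Born-rule probabilities over 2**n outcomes

rng = np.random.default_rng(1)
x = rng.uniform(0, np.pi, size=4)            # 4 classical input features
features = quantum_feature_map(x)            # 16 quantum-derived features
W = rng.normal(size=(1, features.size))      # toy classical readout layer
print("score:", (W @ features).item())
```

The point of the sketch is the scaling: n qubits yield a 2**n-dimensional feature space, which is the kind of exponential capacity that makes quantum approaches attractive for the scalability problems described above.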
The integration of Quantum Machine Learning with Generative Pretrained Transformers would represent a significant leap toward achieving Artificial General Intelligence. By leveraging the unparalleled computational power of quantum computers and the sophisticated capabilities of GPTs, we could overcome the current limitations of AI research. Such a breakthrough would pave the way for more advanced, efficient, and adaptable AI systems, bringing us closer to the realization of AGI.
The future of AI and Quantum Computing is filled with immense potential and challenges. As quantum hardware continues to advance, we can expect more practical applications to emerge. However, there are significant challenges to overcome, including error correction, qubit coherence, and the development of scalable quantum systems. Focusing on the development of secure and private edge services can significantly support the advancement of local AGI systems, leveraging the unique capabilities of quantum computing. By deploying quantum computing at the edge, we can harness the inherent randomness and immense processing power needed for AGI to process local inputs in real time.
| Era | Year | Technology | Key Developments |
|-----|------|------------|-------------------|
| Early AI Models | 1950s-1980s | CPUs | Rule-based systems, basic pattern recognition |
| Transition to GPUs | 1990s-2010s | GPUs | Deep learning models, CNNs, RNNs |
| Generative AI | 2010s-present | Big GPUs | LLMs, generative AI models like the GPT series |
| Quantum Supremacy | 2025+ | Quantum | QNNs, exponential speed-ups in AI |
This approach ensures that AGI systems are contextually aware and responsive to their immediate environment, enhancing their effectiveness and adaptability.
Moreover, integrating these edge services with robust cloud AI services and agents provides a powerful hybrid model that balances local processing with centralized intelligence. This strategy not only mitigates the risks associated with centralized AI control, often depicted in sci-fi scenarios like Skynet, but also ensures data privacy and security. By decentralizing AI processing and maintaining stringent security protocols, we can foster the development of AGI in a manner that is both innovative and safe, paving the way for intelligent systems that augment human capabilities without compromising ethical standards.
In conclusion, the convergence of AI and Quantum Computing represents a transformative shift in the technology landscape. This synergy offers unprecedented computational power and efficiency, paving the way for groundbreaking advancements. As we continue to explore and develop these technologies, it is crucial to stay informed and engaged with their rapid evolution. The future holds exciting possibilities, and the convergence of AI and Quantum Computing is at the forefront of this technological revolution.
Quantum computing may be on the brink of delivering practical, broadly useful quantum advantage, possibly by 2030. As quantum technology advances, the costs associated with quantum computing are expected to decrease, potentially reaching levels comparable to the high-memory GPUs used in AI today. This convergence will make quantum computing more accessible and practical for a broader range of applications. This synergy will enable AI models to process and learn from data at unprecedented speeds, bringing us closer to the realization of general AI, where machines can understand, learn, and perform tasks with human-like intelligence.
Quantum computing promises to be a game-changer in the world of AI, unlocking new opportunities and driving significant advancements across various fields. As we continue to explore the potential of this revolutionary technology, we can look forward to a future where AI and quantum computing work together to create a smarter, more efficient, and more sustainable world.