Quantum computing came to the forefront last October when Google published an article in Nature declaring it had achieved ‘Quantum Supremacy’ with its Sycamore processor. With their claim promptly rebuffed by IBM in a tit-for-tat tech scrap, it’s worth reviewing what quantum computing is, its possible applications, and why everyone’s getting so hung up on ‘Supremacy’.
In computing, the amount of working information a computer can hold and manipulate at any given moment in a calculation is its memory (sometimes called RAM – random access memory). This is typically expressed in terms of ‘bytes’: for example, the new iPhone 11 Pro has 4 gigabytes (4 billion bytes) of memory – about a million times more calculation information than the Apollo 11 guidance computer had to work with. Each ‘byte’ is composed of 8 ‘bits’, each of which takes a binary value of either 0 or 1; these bits are the fundamental building blocks of conventional computing.
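If you fancy seeing that scale difference spelled out, here’s a quick back-of-the-envelope sketch in Python (the roughly 4 kilobytes used for the Apollo guidance computer’s working memory is an approximate figure, not one from the article above):

```python
# A back-of-the-envelope comparison. The 4 GB figure comes from the paragraph above;
# the ~4 KB figure for the Apollo 11 guidance computer's erasable memory is approximate.

bits_per_byte = 8
iphone_bytes = 4 * 10**9          # 4 gigabytes
apollo_bytes = 4 * 10**3          # roughly 4 kilobytes

print(f"iPhone 11 Pro memory: {iphone_bytes * bits_per_byte:,} bits")
print(f"Ratio (iPhone / Apollo): {iphone_bytes / apollo_bytes:,.0f}x")   # about a million
```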
Enter quantum computing. Proposed in the 1980s and conceptually developed by physicists like Richard Feynman (famous for his work on quantum electrodynamics), it fundamentally alters the way we process information. Instead of ‘bits’, it utilises ‘qubits’, which aren’t restricted to being 0 or 1 but can exist in a blend – a superposition – of both at once. This makes them potentially useful for tackling problems current computers struggle with – particularly in areas like materials science and artificial intelligence. It’d be wrong, however, to label quantum computers simply as more efficient computers – they’re fundamentally different in how they operate and what they aim to do – you’d never want to word process on a quantum computer.
Part of what makes qubits useful is also what makes them challenging to use – the reason they can occupy these in-between states is that a qubit is essentially a probability distribution over 0 and 1. Some measurement outcomes are more likely than others, but there’s an inherent degree of randomness every time you measure the value – in the quantum world, nothing is guaranteed! This is perhaps why, despite the long conceptual life of quantum computers, we’ve yet to see them widely implemented in a practical way.
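To make that concrete, here’s a minimal Python sketch – a toy model, not a real quantum simulator – of a single qubit as a pair of amplitudes, showing how repeated measurements follow the underlying probabilities even though each individual measurement is random:

```python
import math
import random

# A toy single-qubit model: the qubit is a pair of amplitudes, alpha for the 0 outcome
# and beta for the 1 outcome. Measuring it gives 0 with probability alpha**2 and
# 1 with probability beta**2.

alpha, beta = math.sqrt(0.7), math.sqrt(0.3)   # a qubit that's "70% zero, 30% one"

def measure():
    # Each measurement is genuinely random; only the probabilities are fixed.
    return 0 if random.random() < alpha ** 2 else 1

counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure()] += 1

print(counts)   # roughly {0: 7000, 1: 3000}, but any single measurement could go either way
```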
Hence the concept of quantum supremacy: essentially the point at which a quantum computer can solve a problem that a conventional computer practically can’t. Google’s recent claim of supremacy stems from Sycamore sampling the output of random quantum circuits – essentially an elaborate random-number generation task – in 200 seconds, a calculation Google argues would take IBM’s Summit, the world’s most powerful conventional supercomputer, 10,000 years. Beef. IBM immediately shot back that, in actuality, an optimized calculation would take only 2.5 days (still roughly 1,000 times slower than Google’s computer, but a lot less than 10,000 years). There will likely be more words traded in the coming weeks, but it’s undeniable that Google’s accomplishment is impressive, even if it’s only in the generation of random numbers (which do have a lot of practical uses in computing but sadly aren’t the key to cracking AI).
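For a sense of just how far apart the two claims are, here’s the arithmetic spelled out in Python, using the figures quoted above:

```python
# Putting the competing claims side by side, using the figures quoted above.

sycamore_seconds = 200                                  # Google's Sycamore runtime
google_estimate = 10_000 * 365.25 * 24 * 3600           # Google's "10,000 years", in seconds
ibm_estimate = 2.5 * 24 * 3600                          # IBM's "2.5 days", in seconds

print(f"Google's estimate: {google_estimate / sycamore_seconds:,.0f}x slower")   # ~1.6 billion
print(f"IBM's estimate:    {ibm_estimate / sycamore_seconds:,.0f}x slower")      # ~1,080
```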
Whether or not you think the difference between 200 seconds and 2.5 days is enough to declare supremacy (I certainly do), it’s encouraging to see the benchmark being set and challenged, and a hearty bit of competition between tech giants will likely do no harm to its further advancement. While the applicability of quantum methods will vary from problem to problem, this could well be the first step in an exciting new era of computing. Supreme.
image source: Google