The rise of the supercompetitor (Part 1)

I have previously defined cognitive advantage as ‘the demonstrable superiority gained through comprehending and acting to shape a competitive environment at a pace and with an ingenuity that an adversary’s ability to comprehend and act is compromised’. The enabler of such a capability is the wide-scale adoption of AI to maximise knowledge discovery and to hyper-accelerate decision-making, either through automated action or by augmenting a human.

But what would the impact be if a company, or a government, held a cognitive advantage? There is a lot to unpack here – and I am still feeling my way – and so I am going to cover this topic over a series of posts.

Holding a cognitive advantage could equip its holder with a profound and deeply defensible position. If you are able to control the environment that your adversary is trying to make sense of, then you not only hold the best cards but also know what cards your adversary has, because you chose which cards they should have. The odds are stacked completely in favour of the holder.

Out-thinking and out-manoeuvring rivals and adversaries, continuously, will require hyper-accelerated decision-making with the agency to act with precision, foresight and an understanding of complex system effects. I explore this in my book ‘Cognitive Advantage’ and, in summary, it could result from the optimal orchestration of artificial and human intellects, alongside access to beneficial data, to yield an ability to act with deep sagacity (‘the trait of solid judgment and intelligent choices’).

I propose a simple* goal for AI: KK > UK + KU + UU

Where KK = Known Knowns, UK = Unknown Knowns, KU = Known Unknowns and UU = Unknown Unknowns.

What does this mean? Simply that if a company, or a country, deploys AI to continually discover new knowledge faster than adversaries, and turns that knowledge (Known Knowns) into actionable steps, for either a machine or a human, then that could yield a cognitive advantage (or, if you like, a decision advantage).
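The goal above can be made concrete with a toy model. The following sketch is purely illustrative – the class, method names and numbers are all hypothetical, not anything defined in the post – but it shows the inequality as a test over four knowledge pools, with discovery moving mass from the three Unknown pools into Known Knowns:

```python
from dataclasses import dataclass

@dataclass
class KnowledgeState:
    kk: float  # Known Knowns: knowledge held and recognised
    uk: float  # Unknown Knowns: knowledge held but not recognised
    ku: float  # Known Unknowns: recognised gaps
    uu: float  # Unknown Unknowns: unrecognised gaps

    def has_cognitive_advantage(self) -> bool:
        # The post's goal: KK > UK + KU + UU
        return self.kk > self.uk + self.ku + self.uu

    def discover(self, amount: float) -> "KnowledgeState":
        # Discovery converts unknowns into Known Knowns, drawing down
        # each unknown pool proportionally (a modelling assumption).
        total = self.uk + self.ku + self.uu
        if total == 0:
            return self
        f = min(amount, total) / total
        return KnowledgeState(
            kk=self.kk + f * total,
            uk=self.uk * (1 - f),
            ku=self.ku * (1 - f),
            uu=self.uu * (1 - f),
        )

state = KnowledgeState(kk=10, uk=6, ku=4, uu=5)
print(state.has_cognitive_advantage())  # False: 10 > 15 fails
state = state.discover(8)
print(state.has_cognitive_advantage())  # True: 18 > 7
```

The proportional draw-down is an arbitrary choice made to keep the sketch short; in reality (as the footnote to this post observes) acting on Known Knowns can itself create new Unknowns, which this simple model does not capture.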

With such a capability, a competitor may decide to actively shape a competitive environment (eg. our online and offline worlds) at a pace, and in a way, that their competitors simply cannot keep up with, nor understand the meaning of, the changes. Compromising the cognitive abilities of competitors in this way may lead to a decline in their ability to act with any kind of accuracy or insight; after all, they are making sense of a world that the competitor with a cognitive advantage has created. This is likely to lead to poor decision-making, further reducing their ability to mount an effective response. In the limit, this could feed a regressive cycle in which competitors’ perception and reasoning continually decline until their decisions are no better than random chance.

As the ability of competitors to compete continues to decline, would this allow a single competitor – one that holds a cognitive advantage – to become a supercompetitor that dominates in an unassailable winner-takes-all outcome? What might this mean? What would the impact be? What if it becomes impossible to regulate such organisations? A supercompetitor that does not wish to be understood may not be possible to regulate.

How would we know which companies, or countries, are already on (or have the potential to get on) a supercompetitor trajectory? How might we begin to prepare for such a possibility? Are there already supercompetitors amongst us, such as big tech firms like Microsoft or Google, enabled as they are by their strategic investments in world-class AI labs such as OpenAI and DeepMind? Or a cognitive advantage as a sovereign capability developed by the United States or China?

I am unsure if we can prevent the rise of a supercompetitor. If such a state were to occur, we should hope that the supercompetitor is guided by strong ethical principles. But there is an inherent paradox in such a statement: an ethical supercompetitor is unlikely to want to be a supercompetitor, as they would deem it to be unethical. But which is more unethical? Being a benevolent supercompetitor to prevent bad actors from becoming one? Or deciding that it is not a state that any single organisation or country should occupy, and leading by example, but at the risk that this leaves the field open to less ethical competitors?

Image generated with Stable Diffusion using the prompt “Side profile of wise woman, city in the background, dark colors”

* ultimately, such an equation is not that helpful. When we act on Known Knowns we inevitably increase the number of Unknowns (on the right-hand side of the equation) that we then have to re-discover and turn back into Known Knowns! Nevertheless, the principle that this equation illustrates – using AI to maximise knowledge discovery and sagacity – does convey a motivating factor that could lead to the rise of a supercompetitor.

In an AI world winning in business and politics will go to those that have a ‘cognitive advantage’

When I was at the Complex Systems Conference in Singapore in September 2019, I found myself musing on the question: in a world where we have all maxed out our use of AI, how will that change the way a business outcompetes its rivals? In a world where automated decision-making will take over more and more of the running of businesses and entire countries, how do you compete to win? The conclusion I came to was that it is about out-thinking and out-manoeuvring your rivals and adversaries to the extent that you shape the environment in which you are competing in such a way that your adversaries (humans and AI) can no longer accurately comprehend it and, thus, begin to make increasingly bad decisions.

Now, this has happened throughout history and, to quote Sun Tzu, “… the whole secret lies in confusing the enemy, so that he cannot fathom our real intent”. However, the key difference this time is that AI will have a significant role in shaping that competitive environment, at a speed and with a capacity for handling big data that leaves humans simply behind. We may enter a cognitive war of AI versus AI.

I am hypothesising that AI will come to dominate global action that shapes our offline and online worlds. So, if you want to compete, you will need to shape the digital environment that AI is attempting to predict, understand and act in. In other words, the competitive moves we make in the future will (a) increasingly be made automatically by AI on our behalf, and (b) need to account for how other AI will perceive them (recognising that, for the moment at least, most AI is dependent on big data). If the long-established practice of marketing to convince people to buy your product is extended to marketing to artificial intellects too – persuading an AI to behave in a way that you want it to – then you start to get the point.

I call this having a cognitive advantage which I define as:

the demonstrable superiority gained through comprehending and acting to shape a competitive environment at a pace and with an ingenuity that an adversary’s ability to comprehend and act is compromised

I wrote a paper about this last year.

I will also be giving a talk on Cognitive Advantage at this year’s Future of Information & Communication conference in Vancouver (FICC 2021). A version of the conference paper will also be published in Springer’s ‘Advances in Intelligent Systems’.