I have previously defined cognitive advantage as ‘the demonstrable superiority gained through comprehending and acting to shape a competitive environment at a pace and with an ingenuity such that an adversary’s ability to comprehend and act is compromised’. The enabler of such a capability is the wide-scale adoption of AI to maximise knowledge discovery and to hyper-accelerate decision-making, either through automated action or by augmenting a human.
But what would the impact be if a company, or a government, held a cognitive advantage? There is a lot to unpack here – and I am still feeling my way – and so I am going to cover this topic over a series of posts.
Holding a cognitive advantage could equip its holder with a profound and deeply defensible position. If you are able to control the environment that your adversary is trying to make sense of, then you not only hold the best cards but also know what cards your adversary has, because you chose which cards they should have. The odds are stacked completely in favour of the holder.
Out-thinking and out-manoeuvring rivals and adversaries, continuously, will require hyper-accelerated decision-making with the agency to act with precision, foresight and an understanding of complex system effects. I explore this in my book ‘Cognitive Advantage’ and, in summary, it could result from the optimal orchestration of artificial and human intellects, alongside beneficial access to data, to yield an ability to act with deep sagacity (‘the trait of solid judgment and intelligent choices’).
I propose a simple* goal for AI: KK > UK + KU + UU
where KK is Known Knowns, UK is Unknown Knowns, KU is Known Unknowns, and UU is Unknown Unknowns.
What does this mean? Simply that if a company, or a country, deploys AI to continually discover new knowledge faster than its adversaries, and turns that knowledge (Known Knowns) into actionable steps for either a machine or a human, then that could yield a cognitive advantage (or, if you like, a decision advantage).
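As a rough illustration (entirely my own toy model, not part of the original formulation), the goal can be sketched as a loop in which each discovery cycle converts a fraction of the unknown categories into Known Knowns, while acting on knowledge surfaces fresh unknowns. The category names come from the equation above; the rates are illustrative assumptions.

```python
# Toy model of the knowledge-discovery goal KK > UK + KU + UU.
# The discovery_rate and churn_rate values are illustrative
# assumptions, not empirical parameters.

def advantage_reached(kk, uk, ku, uu):
    """The proposed goal: Known Knowns outweigh all other categories."""
    return kk > uk + ku + uu

def step(kk, uk, ku, uu, discovery_rate=0.3, churn_rate=0.1):
    """One cycle: AI-driven discovery converts a fraction of each
    unknown category into Known Knowns, while acting on Known Knowns
    creates new unknowns (the caveat in the footnote below)."""
    discovered = discovery_rate * (uk + ku + uu)
    churn = churn_rate * kk          # acting on KK surfaces fresh unknowns
    kk = kk + discovered - churn
    # Distribute the remaining and newly created unknowns evenly,
    # purely for simplicity.
    remaining = (1 - discovery_rate) * (uk + ku + uu) + churn
    uk = ku = uu = remaining / 3
    return kk, uk, ku, uu

kk, uk, ku, uu = 10.0, 30.0, 30.0, 30.0
for cycle in range(1, 21):
    kk, uk, ku, uu = step(kk, uk, ku, uu)
    if advantage_reached(kk, uk, ku, uu):
        print(f"Goal met after cycle {cycle}: "
              f"KK={kk:.1f} vs rest={uk + ku + uu:.1f}")
        break
```

Note that the churn term is what makes the goal a moving target: the faster the holder acts on its Known Knowns, the faster the right-hand side of the inequality is replenished.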
With such a capability, a competitor may decide to actively shape a competitive environment (e.g. our online and offline worlds) at a pace, and in a way, that its competitors can neither keep up with nor understand the meaning of. Compromising the cognitive abilities of competitors in this way may lead to a decline in their ability to act with any kind of accuracy or insight. After all, they are making sense of a world that the competitor with a cognitive advantage has created. This is likely to lead to poor decision-making, further reducing their ability to mount an effective response. In the limit, this could feed a regressive cycle in which competitors’ perception and reasoning continually decline until their decisions are no better than random chance.
As the ability of competitors to compete continues to decline, would this allow a single competitor – one that holds a cognitive advantage – to become a supercompetitor that dominates in an unassailable winner-takes-all outcome? What might this mean? What would the impact be? And what if it becomes impossible to regulate such organisations? A supercompetitor that does not wish to be understood may be impossible to regulate.
How would we know which companies, or countries, are already on (or have the potential to get on) a supercompetitor trajectory? How might we begin to prepare for such a possibility? Are there already supercompetitors amongst us, such as big tech firms like Microsoft or Google, enabled as they are by their strategic investments in world-class AI labs such as OpenAI and DeepMind? Or might a cognitive advantage emerge as a sovereign capability developed by the United States or China?
I am unsure whether we can prevent the rise of a supercompetitor. If such a state were to occur, we should hope that the supercompetitor is guided by strong ethical principles. But there is an inherent paradox in such a statement: an ethical supercompetitor is unlikely to want to be a supercompetitor, as it would deem that position unethical. But which is more unethical? Becoming a benevolent supercompetitor to prevent bad actors from becoming one? Or deciding that it is not a state any single organisation or country should occupy, and leading by example, at the risk that this leaves the field open to less ethical competitors?
* Ultimately, such an equation is not that helpful. When we act on Known Knowns we inevitably increase the number of Unknowns (on the right-hand side of the equation) that we then have to re-discover and turn back into Known Knowns. Nevertheless, the principle that this equation illustrates – using AI to maximise knowledge discovery and sagacity – does convey a motivating factor that could lead to the rise of a supercompetitor.