Computer systems, the World Wide Web, and millions of humans form a complex, symbiotic relationship that dramatically increases our native problem-solving ability. Over the coming years, further technological advances will dramatically boost the efficacy of each of these elements, giving rise to exponential improvements. Overall, we will be a lot smarter. Some call this 'IA' – intelligence augmentation; others, 'SI' – super-intelligence. For clarity, I prefer to reserve 'SI' for non-biological entities.
Technology will continue to improve intelligence along three mutually supportive – and increasingly integrated – axes:
While these three basic elements are tightly coupled, it is important to distinguish between individual augmented human intelligence and aggregate cognitive ability. From a personal perspective, it is crucial to most of us that the superior intelligence is employed towards our goals – usually, our long-term survival and flourishing. It is important that the extra intelligence is essentially ours – an extension of our selves, a tool. Super-intelligence would be of little value to us were it directed at purposes tangential or opposed to ours. So, who decides what the SI is applied to? Who controls it?
It seems that, initially, the individuals or groups who utilize it – those who are connected and able to use it – are the ones largely determining its use. Yes, the overall system has its own complex dynamics (both intended and unintended design 'features') – some people call this its ethics. Given this dynamic, however, it is the users who are in charge.
What could change this locus of control? Could the ability of any other element streak dramatically ahead and overshadow human purpose?
Some have suggested that the Net as a whole could develop its own explicit goals or values – values distinct from those of its users. Driven by its massive collective processing ability, it would then act to achieve these goals. Without going into detail here, this seems extremely unlikely to me. I do, however, predict that purpose-designed systems with general intelligence will indeed stage a 'hard take-off' that biological beings will be unable to match.
Today, almost all 'AI' development focuses on specialized ability: factory work, speech recognition, face recognition, chess, investing, data mining, etc. Only a tiny fraction of resources is aimed at developing general intelligence in machines – the underlying core abilities that allow us to learn any and all of the specialized skills. This 'general' ability is in fact the exquisitely balanced interaction of very specific cognitive elements, including such things as focus & selection, short- and long-term memory, induction & concept formation, and, crucially, meta-cognition. Such specific requirements are very unlikely to come together in the Internet as a whole, or in any specialized design. General intelligence is a specialty of its own.
General artificial intelligence – 'Real AI' – focuses on learning rather than knowledge. Knowledge is an effect of intelligence, not its cause.
In addition to being optimized for skill and knowledge acquisition and integration, the system can also be designed to foster self-improvement – intelligence bootstrapping. The most successful designs are likely to be such 'Seed AIs'. Naturally, the system can also have full access to all of the Web's facilities, plus substantial (distributed) human learning assistance.
A design dedicated to general intelligence will have a very different dynamic from the scattered, uncoordinated, goal-less Net infrastructure. It will not suffer the massive dilution of purpose, duplication of processing, or design decisions made by committees of thousands. Many of these attributes are, of course, the very strengths of the Internet – for its purpose – but they are not optimal for Real AI.
Once a Seed AI achieves (roughly) human-level intelligence, it will quickly, and dramatically, outpace human augmentation (see: Why Machines will become Hyper-Intelligent before Humans do). Moreover, as the SI progresses, it will have less and less need for human input and skills.
What goals will such non-human SI have? Indeed, will it have any goals of its own? At this stage, these seem to be open questions.
Can we not use the SI's intelligence to uplift us to its own level? A few considerations: