This interview with Stephan de Spiegeleire, Senior Scientist at The Hague Centre for Strategic Studies, is part of a series of discussions with the keynote speakers of An Interesting Afternoon – The world in 50 years and how do we get there? This event by the Dutch Future Society takes place on April 11, 2014 in Amsterdam. For registrations, follow this link.
How do you see the future in fifty years?
I personally expect a lot of positive changes to happen in the coming few decades. There remain enormous (self-imposed) inefficiencies in the world which we are finally starting to break down. The fact that so many opportunities for international trade arbitrage – most of them not a reflection of underlying endowments but of glaring political and regulatory inefficiencies – continue to exist is a perfect example of this. Technology and globalization are starting to alter that – slowly, painfully, but still quite decisively. We can in my opinion also expect major breakthroughs in both physical AND social technologies that will lead to quantum leaps in fields like education, health, and in many other spheres of private and public life, including even the (hopefully) ‘smarter’ use of armed force, something that we at HCSS are doing a lot of work on. So in my view many trends are going in the right direction. Of course history does not flow in linear fashion, and backlashes are likely, but the deeper tides of history clearly seem to be pushing in the direction of more efficiency and equity. We always have to remain on the lookout for the infamous Black Swan – but I am struck by how we typically see such low-likelihood/massive-impact events as negative; they can be, of course, but they can also be positive.
Which innovations would probably generate the biggest change?
I personally see the exponential growth in artificial intelligence as the development that is likely to have the biggest impact on who we are as human beings. We all have a hard time wrapping our minds around this. Kurzweil describes a point in the not-so-distant future where artificial intelligence overtakes human intelligence. He calls this moment in time the singularity, and the logic behind it is depressingly convincing – both deductively and empirically. But what would this singularity really mean for human beings? Will some of us decide to remain ‘homo sapiens’ and end up in some sort of reservation or zoo with fewer mental and physical abilities than the ‘others’ and with a much slower evolutionary development? Or are we bound to collectively evolve into a homo post-sapiens, a merger between human and machine? But what then remains of the ‘homo’ part? Who are ‘we’ in that scenario? Or would the entire evolutionary development of mankind just be subsumed in a post-‘homo’ entity, in which our species simply disappears, just like the Neanderthals disappeared? Maybe fifty years is a bit too short, but Kurzweil makes an extremely persuasive case, highlighting the exponential learning curve of technological innovation. The speed at which this goes may surprise us. I have to say that I hate the seeming inevitability of this, but I find few mental buoys to hold onto.
So I think in two different future time horizons. For the first one, pre-singularity, I am quite optimistic. This may be the period where we as human beings come close to realizing the true potential that is in us. The irony might be that at the very point in time where we finally start realizing our true human potential, we may also exhaust it and be ‘out-evolved’. With consequences that I think we all have a hard time even thinking about.
What can we do to anticipate the future and make it a better future? Which issues should we address?
Good old ‘homo sapiens’ remains at the heart of our current world, as individuals and as societies. Therefore, education – the further development of the intrinsic human potential – should in my view be at the heart of our efforts to improve our futures. I think our current educational system was appropriate for the industrial age but now requires radical change. To me the industrial age was the age of the line. We drew ‘lines’ around all sorts of things: in education around class groups, courses, around quantitative vs qualitative approaches, around scholarly fields, etc. Even the way we transmit knowledge in most fields is quite linear: you start with ‘a’, then move to ‘b’, etc. Now we are starting to see the contours of a new, more network-based educational paradigm with the individual learner at its heart and with technology allowing for knowledge absorption in a way that is much more organic to us as human beings. This is a primary challenge that both the private and the public sector have to take on. It would yield better educated people, with combinations of skillsets that better reflect who they really are. Some of these combinations we may currently not even find in any course list of any university. Such people would be able to deploy and develop themselves in a global market with low barriers to access, where their talents can be used to tackle any issue in coalitions of skill sets. This more modular way of collaboration will be much more agile than our current labor markets allow.
My number two priority would be ‘politics’. How do we aggregate the interests of the many? We see a lot of problems with our current form of nationally-based democratic political systems right now. Here too, we see a lot more ‘lines’ than we see ‘networks’ or ‘ecosystems’. That means the system is slow, not very agile, and not always an accurate reflection of networked societal interests. In the post-industrial age, I expect something more effective to emerge. Probably not through some grand engineering plan, but more through emergent design. The challenges of the world are likely to trigger various forms of bottom-up activism – and also better ways of aggregating these efforts. We already see this happening through various new forms of social action, but the global system is slow to respond. Eventually we will be forced to think of new solutions.
What does this ask of futurists?
Foresight can play a very useful role by ripping off the mental shackles of presentism and recentism that are often holding us back. We are too much driven by the present and the – usually very recent – past. Futurists can provide a counterweight for that by looking differently at the world, the biases that we hold about the future and the speed of change around us. I often say that we should abuse the future to enrich our discussions about the present. Some alternative solutions would not even be debatable if we put them in the present. But by projecting them a few decades into the future, they suddenly do become more ‘thinkable’. Which can be very useful.
But futurists – and I really don’t like that word – can of course also be very dangerous. I see two key dangers. The first one is the temptation to pretend we can really predict concrete developments so far out. It is a hard temptation to resist, especially since our customers often demand this from us. But selling ‘pseudo-certainties’ can be very harmful. The second danger is that futurists often claim to be dealing with ‘the future’, thereby presenting time as a linear concept with nice boundaries between the past, the present and the future, which is again neatly broken down in the short-term, the medium-term and the long-term. Most discussions about the future I have participated in were not at all about the future. Foresight is typically about (re-) imagining, re-framing today’s strategic option space and about designing a strategic portfolio of options that are robust against multiple futures. And I find that a more useful way of thinking about this type of work than anything related to forecasting or prediction.
Predictions are often wrong. What is a sensible way to say things about the future? How to navigate between confidence and uncertainty?
We do need to be careful to navigate between the Scylla of over-confidence and the Charybdis of too much uncertainty (although I personally see the former as much more dangerous). The right way to deal with this, in my view, is with a lot of humility, self-scepticism and intellectual honesty. This applies especially to experts who are continuously asked to become bolder in their expressions and who often get ‘trapped’ in defending their own previous predictions. Real deep experts tend to be much more aware of the true complexity of most societal challenges and the multiple ways in which they can develop. But such humility and honesty do require a great amount of resistance towards the various customers who want decisiveness and predictive statements. At least in the military world, which in my experience is the single most important ‘sector’ that has – for better or for worse – consistently taken strategic foresight quite seriously since the end of World War II, I have certainly seen an encouraging trend in that direction.