Decision-Making in Complex Dynamic and Evolutive Systems

December 7, 2020 | General

The year 2020 has surprised humans in many ways. From a pandemic and a global recession to sweeping geopolitical changes, humanity is standing in ambiguous times. However, not everything is uncertain. Throughout the year, emerging technologies such as artificial intelligence, robotics, the Internet of Things, and augmented/virtual reality, among others, have spearheaded innovation with a promising future. These technologies have shown that, despite the crisis, technology will transform the world. Not only do we have more access to information than ever before, but we also see a confluence of cyber, physical, and biological technologies that no longer exist only in labs, but impact us every day.

We are now confronting a complex network of interdependent global problems which we seem increasingly incapable of dealing with effectively at either the national or international level. Arguably, it is the very success of human intelligence that has ratcheted the complexity of the challenges we face up to a level that unaided human intelligence can no longer cope with. The convergence of nano-info-bio-cogno science and technologies could be the key to finding solutions to some of our most deep-seated problems.

We will focus on the advancements in Artificial Intelligence that will be of paramount importance in the years to come in enabling more effective decision-making in managing the many complex problems we face at every scale: from global climate change, collapsing ecosystems, international conflicts and extremism, through all the dimensions of our ever-growing cities, economies and governance that affect human well-being, to the implementation of the United Nations Sustainable Development Goals.


Human intelligence rests on billions of years of evolution from the earliest origins of life, and despite its undeniably unique nature within the biosphere, and the apparent gulf that distinguishes the human species from all others, it should nevertheless be seen as an extremum within a continuum. The unifying feature of all natural intelligence systems is that they have evolved under strong selection pressures to solve the problems of surviving and thriving sufficiently to reproduce better than their competitors. Unlike the evolution of faster speed, sharper teeth, more efficient energy harvesting and utilization, or better camouflage, all of which improve physical capabilities, the evolution of intelligence enables better choices to be made as to how and when to employ those capabilities, by processing relevant sensed and stored information. If the environment is challenging enough, whether through the prevalence of threats, the scarcity of necessary resources or through intense competition for them, then there is a high fitness pay-off for evolving both the necessary physical characteristics for sensing, processing and storing the relevant information, and the intelligence to exploit them.

From this perspective we can define intelligence as the ability to produce effective responses or courses of action that are solutions to complex problems—in other words, problems that are unlikely to be solved by random trial and error, and that therefore require the abilities to make finer and finer distinctions between more and more combinations of relevant factors and to process them so as to generate a good enough solution. Obviously, this becomes more difficult as the number of possible choices increases, and as the number of relevant factors and the consequence pathways multiply. Thus, complexity in the ecosystem environment generates selection pressure for effective adaptive responses to the complexity.


While the scale and urgency of the global and urban problems we face have certainly intensified, what we have since learned in the fields of complexity science, evolutionary psychology, brain and behavioral science, and artificial intelligence, suggests that we may be close to another tipping point where we could possibly drive the emergence of advanced artificial intelligence systems that can effectively support human decision-making in managing these challenges.

Artificial intelligence (AI) has subtly permeated our lives. Increasingly, individualized applications of AI are being developed by businesses and used by consumers to power human decision-making processes, reduce the time needed to complete everyday tasks, and automatically inform us of events and incidents in the social and political world around us. Few would argue that this class of technologies has not enhanced our life experiences and made our lives more connected, efficient, and enjoyable.

What is clear: AI-powered products and services have made it into nearly every aspect of our personal and professional lives in just a few years. And as AI solutions continue to emerge and converge, that pace of change will only continue to accelerate.

With such rapid progress, it is difficult to make assumptions about the future of AI. But instead of focusing on the unknown, we can examine what we know about AI, its current applications, and potential future impact.

Much like human intelligence, AI works by taking in large amounts of data, processing it through algorithms that have been adjusted by past experiences, and using the patterns found within that data to improve decision-making. What makes AI remarkable is the speed, accuracy, and endurance it brings to this human-like learning process. AI systems constantly tweak their understanding of their environment, updating their perspective of reality and the probability of their predictions without clinging to old ideas. The greatest fear about AI is the singularity (often associated with Artificial General Intelligence), a system capable of human-level thinking. According to some experts, the singularity also implies machine consciousness. Regardless of whether it is conscious or not, such a machine could continuously improve itself and reach far beyond our capabilities.

Considering that our intelligence is fixed while machine intelligence is growing, it is only a matter of time before machines surpass us, unless there is some hard limit to their intelligence. We have not encountered such a limit yet. Quantum computing in particular, though still an emerging technology, could help reduce computing costs after Moore’s law comes to an end. Quantum computing rests on evaluating many different states at the same time, whereas classical computers calculate one state at a time. This unique property could be used to efficiently train neural networks, currently the most popular AI architecture in commercial applications. AI algorithms running on stable quantum computers might have a chance to unlock the singularity. To be clear, the artificial intelligence technology we have today, known as narrow AI, is nowhere near achieving such a feat.

Today’s machine learning algorithms are, to a degree, capable of adapting their behavior to their environment. They tune their many parameters to the data collected from the real world, and as the world changes, they can be retrained on new information. For instance, the coronavirus pandemic disrupted many AI systems that had been trained on our normal behavior. Among them are facial recognition algorithms that can no longer detect faces because people are wearing masks. These algorithms can now retune their parameters by training on images of mask-wearing faces. Clearly, this level of adaptation is very limited compared to the broad capabilities of humans and higher-level animals, but it would be comparable to, say, trees that adapt by growing deeper roots when they can’t find water at the surface of the ground.

An ideal self-improving AI, however, would be one that could create totally new algorithms that would bring fundamental improvements. This recursive self-improvement would lead to an endless and accelerating cycle of ever-smarter AI. It could be the digital equivalent of the genetic mutations organisms go through over the span of many, many generations, though the AI would be able to perform it at a much faster pace.

Today, we have some mechanisms such as genetic algorithms and grid-search that can improve the non-trainable components of machine learning algorithms (hyperparameters). But the scope of change they can bring is very limited and still requires a degree of manual work from a human developer.
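As a toy illustration of how limited such mechanisms are, a grid search simply tries every combination of hyperparameters and keeps the best-scoring one. The sketch below is entirely hypothetical (the parameter names, grid values, and toy training function are invented for illustration), written in plain Python rather than any production tuning framework:

```python
import itertools

def train_and_score(lr, steps):
    """Toy 'training': gradient descent on f(x) = (x - 3)^2.
    Returns the final loss (lower is better)."""
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x - 3)   # derivative of (x - 3)^2
        x -= lr * grad       # one gradient step
    return (x - 3) ** 2      # final loss

# Hyperparameter grid: every combination is tried exhaustively.
grid = {"lr": [0.01, 0.1, 0.5], "steps": [10, 50]}

best_loss, best_params = float("inf"), None
for lr, steps in itertools.product(grid["lr"], grid["steps"]):
    loss = train_and_score(lr, steps)
    if loss < best_loss:
        best_loss, best_params = loss, (lr, steps)

print(best_params, best_loss)
```

Note what the search cannot do: it only selects among values a human enumerated in advance, and it never touches the learning algorithm itself, which is exactly the limitation described above.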

Recursive self-improvement, however, would give an AI the ability to replace the very algorithm it is running altogether, and this last capability is what is needed for the singularity to occur.


Decision-Making in Complex Dynamic and Evolutive Systems

Interdependence is a defining feature of complexity and has many challenging and interesting consequences. In particular, the network of interdependencies between different elements of the problem means that it cannot be successfully treated by dividing it into sub-problems that can be handled separately. Any attempt to do that creates more problems than it solves because of the interactions between the partial solutions.

Dynamical processes driving development of the situation often involve many positive and negative feedbacks, thus amplifying and suppressing different aspects of the situation, and resulting in highly non-linear dynamics. This means that relying on linear extrapolation of current conditions can lead to serious errors.
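A minimal sketch of this point, using the well-known logistic map (a system with coupled positive and negative feedback) purely as a hypothetical stand-in for a real situation: a straight-line extrapolation from the first two observations quickly diverges from the true non-linear trajectory.

```python
def logistic_map(x, r=3.9):
    """One step of the logistic map: growth (positive feedback)
    limited by crowding (negative feedback)."""
    return r * x * (1 - x)

# Simulate the true non-linear trajectory.
x = 0.2
trajectory = [x]
for _ in range(10):
    x = logistic_map(x)
    trajectory.append(x)

# Linear extrapolation from the first two observations only.
slope = trajectory[1] - trajectory[0]
linear_forecast = [trajectory[0] + slope * t for t in range(11)]

# The straight-line forecast diverges badly from the real dynamics:
# the true values stay between 0 and 1, the forecast does not.
errors = [abs(a - b) for a, b in zip(trajectory, linear_forecast)]
print(max(errors))
```

The true trajectory remains bounded while the linear forecast grows without limit, which is the kind of serious error the paragraph above warns about.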

There is no natural boundary that completely isolates a complex problem from the context it is embedded in. There is always some traffic of information, resources, and agents in and out of the situation which can bring about unexpected changes, and therefore the context cannot be excluded from attention.

Complex problems exist at multiple scales, with different agents, behaviors, and properties at each, but with interactions between scales. This includes both emergence, the appearance of complex structure and dynamics at larger scales as a result of smaller-scale phenomena, and its converse, top-down causation, whereby events or properties at a larger scale can alter what is happening at the smaller scales. In general, all the scales are important, there is no single right scale at which to act.

Interdependence implies multiple interacting causal and influence pathways leading to, and fanning out from, any event or property, so simple causality (one cause—one effect), or linear causal chains will not hold in general. Yet much of our cultural conditioning is predicated on a naïve view of linear causal chains. Focusing on singular or primary causes makes it more difficult to intervene effectively in complex systems and produce desired outcomes without attendant undesired ones—so-called side-effects or unintended consequences. Effective decision making requires the ability to develop sufficient understanding of the causal and influence network to engage with it effectively, neither oversimplifying it, nor becoming overwhelmed with unnecessary levels of detail.

Furthermore, such networks of interactions between contributing factors can produce emergent behaviors which are not readily attributable or intuitively anticipatable or comprehensible, implying unknown risks and unrecognized opportunities.

There are generally multiple interdependent goals in a complex problem, both positive and negative, poorly framed, often unrealistic or conflicting, vague or not explicitly stated, and stakeholders will often disagree on the weights to place on the different goals, or change their minds. Achieving sufficient high-level goal clarity to develop concrete goals for action is in itself a complex problem.

Complex situations generally contain many adaptive agents with complex relationships and shifting allegiances, and new behaviors and features continually arise. This means that approaches that worked in the past may no longer work, interventions that frustrate the intents of some agents will often simply stimulate them to find new ways to achieve them, and opportunities created by the inevitable new vulnerabilities that interventions create will be rapidly identified and exploited.

Many important aspects of complex problems are hidden, so there is inevitable uncertainty as to how the events and properties that are observable, are linked through causal and influence pathways, and therefore many hypotheses about them are possible. These cannot be easily distinguished based on the available evidence.


There are cognitive abilities that are essential for successfully tackling complex problems. One immediate conclusion that can be drawn is that there is a massive requirement for cognitive bandwidth—not only to keep all the relevant aspects at all the relevant scales in mind as one seeks to understand the nature of the problem and what may be possible to do, but even more challenging, to incorporate appropriate non-linear dynamics as trajectories in time are explored. Given the well-known limitations of human working memory, short-term memory, and attention span, this is an obvious area for advanced AI support to target.

But there is a more fundamental problem that needs to be addressed first: how to acquire the necessary relevant information about the composition, structure and dynamics of the complex problem and its context at all the necessary scales, and revise and update it as it evolves. This requires a stance of continuous learning, i.e., simultaneous sensing, testing, learning, and updating across all the dimensions and scales of the problem, and the ability to discover and access relevant sources of information. At their best, humans are okay at this, up to a point, but not at the sheer scale and tempo of what is required in real world complex problems which refuse to stand still while we catch up.

Moreover, there are both physiological factors such as the impacts of stress, fatigue and anxiety on cognitive performance, and particular features of the human brain, legacies of our evolutionary history, which compound the difficulties.

Because the human brain evolved to deal with the problems of surviving and thriving that our ancestors faced, modern humans are still equipped with the same heuristics, behavioral tendencies and biases that worked well enough in the distant past. These hardwired shortcuts based on rules of thumb, operating automatically below conscious awareness and so permitting very rapid adaptive responses to various simple conditions, enabled them to cope with the level of complexity that existed then—keeping track of a hundred or so individuals and their interactions, intents, and histories. But features relying on approximations that held true for dealing with common problems in past environments can morph into risky bugs in today’s highly interconnected and rapidly evolving complex situations.


How does AI need to evolve in order to better support more effective decision-making in managing complex dynamic and evolutive systems? Research in complex decision-making at the individual human level (understanding what constitutes more, and less, effective decision-making behaviors, and in particular the many pathways to failure in dealing with complex problems) informs a discussion about the potential for AI to help mitigate those failures and enable a more robust and adaptive (and therefore more effective) decision-making framework, calling for AI to move well beyond its current envelope of competencies.

There is a vast, unquantifiable reservoir of relevant knowledge, expertise and, hopefully, wisdom in the cohorts of those who currently strive to deal with these spiraling problems. If an AI system is to be successful, it needs to absorb what they know that cannot be itemized in databases. Human expert knowledge and AI must work together, learning the strengths and limits of each other’s capabilities, and evolving better ways to arrive at good decisions by evaluating and learning from the consequences of those decisions.

We suggest an expert-driven AI Decision Support System, containing many detailed components, plus representations of the interactions and interdependencies between the components. The AI core needs the ability to develop situational models of the complex problems to be managed, and as much of their context as necessary, and to evolve them in a real-time loop through predictive processing and updating, i.e., by monitoring relevant developments, using the current version of the model to predict expected consequences, comparing predictions to actual outcomes, and updating the models as a result of what is learned. This means that each model is an ‘open system’: its structure and composition can change as more is learned, and as the situation itself changes over time.
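A deliberately simplified sketch of such a predict-compare-update loop (all names and numbers below are hypothetical, not our actual system): the model predicts, observes an outcome, measures the prediction error, and revises itself, so it keeps tracking the situation even when the situation shifts.

```python
class AdaptiveModel:
    """Minimal predictive-processing loop: predict, observe,
    measure the surprise (prediction error), and update the model."""

    def __init__(self, estimate=0.0, learning_rate=0.3):
        self.estimate = estimate
        self.learning_rate = learning_rate

    def predict(self):
        return self.estimate

    def update(self, observation):
        error = observation - self.predict()         # compare prediction to outcome
        self.estimate += self.learning_rate * error  # revise the model
        return error

# The monitored quantity shifts mid-stream; the model tracks the change.
model = AdaptiveModel()
stream = [1.0] * 20 + [5.0] * 20   # the situation changes at step 20
for obs in stream:
    model.update(obs)

print(round(model.estimate, 2))
```

The ‘open system’ property described above would go further than this sketch: not just re-estimating parameters, but changing the model’s structure and composition as more is learned.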


We have developed and experimented with an ‘analog’ (proven-concept and evolved) model and are working hard on the research and development of a digital and intelligent Decision Support System, one that moves away from competing with humans and toward creating new cooperative partnerships with them, to extend our joint capability to manage the wicked problems that threaten us. A system that will not only show, by example, how a city ‘looks’ (designed and imagined), but also how it ‘works’ (the interconnectivity, interactions, multiple feedback loops, order effects, non-linear dynamics, behavior, etc.) and how it ‘lives’ (the perceptions and experiences of users and inhabitants). It will involve developing many new aspects of AI capability, but every new capability we create will help generate the next.

We are rushing into a future that we can barely imagine, but we need to look ahead with as much clarity as we can muster, embrace the present opportunity, and use it to face and guide toward a most preferable future.



