A Simple Theory of Complexity and Systemic Collapse

Posted on 24/04/2013


Hypothesis

Every system has an activation, minimum, and maximum input level of materials and energy. Definitions:

By system, I mean anything from a lightbulb to the US Interstate; ie any designed or analysable system.

By input, I mean those materials and energy necessary for the system's functioning.

By maximum input, I mean that level of input over which the system does not behave as designed, thus causing a failure of the system. This failure is generally catastrophic, but can also be a survivable operating failure, after which a reboot is possible.

By minimum input, I mean that level of input below which the system does not operate, thus causing a failure of the system. Typically, going below the minimum represents a survivable operating failure, after which a reboot is possible. However, dropping below the minimum input level can represent as catastrophic a failure as crossing over the maximum input level.

By activation input, I mean that level of input which is required to get the system operating in its designed fashion; for the most part, the activation input is somewhere in between the minimum and maximum input levels.

Additionally:

The input profile of a system is the above input levels taken as a dynamic whole.
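
Before moving on, here is a minimal sketch of these definitions in Python. The names (InputProfile, classify) and the numbers are my own inventions for illustration, not established terminology:

    from dataclasses import dataclass

    @dataclass
    class InputProfile:
        """The activation, minimum, and maximum input levels of a system,
        taken as a whole."""
        minimum: float     # below this, the system stops operating
        activation: float  # required to get the system operating as designed
        maximum: float     # above this, the system fails, often catastrophically

        def classify(self, level: float) -> str:
            """How a system with this profile responds to a given input level."""
            if level > self.maximum:
                return "failure: over maximum input"
            if level < self.minimum:
                return "failure: below minimum input"
            if level >= self.activation:
                return "operating (activation reached)"
            return "operating only if already started"

    profile = InputProfile(minimum=10.0, activation=40.0, maximum=100.0)
    print(profile.classify(150.0))  # failure: over maximum input
    print(profile.classify(25.0))   # operating only if already started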

Elaboration — Complexity

For a simple example of a system, let's take an incandescent lightbulb. Post-manufacture, the only input required is electricity. The lightbulb can neither be maintained nor repaired. Please temporarily ignore whence the electricity comes; I treat it as an inexhaustible externality.

Once the lightbulb is installed, it requires a certain amount of electricity to cause the filament to glow, which is the activation input. Further input of electricity increases the glow of the filament, until the maximum input level is reached. The filament is structurally unable to handle input over the maximum, which causes a catastrophic failure: the filament breaks, and the lightbulb is now rubbish.

On the flip side, once the filament is glowing, the electric input can be reduced below the activation input, possibly substantially. There is an input level, however, below which the filament will no longer glow. This represents a survivable operating failure of the lightbulb's intended function. A ‘reboot’ of the system can be achieved by reintroducing the activation input.
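
The bulb's whole life cycle can be sketched in a few lines of the same illustrative Python (the wattage figures are invented):

    class Lightbulb:
        """Toy incandescent bulb: needs the activation input to start glowing,
        keeps glowing down to the minimum input, and is destroyed over the maximum."""

        MINIMUM, ACTIVATION, MAXIMUM = 5.0, 20.0, 60.0  # watts; illustrative only

        def __init__(self):
            self.glowing = False
            self.destroyed = False

        def apply_input(self, watts):
            if self.destroyed:
                return "rubbish (catastrophic failure is permanent)"
            if watts > self.MAXIMUM:
                self.destroyed = True   # filament breaks: catastrophic failure
                self.glowing = False
                return "filament broken"
            if watts < self.MINIMUM:
                self.glowing = False    # survivable operating failure
                return "dark (reboot possible)"
            if watts >= self.ACTIVATION:
                self.glowing = True     # activation input reached
            return "glowing" if self.glowing else "dark (never activated)"

    bulb = Lightbulb()
    print(bulb.apply_input(25.0))  # glowing: activation input applied
    print(bulb.apply_input(8.0))   # glowing: below activation, above minimum
    print(bulb.apply_input(2.0))   # dark (reboot possible): below minimum
    print(bulb.apply_input(25.0))  # glowing: the 'reboot'
    print(bulb.apply_input(90.0))  # filament broken: over maximum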

It is my thesis that all systems operate in generally the above fashion, regardless of whether or not the system has integrated failsafes. Systems can indeed have failsafes, which is to say internal mechanisms which prevent the system from going over the maximum input level. However, failsafes can only prevent systemic failure from anticipated overloads; non-anticipated overloads are not prevented by failsafes. Failsafes are not typically designed to deal with minimum input levels, though examples do exist in practice. I will write further about failsafes later in this post.

Let's return to the incandescent lightbulb. A failsafe can be installed in order to prevent an overload of electricity from the socket into the filament. However, this failsafe would not prevent a massive surge of electricity, say from a lightning strike, from overloading the system. The failsafe itself is a system, which has discoverable maximum, minimum, and activation input levels; ie its own input profile. A lightbulb can also have a failsafe against electricity dropping below the minimum input level. This is commonly seen in emergency lighting, which has a backup power supply in case of a blackout. This failsafe also has its own input profile.
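
A sketch of that point, in deliberately crude illustrative Python (real fuses fail open rather than passing current, but the point here is only that the failsafe has its own input profile and handles only anticipated overloads):

    class Fuse:
        """Toy failsafe: clamps anticipated overloads, but is itself a system
        with its own input profile, and fails under unanticipated surges."""

        def __init__(self, clamp, own_maximum):
            self.clamp = clamp              # anticipated overloads are cut to this level
            self.own_maximum = own_maximum  # the failsafe's own maximum input level
            self.blown = False

        def pass_through(self, watts):
            if watts > self.own_maximum:
                self.blown = True  # unanticipated surge: the failsafe itself fails
                return watts       # ...and the full surge reaches the protected system
            return min(watts, self.clamp)

    fuse = Fuse(clamp=50.0, own_maximum=500.0)
    print(fuse.pass_through(90.0))    # 50.0: anticipated overload, clamped
    print(fuse.pass_through(2000.0))  # 2000.0: lightning strike passes straight through
    print(fuse.blown)                 # True: the failsafe has its own failure point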

Failsafes obviously take what was a very simple system — a single incandescent lightbulb — and increase the complexity of the system. Complexity itself becomes an input, because theoretically a given system can always have a new failsafe added to it, or the system can be integrated into a larger system. One emergency lightbulb becomes emergency lighting for an entire apartment building, and so forth. It is a truism that more complex systems have more and more integrated failsafes, in addition to the system(s) which the failsafes are designed to protect. Therefore, the failsafes themselves are part of the complexity of the entire integrated system. Each subsystem, if you will, has its own input profile.

The failure point of the entire integrated system is the interaction of the input profiles of the sub-systems, as well as the input profile of the system overall. It is the multifaceted interaction of these input profiles which creates complexity. Leveraging upon complexity to create more complex systems (say, integrating one complex system into a larger complex system) creates a geometric increase in the manifold possible interactions of all the input profiles of the entire integrated system.

To put it another way: a lightbulb might not turn on because 1) the bulb is burnt out; 2) the socket is faulty; 3) the circuit or fuse is blown; 4) you didn't pay your electric bill; 5) the lines to your house have been knocked down; 6) the lines to your nearest transformer have been knocked down; 7) the transformer station has been struck by lightning; 8) the transmission wires have been knocked down; 9) everyone just turned on all their air conditioners at once and there's no more juice in the system; 10) the grid is down; et cetera, et cetera, et cetera…
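
The same point as invented arithmetic: chain together even very reliable subsystems, and the odds of the whole working shrink with every link (the figures below are made up):

    # Each link the lit bulb depends on, with invented odds of working at any moment.
    chain = {
        "bulb filament": 0.99,
        "socket": 0.999,
        "house circuit and fuse": 0.995,
        "bill paid, service on": 0.99,
        "lines to the house": 0.999,
        "local transformer": 0.999,
        "transmission wires": 0.999,
        "grid has spare capacity": 0.99,
    }

    # The bulb lights only if every link holds, so availabilities multiply;
    # every subsystem added, however reliable, pushes the product down.
    availability = 1.0
    for subsystem, odds in chain.items():
        availability *= odds

    print(f"{len(chain)} links -> bulb lights {availability:.1%} of the time")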

Hypothesis Mark 2

Every system has maximum, minimum, and activation input levels of materials and energy. Complex systems arise from the integration of different systems, or sub-systems such as failsafes. Systemic failure arises from the interaction of all input profiles within the entire given system.

Defining Complexity & Highly Complex Systems

At this point, I'd like to spend some time defining what complexity actually is. This will lead into describing a system which has complexified to the point where it needs complexity as an input.

As distinct from materials and energy, complexity is an input of human beings; more precisely, human beings with specialised knowledge or skills geared toward the management or design of systems. In other words, management. The creation of complexity inputs requires materials and energy, not to mention the time necessary to train people with the specialised skills or knowledge needed. It's very important to note that the people in question are not the inputs, but rather the knowledge or skills they carry. Hence, when complexity becomes a needed input of a system, it is distinct from materials and energy: it is a derivative of materials and energy, and cannot be created without them; however, complexity is not in and of itself materials and energy.

A system which needs no inputs, or only very limited amounts of inputs, can be considered a simple system. Many simple systems can actually produce their own inputs, or a large portion thereof. A garden is a good example: the results of its production can be recycled back into the system, thus reducing or even eliminating its need for additional inputs. This represents the most resilient and sustainable variety of system.

As the need for inputs grows, the complexity of the system increases geometrically. However, so long as the complexity of the system is such that it can be managed internally by the system, it is merely a complex system. An example is a farm which requires inputs of fertiliser, which is to say petroleum. The farmer or farmers, being internal to the system, can themselves manage the inputs, and do not require inputs of complexity to assist in the management of said inputs.

Once the inputs to a system exceed its ability to manage them internally, the system has reached the level of complexity which acts as an activation input of complexity. Therefore, the system develops a minimum input level of complexity, without which it would cease to operate. Maintaining the US Interstate, for example, requires a minimum input level of complexity; the same can be said of oil wells, nuclear reactors, and indeed modern automobiles. Without a certain level of complexity input, these highly complex systems fail. Depending on the nature of the system in question, the minimum input failure might be a survivable operating failure, or it might be a catastrophic failure; the difference depends upon the other inputs which the system needs to remain functioning, as well as the system itself.

The system cannot survive without inputs of complexity, because it needs them to maintain the flow and usage of its other inputs. Highly complex systems are the most dependent upon external inputs. This dependency makes them the least resilient style of system design, and therefore the most prone to failure from interruptions of inputs.

Because of the geometric nature of the complexity of integrated input profiles, highly complex systems eventually reach a very interesting point: the maximum input level of complexity itself. Which is to say, a highly complex system can reach a point where no more subsystems can be added to it, because the system as it stands is beyond analysis. Being unable to analyse the system as it stands, it is impossible to identify where further sub-systems can be added. Once the maximum input of new subsystems has been exceeded, the entire system in question has reached failure. Just as a lightbulb can receive an overload of electricity, a highly complex system can receive an overload of complexity.

Elaboration — Limits to Complexity

As complexity inputs increase, the minimum input level of complexity, materials, and energy rises in concert. The maximum input level rises as well. However, because a given system requires its inputs to become more tightly managed as it becomes more complex, the range between the maximum and minimum input levels narrows. The narrowing band between maximum and minimum input levels represents a narrowing of operational parameters, which results in increasing vulnerability to input interruption. This is why increasing complexity makes a given system less, not more, resilient in the face of subsystem failures. Assuming that two systems, one more complex than the other, share an identical subsystem, the failure of that subsystem is a geometrically higher risk to the more complex system than to the lesser.

Theorising a limitless ability to expand materials and energy inputs, complexity inputs would eventually cause the minimum input level to become exactly the same as the maximum input level. This is inherently a self-destructive situation, because a system without any operational tolerance whatsoever can be caused to fail by so much as a tiny delay in a given input. Paradoxically, the system would still require continuing inputs of complexity, which would cause it to self-destruct, because the minimum input level would become greater than the maximum. In other words, it would take more inputs to keep the system running than the system is able to manage. This is theoretical, because materials and energy are not limitless inputs; however, it is an important demonstration of the limits of complexity as a tool.
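
A worked sketch of the narrowing band and its eventual inversion (the starting levels and growth rates are invented purely for illustration):

    # Invented growth rates: as complexity rises one step, the minimum input
    # level grows by x1.6 while the maximum grows by only x1.2, so the
    # operational band narrows and finally inverts.
    minimum, maximum = 100.0, 1000.0

    for n in range(11):
        band = maximum - minimum
        status = "operable" if band > 0 else "min exceeds max: self-destruction"
        print(f"complexity {n:2d}: min={minimum:8.0f} max={maximum:8.0f} band={band:8.0f} {status}")
        minimum *= 1.6
        maximum *= 1.2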

Moving on to more concrete limits; specifically, the finite supply of materials and energy inputs.

Inputs of complexity themselves require complex systems to produce the necessary complexity. Nuclear power plants require nuclear engineers, who in turn require universities and student loans… et cetera. These complexity inputs are, as mentioned above, in addition to the other inputs, such as materials and energy. Complexity is an input with a theoretical limit; however, materials and energy are all finite, and therefore are inputs with very real and hard limits.

Increases of complexity axiomatically require geometrically more materials and energy, because the nature of increases in complexity is geometric. Regardless of the theoretical limit on complexity, as discussed above, the natural limits to materials and energy mean that additional inputs of complexity become impossible once complexity has surpassed the available supply of materials and energy. Theoretically, a given system could become far more complex (say, if modelled in a computer simulation which ignored the geometric rise in the need for other inputs); however, in reality the physical limits of materials and energy will cause a given system to undergo failure as if the theoretical limit to complexity had been reached. This failure is caused by a minimum input level exceeding the ability of other systems to provide materials and energy, which includes Planet Earth herself.
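
As a worked example (every number invented): let demand for materials and energy double with each level of complexity, against a fixed recoverable supply; the crossover arrives quickly, long before any theoretical limit on complexity:

    SUPPLY = 1_000_000.0  # invented: fixed recoverable materials and energy per period

    demand, level = 1_000.0, 0  # invented: input demand at complexity level 0
    while demand <= SUPPLY:
        demand *= 2.0  # geometric rise in required inputs per complexity level
        level += 1

    print(f"failure at complexity level {level}: "
          f"needs {demand:,.0f} against a supply of {SUPPLY:,.0f}")
    # A simulation free to ignore the supply constraint could keep complexifying;
    # the physical system fails here, as if the theoretical limit had been reached.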

Highly complex systems which are encountering the finite limit on inputs will not decomplexify, because that would bring complexity below the system’s minimum input level and cause failure. Rather, a given system will attempt to 1) increase its own complexity; 2) absorb other systems which are competing for the same limited inputs; and 3) enclose sources of inputs.

Action 1 is ironic, because increases in complexity bring about geometric increases in minimum input levels, especially for additional supplies of complexity. Action 2 is similar, but on a grander scale, in that integration of highly complex systems represents massive increases in minimum input levels. It is distinct from Action 1, because the system in question does not regard integration for reasons of maintaining input supplies as an increase in complexity. Action 3 does not necessarily mandate an increase of complexity, but it will spark increases of complexity in competing systems, and drive Actions 1 and 2 all the more aggressively. Additionally, it becomes a game of musical chairs, in that the finite and dwindling supply of materials and energy will cause more and more systems to fail.

Axiomatically, this puts all highly complex systems on a path toward either total integration or mutual strangulation. Perversely, the actions which highly complex systems take to preserve their inputs are precisely the ones which drive these systems toward their theoretical complexity failure point. Complexity, therefore, becomes inherently destructive once it passes a certain threshold, which likely is the activation level for complexity. Indeed, complexity could be best seen as a one-way trip to failure.

In these circumstances, more complex systems would seem to have advantages over less complex systems, due to their ability to wield larger inputs of complexity, materials, and energy. This is true only to a degree, in that more complex systems can indeed readily absorb or choke less complex systems. However, those systems which have not crossed over the activation input level for complexity will be immune, or relatively immune, to absorption or strangulation. This is because these systems are able to directly utilise materials and energy without the need for complexity inputs; which is to say, they do not rely very heavily upon other systems (eg oil companies relying upon universities for petroleum engineers, et cetera), and are therefore less vulnerable to such attacks. Additionally, they have a geometrically broader gap between maximum and minimum input levels, which creates a far greater tolerance for variation in materials and energy inputs.

Hypothesis Mark 3

Every system has maximum, minimum, and activation input levels of materials and energy. Complex systems arise from the integration of different systems, or sub-systems. Complexity itself is created by the interaction of all input profiles within the entire given system; systemic failure points arise from that same interaction. Complexity grows geometrically as additional systems are integrated, which requires geometric increases in input levels.

Some systems become sufficiently complex as to need complexity as an input, in addition to materials and energy. This creates a very high degree of interdependence between the intertwined, highly complex systems. Additionally, these intertwined systems share many of the same input needs; due to natural limits on materials and energy (and therefore complexity as an input) this puts these systems in a competitive relationship. Such systems seek to absorb competitors, out-compete competitors by aggressive complexification, and starve out competitors by enclosing inputs.

On the other hand, some systems are capable of internally producing some percentage of needed inputs. This gives these systems a degree of independence and resilience which is in proportion to the percentage of internally-produced inputs. More complex systems will attempt to ‘unlock’ those internally-produced inputs by attacking the less complex systems, so as to increase the total available supply of inputs.

As complexity increases, a theoretical failure point is approached, wherein the given system will require more inputs than it is able to structurally handle. Natural limits to materials and energy cause actual failure to occur at far lower levels of complexity than this theoretical point of complexity overload.

Elaboration – Natural Limits

The idea that materials and energy aren’t absolutely finite is delusional. Have a very nice day.

Moving on. Due to natural limits, inputs have an absolute and reasonably predictable maximum availability. The total supply of the resources necessary to create materials and energy (and, by extension, complexity) is recoverable only to the point where gains in materials and energy exceed costs in materials and energy. Additionally, systems which cannot produce their own inputs will attempt to starve systems which can produce some percentage of their own inputs. The best example of this is how globalisation attacks local economies in an attempt to ‘unlock’ the inputs which local economies produce for themselves. The impoverishment of local economies over the past century can be explained quite simply by the usurpation of internally-generated inputs by global highly complex systems.

The maximum supply of inputs (peak inputs, if you will) will only be completely knowable after the total demand of all input-consuming systems exceeds the availability of said inputs. At this point, highly complex systems will begin to fail, in order of their level of complexity. Because the band between minimum and maximum inputs narrows as complexity increases, the systems most vulnerable to input starvation are the most complexified; these are therefore the first systems to fail when subjected to input starvation. Coming down the bell curve of peak inputs will see a cascading failure of less and less complex systems, until eventually the only systems left standing are those which produce some proportion of their own inputs internally.
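
A sketch of the cascade, in which the systems, the figures, and the allocation rule are all my own inventions for illustration:

    # Invented systems, each with the minimum external input it needs per period.
    # More complex systems need more; the garden needs little because it
    # produces most of its own inputs internally.
    systems = [
        ("global logistics network", 900.0),
        ("nuclear power plant", 700.0),
        ("industrial farm", 400.0),
        ("market garden (partly self-supplying)", 50.0),
    ]

    # Down-leg of peak inputs: the total available supply shrinks each period.
    for supply in (2100.0, 1700.0, 1000.0, 300.0):
        remaining = supply
        survivors = []
        # Simplifying rule (my own): the least input-hungry systems are served
        # first, since their broad tolerances let them run on what is available.
        for name, need in sorted(systems, key=lambda s: s[1]):
            if need <= remaining:
                survivors.append(name)
                remaining -= need
        print(f"supply {supply:6.0f}: {', '.join(survivors)}")

Run this and the most complex system falls off first, then the next, until only the partial self-producer remains.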

The complex and highly complex systems which suffer from input starvation will structurally fail due to an input or inputs passing below the minimum input level. It would seem that these systems could be reactivated by an application of inputs up to the activation level of said systems. However, due to the irreversible nature of peak inputs, the inputs which created the system on the up-leg of the bell curve no longer exist. The systems nominally suffered a survivable operating failure; however, they have actually had a catastrophic failure from which there is no recovery. Natural limits to inputs make it impossible to reboot the systems, which means that the systems cannot be decomplexified, even if that were possible. There are no failsafes against natural limits within highly complex systems.

Rather, systems which fail due to permanent input starvation will receive no further inputs whatsoever, simply because the systems have dropped below minimum input levels. These systems will simply decay in place, entropy attacking the most inputs-intensive subsystems first. The usefulness of the failed highly complex systems to those surviving systems is questionable, as the very form of the failed systems made them fail in the first place.

During the catastrophic failure of highly complex systems, those systems which can internally produce needed inputs will gain a surprising ascendancy. Because inputs are internalised, these systems are resilient in the face of complexity collapse, and as the highly complex systems are starved of inputs, they will be able to enclose further inputs of their own. The process of more complex systems attacking less complex systems will cease, replaced by less complex systems starving more complex systems of needed inputs, due to their superior ability to manage inputs without the need for inputs of complexity.

Hypothesis Mark 4, which is really just an expanded Mark 3
(The first three paragraphs restate Mark 3; the new material follows them.)

Every system has maximum, minimum, and activation input levels of materials and energy. Complex systems arise from the integration of different systems, or subsystems. Complexity itself is created by the interaction of all input profiles within the entire given system; systemic failure points arise from that same interaction. Complexity grows geometrically as additional systems are integrated, which requires geometric increases in input levels.

Some systems become sufficiently complex as to need complexity as an input, in addition to materials and energy. This creates a very high degree of interdependence between the intertwined, highly complex systems. Additionally, these intertwined systems share many of the same input needs; due to natural limits on materials and energy (and therefore complexity as an input) this puts these systems in a competitive relationship. Such systems seek to absorb competitors, out-compete competitors by aggressive complexification, and starve out competitors by enclosing inputs.

On the other hand, some systems are capable of internally producing some percentage of needed inputs. This gives these systems a degree of independence and resilience which is in proportion to the percentage of internally-produced inputs. More complex systems will attempt to ‘unlock’ those internally-produced inputs by attacking the less complex systems, so as to increase the total available supply of inputs.

As complexity increases, a theoretical failure point is approached, wherein the given system will require more inputs than it is able to structurally handle. However, due to natural limits of materials and energy — as in, peak inputs — this failure point occurs at far lower levels of complexity than the theoretical point of complexity overload. Highly complex systems working in concert will exceed the total available supply of inputs, and will begin to fail. The most complex systems will fail first, due to their low tolerance to variations in input levels. Cascading failure will proceed in order of complexity, as the shrinking supply of inputs constantly drops below the total needed supply of inputs to keep all extant systems in operation.

Although nominally these systems suffered a survivable operating failure, in reality they have suffered catastrophic failure. The failed systems cannot be rebooted, because the inputs necessary to approach the activation input levels are no longer available to all the systems which work in concert to supply each other with needed inputs. These failed systems will remain permanently offline, and will decay in order of internal complexity, the most inputs-intensive subsystems succumbing first.

Those systems which internally produce needed inputs will be better situated to survive peak inputs, in direct proportion to their ability to internally satisfy input needs. The more a given system can internally produce its own inputs, the more resilient it will be in the face of the shrinking supply of inputs, and the more it can leverage inputs acquired externally. This represents a competitive advantage, and will facilitate an accelerated starvation of more complex systems due to superior utilisation of available inputs.

Posted in: Analysis