
The Governmentality of Risk Management and Resilience in the Face of Crises

The following are notes taken at a seminar at KTH with Ash Amin from sometime last year. Most ideas are his, and at the moment it is hard to distinguish where they end and mine begin… I should have posted it long ago, but it was reactualized lately, partly after I attended a meeting for Transition Gothenburg (…and partly by watching the Discovery series “Doomsday Preppers”).

Life becomes catastrophe management

There is a new mentality that has become prevalent in political and public culture over the last 10-15 years. One in which risk and crisis are not something that should or can be avoided and planned for, but something one (and that includes us all) must be prepared for and for which our systems must be resilient. Life is no longer about building on top of a solid foundation but about constantly evading and recovering from crisis. The only stability possible is a prototyping stability that is constantly building and rebuilding. Building becomes a process rather than a stable product (Nigel Thrift, Out of Order).

Prediction becomes more difficult

Design involving speculation and resilience becomes necessary because complexity does not allow you to predict the direction of events. Speculation (as the creation of scenarios) can be used to highlight directions and make them present (rather than making huge stabilizing projects). Speculation is therefore not a model of a coming “real” project, but a beginning that can expand contagiously with each iteration. Local experiments start as tests or concepts and can become methods that are copied and expanded by others.

The problem of prediction is not only one of complexity but also one of information certainty. It is a dual problem. Not only are there so much information and so many interconnected events that prediction becomes difficult, the certainty of the information at hand is also dubious due to multiple or even unknown sources. However, both causes should yield the same response: distributed resilience. Interconnections of events produce surprise. Even information that was reliable can be dangerous to act on, since the conditions might have changed between the gathering of the information and the act. This is a problem that the US army has wrestled with in Afghanistan. In their theoretical reports, design thinking has come to the forefront of how to tackle this problem. This design thinking involves the use of probing prototypes, several iterations of design acts, and a distributed, networked knowledge process. It is never assumed that the problem and the situation have been understood properly. The complexity of the design situation forces the creativity, information generation and action to happen at the edges. For the US army this translates into the soldier becoming an entrepreneur-soldier and knowledge broker who has to act and analyse on his own, creating a system of distributed knowledge and action. Acting in a situation of uncertainty and risk requires one to act by divergence rather than by focusing, in order to cover larger areas of possible outcomes. Resilience is more important than being right and efficient.

Hacktivism as plugging in becomes difficult

If hacking is traditionally understood as plugging in to systems and using the force of the system against itself (see Abstract Hacktivism), this becomes problematic in a situation of high uncertainty. It becomes difficult to know how to use the energy of what you plug in to, since that system is not a stable and predictable one. Hacktivists therefore have to plug in horizontally, to others they are constantly communicating with, so that they get information about system changes, rather than plugging in to black boxes that can’t be trusted. The only reason plugging into black boxes used to work was because the inputs and outputs of those systems remained in place. Any changes in that system could not have been predicted by the hackers. Today it’s hard to plug in because what you get as output can change quickly, and if you relied on that for your input you are in trouble. Plugging in horizontally, on the other hand, allows you to get notified of system changes and even to request them. This is called co-operation.

The same logic applies to the commercial world. Companies used to create predictable markets and desires. Now the “prosumers” (producer-consumers) create these themselves. The companies therefore rely more on selling the prosumers themselves rather than creating fixed products for them. This is a much more resilient strategy than having to invest resources and fixed infrastructure in product development that might well be outdated by the time it is ready to launch.

Flow control as a new governmentality

The latest security paradigm is flow control, where good flows are turned up and bad flows shut down. It is a governmentality that observes and regulates rather than dictates and plans. In other words, completely in line with Deleuze’s society of control. “We all need to act together!” This becomes the primary call. Everyone needs to participate and generate flows, while the role of the government is to monitor, select, filter and amplify them. Choose the right flows and you will be encouraged, choose the wrong flows and you will be fought. Which flows are considered good or bad can only be decided after the fact; all must be enabled and actualized before the filtering can happen.

Brief history of risk management from the post-war period to 9/11

In the immediate post-war period we see the rise of a state whose main goal is to create a safe, predictable and controllable future. The main task is to control war-driven technologies (nuclear, chemical, biological) and to keep war to a known and defined theatre of war. Three aspects of risk emerge in the post-war period:

  • Calculations of risk.
    Risk is known, upcoming risks are predictable.

  • Insurance against risk.
    Risk can be avoided or at least covered.

  • Management of risk by experts.
    There is always an action plan for every catastrophe.

Today, roughly since 9/11, these have changed:

  • Failing insurance
    Insurance can’t, or doesn’t want to, cover the costs and replace what is damaged in huge disasters.

  • Lack of trust in experts
    Due to a more participatory media and knowledge landscape.

  • Public discourse of risk
    There is an awareness of global risks: environmental risks, viruses, terrorism. Everyone is an actor in these risk scenarios (as opposed to the threat of nuclear war).

  • Individualisation of risk management
    Everyone has to manage their own life risks and take their own risks. Everyone is responsible for calculating risks, taking risks in order to advance, and facing the consequences in the event of failure. Individuals must tackle risk by taking risks (even the meta-risk of deciding which risks to prepare for).

Risk calculation today

Risk calculation today is not based on a rational calculation of the probabilities that certain scenarios will become real. Instead, risk calculation is based on absolutely contingent scenarios of the “what if” kind. They are not forecasting models of likely futures but scenarios built on what happens if something happens. The temporal dimension of forecasting has given way to the spatial logic of scenarios.

Citizens in today’s risk management become post-catastrophe builders. They are not only first responders to a crisis, or expected to function pre-emptively; citizens must also form the communities that reconstruct a disaster area (see Detroit). It is not expected that the state will be able to take responsibility for the reconstruction process.

The role of experts is changing. Instead of providing solutions, the role of the expert becomes one of re-engineering expectations: making people understand that they always have to be ready, that they have to learn to adapt to changing conditions, that they can’t expect infinite growth. Basically, playing down expectations of the future. The expert creates a constant awareness of risk and must handle and create controversies (in a Latourian sense) in order to manage expectations and readiness/preparedness. The expert must make future scenarios speak (again in a Latourian sense) today. Expertise becomes sense-making rather than fixed knowledge. It is impossible to locally store knowledge about the vast number of different crises that can hit, so this knowledge has to be stored in networks and searched for during the unfolding of the event.

Civil defence changes from command and control to community resilience. What is encouraged in a situation of crisis is informal citizen action, “hacking”, rather than citizens waiting for an expert response. In the recent British floods [*I don’t know exactly which floods this refers to, possibly an event in the beginning of 2011, but flooding has been a common and debated problem in Britain in recent years], the civil defence authorities praised the people who spontaneously built flood gates while ridiculing the people whose first reaction was to call the authorities and ask what to do. From the perspective of civil defence, individuals and groups must become resilient and act in the event of catastrophe.

Research approaches

Several positive aspects of this development have been presented in research, which has highlighted the turn to democratic openness, deliberative communities and self-organisation instead of central expert authority (e.g. Solnit 2009, Jasanoff 2010, Callon 2001). For Callon, the only way forward is to involve everyone in risk management and make risk present, or else it will catch everyone by surprise and allow for populist responses. This is close to the Latourian parliament of things. Solnit, looking at post-disaster research, says that inventiveness and altruism kick in during and after disaster. We do not get an all-against-all barbaric situation. This research, however, only looks at the first period during and after a disaster. She also finds that everywhere a disaster has been managed in a top-down fashion, it has failed. These have been the worst handlings of crises. What works are bottom-up approaches. Jasanoff calls for a new kind of approach to technology development and management which she calls technologies of humility. This refers to technologies and technology policy that do not assume prediction and control, and that reshape the relations between experts, governments and citizens for a more distributed development. Technologies of humility build in as much preparedness for the unknown and for change as possible.

The negative aspects highlighted, which the above perspectives somewhat neglect, are that this is a kind of laissez-faire approach from the state, which allows it not to take responsibility and to blame complexity, as well as leading to an authoritarianism tied to the militarization of everyday life through constant preparedness. The above approaches neglect the new governmentality that comes with these developments.

In the above-mentioned research there is a certain romance for human lay knowledge that comes with a number of problems. It neglects complex systems (for example cities) and their dependence on complex circulations of information and matter and the vast coordination they require to function; a circulation that is based on software intelligence. It also neglects that altruism and do-it-yourself approaches fade quickly in the time after a crisis. People soon start to ask for expertise and security. This romance of resilience risks leaving the most vulnerable, the ones most in need of support, to their own devices. When citizens’ efforts are preferred, only the capable manage, and often not the ones most immediately affected by the disaster. Finally, the research does not address that these approaches mean that the responsibility of the center is lost.

Along with these developments come a new governmentality and a new biopolitics. We get the well-known suspension of democracy when a crisis is always in the air. This biopolitics rests on an “ontopower” (Virno?), an ontological clearance of the unexpected. The crisis is not treated as part of the logic of the system but as an unexpected external event. The unexpected is naturalized and therefore made permanent (as in the war on terror). The unexpected is dealt with by taking it out of the system ontologically. What we get from this is a militarization of everyday life, since the unexpected can strike anytime and anywhere. Even though the realization of the unexpected is completely contingent and unpredictable, it must be fought in the present. We therefore get a violence against a future that may or may not be realized.

Where do technologies of humility and open participation fit within this governmentality? Is it just about picking up the pieces after the unexpected event? Or about a distributed surveillance where citizens discover the approaching event in the final minutes before it strikes, just in time for an emergency reaction? There seem to be only small spaces for the democratic option. Spaces where it might be able to roam and act freely, but whose borders and time-frame are very strict. This fits perfectly with flow control and the starting and stopping of autonomous flows (compare to cybernetics).

The role of hacking

How can hacking as a practice increase this space? By acting before a crisis happens? By the eschatological practice of playing out crisis situations before they become disastrous (see Copyriot)? Or is hacking merely playing dangerously close to the preparedness discourse that also forms the basis of the war on terror and the securitization of everyday life?

If we are to adopt a post-human approach to the resilience and disaster research otherwise mostly occupied with human communities, we find it in the open technologies of hackers. Open technologies have the biopolitics of preparedness and resilience built in. Open source software is used to prevent backdoors and security threats, to escape company control and censorship, and in case the software has to be adapted to different situations.

The opposite of this resilience technology is slick, user-friendly technology that only performs the function the developing company decides, a function that is performed efficiently but is also black-boxed.

The distributed nature of technology development in hacker communities forces a kind of technology that is open, modular, based around sharing and constantly being rebuilt. It has to be able to enter and exit relations and still function, and is therefore perfectly suited to function also in a post-disaster scenario. It is technology able to do more than its current function and able to work in situations other than the one it was built for, which could for example involve being able to generate energy in different ways and being repairable with abundant materials. This also points towards a horizontal dependency and a recognition of the complexity and interdependence of various systems, which goes against the individualized attitude of survivalists.