Design as performative sensemaking

science, humanities, design: the three cultures

Are there designerly ways of knowing — distinct from the ways of the sciences and humanities? I’ve explored this question in the writings of Béla Bánáthy and Nigel Cross, from whom I adapted the table above.

This re-drafted version of the “three cultures” table includes some recent thinking and one additional source, Klaus Krippendorff’s “Design Research, an Oxymoron?”

Krippendorff proposes to understand design as “making sense of things (to others).”

From his “five activities that define human-centred design”:

  • Designers invent or conceive possible futures, including its artefacts that they may be able to bring about, imaginable worlds that would not come about naturally. …
  • Designers need to know how desirable these futures are to those who might inhabit them, and whether they afford diverse communities the spaces they require to make a home in them. …
  • Designers experiment with what is variable or could be changed, in view of the opportunities that variability could open up for them and others. …
  • Designers work out realistic paths, plans to proceed towards desirable futures. …
  • Designers make proposals (of realistic paths) to those who could bring a design to fruition, to the stakeholders of a design.

See also: “Communication across discourse communities”

Some context for NYT on wind energy variability

NREL wind penetration

I don’t often comment on daily journalism, but yesterday’s NYT piece on wind energy variability (print: “Grappling With the Grid” / web: “Intermittent Nature of Green Power Is Challenge for Utilities,” by Diane Cardwell) could use a little context.

Here’s the crux of the NYT story:

It is not the first time the grid system operator, ISO New England, which operates in six states, has cut back energy from the [wind] farm since it began operating at the end of last year, or from others in the region, including some in Maine and New Hampshire. Other windy states and regions like Texas and the Midwest have experienced similar cutbacks, known as curtailments.

But the recent Vermont episode, which set off a debate among government officials, the New England grid executives and the wind farm producers, highlights a broader struggle taking place across the country as utilities increasingly turn to renewable sources of energy. Because energy produced by wind, for example, is intermittent, its generating capacity is harder to predict than conventional power’s. And a lack of widely available, cost-effective ways to store electricity generated by wind only compounds the complex current marketplace.

Context is readily available in the newly published (August 2013) National Renewable Energy Laboratory (NREL) “2012 Wind Technologies Market Report” (pdf), from which I reproduced the figure at top and the graph below. The figure for international wind penetration shows that grid integration of several times current U.S. levels is possible; the graph of U.S. regional curtailment gives the data for the challenges that Cardwell describes.

Although U.S. curtailment percentages have declined overall, the challenges of integration are real — though primarily institutional, rather than technological (according to NREL here).

NREL curtailment

Clearly, there is a lot to be learned from European experiences. One source I’ve found valuable has been the 2011 Stanford TomKat Center Grid Integration of Renewables Workshop (videos and presentations), where Hans Henrik Lindboe of EA Energy Analyses spoke frankly on the experiences of global leader Denmark.

A few highlights from Lindboe’s talk:

In 1993, Denmark changed government. We got a social democratic party. They were very keen on being green, and within eight years, the amount of wind power increased five times. …

The utility people were very preoccupied by showing scientifically — with big, nice models — that going above five percent renewable energy was terrible. Couldn’t happen. You would get blackouts. …

In the year 2000, Denmark joined the Nordic electricity market. … I believe this was one of the most important incentives for integration of renewable energy: to have an efficient market. … Money-wise the spot market is 90 to 95 percent of the turnover in the electricity market. … The spot market is an hour-by-hour auction. …

It’s important that [the electricity market] is international, because you get the favor of geographical spreading. … We trade in the spot market internationally, but we also trade reserves internationally. So if we have a prediction error in the western part of Denmark, it might be [addressed with] regulated resources from Sweden, or from Norway, or even from Finland, transferred down to us, via the grid.
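The hour-by-hour spot auction Lindboe describes can be illustrated with a toy uniform-price clearing sketch. All bids below are invented for illustration; actual Nordic day-ahead clearing (Nord Pool) is far more involved, with block bids and cross-border flow constraints.

```python
# Toy uniform-price auction for a single hour of a spot market.
# Hypothetical bids only; not data from Lindboe's talk.

def clear_hour(supply_bids, demand_bids):
    """Clear one hour's auction.

    supply_bids: list of (price, quantity) offers to sell (e.g. EUR/MWh, MWh)
    demand_bids: list of (price, quantity) bids to buy
    Returns (clearing_price, traded_quantity); price is None if no trade.
    """
    supply = sorted(supply_bids)                # cheapest offers first
    demand = sorted(demand_bids, reverse=True)  # highest-value bids first

    traded, price = 0.0, None
    s, d = 0, 0
    s_left = supply[0][1] if supply else 0.0
    d_left = demand[0][1] if demand else 0.0
    while s < len(supply) and d < len(demand):
        if supply[s][0] > demand[d][0]:
            break                               # no more profitable trades
        q = min(s_left, d_left)
        traded += q
        price = supply[s][0]                    # marginal accepted offer sets the price
        s_left -= q
        d_left -= q
        if s_left == 0:
            s += 1
            s_left = supply[s][1] if s < len(supply) else 0.0
        if d_left == 0:
            d += 1
            d_left = demand[d][1] if d < len(demand) else 0.0
    return price, traded

# A windy hour: zero-marginal-cost wind offers push the clearing price down.
supply = [(0.0, 300), (30.0, 200), (60.0, 200)]  # wind first, then conventional plants
demand = [(80.0, 250), (50.0, 150)]
price, qty = clear_hour(supply, demand)
print(price, qty)
```

The point of the sketch is the merit-order effect: when cheap wind enters the hourly auction, more expensive conventional offers are displaced and the clearing price falls, which is one way an efficient market creates incentives for integration.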

See also:
U.S. National Institute of Standards and Technology (NIST) on Smart Grid Interoperability Standards;
and my 2011 P&P post, “Intermittency and the HVDC Supergrid.”

When Ioan Fazey at the UK-based Sustainable Learning project asked me to contribute a blog post, I took it as an opportunity to interview Digital Habitats coauthor John David Smith:

HS: And do you find that the perception of belonging to a community is correlated to a perception of learning?

JS: Well, one definition that Etienne Wenger talked about in one of the online seminars we did is that a community of practice is the experience of social learning that becomes expressed as a social formation. That’s a whole bunch of squishy but important ideas linked together. The experience of learning and the experience of connection can take place along different vectors. Whether I see myself as a member of the community might precede or follow other important kinds of acquisitions: of identity, of knowledge, or competence — or as membership in opposition to another community.

Full interview >>

US Mayors’ Shareable Cities Resolution

NOW THEREFORE BE IT RESOLVED, that The United States Conference of Mayors urges support for making cities more shareable by (1) encouraging a better understanding of the Sharing Economy and its benefits to both the public and private sectors by creating more robust and standardized methods for measuring its impacts in cities; (2) creating local task forces to review and address regulations that may hinder participants in the Sharing Economy and proposing revisions that ensure public protection as well; and (3) playing an active role in making appropriate publicly owned assets available for maximum utilization by the general public through proven sharing mechanisms.

Submitted by:
The Honorable Ed Lee, Mayor of San Francisco
The Honorable Michael A. Nutter, Mayor of Philadelphia
The Honorable Antonio R. Villaraigosa, Mayor of Los Angeles
The Honorable Rahm Emanuel, Mayor of Chicago
The Honorable Francis G. Slay, Mayor of St Louis
The Honorable Jonathan Rothschild, Mayor of Tucson
The Honorable Charlie Hales, Mayor of Portland, OR
The Honorable Thomas M. Menino, Mayor of Boston
The Honorable Carolyn G. Goodman, Mayor of Las Vegas

H/t: Shareable and Collaborative Consumption. Full text pdf — see resolution #87.

Communicative resilience as a duality bridge

“Resilience and transformability are not ‘opposites,’” cautioned Brian Walker at the Resilience 2011 conference. “They are compatible aspects of a complex adaptive system that functions at multiple scales” (video and slides).

I think of resilience-and-transformation as a duality: each defined in relation to the other, and together comprising a whole. Just as systems thinking can be understood as both systemic and systematic, or as including both analysis and synthesis, resilience thinking includes both system resilience and system transformation.

Skillfully navigating this duality is one of the strengths of Bruce Goldstein’s 2012 edited volume, Collaborative Resilience: Moving Through Crisis to Opportunity.

From chapter 15, his “Conclusion: Communicative Resilience”:

[T]he dynamics of a bouncing ball and a society in crisis are not the same. The concept of social-ecological resilience developed in Collaborative Resilience addresses these differences by providing a less deterministic and more creative definition of a system’s capacity to absorb disturbance and reorganize while undergoing change. This definition is intentionally not a conservative one of “bouncing back”; it emphasizes that resilient systems can adapt or transform. …

Just as social-ecological resilience was deliberately not termed socio-ecological resilience so as not to imply that the social was a subordinate modifier of the ecological, communicative resilience is not just a method for achieving resilience through collaboration. Instead, it is a framework for communities to both define and pursue resilience through collaborative dialog, rather than solely through expert analysis. A resilient system emerges as participants debate and define ecological and social features of the system and appropriate scales of activity. Poised between collaborative practice and resilience analysis, communicative resilience is both a process and an outcome of collective engagement with social-ecological complexity. …

Communicative resilience goes beyond suggesting that joint fact-finding and collective sensemaking can help a community better understand social-ecological relationships. It suggests that the system does not preexist the collaborative, which draws on diverse knowledge practices and storytelling to define it, their place in it, and its preferred condition. It is a coproductive dynamic, as system conditions are determined and reshaped through collaborative interaction.

See also:
The co-production of science and society
Collaborative Resilience book review at Ecology and Society

John Sterman: Almost nothing is exogenous

“Almost nothing is exogenous,” said MIT System Dynamics Group director John Sterman in his 2002 Forrester Prize Lecture, “All models are wrong: reflections on becoming a systems scientist” (pdf).

It’s a perspective I’ve often voiced.

Here’s the key piece from Sterman’s talk:

If you ask people to name processes that strongly affect human welfare but over which we have no control, many people name the weather, echoing Mark Twain’s famous quip that “Everybody talks about the weather, but nobody does anything about it.”

But today even the weather is endogenous. We shape the weather around the globe, from global warming to urban heat islands, the Antarctic ozone hole to the “Asian brown cloud.” For those who feel that global warming, ozone holes, and the brown cloud are too distant to worry about, consider this: Human influence over the weather is now so great that it extends even to the chance of rain on the weekend.

Cerveny and Balling (1998) showed that there is a seven-day cycle in the concentration of aerosol pollutants around the eastern seaboard of the United States. Pollution from autos and industry builds up throughout the workweek, and dissipates over the weekend. They further show that the probability of tropical cyclones around the eastern seaboard also varies with a seven-day cycle. Since there are no natural seven-day cycles, they suggest that the weekly forcing by pollutant aerosols affects cloud formation and hence the probability of rain. Their data show that the chance of rain is highest on the weekend, while on average the nicest day is Monday, when few are free to enjoy the out of doors. Few people understand that driving that SUV to work helps spoil their weekend plans.

In similar fashion, we are unaware of the majority of the feedback effects of our actions. Instead, we see most of our experience as a kind of weather: something that happens to us but over which we have no control. Failure to recognize the feedbacks in which we are embedded, the way in which we shape the situation in which we find ourselves, leads to policy resistance as we persistently react to the symptoms of difficulty, intervening at low leverage points and triggering delayed and distant, but powerful feedbacks.
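The day-of-week compositing behind the Cerveny and Balling result can be sketched with synthetic data. The numbers below are invented; only the method is illustrated: group a daily series by weekday and compare the means, on the reasoning that no natural process has a seven-day period.

```python
# Toy illustration of detecting a seven-day cycle in a daily series.
# Synthetic "aerosol" readings: pollution builds over the workweek and
# dissipates on the weekend. Invented data, not Cerveny & Balling's.
import random

random.seed(0)
WEEKDAY_LOAD = [3, 4, 5, 6, 7, 2, 1]   # Mon..Sun: builds Mon-Fri, clears Sat-Sun

days = 7 * 52                           # one year of daily readings
series = [WEEKDAY_LOAD[d % 7] + random.gauss(0, 0.5) for d in range(days)]

# Composite by day of week: a clear seven-day cycle emerges from the noise.
by_weekday = [[] for _ in range(7)]
for d, x in enumerate(series):
    by_weekday[d % 7].append(x)
means = [sum(v) / len(v) for v in by_weekday]
print([round(m, 2) for m in means])     # peaks late in the workweek
```

Since there is no natural seven-day forcing, a weekday signal like this in real atmospheric data points to a human cause, which is the inference Sterman draws on.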

If almost nothing is exogenous, does that affect how one might think about assumptions of objectivity and subjectivity in scenario planning?

Examining multi-level climate governance

In the 2009 World Bank report, “A Polycentric Approach for Coping with Climate Change,” Elinor Ostrom challenged the notion that a global atmosphere requires global action.

Given the complexity and changing nature of the problems involved in coping with climate change, there are no “optimal” solutions. … The advantage of a polycentric approach is that it encourages experimental efforts at multiple levels, as well as the development of methods for assessing the benefits and costs of particular strategies adopted in one type of ecosystem and comparing these with results obtained in other ecosystems.

This polycentricity is playing out slowly, of course, at governance levels from nations to provinces/states to cities.

Recently, University of Oregon’s Ronald Mitchell circulated a request for scholarly writings on multi-level governance. Ron wrote:

I am trying to advise a student working on why a country might be reluctant to assume international climate obligations even while its subnational units (provinces/states, cities) are taking action on climate change. Any suggestions of literature that would point in the direction of good theorizing on the factors that might explain such variation would be appreciated.

Among the responses, here are four abstracts that caught my eye.

Zelli, F. (2011). The fragmentation of the global climate governance architecture. WIREs: Climate Change, 2(2), 255-270.

The term fragmentation implies that policy domains are marked by a patchwork of public and private institutions that differ in their character, constituencies, spatial scope, subject matter, and objectives. While the degree of fragmentation varies across issue areas and their respective architectures, global climate politics is characterized by an advanced state of institutional diversity. In recent years, scholars have increasingly addressed this emerging phenomenon of international relations. The article finds that the predominant focus of these studies has been on dyadic overlaps, i.e., interlinkages between two institutions, and less on the overarching level of entire architectures and their degree of fragmentation. This goes in particular for research on the global climate change architecture. Many studies have attended to the relationship between the United Nations climate regime and other institutions: multilateral technology partnerships, regimes regulating other environmental domains like ozone or biological diversity, and regimes from non-environmental issue areas like the world trade regime. However, a cross-cutting account of these overlaps which addresses the overall implications of institutional fragmentation on climate change is still missing. As possible areas for further research the article identifies: consequences of fragmentation (e.g., a new division of labor or increased inter-institutional conflict), fragmentation management and conditions of its effectiveness; theory-driven analyses on the reasons of fragmentation within and across policy domains.

Hochstetler, K., & Viola, E. (2012). Brazil and the politics of climate change: beyond the global commons. Environmental Politics, 21(5), 753-771.

Assessing the changing role of the emerging powers in global climate change negotiations, with special attention to Brazil, we ask why they have agreed to voluntary reductions at home without formalising those commitments in ways that might persuade other large emitters to make similar binding commitments. We argue that for very large emitters, the climate issue does not evince the ‘global commons’ logic often attributed to it. Instead, since their actions can directly affect climate outcomes alone or in small groupings, large emitters are more responsive to domestic cost-benefit calculations, making international commitments based on shifting interest group pressures at home. In Brazil, a coalition of ‘Baptists and bootleggers’ found principled and interest-driven reasons to support new climate commitments after 2007.

Fisher, D. R. Forthcoming. Understanding the relationship between sub-national and national climate change politics in the United States: Toward a theory of boomerang federalism. Environment & Planning C: Government and Policy.

This paper looks at how sub-national policies in the United States are interacting with policymaking at the federal level to address the issue of global climate change. It focuses on a coordinated attempt to get the national government to fund local efforts to address climate change. Although local climate initiatives in the US were successfully translated into a national policy to support these local efforts, their implementation through hybrid arrangements that are being formed between business and local governmental actors will potentially create additional challenges to federal policymaking. I introduce the notion of boomerang federalism, which builds on the extant research on federalism and vertical policy integration, to explain the process through which local efforts mobilize initiatives at the national level that, in turn, provide support for the local initiatives themselves. Reviewing the implementation process of this effort, I discuss the ways that businesses are working alongside local governments to address climate change.

Harrison, K. Forthcoming. Federalism and Climate Policy Innovation: A Critical Reassessment. Canadian Public Policy.

This article argues that the prospects for US state and Canadian provincial climate policy innovation and diffusion are limited in several respects. Subnational climate leaders tend already to be the cleanest states and provinces, and even they have been strategic in the sectors they regulate and the instruments they employ. In some cases, this selectivity appears to be motivated by opportunities to shift compliance costs to other states. Efforts to pool risks through state and provincial collaboration also are flagging in the wake of the Canadian and US federal governments’ failure to adopt nation-wide policies to level the playing field.

See also: Ronald Mitchell: Reasons for climate optimism.

“The purpose of knowledge-making is so rarely debated,” write Peter Reason and Hilary Bradbury-Huang in the introduction to 2008’s The SAGE Handbook of Action Research: Participative Inquiry and Practice.

We start from these assertions — which may seem contentious to some of the academic community, while at the same time obvious to those of a more activist orientation — because the purpose of knowledge-making is so rarely debated. The institutions of normal science and academia, which have created such a monopoly on the knowledge-making process, place a primary value on pure research, the creation of knowledge unencumbered by practical questions. In contrast, the primary purpose of action research is not to produce academic theories based on action; nor is it to produce theories about action; nor is it to produce theoretical or empirical knowledge that can be applied in action; it is to liberate the human body, mind and spirit in the search for a better, freer world.

Hear, hear!

See also: “Knowing in action and the practice turn.”

Making, the practice turn, the active voice

Blumenauer - Portland Made

“The creation of Portland Made is a masterstroke,” lauds Oregon Congressman Earl Blumenauer in this video. “You are a key reason why my hometown is truly America’s most livable city.”

Portland Made, which is looking to establish itself as a hub of and brand for the local designer-retailer-manufacturer movement, is a recent addition to the city’s thriving maker culture. Economist Charles Heying documented the extent of this movement in his 2010 book, Brew to Bikes: Portland’s Artisan Economy, which included a backward glance at the so-called “200 artisan skills required to make a Victorian town functional.” Heying calculated that Portland makers had maintained or recreated 112 of those skills, more than half the total.

To me, this trend is significant for several reasons. One is articulated in, say, Chris Anderson’s thesis of making as the harbinger of a new industrial revolution. Another is resilience — making as part of providing for local-regional needs, as I wrote in this article on jobs of the future.

Still, I think there’s more as well. Consider this question: Is the story of the maker movement just about material goods and functional purposes?

Think analogously. How about school gardens, where children learn to grow vegetables? Is the significance merely about fresh produce for the cafeteria? Or is it also about what’s cultivated in the child? How about Getaround carsharing or airbnb homesharing? Purely functional? Or are there less tangible benefits, a culture of sharing nurtured among participants?

Making, gardening, sharing: What these have in common is that they are practices. As I cited in a post on “the practice turn,” practices are embodiments of agency, expressed in an active voice. Practices are how everyone can contribute to shaping the norms and logics that shape our lives.

As I was thinking about this stuff, I remembered David Bollier’s description of the 2012 German Sommerschool on the Commons, where participants rewrote Elinor Ostrom’s design principles for management of common-pool resources as practices for commoning (“Eight Points of Reference for Commoning”).

I’ve created a figure with Ostrom’s principles (from the recent book, Sustaining the Commons), side-by-side with the active-voice practices.

Ostrom design principles in active voice

Sean Carroll: Teleology and the laws of science

Sean Carroll emergent features

“Teleology is absolutely acceptable in the laws of science,” says Caltech physicist Sean Carroll in this June 2013 talk (“Purpose and the Universe”) at the American Humanist Association annual conference.

Not the kind of statement one hears every day. Carroll begins by joking that they must have invited the wrong person, takes the keynote audience from quantum field theory to moral philosophy, and receives a standing ovation. His website is here and slides for this talk are here.

Starting at ~42:40:

We are very, very fortunate to live in a hierarchical reality, where there are ways to talk about the universe, other than using the fundamental laws of physics.

There are these vocabularies, these ontologies, these stories, these models — whatever you want to call them — that apply at these upper levels, and you don’t even need to know how the lower level works. And what’s interesting about this is, not only can you have different levels of description, but they can sound very, very different. Things that you thought were really important at the bottom level might not even show up at the upper level.

I’ll give you one obvious example — my favorite example of what we call an emergent feature in a higher-level theory: the arrow of time. The fact, we already mentioned, that the past is different from the future. The past is done. The future is up for grabs. I can make choices about the future. I remember yesterday; I don’t remember tomorrow.

Well, if you go back to Newton’s equations — if you are Laplace or you’re Schrödinger or whatever — that distinction between past and future is nowhere to be found. In the fundamental laws of physics, the past and future are on exactly equivalent grounds. There is no difference between the past and future as far as the fundamental laws of physics are concerned.

Yet, you would be crazy to try to describe human behavior, or biological evolution, or even the evolution of cosmology and the whole universe, without taking into account that in the macroscopic realm there is a very strong arrow of time. There’s a very noticeable difference between the past and the future.

So, it’s not true that just because something is invisible at the lower level, it can’t be there at the higher level. They just need to be compatible.

How do you make the arrow of time in the macroscopic world compatible with the reversibility of the microscopic world? It’s because of the big bang. Because in the past, 13.8 billion years ago, the universe started in a very, very special, delicately arranged state. It’s like a little wind-up toy that started there and has been puttering along ever since. Winding down. Dispersing its energy, as entropy goes up.

We have a match. We have a way of mapping the fundamental laws of physics and the macroscopic ones. It just requires knowing which macroscopic configurations are allowed. And you find that words — like: cause and effect, memory, choice — appear and are very important at the macroscopic level, even though they are nowhere to be found at the lower level.

So, I would say — having gone through this whole journey — that teleology is absolutely acceptable in the laws of science. It is on the table as a possibility. Teleology is the idea that the way to talk about a system is to say: It has a goal. It is trying to find something.

At the equation level of the fundamental laws of physics, there is no teleology. But that’s not the end of the story. At the higher level, in the emergent level, in the macroscopic realm, there might very well be a teleology.

Remember, you agreed with me when I said the Roomba has a teleology. It was built in order to vacuum the room. There was a purpose. There was a reason why somebody built that Roomba. And that person who built the Roomba is made of atoms, obeying the fundamental laws of physics.

Is there purpose in the universe? Of course. I have purposes. You have purposes. You came here, and you’re just made of atoms. It’s because you’re describing yourself at a different level of complexity.

So, if teleological language — finding a purpose, moving toward a goal — is the best scientific theory applicable to a certain level, then there’s every reason to talk that way.
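Carroll’s wind-up-toy picture can be sketched numerically: start particles in a special, low-entropy corner of a box, let them wander at random, and watch a coarse-grained entropy climb toward its maximum. This is a toy model of my own, not anything from the talk; the arrow of time here comes entirely from the special initial condition, not from the update rule.

```python
# Numerical sketch: coarse-grained entropy rises from a special initial state.
import math
import random

random.seed(1)
SIZE, CELLS, N = 40, 4, 400             # box width, coarse cells per side, particles

def coarse_entropy(positions):
    """Shannon entropy of the coarse-grained (CELLS x CELLS) occupancy."""
    counts = {}
    for x, y in positions:
        cell = (x * CELLS // SIZE, y * CELLS // SIZE)
        counts[cell] = counts.get(cell, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

# Low-entropy start: every particle crowded into one corner cell.
pos = [(random.randrange(SIZE // CELLS), random.randrange(SIZE // CELLS))
       for _ in range(N)]
s0 = coarse_entropy(pos)                # all particles in one cell: entropy 0

for _ in range(2000):                   # random-walk steps with reflecting walls
    pos = [(min(SIZE - 1, max(0, x + random.choice((-1, 0, 1)))),
            min(SIZE - 1, max(0, y + random.choice((-1, 0, 1)))))
           for x, y in pos]
s1 = coarse_entropy(pos)                # approaches log(16), about 2.77, as particles spread

print(round(s0, 2), round(s1, 2))
```

Run backwards, the same walk rule is just as lawful; the entropy increase, and with it the macroscopic past/future distinction, is traceable to the delicately arranged starting state, which is the role the big bang plays in Carroll’s account.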