SFI: Complexity as incompressible regularity

“A complex system is a system that has a largely incompressible regularity,” says evolutionary theorist David Krakauer (~56:30) at this August 2012 Santa Fe Institute discussion.

Physicist Murray Gell-Mann adds (~1:11:00), “The minimum description length of the regularities is the complexity.” And computer scientist Melanie Mitchell notes the multiple definitions of complexity (~55:45).

This video with Krakauer, Gell-Mann, Mitchell, theoretical physicist Christopher Llewellyn Smith, and archaeologist Colin Renfrew is one of many available from SFI.

From “Complexity: Life, Scale, & Civilization”:

Gell-Mann (~41:45):
Something is simple if it can be written down in some known notation and take up very little space — minimum description length. So if we already have a certain kind of mathematics for one skin of the onion, and something like it describes the next level of the onion, and something very close to that describes the level after, then we have something that looks beautiful to us. Because we’re employing mathematics that we already know.

And that seems to be the case. It seems to be the case that the fundamental law of elementary particle physics — and probably the initial condition of the universe, as well — that both of them are very simple, in this sense. The skins of the onion resemble one another. That gives us a remarkable power, and you can see this over and over again.
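Gell-Mann’s minimum-description-length criterion has a rough everyday proxy in data compression: a regular sequence can be written down briefly, while noise cannot. Here is a minimal sketch in Python, with compressed size standing in for description length (zlib is a convenient but imperfect compressor, so this is illustrative only):

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Crude proxy for minimum description length: compressed size in bytes."""
    return len(zlib.compress(data, 9))

random.seed(0)
regular = b"ab" * 500                                      # pure regularity, 1000 bytes
noise = bytes(random.randrange(256) for _ in range(1000))  # 1000 incompressible bytes

# The regular string compresses to a handful of bytes; the noise barely shrinks.
print(description_length(regular), description_length(noise))
```

The regular string has a short description (“repeat ‘ab’ 500 times”), and the compressor finds it; random bytes admit no such shortcut.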

Krakauer (~1:14:30):
Accidents, as we’re describing them, lead to new regularities. … All living things on earth have a DNA- or RNA-based genome. The original circumstances that gave rise to that were probably very hard, if not impossible, to predict. But once that was there, a huge number of new regularities emerged — a new layer of the onion.

See also: Isomorphic relationships across complex networks

Integral quadrants and fishery co-management

Last time, I referenced a paper on West Hawai‘i fisheries co-management by Washington State University marine biologist Brian Tissot and coauthors. It’s a story I’m familiar with from interviews I did with Brian and others for the 2012 Ecotrust publication, “Resilience & Transformation: A Regional Approach.”

From that piece (see p.31):

“The West Hawaii Fisheries Council is the best thing that ever happened to fisheries management in Hawai‘i,” declares Tina Owens of the Lost Fish Coalition. “Honolulu retains final authority — but we do all the legwork to reach agreements and make sure the policies work for us.”

This relationship between Honolulu’s Department of Aquatic Resources and the council on the island of Hawai‘i is a notable example of collaborative management of public resources. Co-management relationships can be characterized as ongoing processes of testing and revising institutional arrangements and ecological knowledge. The West Hawaii Fisheries Council emerged from a local process that succeeded in designating over 30 percent of coastal waters as off-limits to aquarium collection. Ten years later, abundance of aquarium species is up and the industry is thriving, with more collectors catching more fish.

More recently, I noticed that this story has also been described through the integral quadrants framework — by Brian Tissot in a 2005 World Futures article reprinted in the 2009 book, Integral Ecology: Uniting Multiple Perspectives on the Natural World (a balanced review of which is here).

The integral matrix is one of many frameworks I discuss with students in my systems thinking class. It’s basically a 2×2 for examining internal and external dimensions of individual and social situations.

Here is one version, redrawn from the article, “Four Quadrants of Sustainability,” by Barrett Brown:

Integral quadrants

From Brian Tissot’s article, “Integral marine ecology: Community-based fishery management in Hawai‘i”:

Successful fishery management requires that a dynamic balance of disciplines provide a fully integrated approach. I use Integral Ecology to analyze multiple-use conflicts with an ornamental reef-fish fishery in Hawai’i that is community-managed via the implementation of a series of marine protected areas and the creation of an advisory council. This approach illustrates how the joyful experiences of snorkelers resulted in negative interactions with fish collectors and, thereafter, produced social movements, political will, and ecological change. Although conflicts were reduced and sustainability promoted, lack of acknowledgment of differing worldviews, including persistent native Hawaiian cultural beliefs, contributed to continued conflicts.

Figure design by Andrew Fuller

The 2008 Roots of Resilience report, from the UN and partner organizations, examines resource-dependent communities with an emphasis on ownership, capacity, and connection.

Here is the nut graph, from “World Resources 2008: Roots of Resilience — Growing the Wealth of the Poor,” a joint publication of the United Nations Development Programme, United Nations Environment Programme, World Bank, and World Resources Institute:

In this volume we explore the essential factors behind scaling up environmental income and resilience for the poor. …

Our thesis is that successfully scaling up environmental income for the poor requires three elements: it begins with ownership — a groundwork of good governance that both transfers to the poor real authority over local resources and elicits local demand for better management of these resources. Making good on this demand requires unlocking and enabling local capacity for development — in this case, the capacity of local communities to manage ecosystems competently, carry out ecosystem-based enterprises, and distribute the income from these enterprises fairly. The third element is connection: establishing adaptive networks that connect and nurture nature-based enterprises, giving them the ability to adapt, learn, link to markets, and mature into businesses that can sustain themselves and enter the economic mainstream.

I was thinking about how these lessons might apply closer to home. One approach would be to examine the role of community-based management on U.S. lands and waters.

A promising example from West Hawaii is described in the 2009 paper, “Hawaiian Islands Marine Ecosystem Case Study: Ecosystem- and Community-Based Management in Hawaii,” by Brian Tissot, William Walsh, and Mark Hixon:

In response to long-term pressure from Hawaiian communities to promote local co-management of marine resources, the Hawaii legislature passed the Community-Based Subsistence Fishing Area (CBSFA) Act in 1994 (Minerbi, 1999). This law established a legal process whereby DLNR [the State’s Department of Land and Natural Resources] could designate areas as CBSFAs to allow local communities to assist in the development of enforcement regulations and procedures and fishery management plans that incorporate traditional knowledge.

These communities contain a high proportion of native Hawaiians and are generally organized around traditional Hawaiian ahupua‘a, or former geopolitical land divisions located within individual watersheds (Friedlander et al., 2002; Tissot, 2005). Since 1995, three such areas have been designated as CBSFAs in Hawaii. …

The West Hawaii community has a long history of collaboration regarding resource conflicts, primarily concerning the aquarium fishery, which extends back into the late 1980s (Walsh, 2000; Maurin & Peck, 2008). …  Synergy among these organizations, along with high community involvement and support, eventually created a critical mass for effective co-management through Act 306 of the Hawaii State Legislature in 1998 (Hawaii Revised Statutes 188F). …

The specific mandates of Act 306 required: (1) substantive involvement of the community in resource management decisions; (2) designation of ≥30% of coastal waters as “Fishery Replenishment Areas” (FRAs) where aquarium fish collecting is prohibited; (3) establishment of a portion of the FRAs as marine reserves, or no-take areas, where fishing is prohibited; (4) evaluation of the effectiveness of these FRAs after 5 years. …

At the time of the initial five-year evaluation of the FRA network, seven of the ten most heavily collected species (representing 94% of all collected fish) had increased in overall density (Walsh et al., 2004).

Two opportunities for students of systems

  1. Hull University Business School, home to Mike Jackson and Gerald Midgley, is offering PhD scholarships in “systems thinking, cybernetics, problem structuring and information systems” (described here and here, with applications here). Due January 11th.
  2. Sonoma State University’s Organization Development program, home to Debora Hammond, is hosting an Evolutionary Learning Lab, January 4-6, with Ockie Bosch from the University of Adelaide. Dr. Bosch’s work at Cat Ba Island, Vietnam, a UNESCO Biosphere Reserve, will be central to the discussion at the International Society for the Systems Sciences 2013 conference in Hải Phòng, Vietnam.

What do design labs look like up close?

Be visual. Be creative. Be iterative. Be diverse (multidisciplinary). Be human-centered.

That’s one approach (from iip/create) to the design of design labs, a term that is interpreted in a lot of ways. In some cases, it’s used to describe a long-term engagement, in others a one-day workshop. The best overview I’ve seen is the January 2012 report, “Change Lab/Design Lab for Social Innovation,” by Frances Westley, Sean Geobey and Kirsten Robinson.

I looked around for video documentation of design lab workshops: diverse and knowledgeable people sharing and stimulating each other, in a creative environment, sometimes in response to a focus question or provocateur. Here are a couple of examples. If you know of others, please chime in.

“Finding the transcendent interest” is a video from Insight Labs, working with TEDActive:

Union Square Ventures brought together internet thought leaders (Yochai Benkler, Lawrence Lessig, John Perry Barlow, Tim O’Reilly, Clay Shirky, Beth Noveck, James Boyle, Tiffiniy Cheng, Steven Johnson, among others) for the April 2012 Hacking Society workshop. In this segment, Fred Wilson asks: “Is it true that a pure network would not have a leader?”

Matthew Salganik: Wiki surveys

“As you can see, this is basically kitten war for ideas,” said Princeton sociologist Matthew Salganik in introducing the wiki survey tool All Our Ideas at the Collective Intelligence 2012 conference.

Instead of a simple poll of which cat is cuter — like at kittenwar.com — participants are asked the equally direct: Which idea is better?

Other design features include: Participants can contribute ideas but cannot campaign for them among friends or followers, because paired presentation of candidate ideas is set by the All Our Ideas algorithm. And for any individual idea, no information is provided about contributor or current vote tally.
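The mechanics can be sketched in a few lines. This is a hypothetical stand-in, not the actual All Our Ideas algorithm (which uses a more sophisticated statistical model): the platform, not the voter, chooses each pairing, and ideas are ranked by the share of pairings they win.

```python
import random
from collections import defaultdict

def run_wiki_survey(ideas, n_votes, prefer, rng):
    """Present rng-chosen pairs; rank ideas by fraction of pairings won."""
    wins = defaultdict(int)
    shown = defaultdict(int)
    for _ in range(n_votes):
        a, b = rng.sample(ideas, 2)          # the platform picks the pairing
        winner = a if prefer(a, b) else b    # the voter only answers: which is better?
        wins[winner] += 1
        shown[a] += 1
        shown[b] += 1
    return sorted(ideas, key=lambda i: wins[i] / max(shown[i], 1), reverse=True)

# Toy run: a voter who always prefers the alphabetically earlier idea.
ideas = ["ban fracking", "more bike lanes", "plug in ships"]
ranking = run_wiki_survey(ideas, 300, lambda a, b: a < b, random.Random(0))
print(ranking[0])  # the idea that wins every pairing it appears in
```

Because voters never see vote tallies or contributor names, and cannot choose which pairs they judge, campaigning for a particular idea gains nothing.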

Salganik discussed the case of the PlaNYC project conducted by Mayor Bloomberg’s office. Its wiki survey question was: Which do you think is a better idea for creating a greener, greater New York City?

Staff seeded the survey with 25 ideas; 464 ideas were contributed (another slide says 239; the discrepancy is unexplained); and 28,829 votes were cast. Of the top-ten vote-getters, eight were contributed by citizen participants.

According to Salganik, top vote-getters included both alternative framings of ideas that mayor’s office staff were aware of (“Keep NYC’s drinking water clean by banning fracking in NYC’s watershed”) and novel ideas that staff later verified as potentially effective (“Plug ships into electricity grid so they don’t idle in port — reducing emissions equivalent to 12000 cars per ship”).

Salganik explained:

Your intuition might be: If we open this up and anyone in New York can upload something, we’re going to get a lot of junk. And that is true; you get a lot of junk. The worst ideas are uploaded by users. It’s also the case, as I said, that some of the best ideas are uploaded by users. So the ideas uploaded by users have a much higher variance. And if you combine variance with high volume, then you’re likely to find these extreme cases. …

Usually our intuition is variance is bad. But if we’re only looking for extreme cases, and we have a good filtering mechanism, then this variance, plus the volume, is the place where we’re going to find these extreme cases. To the extent that the variance is sufficient and the volume is sufficient, the ideas that are uploaded by users will always end up scoring better than many of the initial ideas.
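Salganik’s variance-plus-volume argument is easy to check with a toy simulation. The numbers below are illustrative assumptions, not PlaNYC data: staff seeds are decent and consistent, while user uploads have a similar average quality but far more spread and far more volume — so the extremes, best and worst alike, come from users.

```python
import random

rng = random.Random(42)

# Idea "quality" scores: staff seeds are low-variance, user uploads high-variance.
staff = [rng.gauss(0.6, 0.1) for _ in range(25)]    # modest volume, tight spread
users = [rng.gauss(0.5, 0.3) for _ in range(500)]   # high volume, wide spread

user_set = set(users)
top10 = sorted(staff + users, reverse=True)[:10]
from_users = sum(1 for score in top10 if score in user_set)
print(f"{from_users} of the top 10 ideas came from users")
```

The filtering mechanism (here, simply sorting; in practice, the pairwise votes) is what lets the platform keep the high tail of the user distribution and discard the junk in the low tail.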

Video below, slides here (pdf), and paper here. More videos from the Collective Intelligence 2012 conference.

Jonathan Zittrain: Remember your humanity

“My job as a law professor is to turn all the dials to 11,” said Jonathan Zittrain as he gleefully challenged participants at the Collective Intelligence 2012 conference.

Quoting from the 1955 anti-nuclear weapon manifesto signed by Bertrand Russell, Albert Einstein and others, he urged the developers of collective intelligence platforms to “respect the humanity of the people caught up in them.”

From his abstract:

Platforms for distributed labor offer the efficiency of breaking large tasks into discrete pieces, often in ways that obscure the purpose of the whole to which a piece contributes. When should clickworkers seek and accept responsibility for the ethical valence of a task commissioner’s aims, especially when a task does not signal anything about its larger purpose?

More videos from the conference.

Justin Wolfers: Intentions versus expectations

“It could be in November that this is going to be a chart that is going to come back and haunt me,” said Justin Wolfers at the April 2012 Collective Intelligence conference.

Expectations are better predictors than intentions, he insisted. And his chart held up: in it, poll respondents predicted Obama’s victory when asked about their expectations, but not when asked about their intentions.

From Wolfers’ abstract:

Most pollsters base their election projections off questions of voter intentions, which ask “If the election were held today, who would you vote for?” By contrast, we probe the value of questions probing voters’ expectations, which typically ask: “Regardless of who you plan to vote for, who do you think will win the upcoming election?” We demonstrate that polls of voter expectations yield consistently more accurate forecasts than polls of voter intentions.

Wolfers was one of 22 coauthors on a 2008 Science article (“The Promise of Prediction Markets”) that advocated for a loosening of U.S. Commodity Futures Trading Commission (CFTC) regulations on prediction markets. Last week, the CFTC filed a suit against Intrade, a popular web-based prediction marketplace, for allowing trade of regulated commodity futures. (See WaPo WonkBlog: “Why economists love Intrade — and why the government hates it.”)

Here is the April 2012 talk:

Thomas Malone: The genome of collective intelligence

Thomas Malone, director of the MIT Center for Collective Intelligence, talks about their research in an Edge video: “How can people and computers be connected so that — collectively — they act more intelligently than any person, group or computer has ever done before?”

He describes three factors as significantly correlated with group intelligence: average social perceptiveness of group members, evenness of conversational turn taking, and percentage of women.

In another project, Malone and colleagues aim to develop a ‘genome’ or set of design patterns for collective intelligence:

One of the projects we’ve done we call ‘mapping the genomes of collective intelligence.’ We’ve collected over 200 examples of interesting cases of collective intelligence … things like Google, Wikipedia, InnoCentive, the community that developed the Linux open source operating system, et cetera.

Then we looked for the design patterns that come up over and over in those different examples. Using the biological analogy, we call these design patterns ‘genes,’ but if you don’t like the analogy or the metaphor, you can just use the word ‘design patterns.’ We’ve identified so far about 19 of these design patterns—or genes—that occur over and over in these different examples.

In 2009’s “Harnessing Crowds: Mapping the Genome of Collective Intelligence,” Malone and coauthors Robert Laubacher and Chrysanthos Dellarocas describe these design patterns along four dimensions (versions of this paper differ slightly):

  • Who is performing the task? (hierarchy, crowd)
  • Why are they doing it? (money, love, glory)
  • What is being accomplished? (create, decide)
  • How is it being done? (collection, collaboration, individual decisions, group decisions)
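As a hypothetical sketch, the four dimensions lend themselves to a small lookup structure. The allowed values come from the paper’s list above, but the example tags for Wikipedia below are my own illustrative reading, not Malone’s:

```python
# The four "genome" dimensions and their allowed values, per the list above.
GENES = {
    "who": {"hierarchy", "crowd"},
    "why": {"money", "love", "glory"},
    "what": {"create", "decide"},
    "how": {"collection", "collaboration", "individual decisions", "group decisions"},
}

def classify(example: dict) -> bool:
    """Check that an example's tags use only allowed gene values."""
    return all(example[dim] in GENES[dim] for dim in GENES)

# Illustrative tagging of one case (my reading, not from the paper).
wikipedia = {"who": "crowd", "why": "love", "what": "create", "how": "collaboration"}
print(classify(wikipedia))
```

Cataloguing 200-plus cases this way is what lets the same pattern — say, crowd/love/create/collaboration — be spotted recurring across otherwise unrelated systems.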

In 2011’s “Programming the Global Brain (pdf),” Abraham Bernstein, Mark Klein, and Thomas Malone write, “These design patterns, in turn, can be embodied in various programming metaphors.” They describe: an idea ecology, a web of dependencies, an intellectual supply chain, a collaborative deliberation, a radically fluid virtual organization, and a multi-user game.

Others who write about patterns of collective intelligence include Tom Atlee, George Pór, and the P2P Foundation; the MIT center also maintains a wiki handbook on the topic.

Buzz Holling on target-based management

I’ve been following the discussion of social impact bonds, in which a social venture like prison inmate training and rehabilitation is financed by a third party, evaluated against certain targets, and ultimately paid for by taxpayers if the targets are met.

In this context, I took a look back at Buzz Holling’s writing on target-based management. He’s critical. But how relevant are his concerns about the use of targets in ecosystem management to social contexts like mental health care or prison inmate training and rehabilitation?

From “What Barriers? What Bridges?” in the 1995 book Barriers and Bridges to the Renewal of Ecosystems and Institutions:

I reviewed some twenty-three examples of managed ecosystems. In each of the cases the goal was to control a target variable in order to achieve social objectives, typically maintaining or expanding employment and economic activity.

In each case the goal was to control the variability of the target – insects and fire at low levels, cattle grazing at intermediate stocking densities, and salmon at high populations. The level desired was different in each situation, but the common feature was to reduce variability of a target whose normal fluctuations imposed problems and periodic crises for pulp mill employment, recreation, farming incomes, or fishermen’s catches.

At the same time, however, elements of the system were slowly changing as a consequence of the initial success of the policy. And because the problem was defined narrowly, such changes were not perceived. In short, the success in controlling an ecological variable that normally fluctuated led to more spatially homogenized ecosystems over landscape scales. It led to systems more likely to flip into a persistent degraded state, triggered by disturbances that previously could be absorbed. This is the definition of loss of resilience.

Those changes in the ecosystems could have been managed were it not for concomitant changes in two other elements of the interrelationships – in the management institution(s) and in the people who reaped the benefits or endured the costs.

Because of the initial success, in each case the management agencies shifted from their original social and ecological objectives to the laudable objective of improving operational efficiency of the agency itself – spraying insects, fighting fires, producing beef and releasing hatchery fish with as much efficiency and as little cost as possible.

Success brought changes in the society as well. Dependencies developed and powerful political pressures were exerted for continuing the sustained flow of the food or fiber that no longer fluctuated as it once had. More investments therefore logically flowed to expanding pulp mills, recreational facilities, cattle ranches, and fishing technology.

This is the development side of the equation, and its expansion can be rightly applauded. But if the ecosystem from which resources are garnered becomes less and less resilient, more and more sensitive to large-scale transformation, then the efficient but myopic agency and the productive but dependent industry simply become part of the source of crisis and decision gridlock.