Monday, October 22, 2012

Deepening the Narcissistic Wound: A Critique Through Steven Johnson's "Peer Progressive"

The work seeks to counter a widespread misconception about the changing contemporary landscape: the belief that personal free will is the saving grace of a world run by machines. This idea speaks to a failure of the organism as it represses the utterly foreign nature of the emerging world. Steven Johnson's idea of the 'peer progressive' is examined as a better explanation of the new scene. This leads to the proposal that outmoded concepts like individual, humanity, and intelligence are best discarded. Only then will people find their place in the new world.

Important Links:
Greg Satell's piece
Steven Johnson's piece

The stimulus for this post comes from Greg Satell's piece on the Evolution of Intelligence. The work starts off quite nicely by posing critical questions about intelligence and then extending them into the machinic domain. Admirably, it does not shy away from some of the harder implications:
Nevertheless, intelligence is something we admire, both in ourselves and in others.  It has been considered for most of history, to be a uniquely human virtue.  So it is unnerving, even terrifying, when we encounter other types of intelligence.  From crowdsourcing to computers performing human tasks, we’re going to have to learn to make our peace.
The terror that the author describes has been of particular interest to me along with any ideas that point to the decentration of the human position (1, 2, 3, 4). Thus, it was all the more disheartening when the author acquiesced to the popular human tendency to flee such tension with his closing remark:
In the future, our world will be driven by machine intelligence, but our choices will remain our own.
This suggests a problem of some significance: the re-appropriation of extra-human phenomena in human-centric terms. In the history of science, this type of ad hoc appeal is an indication of a dying discipline. Yet, the pseudo-resolution it offers to those faced with the terror of an emerging systemic re-organization is so tempting that the idea is best classified as outright dangerous. Thus, this post will hopefully serve as an ideological inoculation of sorts against the inherent problems of such ad hoc commitments.

Interestingly, Satell's defense of his position through the use of Steven Johnson is all the more confusing when one examines the latter's claims. Johnson, a brilliant thinker and writer on a vast array of technologically related topics, has recently released a new book, Future Perfect, that seems to demonstrate the exact opposite of 'free personal choice.' In its place, one finds the "peer progressive." To quote Johnson:
Inspired in many cases by the decentralization of the Internet, the movement uses the peer network as its organizing principle, with no single individual or group "in charge."
Thus, it is unclear how--to use Satell's words--these "faceless masses" of decentralized networks, propagating themselves with the very hardware that embodies the terrifying foreign intelligences, could possibly uphold the individual wills of their constituents. The etymological parallel between decentration and decentralization should speak to the absurdity of this perspective.

Satell's claim that human intent may still exist in these networks seems more coherent. Yet, the folk conceptions of 'humanity'--or any conception that even marginally suggests the possibility of primitivism--are certainly not a part of this coherence. Human without machine is a fantastical concept that, at best, belies humanity's over-attachment to its own personal meat puppets. Thus, when taken with a more sophisticated conception of the human-machine system, Satell's observation about intent is almost entirely devoid of content. Human intent is machine intent, as there is no distinction of kind.

What it takes to "make our peace" with this budding new era of the 'peer progressive' is not an appeal to such time-honoured ideas as "choice," but rather the annihilation of obsolete ideas like individuality, humanity, and even intelligence. There has never been an individual apart from the group, a human apart from their technologies, or an intellect apart from the vast collection of systems (e.g., emotion) that support and motivate it. These false dichotomies, though real and relevant in past characterizations of the world, hinder the current development of the species, if not its progress. It is not my goal to oppose them, for that would simply perpetuate the dichotomy. Instead, I hope to indicate that we already know what they lead to, and where they lead is not useful in a network-centric world.


1 comment:

  1. There is a distinction to be made here. The idea that humanity, the individual, and intelligence as known now must be radically changed is almost irrevocably true. A[G]I, genetically 'superior' intelligences, networks -- these all point to the coming emergence of higher than human intellect in a form that may entirely disregard the idea of the 'individual'.

    Yet "I" am not amused. Humanity is creating these technologies not to replace us but to augment us. Yes, they will be able to replace us, and perhaps we shall eventually be discarded as obsolete meat puppets hindering the assaulting progress of science, but "I" have a question. More than a question, "I" have a proposition. Actually, "I" have an existence, and that is intrinsically enough to argue this: Why should we bother to progress to the point of our own obsolescence? Are we trying to become God? Almighty, all-powerful, immortal, and infallible?

    It is in our humanity -- our limitation -- that purpose of any sort arises. A game with perfect players and no rules is no game at all. There is Experience in our flesh and blood. There is awe and wonder and courage and compassion and kisses and punches and love and hate, and life within us. When individuals, as self-reflexive organisms, decide to take an action with their limited resources, capacities, and abilities, they are doing something we intrinsically value. A being beyond this capacity -- this inevitable being that is non-self-reflexive, determined, and unable to think outside itself -- does not hold this intrinsic value in its actions.

    However, future individualistic AGIs and genetic beings will (may...) have this same capacity humanity does. Whether this evolution is good or bad is not something truly up to us, but it must be admitted that there is still something "Human" about these beings. So I make a distinction. There are some entities which do not hold themselves accountable, whose actions are chosen solely by their creators or environments, and who lead inevitable lives. Those are not "Human". Our values, or at least my values, lie in the individual who is able to choose. Who lives a limited life, viewing one's energy and efforts as worthy, authentic, and even artistic in themselves. Who deals with some sense of personal responsibility for the actions within its control. If this distinction does not exist at all, then we were doomed from the start, and the wheel of time is merely turning back to the beginning.

