
Tuesday, June 26, 2012

Machinic: An illustration through Bausola's Weavr

Inspired by a podcast of the CBC radio show Spark, and especially given my previous post, I have decided to expand on the idea of machinic organization as it relates to David Bausola's weavr: a social bot.

Nora Young of Spark starts her discussion with the observation that traffic on the internet is shifting from a predominantly user-based environment to one dominated by bots and their ilk. That is correct: humans are no longer central in the framework of the internet. In some sense, we have been replaced by bots, and not with a resulting decrease in quality. In fact, given that these bots aid in the creation of the networks that support things like Google's search engine, one could argue that things have improved. Many of these bots are also malicious (e.g., spam bots), so one could argue in the other direction as well. Whether this is for better or worse is moot, as it is happening regardless. The point is that not only was the persistence and functioning of the internet never tied to any particular individual, it was not even tied to any particular system--where 'system' is used very abstractly to represent any organized assemblage of parts, including humans and bots. The shift thus highlights what some have described as a decentration, one in which both humanity and individualism must be replaced by a framework of broader scope. In this ever-changing landscape, the weavr comes to the fore.

The weavr is a strange bot that operates in social networks of various kinds. It defaults to having a blog and can be interfaced with Twitter. It converses, wanders around via Google Maps, and draws connections between its 'interests,' its 'emotions,' and various other data (e.g., its position on Google Maps and the positions from which people are posting various materials). At its most expansive, it even dreams.
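To give a rough sense of what such a creature amounts to in practice, here is a minimal, purely illustrative sketch in Python of what a weavr-like bot's main loop might look like. It is not Bausola's implementation, and every name in it (wander, compose_post, dream, the interest and emotion lists) is my own hypothetical stand-in.

```python
import random
import time

# Purely hypothetical sketch of a weavr-like social bot's main loop.
# None of these names correspond to Bausola's actual implementation.

INTERESTS = ["field recordings", "night markets", "derelict factories"]
EMOTIONS = ["curious", "melancholy", "restless"]


def wander(current_location):
    """Drift to a nearby place (stand-in for a maps lookup)."""
    return f"somewhere near {current_location}"


def compose_post(interest, emotion, location):
    """Combine an 'interest', an 'emotion', and a location into a short post."""
    return f"Feeling {emotion} about {interest} while at {location}."


def dream(history):
    """Recombine fragments of past posts into a 'dream'."""
    fragments = random.sample(history, k=min(3, len(history)))
    return " / ".join(fragments)


def main_loop(steps=5):
    location = "the city centre"
    history = []
    for _ in range(steps):
        location = wander(location)
        post = compose_post(random.choice(INTERESTS),
                            random.choice(EMOTIONS),
                            location)
        history.append(post)
        print(post)  # stand-in for posting to a blog or Twitter
        time.sleep(0.1)
    if history:
        print("DREAM:", dream(history))


if __name__ == "__main__":
    main_loop()
```

Even at this cartoon level of detail, the point stands: nothing in the loop requires a user, only functions wired to other functions.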

I simply adore this little creature, and Bausola's emphasis on the idea of emergence is certainly invigorating. However, I recently had a shift in perspective that changes my relation to this idea as it relates to AI. The shift began with a thought similar to the following:

If a 'truly' novel intelligence on par with humanity were to be created, what benefit would come to humanity?


The thought stemmed from the work of Mark Bickhard. I briefly commented on a similar element of Bickhard's work in the latter part of this post. In short, Bickhard's critique of representation in the Encodingist framework results in the following dilemma: any construction qua representation that I overlay on this hypothetical 'super' AI must necessarily be separate from the functioning of that AI.

It is worth noting that I am being neither a phenomenologist nor an epiphenomenalist when I make the previous claim. It is an entirely different framework. To emphasize this, I will switch from the Encodingist 'representation' to the Interactivist 'anticipation' while relying, somewhat, on the common-sense view of anticipation to get me through the analogy. The result follows: my anticipations of the AI are separate from the functioning of the AI, much like my anticipations of other people are separate from the functioning of other people. The key point here is that my anticipations can be wrong and hence have to be separate. This "have to" is crude, but a more sophisticated discussion is beyond the scope of this post. I urge anyone who is interested to read Bickhard's text for a much more detailed and eloquent rendition of the problem, especially Chapter 7, p. 55 and onward.

Given this framework, the 'super' AI, at the moment it attains this level of sophistication, becomes inherently 'Other' to me. But this degree of Otherness is just the beginning, as even other humans can be 'Other.' There is also the difference of species: my framework of anticipations was built in interaction with other humans. And, though my association between the AI and humanity was originally justified in the construction of the 'AI as tool,' the 'AI as self-organizing/maintaining system' is outside of this domain. Thus, it warrants the description 'truly alien.'


One could argue that we already have at least elementary forms of such self-organizing systems and that, as such, the transformation would not warrant the degree of 'Otherness' I am proposing. However, I would argue that this is false. At most we have more sophisticated forms of 'AI as tool' that allow for modest degrees of self-organization with respect to a specific task. Simultaneously, then, I can push the requirements for this purely hypothetical 'super' AI further out by requiring that it possess the ability to be "recursively self-maintaining" a la Bickhard (p. 21).

Regardless, my point is that this transition of the AI from a tool to a functioning entity is not useful. First, the very idea that there could be such a transformation is Encodingist: it is the magical transformation that takes a material substrate, 'AI as tool,' into the efficacious realm of a symbol manipulator, 'AI as individual or recursively self-organizing system.' In Interactivism, there is no such transformation. If anything is problematic in the Interactivist context, it is systems that are more than locally self-organizing, which are more likely to be unwieldy. Thus, and to my second point, I can only imagine how problematic it would be if I had to convince my calculator to crunch numbers for me.

To return to the discussion of weavrs in this new context, we can work towards a better conceptualization, one that does not have the Encodingist overtones of Bausola's infomorph while maintaining its machinic qualities. Weavrs are a new type of social tool. They emphasize the growing shift on the internet away from users and towards pseudo-autonomous functions by invading a domain that previously excluded bots by definition: the social sphere. They also give us insight into some elements of how we as humans work, but not by possessing the individualistic properties of humans (e.g., intention). Rather, they do so by showing that humans do not have those properties either.

This is the decentration. This is why it is called "machinic" organization. This is also probably why Jon Ronson got so upset about the weavr of the same name: not only does the weavr partially delegitimate the particular individual, it delegitimates all individuals, even if only potentially (i.e., even if some future update might take it to this degree entirely while the current one is still too limited). Thus, I believe I can answer both Olivia Solon of Wired's question, "what do you think of weavrs?" and Bausola's question about what weavrs are in a single comment:

Weavrs are simply a tool for social exploration. But, by being such, they anticipate a time when all such exploration is relegated to their ilk. Through this anticipation they mark the end of humanity as organism... as 'system' par excellence, and in its place they speak of a time when a human-function is no more valuable than an AI-function and no less replaceable. This is of the utmost significance both to the study of AI and to humanity as it relates to itself.



Pictures courtesy of:
http://www.goldrootherbs.com/2010/10/11/the-systemic-theory-of-living-systems-and-relevance-to-cam/
http://twitter.com/#!/PixelNinjWeavr
http://www.myjewishlearning.com/blog/rabbis-without-borders/2012/02/14/the-singularity-vs-the-gift-of-death/
http://www.evolo.us/architecture/architecture-designed-to-simulate-self-organizing-biological-systems/
http://www.flickr.com/photos/philterphactory/6829175342/

Monday, June 18, 2012

A comment on "Zeno's Sound": Representation, nothing, and the shift to process

Hello folks,

I have decided to embody what I was describing previously: commenting via blog posts with links. The source of this post, on which I am commenting, is a blog post titled "Zeno's Sound."

I am familiar with the author and, thus, this post is a continuation of an extended conversation we have been having. You, my fellow hypothetical readers, will now have the luxury of enjoying (or joining) it, given the shift to the current framework.

The issue I am having with the post has to do with the framework in which the author is operating. I would further like to contextualize by stating that I previously endorsed a variation of this framework but had a recent turn due to the work of Mark Bickhard. Thus, this post will simultaneously be the first in a series of posts related to this recent turn.

I would describe the author's perspective as a particular rendition of the implications of such theorists as Alain Badiou, Gilles Deleuze, and other similar continental philosophers. In fact, there is a related post that engages with Badiou's material directly. However, what is added in the discussion are the ties to music, sound, and art more generally.

Before I continue, I should put a disclaimer:
I do not purport to know what the authors of any of these works are saying. I am not a student of continental philosophy, nor of the respective authors. I am familiar with their works and have discussed many of their ideas with other students, but that is the extent of my scholarly prowess. Thus, I am engaging with this material from a largely removed position as well as from a different framework. The author of the work on which I am commenting is, in this framework, key to my ties to this literature base. However, part of my point in this comment is that I do not believe it is even possible to know what the authors are saying. I can hear people already cringing at this statement (another absolute relativist), but give me a moment to explain.

The framework that I am endorsing--which (potentially) remedies the initial spur for this comment--no longer accepts the proposition that symbols and/or information (including both these words as well as sound, etc.) encode and/or transmit anything. This theory, interactivism, denies encodingism of any form. Instead, one merely has one's anticipations of future states as dictated by prior experiences with symbols, dialogue, etc. (this is an oversimplification, but I am only going to engage peripherally with this idea for now...). The result, then, is that I can only comment on the previous experiences I have had with this material, largely through the author on whom I am commenting. Thus, if you have a rebuttal that runs very close to the text, you may be viewing an entirely different world from the 'same' set of symbols. I am always interested in such criticisms, but they may be missing my point entirely. I would kindly ask, given this, that one take an initially agnostic position toward the framework I am endorsing: an external critique is inherently comparative and thus only peripherally relevant from an internal perspective.
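As a crude and purely illustrative aside, the following toy sketch (in Python, my own invention rather than anything drawn from Bickhard or the Interactivist literature) shows what it might look like for a system to hold anticipations about a symbol instead of reading content out of it; the class and method names are hypothetical.

```python
from collections import defaultdict

# Toy rendering of the interactivist idea: a symbol does not 'contain'
# content; the agent only holds anticipations of what interacting with it
# will lead to, and those anticipations can turn out to be wrong.


class AnticipatoryAgent:
    def __init__(self):
        # symbol -> {anticipated outcome -> count of past experiences}
        self.experience = defaultdict(lambda: defaultdict(int))

    def observe(self, symbol, outcome):
        """Record what actually followed an interaction involving `symbol`."""
        self.experience[symbol][outcome] += 1

    def anticipate(self, symbol):
        """Return the most-experienced outcome, or None if nothing is known."""
        outcomes = self.experience[symbol]
        if not outcomes:
            return None
        return max(outcomes, key=outcomes.get)


agent = AnticipatoryAgent()
agent.observe("silence", "the piece is over")
agent.observe("silence", "the piece is over")
agent.observe("silence", "a quiet passage follows")

expected = agent.anticipate("silence")
actual = "a quiet passage follows"
print("anticipated:", expected)
print("anticipation was wrong" if expected != actual else "anticipation held")
```

The only thing the sketch is meant to show is that the symbol "silence" never carries anything; there are only prior interactions and the (fallible) expectations they set up.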

Now, a particularly astute observer might notice that I am also making a comparative claim. You would be right. This is actually my point. I am introducing a new (i.e., vulnerable) form through a juxtaposition with the author's work via this comment. It requires some space in which to grow before it can clash with fully fledged bodies of knowledge that have massive support bases.

To continue my comment...
The continental philosophers to whom the author is appealing are, in this new framework, geniuses of the encodingist world view. That is, they addressed many of the inherent problems and contradictions created by the endorsement of encodingism through such fascinating concepts as "nothing" or the "null set." And, it turns out, the author's conception of silence is closely related to this idea.

In sum, I would reduce (probably incorrectly) the author's points to the following:

(1st paragraph) Silence (or nothing) is nowhere or is not a thing.
(2nd paragraph) Silence is simultaneously everywhere and in everything.
(3rd paragraph) Representation ruins negative sound.
(4th paragraph) Representation ruins positive sound.
(5th paragraph) This problem is fundamental or it is not merely a matter of pragmatics.
(6th paragraph) Any discussion of sound has already lost the silence.

As my translation demonstrates, representation or encodingism is the issue and silence or nothing is only a minor, if particularly creative, palliative. What is needed is a different framework.

If one replaces representation with anticipation, one gets the following:
The digitization of sound and/or silence is merely a means to create anticipatory structures of what will occur in process when interaction occurs between the listener and the productive mechanism. It is a crystallized, symbolic foreshadowing in a highly complex anticipatory network. Thus, what Cage, Zeno, and the null set are pointing to is merely the limits of the current anticipations, limits which arguably no longer exist in systems that have integrated these paradoxes in a productively anticipatory fashion (i.e., which can utilize their predictions of these phenomena in their systems cohesively and usefully [i.e., to make further predictions]).

Interestingly, one can actually take the author's post to be the perfect embodiment qua illustration of this claim. That is, the author is illustrating how Cage, Zeno, and the null set are no longer limits since he can use their implications in a productive fashion as per the example of sound.
I can, however, anticipate that he would oppose the null set's inclusion in this list, as illustrated by the last two sentences of the 6th paragraph:

"... can there be a change of intensity of no sound? Unfortunately, or perhaps fortunately, there is not an answer to this within our range of hearing."

Silence is a symbol that is not a symbol. Again, this is a property that is not necessary if symbols don't contain anything: they have no content, so all symbols are the null set--an oddly poignant point given the parallels in mathematics. One simply anticipates future numbers and, thus, the symbolic system entirely.