=== THESE ARE MY ROUGH, DAILY NOTES. TREAT THIS AS UNFILTERED MUSINGS, NOT AS FULL-FORMED PROPOSITIONS :) ===
Read the Atlantic interview with Christof Koch, who asserts that consciousness might be a property of all complex networks. Also read an Atlantic summary of Latour and Ihde’s argument against instrumentalism (i.e., the view that technology is neutral, and therefore, “guns don’t kill people, people kill people”), which appears to be what Latour (1994) calls the sociological position. Technologies, they claim, transform us. The gun is not the same gun when it is in someone’s hands, and the person is not the same person when they have a gun in their hands; both have a greater probability of causing damage. (I’m going to ignore for now any claims that people are quicker to anger with a gun in their hands.) The point is: the gun affords certain actions more than others.
Can this argument be extended to consciousness and the internet? If the Internet affords anything, does it afford a greater capacity for consciousness? Just as people’s ability to recall information is altered when they are “equipped” with the Internet (and remember, altered behaviour is the justification for this claim, although the salient difference is more than behavioural, if I read Latour correctly), so the Internet is altered by human usage. Algorithms that associate and categorize information, like Google’s PageRank, take effect whenever humans use search engines, creating new associative links.
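To make the “links reshape the network” point concrete, here is a minimal toy sketch of the PageRank idea as power iteration. The three-page link graph is invented, the implementation is a bare illustration (it assumes every page has at least one outgoing link), and it is of course nothing like Google’s actual production algorithm:

```python
# Toy PageRank via power iteration. Illustrative sketch only, not
# Google's actual algorithm; assumes every page links to something.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Rank flowing into p from every page q that links to p,
            # split evenly among q's outgoing links.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / n + damping * incoming
        rank = new_rank
    return rank

# Invented link graph: a -> b, c;  b -> c;  c -> a
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" accumulates the most rank: both other pages link to it.
```

The point for the notes above: the ranking is entirely a function of the link structure, so every link humans add when they use (and build on) the web literally changes what the algorithm computes.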
Indeed, Latour draws four distinctions in how technology (“technique”) mediates humans. First, he treats humans and technologies as being somewhat indistinguishable from (or symmetric to) each other. Then he describes the following modes of mediation:
Translation: assigning agency
Composition: activating behaviour
Reversible black boxing: I’m not sure how best to describe this, but something around the impossibility of separating the object from the human activity on it
Delegation: substituting intent
Latour claims that “our delegation of action to other actants who now share our human existence is so far progressed that a program of antifetishism could only lead us to a nonhuman world, a world before the mediation of artifacts, a world of baboons,” which kind of leads to the question: so what if it does?
Nevertheless, there is some sense that we have gone past the point of no return, particularly if we consider artificial intelligence. Latour seems to be lending support to a kind of posthuman stance, where a “new humanism” can only be found if you think of actants as symmetric: subjects and objects have equally important roles to play.
Last night, I had a long discussion with Milo, who basically advanced an essentially sociological (constructivist) position: “people kill people, guns don’t kill people”. I need to present more arguments that lean towards technological determinism.
Tried to finish reading Latour but I feel like he kind of just makes categories that are not really suitable as categories in that they are not in any way orthogonal to each other.
Stuff read today:
Heidegger, “The Question Concerning Technology and Other Essays”
Daniel C. Dennett’s Consciousness Explained
Themes for further exploration
Associative learning and consciousness
=== THESE ARE MY DAILY NOTES. TREAT THEM AS UNFILTERED CONTEMPLATIONS, NOT AS FULLY-FORMED PROPOSITIONS :) ===
Critiquing Pinker’s How the Mind Works, Bringsjord (2001) attacks the idea that the human mind is an evolved computer. His primary line of attack rests on (or adapts?) Wallace’s Paradox: that there are cognitive capabilities that seem to be “overkill”. He constructs a logical argument based on the idea of a correspondence (e) between problems (p) that our ancestors faced and powers (o) that they had to solve those problems. In order to defeat his argument, you have to either: a) accept exaptation as a mechanism for explaining overkill powers; or b) accept the “reductionist” principle that higher-order (more abstract) problems can eventually be broken down into smaller problems that our ancestors might have faced “in their concrete behaviours”.
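My own rough reconstruction of the shape of the argument, using his symbols (the quantifier form here is my assumption, not his exact formulation):

```latex
% Adaptationist picture Bringsjord targets: every power o in our
% repertoire O corresponds, via e, to some ancestral problem p in P
% that o was selected to solve.
\forall o \in O \;\exists p \in P :\; e(o) = p
% An "overkill" power o* is then a counterexample: a power with no
% matching ancestral problem.
\exists o^{*} \in O \;\neg\exists p \in P :\; e(o^{*}) = p
```

Options (a) and (b) above are then the two ways of denying that any genuine o* exists: exaptation re-pairs o* with a different p, and reductionism decomposes the “higher-order” problems o* solves into ancestral-scale ones.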
In attempting to disprove exaptation, Bringsjord’s greatest fallacy is to assume that evolution works something like this: there is a problem p, and when humans evolve the power o that can solve problem p, o is absolutely unifunctional; it does nothing other than solve p. He calls this position exaptation and recruits Pinker in his stance. In fact, both Pinker and he seem to be confused about the entire thing. No one “does the exapting”.
Bringsjord’s attack on reductionism is that it explains nothing: as a theory, there is no example that could be raised by which it can be disproved. Fair enough. But is that sufficient to dismiss it? He goes on to say that reductionism cannot account for infinitary logics. I’m actually not sure what he’s trying to get at here, and I have a feeling he’s not very convinced of it himself. He goes on to question the survival value of the adaptability of basic (first-order) behaviours: “What survival value did this modifiability have at the time that our foraging ancestors were running about? Our foraging ancestors could have [done] just dandy with a slot system frozen at the level of expressiveness required for finding berries and killing animals.” This is… well, actually, it’s a good point. The only thing I can think of is that at some earlier point, the brain evolved the capacity to be reconfigurable in general… not just for symbolic swapping, but in general, period.
The biggest problem with this paper is that it’s old. Not its fault, of course. But it’s missed out on all the computational modelling that’s been done over the past 14 or so years, and on advances in neuroscience. Bringsjord criticizes psychology for not being able to deliver on the promise of reverse-engineering human behaviour. But of course, we now know that it’s neuroscience, more than psychology, that is capable of delivering on that promise.
He does offer, however, a more damning criticism: Pinker invokes “cognitive closure” to explain our inability to address deep philosophical problems such as free will and sentience. Somewhat consistently with Gödel (?), he proposes that we are unable to sufficiently explain our own free will because we’re not equipped to do so. But perhaps if we had a maker, they could. So… could we, as the makers of the Internet, explain what the characteristics of the Internet are? Is it conscious? Perhaps the Internet itself cannot deduce its own consciousness, but we who sit on the outside could…
… Except that we’re not on the outside. We are *part* of the Internet, inasmuch as it is as meaningless (for now) to talk about the capabilities of an Internet without users as it is to talk about the agency of a gun without a person wielding it (Latour).
If I think about this carefully in relation to my dissertation: does the Internet have enough “powers” to be conscious, of which one feature might be the ability to handle infinitary logics? I have no idea. But this paper gives me an idea of what kind of properties I’m looking for to be able to say that the Internet is conscious. And that’s ultimately what I’m looking for. For instance, the Internet was designed to do “certain things”. Is it possible that it’s doing “higher-order things” now that could not have been foreseen by the original designers? Is it capable of whatever this guy Bringsjord calls infinitary logics?
Bringsjord (2001). Are we evolved computers?
Stuff to read
Gödel, Escher, Bach
Themes to explore
The latest in AI research: have we built general purpose machines that, when successful in solving one type of problem, are able to solve another? What models for emergent (and unexpected) behaviours are there?
Can a machine ever be aware of its own limitations?
What can neuroscience tell us about consciousness?
I realize that I don’t necessarily want to model cognitive behaviour or, perhaps more precisely, I don’t want to advance new models for cognitive behaviour, implement them, and test them. That’s a lot of bricklaying that I’m not interested in. I just want to work with what other people have come up with before.
I’m also realizing something about my choice of the Internet as a field of study. Not only does it align with my previous experience, but inadvertently I have to turn to neuroscience to try to understand some things… which is good for CogNovo collaboration.
I just registered for a Coursera course on data analysis and statistical methods.
How did we come to care so much about creativity? The language surrounding it, of unleashing, unlocking, awakening, developing, flowing, and so on, makes it sound like an organic and primordial part of ourselves which we must set free—something with which it’s natural to be preoccupied. But it wasn’t always so; people didn’t always care so much about, or even think in terms of, creativity. In the ancient world, good ideas were thought to come from the gods, or, at any rate, from outside of the self. During the Enlightenment, rationality was the guiding principle, and philosophers sought out procedures for thinking, such as the scientific method, that might result in new knowledge. … They saw imagination as a kind of mental scratch pad: a system for calling facts and images to the mind’s eye and for comparing and making connections between them. … Coleridge made a useful distinction, largely lost today, between two kinds of imagining. All of us, he thought, have a workaday imagination, which we use to recall memories, make plans, and solve problems; he called this practical imagination “fancy.” But we also have a nobler kind of imagination, which operates, as Engell puts it, like “a human reflex of God’s creative energy.” The first kind of imagination understands the world; the second kind cares about it and brings it to life. …This watchful, inner kind of creativity is not about making things but about experiencing life in a creative way; it’s a way of asserting your own presence amidst the much larger world of nature, and of finding significance in that wider world. By contrast, our current sense of creativity is almost entirely bound up with the making of stuff. …. How did creativity transform from a way of being to a way of doing? The answer, essentially, is that it became a scientific subject, rather than a philosophical one…. [It involves the] conflation of the production of things with the living of a creative life. 
From this point of view, creativity is really just a fancy kind of productivity. (“Creativity Creep” by Joshua Rothman)
Found this scrap from my notes as a Master’s student at SFU. I really want to revisit the idea of anthropomorphic movement: we identify with things that are closer to being human. There’s also an initial paper that I wrote about this. I’ll have to look through this.