Inter-Web Nostalgia pt. 2

Douglas Engelbart, December, 1968

More here

Engelbart’s model of augmentation, or H-LAM/T: Human using Language, Artifacts, Methodology, in which he is Trained:

This model is based on the Sapir-Whorf linguistic hypothesis, in which language is supposed to condition the thoughts and behaviors of its speaker community. Like Sapir-Whorf, Engelbart’s version of language is both deterministic and relativistic, but it also moves beyond linguistics to include toolmaking and education, collecting them all under the common heading ‘augmentation.’ In more philosophical terms it is a media theory, a theory of man as essentially mediatic.

In Engelbart’s system there is no artifact without a human, nor human without an artifact. Categorizing language, tools (artifacts), theory (methodology), and education (training) all as different forms of augmentation is what allows him to conceive of communication as essentially technological and all-encompassing. With the development of his NLS (oN-Line System), he not only invented the first hypertext-based multi-user computer interface (which would be selected as the second node of ARPANET) but also introduced the GUI (Graphical User Interface), the now-familiar visual grammar of windows and hypertext. Apparently the idea came from the same place all the big ideas in information and computing science came from: working with radar during WWII. He also invented the mouse.

[image: engelbart_mouse.gif]

“I don’t know why we called it a mouse…it started that way, and we never did change it.”

Here’s more Engelbart:

Our culture has evolved means for us to organize the little things we can do with our basic capabilities so that we can derive comprehension from truly complex situations, and accomplish the processes of deriving and implementing problem solutions. The ways in which human capabilities are thus extended are here called augmentation means, and we define four basic classes of them:

  1. Artifacts–physical objects designed to provide for human comfort, for the manipulation of things or materials, and for the manipulation of symbols.
  2. Language–the way in which the individual parcels out the picture of his world into the concepts that his mind uses to model that world, and the symbols that he attaches to those concepts and uses in consciously manipulating the concepts (“thinking”).
  3. Methodology–the methods, procedures, strategies, etc., with which an individual organizes his goal-centered (problem-solving) activity.
  4. Training–the conditioning needed by the human being to bring his skills in using Means 1, 2, and 3 to the point where they are operationally effective.

The system we want to improve can thus be visualized as a trained human being together with his artifacts, language, and methodology. The explicit new system we contemplate will involve as artifacts computers, and computer-controlled information-storage, information-handling, and information-display devices. The aspects of the conceptual framework that are discussed here are primarily those relating to the human being’s ability to make significant use of such equipment in an integrated system.

In amplifying our intelligence, we are applying the principle of synergistic structuring that was followed by natural evolution in developing the basic human capabilities. What we have done in the development of our augmentation means is to construct a superstructure that is a synthetic extension of the natural structure upon which it is built. In a very real sense, as represented by the steady evolution of our augmentation means, the development of “artificial intelligence” has been going on for centuries.

f. Compound Effects

Since many processes in many levels of the hierarchy are involved in the execution of a single higher-level process of the system, any factor that influences process execution in general will have a highly compounded total effect upon the system’s performance. There are several such factors which merit special attention.

Basic human cognitive powers, such as memory, intelligence, or pattern perception can have such a compounded effect. The augmentation means employed today have generally evolved among large statistical populations, and no attempt has been made to fit them to individual needs and abilities. Each individual tends to evolve his own variations, but there is not enough mutation and selection activity, nor enough selection feedback, to permit very significant changes. A good, automated H-LAM/T system should provide the opportunity for a significant adaptation of the augmentation means to individual characteristics. The compounding effect of fundamental human cognitive powers suggests further that systems designed for maximum effectiveness would require that these powers be developed as fully as possible–by training, special mental tricks, improved language, new methodology.

In the automated system that we contemplate, the human should be able to draw on explicit-artifact process capability at many levels in the repertoire hierarchy; today, artifacts are involved explicitly in only the lower-order capabilities. In the future systems, for instance, it should be possible to have computer processes provide direct and significant help in his processes at many levels. We thus expect the effect of the computer in the system to be very much compounded. A great deal of richness in the future possibilities for automated H-LAM/T systems is implied here–considerably more than many people realize who would picture the computer as just helping them do the things they do now. This type of compounding is related to the reverberating waves of change discussed in Section II-A.

Another factor can exert this type of compound effect upon over-all system performance: the human’s unconscious processes. Clinical psychology seems to provide clear evidence that a large proportion of a human’s everyday activity is significantly mediated or basically prompted by unconscious mental processes that, although “natural” in a functional sense, are not rational. The observable mechanisms of these processes (observable by another, trained person) include masking of the irrationality of the human’s actions which are so affected, so that few of us will admit that our actions might be irrational, and most of us can construct satisfying rationales for any action that may be challenged.

via

The problem he foresees in relation to the ‘human unconscious’ is the problem of everything his theory is forced to leave out. In tune with the rest of his logic, thinking the unconscious easily slips into social theory and the structures of communication. The H-LAM/T diagram looks something like an intersubjective version of Freud’s theory of the psyche, divided as it is into hierarchical processes traversed by energy flows.

[image: freud_topo.gif]

But Freud’s internal processes are ignored, or more accurately, there is no real difference between internal and external when considered in terms of the diagram, only a relationship between two process structures through their ‘matching processes.’ Though the word ‘human’ applies to one of them, the relationship is technical, not psychological. Hence there is no hierarchy, no primary drives to be conditioned by a secondary censorship. The fundamental opposition here is not internal to a privileged process structure (man); it is found between the diagrammed man-machine relation and the ‘outside world,’ or more simply, between known and unknown (knowledge being the equivalent of its diagrammatic representation). Knowledge, the diagram, and communication link to form a technical network defined by function and interaction. This is a different sort of ontology than that of empiricism, in which there is a difference between representation and reality that requires constant testing. Here knowledge is increased by expanding the territory of the diagram, by simultaneously extending and intensifying its functionality. The old tragic theological-philosophical distinction between theory and practice breaks down. Everything is reduced to expansion. If terms like ‘human,’ ‘whole,’ ‘material,’ ‘reference’ and ‘intrinsic’ continue to exist, it is because they are useful labels for defining a network; if they have some other kind of meaning it will manifest as an externality and provocation.

part one here

52 Responses to “Inter-Web Nostalgia pt. 2”

  1. If terms like ‘human,’ ‘whole,’ ‘material,’ ‘reference’ and ‘intrinsic’ continue to exist, it is because they are useful labels for defining a network; if they have some other kind of meaning it will manifest as an externality and provocation.

    Are you saying that basically we’re dealing with the cognitivist-behavioral black box, where only input and output matter?

  2. they line up but i think the underlying theory is different. in this technocratic approach the level of detail and effectiveness of the theory/system increase together. it’s a reduction to functionalism, not a reduction to some pre-existing entity (like soul, brain, society, etc). that it sometimes seems to insist on the latter type of reduction is contingent on the current state of science.

    like as neuroscience and cognitive science advance, it will be possible to integrate the brain with computers in a more intensive way. brain-computer integration is also the condition for these advances. so as foucault says, ‘power’ advances along with knowledge, except the process is technological and not just discursive/economic.

    right now there is still too much about the brain that is unknown, or unmapped in terms of its functionality, which is the same thing to these people. the brain is still to a large degree a mysterious substance (black box) that unpredictably interrupts the ‘symbolic’ (this network that is under construction). there are also aspects of society that aren’t mapped yet either. this is what they have in common. so at the moment society and brain are 2 separate disciplinary problems for this technocratic system, but not necessarily and probably not for long.

  3. functionalism, behaviorism, cognitivism, neurocognitive science – they are all opposed to psychoanalysis, claiming that the Unconscious doesn’t exist (because it cannot be measured). Haven’t been following advances in neuroscience but I doubt they’ve moved beyond that initial (in my view) error.

  4. if you mean unconscious in the sense of mental activity that we are unaware of in our day to day lives and unable to directly control, then neuroscience is working on an explanation. from what i’ve seen i agree that other psychological approaches have really questionable heuristics that just ignore the problem or rely on drugs. the discovery that cognition involves emotion is i think undermining a lot of these theories (like the computational theory of mind). but the lacanian idea of a linguistic unconscious that is both combinatoric and ‘social’ seems not too different from what information science (the early form of which influenced him) works on, and its basic assumptions seem to me recognized by engelbart in the above excerpt.

  5. traxus4420 Says:

    i guess what i’m saying is that the presence of a ‘black box’ is provisional, and doesn’t imply a location, absolute limit, or that ignorance (‘bracketing’) of the unknown is acceptable. maybe in some practical applications of existing knowledge, but not in the possible expansion/intensification of that knowledge.

    the only thing that could absolutely limit this functionalist approach is if there is a limit to what can be ‘digitalized,’ or broken down into its smallest discrete units and understood systematically. so functionalism may be limited now in comparison to other theories, but not necessarily in the future. in a sense it’s not even a theory, it’s just a description of the current state of technology.

  6. traxus4420 Says:

    engelbart’s seems to me to be a general functionalism, not a functionalist theory of mind. going by the H-LAM/T model the latter wouldn’t even really be possible, since its scope is too limited.

  7. the only thing that could absolutely limit this functionalist approach is if there is a limit to what can be ‘digitalized,’ or broken down into its smallest discrete units and understood systematically. so functionalism may be limited now in comparison to other theories, but not necessarily in the future.

    please expound, it’s a bit impressionistic. in the Unconscious of course because the chain of signifiers endlessly slides there’s no way you can really ”break it down”.

  8. traxus4420 Says:

    doesn’t lacan himself argue for the primacy of 0 and 1? i thought the unconscious in lacan was a combinatoric abstract machine that consisted of discrete units. maybe it’s true that the unconscious can’t be diagrammed, but why couldn’t it be modeled? as information science has moved beyond ’60s engelbart, i believe the current ideal model isn’t the diagram anymore, it’s the computer. you’ve even got people like stephen wolfram arguing that the universe is essentially computational, a claim i don’t pretend to fully understand but it speaks to how theoretical science is becoming more and more dependent on technology. so maybe ‘unit’ should be replaced by ‘bit’ and ‘systematically’ by ‘computationally.’

    one also has to remember engelbart’s model isn’t just a network, it’s a digital network. his theory of augmentation is based on something he actually built. the digital material (at bottom it’s just 0s and 1s) is concealed by the abstract symbols into which he has to put it in order to communicate in natural language. but it is a human-computer interface he’s proposing. part of what i think he’s doing in his theorizing of human cognition is retroactively making the human mind compatible with computer technology. the theory is not purely descriptive or referential; it translates cognition into the terms best suited for technological augmentation.

    anyway, i’m just now trying to teach myself this stuff, so maybe it will make more sense in future installments.

  9. doesn’t lacan himself argue for the primacy of 0 and 1? i thought the unconscious in lacan was a combinatoric abstract machine that consisted of discrete units

    yes it’s a combinatoric machine but the point is that it works on the relations between the discrete units, as in de saussure’s linguistics, and because they are in a moebial relationship to each other, constantly shifting, there’s no way you can ”affix” them (in the Unconscious I mean). That would be like creating a perpetuum mobile computer or a translation machine. I think still within realms of total science fiction.

  10. traxus4420 Says:

    1. there is a difference between ‘affixing’ (locating, bracketing off, etc.) and modeling. in the fink book he makes a simple model of the unconscious through a coin-toss game. a more complex model is at least theoretically possible (see the sketch at the end of this comment).

    2. lacan’s unconscious is itself a model of observed processes, suited to his particular needs, and theoretically replaceable with something else. especially if it incorporates more phenomena than lacan was able to.
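
    to make that concrete, here’s a minimal sketch of the sort of coin-toss coding fink walks through (my own reconstruction in python, not fink’s or lacan’s notation, so take the details with salt): overlapping triples of random tosses are recoded as 1 (all the same), 2 (mixed), or 3 (alternating), and certain successions of codes simply never occur.

    import random
    from collections import Counter

    def code(triple):
        # 1: all three tosses the same; 3: strict alternation; 2: everything else
        a, b, c = triple
        if a == b == c:
            return 1
        if a != b and b != c:
            return 3
        return 2

    tosses = [random.choice("+-") for _ in range(100_000)]
    codes = [code(tosses[i:i + 3]) for i in range(len(tosses) - 2)]
    transitions = Counter(zip(codes, codes[1:]))

    for pair in sorted(transitions):
        print(pair, transitions[pair])
    # (1, 3) and (3, 1) never show up: the overlap between successive
    # triples rules them out. the 'syntax' comes from the coding alone,
    # not from the coin.

    the point is just that a purely random material, once recoded, already obeys a syntax of permitted and forbidden successions, which is roughly what i mean above by ‘modeling’ as opposed to ‘affixing.’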

  11. well I don’t WANNA believe that machines are better than humans alright ????

    i just opened a continuation of the utopia discussion with ref. to Gnosticism, Inland Empire and Burial. I will consider it unmanly if you don’t show up just because you can’t cope with Jonquille!

  12. traxus4420 Says:

    ha! yes i have been unmannerly about commenting lately. exploiting zizek and jameson for page hits has made me a power-mad virtual shut-in.

    i’ll stop by after listening to the burial album.

  13. This idea of cognitivism denying the unconscious or the impact of emotion on thought is a red herring. Even a psychoanalyst has no DIRECT access to the analysand’s unconscious. My awareness of your mental and emotional activity is mediated by language, gestures, symptoms — external cues. From these cues I might be able to infer internal states and processes, but I cannot perceive them directly.

    Cognitive researchers structure experimental protocols so as to evaluate alternative inferences about black-box processes. They do this by systematically tweaking inputs and observing outputs, including linguistic outputs, mistakes, tone of voice, visual attention, etc. That’s all anybody has to work with. If the cognitivists believed that consciousness was the only important thing they’d rely exclusively on conscious research subjects’ self-reports about mental processes. They don’t.

  14. That’s right Clysmatics and what I INFER from your comment is that cognitivism has been so embedded in your BRAIN PROTOCOLS that the idea of the Unconscious upsets you and leaves you feeling messy. You would rather have it clean and unambiguous: input, processing (which MAY include hypothetical constructs, although you’d rather deal with intervening variables) and HOP! output. This inference I make not on any presupposed unconscious thoughts you might have on the subject, but on the affective tone of your comment, which barely disguises irritation with psychoanalysis that has apparently upset your daily intellectual functioning.

  15. Not only am I irritated with psychoanalysis; I’m irritated with you for persisting in spewing this anti-cognitivist output despite your having audited various tutorials I’ve offered on this topic and my pointing out how your hero the pinocchio theorist is naively misinformed and fundamentalistically irrational on this subject. Having seen ample evidence of your intelligence and facility in incorporating new concepts into your cognitive schemata, I must infer that you are deploying various parodic protocols intended to agitate my affective tone.

  16. Actually you did not elaborate on why you think my hero and cyberpunk icon Shaviro is fundamentalistically irrational on the subject, and in the thread here you give me some vague INFERENCE of the cognitivists ”tweaking inputs” so if you want to be Jiminy Cricket please provide some more MEAT.

  17. I refer to Shaviro’s recent critique of an evolutionary psychology book, which he repeatedly associates with cognitivism in his post. He criticizes some bad science, which is fine. Then he says that “the brain, or the mind, or “human nature” in general, is massively underdetermined by the particular biological traits of which the evolutionary psychologists make so much.” Psychological research makes no claim to being able to arrive at a fully deterministic model of human thought, behavior, etc. They’re mostly looking for patterns in data that looks largely like randomness (or individual difference, if you like). A study that accounts for 20% of the variance between individuals is regarded as a strong finding — that leaves 80% unaccounted for. The research model isn’t deterministic; it’s probabilistic. (A small simulation at the end of this comment shows what that looks like in practice.)

    The evolutionary psychologist says that at the instinctual level we can’t tell the difference between real and simulated threats. Shaviro says that’s wrong: fine, I agree. Then he says: “I am inclined to think that William James is right in saying that we feel afraid because we have a certain physiological reaction, rather than we have the physiological reaction because we feel afraid. But this is precisely why it is a category error to think that fear can be defined in cognitive terms, which would have to happen in order for the question of whether the experience is real or simulated to even come up.” First, “inclined to believe” isn’t a particularly compelling argument in an empirical field of study. But then in the rest of the objection I have no idea what he means. The cognitivist doesn’t assert on principle that cognition precedes and causes physiological response and affect: she figures out ways of investigating the relationships between cognition, physiology and affect. It’s an empirical question, not the logical consequence of a structured philosophical system. But Shaviro says that the evolutionary paradigm fails because it “has already assumed what it claims to prove.” If so, that’s just bad scientific practice in this particular case. If the hypothesis fails the empirical test it will be debunked.

    “Its cognitivist assumptions… leave it utterly incapable of dealing with the non-cognitive, affective aspects of human life, as well as (ironically enough) with the ways that “cognition” itself contains far more than it can account for.” As I said in a prior comment and again in this one, that’s just not so. Cognitivists study the interrelationships among cognition, affect, physiology, behavior, motivation, etc. They have no a priori commitment either to denying the non-cognitive aspects of human experience nor to asserting that the cognitive controls all these other aspects of life. And again, empirical cognitive psychology is prepared to acknowledge that there’s way more to cognition than they can account for in their probabilistic findings.

    So why does Shaviro level all these unfounded accusations against cognitivism? He doesn’t seem to understand what he’s talking about, but he makes pronouncements as if he does. Either he’s generalizing from a couple bad studies to the whole field, or he’s looking for illustrations of a field he already thinks is bad. It seems that his main problem is that cognitivism asserts a deterministic model in an underdetermined field, that he wants to leave room for emergence and agency and difference. Fine, no problem. It’s a straw man version of cognitivism that he’s knocking down.
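
    To put a number on the “20% of the variance” remark above, here is a toy simulation (mine, purely illustrative, not any actual study): a real but weak effect buried in a lot of individual noise.

    import random

    random.seed(0)
    n = 10_000
    # a real but weak effect of x on y, swamped by individual differences (noise)
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [0.5 * xi + random.gauss(0, 1) for xi in x]

    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / n
    var_x = sum((xi - mean_x) ** 2 for xi in x) / n
    var_y = sum((yi - mean_y) ** 2 for yi in y) / n
    r_squared = cov ** 2 / (var_x * var_y)  # proportion of variance in y 'accounted for' by x
    print(round(r_squared, 2))  # comes out near 0.20; the other ~80% stays unexplained

    The relationship in that toy is perfectly real and perfectly reliable, yet four-fifths of the variation between “individuals” remains unexplained: a dependable pattern, not a prediction machine.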

  18. The research model isn’t deterministic; it’s probabilistic.

    how does probabilistic cancel out deterministic? Whether linear and straightforward or uncertain, the assumption is still that you can attach certain causes to certain consequences. While in the psychoanalytic Unconscious there is ABSOLUTELY no such possibility because its very structure according to Lacan is to slide permanently. I don’t know about Deleuze and Guattari, but I have a hunch they agree with Lacan on that point.

    The cognitivist doesn’t assert on principle that cognition precedes and causes physiological response and affect: she figures out ways of investigating the relationships between cognition, physiology and affect.

    Yes but how does she see those relationships: does cognition regulate physiology and affect? I think Shaviro’s idea here is that it’s reductivist, but I’m not sure on what basis you counter that.

  19. They have no a priori commitment either to denying the non-cognitive aspects of human experience nor to asserting that the cognitive controls all these other aspects of life. And again, empirical cognitive psychology is prepared to acknowledge that there’s way more to cognition than they can account for in their probabilistic findings.

    But here you’re trapped in rhetorics, don’t you say. You have to look at what you just wrote with lacanian eyes, what language are you using? It NEITHER denies NOR asserts means that it doesn’t fall into the field of its study, or its presence is treated as random (or significant) noise which may or may not influence the intervening variables in the experiment.

  20. It’s, like, the cognitivist science is IMPOTENT to these issues, they neither smell nor stink, they can stay there as long as they don’t make too much noise, as long as the lab stays clean, as long as I don’t have to have too many upsetting thoughts about sex and violence and all the TROUBLE in the world!

  21. traxus4420 Says:

    thanks guys, i’ve been waiting to be around a cognitivist vs. psychoanalysis throwdown for EVER.

    ktismatics, what do you think of engelbart’s model? does cognitive psych have anything to say about human-technic interaction?

    the only cog sci i am at all familiar with is steven pinker, who i don’t find convincing. i think what dejan is reacting to here is that it is very easy for a probabilistic model to become normative, especially in clinical practice. empirical agnosticism is all well and good in theoretical science, but even provisional claims carry some weight of authority when made by men in white coats. this is especially pernicious when the models refer primarily to the brain as causal center without having a complete empirical picture of it, and thus have a high possibility of error when applied to messy reality.

    it always seemed to me cog psych was dependent on neuroscience for its foundation, and consequently dependent on the state of technology at best, naive assumptions about human nature at worst, and in the middle the strength of its experiments. but apparently there is controversy about this point.

    the counterpoint offered by lacan seems to be that it’s far more speculative than any self-respecting empirical scientist would accept, which makes its practice more of an ‘art.’ this allows it to address phenomena that empiricists are forced to avoid, though what kinds of results it gets is uncertain. you would think its vagueness and lack of ‘scientific rigor’ would make lacanians less likely to be dogmatic about its theories, but this is not the case. hence the comparisons with cultish religion and mysticism.

  22. traxus I love my dad’s pastel-colored ”she wore Blue Velvet” world and I am Buddhistically convinced that it has to exist right next to its underbelly; this is why after all psychology has two and not one major traditions. But in the current triumphant march of technofascism it is only ONE (cognitive-behavioral) paradigm that threatens to push the other one totally to the margins. The balance is lost, as everywhere else. And if I remember my philosophy of psychology classes well, I think cognitivism (like neobehaviorism) operates on the s-(o)-r formula where the organismic variable is in brackets because it is in a state of ”suspended disbelief” just like my dad described in the comment above. It’s something that the experimenter tolerates, but would rather pretend that it isn’t there.

  23. traxus4420 Says:

    though some interesting work now is being done on the co-evolution of humans, technology (often including language), and the environment that i think makes lacan’s theories seem more reasonable from an empirical perspective.

    i want to read this book by terence deacon.

    the confusion of the reviewer makes me think that if he knew structuralism and poststructuralism he would have had an easier time making sense of it.

    one fun factoid: by applying the logic of c.s. peirce’s structuralism (symbol/icon/index) to brain scans, deacon was able to show that ‘iconic’ or imagistic recognition is localized in the brain, for the most part so is indexical, but symbolic recognition is totally diffuse; it has no single location and it seems to function by something like semantic variation (like the chain of signifiers). the speculative evolutionary explanation is that iconic recognition/memory evolved first, then indexical, then symbolic, THEN consciousness. the various adaptations of human consciousness are deemed under this theory to offer no evolutionary advantages without first assuming a capacity to understand symbols.

  24. the various adaptations of human consciousness offer no evolutionary advantages without a capacity to understand symbols.

    yes and then because in your evolutionarist’s average mind (i include my mother here, with whom i have always crossed swords on the functionality of human behavior) this fact doesn’t compute – that we are animal symbolicums – and worse, that we may not be driven by anything utilitarian, rather something diffuse or, akhem, UNPREDICTABLE.

  25. Even if empirical psychologists had a strong and reliable model of human cognition, individual human differences override the general regularities. Everybody has different genetically-endowed capabilities and drives, different experiences in the world, different others to interact with. So even on just the input side, the material that is available to memory and the relative strengths of neural associations are going to be vastly different from one person to the next. Outputs would be all over the scatterplot.

    And that’s just on a purely behavioral level, as if inputs were mechanically processed by the brain into theoretically predictable outputs. The cognitive paradigm acknowledges and demonstrates that individuals also exercise different ways of processing inputs based on things like intentionality, preference, bias, attention. These intermediaries between I and O may be conscious or unconscious, freely chosen by autonomous subjective agents or bent by cultural macroforces like economics and power. Collectively, these intermediaries are regarded as “cognitive,” mostly to distinguish them from environment and physiology.

    If anything, then, the cognitive paradigm is less deterministic than the behavioral one. Structuralism in the way Europeans talk about it never had much of an influence on the American-dominated empirical psychology from which cognitivism emerged. Even somebody like Chomsky, who proposed one of the early structural models of psycholinguistics that eventually led to cognitivism, looks like a pragmatic instrumentalist when compared with somebody like Saussure. For Chomsky linguistic structure is an instrumental capability for intentionally manipulating language in order to generate unique sentences. So he talks about “generative grammar” as a very flexible tool for assembling signifiers on the fly to suit the speaker’s purposes. He does propose that human brains are uniquely structured to handle generative grammars, making him kind of Hegelian in that regard. But if anything the advance of cognitivism has led the field to dismiss Chomsky’s unique-brain-structure argument as an unnecessary holdover from idealism. The human brain evolved from other primate brains; human cognitive-linguistic abilities evolved from other primate abilities.

    I’ll stop for now — other environmental stimuli are crying out for my attention and response.

  26. The working empirical psychologist isn’t typically driven by philosophy or grand theory. Some start out with an inclination to use science as a sort of rhetorical device, to stage demonstrations of favorite theories. This inclination is quickly trained out of you. Empirical investigations are informed by an attempt to understand phenomena that so far have not been investigated or have eluded prior efforts to pull them out of randomness. Sometimes the theory makes the researcher aware of classes of phenomena that it might be able to account for; sometimes the phenomena are compelling in their own right; sometimes they’ve been partially accounted for by competing theories and the question is whether the new theory suggests an alternative, perhaps a more complete, understanding.

    A specific study takes place within a narrow band of theory and empiricism. In writing up the findings the researcher might cite one or two broadly-known figures who signify the general field of endeavor, but for the most part the citations are very specific to the empirical question under investigation, and usually very recent. The field as a whole expands somewhat amorphously from the surface rather than building depth or structure or moving linearly down well-defined trajectories. Rare is the pitched dialectical “throw-down” between competing theories. In experimental design the battle is almost always waged against “the null hypothesis” = phenomenological randomness.

  27. In the cognitive paradigm, consciousness isn’t a structured assemblage of content; rather, consciousness is a dynamic interface where a specific set of assembly procedures is mapped onto a particular subset of content (perceptions, memories, ideas) in a way that generates structured and meaningful output — thought, speech, behavior, etc. The content, the toolkit of procedures, the array of alternative prefabricated structures that can be imposed on content — all of it remains unconscious until it is called up, either intentionally or not, by the conscious interface. So as the individual moves through the continuous present the vast majority of her cognitive capability is unconscious. The content of the unconscious is loosely interconnected in a distributed and multiply-connected matrix. The structure of the unconscious is more virtual than actual: content can be assembled on the fly according to any number of structuration paradigms and procedures. (A toy sketch at the end of this comment tries to make this picture concrete.)

    Some pre-canned structures are easier for consciousness to summon than others, based on habit or demonstrated pragmatic value — so even this dynamic structuring work of consciousness becomes stereotypical, nearly automatic, almost unconscious. Some virtual structures rarely become actual in consciousness: maybe they’ve never been tried before, maybe they’ve failed miserably before, maybe they’ve become associated with unpleasant emotions or memories so they don’t readily pop to the surface, etc.

    It might be possible to exercise an individual’s cognitive structuration processes so that the passage between unconscious and conscious becomes freer and more flexible. You might make the person aware of automatic and stereotypic ways of thinking, do “free association” exercises in which material is dynamically structured in unaccustomed ways, identify obstacles in memory and affect that repress certain structures, identify past events that cause habitual structures to be applied transferentially to inappropriate situations, etc.
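
    As a toy rendering of that description (my own illustration, not any canonical cognitive model): a large unconscious store of content and procedures, and a small conscious “interface” that activates a few items and one structuring procedure at a time.

    import random

    # toy model only: a big latent store of content and assembly procedures,
    # and a narrow conscious 'interface' that calls up a subset of each
    CONTENT = [f"memory_{i}" for i in range(1000)]                  # perceptions, memories, ideas
    PROCEDURES = {
        "habitual": sorted,                                         # well-worn, easy to summon
        "free_association": lambda xs: random.sample(xs, len(xs)),  # rarely summoned
    }

    def conscious_interface(cue, procedure="habitual", span=5):
        # only a handful of items and one procedure are 'in consciousness'
        # at any moment; everything else stays latent
        activated = [item for item in CONTENT if cue in item][:span]
        return PROCEDURES[procedure](activated)

    print(conscious_interface("memory_42"))                      # stereotyped structuring
    print(conscious_interface("memory_42", "free_association"))  # same content, new assembly

    Everything outside the currently activated slice just sits there, virtually structured, until something cues it up.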

  28. I realize I haven’t been specifically addressing your objections and questions, Parodycenter and Traxus — more an attempt to lay down a substrate. I’ll pause for now, then come back and see if I can generate any more specific responses.

  29. also exercise different ways of processing inputs based on things like intentionality, preference, bias, attention. (yes although persistently no mention is made of METAPHOR and METONYMY)

    The content, the toolkit of procedures, the array of alternative prefabricated structures that can be imposed on content — all of it remains unconscious until it is called up, either intentionally or not, by the conscious interface. So as the individual moves through the continuous present the vast majority of her cognitive capability is unconscious.

    But dad this sounds like Freud’s classic hydraulic model (on Traxus’s map) whereas the updated psychoanalysis as you know sees the unconscious as always present, running parallel to the conscious.

  30. From top to bottom, selectively…

    “right now there is still too much about the brain that is unknown, or unmapped in terms of its functionality, which is the same thing to these people.”

    Brain function is more interesting psychologically than brain physiology and biochemistry. Leave the other stuff to the biologists.

    “so at the moment society and brain are 2 separate disciplinary problems for this technocratic system, but not necessarily and probably not for long.”

    Is this your position Traxus or Engelbart’s? It sounds sort of Swedenborgian, later to become Hegelian, where the structure and function at the microlevel have direct macro analogs — “on earth as it is in heaven.”

    “the lacanian idea of a linguistic unconscious that is both combinatoric and ’social’ seems not too different from what information science (the early form of which influenced him) works on”

    Yes I agree that there are similarities.

    “lacan’s unconscious is itself a model of observed processes, suited to his particular needs, and theoretically replaceable with something else. especially if it incorporates more phenomena than lacan was able to.”

    Right.

    “does cognition regulate physiology and affect?”

    There’s no a priori commitment from cognitivists as to what constitutes a theoretically acceptable answer to this question. It’s a field of empirical investigation. BUT… If physiology bypassed cognition altogether and generated affect and behaviors directly, there would be no point to the cognitivist paradigm and psychology would still be behavioristic. As long as it’s possible to detect that the impact of physiology and experience is modified between input and output, then there must be something happening inside the black box.

    “It NEITHER denies NOR asserts means that it doesn’t fall into the field of its study, or its presence is treated as random (or significant) noise which may or may not influence the intervening variables in the experiment.”

    Right. It’s not a “totalizing discourse” — give the philosophers and the poets something to do.

    “as long as I don’t have to have too many upsetting thoughts about sex and violence and all the TROUBLE in the world!”

    These are acceptable subjects for empirical scientists, just as they are for filmmakers and musicians, revolutionaries and analysts.

    “even provisional claims carry some weight of authority when made by men in white coats.”

    Well I think it’s okay for them to stand by their findings. It’s usually the writers of textbooks (like Pinker) who reify the provisional.

    “it always seemed to me cog psych was dependent on neuroscience for its foundation, and consequently dependent on the state of technology at best.”

    Until very recently the neuro people and the cognitive people worked in almost total isolation from one another. Functional brain scans are kind of a new toy for the cognitivists. People with brain injuries have always been a good source of information for cognitivists, but it’s mostly been by observing their performance on tasks rather than mucking around directly in the grey matter.

    “But in the current triumphant march of technofascism it is only ONE (cognitive-behavioral) paradigm that threatens to push the other one totally to the margins. The balance is lost, as everywhere else.”

    Cognitive-behavioral therapy has little if any connection with cognitive empirical work; it’s more of a parallel development. But there is imbalance in both research and practice. There are no non-empiricists in academic psychology — that’s why the field is so disappointing for undergrads who want either to explore big ideas or to get some dating tips. And on the therapy front it’s the cost-benefit analysis of the accountants that’s decisive. There’s very little empirical evidence that any sort of psychotherapy is effective, whether long-term or short-term. What they mean by “effective” is, I think, the key question. Cured, healed, adjusted, ready to go back to work?

    “we may not be driven by anything utilitarian, rather something diffuse or, akhem, UNPREDICTABLE.”

    Unpredictable doesn’t necessarily mean undetermined, which is a point made by theologians and philosophers, myth-tellers and novelists, as well as empiricists. I think most empirical psychologists believe in free will, free choice, etc. — which is one reason they come under attack by those who doubt the free agency of the subject in an overdetermined social order.

    “this sounds like Freud’s classic hydraulic model (on Traxus’s map) whereas the updated psychoanalysis as you know sees the unconscious as always present, running parallel to the conscious.”

    What is the unconscious actively DOING, I wonder, while consciousness is engaged? Is it assembling material to send into consciousness for further processing? Does it autonomously pursue its own agenda of storing, reorganizing, making connections? Probably, though certainly conscious attention does make material more salient to the unconscious processors. It’s something I don’t know that much about in the cog sci world. I will reiterate this, though: actual conscious and unconscious processes can only be inferred from inputs and outputs until brain scan technology gets way more sophisticated than what’s available today.

  31. Not directly responsive to the Engelbart device, but for a time I worked in an interdisciplinary AI lab building expert systems and such. We psychologists were sad to realize that our knowledge of human cognition wasn’t as useful as we’d hoped. Humans are inefficient and sloppy information processors when pitted against mathematical optimization algorithms and the sheer speed and power of a computer.

  32. traxus4420 Says:

    thanks ktismatics, your substrate helps me escape the cartoon cognitivism i sometimes default to out of ignorance.

    “Brain function is more interesting psychologically than brain physiology and biochemistry. Leave the other stuff to the biologists.”

    but this seems like a response to a contingent limit — i.e. if the technology were better one could draw more concrete connections between physiology/biochem and psychology. if you accept that the brain affects behavior the only reason you could dismiss these things is if the science isn’t adequate yet. key word being yet.

    “Is this your position Traxus or Engelbart’s? It sounds sort of Swedenborgian, later to become Hegelian, where the structure and function at the microlevel have direct macro analogs — ‘on earth as it is in heaven.’”

    that’s not quite what we mean. engelbart’s model is basically a functional heuristic (with ‘black boxes’ and such) that specifically addresses human-technical interaction and universalizes it by including language as a kind of technic. so by including language and tools in a theory of human cognition you explicitly make mind and society interconnected problems. ‘black boxed’ factors are not determined by an impermeable split between human and ‘artifact.’ that relationship can potentially include everything — what is external to the relation is only ‘the unknown’ (‘the outside world’) but it still influences/determines the relation (via ‘energy flows’).

    the computational theories (the ones that apply information science to other scientific disciplines) presuppose that the universe can be broken down into discrete units, that difference is a result of the different algorithms used to structure those units. like atomism made digital (so not monads but zeroes and ones — the micro makes up the macro but without having the analogous relationship you’re talking about).

    “for a time I worked in interdisciplinary AI lab building expert systems and such.”

    that’s interesting — i’m reading some things from the late ’80s by rodney brooks, an AI researcher. as you may know he disagreed with representation-based attempts to construct AI as a replica of human cognition and presented a model based on biology, where the AI architecture is built from the ground up, with functional ‘layers’ that aren’t necessarily aware of each other.

    subsumption architecture

    “A subsumption architecture is a way of decomposing complicated intelligent behaviour into many “simple” behaviour modules, which are in turn organized into layers. Each layer implements a particular goal of the agent, and higher layers are increasingly more abstract. Each layer’s goal subsumes that of the underlying layers, e.g. the decision to move forward by the eat-food layer takes into account the decision of the lowest obstacle-avoidance layer.”
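
    a minimal sketch of the idea (my own toy in python, nothing to do with brooks’s actual code): each layer maps sensor readings to an action or defers, higher layers are tried first and subsume the layers beneath them, and the lowest layer supplies the fallback reflex.

    # toy subsumption-style controller, for illustration only
    def avoid_obstacles(sensors):
        # lowest layer: reflexively steer away from anything too close
        if sensors["obstacle_distance"] < 1.0:
            return "turn_away"
        return "wander"

    def eat_food(sensors):
        # higher layer: pursue food, but only when the lower layer's concern
        # (a nearby obstacle) doesn't apply; otherwise defer downward
        if sensors["food_visible"] and sensors["obstacle_distance"] >= 1.0:
            return "move_toward_food"
        return None

    LAYERS = [eat_food, avoid_obstacles]  # highest priority first

    def act(sensors):
        for layer in LAYERS:
            decision = layer(sensors)
            if decision is not None:
                return decision

    print(act({"food_visible": True, "obstacle_distance": 3.0}))   # move_toward_food
    print(act({"food_visible": True, "obstacle_distance": 0.5}))   # turn_away
    print(act({"food_visible": False, "obstacle_distance": 5.0}))  # wander

    no central world-model anywhere; each layer reacts to the sensors directly, which is the ‘from the ground up’ part.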

  33. traxus4420 Says:

    deacon has an interesting refutation of chomsky in the review i link to above

  34. The review of Deacon’s book brings up an interesting question: did human symbolic capacity evolve directly (as Deacon says), or is it part of a broader social-cognitive capability? I’d like to call your attention to the work of Michael Tomasello, who argues for the latter position. Tomasello, a psychologist, is co-director of the Max Planck Institute for Evolutionary Anthropology. I think his book The Cultural Origins of Human Cognition is terrific, despite the tepid review on the Amazon page. I posted on this book here and here. Tomasello’s more recent book is called Constructing a Language — I haven’t read all of it but as the title indicates it’s more specifically about children’s language acquisition. I posted about it here and here. I better stop now and see if all these links actually work.

  35. Those last two links are particularly pertinent in comparing Tomasello’s usage-based model of language acquisition with Chomsky and, to a lesser extent, with Saussure.

  36. If physiology bypassed cognition altogether and generated affect and behaviors directly, there would be no point to the cognitivist paradigm and psychology would still be behavioristic. As long as it’s possible to detect that the impact of physiology and experience is modified between input and output, then there must be something happening inside the black box.

    yes the epistemological scheme of cognitivism is stimulus, organismic variable, response. But the organismic variable is bracketed. It is only considered in speculative mode, never taken for granted. In this way the cognitivism posits itself like Ego vis-a-vis the Id… it is full of DOUBTS.

    Right. It’s not a “totalizing discourse” — give the philosophers and the poets something to do.

    how cosmopolitan and democratic of the cognitive science to be so tolerant towards the psychoanalytic other

    actual conscious and unconscious processes can only be inferred from inputs and outputs until brain scan technology gets way more sophisticated than what’s available today.

    there’s no reason for me to dismiss that idea except that i think ”the discovery of the brain’s functioning” is a kind of exact-sciences utopia on the order of ”the cure for diabetes” which I’ve been hearing about for three decades now and you two decades longer. We really have to see how that one develops, and if it ever does.

    But I swear I still don’t quite get what Traxus’s research is about?

  37. traxus4420 Says:

    thanks ktismatics! i’ll check at least a couple of those out before replying for real.

    dejan i don’t doubt that the BRAIN’s functioning will be mapped pretty well at least (which could and probably will have some disastrous consequences), but you won’t get the MIND until you incorporate linguistics and the social sciences.

    my ‘research’ is a mystery even to me.

  38. I like psychological science as a way of understanding what is, generically, about selves. I like analysis as a way of exploring what is, specifically, about this and that self. I’m mostly interested in ways of exploring what isn’t, what could be, what maybe could never be, what’s virtual and emergent and different and excellent, in selves both individually and collectively. I suppose that’s my research project.

  39. I just simply refuse to accept the world as a sterile and overly rational Swiss laboratory in which only watches and cheese are being sold and people are developing all these computationally sound and interesting cognitive skills like downloading their memory into a laptop and sending it across the universe. that whole scene gives me the creeps, as well as that surveillance crowd with their ever-increasing palette of eavesdropping devices. I have even developed a kind of aversion to techie types, because as soon as they open their mouths I see some new form of webcam in it with nanotechnological access to my blood stream.

  40. Sounds like Foucaultian conspiracy theory, with the techies building the ultimate Panopticon. There’s some truth to it, in the same sense that virtually any clever idea can be adapted for heinous purposes. On the other hand, might not awareness of how cognitive systems work give you an edge in recognizing systematic attempts to manipulate the mental apparatus? In a similar way, understanding le petit objet a and cinematic technique can be used to create persuasive advertising, but you can use it to help you resist that advertising.

  41. When I used to work in AI I was motivated by the idea that handing off the routine parts of “thought work” to machines would free the human workers to explore aspects of their jobs that hadn’t been figured out yet — to let themselves become scientists and engineers rather than mere maintenance workers. But a couple problems became evident. Most workers like to do the routine work: it might be boring but it’s an easy way to pass the time. And the corporations aren’t all that interested in becoming sources of continual innovation — too much risk. “Be a quick second” was the advice to MBA students: let somebody else do the innovative work, then come along afterward, winnow through all the clever innovations looking for the few potentially profitable ones. Then buy the guy out for peanuts and rev up the production, distribution and marketing apparatus. Content is cheap; being the pipeline for distributing content is where the money is.

  42. Content is cheap because there are enough people willing and able to create even when they’re not getting paid for it. If business had to pay all the innovators full value for what they exploit it would cost them a fortune.

  43. traxus4420 Says:

    i checked out the tomasello links; his theory of language acquisition sounds interesting, though i’m a little skeptical of the thesis of ‘cultural origins…’, just because i’ve read Frans de Waal’s work on apes and he argues they are capable of cooperation, collective intention, and imitative learning, just to a lesser degree than humans. i saw him speak once and he showed videos of the experiments that demonstrate this. they can also recognize themselves in a mirror. but while apes have some capacity for understanding symbolic communication, they can’t seem to use it. so i’m convinced by deacon’s argument that an advanced capacity to understand and manipulate symbols is what separates humans from the apes — not quite as convinced by the argument that symbolic capacity is an evolutionary anomaly springing from the social organization of reproduction — language comes from sexual politics, in other words — but it’s an amusing thought and might be true.

  44. t. so i’m convinced by deacon’s argument that an advanced capacity to understand and manipulate symbols is what separates humans from the apes — not quite as convinced by the argument that symbolic capacity is an evolutionary anomaly springing from the social organization of reproduction — language comes from sexual politics, in other words — but it’s an amusing thought and might be true.

    traxus i will try to dig out one of my old textbooks from the 80s, written by this prominent yugoslav child development psychologist, which was an excellent summary of the symbolic function such as i have never been able to find in the west. i don’t remember the whole book but i distinctly remember it was experimentally disproved that apes possess the same quality of the symbolic function that humans do. this was all itemized and operationalized in the book, listing things like ”the ability to evoke the absent object”, which would satisfy clysmatics’s demand for experimental rigor I guess as well. I distinctly remember attending the lectures of this guy in Slovenia and how the visiting British psychologists found themselves terribly surprised by these discoveries, as their empiricist mindframe saw a lot of suspicious and murky territory in them.

  45. In a similar way, understanding le petit objet a and cinematic technique can be used to create persuasive advertising, but you can use it to help you resist that advertising.

    Yes of course dad, it always cuts both ways…

  46. Traxus ”evolutionary anomaly” already runs the Unconscious linguistic agenda of eliminating the possibility that humans could be driven by non-utilitarian motives and while these motives might be shyly or playfully incorporated into this theory, they are never really taken for granted. It can’t just be, it must be either an anomaly or an improvement. There’s no place for dreams, mistakes, fantasies here, and where are the evolutionaries who would try to analyze the ape’s dreams? I mean if one can have psychotherapy for dogs, why not psychoanalysis for apes?

  47. Content is cheap because there are enough people willing and able to create even when they’re not getting paid for it. If business had to pay all the innovators full value for what they exploit it would cost them a fortune.

    Yes and this would be a serious obstacle to releasing the prole’s creative potential in accordance with Hardt and Negri, much as I don’t doubt it exists, just by letting them create…you have to somehow get the investors to pay fairly, which will never happen because investors are investors, or the proles have to come up with a way to create without having to pay to the investors, which is difficult when the investors own the media network. all of which seems to suggest that one will have to keep looking for portholes and rabbit holes instead of imagining the overthrowing of property relations spearheaded by Chabert and Leninino to the soundtrack from GOODBYE LENIN.

  48. “i’m a little skeptical of the thesis of ‘cultural origins…’”

    I’ll try to write something about this tomorrow — maybe I already have. It is an interesting issue given the structuralist position about systems of signifiers cut loose from signifieds. There’s empirical evidence supporting the position that the signifying function precedes the symbolic order in children’s linguistic development, and that this signifying activity demands that the child understand the other’s intentionality relative to the signified. Only humans can do it; apes can’t.

  49. After looking at those four posts I linked you to, I think I can’t do much more for general orientation to the idea. Maybe the thing to consider is what symbol systems are for. Tomasello contends that symbols are essentially a stylized way of pointing — a way of drawing the other’s attention to some object in the environment. Primates can’t follow a pointing finger to the intended object: it requires them to understand the other’s intention, to see some feature of the world from the other’s perspective. The joint signifying function in intentional communicative context is thus the essential feature of a symbol. This communicative triangle of self, other and object, mediated by symbolic language, corresponds to Donald Davidson’s triangulated epistemology: “I know, for the most part, what I think, want, and intend, and what my sensations are. In addition, I know a great deal about the world around me, the locations and sizes and causal properties of the objects in it. I also sometimes know what goes on in other people’s minds… no amount of knowledge of the contents of one’s own mind insures the truth of a belief about the external world, [and] no amount of knowledge about the external world entails the truth about the workings of a mind. If there is a logical or epistemic barrier between the mind and nature, it not only prevents us from seeing out: it also blocks a view from outside in… The source of the concept of objective truth is interpersonal communication. Thought depends on communication.” Consistent with the later Wittgenstein.

  50. Even infants can tell the difference between naughty and nice playmates, and know which to choose, a new study finds. Babies as young as 6 to 10 months old showed crucial social judging skills before they could talk, according to a study by researchers at Yale University’s Infant Cognition Center published in Thursday’s issue of the journal Nature.

    The infants watched a googly-eyed wooden toy trying to climb roller-coaster hills and then another googly-eyed toy come by and either help it over the mountain or push it backward. They then were presented with the toys to see which they would play with. Nearly every baby picked the helpful toy over the bad one. The babies also chose neutral toys — ones that didn’t help or hinder — over the naughty ones. And the babies chose the helping toys over the neutral ones.

    “It’s incredibly impressive that babies can do this,” said study lead author Kiley Hamlin, a Yale psychology researcher. “It shows that we have these essential social skills occurring without much explicit teaching.”

    There was no difference in reaction between the boys and girls, but when the researchers took away the large eyes that made the toys somewhat lifelike, the babies didn’t show the same social judging skills, Hamlin said. The choice of nice over naughty follows a school of thought that humans have some innate social abilities, not just those learned from their parents. “We know that they’re very, very social beings from very, very early on,” Hamlin said.

    A study last year out of Germany showed that babies as young as 18 months old overwhelmingly helped out when they could, such as by picking up toys that researchers dropped.

    David Lewkowicz, a psychology professor at Florida Atlantic University in Boca Raton who wasn’t part of the study, said the Yale research was intriguing. But he doesn’t buy into the natural ability part. He said the behavior was learned, and that the new research doesn’t prove otherwise. “Infants acquire a great deal of social experience between birth and 6 months of age and thus the assumption that this kind of capacity does not require experience is simply unwarranted,” Lewkowicz told The Associated Press in an e-mail. But the Yale team has other preliminary research that shows similar responses even in 3-month-olds, Hamlin said.

    Researchers also want to know if the behavior is limited to human infants. The Yale team is starting tests with monkeys, but has no results yet, Hamlin said.

  51. “Primates can’t follow a pointing finger to the intended object: it requires them to understand the other’s intention, to see some feature of the world from the other’s perspective.”

    Just read a passage in a chapter on sex, ritual and language by Chris Knight (begun on Drew Hempel’s warm recommendation) that relates the escape strategy of a lone male pursued by a band. Before making a successful getaway he (quite possibly presumptuous? intentional?) assumed a pose of gazing at a distant threat, which distracted the menacers.
