AI/CS considered intellectually lame

I’ve been reading some old and odd papers from a while back. It may be because of the Easter break, but I’m meandering even more than usual. Here are two snippets that reaffirm my belief that my own subject, computer science, is the most lame and benighted of the sciences.

The first is from a paper by Luc Steels, whom some AI folks might know for work such as his (amazing) Talking Heads experiment. In a paper last year, Personal Dynamic Memories are Necessary to Deal with Meaning and Understanding in Human-Centric AI, he set out these levels of construction of meaning of an experience (drawn from an art history text, interestingly enough):

  • The base level of an experience details the external formal properties directly derivable from its perceived appearance: in the case of images, for example, the lines, shapes, differences in colour hue, value (brightness) and saturation, textures, shading, spatial positions of elements, and so on.
  • The first level of meaning is that of factual meaning. It identifies and categorises events, actors, entities and the roles they play in events, as well as the temporal, spatial and causal relations between them. In the case of images, extracting this requires a suite of sophisticated processing steps: object segmentation, object location, object recognition, 3D reconstruction, tracking over time, and so on (sketched in code after this list).
  • When there are actors involved, a second level, that of expressional meaning, becomes relevant. It identifies the intentions, goals, interests and motivations of the actors, their psychological states, and the manner in which they carry out actions.
  • The next level is that of social meaning. It is about the social relations between the actors and how the activities are integrated into the local community or the society as a whole.
  • The fourth level is that of conventional meaning, based on figuring out what is depicted or spoken about and the historical or cultural context, which has to be learned from conversations or cultural artefacts, like books or films.
  • The fifth level is known as the intrinsic meaning or content of an experience. It is about the ultimate motive of certain images or texts, or why somebody is carrying out a certain behaviour. It explains why this particular experience may have occurred.
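
To make concrete how much machinery even that first level assumes, here’s a minimal sketch in Python of the pipeline the factual-meaning bullet describes. Every stage is a hypothetical stub of my own naming, not any real vision library’s API; only the shape of the thing is the point:

```python
# A toy version of the "factual meaning" pipeline: each stage is a stub
# standing in for a substantial body of computer vision research.
from dataclasses import dataclass, field


@dataclass
class FactualMeaning:
    """Level-one output: categorised entities and events, plus relations."""
    entities: list = field(default_factory=list)   # recognised, categorised objects
    events: list = field(default_factory=list)     # actions and the roles played in them
    relations: list = field(default_factory=list)  # temporal / spatial / causal links


def segment(image):        # object segmentation: carve pixels into regions
    return [{"region": r} for r in ("r1", "r2")]   # stub output

def locate(regions):       # object location: attach spatial positions
    return [dict(r, position=(0, 0)) for r in regions]

def recognise(located):    # object recognition: assign category labels
    return [dict(r, label="object") for r in located]

def reconstruct_3d(objs):  # 3D reconstruction: estimate depth and geometry
    return [dict(o, depth=1.0) for o in objs]

def track(objs, history):  # tracking over time: link objects across frames
    return objs            # stub: nothing earlier to link against


def factual_meaning(image, history=()):
    """Chain the stages: each consumes the previous stage's output."""
    objs = track(reconstruct_3d(recognise(locate(segment(image)))), history)
    return FactualMeaning(entities=objs)


print(factual_meaning(image=None))
```

Even as a toy, it makes the point: all of this sits below mere factual meaning, before the expressional, social, conventional and intrinsic levels have entered the picture at all.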

Yeah. You know how this goes. How far has CS/AI ever got? How far is it likely to, given its positivist approach?

The second paper was one I skimmed years ago, but came back to the other day. It’s Hubert Dreyfus’ Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian, and it strikes me yet again how almost all of CS/AI is unremittingly and naively positivist.

Most of what Dreyfus pointed out as problematic in 2007 is still at the core of our field. The massive gap between what philosophers, biologists and neuroscientists know about intelligence, ontology, epistemology, and activity… and what computer scientists do… is saddening.

And, yeah, I know: even though it’s clear that our current approaches are amazingly limited, one can do amazing things with positivist-based tech. Also, what’s the alternative? That’s hard. So, in the meantime… on we go, pretending to ourselves—and selling to others—that AI/ML might be like human intelligence one day, even though we are building it not to be.

There’s gold in them there shills. Sigh…
