For a single action, which outcomes it is directed to may be multiply determined by an intention and, seemingly independently, by a motor representation. Unless such intentions and motor representations are to pull an agent in incompatible directions, which would typically impair action execution, there are requirements concerning how the outcomes they represent must be related to each other. This is the interface problem: explain how any such requirements could be non-accidentally met.
We have seen arguments for three claims about motor representation:
Some motor representations represent outcomes rather than, say, only joint displacements and bodily configurations (see Motor Representation).
There are actions whose directedness to an outcome is grounded in motor representation (see Motor Representations Ground the Directedness of Actions to Goals).
Motor representation differs from intention with respect to representational format (see Motor Representations Aren’t Intentions).
A consequence of these claims is that a single instrumental action may involve representations of the outcomes to which it is directed in at least two different representational formats, motor and propositional. This leads to what we will call the interface problem, which this section introduces.
Realising it is rapidly going cold, you form an intention to drink the tea. Your hand expertly secures the mug and moves it to your mouth exactly as it opens. Nothing is spilled in these exquisitely coordinated movements.
As this illustrates, there are cases in which a particular action is guided both by one or more intentions and by one or more motor representations. In at least some such cases, the outcomes specified by the intentions match the outcomes specified by the motor representations. Furthermore, this match is not always accidental.
How do non-accidental matches between intention and motor representation come about? (This is The Interface Problem)
This question is a problem because two natural routes to answering it are unavailable. Appealing to common causes of intentions and motor representations is a non-starter; and appealing to content-respecting causal processes despite a lack of inferential integration between intentions and motor representations amounts to no more than a stab in the dark.
‘I was at the end of a salad bar line, sprinkling raisins on my heaping salad, and reached into my left pocket to get a five-dollar bill. The raisins knocked a couple of croutons from the salad to the tray. I reached and picked them up, intending to pop them into my mouth. My hands came up with their respective loads simultaneously, and I rested the hand with the croutons on the tray and put the bill in my mouth, actually tasting it before I stopped myself.’ (Norman, 1981, p. 10)
For a philosophers’ perspective on action slips, see Mylopoulos (2022) (who also introduces many excellent scientific sources).
You may find variations on this definition of instrumental in the literature. Dickinson (2016, p. 177) characterises instrumental actions differently: in place of the teleological ‘in order to bring about an outcome’, he stipulates that an instrumental action is one that is ‘controlled by the contingency between’ the action and an outcome. And de Wit & Dickinson (2009, p. 464) stipulate that ‘instrumental actions are learned’.
To illustrate, one way of matching is for the B-outcomes to be the A-outcomes. Another way of matching is for the B-outcomes to stand to the A-outcomes as elements of a more detailed plan stand to those of a less detailed one.
_[of plan-like structures]_ In the simplest case, plan-like hierarchies of motor representations _match_ if they are identical. More generally, plan-like hierarchies _match_ if the differences between them _do not matter_ in the following sense. For a plan-like hierarchy in an agent, let the _self part_ be those motor representations concerning the agent's own actions and let the _other part_ be the other motor representations. First consider what would happen if, for a particular agent, the other part of her plan-like hierarchy were as nearly identical to the self part (or parts) of the other's plan-like hierarchy (or others' plan-like hierarchies) as psychologically possible. Would the agent's self part be different? If not, let us say that any differences between her plan-like hierarchy and the other's (or others') are _not relevant_ for her. Finally, if for some agents' plan-like hierarchies of motor representations the differences between them are not relevant for any of the agents, then let us say that the differences _do not matter_.