I assume that our conceptualization of programming languages has been influenced by the grammatical moods of natural language. The imperative mood: imperative, statement-driven programming; the conditional and subjunctive moods: conditional statements and clauses; the indicative mood: declarations—be they data, constraints, relationships, or whatever.
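To make the mapping concrete, here is a rough sketch in Python; the particular constructs chosen for each mood are my own illustration, not a claim about any specific language design.

    from dataclasses import dataclass

    # Imperative mood: commands, issued one statement at a time.
    total = 0
    total += 5              # "Add five to the total."
    print(total)            # "Print the total."

    # Conditional and subjunctive moods: what should happen, were a condition to hold.
    if total > 3:
        print("large")
    else:
        print("small")

    # Indicative mood: a declaration of what is -- here, the shape of some data.
    @dataclass
    class Point:
        x: float
        y: float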
Most programmers have more experience with programming constructs than with grammatical moods. But if such moods influence our idea of programming, maybe they can be a research vehicle in programming language design.
There are two main programming paradigms, which are duals: the imperative and the declarative. Why only these two, if there are so many different grammatical moods? Might it be that, as duals, they are simply two fundamental, semantically equivalent ways of expressing any computation?
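As a small sketch of that duality, the same computation can be written both ways; the example (summing the squares of the even numbers) is arbitrary and mine, chosen only to show that the two forms denote the same result.

    numbers = [1, 2, 3, 4, 5]

    # Imperative: command the machine, step by step, to build up the result.
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n

    # Declarative: state what the result is, and leave the steps to the machine.
    total_declarative = sum(n * n for n in numbers if n % 2 == 0)

    assert total == total_declarative  # both denote the same value, 20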
Even if that were true, it’s unclear that it would be relevant. Natural languages are for communication among human beings, and computer languages are for computers. Still, let us see where this idea leads. Computers are tools; we build them to use them. But human beings, while they can be treated as tools, can also be treated as enemies, companions, authorities, collaborators, competitors, clients, and so on. Construing all of these different kinds of relationships as the relationship between tool and tool-user doesn’t fit our experience.
Computers are always being commanded how to act or, dually, informed of what actions are being asked of them. The difference is small, a matter of emphasis. At bottom, we conceptualize the computer as a subordinate. What we want from computers is results. That’s why computers exist.
By contrast, not all human language fits into command and indication. We have moods for expressing hopes, potentials, questions, and narratives. Would any of these be useful as programming constructs? Doubtful. As long as the occasion of human-computer interaction—whether writing programs or using applications—is the satisfaction of human desires, such constructs would only frustrate that end.
Perhaps computers will become intelligent when we start thinking of them that way. It’s a high bar, because their behavior will constantly be compared with that of humans. Moreover, if the system’s status quo is an inactive state—powered down, sleeping, crashed, boot-looping, or waiting for user input (I’d link to David Ackley’s work if I weren’t so lazy)—and it cannot keep itself out of that state, it gives us little reason to recognize its existential sovereignty. A system that does not take responsibility for its own activities, but requires a third party to manage them, is not likely to be seen as intelligent.
The artificial intelligence project can be seen as a confusion. To the programmer, the computer is playing no different a role than it ever has: it is the servant of the programmer. To the consumer or science fiction reader, the computer is to play some other role: a companion, enemy, or collaborator. But as long as the computer is dependent on its master-slave relationship with the programmer, it is simply an extension of the relationship between programmer and consumer.
That dependence is important, because it allows the programmer to act on behalf of the consumer, should the computer run afoul of human culture. We personify the computer, but as long as there is somebody around to control it, we only confuse ourselves in doing so.
An intelligent computer will have nothing like a von Neumann architecture (again, see Ackley); it will have no formally defined communication protocols or languages with which to predictably change its behavior. That is, it will no longer be programmable. It will have no single point of failure. In its natural state, it will tend toward a self-sustaining status quo, not a subservient dormant state. Its energy will be organized around staying alive.
Merely simulating this appearance will not do; behind the curtain is someone pulling strings.