Celebrate the Infeasible


It’s an interesting time to go back and look at the old artificial intelligence work. This summer I’ve been reading Marvin Minsky’s The Society of Mind (1985), the kind of systematic monograph people don’t seem to publish anymore. The computer-like schemas Minsky draws out for how the mind must work belong to cognitive psychology, a school of thought that was sidelined by the rise of neuroscience. The book breaks the work of the mind down into ever more basic functions, until none of them, alone, constitutes thinking.

At MIT’s AI lab in the 1960s, Minsky’s team created a robot hand married to a camera and a computer. They worked out some of the first solutions for making robots responsive to a changing environment, enough to build a tower out of blocks. These same challenges come up to this day, most recently for Amazon’s picking and packing robots. Ocado’s 3D-printed grocery-packing robots, for instance, use reinforcement learning to pick up each piece of produce in just the right way.

But it’s the passages that read like descriptions of generative AI that make the book so striking for this reader at the dawn of ChatGPT. Minsky calls it the “puzzle principle”: “We can program a computer to solve any problem by trial and error, without knowing how to solve it in advance, provided only that we have a way to recognize when the problem is solved,” he writes.

If your task is to create a bridge, say, you can have two programs. One generates every possible arrangement of boards and nails, and the second one determines whether the resulting structure spans the stream. What’s interesting here is that Minsky feels he is describing something ridiculously infeasible. “In practice, it can take too long for even the most powerful computer to test enough possible solutions,” he says. But of course, “the most powerful computer” today is an entirely different beast, and the systems he painstakingly set down are now having their moment of feasibility.
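
Minsky’s two programs fit in a few lines of modern code. Here is a minimal sketch in Python, with the bridge reduced to a toy (boards are just lengths that must reach across a stream); the function names and numbers are mine, not Minsky’s:

```python
import itertools

def generate_designs(parts):
    """The first program: propose every possible arrangement of parts."""
    for length in range(1, len(parts) + 1):
        yield from itertools.permutations(parts, length)

def spans_the_stream(design, stream_width):
    """The second program: recognize a solution. Here 'boards' are just
    lengths, and a design works if, laid end to end, they reach across."""
    return sum(design) >= stream_width

def solve_by_trial_and_error(parts, stream_width):
    # The puzzle principle: no bridge-building knowledge required,
    # only a way to recognize when the problem is solved.
    for design in generate_designs(parts):
        if spans_the_stream(design, stream_width):
            return design
    return None

print(solve_by_trial_and_error([3, 5, 2, 4], stream_width=10))  # (3, 5, 2)
```

Nothing in this code knows how to build a bridge. It only generates and tests, which is exactly Minsky’s point, and exactly why he worried about the cost: the space of candidates explodes as the pile of boards grows.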

How easy it might have been for him, writing from a world of Amiga and Atari computers, to focus on the impracticality and dismiss his hypothetical solutions before they ever reached the page. Thank goodness he didn’t.

This brought to mind Herodotus, the first historian, who made a similarly courageous decision to disclose the improbable, at the risk of ridicule. Unlike Homer, who set his stories some 400 years before his own time, Herodotus limited himself to recording the accounts of ‘sons of sons’, so that his histories would describe events within living memory. Even so, some of his research uncovered oral narratives he considered outlandish.

Still, he wrote them down. Take with a grain of salt the claim of the Phoenicians that they sailed all the way around Africa, he scoffs. Those bullshitters say they sailed so far that the sun started falling on the opposite side of their boats! As skeptical as he was, Herodotus took down every word and ultimately let the future judge for itself.

Of course, we now know that this fantastical detail, the heavenly bodies rearranging themselves, is proof that it really happened. South of the equator, the noonday sun hangs in the northern sky rather than the southern one, so a crew rounding the tip of Africa really would have seen it fall on the opposite side of their boats. The Phoenicians must have crossed the equator, of which Herodotus knew nothing.

We all do our work with little understanding of the future world that may ultimately consume it. That’s why it’s important to still our inner editor when she objects on the grounds of feasibility. When Kepler wrote to Galileo about their respective astronomy projects, he showed extraordinary imagination, envisioning a world in which the infeasible would inevitably become feasible. “Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of people unafraid of the empty wastes,” he writes in the year 1610. “In the meantime, we shall prepare, for the brave sky travelers, maps of the celestial bodies. I shall do it for the moon, and you, Galileo, for Jupiter.”

Image: Phoenician ship carved on the face of a sarcophagus, 2nd century AD. Author: NMB (CC license)

2 thoughts on “Celebrate the Infeasible”

  1. For many years scientists in a variety of fields have been using a process called “inversion” to arrive at a solution to a problem. For this you need two processes: one to arrange a solution (i.e. build a model) and another to measure how close this solution is to reality (i.e. measured data). There is then a wide variety of inversion techniques that work step-wise towards an accurate solution: calculate a solution, measure its closeness to reality, tweak conditions hopefully in the “right” direction, and repeat until the “closeness” reaches some acceptable threshold. (A sketch of this loop appears below the thread.) For all this to work, one needs a mathematical representation of the process that builds the model. I work in electromagnetics and we use Maxwell’s equations in some form or other to build the model. But it could also be fluid flow for aeronautics or combustion, etc. Just so long as you have at least an approximate mathematical solution.

    AI does essentially the same thing as inversion when it is being trained. It’s an iterative process of training the system until it reaches some measure of “closeness” that is considered acceptable. The truly fascinating thing to me is that in the case of an AI system you don’t actually know the mathematics of the system you are trying to mimic. It is a generic black box. Part of the AI training machinery is to actually create de novo the modeling machinery. A well-trained AI arrives at a solution without having to know how it actually works! And, I should add, the complexity of what’s inside the AI black box is so enormous that your average scientist cannot just look inside to find the math. Our brains are too small and the black box too chaotic. It’s fascinating.

    1. I think cosmologists use a similar process to find good models. I remain impressed as all get-out that people can even think of these processes, let alone make them and use them.
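
The step-wise loop described in the first comment is compact enough to sketch. Here is a toy version in Python, where a straight-line forward model stands in for Maxwell’s equations and blind random tweaks stand in for a real inversion technique; every name and number is illustrative:

```python
import random

# Toy "reality": measured data we are trying to match (secretly y = 2x + 1).
measured = [2.0 * x + 1.0 for x in range(10)]

def forward_model(params):
    """Build a model from parameters; a stand-in for Maxwell's equations."""
    a, b = params
    return [a * x + b for x in range(10)]

def misfit(predicted):
    """Measure closeness to reality: sum of squared differences."""
    return sum((p - m) ** 2 for p, m in zip(predicted, measured))

# The inversion loop: calculate a solution, measure its closeness,
# tweak conditions hopefully in the "right" direction, and repeat.
params = [0.0, 0.0]
best = misfit(forward_model(params))
for _ in range(20000):
    candidate = [p + random.gauss(0, 0.05) for p in params]
    trial = misfit(forward_model(candidate))
    if trial < best:  # keep only tweaks that improve the fit
        params, best = candidate, trial

print(params, best)  # params approach a=2.0, b=1.0
```

Swap the hand-written forward model for a parameterized black box and steer the tweaks with gradients rather than luck, and the same loop becomes, in caricature, the training process described above: a system that arrives at a solution without anyone knowing the math inside.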

