You can easily tell when a friend or a colleague is happy. Why? Because of our human senses. But how would you guess whether an autonomous machine, such as a robot, is pouting because it expects new instructions, or happily crunching data? In "Fractals show machine intentions," Technology Research News tells us that "researchers from Switzerland and South Africa have designed a visual interface that would give autonomous machines the equivalent of body language." This interface consists of a clustering algorithm, which groups the myriad internal states of a machine into a small number of representative ones, and a fractal generator. By watching these changing fractal images, you start to 'feel' the machine's 'thoughts.' According to the researchers, the first practical applications should appear within five years, while self-evolving or self-repairing robots are still a long way off.
Here is how the interface works.
The researchers' autonomous machine interface consists of a clustering algorithm that groups the machine's many internal states into a manageable number of representations, and a fractal generator.
Clustering algorithms organize data, such as gene data, into groups with similar traits; they analyze raw data without any sense of the data's meaning or assumptions about how it should be structured.
In the researchers' scheme, snapshots of a machine's sensory input, computational processing and output are clustered and the clusters are displayed as fractal images. The fractal generator produces a fractal pattern in the center of the display and patterns move outward in concentric rings, giving observers a sense of change over time.
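To make this more concrete, here is a minimal sketch in Python of the general idea as I read it: snapshots of the machine's state are clustered into a handful of macroscopic states, and each cluster is mapped to a parameter of a fractal so that different states produce visibly different patterns. The clustering method (a simple k-means here) and the use of a Julia set are my own assumptions for illustration; the paper does not commit to these specific choices.

```python
import numpy as np

def kmeans(snapshots, k, iters=50, seed=0):
    """Group machine-state snapshots (rows) into k macroscopic states."""
    rng = np.random.default_rng(seed)
    centers = snapshots[rng.choice(len(snapshots), k, replace=False)]
    for _ in range(iters):
        # assign each snapshot to its nearest center
        dists = np.linalg.norm(snapshots[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned snapshots
        for j in range(k):
            if np.any(labels == j):
                centers[j] = snapshots[labels == j].mean(axis=0)
    return labels, centers

def julia_pattern(c, size=128, iters=40):
    """Render a small Julia-set image for the complex parameter c."""
    y, x = np.mgrid[-1.5:1.5:size * 1j, -1.5:1.5:size * 1j]
    z = x + 1j * y
    escape = np.zeros(z.shape, dtype=int)
    for i in range(iters):
        mask = np.abs(z) <= 2.0
        z[mask] = z[mask] ** 2 + c
        escape[mask] = i
    return escape  # escape-time values, i.e. the fractal pattern

# Fake 20-dimensional snapshots of sensory input, processing and output.
snapshots = np.random.default_rng(1).normal(size=(200, 20))
labels, centers = kmeans(snapshots, k=4)

# Map each macroscopic state to its own fractal parameter (arbitrary values here).
params = [complex(-0.4 + 0.1 * j, 0.6 - 0.05 * j) for j in range(4)]
current_state = labels[-1]                    # cluster of the latest snapshot
frame = julia_pattern(params[current_state])  # pattern shown at the display's center
```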
In the image below, you can see a representation of four particularly distinct states. The one in the center represents the current state of the machine, while the outer rings carry earlier fractals, partially overlapping with temporally adjacent patterns (Credit: University of Zürich).
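The concentric-ring timeline could work along the lines of the following sketch, which is my own reading of the figure and not the authors' code: the newest pattern fills the central disc and each older pattern is pushed one ring outward, so a glance at the whole display shows how the machine's macroscopic state has been changing over time.

```python
from collections import deque
import numpy as np

def compose_rings(history, size=256):
    """Build one image: newest frame in the central disc, older frames on outer rings."""
    yy, xx = np.mgrid[-1.0:1.0:size * 1j, -1.0:1.0:size * 1j]
    radius = np.sqrt(xx ** 2 + yy ** 2)       # 0 at the center, larger toward the edges
    ring_width = 1.0 / max(len(history), 1)
    display = np.zeros((size, size))
    for age, frame in enumerate(history):     # age 0 = current state
        ring = (radius >= age * ring_width) & (radius < (age + 1) * ring_width)
        # sample the frame's pattern at the pixels belonging to this ring
        fy = ((yy[ring] + 1) / 2 * (frame.shape[0] - 1)).astype(int)
        fx = ((xx[ring] + 1) / 2 * (frame.shape[1] - 1)).astype(int)
        display[ring] = frame[fy, fx]
    return display

# Keep, say, the four most recent fractal frames, newest first.
history = deque(maxlen=4)
history.appendleft(frame)          # 'frame' from the sketch above
image = compose_rings(history)     # ready to show with e.g. matplotlib's imshow
```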
Is this the only way to guess machine intentions? Not necessarily.
It's not clear that the researchers' approach is necessary, said Jeffrey Nickerson, an associate professor of computer science at Stevens Institute of Technology. Autonomous machines could be programmed to explicitly represent their intentions, he said. "If understanding intentions is hard, then why not force the machine to provide indications of intentions, or at least a trace of reasoning?"
The research work, "Towards genuine machine autonomy," was published in the March 31, 2004, issue of the journal Robotics and Autonomous Systems. Here is a link to the abstract.
We investigate the consequences and perspectives resulting from a strict concept of machine autonomy. While these kinds of systems provide computationally and economically cheaper solutions than classically designed systems, their behavior is not easy to judge and predict. Analogously to human communication, a way is needed to communicate the state of the machine to an observer. In order to achieve this, we reduce the proliferation of microscopic states to a manageable set of macroscopic states, using a clustering method. The autonomous machine communicates these macroscopic states by means of a visual interface. Using this interface, the observer is capable of learning to associate machine actions and states, allowing it to make judgments on, and predictions of, behavior. This emerged to be the crucial ingredient needed for the interaction between humans and autonomous machines.
For more information, you can read the full paper (PDF format, 9 pages, 959 KB), where I found the above image.
Sources: Eric Smalley, Technology Research News, June 16/23, 2004; and various websites