In the aftermath of the September 11 terrorist attacks on the USA, a special
feature on automatic electronic surveillance (i.e. Echelon, Carnivore, spy
satellites, and all that) was broadcast by the BBC (ClickOnline, hosted by
Stephen Cole, Sep. 22).
The feature included a lengthy interview with Dr. Kevin O'Brian of RAND
Europe about the failure of US intelligence to gather enough information to
pre-empt the attacks. Of particular interest to RISKS readers is the
following quote from Dr. O'Brian:
"We've seen reports that they may have actually been spoofing or
misdirecting intelligence services quite knowingly, and that they
are aware of the fact that they could use the technology against
the intelligence services by sending out false signals by sending
out false reports and rumours, by using technology such as mobile
phone communications or Internet messages to actually misdirect
the intelligence services' gaze away from their attacks."
The risks are obvious: the over-reliance on massive computer-based automatic
systems for scanning and filtering that has characterised much of US
intelligence gathering in the post-Soviet era can only be effective as long
as the bad guys are not aware of what you are doing. The simple fact that
computer systems are rule-based (and AI systems exceedingly so) permits
enemy agents to play clever counter-intelligence games, in which plotting the
response to certain stimuli can be used to "map out" in detail how an
automatic surveillance system will respond to diverse inputs and hence
"learn" how to misdirect the system on a massive scale.
A human-based intelligence system, in particular a highly organized one,
is of course also vulnerable to this type of attack, but the rule-based
nature of an AI-based system makes the attack easier and more reliable.
- gisle hannemyr ( gisle@hannemyr.no - http://hjem.sol.no/gisle/ ) [Gisle Hannemyr via risks-digest Volume 21, Issue 68]