Monday, August 30, 1993
Is discussing risks RISKY? I would like to see more discussion of this topic
-- even though it's been discussed in years past. I agree completely with
PGN, who suggests that many people (I'd argue the majority) are living with
blinders on. Even those on the provider/vendor side who should understand
the risks of certain technologies (cellular phones being an obvious
example), have a) underrated the intelligence of potential adversaries,
b) overestimated the cleverness of their own technology, c) underestimated
the speed at which exploitation information and devices would be
disseminated, d) assumed that the using public can't be hurt by what they
don't know, and e) let the magnitude of the financial rewards overshadow
everything. Perhaps more open discussion -- and knowledge that such
discussion -was- going to happen -- would encourage providers not to
make naive assumptions regarding the risks and might cause users to demand
more of the products they buy. (Where have we heard that before?)
Anyway -- one approach to the problem has developed over the last few years
(since the Internet worm incident, to be more precise) that might be worth
noting. A voluntary cooperative group of security incident response teams
known as FIRST (Forum of Incident Response and Security Teams) has developed
to address the problem of sharing potentially risky information without
giving away the store in the process. Member teams include response teams
representing a wide range of "constituencies", including the Internet (i.e.,
CERT), various government agencies (e.g., DISA/ASSIST for DoD, Dept of
Energy's CIAC, CCTA for the UK, SurfNET in the Netherlands, etc.),
private sector organizations, vendors, and academia. Member teams share
information on both latent and active system vulnerabilities through a
series of alerts issued by the various teams. The alerts attempt to walk
the fine line of describing a problem in sufficient detail (along with
corrective actions) without providing enough information for exploitation.
By initially distributing alerts only among member teams (and careful
vetting of members), there is reasonable control over distribution.
While this certainly has not solved the problems associated with identifying
and closing system or network risks, it has, I believe, made great strides
toward building trust and mutual support through effective information sharing
and cooperation. Other groups have used a similar approach to address similar
problems -- e.g., the sharing of virus information. I would be quite
problems -- e.g., the sharing of virus information. I would be quite
interested to hear how others have addressed the problem. [dds@csmes.ncsl.nist.gov (Dennis D. Steinauer) via risks-digest Volume 15, Issue 01]
Maximillian Dornseif, 2002.