Updated: 26.11.2002; 23:24:58.
disLEXia
lies, laws, legal research, crime and the internet
        

Tuesday, November 26, 2002

Extra line in Chemical Bank program doubles ATM withdrawals

An extra line of code, meant to stay "dormant" for now, caused Chemical Bank to deduct twice any amount its customers withdrew from ATMs on Tuesday night and Wednesday. However, the bank received praise from the state consumer board for its prompt and open response to the problem.

My information comes from articles in The New York Times, 18 Feb 1994, p. A1, and 19 Feb 1994, p. C1. The new line of code was part of a year-long effort to add functionality to the ATMs. It sent a copy of each ATM withdrawal to a different computer system (the one that handles paper checks), which then deducted the money a second time. This second system is run only overnight, so the problem was not detected until Thursday morning.

About 430 checks were bounced incorrectly as a result, but Chemical contacted the customers affected and offered to pay any charges they incurred or to write letters of explanation to the recipients of the checks. The NY state consumer board has also asked the bank to refund any fees for the ATM transactions that were processed incorrectly.

About 150,000 ATM transactions were incorrectly doubled, amounting to $15 million. (Last year in the US there were about 7 billion ATM transactions averaging $50, according to the NYT article.)

Steven Bloom, who runs a consulting firm in NJ, said: "There are similar episodes that take place all the time, but we never hear about them because the bank is able to get the accounts straight before it opens its doors in the morning. The problem in this case is the ATM system is highly visible and runs 24 hours a day, seven days a week."

-John Sullivan@geom.umn.edu

[Also noted by Linn H. Stanton, Mark Bergman, Jeremy Epstein, "Greg D.", and PGN. I took John's version because it was the most Digest-able, although not entirely consistent with the others. Further sources included the following clips:]

In one of the biggest computer errors in banking history, Chemical Bank mistakenly deducted about $15 million from more than 100,000 customers' accounts on Tuesday night, causing panic and consternation among its customers around the New York area. The mistake affected 150,000 transactions from Tuesday night through Wednesday afternoon. Some checks were bounced Thursday morning as a result, although the bank said the number was small. [The New York Times, Friday 18 Feb 1994]

"Millions of dollars vanished from New Yorkers' bank balances Wednesday, when a computer deducted $2 from accounts for every $1 withdrawn from automated teller machines." [...] Sean Kennedy, president of the Electronic Funds Transfer Association (a trade group), said "I'm beginning to learn that it does happen from time to time [and] usually it's a software error". [The Washington Post, 18 Feb 1994, from Jeremy Epstein]

Customers stormed into Chemical Banking Corp's branch offices to complain of empty accounts and bounced cheques after a computer glitch affected at least 70,000 of the bank's approximately one million customers. [The Financial Post, a Canadian business paper, from Greg D.] [sullivan@msri.org (John Sullivan) via risks-digest Volume 15, Issue 57]
Will be moved to Sunday, February 20, 1994 - 23:01

What (else) happens when the airbag in your car is detonated?

[Autoweek 7 Feb. 1994]

A British Ford dealer set out to impress potential purchasers with the burglar-proof features of the new Ford Mondeo by staging a break-in in his showroom.

As a room full of potential customers watched, the hired thief walked up to the front of the car and gave it a swift kick in the bumper, near the airbag sensor. The bag inflated, AND the central locking system disengaged. The thief then opened the door, quickly broke the steering column lock, hot-wired the ignition and started the car.

News spread quickly, and copycat incidents have followed.

Autoweek says "Sales of The Club should increase."

Historical Anecdote: Word from friends in MoTown was that when Ford was testing the very first airbags in police cars, the fuel cut-off relay was triggered by the same impact-sensing circuit as the airbag. Street-wise evaders found this out and would tap the police car's bumper to trigger its airbag when the cops got too close in pursuit, disabling the pursuing vehicle. (This may also be how Ford was able to guarantee the ability to inspect the vehicle after the bags were deployed, since it was a testing situation.)

--Bill caloccia@Team.Net caloccia@Stratus.Com

[The first item was also noted by Chip Olson. PGN] [William Caloccia via risks-digest Volume 15, Issue 57]
Will be moved to Thursday, February 17, 1994 - 22:56

Canada to monitor phone calls, fax, etc.?

The Canadian security intelligence service is trying to build equipment to keep records of all conversations carried by millions of airborne phone, fax, radio and other transmissions. The first thing that comes to mind about this high-tech snooping gadget is that it violates people's trust and confidence: nobody can ever be confident of having a private conversation, because the government keeps records of what was said. This monitoring of phone calls is an invasion of privacy. As other examples in the textbook of RISKS Forum digest contributions show, computers make mistakes; in the Canadian government's case, a computer error could cause someone to be accused of something he or she did not do (for example, because two people have the same name). Another risk is the possibility of an intruder breaking into the system and erasing or altering data; an intruder changing the data could put other people at risk. Computers are not always to be trusted: they can make errors, or someone can introduce errors by changing the data. The Canadian security service's hardware will have the same problems, but the main issue is that the Canadian government is taking advantage of new technology to invade people's private lives. [eng350q3@csulb.edu (Sahel Alleyasin) via risks-digest Volume 15, Issue 55]
Will be moved to Tuesday, February 15, 1994 - 22:43

No switch on new Sun Microphone

A recent product announcement from Sun Microsystems (SunFLASH Vol 62 #8, 4 February 1994) introduces a "new microphone, SunMicrophone II, to ship with current and new Sun desktop platforms". Among the features described in the announcement for this "uni-directional microphone which allows greater focus on direct voice input while providing less interference from background ambient noise" is the following Q&A:

Q. Does the SunMicrophone II look similar to the SunMicrophone?

A. No, the two products look very different. The current SunMicrophone has a unique square shape, with an on/off switch. The SunMicrophone II looks like a classic microphone on a rectangular stand, with no on/off switch. Both products come in Sun colors and with Sun logo.

So, the new, "improved" model has no "on/off" switch, although the old one did. Maybe the new microphone is "uni-directional", but that doesn't mean it can't pick up ambient sound--just turn up the gain.

This "improvement" makes it all the more difficult to follow the final recommendation of CERT Advisory CA-93:15 (21 October 1993), quoted in part below. It's bad enough that the problem existed in the first place, but Sun has now made it worse!

III. /dev/audio Vulnerability

This vulnerability affects all Sun systems with microphones. ...

A. Description

/dev/audio is set to a default mode of 666. There is also no indication to the user of the system that the microphone is on.

B. Impact

Any user with access to the system can eavesdrop on conversations held in the vicinity of the microphone.

C. Solution

[...] *** Any site seriously concerned about the security risks associated with the microphone should either switch off the microphone, or unplug the microphone to prevent unauthorized listening. ***
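(An aside, not part of the advisory: the systems-side fix amounts to taking away world access to the audio device. A minimal sketch, assuming the usual SunOS device path; which user, if any, should then own the device is a local policy decision:)

% ls -l /dev/audio
crw-rw-rw-  1 root  ...  /dev/audio
# chmod 600 /dev/audio
# chown someuser /dev/audio

[The chmod/chown lines are run as root; "someuser" is a placeholder for whoever is allowed to record.]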

Even if this vulnerability is fixed from a systems viewpoint, a user is still vulnerable to Trojan horse programs that exploit the user's own (legitimate) access to the microphone--and the information discussed in a person's office may be far more sensitive than the information stored on an office computer.

This is especially a problem for multi-level secure (MLS) systems. Although MLS systems offer protection against disclosure of information by Trojan horse programs, that's no help at all if the microphone picks up a Top Secret conversation that occurs in the office while the user happens to be logged in at Unclassified. Sure--one might look around to be sure there's nobody who can inadvertently overhear, or close the office door--but the computer? Computers don't eavesdrop, do they?

Computer manufacturers need to address these risks. It's certainly nifty to have desktop audio- and video-conferencing, but not when that is equivalent to installing a bug in every office (and remember not to aim your video camera at the whiteboard).

Every microphone and video camera should have a positive on/off switch and some positive indication (such as a light) to show when it's actually in use (as opposed to just being enabled by the on/off switch). The broadcast industry learned this years ago with its "ON THE AIR" lights. Fail-safes, such as permitting only manual activation (while allowing computer deactivation) or requiring manual confirmation of any attempted activation, would be better still.

Olin Sibert | Internet: Sibert@Oxford.COM
Oxford Systems, Inc. | UUCP: uunet!oxford!sibert

[Olin Sibert via risks-digest Volume 15, Issue 55]
Will be moved to Tuesday, February 15, 1994 - 22:42

Another ATM "front end" fraud - this time caught

An article in London's Evening Standard of February 11 says that "in one of the most ingenious and innovative high-tech crimes of recent years", culprits planted a fake ATM card reader at a London branch of the Midland Bank. In a variation on the theme, the reader was not planted on top of the ATM, but was installed to emulate the door-opening devices which most banks use. Users were asked to swipe their cards through the device, and then type in their PINs, to gain admission to the ATM hall.

A suspicious customer informed the bank. Some customers had used the device unsuspectingly, but no money was stolen.

I see the following developments:

- As we know, thieves are well able to reproduce magnetic swipe cards. They no longer need to steal people's cards to gain access to their accounts; any scheme which yields the card number and PIN will do. If this plan really qualified as "ingenious", it would have transmitted the data by radio directly to the thieves' card-making machine, and the resulting cards would have been used without delay.

- The article was on the front page of a popular newspaper. Although it did contain some excess verbiage (such as the quote above), it also contained all the salient technical details and described the extent of the scheme's success and its outcome. There is a quote from a bank spokesman and a quote from the police. I've never seen such a complete description of a RISK-worthy story in such a prominent position. Is this a sign that the non-technical public are becoming more aware of the risks of technology, or at least more interested in them?

Jonathan Haruni [jharuni@london.micrognosis.com (Jonathan Haruni) via risks-digest Volume 15, Issue 54]
Will be moved to Monday, February 14, 1994 - 22:40

Voice-mail phreaking

Hacker attempts to chase cupid away

SAN FRANCISCO (UPI, 10 Feb 1994) -- Two bachelors who rented a billboard to find the perfect mate said Thursday they had fallen victim to a computer hacker who sabotaged their voice mail message and made it X-rated. Steeg Anderson said the original recording that informed callers how they might get hold of the men was changed to a "perverted", sexually suggestive message. He said the tampering occurred sometime Wednesday. [United Press newswire via Executive News Service (GO ENS) on CompuServe]

The article states that Pacific Bell has been investigating other voice-mail tampering recently as well.

Michel E. Kabay, Ph.D., Director of Education, National Computer Security Assn ["Mich Kabay / JINBU Corp." <75300.3232@CompuServe.COM> via risks-digest Volume 15, Issue 54]
Will be moved to Monday, February 14, 1994 - 22:40

Pacific Bell Customers Get Unpleasant Messages

Pacific Bell customers get messages on voice mail that they'd rather not hear [Valley Times (Livermore Valley area), 10 Feb 1994]

Electronic hackers have been intruding into the Pacific Bell voice mail service. "The hackers have broken into the system, altering message greetings and changing passwords, which can keep legitimate users out of their mailbox." Pacific Bell spokeswoman Sandy Hale said that it is a rare occurrence. Patrice Papalus, director of the San Francisco-based Computer Security Institute, said "Telecommunications, computer and switchboard fraud is on the increase... Breaking into voice mail is really common."

The article went on to say that two teenagers, who were infuriated because they didn't receive a free computer game poster in a magazine promotion, broke into IDG's voice-mail system and distributed obscene messages and greetings to female employees. In some cases, customers couldn't get through.

"The violations are unauthorized use of telephone services and a computer crime," said Joe Cancilla, an Asst. V.P. of external affairs with Pac Bell. Etc.

Lin Zucconi zucconi@llnl.gov ["Lin Zucconi" via risks-digest Volume 15, Issue 51]
Will be moved to Thursday, February 10, 1994 - 22:31

A RISE IN INTERNET BREAK-INS SETS OFF A SECURITY ALARM [Excerpt] - By PETER H. LEWIS, c.1994 N.Y. Times News Service

NEW YORK -- Citing computer-security violations of unprecedented scope, security experts have issued a warning that unknown assailants have been breaking into scores of government, corporate and university computers connected to the global Internet communications network. Saying that it had been ``flooded'' with reports of computer break-ins in the last week, the federally supported Computer Emergency Response Team broadcast its warning late Thursday night over the Internet, a web of computer networks used by an estimated 15 million people in the United States and abroad. Sophisticated software, secretly planted on various computers throughout the Internet, has allowed unknown intruders to steal passwords and electronic addresses from legitimate users, computer security experts said Friday.

[The full article summarizes a situation that by now should be familiar to RISKS readers. See the following CERT Advisory, and a comment from Klaus Brunnstein. See also articles the same day in the Washington Post and elsewhere. PGN] ["Peter G. Neumann" via risks-digest Volume 15, Issue 45]
Will be moved to Tuesday, February 8, 1994 - 22:26

Don't trust the phone company

I am the victim of false accusations.

My wife and I were at home some time last week. I was busy cooking dinner, and my wife was busy chasing our two-year-old, when we received a phone call, which my wife answered. The fellow on the other end of the line was extremely irate. His wife had been receiving obscene phone calls for some time. He had purchased the service from the phone company that allows you to call back the last person to dial you. After his wife had hung up on the obscene call she'd just received, he had used this feature to righteously confront her abuser. Instead he had dialed us.

This was somewhat perplexing until, a few minutes later, my wife's best friend called. Immediately after saying hello, my wife began relating this strange occurrence to her friend. Her friend then told my wife that it was her own husband who had made the call using this phone service.

This has put a heavy strain upon my wife's relationship with her friend, because her friend's husband has assumed that I am the author of these obscene calls. I barely have time for all the things that fill my life; I have no time for, or interest in, making such calls.

It is my belief that my wife had tried to call her best friend during the obscene phone call. This attempt overwrote the perpetrator's number, so that when the call back service was used, our phone rang instead.

If there are any knowledgeable netters out there who could give me any more info, I'd appreciate it.

Regards Tom Bodine [tbodine@utig.ig.utexas.edu (Tom Bodine) via risks-digest Volume 15, Issue 46]
Will be moved to Tuesday, February 8, 1994 - 22:24

CERT Advisory - Ongoing Network Monitoring Attacks

CA-94:01  CERT Advisory  February 3, 1994
Ongoing Network Monitoring Attacks

In the past week, CERT has observed a dramatic increase in reports of intruders monitoring network traffic. Systems of some service providers have been compromised, and all systems that offer remote access through rlogin, telnet, and FTP are at risk. Intruders have already captured access information for tens of thousands of systems across the Internet.

The current attacks involve a network monitoring tool that uses the promiscuous mode of a specific network interface, /dev/nit, to capture host and user authentication information on all newly opened FTP, telnet, and rlogin sessions.

In the short term, CERT recommends that all users on sites that offer remote access change passwords on any network-accessed account. In addition, all sites having systems that support the /dev/nit interface should disable this feature if it is not used and attempt to prevent unauthorized access if the feature is necessary. A procedure for accomplishing this is described in Section III.B.2 below. Systems known to support the interface are SunOS 4.x (Sun3 and Sun4 architectures) and Solbourne systems; there may be others. Sun Solaris systems do not support the /dev/nit interface. If you have a system other than Sun or Solbourne, contact your vendor to find out whether this interface is supported.

While the current attack is specific to /dev/nit, the short-term workaround does not constitute a solution. The best long-term solution currently available for this attack is to reduce or eliminate the transmission of reusable passwords in clear-text over the network.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

I. Description

Root-compromised systems that support a promiscuous network interface are being used by intruders to collect host and user authentication information visible on the network.

The intruders first penetrate a system and gain root access through an unpatched vulnerability (solutions and workarounds for these vulnerabilities have been described in previous CERT advisories, which are available via anonymous FTP from info.cert.org).

The intruders then run a network monitoring tool that captures up to the first 128 keystrokes of all newly opened FTP, telnet, and rlogin sessions visible within the compromised system's domain. These keystrokes usually contain host, account, and password information for user accounts on other systems; the intruders log these for later retrieval. The intruders typically install Trojan horse programs to support subsequent access to the compromised system and to hide their network monitoring process.

II. Impact

All connected network sites that use the network to access remote systems are at risk from this attack. All user account and password information derived from FTP, telnet, and rlogin sessions and passing through the same network as the compromised host could be disclosed.

III. Approach

There are three steps in CERT's recommended approach to the problem:

- Detect if the network monitoring tool is running on any of your hosts that support a promiscuous network interface.

- Protect against this attack either by disabling the network interface for those systems that do not use this feature or by attempting to prevent unauthorized use of the feature on systems where this interface is necessary.

- Scope the extent of the attack and recover in the event that the network monitoring tool is discovered.

A. Detection

The network monitoring tool can be run under a variety of process names and log to a variety of filenames. Thus, the best method for detecting the tool is to look for 1) Trojan horse programs commonly used in conjunction with this attack, 2) any suspect processes running on the system, and 3) the unauthorized use of /dev/nit.

1) Trojan horse programs:

The intruders have been found to replace one or more of the following programs with a Trojan horse version in conjunction with this attack:

/usr/etc/in.telnetd and /bin/login - Used to provide back-door access for the intruders to retrieve information

/bin/ps - Used to disguise the network monitoring process

Because the intruders install Trojan horse variations of standard UNIX commands, CERT recommends not using commands such as the standard UNIX sum(1) or cmp(1) to locate the Trojan horse programs on the system until these programs can be restored from distribution media, run from read-only media (such as a mounted CD-ROM), or verified using cryptographic checksum information. In addition to the possibility of having the checksum programs replaced by the intruders, the Trojan horse programs mentioned above may have been engineered to produce the same standard checksum and timestamp as the legitimate version. Because of this, the standard UNIX sum(1) command and the timestamps associated with the programs are not sufficient to determine whether the programs have been replaced.

CERT recommends that you use both the /usr/5bin/sum and /bin/sum commands to compare against the distribution media and confirm that the programs have not been replaced. The use of cmp(1), MD5, Tripwire (only if the baseline checksums were created on a distribution system), and other cryptographic checksum tools is also sufficient to detect these Trojan horse programs, provided these tools were not available for modification by the intruder. If the distribution is available on CD-ROM or other read-only device, it may be possible to compare against these volumes or run programs off these media.
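To make the comparison concrete, suppose the SunOS distribution is mounted read-only under /cdrom (the mount point and file choices here are illustrative, not part of the advisory); one would then compare each suspect binary against its distribution copy with both checksum programs and with cmp(1):

# /bin/sum /bin/login /cdrom/bin/login
# /usr/5bin/sum /bin/login /cdrom/bin/login
# cmp /usr/etc/in.telnetd /cdrom/usr/etc/in.telnetd

Matching sums from both programs, and a silent cmp(1), are what one hopes to see; any difference means the local copy should be replaced from the media.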

2) Suspect processes:

Although the name of the network monitoring tool can vary from attack to attack, it is possible to detect a suspect process running as root using ps(1) or other process-listing commands. Until the ps(1) command has been verified against distribution media, it should not be relied upon, because a Trojan horse version is being used by the intruders to hide the monitoring process. Some process names that have been observed are sendmail, es, and in.netd. The arguments to the process also provide an indication of where the log file is located: if the "-F" flag is set on the process, the filename following it indicates the location of the log file used for collecting authentication information for later retrieval by the intruders.
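As a rough illustration of that inspection (only meaningful once ps itself has been verified, as noted above; the pattern below simply picks out the names reported in this advisory), one might list root processes and then scrutinize each match and its arguments, especially any -F flag:

# ps -auxww | grep '^root' | egrep 'sendmail|in\.netd| es '

[A legitimate sendmail will match too; the point is to examine each candidate's argument list by hand, not to trust the pattern.]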

3) Unauthorized use of /dev/nit:

If the network monitoring tool is currently running on your system, it is possible to detect this by checking for unauthorized use of the /dev/nit interface. CERT has created a minimal tool for this purpose. The source code for this tool is available via anonymous FTP on info.cert.org in the /pub/tools/cpm directory or on ftp.uu.net in the /pub/security/cpm directory as cpm.1.0.tar.Z. The checksum information is:

Filename         Standard UNIX Sum   System V Sum
--------------   -----------------   ------------
cpm.1.0.tar.Z    11097 6             24453 12

MD5 checksum:

MD5 (cpm.1.0.tar.Z) = e29d43f3a86e647f7ff2aa453329a155
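So, before unpacking a fetched copy of the archive, one would check it against the figures above (the md5 program name below is an assumption; sites without an MD5 utility can rely on the two sum values):

% /bin/sum cpm.1.0.tar.Z
% /usr/5bin/sum cpm.1.0.tar.Z
% md5 cpm.1.0.tar.Z

If the numbers printed do not match the table and the MD5 line given above, discard the archive and fetch it again.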

This archive contains a readme file, also included as Appendix C of this advisory, containing instructions on installing and using this detection tool.

B. Prevention

There are two actions that are effective in preventing this attack. A long-term solution requires eliminating transmission of clear-text passwords on the network. For this specific attack, however, a short-term workaround exists. Both of these are described below.

1) Long-term prevention:

CERT recognizes that the only effective long-term solution to prevent these attacks is to stop transmitting reusable clear-text passwords on the network. CERT has collected some information on relevant technologies; it is included as Appendix B of this advisory. Note: These solutions will not protect against transient or remote access transmission of clear-text passwords through the network.

Until everyone connected to your network is using the above technologies, your policy should allow only authorized users and programs access to promiscuous network interfaces. The tool described in Section III.A.3 above may be helpful in verifying this restricted access.

2) Short-term workaround:

Regardless of whether the network monitoring software is detected on your system, CERT recommends that ALL SITES take action to prevent unauthorized network monitoring on their systems. You can do this either by removing the interface, if it is not used on the system, or by attempting to prevent misuse of the interface.

For systems other than Sun and Solbourne, contact your vendor to find out if promiscuous mode network access is supported and, if so, what the recommended method is to disable or monitor this feature.

For SunOS 4.x and Solbourne systems, the promiscuous interface to the network can be eliminated by removing the /dev/nit capability from the kernel. The procedure for doing so is outlined below (see your system manuals for more details). Once the procedure is complete, you may remove the device file /dev/nit since it is no longer functional.

Procedure for removing /dev/nit from the kernel:

1. Become root on the system.

2. Apply "method 1" as outlined in the System and Network Administration manual, in the section, "Sun System Administration Procedures," Chapter 9, "Reconfiguring the System Kernel." Excerpts from the method are reproduced below:

# cd /usr/kvm/sys/sun[3,3x,4,4c]/conf
# cp CONFIG_FILE SYS_NAME

[Note that at this step, you should replace the CONFIG_FILE with your system specific configuration file if one exists.]

# chmod +w SYS_NAME
# vi SYS_NAME

#
# The following are for streams NIT support. NIT is used by
# etherfind, traffic, rarpd, and ndbootd. As a rule of thumb,
# NIT is almost always needed on a server and almost never
# needed on a diskless client.
#
pseudo-device   snit            # streams NIT
pseudo-device   pf              # packet filter
pseudo-device   nbuf            # NIT buffering module

[Comment out the three pseudo-device lines above; save and exit the editor before proceeding.]

# config SYS_NAME
# cd ../SYS_NAME
# make

# mv /vmunix /vmunix.old
# cp vmunix /vmunix

# /etc/halt
> b

[This step will reboot the system with the new kernel.]

[NOTE that even after the new kernel is installed, you need to take care to ensure that the previous vmunix.old, or any other kernel, is not used to reboot the system.]

C. Scope and recovery

If you detect the network monitoring software at your site, CERT recommends the following steps to determine the scope of the problem and to recover from this attack.

1. Restore the system that was subjected to the network monitoring software.

The systems on which the network monitoring and/or Trojan horse programs are found have been compromised at the root level; your system configuration may have been altered. See Appendix A of this advisory for help with recovery.

2. Consider changing router, server, and privileged account passwords due to the widespread nature of these attacks. Since this threat involves monitoring remote connections, take care to change these passwords using some mechanism other than remote telnet, rlogin, or FTP access.

3. Urge users to change passwords on local and remote accounts.

Users who access accounts using telnet, rlogin, or FTP either to or from systems within the compromised domain should change their passwords after the intruder's network monitor has been disabled.

4. Notify remote sites connected from or through the local domain of the network compromise.

Encourage the remote sites to check their systems for unauthorized activity. Be aware that if your site routes network traffic between external domains, both of these domains may have been compromised by the network monitoring software.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The CERT Coordination Center thanks the members of the FIRST community as well as the many technical experts around the Internet who participated in creating this advisory. Special thanks to Eugene Spafford of Purdue University for his contributions.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

If you believe that your system has been compromised, contact the CERT Coordination Center or your representative in Forum of Incident Response and Security Teams (FIRST).

Internet E-mail: cert@cert.org
Telephone: 412-268-7090 (24-hour hotline)

CERT personnel answer 8:30 a.m.-5:00 p.m. EST(GMT-5)/EDT(GMT-4), and are on call for emergencies during other hours.

CERT Coordination Center
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA 15213-3890

Past advisories, information about FIRST representatives, and other information related to computer security are available for anonymous FTP from info.cert.org.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Appendix A: RECOVERING FROM A UNIX ROOT COMPROMISE

A. Immediate recovery technique

1) Disconnect from the network or operate the system in single-user mode during the recovery. This will keep users and intruders from accessing the system.

2) Verify system binaries and configuration files against the vendor's media (do not rely on timestamp information to provide an indication of modification). Do not trust any verification tool such as cmp(1) located on the compromised system as it, too, may have been modified by the intruder. In addition, do not trust the results of the standard UNIX sum(1) program as we have seen intruders modify system files in such a way that the checksums remain the same. Replace any modified files from the vendor's media, not from backups.

-- or --

Reload your system from the vendor's media.

3) Search the system for new or modified setuid root files.

find / -user root -perm -4000 -print

If you are using NFS or AFS file systems, use ncheck to search the local file systems.

ncheck -s /dev/sd0a

4) Change the password on all accounts.

5) Don't trust your backups for reloading any file used by root. You do not want to re-introduce files altered by an intruder.

B. Improving the security of your system

1) CERT Security Checklist

Using the checklist will help you identify security weaknesses or modifications to your systems. The CERT Security Checklist is based on information gained from computer security incidents reported to CERT. It is available via anonymous FTP from info.cert.org in the file pub/tech_tips/security_info.

2) Security Tools

Use security tools such as COPS and Tripwire to check for security configuration weaknesses and for modifications made by intruders. We suggest storing these security tools, their configuration files, and databases offline or encrypted. TCP daemon wrapper programs provide additional logging and access control. These tools are available via anonymous FTP from info.cert.org in the pub/tools directory.

3) CERT Advisories

Review past CERT advisories (both vendor-specific and generic) and install all appropriate patches or workarounds as described in the advisories. CERT advisories and other security-related information are available via anonymous FTP from info.cert.org in the pub/cert_advisories directory.

To join the CERT Advisory mailing list, send a request to:

cert-advisory-request@cert.org

Please include contact information, including a telephone number.

CERT Coordination Center
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA 15213-3890

Copyright (c) Carnegie Mellon University 1994

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Appendix B: ONE-TIME PASSWORDS

Given today's networked environments, CERT recommends that sites concerned about the security and integrity of their systems and networks consider moving away from standard, reusable passwords. CERT has seen many incidents involving Trojan network programs (e.g., telnet and rlogin) and network packet sniffing programs. These programs capture clear-text hostname, account name, and password triplets. Intruders can use the captured information for subsequent access to those hosts and accounts. This is possible because 1) the password is used over and over (hence the term "reusable"), and 2) the password passes across the network in clear text.

Several authentication techniques have been developed that address this problem. Among these techniques are challenge-response technologies that provide passwords that are only used once (commonly called one-time passwords). This document provides a list of sources for products that provide this capability. The decision to use a product is the responsibility of each organization, and each organization should perform its own evaluation and selection.

I. Public Domain packages

S/KEY(TM)

The S/KEY package is publicly available (no fee) via anonymous FTP from:

thumper.bellcore.com /pub/nmh directory

There are three subdirectories:

skey   UNIX code and documents on S/KEY. Includes the change needed to login, and stand-alone commands (such as "key") that compute the one-time password for the user, given the secret password and the S/KEY challenge. (A usage sketch appears just after this list.)

dos    DOS or DOS/WINDOWS S/KEY programs. Includes a DOS version of "key" and "termkey", which is a TSR program.

mac    One-time password calculation utility for the Mac.
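For readers who have not seen S/KEY in use, a hedged sketch of how the stand-alone "key" command is typically invoked: the user supplies the challenge (a sequence number and seed, both invented here) plus the secret password, and reads back a six-word one-time password (the words below are made up for illustration):

% key 97 alpha1
Enter secret password:
LIVE ODD BOLT ACHE FLED TIN

The six words are then typed as the password at the remote login prompt; since the server only ever sees each response once, a sniffer that captures it gains nothing useful for later logins.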

II. Commercial Products

Secure Net Key (SNK) (do-it-yourself project)
Digital Pathways, Inc.
201 Ravendale Dr.
Mountain View, CA 94043-5216 USA
Phone: 415-964-0707
Fax: (415) 961-7487

Products: handheld authentication calculators (SNK004)
          serial line auth interrupters (guardian)

Note: Secure Net Key (SNK) is DES-based, and is therefore restricted from US export.

Secure ID (complete turnkey systems)
Security Dynamics
One Alewife Center
Cambridge, MA 02140-2312 USA
Phone: 617-547-7820
Fax: (617) 354-8836

Products: SecurID changing-number authentication card
          ACE server software

SecurID is time-synchronized using a 'proprietary' number-generation algorithm.

WatchWord and WatchWord II
Racal-Guardata
480 Spring Park Place
Herndon, VA 22070
Phone: 703-471-0892, 1-800-521-6261 ext 217

Products: WatchWord authentication calculator
          Encrypting modems

Alpha-numeric keypad, digital signature capability

SafeWord
Enigma Logic, Inc.
2151 Salvio #301
Concord, CA 94520
Phone: 510-827-5707
Fax: (510) 827-2593

Products: DES Silver card authentication calculator
          SafeWord Multisync card authentication calculator

Available for UNIX, VMS, MVS, MS-DOS, Tandem, Stratus, as well as other OS versions. Supports one-time passwords and super smartcards from several vendors.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Appendix C: cpm 1.0 README FILE

cpm - check for network interfaces in promiscuous mode.

Copyright (c) Carnegie Mellon University 1994
Thursday, February 3, 1994

CERT Coordination Center
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA 15213-3890

This program is free software; you can distribute it and/or modify it as long as you retain the Carnegie Mellon copyright statement.

It can be obtained via anonymous FTP from info.cert.org:pub/tools/cpm.tar.Z.

This program is distributed WITHOUT ANY WARRANTY; without the IMPLIED WARRANTY of merchantability or fitness for a particular purpose.

This package contains: README MANIFEST cpm.1 cpm.c

To create cpm under SunOS, type:

% cc -Bstatic -o cpm cpm.c

On machines that support dynamic loading, such as Sun's, CERT recommends that programs be statically linked so that this feature is disabled.

CERT recommends that after you install cpm in your favorite directory, you take measures to ensure the integrity of the program by noting the size and checksums of the source code and resulting binary.
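For instance (file names as in the package, the rest illustrative), the record might be made with:

% ls -l cpm.c cpm
% /bin/sum cpm.c cpm
% /usr/5bin/sum cpm.c cpm

Keeping that output offline (or on paper) lets you later check that neither the source nor the installed binary has been quietly replaced.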

The following is an example of the output of cpm and its exit status.

Running cpm on a machine where both the le0 and le2 interfaces are in promiscuous mode, under csh(1):

% cpm
le0
le2
% echo $status
2
%

Running cpm on a machine where no interfaces are in promiscuous mode, under csh(1):

% cpm
% echo $status
0
%

[CERT Advisory via risks-digest Volume 15, Issue 45]
Will be moved to Friday, February 4, 1994 - 22:21

Headline: "Child molesters use computer talk as bait"

This is the headline of an article in the 3/3/94 Boston Globe, on the front page of an inside Metro/Region section.

"For most parents, the thought of their child sitting in a bedroom and skillfully using a computer is a source of comfort and pride. Increasingly, however, the home computer has become a source of danger, as manipulative child molesters reach out to unsuspecting children through thousands of interactive and easy-to-use computer bulletin board systems."

... The news article triggering this piece is: A 23-year-old Chelmsford [Mass.] man pleaded not guilty to an attempted kidnapping charge after he allegedly used a computer bulletin board to attempt to coax a teen-ager into helping him abduct a young boy for sexual purposes.

The article goes on to explain BBS systems and how they allow impersonal contact between juveniles and child molesters. Law enforcement officials in Massachusetts have been concentrating on (and getting publicity for) investigating computer-assisted child abuse. There have been several other charges, and in 1992 a Cambridge man pleaded guilty to raping two boys whom he met through a BBS.

[Also noted by Bob_Frankston@frankston.com. PGN] [dtarabar@hstbme.mit.edu (David Tarabar) via risks-digest Volume 15, Issue 62]
Will be moved to Thursday, March 3, 1994 - 22:20

Czech computer fraud (More on RISKS-15.22)

From the Associated Press newswire via Executive News Service (GO ENS) on CompuServe:

Czech-Computer Fraud

PRAGUE, Czech Republic (AP, 19 Jan 1994) -- A bank employee was sentenced to eight years in prison for stealing nearly $1.2 million in the Czech Republic's first major computer fraud, a newspaper reported Wednesday. Martin Janku, an employee of the Czech Savings Bank in Sokolov, transferred money to his own account in the bank with the help of his own computer program between September 1991 and April 1992, the daily Mlada Fronta Dnes said.

The article continues with a few details:

o Janku was arrested when he tried to withdraw money at a branch where a teller recognized him as a programmer she'd met during training;

o sentenced to 8 years in jail;

o claims he was testing bank security;

o returned about $1 million of the money he stole; the rest, he says, was stolen from his car.

[Moral: never test someone's security systems without written authorization from the right people.]

Michel E. Kabay, Ph.D., Director of Education, National Computer Security Assn ["Mich Kabay / JINBU Corp." <75300.3232@CompuServe.COM> via risks-digest Volume 15, Issue 44]
Will be moved to Sunday, January 30, 1994 - 22:12

E-Mail Fraud

Electronic Mail Fraud, by Lori Carrig

With the advent of E-Mail come several risks associated with this modern vehicle of information; one of them is E-Mail fraud. Just as with mail and telephone fraud, E-Mail fraud has come of age. Before I delve into this problem, let me give an example of an incident I worked on:

I received a notice from a user that she had received E-Mail from an Internet address promoting computer chips for sale. She had not received the items she had paid for and wanted to report it. She gave the following account:

She had received and responded to E-Mail from this address about the sale of the described items. After several exchanges of E-Mail with the culprit, the price was determined and she inquired about the method of payment for the desired items. The culprit explained that they preferred a money order; they would accept a check, but would not ship the items until after the check had cleared. A name and address, but no phone number, were given for sending the payment. The culprit stated that they would ship the items after 5-7 days. After 9 days the victim sent an E-Mail message to the culprit asking whether they had received the check. A message came back stating that they had received the check and were waiting for it to clear the bank. After 30 days the victim still had not received the items she had ordered and paid for.

The risk here is quite apparent. As with mail and telephone fraud, E-Mail fraud can cause extreme losses to users. What would prevent such an incident? Well, here are some of the precautions I have seen:

1. Order from known companies like Intel, Microsoft, CompUSA, and so on.

2. Pay with a credit card. You have the right to cancel the payment within 30 days if the items have not been received. There are other risks associated with giving your credit card number out, but I will not address them at this time.

3. Request a phone number and call the vendor to confirm the order.

The best prevention is number 1 above, plus common sense. Know who you are dealing with before any money changes hands. Items 2 and 3 above carry other risks which, in turn, may open the door to other forms of fraud.

There are other examples of E-Mail fraud, but I will not address them at this time; I will leave that for future releases. With estimated losses to computer crime ranging from $3 billion to $6 billion (1), there is cause for alarm.

If you suspect that you are a victim of E-Mail fraud, contact your local police department. Please note that only 11% of computer crimes are ever reported to law enforcement.(2)

The opinions above are mine only and DO NOT reflect any other party's position.

Lori Carrig carrigl@nic.ddn.mil

Footnotes:

(1) Publisher: Search, Sacramento, CA
(2) Publisher: Law and Order, September 1990

[carrigl@fire.nic.ddn.mil (Lori Carrig) via risks-digest Volume 15, Issue 41]
Will be moved to Wednesday, January 26, 1994 - 22:08

Blocking order nearly overturned

The Administrative Court of Minden has ruled that one of the Internet service providers involved does not have to implement the blocking order until the lawfulness of blocking certain websites has been legally clarified.

Although this settles neither the lawfulness nor the unlawfulness of the disputed blocking order issued by the Düsseldorf district government, the association of the German Internet industry (eco) is pleased. The court's decision is competent and appropriate, the association said in a statement, and the ruling will send a signal for 17 further pending court cases. "The court has recognized that the blocking order does not block," said Oliver J. Süme, eco's board member for law and new media. Given the technical infrastructure of the Internet, the content could not be made inaccessible in North Rhine-Westphalia or anywhere else. Even if all German Internet access providers implemented the block, the pages could still be reached by simple tricks, since the content has not been removed from the servers. The association now expects the other administrative courts to rule in favor of the providers in their expedited decisions as well, since there are considerable doubts about the lawfulness of the blocking order. [PC-Magazin]
Will be moved to Wednesday, November 13, 2002 - 21:54

Gator shoots back

Adware distributor Gator is now going to court to win the right to overlay other sites' pages in the browser with its pop-up ads. After all, the user asked for it. [intern.de]
Will be moved to Tuesday, November 12, 2002 - 21:50

XS4ALL appeal dismissed

The Dutch provider's appeal was intended to obtain legal clarification of when a provider must delete content and hand over customers' names. [intern.de]
Will be moved to Friday, November 8, 2002 - 21:49

SpamCop Blacklists Declan, Again

Declan McCullagh reports that his Politech server has been blacklisted by SpamCop -- for the third time. Longtime readers may... [Freedom To Tinker]
Will be moved to Monday, November 4, 2002 - 21:48

Logic Bomb planted in retribution for nonpayment

Excerpted from the Associated Press Newswire via Executive News Service (GO ENS) on CompuServe:

APn 11/23 0106 BRF--Computer Virus

WESTBURY, N.Y. (AP) -- A computer company owner and his technician are accused of planting a virus in a dissatisfied customer's computer system, after the customer refused to pay for a program. Michael Lofaro, 29, owner of MJL Design of Manhattan, and his technician, John Puzzo, 22, were charged Monday with attempted computer tampering and coercion, said Lt. Lawrence Mulvey of the Nassau County police.

The article explains that the maximum penalties are 4-7 years and up to $5,000 in fines. The client, William Haberman, owner of Forecast Inc., a furniture company in Westbury, complained about poor performance in a program sold by MJL Design and refused to pay the full invoice when the vendor allegedly ignored his complaints.

According to the accusation, Lofaro and Puzzo planted a ``computer virus'' [which I think is simply a logic bomb, judging from the phrasing--MK] and threatened to detonate it.

The accused were arrested when they came to defuse the logic bomb.

[Surprising to see the old confusion between viruses and logic bombs persisting in a newswire report.--MK] [Not surprising at all.--PGN]

Michel E. Kabay, Ph.D. Director of Education National Computer Security Assn ["Mich Kabay / JINBU Corp." <75300.3232@compuserve.com> via risks-digest Volume 15, Issue 29]
Will be moved to Tuesday, November 23, 1993 - 21:47

Who owns the unused cycles?

Earlier today I talked my sys-admin into letting me install the software to help factor RSA-129 on my workstation. When I mentioned how easily it installed he suggested I run it on a number of other workstations -- after all I had login permissions for them all.

An hour later a coworker was giving me a stern lecture about how I shouldn't run a process on his system in the background without getting his full permission first (not only to run it and to be assured that it would not consume resources, but also that it satisfied *his* requirements for legitimacy). The fact that the process was nice'd and had been previously approved by the sys-admin was considered irrelevant.

I've since talked to several other coworkers; about 1/3 feel the same as the coworker mentioned above, while the other 2/3 feel that if the system resources are available they can be used by anyone as long as they don't impact the primary user. *Everyone* appears to believe that their own view is obvious, although most admit that other views are not totally unreasonable.

This specific application is trivial, but what does this portend for the future? It's not hard to identify legitimate background tasks which could be run by businesses overnight, but will efforts to use idle resources run into hostility by workers who feel that ``their'' workstation or PC is being grabbed by others who don't respect their privacy or ownership? Would such distributed software be acceptable at night, or by users without any indication of system load (be it ``perf meters'' or flashing disk lights), but not by users who could notice such indications of active processing?

Distributed processing over LANs seems promising, but have users had individual PCs and workstations which acted alone too long for them to accept the idea of a supra-system computer?

Bear Giles bear@cs.colorado.edu/fsl.noaa.gov [Bear Giles via risks-digest Volume 15, Issue 29]
Will be moved to Thursday, November 18, 1993 - 21:45

Lawyer discovers the RISK of computer efficiency

From the New York Times, Friday November 12, 1993 (page B20):

At the Bar. David Margolick. "Court asks a lawyer, if a computer is doing most of the work, why the big fee?"

[Abstracted and excerpted] Craig Collins, a lawyer in San Mateo, California, used the West CD-ROM library, a system that contains every court opinion published in California in the last 33 years on three compact disks, to research a parental rights case. Under penalty of perjury, he swore that he had devoted 22 hours, ten of them over the Fourth of July weekend, to writing several memorandums concerning the rights of step-parents in custody cases. "At his normal rate of $225 an hour, that worked out to $4,950, part of his total tab of $9,591.50. The money was to come from the stepfather, who lost the case, provided it was approved by Judge Roderic Duncan of the Alameda County Superior Court."

"That was not quite what happened. Indeed, after deconstructing the mechanics of modern computer research, Judge Duncan not only balked, but handed Mr. Collins to the disciplinary enforcement section of the State Bar of California."

As it turned out, large portions of Mr. Collins's memorandums were copied directly from the court opinions, without attribution. Collins explained that he had quoted the courts at length because their language ``was better written than I would have composed it myself.'' The court, however, found that 22 hours was rather extreme for cutting and pasting, since Mr. Collins was an experienced lawyer. At the hearing, William P. Eppes II, a representative of the West Publishing Company, testified that Mr. Collins had used the system for a total of 9 hours and 33 minutes since he had purchased it. The witness, who was also a lawyer, testified that it seemed entirely plausible that Mr. Collins had put in the time he claimed.

The judge was impressed by the witness' reasoning and withdrew his claim that Mr. Collins had not worked as long as he said he did. "All those hours at the computer, the judge seemed to say, reflected inefficiency rather than dishonesty."

Although disciplinary proceedings were dropped, Mr. Collins is still displeased with the judge, whom, in an interview, he described as "a ``cavalier'' judicial ``maverick'' whose ill-considered opinions had periodically been criticized by the California courts of appeal. How did he know? He consulted his trusty CD-ROM, and plugged in the words ``Duncan'' and ``reversal.''"

["Quotes" are directly from the article. ``Quotes'' are quoted material in the original article.

On the same page of the Times, you will also find an interesting article on modern computerized fingerprint systems. The FBI has a database of 30 million unique cards and performs more than 32,000 searches per day. The modern systems can compare a print at rates faster than 1,000 per second.

Martin Minow minow@apple.com] [Martin Minow via risks-digest Volume 15, Issue 28]
Will be moved to Wednesday, November 17, 1993 - 21:43

_Naissance d'un virus_ soon to be published :-)

The French translation of "The Little Black Book of Computer Viruses", prepared by the general secretary of the Chaos Computer Club France (CCCF), will soon be published by Addison-Wesley France (fax: +33 1 48 87 97 99).

Naissance d'un Virus (Dec 1993, 237 pages, circa 98 FF).

Jean-Bernard Condat, PO Box 155, 93404 St-Ouen Cedex, France Phone: +33 1 47874083, fax: +33 1 47874919, email: cccf@altern.com [cccf@altern.com (cccf) via risks-digest Volume 15, Issue 26]
Will be moved to Wednesday, November 10, 1993 - 21:27

not so easy to be anonymous

In RISKS-15.19, Steven S. Davis points out that anonymous remailers (at least the one at anon.penet.fi) remove signatures beginning with -- lines. But there is a much more effective signature.

On the two occasions that I have been curious enough to investigate the real identity of anonymous posters, I have had no difficulty identifying them with a bit of searching about. Both of the people I was looking for had posted signed messages in the same or nearby groups and were readily identified. How? Consider Steven's text:

"In Risks 15.17, an32153@anon.penet.fi remarked upon the dangers of including a signature with anonymous postings. It's not quite as absurd as it seems, if someone uses a mailer that appends the signature automatically ( I can't imagine that anyone who cared about their anonymity, as opposed to those who just are assigned an anonymous id because they reply to somebody who uses one, would deliberately append a revealing signature ). The solution to that, at least on anon.penet.fi, is simple: The server considers anything after a line beginning with two dashes as a signature and cuts it off ( this can be a complication if someone tries to append a document to a message and uses a row of dashes to separate it from the main text ). So if you want to send mail anonymously, either dump your signature or be certain it starts with --."

Now, look at the style:

1) he has a unique habit of adding spaces after ( and before ).

2) the paren clauses come at the end of sentences. They are not dependent clauses, and the . comes outside the )

3) he uses commas before dependent clauses. (cf last sentence)

The meter is distinctive. (Read it aloud without paying attention to the words.) Ta-d-d-d-d-d, COMMA, d-d-d-d-d-d-d ( Ta-d-d-d-d-d-d, COMMA, ta-d-d -d-d-d-d-d ). Ta-d-d-d-d-d, COMMA, d-d-d-d-d-d-d ( Ta-d-d-d-d-d-d, COMMA, ta-d-d-d-d-d-d-d ).

I'm not picking on Steven; anyone who doesn't write in a formal, carefully corrected prose style will get caught by this.

It is real easy. And not so easy to really be anonymous.
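This kind of check is easy to mechanize, too. A sketch only, not something the poster describes, with the file names and the habit counted invented for illustration: count one distinctive habit across a signed posting and the anonymous one and compare the rates:

% grep -c ' ( ' signed-post.txt anon-post.txt

grep -c prints a per-file count of lines containing the " ( " spacing habit; similar one-liners can count the other tics, and the profile of counts is the fingerprint.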

[PGN adds: By the way, you might have mentioned line lengths. (But I use a standard of 78 for RISKS, so that the people who add "< " do not overflow, and I usually reblock longer or shorter lines.) I also usually neutralize the time zone on authored mailings to RISKS for which the author wishes to remain anonymous. You also did not mention giveaway mispelings. (I try to run every issue through a speling corekter.) As Tom Lehrer once wrote <and as my grepper notes I quoted 2.5 years ago in RISKS-11.48>,

Don't write naughty words on walls that you can't spell. ] [ariel@world.std.com (Robert L Ullmann) via risks-digest Volume 15, Issue 25]
Will be moved to Wednesday, November 10, 1993 - 21:27

Interesting book review --- Bruce Sterling's Hacker Crackdown

The adjective in the title may be taken to modify either the review or the books. Ian Stewart is a mathematician who writes wonderfully well, as readers may see by looking at his review, in the London Review of Books 15 (21) of 4 November 1993, of Bruce Sterling's `The Hacker Crackdown: Law and Disorder on the Electronic Frontier', Eric Raymond's edition of `The New Hacker's Dictionary', and Bryan Clough and Paul Mungo's `Approaching Zero, Data Crime and the Computer Underworld'. (I had wondered what Clough had been doing since he retired from soccer.)

Stewart refers to various incidents, such as the 15 Jan 1990 AT&T 4ESS problems, the Stoned virus, the Internet worm (but when will people stop deprecating Eric by implication?), and the Secret Service crackdown on Steve Jackson Games and `Knight Lightning'. Stewart's closing sentence: `"Approaching Zero" shows that we have a lot to fear from the activities of those (few) hackers who are genuinely malevolent. "The Hacker Crackdown" suggests that we have just as much to fear from programming errors - and that American citizens have far more to fear from their Secret Service.'

Peter Ladkin [Dr Peter B Ladkin via risks-digest Volume 15, Issue 24]
Will be moved to Tuesday, November 9, 1993 - 21:24

White House distributes STONED 3 virus

Heard on the Rush Limbaugh radio show of 10/29/93, not confirmed:

The White House distributed the 1300-page health care legislation proposal widely on floppy disk. Copies went to legislative staffs and to the press.

It seems that each disk was infected with the STONED 3 virus, which causes a PC to display "Your PC is STONED. Legalize marijuana."

The commentator drew the obvious ironies and puns. (No doubt our esteemed moderator will find non-obvious puns.)

-=- Andrew Klossner (andrew@frip.wv.tek.com)

[People who live in grass browses shouldn't know STONED? PGN] [andrew@frip.wv.tek.com (Andrew Klossner) via risks-digest Volume 15, Issue 20]
Will be moved to Friday, October 29, 1993 - 21:18

Norwegian hackers fined

In Baerum (a small, wealthy area just outside Oslo, the capital of Norway), two hackers were accused of stealing telephone services and of several other forms of fraud. The elder (23) got an 18-day suspended sentence and a 2000 NKR ($300) fine for having used a phony name last year to sign out a modem and assorted computer-related items from a transport company. He told the court that he was acting on behalf of another person he had got in touch with on a BBS. He was told to check a mailbox (a physical one) and pick up the transport papers there. He did so, met with the transport company, identified himself mainly by the acquired papers, and signed out the goods. He paid with a stolen Eurocard number. He left some of the acquired items in a public place, to be picked up by the other person involved, and kept some for himself. In court it also came out that he used to work at a gas station, where he wrote down all the credit-card numbers used and mailed them around the world.

The younger (16) had committed the same scam with the transport company a couple of times for a "Calvin", whom he had met on a French BBS. He was fined 2000 NKR ($300).

Neither of the boys was sentenced for telecom fraud, on technicalities. The court also found that the boys had been roaming international databases, but did not consider this a computer crime, as they had not destroyed or modified anything. (I personally would like to see a burglar getting off the hook because he did not find anything worth stealing!)

The defendants and their lawyers were very satisfied with the verdict.

Øystein Gulbrandsen, Taskon A/S [oysteing@taskon.no via risks-digest Volume 15, Issue 20]
Will be moved to Friday, October 29, 1993 - 21:17 # G!

Report on Software Product Liability

I ran across a report that may be of interest to RISKS readers. It is an SEI report: Software Product Liability (CMU/SEI-93-TR-13) by Jody Armour (School of Law, U. of Pittsburgh) and Watts S. Humphrey (SEI Fellow, Software Engineering Institute). It is available (PostScript, but without figures) via anonymous FTP from ftp.sei.cmu.edu in directory pub/documents/93.reports as file tr13.93.ps. The abstract starts with a reference to an accident involving a radiation machine [Therac 25] which, although not specifically identified, is likely to be an accident already extensively discussed in RISKS, so I have omitted it. The rest of the abstract follows:

Software defects are rarely lethal and the number of injuries and deaths is now very small. Software, however, is now the principal controlling element in many industrial and consumer products. It is so pervasive that it is found in just about every product that is labeled "electronic." Most companies are in the software business whether they know it or not. The question is whether their products could potentially cause damage and what their exposures would be if they did.

While most executives are now concerned about product liability, software introduces a new dimension. Software, particularly poor quality software, can cause products to do strange and even terrifying things. Software bugs are erroneous instructions and, when computers encounter them, they do precisely what the defects instruct. An error could cause a 0 to be read as a 1, an up control to be shut down, or, as with the radiation machine, a shield to be removed instead of inserted. A software error could mean life or death. [youman@umiacs.UMD.EDU (Charles Youman) via risks-digest Volume 15, Issue 20]
Will be moved to Sunday, October 31, 1993 - 21:17 # G!

Direct E-Mail: J.S. McBride & Co.

According to the Internet Business Report 1.3 (page 4), J.S. McBride and Company are selling access to a database of Internet addresses, including demographic information. They claim over one million entries. The net address is jim_mcbride@netmail.com, and I am sure they would enjoy hearing from anybody who would like to be removed from the list.

[Equifax revisited? PGN] [[Anonymous] via risks-digest Volume 15, Issue 21]
Will be moved to Monday, November 1, 1993 - 21:16 # G!

Clerk stole from ATMs he was told to top up ...

From the Straits Times (Singapore), dated 2 Nov 1993, page 21:

His job was to top up ATMs with cash. Instead, he filled his own wallet - with $122,000. Ahmed Ansar, a clerk with a security company, filched $250 to $19,350 on 22 different occasions between September 1992 and September 1993 from the ATMs at Changi Airport. He was discovered and apprehended in a sting operation and confessed to his other crimes.

How is it that the fraud was not detected for over 12 months?

Does it not show a surprising and damaging lacuna in the whole system?

Would a manual cashier be allowed to run short for one year?

In another incident, reported in September of this year, a man was convicted of rigging a lottery run by a bank. He rigged the lottery to reward himself and his accomplices.

It appears that Singapore is racing towards computerization without devoting much thought to the risks and security issues involved. [kishor@iti.gov.sg (Apte Kishor Hanamant) via risks-digest Volume 15, Issue 23]
Will be moved to Wednesday, November 3, 1993 - 21:16 # G!

Re: White House and STONED 3 virus

"Rush Limbaugh always uses whatever anti-Clinton story he can find, but only one recipient of the disk reported infection with the STONED 3 virus; the others had no infection, suggesting that it didn't originate at the White House."

Thanks. The report didn't quite ring true -- who boots from floppy these days? Perhaps the story is of more value in the statement it makes about uncritical social acceptance of computer RISK anecdotes.

-=- Andrew Klossner (andrew@frip.wv.tek.com) [andrew@frip.wv.tek.com (Andrew Klossner) via risks-digest Volume 15, Issue 23]
Will be moved to Tuesday, November 2, 1993 - 21:15 # G!

Prague computer crime

CZECH TRANSITION SPURS BOOM IN ECONOMIC CRIME, By Bernd Debusmann PRAGUE, Nov 3 (Reuter, 2 November 1993) - The Czech Republic's transition to a market economy has led to a boom in economic crime ranging from embezzlement to tax evasion, as criminals exploit money-making opportunities denied them under communism. According to the latest police statistics, economic crime jumped 75.2 percent in the first nine months of the year compared with the same period in 1992 -- a steeper increase than any other criminal activity. [From the Reuter newswire via the Executive News Service (GO ENS) on CompuServe]

The article goes on to state that with the growth of an economy, the opportunities for economic crime are increasing apace. Although police claim to solve 75% of the cases of fraud reported to them, there seem to be many more unreported cases. In a recent case, "Martin Janku, a 23-year-old employee of the Czech Republic's biggest savings bank, Ceska Sporitelna, is accused of transferring 35 million crowns ($1.19 million) from various corporate accounts to his personal account over an eight-month period."

In a typical hacker's excuse, Janku claims to have done this to demonstrate the bank's poor security. He wrote the software himself to be able to tamper with client accounts--but only, he said, after repeatedly warning his bosses of weak security precautions. The theft was not detected by the bank itself until Janku withdrew part of the money. He was arrested as he was in the process of stuffing half a million dollars' worth of banknotes into a briefcase.

The problems of inefficient bureaucracies are compounded by poor laws and indifferent enforcement. The problem is so widespread that about 30% of the residents of Prague own a country home--and a large percentage of those are claimed by analysts to be built through illegal economic activity.

Michel E. Kabay, Ph.D., Director of Education, National Computer Security Assn ["Mich Kabay / JINBU Corp." <75300.3232@compuserve.com> via risks-digest Volume 15, Issue 22]
Will be moved to Thursday, November 4, 1993 - 21:14 # G!

Master of Disaster Phiber Optik sentenced

Mark Abene, 21, widely known as Phiber Optik, was sentenced to a year and a day in prison. He will serve 600 hours of community service. He pleaded guilty last July to conspiracy, wire fraud and other federal charges relating to his activities as one of five Masters of Disaster indicted for breaking into telephone, educational, and commercial computer systems. [Perhaps in a few years more, they will be Doctors of Disaster?] [PGN Excerpting Service, drawn from the Associated Press and Reuters, both on 3 November 1993]

The Reuter article gives background information, including:

o the charges against MoD marked the first use of wiretaps to record both conversations and datacomm by accused hackers.

o the hackers attacked phone switching computers belonging to Southwestern Bell, New York Telephone, Pacific Bell, U.S. West and Martin Marietta Electronics Information and Missile Group.

o they broke into credit-status reporting companies including TRW, Trans Union and Information America, stealing at least 176 TRW credit reports.

o the young men were apparently competing with each other and other hacker groups for "rep" (reputation) and were also interested in harassing people they didn't like.

o the Reuter article mentions that "they wiped out almost all of the information contained on a system operated by the Public Broadcasting System affiliate in New York, WNET, that provided educational materials to schools in New York, New Jersey and Connecticut" and left the message, "Happy Thanksgiving you turkeys, from all of us at MOD."

Michel E. Kabay, Ph.D., Director of Education, National Computer Security Assn ["Mich Kabay / JINBU Corp." <75300.3232@compuserve.com> via risks-digest Volume 15, Issue 22]
Will be moved to Thursday, November 4, 1993 - 21:13 # G!

CERT (was "security incident handling") (Moran, RISKS-15.19)

I am not connected with CERT (other than knowing a number of the people involved) and can understand Mr. Moran's position. It is true that generally CERT is "input only" and, while I do not necessarily agree with their position, it is arguable.

CERT does not and cannot provide solutions; they are not funded to do so. It is also their policy not to discuss reported problems with anyone other than the developer of the product in question, and to produce advisories only when a fix for the problem is available.

Having myself tried to convince manufacturers that a problem exists, IMHO CERT plays a very necessary role in this matter, since CERT does not have to establish its credentials.

I do have an advantage over CERT in that, as a hobbyist, I can create and distribute a "fix" with no guarantees or warranties, something neither CERT nor the manufacturer can do (one of the problems with a litigation-happy society). Of course, since the "bad guys" enjoy this freedom also, it is a difficult matter. I can state categorically that it is *much* more difficult to write an anti-virus program than a virus, and much easier to hack/crack than to protect in a manner inoffensive to legitimate users, but then I am egotistical enough to accept those handicaps.

Back to the subject at hand, for example there is currently what I consider a severe problem in Novell Netware 3.x and 4.x that will not be discussed openly just yet since there is no fix. Novell has been contacted and hopefully a new "feature" will soon appear - for Novell the fix should just require a simple change to a single program (maybe 2).

My advantage is that for me this is an ethical choice and not a policy or business dictate, a freedom which neither CERT nor the vendor enjoys. I do know that many people within such organizations do not necessarily agree with such decisions but have no choice in the matter.

Thus I do feel that CERT plays a very valuable role in the process of computer security though it is not often visible as such.

Padgett [padgett@tccslr.dnet.mmc.com (A. Padgett Peterson) via risks-digest Volume 15, Issue 20]
Will be moved to Thursday, October 28, 1993 - 18:17 # G!

"security incident" handling -- comments on CERT's policy

The subject of what and how information about computer security problems should and should not be made available has been discussed here and in a variety of other forums. Typically these discussions are in abstract terms, focusing on the difficulties of balancing competing concerns. I have been encouraged to submit the description below of my interactions with CERT (Computer Emergency Response Team) because people who don't deal directly with CERT have been surprised at the extreme position that CERT takes on these matters: it is not much of an exaggeration to say that CERT's position is that they are an input-only channel. Since CERT is being used as a model for similar groups, their approach takes on even wider significance.

My background: I handle part of the management of a cluster of approx 50 Suns. Because my group collaborates with various companies and universities, I wind up getting involved in dealing with breakins at some of those other sites ("collective insecurity" :-) ). My site is well-known (we were one of the first sites on the original ARPAnet), and I am known to people at CERT -- I should have no problem getting vetted as trustworthy if CERT were to do that. The following are my personal opinions (not my employer's), based on my experiences plus those of the system managers and administrators that I interact with.

Previous Incidents
__________________

Over the years, I have dealt with a variety of breakins and attempts. When it was possible to trace the cracker forward or backward, I would send the M.O. (modus operandi = profile) of the cracker to the other sites involved -- not just the holes that he tried to exploit, but also a list of symptoms that a site should look for. After the creation of CERT, I would also send this MO to them.

I now regard the minuscule effort needed to send CERT a copy of this MO as a waste of time, because CERT refuses to provide this information to sites that have been broken into. I have personal experience with this on both sides:

1. I have contacted CERT about a particular cracker, given them a substantial profile, and asked if CERT could tell me anything more about this particular cracker. All CERT has been willing to provide is their generic documents. In backtracking the cracker, I found a site that had identified the cracker's MO and reported it to CERT.

2. Similarly, I have been the one to report an MO and later be contacted by a site that had gotten nothing from CERT, but had learned of me by backtracking the cracker to a site that I had contacted.

Current Incident
________________

A site with which mine has substantial interactions was broken into by a cracker in mid-September, and consequently I got involved in helping with the problem. We very quickly found several of his tools and enough other things to constitute a reasonable signature. We contacted CERT and they claimed to have no knowledge of this particular cracker.

The cracker was using captured passwords to daisy-chain from site to site. Unfortunately, we didn't immediately find all the holes and backdoors that he had planted. Consequently, the cracker persisted in having access to that site for some time, thereby having a chance to capture additional passwords. In the followup, we found multiple other sites that had been broken into by the same cracker (coming or going). None of these had gotten any useful information from CERT.

Two days before the recent CERT announcement that there was a hole in sendmail, I got a message from the admins for that site: "he's back". They found a backdoor that he had installed, but were unable to figure out how he had gotten in to install it. Suddenly, with the announcement of the hole, several things we had seen (and reported) seemed to fall into place.

Because this cracker had earlier probed my site from various other places on the net (we had already closed the holes he was exploiting), I was concerned that he might have used this newly found hole to compromise my site (remember, he had broken into a number of the universities and companies with which we collaborate). I called CERT and asked if they could tell me what symptoms to look for to determine whether or not this hole had been used. I was told that there were definite symptoms, but that CERT couldn't tell what they were because that would give away what the hole was. I reminded them that their advisory said "** This vulnerability is being actively exploited and we strongly recommend that sites take immediate and corrective action. **" and that we had already reported a breakin-in-progress (at the site I was helping), but to no avail. I subsequently got the information I needed from another source, but only at the cost of not being able to pass it on.

One of the many other sites that had been broken into by this same cracker posted to various relevant newsgroups a list of the sites that it had determined to have been compromised (the list was several screens long). CERT posted the following response to those newsgroups:

> Newsgroups: comp.security.unix,comp.sys.sun.admin,alt.security,
> comp.security.misc
> From: cert@cert.org (CERT Coordination Center)
> Subject: Re: Security Incident -- many sites exposed.
> Reply-To: cert@cert.org
> Organization: CERT Coordination Center
> Date: Tue, 19 Oct 1993 15:51:54 EDT
>
> CERT is aware of the incident reported earlier today and we are
> working to help resolve it. It is CERT policy not to publicly
> disclose sensitive incident information, particularly names
> of sites that are, or may have been, involved. Therefore, we will not
> post the list of affected sites here or on any other netnews group.
>
> We are reviewing the information concerning this incident and we will
> endeavor to contact all sites known to be affected within the next
> 24 hours. We would appreciate your patience and ask that you not
> contact us about the earlier posting, via either e-mail or telephone,
> so that we can concentrate our resources on contacting and helping the
> affected sites.
>
> CERT Coordination Center

From what I can determine, what CERT means by "help" is that they tell the site that they have been broken into and then provide the generic documents on security patches and practices. The sites I have talked to have never gotten information specific to a particular incident. Note also that this "response" comes more than a month after the first reports to CERT of this cracker (or a very similar one).

Comments
________

Caveat: since CERT is almost exclusively an input-only channel, it is hard to determine what they knew and when they knew it.

While I agree with the sentiment in CERT's posting above (that it is undesirable to publicly identify sites that have been broken into), I cannot disagree with the action of the site that posted the list of compromised sites -- the cracker seemed to be spreading faster than he was being found and excluded. (Note that I am not identifying the site that I went to help, nor am I free to publicly discuss details).

In my opinion, CERT's policy contributed substantially to the number of sites broken into and the persistence of this cracker on the network. First, when a system administrator contacts CERT and is told that CERT doesn't recognize the pattern of a given breakin, the SysAdmin is likely to believe that he is dealing with an isolated case, either involving a local user or just one or two other sites. The MO of this cracker left little evidence to contradict this view. Consequently, a SysAdmin could easily focus on the wrong containment measures, allowing the cracker to continue to use his site as a base to attack other sites.

Second, because CERT is unwilling to release info on the various tricks and tools that the cracker was using, a SysAdmin could easily stop short in his cleanup, after finding only some of the holes the cracker was using or had installed. This is what happened at the site I was helping. This gave the cracker time to capture passwords needed to daisy-chain to other sites. Similarly, since CERT refuses to give any advice on what holes the cracker might be using, the SysAdmin may well spend his time and efforts closing holes that aren't currently being exploited, giving the cracker time to further compromise that site and others.

CERT would seem to be a classic RISKy system -- because it doesn't behave the way people think it does/should, it causes people to take the wrong actions, especially during crises. And the classic way to deal with such a system is to teach people to ignore it.

-- Douglas B. Moran [Doug Moran via risks-digest Volume 15, Issue 18]
Will be moved to Wednesday, October 27, 1993 - 18:07 # G!

Russian Hacker Activity

According to the Associated Press last week, computer hackers nearly succeeded in stealing 68 billion rubles, or about $57 million, from Russia's Central Bank in August.

The unidentified hackers got into the bank's computer using a random combination of access codes, then tried to transfer the money into accounts at commercial banks. The attempt failed because the thieves lost too much time transferring the vast sums, and the bank detected the computer leak.

Since the beginning of the year, according to the AP, the Russian Central Bank has discovered attempted thefts and fraud totaling about 300 billion rubles, or $250 million.

This was only the latest in a string of thefts and attempted frauds at the state-run bank since the breakup of the Soviet Union, bank officials said. Bank officials told AP that, last year, thieves stole billions of rubles from the bank using false "avisos," or documents transferring money from one bank to another. [fowler@oes.ca.gov (David Fowler) via risks-digest Volume 15, Issue 18]
Will be moved to Sunday, October 24, 1993 - 18:00 # G!

Cracking feature in the small press

This week's (October 21) "Coast Weekly", a Monterey County free entertainment (mostly) paper has an article on "hacking" by staff writer Nicole Volpe. I'll quote part of an introduction from the editorial page. "While interviewing computer hackers for this issue, it occurred to me that there are a lot of similarities between reporters and cyberpunks - We share a belief in freedom of information, a general suspicion of those in power who operate secretly, and an unfortunate tendency to invade privacy.

This reporter got a taste of what it's like to be on the receiving end of privacy invasion when a hacker I was interviewing handed me a printout of personal information about me that he had retrieved, using nothing more than my home phone number. His reasons were valid enough - he wanted to be sure I was who I said I was. As a reporter I was impressed with the investigation, but on a personal level, it gave me the creeps. It was a lesson they don't teach you in J-school..."

The main article covers the exploits of some crackers in the Monterey area, their concern about the Clipper proposal, some stuff about arrests of crackers in other parts of the country, and an interview with a security man from Metromedia's long distance business. The latter says, "If you picked up the phone a year ago, dialed one digit, and then hung up, I could go back and find out what that one digit was. All the records are stored on magnetic tape." [Balance of message was apparently truncated.] [haynes@cats.ucsc.edu (Jim Haynes) via risks-digest Volume 15, Issue 19]
Will be moved to Sunday, October 24, 1993 - 18:00 # G!

Re: CERT Advisory - SunOS and Solaris vulnerabilities

> Any user with access to the system can eavesdrop on conversations held in the vicinity of the microphone.

Maybe this has been noted in RISKS before, but ISDN speakerphones are said to have a similar vulnerability.
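[A brief editorial sketch, not part of the original advisory, which does not name the specific device involved: assuming the weakness amounts to a world-readable audio device file, an administrator could check for the exposure roughly as follows. The path /dev/audio is an assumption; actual device names vary by system.]

    import os
    import stat

    # Check whether the audio device is readable by all users, which (under the
    # assumption above) would let any local user capture sound from the microphone.
    DEVICE = "/dev/audio"  # assumed path; varies by system

    try:
        mode = os.stat(DEVICE).st_mode
    except FileNotFoundError:
        print(DEVICE + " not present on this system")
    else:
        if mode & stat.S_IROTH:
            print(DEVICE + " is world-readable: any local user could record audio")
        else:
            print(DEVICE + " is not world-readable")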

Bruce R. Lewis, Analyst Programmer, MIT Information Systems, Distributed Computing & Network Services [brlewis@MIT.EDU via risks-digest Volume 15, Issue 18]
Will be moved to Friday, October 22, 1993 - 17:58 # G!

Re: Swiss AntiViral legislation

Colleagues and friends, thanks for the very helpful and positively critical comments. I append Mr. Frigerio's reply for your information. Klaus (Oct.21,1993)

PS: Mr. Frigerio will have another fight with lawyers who think that any legislation is dangerous as it may also hurt the "good viruses". I argued that "good viruses" exist only in Dr. Cohen's head, as those applications which he always mentions can be realized by non-replicative methods. Moreover, any automatic reproduction has an unwanted side-effect: copyright for any software applies only to the original (=uninfected) program, so viruses also "steal" legal rights from both the originator and the user (who loses the guarantee, if any, of a working program :-)

>>>>>>>>>>>>>>>>>>>>>>> Mr. Frigerio's response <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Thanks to everybody who replied on the subject of Swiss Anti-Virus Legislation.

As somebody noticed there was a word missing in the English translation. It should have been: "... destructs electronically or similarly saved or TRANSMITTED data will..."

The text posted to the net was an attempt to include in "data damaging" even the creation and dealing in or circulating of computer viruses. The idea behind this is that the virus itself already carries the malicious intent of its author. Therefore it is dangerous in any circumstance. Actually a virus cannot be abused, as the idea of abuse includes the possibility that a virus could be used in a good way too. As I have been told by specialists, there is no such "good use" of a virus, as any unauthorized change of data has the potential of interfering with other data and/or programs in environments that the virus author did not or could not foresee. And even the unauthorized use of storage space is a damage, as this space will not be available for authorized uses of the computer system. Computer viruses are an "absolute danger", and as with any other dangerous thing (like explosives, poison, radioactive materials or genetic materials in specialized labs), computer viruses should not be created or circulated without restrictions.

It has been remarked that in the text there was no word about the requisite intent or requisite knowledge of the perpetrator. That way, any BBS sysop would always risk criminal charges if his BBS carries virus-infected software the sysop isn't aware of.

I apologize for not having explained that Swiss Penal Law punishes only intentional crimes unless there is an explicit indication that negligent acts are punished too. Therefore, according to Swiss Penal Law terminology and system, the text posted to the net covers only someone who commits the act "knowingly and willingly". That means the author of the virus has to know that what he created was a virus: this is always the case. And whoever circulates the virus has to know it was a virus and intend to circulate it. The knowledge that SW was or carried a virus can be proved easily by the fact that nobody knowingly stores viruses without labeling or marking them in some way, in order not to be infected himself (yes, I know: if there really is somebody so foolish, I will have to find another way to prove his knowledge). For a BBS, a "Virus Directory" containing viruses or virus source code is evidence enough of the "requisite knowledge and intent". The law does not want to punish accidental distribution of viruses.

The phrase "means destined for unauthorized deletion" has been considered unclear. "Means" certainly includes not only software, but source code (on paper as on disks) too. It has been remarked that it's the classical toolmaker problem: a knife can be used as woodcarver to make a great work, but it might be used by a thug to commit murder. I realized this problem, but would you consider a knife as generally destined to commit murder? Or would you consider explosive as generally destined to create damage? We have to be aware that most items can be used in a legal or abused in an illegal way. Seldom an item can only be used in an illegal way, but computer viruses are such items! I do not speak about software using virus specific reproduction techniques (like "killer viruses" for copyright enforcement or "anti-viruses" supposed to fight viruses) that make data changes with the explicit (contract/license) or implicit (highly probable agreement of the user) authorization of the user. This kind of SW is actually not included in the definition of "means destined for unauthorized deletion, modification, or destruction of data". Therefore you cannot say that Norton Utilities, WipeFile or any other similar general purpose SW or utilities are "destined for unautorized deletion, modification or destruction", although they certainly could be used for this.

The text doesn't say anything about malice, malicious intent or the intent to damage, as these elements are very difficult to prove at trial if the accused denies any such intention. Actually I considered these subjective elements as not really necessary, as the virus already carries the malicious intent of its author: the malice of the author is proved by his virus, and the malice of somebody circulating the virus is proved once his knowledge that he was circulating a virus is proved.

According to general principles of penal law, the site of the crime is the main link for charging somebody. If a virus has been created or circulated outside the national borders of Switzerland, Swiss Penal Law cannot be applied. But if a virus created outside Switzerland is transferred electronically to Switzerland, the downloader will be held responsible, no matter whether he was in Switzerland or abroad, since "importing" is a way of circulating the virus: the "success" of the act takes place in Switzerland. Anyway, Art. 7 of Swiss Penal Law follows the principle of territoriality and the "Ubiquitaetsprinzip" (sorry, I didn't find the correct English word: an act is considered to have been committed not only where the perpetrator was when he started his crime, but also where its "success" has been realized). Anyway, I am considering clarifying this by stating explicitly that "importing" a virus counts as "circulating in any way".

As this crime is prosecuted as soon as the police or prosecution authority knows about it (so-called "ex officio"), there is no need for a specific complaint: detailed information about a fact is enough to start investigations, no matter where the information came from (e.g., abroad).

There is no doubt that professional anti-virus specialists and scientists should have access to viruses and be allowed even to create viruses. As long as this is covered by the aim of studying strategies to fight computer viruses, this is OK. I actually planned a system of registering these people with a federal authority (e.g. the IS Security Department at the Swiss Federal Office of Information Technology and Systems, or the Ministry of Justice). The posted text would then need to be completed as follows: "Who, without being registered with the proper federal authority, creates... Only trustworthy individuals, who are professionally or scientifically active in combatting such means, may be registered on demand."

The Swiss legislator is actually considering not only "data damaging" but "hacking", "time theft" and computer fraud too, but these ARE NOT subjects of the discussion in this forum now. The same applies to software piracy, which is already covered by another law. I will gladly email/fax the German, French or Italian text of the Penal Law draft to anybody interested. Please do not ask me for an English translation, as I am not a professional translator of legal texts.

I am aware that the UK and Italy have, or are about to have, laws allowing prosecution for the creation and circulation of computer viruses. If anybody knows of other countries, please let me know in any way and as soon as possible.

On Monday, 25 October 1993, there will be a meeting with the Ministry of Justice in order to convince them to propose this to the Parliament. This will be very, very difficult, as there is generally very little knowledge of, or concern for, the threat posed by computer viruses. Most people have simply never suffered a computer-virus attack.

Thanks again for following this item with your comments.

Claudio G. Frigerio

P.S.: Please do not suggest that I send them a floppy with a ..... just to make them more aware of the risks... P.P.S.: You can phone/email/fax/write to me in Italian, German, French, Spanish or English.

Claudio G. Frigerio, Bundesamt fuer Informatik/Stabsdienste, Feldeggweg 1, CH-3003 Bern (Switzerland) +41/31/325-9381 bfi@ezinfo.vmsmail.ethz.ch [Klaus Brunnstein via risks-digest Volume 15, Issue 19]
Will be moved to Thursday, October 21, 1993 - 17:55 # G!

Corrigenda: RISKs of trusting e-mail

It takes a tough hide to be a reporter. My note in RISKS-15.06 on the RISKs of trusting e-mail generated a modest flurry of responses pointing out some errors and asking for some clarifications. Since all who sent me notes could just as well have sent them directly to RISKs, I am assuming that even though they want parts of the record set straight they don't want to do so publicly. Although I know my "sources" on the scene were convinced of the accuracy of what they told me, by the time the information passed into their hands it seems that some of it was slightly garbled, although not badly enough to weaken the essential point of the whole incident.

(Not to detract from the seriousness of the situation, I do have to note that none of the email pointing the following out was digitally signed or authenticated.)

1. The secretaries of the principal figures involved in the resignation message did not take the *contents* of the message seriously. However, they took its existence seriously, believing it indicated there had been a serious compromise of the security of their office information systems. The incident itself has "undermined the confidence" of the clients of the University's computer systems. (This is new information which I think makes the incident actually of more interest than the original version.)

2. The FBI was not called in and the students (three, not five) were not expelled, but reprimanded and (temporarily, according to another source) denied their e-mail privileges. I suspect here my sources were telling me actions that were being contemplated but upon which a final decision had not yet been made.

3. It was not really fair to mention the name of the mail client the students used, since that is irrelevant and not the source of the problem: it is the SMTP protocol and the inherent insecurity of the internet that give the opportunity. One doesn't even need to have an e-mail program to forge an e-mail message: telnet works just fine.

4. "PEM" stands for "Privacy Enhanced Mail." See internet RFC's 1421, 1422, 1423 and 1424; implementations for a variety of platforms are available. (temptation to insert commercial here resisted.) PEM provides digital signatures, authentication, and encryption.

5. "6,000" of course is not the size of the student population at the U of W, but some could have read my note that way. The number of students, all of whom are eligible for an e-mail account, is about 41,000. "6,000" (the number now is actually closer to 7,000) is the number who have signed up for it so far.

Ted Lee, Trusted Information Systems, Inc., PO Box 1718, Minnetonka, MN 55345 612-934-5424 tmplee@tis.com [tmplee@tis.com (Theodore M.P. Lee) via risks-digest Volume 15, Issue 13]
Will be moved to Tuesday, October 12, 1993 - 17:35 # G!

RISKs of trusting e-mail

Until such time as either the general population learns what to expect or digital authentication (such as PEM) becomes widespread, I suspect we will hear more of this kind of incident. This academic year the University of Wisconsin started providing e-mail accounts to all students at its Madison campus (6,000, maybe?). The students, both technical and non-technical, are being encouraged to use e-mail as a way of interacting with their instructors. They access the accounts either through University-supplied machines scattered throughout the campus or through dial-up Serial Line Internet Protocol (SLIP) connections. A mix of Macintoshes, PCs, and other assorted workstations is involved.

Last week (note how early in the school year) a group of five students, several from the Honors floor of one of the freshman dorms, were caught having forged several pieces of e-mail. The most potentially damaging was a note purporting to be from the Director of Housing to the Chancellor of the University, David Ward; note that the previous Chancellor is now Pres. Clinton's Secretary of HHS, so the present Chancellor is new to the job. The forged message was a submission of resignation. Ward's secretary had just returned from vacation and apparently assumed the proffered resignation was legitimate. The secretary accepted it and started to act upon it -- only in the course of doing so was it discovered to be a fake.

The students also sent messages purporting to be from the Chancellor to other students asking them to pay their tuition. They also forged a message from the Chancellor (my information doesn't say who it went to) saying he was going to "come out of the closet" and announce it Sept. 25.

The students were caught only through a combination of circumstances. First, since they used one of the dial-in connections, there were logs of who dialed in when. Second, during the course of their experiments they botched some addresses, which caused enough traffic to go to the dead-letter office that the investigation could narrow down what was happening. (It should be pointed out that the forgery was fairly easy to accomplish using the Eudora mail client on a Macintosh: the user has complete choice over the "from:" field of a message.)
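[An editorial sketch, not taken from the original report: the forgery works because classic SMTP records whatever sender the client claims, so any mail program -- or a raw telnet session to port 25 -- can supply an arbitrary "From:" line. The addresses and host below are placeholders, and the sketch assumes a mail server that accepts unauthenticated submissions on port 25, as was typical in 1993; modern servers generally require authentication and check SPF/DKIM.]

    import smtplib
    from email.message import EmailMessage

    # Nothing in classic SMTP verifies the From: header; the receiving server
    # simply records whatever the client supplies.
    msg = EmailMessage()
    msg["From"] = "someone.official@example.edu"   # arbitrary, unverified sender
    msg["To"] = "recipient@example.edu"
    msg["Subject"] = "Demonstration only"
    msg.set_content("The From: header above was chosen freely by the sending client.")

    # Assumes an MTA on localhost:25 that relays mail without authentication.
    with smtplib.SMTP("localhost", 25) as server:
        server.send_message(msg)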

The FBI is investigating whether any federal crime was involved and, needless to say, the students are likely to be expelled at the least.

Ted Lee, Trusted Information Systems, Inc., PO Box 1718, Minnetonka, MN 55345 612-934-5424 tmplee@tis.com [tmplee@tis.com (Theodore M.P. Lee) via risks-digest Volume 15, Issue 06]
Will be moved to Friday, October 1, 1993 - 17:35 # G!

E-mail for denial of services and corruption

I just did an experiment sending massive quantities of e-mail to a typical Unix box, and of course, I was able to overrun the disk capacity on the recipient machine, thus making the system grind to a crunching halt for lack of space. Since I sent it to daemon, nobody noticed the mail for quite some time, and it took a bit before they figured out the problem and were able to fix it.

I don't know for sure, but I think a lot of systems are susceptible to this attack, and there is no easy solution, at least if you still want to get mail.

To assess the degree to which this might be a threat, I got a listing of DoD and US Government sites from the Chaos Computer Club (thank you Charles) and tried sending mail to them - only 1 out of the 67 tried refused the mail. Several told me there was no such mail recipient, but gave me a directory of other recipients with similar names - how helpful. A few told me they didn't have such a user and identified that they were a particular type of system - now I know for certain what UID to send to.

Under some versions of Unix, you can put quotas on users, but not on e-mail space - as far as I know. The ULIMIT prevents unbounded growth, but it is now set high enough by default on most systems that it won't stop this attack. You can explicitly refuse mail on some systems, but I don't think there is a general way to do this selectively enough to defend against this attack. The default is almost always to accept all the mail that comes to you. Your suggestions are welcomed - FC [Fredrick B. Cohen via risks-digest Volume 15, Issue 06]
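[An editorial sketch, not from the original posting: one crude mitigation is to watch free space on the partition holding the mail spool and alert an operator before a flood fills the disk. The spool path and threshold below are assumptions; adjust them for the system at hand.]

    import shutil
    import sys

    SPOOL = "/var/spool/mail"   # assumed spool location; varies by system
    MIN_FREE = 0.10             # alert when less than 10% of the partition is free

    usage = shutil.disk_usage(SPOOL)
    free_fraction = usage.free / usage.total

    if free_fraction < MIN_FREE:
        # In practice this would page an operator or trigger mail-refusal measures.
        print("WARNING: only %.1f%% free on %s" % (free_fraction * 100, SPOOL),
              file=sys.stderr)
        sys.exit(1)

    print("OK: %.1f%% free on %s" % (free_fraction * 100, SPOOL))

Run from cron every few minutes, such a check would at least have surfaced the daemon-mailbox flood described above before the disk filled completely.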
Will be moved to Thursday, September 30, 1993 - 17:34 # G!

Wiretap Laws and Procedures

The following article on wiretap laws and procedures was written in response to the many questions and misunderstandings that have arisen about wiretaps in the context of escrowed encryption as well as Digital Telephony. This article may be distributed. Dorothy Denning denning@cs.georgetown.edu

WIRETAP LAWS AND PROCEDURES
WHAT HAPPENS WHEN THE U.S. GOVERNMENT TAPS A LINE

Donald P. Delaney, Senior Investigator New York State Police

Dorothy E. Denning, Professor and Chair Computer Science Department, Georgetown University

John Kaye, County Prosecutor Monmouth County, New Jersey

Alan R. McDonald, Special Assistant to the Assistant Director Technical Services Division, Federal Bureau of Investigation

September 23, 1993

1. Introduction

Although wiretaps are generally illegal in the United States, the federal government and the governments of thirty-seven states have been authorized through federal and state legislation to intercept wire and electronic communications under certain stringent rules, which include obtaining a court order. These rules have been designed to ensure the protection of individual privacy and Fourth Amendment rights, while permitting the use of wiretaps for investigations of serious criminal activity and for foreign intelligence.

This article describes the legal requirements for government interceptions of wire and electronic communications and some of the additional procedures and practices followed by federal and state agencies. The legal requirements are rooted in two pieces of federal legislation: the Omnibus Crime Control and Safe Streets Act (Title III of the Act (hereafter "Title III")), passed in 1968, and the Foreign Intelligence Surveillance Act (FISA), passed in 1978. Title III established the basic law for federal and state law enforcement interceptions performed for the purpose of criminal investigations, while FISA established the law for federal-level interceptions performed for intelligence and counterintelligence operations. We will first describe Title III interceptions and then describe FISA interceptions.

2. Title III Interceptions

Title III, as amended (particularly by the Electronic Communications Privacy Act of 1986), is codified at Title 18 USC, Sections 2510-2521. These statutes provide privacy protection for and govern the interception of oral, wire, and electronic communications. Title III covers all telephone communications regardless of the medium, except that it does not cover the radio portion of a cordless telephone communication that is transmitted between the handset and base unit. The law authorizes the interception of oral, wire, and electronic communications by investigative and law enforcement officers conducting criminal investigations pertaining to serious criminal offenses, i.e., felonies, following the issuance of a court order by a judge. The Title III law authorizes the interception of particular criminal communications related to particular criminal offenses. In short, it authorizes the acquisition of evidence of crime. It does not authorize noncriminal intelligence gathering, nor does it authorize interceptions related to social or political views.

Thirty-seven states have statutes permitting interceptions by state and local law enforcement officers for certain types of criminal investigations. All of the state statutes are based upon, and derivative of, Title III. These statutes must be at least as restrictive as Title III, and in fact most are more restrictive in their requirements. In describing the legal requirements, we will focus on those of Title III, since they define the baseline for all wiretaps performed by federal, state, and local law enforcement agencies.

In recent years, state statutes have been modified to keep pace with rapid technological advances in telecommunications. For example, New Jersey amended its electronic surveillance statute in 1993 to include cellular telephones, cordless telephones, digital display beepers, fax transmissions, computer-to-computer communications, and traces obtained through "caller-ID".

Wiretaps are limited to the crimes specified in Title III and state statutes. In New Jersey, the list includes murder, kidnapping, gambling, robbery, bribery, aggravated assault, wrongful credit practices, terrorist threats, arson, burglary, felony thefts, escape, forgery, narcotics trafficking, firearms trafficking, racketeering, and organized crime.

Most wiretaps are large undertakings, requiring a substantial use of resources. In 1992, the average cost of installing intercept devices and monitoring communications was $46,492. Despite budget constraints and personnel shortages, law enforcement conducts wiretaps as necessary, but obviously, because of staffing and costs, judiciously.

2.1 Application for a Court Order

All government wiretaps require a court order based upon a detailed showing of probable cause. To obtain a court order, a three-step process is involved. First, the law enforcement officer responsible for the investigation must draw up a detailed affidavit showing that there is probable cause to believe that the target telephone is being used to facilitate a specific, serious, indictable crime.

Second, an attorney for the federal, state, or local government must work with the law enforcement officer to prepare an application for a court order, based upon the officer's affidavit. At the federal level, the application must be approved by the Attorney General, Deputy Attorney General, Associate Attorney General, any Assistant Attorney General, any acting Assistant Attorney General, or any Deputy Assistant Attorney General in the Criminal Division designated by the Attorney General. At the state and local level, the application must be made and approved by the principal prosecuting attorney of the state (State Attorney General) or political subdivision thereof (District Attorney or County Prosecutor). The attorney must be authorized by a statute of that state to make such applications.

Third, the attorney must present the approved application ex parte (without an adversary hearing) to a federal or state judge who is authorized to issue a court order for electronic surveillance. A state or local police officer or federal law enforcement agent cannot make an application for a court order directly to a judge.

Typically, a court order is requested after a lengthy investigation and the use of a "Dialed Number Recorder" (DNR). The DNR is used to track the outgoing calls from the suspect's phone in order to demonstrate that the suspect is communicating with known criminals.

Title III requires that an application for a court order specify:

(a) the investigative or law enforcement officer making the application and the high-level government attorney authorizing the application;

(b) the facts and circumstances of the case justifying the application, including details of the particular offense under investigation, the identity of the person committing it, the type of communications sought, and the nature and location of the communication facilities;

(c) whether or not other investigative procedures have been tried and failed, or why they would likely fail or be too dangerous;

(d) the period of time for the interception (at most 30 days - extensions may be permitted upon reapplication);

(e) the facts concerning all previous applications involving any of the same persons or facilities;

(f) where the application is for the extension of an order, the results thus far obtained from the interception.

The process of making an application for a court order is further restricted by internal procedures adopted by law enforcement agencies to ensure that wiretaps conform to the laws and are used only when justified. The following describes the process for the FBI and the New York State Police.

2.1.1 FBI Applications

In order for an FBI agent to conduct an interception, the agent must follow procedures that go well beyond the legal requirements imposed by Title III and which involve extensive internal review. In preparing the affidavit, the FBI agent in the field works with the field office principal legal advisor and also with an attorney in the local U.S. Attorney's Office, revising the documentation to take into account their comments and suggestions. After the documents are approved by field office management, they are submitted to the Department of Justice's Office of Enforcement Operations (OEO) in the Criminal Division and to the FBI Headquarters (HQ). At FBI HQ, the documents go to the Legal Counsel Division (LCD) and the Criminal Investigative Division (CID). Within the CID, they are sent to the program manager of the criminal program unit relating to the type of violation under investigation, e.g., organized crime. The program manager determines whether the subjects of the proposed interception are worthy targets of investigation and whether the interception is worth doing. Attorneys in the FBI's LCD and the DOJ's OEO further refine the documents.

After the documents are approved by the DOJ's OEO and by FBI HQ, they are referred to the Deputy Assistant Attorney General (or above), who reviews the documents and signs off on them. At this point, the DOJ authorizes the local U.S. Attorney's Office to file the final version of the documents (application, affidavit, court order, and service provider order) in court. The U.S. Attorney's Office then submits the documents and the DOJ authorization to a federal judge. The entire process can take as long as a month.

The following summarizes the people and organizations involved in the preparation or approval of the application and the issuance of a court order:

1. FBI agent
2. FBI field office attorney (principal legal advisor)
3. FBI field office management
4. Attorney in local U.S. Attorney's office
5. DOJ Office of Enforcement Operations (OEO)
6. FBI HQ Legal Counsel Division (LCD)
7. FBI HQ Criminal Investigative Division (CID)
8. DOJ Deputy Assistant Attorney General (or higher)
9. Federal District Court judge

2.1.2 New York State Police Applications

Within the New York State Police, electronic surveillance is conducted by Senior Investigators in the Bureau of Criminal Investigation (BCI). In preparing an affidavit, the investigator works with the District Attorney's Office (or, in the case of a federal investigation, the U.S. Attorney's office) and with the BCI Captain of the investigator's troop. (Wiretap applications can be made and approved by the State Attorney General, but this is unusual.) The Captain assesses whether review by Division Headquarters is necessary and confers with the Assistant Deputy Superintendent (ADS) or Headquarters Captain for final determination. If Headquarters review is deemed necessary, then all documentation is sent to the ADS along with a memorandum, endorsed by the Troop Unit Supervisor and the Troop or Detail Commander, requesting approval. If Headquarters review is deemed unnecessary, then the memo is sent without the documentation. Once the ADS and District Attorney (DA) approve the application, the DA submits the application to a judge who grants or denies the court order.

2.2 Issuance of a Court Order

Not all judges have the authority to grant court orders for wiretaps. In New Jersey, for example, only eight judges are designated as "wiretap judges" for the entire state. These judges are given special training to be sensitive to personal rights of privacy and to recognize the importance of telephone intercepts for law enforcement.

Before a judge can approve an application for electronic surveillance and issue a court order, the judge must determine that:

(a) there is probable cause for belief that an individual is committing, has committed, or is about to commit an offense covered by the law;

(b) there is probable cause for belief that particular communications concerning that offense will be obtained through such interception;

(c) normal investigative procedures have been tried and have failed or reasonably appear unlikely to succeed or to be too dangerous;

(d) there is probable cause for belief that the facilities from which, or the place where, the communications are to be intercepted are being used, or are about to be used, in connection with the commission of such offense, or are leased to, listed in the name of, or commonly used by such person.

In addition to showing probable cause, one of the main criteria for determining whether a court order should be issued is whether normal investigative techniques have been or are likely to be unsuccessful (criterion (c) above). Electronic surveillance is a tool of last resort and cannot be used if other methods of investigation could reasonably be used instead. Such normal investigative methods usually include visual surveillance, interviewing subjects, the use of informers, telephone record analysis, and DNRs. However, these techniques often have limited impact on an investigation. Continuous surveillance by police can create suspicion and therefore be hazardous; further, it cannot disclose the contents of telephone conversations. Questioning identified suspects or executing search warrants at their residence can substantially jeopardize an investigation before the full scope of the operation is revealed, and information can be lost through interpretation. Informants are useful and sought out by police, but the information they provide does not always reveal all of the players or the extent of an operation, and great care must be taken to ensure that the informants are protected. Moreover, because informants are often criminals themselves, they may not be believed in court. Telephone record analysis and DNRs are helpful, but do not reveal the contents of conversations or the identities of parties. Other methods of investigation that may be tried include undercover operations and stings. But while effective in some cases, undercover operations are difficult and dangerous, and stings do not always work.

If the judge approves the application, then a court order is issued specifying the relevant information given in the application, namely, the identity of the person (if known) whose communications are to be intercepted, the nature and location of the communication facilities, the type of communication to be intercepted and the offense to which it relates, the agency authorized to perform the interception and the person authorizing the application, and the period of time during which such interception is authorized. A court order may also require that interim status reports be made to the issuing judge while the wiretap is in progress.

2.3 Emergencies

In an emergency situation where there is immediate danger of death or serious physical injury to any person, or conspiratorial activities threatening national security or characteristic of organized crime, Title III permits any investigative or law enforcement officer specially designated by the Attorney General, the Deputy Attorney General, or the Associate Attorney General, or by the principal prosecuting attorney of any state or subdivision thereof, to intercept communications, provided an application for a court order is made within 48 hours. In the event a court order is not issued, the contents of any intercepted communication are treated as having been obtained in violation of Title III.

In New York State, even an emergency situation requires a court order from a judge. However, the judge may grant a temporary court order based on an oral application from the District Attorney. The oral communication must be recorded and transcribed, and must be followed by a written application within 24 hours. The duration of a temporary warrant cannot exceed 24 hours and cannot be renewed except through a written application.

2.4 Execution of a Court Order

2.4.1 Installation of a Wiretap

To execute a court order for a wiretap, the investigative or law enforcement officer takes the court order or emergency provision to the communications service provider. Normally, the service provider is the local exchange carrier. When served with a court order, the service provider (or landlord, custodian, or other person named) is mandated under Title III to assist in the execution of the interception by providing all necessary information, facilities, and technical assistance. The service provider is compensated for reasonable expenses incurred. In light of rapid technological developments including cellular telephones and integrated computer networks, the New Jersey statute also requires the service provider to give technical assistance and equipment to fulfill the court order. This requirement has not yet been tested in court.

Normally, the government leases a line from the service provider and the intercepted communications are transmitted to a remote government monitoring facility over that line. In many cases, the bridging connection is made within the service provider's central office facility. Alternatively, a law enforcement agency may request the service provider to give the "pairs and appearances" (a place to connect to the suspect's line) in the "local loop" for the suspect's phone. A law enforcement technician then makes the connection.

When a suspect's telephone is subject to change (e.g., because the person is attempting to evade or thwart interception), then a "roving" wiretap, which suspends the specification of the telephone, may be used. In this case, prior to intercepting communications, the officer must use some other method of surveillance in order to determine the exact location and/or telephone number of the facility being used. Once determined, the location or telephone number is given to the service provider for coordination and prompt assistance. The officer may not intercept communications randomly in order to track a person (random or mass surveillance is not permitted under any circumstances).

2.4.2 Minimization

Once any electronic surveillance begins, the law enforcement officer must "minimize" -- that is, attempt to limit the interception of communications to the specified offenses in the court order. Prior to the surveillance, a federal or state attorney holds a "minimization meeting" with the investigators who will be participating in the case to ensure that the rules are followed.

Minimization is normally accomplished by turning off the intercept and then performing a spot check every few minutes to determine if the conversation has turned to the subject of the court order. This avoids picking up family gossip. Special problems may arise where criminals communicate in codes that are designed to conceal criminal activity in what sounds like mundane household discussion. If an intercepted communication is in a code or foreign language, and if someone is not reasonably available to interpret the code or foreign language, then the conversation can be recorded and minimization deferred until an expert in that code or language is available to interpret the communication. Should a wiretap fail to meet the minimization parameters, all of the evidence obtained from the wiretap could be inadmissible.

2.4.3 Recording

All intercepted communications are to be recorded when possible. As a practical matter, law enforcement officers make working copies of the original tapes. In many instances at the state and local level, the originals are delivered to the prosecutor's office and maintained in the prosecutor's custody. The copies are screened by the case officer for pertinent conversations (e.g., "I'll deliver the dope at 8:00 pm."). A compilation of the relevant conversations, together with corroborating surveillance, often provides the probable cause for search warrants and/or arrest warrants.

2.4.4 Termination of Electronic Surveillance

Electronic surveillance must terminate upon attainment of the objectives, or in any event within 30 days. To continue an interception beyond 30 days, the officer, through a government attorney, must apply for and be granted an extension based upon a new application and court order.

When the period of a court order, or extension thereof, expires, the original tapes must be made available to the issuing judge and sealed under court supervision. The tapes must be maintained in such fashion for 10 years.

2.5 Notification and Use of Intercepted Communications as Evidence

Upon termination of an interception, the judge who issued the court order must notify the persons named in the order that the interception took place. Normally, this must be done within 90 days, but it may be postponed upon showing of good cause. If the judge determines that it would be in the interest of justice to make portions of the intercepted communications available to the subjects, the judge may do so.

The contents of the communications may not be used as evidence in any trial or hearing unless each party has received a copy of the application and court order at least 10 days in advance of the trial, and has been given the opportunity to move to suppress the evidence. A motion to suppress the evidence may be made on the grounds that it was not obtained in complete conformance with the laws.

2.6 Reports

Within 30 days after the expiration or denial of a court order, Title III requires that the judge provide information about the order to the Administrative Office of the United States Courts (AO). Each year the Attorney General (or a designated Assistant Attorney General) must report, on behalf of the federal government, to the AO a summary of all orders and interceptions for the year; reports for state and local jurisdictions are made by the principal prosecuting attorney of the jurisdiction. The AO then integrates these summaries into an annual report: "Report on Applications for Orders Authorizing or Approving the Interception of Wire, Oral, or Electronic Communications (Wiretap Report)" covering all federal and state electronic surveillance, including wiretaps. The 1992 report is about 200 pages and includes information about each interception authorized in 1992, update information for interceptions authorized in 1982-1991, and summary statistics. The summary statistics include the following data (numbers in parenthesis are the 1992 figures):

(1) number of interceptions authorized (919), denied (0), and installed (846)

(2) average duration (in days) of original authorization (28) and extensions (30)

(3) the place/facility where authorized (303 single family dwelling, 135 apartment, 3 multi-dwelling, 119 business, 4 roving, 66 combination, 289 other)

(4) major offenses involved (634 narcotics, 90 racketeering, 66 gambling, 35 homicide/ assault, 16 larceny/theft, 9 kidnapping, 8 bribery, 7 loansharking/usury/extortion, 54 other)

(5) average number of (a) persons intercepted (117), (b) interceptions (1,861), and (c) incriminating intercepts (347) per order where interception devices were installed

(6) average cost of interception ($46,492)

(7) type of surveillance used for the 846 interceptions installed (632 telephone, 38 microphone, 113 electronic, 63 combination)

(8) number of persons arrested (2,685) and convicted (607) as the result of 1992 intercepts

(9) activity taking place during 1992 as the result of intercepts terminated in years 1982-1991, including number of arrests (1211), trials (280), motions to suppress that are granted (14), denied (141), and pending (37), and convictions (1450) (there is a lag between interceptions, arrests, and convictions, with many arrests and most convictions associated with a wiretap that terminated in one year taking place in subsequent years)

Most of the above data is broken down by jurisdiction. Of the 919 authorized intercepts, 340 (37%) were federal. New York State had 197, New Jersey 111, Florida 80, and Pennsylvania 77. The remaining 114 intercepts were divided among 18 states, none of which had more than 17 intercepts. During the past decade, the average number of authorized intercepts per year has been about 780.

Individual law enforcement agencies also require internal reports. For example, the New York State Police requires that each week, the Troop or Detail Captain prepare a report summarizing the status of all eavesdropping activity within the unit, including the productivity and plans for each electronic surveillance installation and a brief synopsis of pertinent activity. This is sent to the New York State Police Division Headquarters Captain, who prepares a report summarizing the status of all eavesdropping installations.

One of the reasons for the significant amount of post-wiretap reporting is to provide a substantial record for legislatures when considering whether or not to reenact or modify wiretap statutes.

3. FISA Interceptions

Title 50 USC, Sections 1801-1811, the Foreign Intelligence Surveillance Act (FISA) of 1978, covers electronic surveillance for foreign intelligence purposes (including counterintelligence and counterterrorism). It governs wire and electronic communications sent by or intended to be received by United States persons (citizens, aliens lawfully admitted for permanent residence, corporations, and associations of U.S. persons) who are in the U.S. when there is a reasonable expectation of privacy and a warrant would be required for law enforcement purposes; nonconsensual wire intercepts that are implemented within the U.S.; and radio intercepts when the sender and all receivers are in the U.S. and a warrant would be required for law enforcement purposes. It does not cover intercepts of U.S. persons who are overseas (unless the communications are with a U.S. person who is inside the U.S.). Electronic surveillance conducted under FISA is classified.

FISA authorizes electronic surveillance of foreign powers and agents of foreign powers for foreign intelligence purposes. Normally, a court order is required to implement a wiretap under FISA. There are, however, two exceptions. The first is when the communications are exclusively between or among foreign powers or involve technical intelligence other than spoken communications from a location under the open and exclusive control of a foreign power; there is no substantial risk that the surveillance will acquire the communications to or from a U.S. person; and proposed minimization procedures meet the requirements set forth by the law. Under those conditions, authorization can be granted by the President through the Attorney General for a period of up to one year. The second is following a declaration of war by Congress. Then the President, through the Attorney General, can authorize electronic surveillance for foreign intelligence purposes without a court order for up to 15 days.

Orders for wiretaps are granted by a special court established by FISA. The court consists of seven district court judges appointed by the Chief Justice of the United States. Judges serve seven-year terms.

3.1 Application for a Court Order

Applications for a court order are made by Federal officers and require approval by the Attorney General. Each application must include:

(1) the Federal officer making the application;

(2) the Attorney General's approval;

(3) the target of the electronic surveillance;

(4) justification that the target is a foreign power or agent of a foreign power (except that no U.S. person can be considered a foreign power or agent thereof solely on the basis of activities protected by the First Amendment) and that the facilities or places where the surveillance is to be directed will be used by the same;

(5) the proposed minimization procedures, which must meet certain requirements to protect the privacy of U.S. persons;

(6) the nature of the information sought and type of communications subjected to surveillance;

(7) certification(s) by the Assistant to the President for National Security Affairs or other high-level official in the area of national security or defense (Presidential appointee subject to Senate confirmation) that the information sought is foreign intelligence information and that such information cannot reasonably be obtained by normal investigative methods;

(8) the means by which the surveillance will be effected;

(9) the facts concerning all previous applications involving the same persons, facilities, or places;

(10) the period of time for the interception (maximum 90 days or, when the target is a foreign power, one year);

(11) coverage of all surveillance devices to be employed and the minimization procedures applying to each.

Some of the above information can be omitted when the target is a foreign power.

Within the FBI, the process of applying for a court order under FISA is as exacting and subject to review as under Title III. The main differences are that under FISA, the FBI Intelligence Division is involved rather than the Criminal Investigative Division, the DOJ Office of Intelligence Policy and Review (OIPR) is involved rather than either the U.S. Attorney's Office or the DOJ Criminal Division, and the application is approved by the Attorney General (or Acting Attorney General) rather than by a lower DOJ official.

3.2 Issuance of a Court Order

Before a judge can approve an application, the judge must determine that the authorizations are valid; that there is probable cause to believe that the target of the electronic surveillance is a foreign power or agent of a foreign power and that the facilities or places where the surveillance is to be directed will be used by the same; and that the proposed minimization procedures meet the requirements set forth in the law. If the judge approves the application, an order is issued specifying the relevant information from the application and directing the communication carrier, landlord, custodian, or other specified person to furnish all necessary information, facilities, and technical assistance and to properly maintain under security procedures any records relating to the surveillance.

3.3 Emergencies

In an emergency situation, the Attorney General or designee can authorize the use of electronic surveillance provided the judge is notified at the time and an application is made to the judge within 24 hours. If a court order is not subsequently obtained, then the judge notifies any U.S. persons named in the application or subject to the surveillance, though such notification can be postponed or forgone upon a showing of good cause.

3.4 Use of Intercepted Communications as Evidence

Like Title III, FISA places strict controls on what information can be acquired through electronic surveillance and how such information can be used. No information can be disclosed for law enforcement purposes except with the proviso that it may only be used in a criminal proceeding under advance authorization from the Attorney General. If the government intends to use such information in court, then the aggrieved person must be notified in advance. The person may move to suppress the evidence.

3.5 Reports

Each year, the Attorney General must give the Administrative Office of the United States Courts (AO) a report of the number of FISA applications and the number of orders and extensions granted, modified, or denied. In 1992, there were 484 orders. Since 1979, there has been an average of a little over 500 FISA orders per year.

Because intercepts conducted under FISA are classified, detailed information analogous to that required under Title III is not reported to the AO, nor made available to the public. However, records of Attorney General certifications, applications, and orders granted must be held for at least 10 years, and the Attorney General must inform two Congressional oversight committees of all surveillance activity on a semiannual basis. These committees are the House Permanent Select Committee on Intelligence and the Senate Select Committee on Intelligence.

Acknowledgements

We are grateful to Geoffrey Greiveldinger for many helpful suggestions on an earlier draft of this report. [denning@cs.cosc.georgetown.edu (Dorothy Denning) via risks-digest Volume 15, Issue 10]
Will be moved to Friday, September 24, 1993 - 17:34 # G!

ITAR issues in PGP & Moby Crypto subpoenas

As reported in many places, such as the Computer Underground Digest, The New York Times (Sept 21), and on AP, subpoenas were served on representatives of the companies ViaCrypt and Austin Code Works for materials related to a grand jury investigation in California associated with the U.S. Customs Office. Both warrants are dated 9 Sept., but were served and received two days apart (contrary to the NYT account), with ViaCrypt's on Tues 14 Sept and ACW's on Thur 16 Sept:

Austin Code Works:
>Any and all correspondence, contracts, payments, and record,
>including those stored as computer data, relating to the
>international distribution of the commercial product "Moby
>Crypto" and any other commercial product related to PGP and RSA
>Source Code for the time period June 1, 1991 to the present.

ViaCrypt: >"Any and all >correspondence, contracts, payments, and records, including those >stored as computer data, involving international distribution related >to ViaCrypt, PGP, Philip Zimmermann, and anyone or any entity acting >on behalf of Philip Zimmermann for the time period June 1, 1991 to the >present."

ViaCrypt just announced publicly a few weeks ago its intent to market a commercial version of PGP. G. Ward, author of Moby Crypto, has been very vocal on various newsgroups (sci.crypt, et al.), indicating that an NSA agent had previously contacted him over the book, essentially a cryptography tutorial intended to be bundled with disks. Nevertheless, the investigation appears at this point to be primarily PGP-oriented based on the subpoena wording, and my following comments will focus on that aspect.

If the case progresses beyond this initial inquiry, the issues related to the ITAR (International Traffic in Arms Regulations) restricting the flow of cryptographic software and documentation, long debated in RISKS, are likely to receive intense scrutiny and perhaps their first significant judicial test. Many aspects relate to the possibility of ITAR infringement in international PGP distribution, involving highly complex import and export issues, some of which follow.

PGP 1.0 was developed in the U.S. and soon spread internationally after its official release in June 1991 (hence the June 1, 1991 date in the subpoenas). Various sections of the ITAR govern the legal export of cryptographic software and technical documentation; one critical clause defines technical data as follows:

$120.21 Technical data.

Technical data means, for purposes of this subchapter: (a) Classified information relating to defense articles and defense services; (b) Information covered by an invention secrecy order; (c) Information, in any form, which is directly related to the design, engineering, development, production, processing, manufacture, use, operation, overhaul, repair, maintenance, modification, or reconstruction of defense articles. This includes, for example, information in the form of blueprints, drawings, photographs, plans, instructions, computer software, and documentation. This also includes information which advances the state of the art of articles on the U.S. Munitions List. This definition does not include information concerning general scientific, mathematical, or engineering principles commonly taught in academia. It also does not include basic marketing information or general system descriptions of defense articles.

The critical question: Is PGP (1) `computer software related to defense' or (2) `technical documentation encompassing general scientific & engineering principles'? Other sections of the ITAR definitely classify cryptographic software as a defense article. In a hypothetical legal case against PGP distribution, the defense might argue that the interpretation of PGP as (2) takes priority over, or is more relevant and applicable than, (1). A wide variety of respondents on the `cypherpunks' list have indicated that the RSA *algorithm* embodied in PGP is unequivocally public domain knowledge in the U.S. and regularly `taught in academia'.

As a peripheral issue to *export* of PGP above, some sources point out that the IDEA algorithm was implemented outside the U.S. and apparently *imported* into the US in PGP. The legality of this may be affected by sections of the ITAR that bar import of material not legally exportable:

"123.2 Imports.

No defense article may be imported into the United States unless (a) it was previously exported temporarily under a license issued by the Office of Munitions Control; or (b) it constitutes a temporary import/in-transit shipment licensed under Section 123.3; or (c) its import is authorized by the Department of the Treasury (see 27 CFR parts 47, 178, and 179)."

Many armchair ITAR experts have noted that the act does not appear to specifically address distribution mechanisms intrinsic to an Internet PGP distribution, specifically via newsgroups ([x].sources etc.) or FTP. It refers to traditional outlets associated with the "public domain", such as libraries, but its application to what might be termed `cyberspatial distributions', including BBSes, is questionable, ambiguous, and debatable.

Finally, if the case reaches a court, the actual outcome may also hinge on the apparent court precedent that *willful* violation of the ITAR ("criminal intent") must be demonstrated for a valid conviction under the law, seen for example in U.S. v. Lizarraga-Lizarraga (541 F.2d 826).

I thank the following people for accounts, information, and analysis which particularly influenced my post (which should in no way be considered representative of their own opinions):

J. Bidzos, G. Broiles, H. Finney, J. Markoff, G. Ward, P. Zimmermann

Note: complete ITAR text can be found via anonymous FTP at ripem.msu.edu:/pub/crypt/docs/itar-july-93.txt.

thanks to M. Riordan and D. Bernstein. ["L. Detweiler" via risks-digest Volume 15, Issue 11]
Will be moved to Thursday, September 23, 1993 - 17:33 # G!

Fungible microprocessors

A story delivered by CompuServe's Executive News Service newswires through my topic-filters into the "Security" in-box caught my eye yesterday afternoon:

"OTC 09/10 1606 Violent computer chip takeovers worry officials

SAN JOSE, Calif. (Sept. 10) UPI - The lucrative trade in computer chips has captured the attention of the state's street gangs, luring them to California's Silicon Valley where the armed takeover of supply warehouses has become a common occurrence, authorities said Friday."

The article includes an interview with Julius Finkelstein, deputy district attorney in charge of Santa Clara's High Tech Crime unit. Mr Finkelstein thinks that there is a trend towards violent robberies of computer processors in Silicon Valley because of the high demand for these chips. One of the reasons the chips are so lucrative on the gray market is that they have no serial numbers and cannot be traced to a stolen lot. The chips are as valuable as cocaine on a weight-for-weight basis, he said.

The most recent case occurred on Thursday, 9 Sept 93, when six thieves attacked Wyle Laboratory Inc. in Santa Clara in a well-planned, precise operation which netted thousands of dollars of Intel CPUs. Apparently the thefts have reached one a month so far, with signs of worsening as criminal street gangs realize how low their risks of capture, successful prosecution, or sentencing are.

***

CPU chips, like pennies but not dollar bills, are fungible. That is, they are indistinguishable and equivalent. When a manufacturer buys gray-market CPU chips, there is no way to identify them as stolen because there is no way to tell which chips came from where and how they got there.

How long will it be before this kind of RISK to workers and loss for manufacturers leads to a cryptographically-sound system for imposing serial numbers on microprocessors? In this case, a unique ID could not only save money, it could save some innocent person's life.

Could the chip manufacturers engrave a unique ID on their chips during the wafer stage using their normal electron-beam/resist/UV/acid production phase? Each chip in a wafer would have a sequence number, and each wafer might have a wafer number. For such ID to be effective in reducing the fungibility of microprocessors, each manufacturer would have to keep secure records of their products and where they shipped them, much as pharmaceutical manufacturers and many others do. Would such an engraved number be readable once the chip were encapsulated? Does anyone know if X-rays, for instance, could pick up the engraved numbers?
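
Purely as a thought experiment -- not any manufacturer's actual scheme -- here is a minimal sketch in Python of how such secure record-keeping and verification might work, assuming each die carries a plain engraved serial number and the manufacturer holds a secret key and a shipment database; every name, key, and record below is hypothetical:

  import hmac, hashlib

  SECRET_KEY = b"manufacturer-private-key"   # hypothetical; held only by the manufacturer

  def tag_for(serial: str) -> str:
      # An authentication tag recorded (or engraved) alongside the serial,
      # so a forged or altered serial number can be detected later.
      return hmac.new(SECRET_KEY, serial.encode(), hashlib.sha256).hexdigest()[:16]

  # Hypothetical shipment records kept under lock and key by the manufacturer.
  shipments = {"W1024-017": "Wyle Laboratory Inc., Santa Clara, lot 93-211"}

  def check_recovered_chip(serial: str, tag: str) -> str:
      # Called when a gray-market or recovered chip is presented for verification.
      if not hmac.compare_digest(tag, tag_for(serial)):
          return "tag does not verify -- serial number may be forged"
      return shipments.get(serial, "genuine serial, but no shipment record on file")

  print(check_recovered_chip("W1024-017", tag_for("W1024-017")))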

Another approach might be to integrate a readable serial number in the physical package in which the CPU is embedded. Perhaps unique, IR-readable information could be molded into the plastic or epoxy-resin package using technology that has already been applied successfully to producing access-control cards. Other technology that might be applicable includes the Wiegand effect, where the orientation of ferromagnetic spicules in a plastic matrix produces a characteristic and individual response to a radio-frequency electromagnetic beam. Perhaps it would be wise for the industry to agree on some standards to make it easier to read such numbers using a simple, inexpensive technique.

How much would all this engraving and record-keeping cost? Surely the costs would ultimately be borne by consumers; therefore, individual companies may balk at identifiers because they could derive a short-term competitive edge by continuing to manufacture fungible chips. In the long run, however, if theft continues to increase, plants producing identical chips may become the preferred targets of chip thieves.

Michel E. Kabay, Ph.D., Director of Education, National Computer Security Assn ["Mich Kabay / JINBU Corp." <75300.3232@compuserve.com> via risks-digest Volume 15, Issue 05]
Will be moved to Saturday, September 11, 1993 - 17:30 # G!

Brussels Branch Of BNP Hit By Computer Fraud

Brussels, Belgium, September 8, 1993 (NB) -- The Belgian office of Banque Nationale de Paris (BNP) has admitted it was the victim of a major computer fraud in June of this year, according to Belgian press sources. The AFP news agency reports that a total of BFr 245 million was taken in the computer fraud, although bank officials have now recovered the money and police are holding two suspects. The two fraudsters used their direct computer access facilities to request debits from BNP accounts and switch the proceeds into their own bank accounts with other banks. According to BNP sources, auditors picked up the fraud when they carried out a routine series of checks on inter-bank transactions in June.

As soon as the fraud was discovered, the third party banks were contacted and the money recovered. As a result of the fraud, BNP is carrying out an internal inquiry into how the frauds occurred and whether its security systems can be beefed up to prevent a recurrence. [ via risks-digest Volume 15, Issue 04]
Will be moved to Thursday, September 9, 1993 - 17:29 # G!

Re: The risks of CERT teams vs we all know (Cohen, RISKS-15.02)

Let's go back and review history (briefly). The CERT teams, beginning with the original CERT at CMU, were a reaction to the now infamous Morris Worm of 1988. The folks who "solved" the worm problem by disassembling the worm and generating patches for the network community, in a matter of hours, were the university people, both staff and students.

Then, we had source code.

Today vendors are more and more making their source code unavailable, or too expensive, or available only under untenable terms [1]. Fewer of us university people now have source code for contemporary systems. Yet the crackers out there have all the source code they can steal, which is quite a collection. I know, I saw it.

The network is as vulnerable today as it was in 1988... We'll see what happens the next time! -Jeff

[1] Terms for example that prohibit students from having access. [Jeffrey I. Schiller via risks-digest Volume 15, Issue 03]
Will be moved to Wednesday, September 8, 1993 - 17:28 # G!

Lost Canadian crime statistics data

Toronto Star, Aug. 31, 1993 [p. A9]

TORONTO-- Statistics Canada reported a dramatic drop- almost 12 percent- in violent crime across Metro from 1991 to 1992. But according to Metro police, violent crimes [assault, sexual assault, robbery, etc.(!) ], except homicides, continued to climb last year. For example, Statistics Canada cited 24,408 assaults (both sexual and non-sexual) in Metro last year...But the Metro police annual report cited 29,071 assaults reported last year...

Officials at Statistics Canada and Metro police could not explain the discrepancies yesterday. A Statistics Canada official said the figures were provided by Metro police...

The next day (Sept. 1, 1993), the following report appeared [p. A2]:

Statistics Canada has likely lost computer data, causing a major miscalculation of Metro's violent crime rate, Metro police say... Puzzled StatsCan officials said they may know today what's wrong. [Gordon MacKay of the Canadian Centre for Justice Statistics, which compiled the figures for StatsCan] said that one possibility is a problem with data they received via a recently installed computer link-up.

Both Metro police and Statistics Canada officials said yesterday there were no problems when the calculations were done manually from typed reports.

This year's federal crime survey marked the first time Metro's figures were calculated using computer tapes provided by the force. The system was supposed to speed up calculations and do away with paperwork...

MacKay said StatsCan usually sends preliminary findings to each police force for verification. But Metro police didn't receive the crime figures from the agency until yesterday-- hours after it had made its findings public, [said Mike Dear, Metro police's director of records and information security.]

The Thursday edition did not follow up.

[An earlier problem with the Metro Police handling of crime data was contributed by Doug Moore to RISKS-14.18. PGN] [elf@ee.ryerson.ca (luis fernandes) via risks-digest Volume 15, Issue 02]
Will be moved to Friday, September 3, 1993 - 17:27 # G!

The risks of CERT teams vs we all know

The problem with restricting information to CERT teams, etc., is that it:
1 - creates a techno-elite
2 - limits distribution far too much

I expand upon it:

Creating a techno-elite makes it impossible for the average person or the interested novice to get involved. Most of the major breakthroughs in information protection over the ages have come from one of these types and NOT from the techno-elite. We are creating an inbreeding situation that could be a fatal flaw.

Limiting distribution to these groups means that the vast majority of those who actually perform these protection functions are denied the facts they need to get the job done. Suppose the attacker takes out the phone lines to your CERT. You become hopeless because you are a sheep. If you know how things work on your own, at least you have a chance to defend yourself.

FC

P.S. In my exchange, you may not dial a 1 for local calls, and you must dial a 1 for non-local calls EXCEPT for international calls. Dialing a 1 before everything doesn't work. Does anyone have a universal list of exchanges and which other exchanges are considered local to them? I think not! Without this, how can I automate the process? Wait for a disconnect and assume it was from a failure to dial/not dial a 1? [Fredrick B. Cohen via risks-digest Volume 15, Issue 02]
Will be moved to Friday, September 3, 1993 - 17:26 # G!

Draft Italian Antivirus Law

Prompted by the message by Mr. Brunnstein in RISKS-15.11, I thought RISKS readers might find it interesting to know that a "Computer Crime" act is currently under review by the Italian Parliament (to the best of my knowledge, one of its two branches has approved it).

I have enclosed a tentative translation as well as the original text of the article related to "malicious programs". The whole act also addresses other issues such as unauthorized entry or possession of access codes, etc.

A bit of personal comment about the wording of the article: while the Swiss text focuses on the concept of (lack of) "authorization" in order to define the illegal behaviour of both people and programs, there is no such "keyword" in the Italian proposal. Moreover, the provision against "programs ... having the effect of ... damaging a computer or ... the programs or data contained in ... it" is even more RISKy. It seems to me that, besides viruses, most of the bugs usually found in SW could fall under this article, since lack of intent is not treated as grounds for exclusion from punishment.

Having followed the VIRUS-L forum for a while, I am perfectly aware that it is almost impossible to draw a satisfactory border between malicious programs and legitimate ones, but I feel that this text misses the point by more than a bit. Comments welcome.

Luca Parisi.

--Proposed Translation-- --Disclaimer: Please note that I'm not a lawyer, so people in the legal field might find it inaccurate; feel free to correct it if needed-- Article 4 of the [Proposed] computer crime act:

[material deleted]

"Article 615-quinquies of the Penal Code (Spreading of programs aimed at damaging or interrupting a computer system). Anyone who spreads, transmits or delivers a computer program, whether written by himself or by someone else, aimed at or having the effect of damaging a computer or telecommunication system, the programs or data contained in or pertaining to it, or interrupting in full or in part or disrupting its operation is punished with the imprisonment for a term of up to two years or a fine of up to It. L. 20,000,000."

--Original Text-- --Excerpt from: Camera dei Deputati - Disegno di Legge presentato dal Ministro di Grazia e Giustizia (Conso), recante "Modificazioni ed integrazioni alle norme del codice penale e del codice di procedura penale in materia di criminalita' informatica." - N. 2773--

Art. 4 [omissis]

"Art. 615-quinquies. - (Diffusione di programmi diretti a danneggiare o interrompere un sistema informatico). - Chiunque diffonde, comunica o consegna un programma informatico da lui stesso o da altri redatto, avente per scopo o per effetto il danneggiamento di un sistema informatico o telematico, dei dati o dei programmi in esso contenuti o ad essi pertinenti, ovvero l'interruzione, totale o parziale, o l'alterazione del suo funzionamento, e' punito con la reclusione sino a due anni e con la multa sino a lire venti milioni." [Luca Parisi via risks-digest Volume 15, Issue 13]
Will be moved to Wednesday, October 13, 1993 - 17:24 # G!

give us all your passwords

Last week, many of us at the company where I work were astonished to receive an e-mail message from our parent company's legal department asking everyone to send them all the passwords everyone had used on our LAN servers since January, 1991, except for current passwords. Fortunately, it was shortly revealed that this did not apply to our division, but not before I had sent back a reply telling the person in the legal department how dangerous I thought this was.

Later we found out at a company meeting that another division in our family of companies is being sued because of some possibly suspicious stock trading, and our legal department wants to make sure that it can get at any records on their network servers. I, of course, suspect that they are being spectacularly ignorant of how little use the password lists would be to them and the security risks involved with having lists of individual passwords lying around in plaintext form. Even though none of the passwords should be current, my experience suggests that many people stick to certain themes and patterns for passwords, especially when password aging is used, as it is on our servers. Our passwords expire every 40 days, which means that everyone working at our company since January 1991 has gone through 25 passwords by now, giving any crackers a sizable database to extrapolate from. And of course, everyone will probably send their password lists by e-mail, giving crackers an easy opportunity to intercept such lists. [stevev@miser.uoregon.edu (Steve VanDevender) via risks-digest Volume 15, Issue 11]
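
A quick back-of-the-envelope check of that 25-password figure (a sketch only, assuming the roughly October 1993 date of the post):

  from datetime import date

  # Days between the January 1991 start of the requested records and roughly now.
  days = (date(1993, 10, 10) - date(1991, 1, 1)).days   # just over 1000 days
  print(days // 40)   # about 25 forced password changes at a 40-day expiry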
Will be moved to Sunday, October 10, 1993 - 17:22 # G!

Draft Swiss AntiVirus regulation

To whom it may concern:

The Swiss Federal Agency for Informatics (Bundesamt fuer Informatik, Bern) is preparing a legislative act against distribution of malicious code, such as viruses, via VxBBS etc. You may know that there have been several attempts to regulate the development and distribution of malicious software, in UK, USA and other countries, but so far, Virus Exchange BBS seem to survive even in countries with regulations and (some) knowledgeable crime investigators.

In order to optimize the input into the Swiss legal discussion, I suggested that their draft be internationally distributed, for comments and suggestions from technical and legal experts in this area. Mr. Claudio G. Frigerio from Bern kindly translated the (Swiss) text into English (see appended text, both in German and English); in case of any misunderstanding, the German text is the legally relevant one! Any discussion on this forum is helpful; please send your comments (Cc:) also to Mr. Claudio G. Frigerio (as he's not on this list).

"The Messenger" (Klaus Brunnstein: October 9, 1993)

############################################################### Appendix 1: Entwurf zu Art. 144 Abs. 2 des Schweizerischen Strafgesetzbuches

"Wer unbefugt elektronisch oder in vergleichbarer Weise gespeicherte oder uebermittelte Daten loescht, veraendert oder unbrauchbar macht, oder Mittel, die zum unbefugten Loeschen, Aendern oder Unbrauchbarmachen solcher Daten bestimmt sind, herstellt oder anpreist, anbietet, zugaenglich macht oder sonstwie in Verkehr bringt, wird, auf Antrag, mit der gleichen Strafe belegt."

P.S.: gleiche Strafe = Busse oder Gefaengnis bis zu 3 Jahren; bei grossem Schaden, bis zu 5 Jahren Gefaengnis sowie Verfolgung von Amtes wegen (Offizialdelikt)

############################################################### Draft of article 144 paragraph 2 of the Swiss Penal Code (English translation)

Anyone who, without authorization, erases, modifies, or destroys electronically or similarly stored or transmitted data, or anyone who creates, promotes, offers, makes available, or circulates in any way means destined for the unauthorized deletion, modification, or destruction of such data, will, if a complaint is filed, receive the same punishment.

P.S.: same punishment = fine or imprisonment for a term of up to three years; in cases of considerable damage, up to five years' imprisonment, with prosecution ex officio.

Author: Claudio G. Frigerio, Attorney-At-Law, Swiss Federal Office of Information Technology and System, e-mail: bfi@ezinfo.vmsmail.ethz.ch [Klaus Brunnstein via risks-digest Volume 15, Issue 11]
Will be moved to Saturday, October 9, 1993 - 17:20 # G!

Re: Dorothy Denning and the cost of attack against SKIPJACK

On page 14 of the August 30, 1993 issue of Government Computer News, Kevin Power reports that Dorothy Denning told the Computer System Security and Privacy Advisory Board that SKIPJACK would not be compromised by exhaustive attack methods in the next 30 to 40 years.

I am reminded of a story, perhaps apocryphal. In the middle seventies Fortune magazine was working on a feature on computer crime. Most of the experts that they interviewed told them that the security on most of the nation's commercial time sharing systems was pretty good. However, they admitted that one convicted felon and hacker, Jerry Schneider, would tell them otherwise. Of course Fortune had to interview him. According to the story, the interview went something like this:

Fortune: Mr. Schneider we understand that you are very critical of the security on the nation's commercial time sharing systems.

Jerry: Yes, that is right. Their security is very poor.

Fortune: Could you break into one of those systems?

Jerry: Yes, certainly.

Fortune: Well, could you demonstrate for us?

Jerry: Certainly, I'd be happy to.

At this point Jerry took the reporters into the room where his "Silent 700" terminal was. He connected to the system that he normally used but deliberately failed the logon. When he deliberately failed again at the retry prompt, the system disconnected. Jerry dialed in again, failed a third time, and this time he broke the connection. He dialed a third time but this time he dialed the number of the operator.

Jerry: This is Mr. Schneider. I seem to have forgotten my password. Can you help me?

Operator: Sorry Mr. Schneider, there is nothing that I can do. You will have to call back during normal business hours and talk to the security people.

Jerry: I am sorry too, but you do not seem to understand. I am working on something very important and it is due out at 8am. I have to get on right now.

Operator: I am sorry. There is nothing that I can do.

Jerry: You still do not understand. Let me see if can clarify it for you. I want you to go look at your billing records. You will see that you bill me about $800- a month. This thing that I am working on; it is why you get your $800-. Now, if you do not get off your a-- and get me my password so that I have this work out at 8am, by 9am there is going to be a process server standing on your front steps waiting to hang paper on the first officer through the door. Do I make myself clear?

Apparently he did.

Operator: Mr. Schneider, I will call you right back.

At this point the operator appears to have done one or two things right: changing the password, calling Jerry back at the number where the records said he should be, and giving him the new password. Jerry dumped two files and then turned to the reporters. With a triumphant smile he said "You see!"

Fortune (obviously disappointed): No, No, Mr. Schneider! That is not what we wanted to see. What we wanted to see was a sophisticated penetration of the software controls.

Jerry: Why would anybody do THAT?

The cost of an exhaustive attack is an interesting number. It gives us an upper bound for the cost of efficient attacks. However, it is never, itself, an efficient attack. It is almost always orders of magnitude higher than the cost of alternative attacks. The very fact that its cost can be easily calculated ensures that no one will ever encrypt data under it whose value approaches the cost of a brute force attack.
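
As a minimal sketch of how such an upper bound is computed -- assuming SKIPJACK's 80-bit key size and a purely hypothetical attacker able to test a trillion keys per second -- one can do the arithmetic directly:

  keyspace = 2 ** 80                    # SKIPJACK uses an 80-bit key
  rate = 10 ** 12                       # assumed keys tested per second (hypothetical)
  seconds_per_year = 365 * 24 * 3600
  # On average the key is found after searching half the keyspace.
  years = keyspace / (2 * rate * seconds_per_year)
  print(round(years))                   # roughly 19,000 years at this assumed rate

Different hardware assumptions move the answer by orders of magnitude, but the point stands either way: the exhaustive figure is an upper bound, not the attack anyone would actually mount.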

History is very clear. "Black Bag" attacks are to be preferred; they are almost always cheaper than the alternatives. After those are attacks aimed against poor key management. These attacks will be very efficient when the keepers of the keys already work for you and where their continued cooperation and silence are assured.

William Hugh Murray, 49 Locust Avenue, Suite 104; New Canaan, Connecticut 06840 1-0-ATT-0-700-WMURRAY; WHMurray at DOCKMASTER.NCSC.MIL [WHMurray@DOCKMASTER.NCSC.MIL via risks-digest Volume 15, Issue 02]
Will be moved to Friday, September 3, 1993 - 15:45 # G!

Re: Cisco backdoor? (RISKS-14.87,88,89)

After consulting with Cisco, I am convinced that the phenomenon I reported earlier in RISKS-14.87 was not a back door but instead a situation unique to a particular company's equipment, caused by an unrelated management issue. The explanation seems reasonable, and I am willing to assume that the supposed back door does not exist at this point, especially since several independent groups have not been able to confirm its existence. Those with Cisco routers can presumably relax, at least as far as this issue is concerned.

Al Whaley al@sunnyside.com +1-415 322-5411(Tel), -6481 (Fax) Sunnyside Computing, Inc., PO Box 60, Palo Alto, CA 94302

[At Al's request, and as a courtesy to CISCO, I have appended a note in the CRVAX ARCHIVE copy of RISKS-14.87 and RISKS-14.89 pointing to THIS issue. Other archive maintainers may wish to recopy those issues. Thanks. PGN] [Al Whaley via risks-digest Volume 15, Issue 01]
Will be moved to Wednesday, September 1, 1993 - 15:42 # G!

Risks of Discussing RISKS

Is discussing risks RISKY? I would like to see more discussion of this topic -- even though it's been discussed in years past. I agree completely with PGN, who suggests that many people (I'd argue the majority) are living with blinders on. Even those on the provider/vendor side who should understand the risks of certain technologies (cellular phones being an obvious example) have a) underrated the intelligence of potential adversaries, b) overestimated the cleverness of their own technology, c) underestimated the speed at which exploitation information and devices would be disseminated, d) assumed that the using public can't be hurt by what they don't know, and e) let the magnitude of the financial rewards overshadow everything. Perhaps more open discussion -- and knowledge that such discussion -was- going to happen -- would encourage providers not to make naive assumptions regarding the risks and might cause users to demand more of the products they buy. (Where have we heard that before?)

Anyway -- one approach to the problem has developed over the last few years (since the Internet worm incident, to be more precise) that might be worth noting. A voluntary cooperative group of security incident response teams known as FIRST (Forum of Incident Response and Security Teams) has developed to address the problem of sharing potentially risky information without giving away the store in the process. Member teams include response teams representing a wide range of "constituencies", including the Internet (i.e., CERT), various government agencies (e.g., DISA/ASSIST for DoD, Dept of Energy's CIAC, CCTA for the UK, SurfNET in the Netherlands, etc.), private sector organizations, vendors, and academia. Member teams share information on both latent and active system vulnerabilities through a series of alerts issued by the various teams. The alerts attempt to walk the fine line of describing a problem in sufficient detail (along with corrective actions) without providing enough information for exploitation. By initially distributing alerts only among member teams (and careful vetting of members), there is reasonable control over distribution.

While this certainly has not solved the problems associated with identifying and closing system or network risks, it has made, I believe, great strides toward building trust and mutual support through effective information sharing and cooperation. Other groups have used a similar approach to address similar problems -- e.g., the sharing of virus information. I would be quite interested to hear how others have addressed the problem. [dds@csmes.ncsl.nist.gov (Dennis D. Steinauer) via risks-digest Volume 15, Issue 01]
Will be moved to Monday, August 30, 1993 - 15:41 # G!

Re: Cisco backdoor? (RISKS-14.87,88,89)

I just spoke to Al, and found out what the story was. We hired a subcontractor and part of his deal with us is that we provide them access to the Internet through cisco's corporate network. Since we have a relationship and our networks are physically tied together, the routers are specifically configured to allow greater access between our site and theirs (at their request).

There was absolutely positively no "back door." Al never actually performed any tests with routers where he knew the configuration, and I would toss the entire thing up to some miscommunication. [Paul Traina via risks-digest Volume 15, Issue 01]
Will be moved to Friday, August 27, 1993 - 15:38 # G!

More on the Breeders Cup Pick-6 fix

Unsurprisingly, the *Daily Racing Form* is doing a better job of covering the scam than the general media; here are a couple of references:

The "3rd member" of the party (whose account was used to make the "test runs" of the fix prior to the Breeders Cup) was already under investigation by the OTB http://www.drf.com/members/web_news.generate_article_html?p_news_head=42153&p_arc=1

The fired Autotote employee pleads guilty to one count of wire fraud: http://www.drf.com/news/article/42458.html Also it seems that he (and his 2 confederates) were cashing unclaimed tickets that he found in the Autotote system.

As in many cases of this type it looks like greed was their undoing. Here's a link to another betting scam from the 1970's: http://www.drf.com/members/web_news.generate_article_html?p_news_head=42077&p_arc=1

I think that the fact that the OTB was already looking into the 3rd person's suspect account indicates that the checks were there; that they didn't act sooner indicates prudence on their part. The worst thing (from a credibility standpoint) that a betting organization can do is withhold a payout and accuse a player of cheating, only to be proved wrong.

--Danny Lawrence, Tiassa Technologies Inc. http://www.tiassatech.com/domino/saga.nsf/story/uk [Danny Lawrence via risks-digest Volume 22, Issue 39]
Will be moved to Thursday, November 21, 2002 - 15:14 # G!

Interesting new spammer trick

Since many of you are interested in the topic of E-mail spam, e.g., the techniques used by the spammers to evade filtering and the techniques used by everybody else to try to outsmart them, I thought you might be interested in the following new spammer trick which I first saw on October 17 and have seen numerous times since then.

I use a home-grown script to analyze the "Received:" headers of the spam that I receive, determine the appropriate sites to whom to complain, and generate the complaint messages. Spammers figured out quite a while ago to insert forged Received: headers in their messages, but they're usually pretty easy to weed out, e.g., they refer to nonexistent hosts, they list bogus envelope recipients, they have bogus dates, the destination host of one Received: header doesn't match the sender host of the next one in the chain, etc. However, at least one spammer has figured out how to forge a Received: header which is more convincing than any I've seen before. Here are some of the headers of a spam message I received on October 17:

  Received: from pacific-carrier-annex.mit.edu (PACIFIC-CARRIER-ANNEX.MIT.EDU [18.7.21.83])
	  by jik.kamens.brookline.ma.us (8.12.5/8.12.5) with ESMTP id g9HBd2aP009915
	  for ; Thu, 17 Oct 2002 07:39:02 -0400
  Received: from 146-153-179-208.pajo.com (146-153-179-208.pajo.com [208.179.153.146] (may be forged))
	  by pacific-carrier-annex.mit.edu (8.9.2/8.9.2) with SMTP id HAA12722
	  for ; Thu, 17 Oct 2002 07:39:01 -0400 (EDT)
  Received: from 13217 (20458 [53.86.86.54])
	    by 6432 (8.12.1/8.12.1) with ESMTP id 27244
	    for ; Thu, 17 Oct 2002 04:39:00 -0700
  From: "Consult" 
  To: "jik@mit.edu" 
  Subject:  îìïüþòåðû è êîìïëåêòóþùèå ïî ÑÀÃ&128;Ã íèçêèì öåíàì.
  Date: Thu, 17 Oct 2002 04:39:00 -0700
  Message-ID: <811325562@mlnCplw1hgx>

Note the last Received header. Both the date and the envelope recipient listed in it are correct, and the rest of the header is pretty much formatted correctly; the only tip-off that something strange is going on is the numeric host names. But whoever is doing this got a little smarter pretty quickly. Here's the last Received: header from a spam message I receive on November 6:

Received: from delphi.com (mailexcite.com [85.34.182.181]) by aol.com (8.11.6/8.11.6) with ESMTP id 9874 for ; Wed, 6 Nov 2002 09:37:38 +0000

Much better, eh? I've seen various incarnations of this since then with data that seems at first glance to be correct but does not withstand closer inspection.

Note that you can't use a simple regular expression match to filter out all messages with headers in this format, because this is a valid Received: header format and I've received real non-spam messages that use it (albeit with data that isn't bogus).
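
For the curious, here is a minimal sketch of the kind of cross-checking described above -- not the actual script, and with much-simplified parsing (real Received: headers vary far more than this pattern allows for):

  import re
  from email.utils import parsedate_to_datetime

  # Pull the "from" host, "by" host, and date out of one Received: header.
  HOP = re.compile(r'from\s+(\S+).*?\bby\s+(\S+).*?;\s*(.+)\Z', re.DOTALL)

  def parse_hop(header):
      m = HOP.search(header)
      if not m:
          return None
      frm, by, when = m.groups()
      return frm.lower(), by.lower(), parsedate_to_datetime(when.strip())

  def inconsistencies(received_headers):
      # Headers are given top (most recent) to bottom (oldest), as they
      # appear in the message.
      hops = [parse_hop(h) for h in received_headers]
      problems = []
      for newer, older in zip(hops, hops[1:]):
          if newer is None or older is None:
              continue
          # The host that handed the message to the "newer" hop should be
          # the host that stamped the "older" header.
          if newer[0] != older[1]:
              problems.append("handoff mismatch: %s vs %s" % (newer[0], older[1]))
          # Timestamps should not run backwards as we walk down the chain
          # (real servers do have a little clock skew).
          if older[2] > newer[2]:
              problems.append("date runs backwards: %s after %s" % (older[2], newer[2]))
      return problems

Checks like these catch the older, sloppier forgeries; as the November 6 example shows, a header whose hosts and dates are internally consistent sails right through, which is exactly what makes the new trick interesting.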

They're getting smarter. I just hope my bogofilter database can keep up with them :-). [Jonathan Kamens via risks-digest Volume 22, Issue 39]
Will be moved to Thursday, November 14, 2002 - 15:03 # G!

1.5 Seconds of fame

Huh, I'm mentioned in the Chronicle of Higher Education:

But he and other researchers who are challenging government efforts to regulate technology are expressing themselves more broadly through blogs, as Web logs are known. Besides Mr. Felten's there are also Zimran Ahmed's winterspeak.com, Maximillian Dornseif's dysLEXia, and Frank R. Field's FurdLog, to name a few.

10:55 # G!

Maximillian Dornseif, 2002.
 