Keeping IT Pure


By E. Eugene Schultz

Every aspect of an information system--from the network infrastructure down to bit sequences in data files--requires protection from unauthorized changes. Providing it is a huge task.

Most organizations in commercial, government, academic and other arenas are deeply concerned about the continued onslaught of security violations in networks and computing systems. These incidents are damaging not only in terms of data loss, embarrassment and legal repercussions but because of unauthorized changes in data and system integrity. The effort needed to detect and analyze these changes, to restore compromised systems to their normal mission status, and to verify that information and data values have not changed can cost more than all other losses associated with an incident.

Information security is commonly viewed in terms of the need for confidentiality, integrity and availability of systems and data. Other needs, such as protecting against unauthorized observation and possession of data, also should be considered. Yet not all needs are equally important; integrity is in many respects the most fundamental security consideration in today's computing environments. For example, in a significant compromise of system integrity, continued availability of corrupted services and data (as in life-critical systems) may be more detrimental than total unavailability, especially when manual completion of tasks normally performed by computing systems is an option. Similarly, motivation for protecting corrupted data from unauthorized observation and possession is at best questionable.

A system or network has integrity if all its components are free of unauthorized modification. Unauthorized modification of system and network software and data files is a constant security threat. In fact, the vast majority of today's information security-related incidents involve some kind of modification of system software to allow perpetrators to gain privileged access, create back doors into systems and protect unauthorized activity from discovery. Data integrity is similar in that it requires that no unauthorized changes to data files and packets have occurred.

It is worth noting that the need to maintain integrity goes beyond security. System hardware failures, media corruption, system bugs, errors in algorithms, human error and other problems can result in unintended changes in integrity. These sources of integrity compromise, especially human error, can lead to considerable loss and disruption, but they involve issues other than unauthorized human activity and malicious programs.

Integrity may superficially appear to be a global property that systems either have or do not. Although this view is correct to the degree that one change in a system is likely to affect the integrity of the entire system, integrity manifests itself in different ways. It is necessary to pay attention to the challenges that each type presents.

From Top to Bottom

Computing environments are subject to attacks on their integrity at all levels. Let's begin with systems and networks, which consist of a multitude of hardware components. Although many unauthorized modifications would result in system and/or network failure, some could produce changes in functionality and data. To cite only a few examples, a perpetrator could replace a hard drive with another that contains different executables and data, replace a motherboard with one that provides unintended functionality or make EPROM changes that cause a system to boot differently. Or "vampire taps" can be attached to network cabling, allowing unauthorized capture of network traffic. Because hardware integrity can be compromised in many ways, verifying it can be difficult and time-consuming.

At the operating system level, intruders frequently attempt to modify system binaries, because modification can yield useful information, execute commands that escalate an attack, and/or put the intruder in an environment in which it is easier to read data or change other system components. Binaries that are part of authentication processes are particularly useful in that they frequently require users to enter passwords and other authentication-related information. For example, perpetrators often modify the Unix login or in.telnetd programs, both of which prompt the user for a login name and password. Such modifications enable the intruder to steal passwords and other access information.

Other binaries in Unix systems are also frequent targets of modification attempts. The possibilities are seemingly endless, and the fact that so many system administrators do not regularly check the integrity of critical system binaries makes these files an easy target during an attack.

System configuration files are another frequent target of intruders. In Unix systems, /etc/passwd is a popular target because a minimum of changes in this file can quickly allow access to a root shell. The /etc/group file is also frequently targeted because group ownerships can allow read/write access to critical configuration files.
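The kind of minimal /etc/passwd change described above -- an extra account with user ID 0 -- can be caught with a simple scan. The sketch below is illustrative only (the allowed-account list and sample file contents are assumptions), but it follows the standard colon-separated passwd(5) layout:

```python
# Sketch: flag passwd entries with UID 0 beyond the expected root account,
# a common sign of an unauthorized change. Sample data is hypothetical.

def find_uid_zero(passwd_text, allowed=("root",)):
    """Return login names with UID 0 that are not in the allowed set."""
    suspects = []
    for line in passwd_text.splitlines():
        if not line or line.startswith("#"):
            continue
        fields = line.split(":")  # name:passwd:UID:GID:gecos:home:shell
        if len(fields) >= 3 and fields[2] == "0" and fields[0] not in allowed:
            suspects.append(fields[0])
    return suspects

sample = ("root:x:0:0:root:/root:/bin/sh\n"
          "daemon:x:1:1::/usr/sbin:/bin/false\n"
          "toor:x:0:0:backdoor:/:/bin/sh\n")
print(find_uid_zero(sample))  # ['toor'] -- the second UID-0 account stands out
```

A real check would read /etc/passwd itself and run from cron, comparing results against a known-good list.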

Intruders are increasingly targeting platforms other than Unix with unauthorized integrity changes to configuration and other files; the result can be catastrophic. The Windows NT Registry, for example, is becoming a more frequent target of attack, because the Registry holds critical information used for authentication and assignment of rights and abilities. In particular, entries within the HKEY_LOCAL_MACHINE hive (which contains a number of critical Registry keys) are more at risk than other Windows NT data. Changes to CONTROL.EXE can disrupt communications baud rates, and changes to SECEVENT.EVT and SECURITY.LOG can result in unauthorized modification to security audit data.

From Broad to Narrow

Unauthorized changes to systems are by no means the only integrity-related security threats. Attacking systems one by one is inefficient; attackers are increasingly focusing their efforts on network infrastructures. Once they have control of a network infrastructure, individual systems within it and traffic passing through it are easy prey.

Network infrastructure attacks often focus on key network components, such as routers, firewalls, bridges and Domain Name Service (DNS) servers. Unauthorized changes in routing rules, for example, can allow attackers to misdirect traffic to disrupt ongoing operations or capture information contained in packets. Attackers target firewalls to change access control lists or modify application proxies to eliminate access restrictions to hosts within the security perimeter enforced by the firewall.

In the most elementary sense, data integrity refers to preservation of the exact sequence of bits for a data file. In the case of ASCII and binary files, integrity is simple to conceptualize, and changes are generally easy to detect. A large portion of data in today's computing systems, however, is not stored in flat files but in relational databases and bit maps with sophisticated formats. Verifying data integrity is, therefore, in many instances a complex and demanding endeavor.
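The bit-sequence view of data integrity can be made concrete with a cryptographic digest: if even one bit of the data changes, the digest changes. This sketch uses SHA-256 from Python's standard hashlib module; the tools of the article's era relied on MD5-style crypto checksums, but the principle is identical. The payloads are hypothetical:

```python
# Sketch: a digest of the exact byte sequence exposes any change to the data.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"quarterly balance: 1024.00"
tampered = b"quarterly balance: 9024.00"  # a single-character change

print(digest(original) == digest(original))  # True: unchanged data matches
print(digest(original) == digest(tampered))  # False: any change is visible
```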

Application integrity ensures that applications are free of unauthorized changes. Although the majority of cases of computer crime reported over the years have not involved changes to specific applications, some cases have involved changes to financial applications that have resulted in major financial fraud or disruption. For example, perpetrators have made very small changes to routines that compute and assign interest to customer accounts. The result over a period of several years was the diversion of large amounts of money to other accounts from which the perpetrators made withdrawals.

Malicious code can also alter application integrity. The Microsoft Word Winword (or Concept) macro virus, for example, is normally sent from one user to another in an attached Microsoft Word file. When a user opens the file, this virus activates and (among its many actions) modifies Word's Save As routine, thereby corrupting this application.

Application integrity often receives the least emphasis of all types of integrity, yet in many respects it is potentially the most disruptive, because applications often span many different platforms. Worse yet, most currently available applications do not have built-in integrity checking capability.
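One way to supply the built-in integrity checking most applications lack is a startup self-check: the application hashes its own file and compares the result against a digest recorded at install time (ideally stored on separate, read-only media). A minimal sketch, with a temporary file standing in for the application; the paths and storage scheme are assumptions for illustration:

```python
# Sketch: an application verifying its own file against an install-time digest.
import hashlib
import os
import tempfile

def file_digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def self_check(app_path, recorded_digest):
    """True if the application file still matches its recorded digest."""
    return file_digest(app_path) == recorded_digest

# Demonstrate with a stand-in "application" file:
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print('hello')\n")
    app = f.name

baseline = file_digest(app)          # recorded at install time
ok_before = self_check(app, baseline)

with open(app, "a") as f:            # an unauthorized modification
    f.write("# trojan payload\n")
ok_after = self_check(app, baseline)

os.unlink(app)
print(ok_before, ok_after)  # True False
```

Storing the baseline digest inside the same writable file it protects would defeat the purpose; hence the suggestion of read-only media.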

Interface Integrity

A broad picture of the problem of integrity in information security also dictates examining how information about the origin of communications, processes and other elements is preserved without unauthorized change. Interface integrity refers to the problem of origin integrity in systems and networks. One example is user interface integrity: is the user who is trying to log in to a system or use network services really the user he or she claims to be? Host interface identity is another facet of interface integrity. In IP spoofing, for example, a perpetrator fabricates packets that appear to originate from a certain host but which have originated instead from an entirely different host.

In many respects interface integrity is the most fundamental type of information systems integrity. Consider, for example, the importance of user authentication as a security control for access to systems. Users must establish their identity before being allowed access to a system. Authentication controls are thus in many respects a special type of integrity control, and audit capabilities can also be considered a type of integrity checking tool in that they enable system and security administrators to verify whether each user login is legitimate.

Establishing and maintaining interface integrity, especially in network environments, is often difficult. Changing host identity information so that a host assumes the network address of another host is trivial in systems such as PCs and Macintoshes. Other platforms, such as Unix, usually are configured to allow general access to critical host identity and service definition files, such as the ifconfig, inetd, services and other files that for the most part should be available only to root users.
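Loose permissions on host identity and service definition files are easy to scan for. Below is a minimal sketch that flags world-writable files; it uses temporary stand-ins rather than real system paths, since the exact file list to watch is site-specific and assumed here:

```python
# Sketch: report files that any user can write to -- the kind of loose
# permissions on service definition files the text warns about.
import os
import stat
import tempfile

def world_writable(paths):
    """Return the subset of paths whose mode includes the world-write bit."""
    loose = []
    for p in paths:
        try:
            mode = os.stat(p).st_mode
        except FileNotFoundError:
            continue  # skip paths that do not exist on this host
        if mode & stat.S_IWOTH:
            loose.append(p)
    return loose

# Demonstrate with temporary stand-ins for files like /etc/services:
d = tempfile.mkdtemp()
safe = os.path.join(d, "services")
risky = os.path.join(d, "inetd.conf")
open(safe, "w").close()
open(risky, "w").close()
os.chmod(safe, 0o644)   # owner-writable only
os.chmod(risky, 0o666)  # world-writable: should be flagged
found = world_writable([safe, risky, os.path.join(d, "missing")])
print(found == [risky])  # True
```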

A major problem in establishing and maintaining interface integrity in networks is the Internet Protocol (IP) itself. IP services generally require little more information than host identity for authentication, yet masquerading as another host is relatively easy. The new generation of IP--IPv6--provides the capability to establish the integrity of host identity and other information contained in packets (thereby making attacks such as IP spoofing extremely difficult to accomplish); its imminent usage promises to improve interface integrity considerably.

Corporate Concerns

Awareness of integrity-related threats and the priorities assigned to dealing with integrity problems vary greatly among corporations. Most of industry appears to be more concerned with the threat of denial of service than loss of integrity, but more concerned with the threat of loss of integrity than loss of confidentiality. The strong concern about denial-of-service attacks may result from downtime incidents in which major financial loss occurred because of the unavailability of critical systems such as billing systems, or because the media has sensationalized alleged extortion plots against banks based on the threat of denial of service.

Denial-of-service attacks can be devastating, but industry should focus more attention on the threat of loss of integrity than it currently does. Too often managers and business contingency planning experts overlook the cost of operating corrupted systems, running applications that have been modified without authorization and processing bad data. Also not to be overlooked when considering the threat of loss of integrity is the cost of restoring applications and data to the last known "state of goodness," as well as the potential for lawsuits, violation of law, loss of customers, jeopardy to human life and damage to reputation.

One particular sector within the commercial computing world--banking and securities--pays more attention to integrity-related threats and controls them better than most of the rest of industry, for several reasons. As stated previously, small integrity changes in financial transaction systems can result in major financial loss in addition to other catastrophic consequences. Laws and regulatory agencies also have motivated the banking and securities sector to adopt and observe better security practices. As a whole, this sector attempts to control integrity-related threats (and others) by establishing strong interface integrity, especially with respect to user and host identity. For this purpose, Kerberos and the Distributed Computing Environment (DCE) have been integrated into many banking and securities computing environments. This segment of industry also typically establishes a strong audit and oversight function over systems, applications and user access patterns. Other measures, such as mandatory system integrity checks by system administrators and user policies that lessen the likelihood of integrity compromise by users, often are parts of a complete approach to the problem of maintaining integrity.

Some corporations in other arenas do as well in approaching the problem of maintaining integrity as the banking and securities sector. More common, however, is an approach in which integrity in certain applications and systems is tightly controlled but in most other applications and systems is neglected. On the surface, this approach appears sound; the principle of business justification dictates that in business environments the cost of controls should not exceed the value of the assets to be protected. This principle is still appropriate for stand-alone environments, but it is questionable in most of today's networked environments, in which weak security links are likely to lead to the compromise of an entire network and all hosts therein. For example, one sniffer installed on one host on a network segment in which integrity needs are ignored is likely to compromise other hosts within that segment. All applications, systems and network components need at a minimum a baseline of integrity controls that make compromise of any one element difficult, so no element is the weak link in an expanding series of successful break-ins and unauthorized uses of services.

Maintaining Integrity

The major problem with controlling the threat of loss of integrity is that integrity is potentially much more transitory than most other information security needs and considerations (such as observation or possession of data). Any file on any system can, for example, be changed momentarily, then restored a moment later. Most integrity tools are not capable of detecting this type of change; most users and system administrators are not likely to notice, either. Worse yet, most reasonably effective and affordable tools examine system integrity only at a particular slice in time, then another, then another, to determine whether any changes have occurred between one point and another. Perpetrators can gauge the timing of attacks according to the integrity checking cycle.

Consider, for example, how Tripwire, one of the best tools for checking system integrity (available from the Computer Operations, Audit and Security Technology [COAST] Laboratory at Purdue University), can be used. Suppose that a system administrator runs Tripwire (which compares files with a known, previous state) on a system every Thursday afternoon. Although using Tripwire every week is a sound security practice, a perpetrator intent on compromising system integrity would be smart to change critical files on Thursday night, gain unauthorized access and capture critical information at will for six-and-one-half days, then restore the files again at noon the next Thursday. The report generated by Tripwire on Thursday afternoon would, in this instance, indicate that all is in order, even though the perpetrator may have captured hundreds of passwords by installing a Trojan horse version of the login program and may also have temporarily added SUID-to-root scripts to allow backdoor access.
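The baseline-comparison idea behind Tripwire can be sketched in a few lines: record a digest for each watched file, then later report any file whose digest differs. This is not Tripwire's actual implementation, only an illustration of the approach and of the blind spot described above, where a change made and then undone between two runs goes unreported:

```python
# Sketch of point-in-time baseline comparison, with a stand-in "login" file.
import hashlib
import tempfile
from pathlib import Path

def snapshot(paths):
    """Record a digest for each watched file."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed_since(baseline, paths):
    """Return the files whose current digest no longer matches the baseline."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline[p]]

login = Path(tempfile.mkdtemp()) / "login"
login.write_text("genuine binary\n")
base = snapshot([login])                 # Thursday afternoon's run
login.write_text("trojaned login\n")     # attacker modifies the file...
mid = changed_since(base, [login])       # ...a mid-week check would flag it
login.write_text("genuine binary\n")     # ...but it is restored in time
final = changed_since(base, [login])     # next Thursday: nothing to report
print(len(mid), len(final))  # 1 0
```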

Although tools such as Tripwire are nevertheless quite useful, tools that detect transitory changes between fixed points in time would be more effective in detecting unauthorized changes in integrity. Because functionality such as this is not yet available in tools, integrity checking is often cumbersome to perform and manage.

In reality, many system administrators do too little to check the integrity of their systems, applications and data, and network administrators often neglect the integrity of key network components such as routers and firewalls. In most cases, companies that seek help investigating a security-related incident have few if any procedures and requirements for integrity checking in place. Regular integrity checking activities and the ability to repel or at least quickly detect and eradicate security-related incidents are closely related. Of course, system and network administrators often face the overwhelming job of having too much to administer in too little time. Still, commitment to perform some manageable level of basic, systematic integrity checking is an integral part of sound system and network administration practices.

Steps Toward Solutions

What then is a good approach for dealing with the problem of maintaining integrity? The first and most basic step is to create an information security policy that prescribes regular integrity checking and delineates appropriate responsibilities, either by developing a new one or amending an existing one. Creating technology-specific security practices that provide detailed requirements and procedures for maintaining integrity in specific platforms is the next logical move in the quest to maintain integrity.

Maintaining integrity requires appropriate technical knowledge and tools. One of the simplest ways to check for data integrity is to visually inspect a system for obvious signs of change, such as an unexplained last time of modification for files or the presence of a new, unfamiliar program in a temporary directory. In many incidents, casual observation of a small change to a file has been the first step in detecting a massive set of unauthorized integrity changes. Visual inspection is, however, too superficial to be used as the only approach to integrity maintenance. It also tends to be tedious and excessively time-consuming. Nevertheless, spending some time to display files and obtain listings to look for unexplained changes to files and systems is worthwhile.
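The modification-time inspection described above can be partially automated: list the files in a directory whose last modification falls within a recent window, so an administrator reviews only the recent changes. The directory, file names and 24-hour threshold below are illustrative assumptions:

```python
# Sketch: list files modified within the last N hours -- candidates for
# the "unexplained last time of modification" check done by hand above.
import os
import tempfile
import time

def recently_modified(directory, hours=24):
    """Return names of regular files in directory modified within the window."""
    cutoff = time.time() - hours * 3600
    hits = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) >= cutoff:
            hits.append(name)
    return hits

# Demonstrate with two stand-in files, one artificially aged a week:
d = tempfile.mkdtemp()
for name in ("fresh.conf", "stale.conf"):
    open(os.path.join(d, name), "w").close()
week_ago = time.time() - 7 * 86400
os.utime(os.path.join(d, "stale.conf"), (week_ago, week_ago))
print(recently_modified(d))  # ['fresh.conf']
```

Note the caveat from the text still applies: an attacker who restores a file's timestamp defeats this check, which is why digests are the stronger test.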

A better approach is to use the diff command in Unix systems to compare the current version of a program with the original installation version, or to run a checksum program such as the Unix sum program, comparing the current checksums with those recorded the last time the program was run. One problem with simple checksums, however, is that a clever perpetrator can make changes to a file that produce the same checksum as before. Although not perfect, crypto checksum programs are superior to checksum programs in that they can detect subtle changes to files that checksum programs can miss.
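The weakness of simple checksums is easy to demonstrate. The additive checksum below is a toy stand-in (the real Unix sum uses a different algorithm): two payloads whose bytes are merely rearranged produce the same sum, while a cryptographic digest (here SHA-256, a modern stand-in for the crypto checksums of the period) tells them apart:

```python
# Sketch: a byte-rearrangement collides under a toy additive checksum
# but not under a cryptographic digest.
import hashlib

def additive_checksum(data: bytes) -> int:
    """Toy checksum: sum of all byte values, modulo 2^16."""
    return sum(data) % 65536

genuine  = b"transfer 1000 from A to B"
tampered = b"transfer 0001 from A to B"  # digits rearranged: same byte total

same_sum = additive_checksum(genuine) == additive_checksum(tampered)
same_digest = (hashlib.sha256(genuine).hexdigest()
               == hashlib.sha256(tampered).hexdigest())
print(same_sum, same_digest)  # True False: only the digest catches the change
```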

Public domain integrity checking tools that are reasonably effective are unfortunately scarce outside of the Unix arena; commercial data integrity checking tools (including tools that detect changes to firewalls and other specialized host machines) are in this case the only practical choice. Although intrusion detection tools are not normally viewed as integrity checking tools, viewing interface integrity as a legitimate type of integrity places these tools in this category. Furthermore, many of these tools examine suspicious changes in systems as possible indicators of an intrusion, which enhances their value to the integrity checking effort.

Integrity is more complex and diverse than might superficially be envisioned. As noted, it encompasses at least seven categories, ranging from hardware integrity to interface integrity. It is also in many respects the most fundamental of all security needs, in that loss of integrity often renders efforts to meet other needs moot.

It should go without saying that other security needs are important, too. An effort to establish and maintain only integrity at the expense of protecting against unauthorized possession or denial of service is likely to lead to catastrophe in today's computing environments. The right approach balances implementation of integrity controls with controls designed to address other security needs. Finding the right balance depends on business needs; some environments, such as financial computing environments, require a high degree of integrity, whereas others require less.

Establishing an integrity baseline throughout a corporate network is a key principle. Establishing an appropriate policy and a set of technical practices, as well as obtaining a suitable set of integrity-checking tools, are also essential. Finally, application integrity is too often neglected. To achieve more acceptable levels of application integrity, companies need to build integrity controls (including self-checks) directly into applications. Only through a rigorously planned and maintained program can an enterprise reasonably assume that it is doing enough to protect the integrity of its IT assets.

E. Eugene Schultz, Ph.D., is program manager for information security at SRI Consulting in Menlo Park, CA. He can be reached at gene_schultz@qm.sri.com.
