
Although one can understand Block’s choice to organize each chapter by thematic element, I found the subdivisions almost spurious on occasion, both within and between chapters. At times there is little evidence or reason as to why a particular quotation is covered in one area rather than another. There are multiple instances of repetition that lead one to wonder if a quotation is in the wrong place. The result is a somewhat frustrating reading experience, but perhaps the editor did not intend her audience to read the book straight through as I did. With the jumble of quotes and dates, it is almost impossible to see the growth of the industry and profession through Quint’s eyes. I wonder if Quint would have been better served by a layout that arranged her quotations chronologically within each theme. Such an arrangement would inform the reader more fully and perhaps give additional insight into the validity of Quint’s rants and raves. Without chronological organization, it is easy to miss Quint’s stunning ability to predict (and perhaps shape) the future. This is most unfortunate.

After finishing the book, I was convinced that Quint is a dominant force in the world of online searching and in librarianship as a whole, but I was left unsatisfied. The volume would have been enhanced considerably by a selected bibliography of Quint’s publications or at least a listing of full citations for the articles from which the excerpts were taken. Perhaps a “best of” volume would have been more effective in both putting forth Quint’s ideas in a more logical fashion and allowing the reader to “enjoy getting to know bq” in a more contextualized manner (p. 6).

That said, however, this little book does provide insights into Barbara Quint as a professional searcher and as a person. If one is looking for savvy quotations for presentations, papers, or just for a chuckle, this is the book. If one really wants to understand the issues Quint is addressing, turn to the articles she has written.

Note
1. For more information on Marylaine Block, see her Website at http://marylaine.com (12 May 2002).

Evaluating Networked Information Sources: Techniques, Policy, and Issues
edited by Charles R. McClure and John Carlo Bertot. Medford, NJ: Information Today, Inc., 2002 (ASIST Monograph Series). 344 p. $44.50. ISBN 1-57387-118-4

James T. Deffenbaugh

Large organizations in all sectors of our society, especially business, government, and education, have spent and continue to spend millions of dollars on the technology for, and operation of, a variety of data and information networks. There is general concern and interest in finding sound ways to evaluate whether such networks are doing what they are intended to do and whether they are worth the great amounts of resources being expended on them. It makes good sense, in other words, that the evaluation of information service networks should be a hot topic.

The thirteen essays in this collection represent talks originally given at the May 1999 meeting of the American Society for Information Science (ASIS) in Pasadena, CA. After the meeting, the conference cochairs invited those who attended to submit chapters based on presentations and/or panels from the conference, which was entitled Evaluating and Using Networked Information Resources and Services. This book is the result of that series of efforts.

The editors note in their excellent introduction that the field of information science is still struggling for greater clarity on questions very basic to the evaluation of information networks. That was a surprise to this reviewer; indeed, it seems difficult to exaggerate how basic the levels of clarification still being sought are. The questions include: What is a network? What is networking? Are there parts of a network that can be identified and studied separately? Can the interaction between the various aspects of a network actually be studied? What is the best way to study a network? And so on (p. xiv). The editors’ intent was that the wide diversity of approaches and/or semianswers to these and related questions would be represented in this collection, as they were at the conference.

In that sense, this collection probably cannot serve as a how-to book for, say, a committee of library staff who want to evaluate which service provider gives the best patron access to business journal articles, or even for a county library board that wants to determine whether its network really gives all its branches the desired level of connection to the main library. Instead, most of these chapters tilt more toward research or theoretical foundations than toward daily application. That may well be because these chapters had their origins in a meeting of an eminent professional research society (the American Society for Information Science). Some of these essays seem postdoctoral in their tenor and timbre, full of jargon and multilevel charts in small print.

When the subject is evaluation, however, something beyond theory and fundamental research is connoted. Evaluation is an activity and a process: you want to know how some system or arrangement is performing. Toward the end of the book, this reviewer finally began to understand that the subject at hand straddles the realms of both the theoretical and the practical. Straddling on whatever level (e.g., physical, emotional, intellectual) tends to be less than comfortable because it is less than fully clear. However, when practical considerations share the arena with theoretical considerations still undecided at such basic levels (note the questions cited above), the realm of the less than fully clear becomes a foggy maze. I was mired in such a maze most of the way through the book. Fortunately, as I will detail below, clarity came with the reading of the final essay; yet, until the end, the experience of working one’s way through this book somehow did not match well with the concept “hot topic.” To their credit, the editors attempt to add to the clarity of the book’s structure by dividing the chapters into five thematic sections, each representing a different overall perspective on network evaluation: frameworks, methodology, usability, policy, and future directions.


The frameworks section includes a chapter by Geoffrey Ford, “Theory and Practice in the Networked Environment: European Perspective,” and three chapters that could conceivably fit into the broad category of case study: “Evaluating Children’s Resources and Services in a Networked Environment” by Dresang and Gross; “Scenarios in the Design and Evaluation of Networked Information Services: An Example from Community Health” by Ann Peterson Bishop, Bharat Mehra, Imani Bazzell, and Cynthia Smith; and “Assessing the Provision of Networked Services: ERIC as an Example” by David Lankes. In this reviewer’s opinion, Dresang and Gross gave the best presentation within this group. They clearly make the best case for relevance and nicely demonstrate some important general issues in the evaluation of networked information sources within a particular context. The most helpful aspect of the children’s resources chapter was its illuminating statement of definitions, assumptions, and kinds of evaluation.

The second section, on methodology, comprises three chapters: William Moen’s “Assessing Interoperability in the Networked Environment: Standards, Evaluation, and Testbeds in the Context of Z39.50”; “Choosing Measures to Evaluate Networked Information Resources and Services: Selected Issues” by Joe Ryan, Charles R. McClure, and John Carlo Bertot; and Jonathan Lazar and Jennifer Pierce’s “Using Electronic Surveys to Evaluate Networked Resources: From Idea to Implementation.” The most enlightening essay in this section was the chapter on selected issues in choosing ways to evaluate information networks by Ryan et al. The material in this chapter represents but one component of a recent study entitled Developing National Public Library and Statewide Network Electronic Performance Measures and Statistics, an effort sponsored by the United States Institute of Museum and Library Services and the state libraries of Delaware, Maryland, Michigan, North Carolina, Pennsylvania, and Utah. Although the title says the essay treats “selected issues,” I found the following breadth of issues impressive and, at the same time, intimidating:

- variety of data collection methods (p. 113)
- influences on the evaluation process and means (p. 114)
- elements of an evaluation planning strategy (p. 115)
- how to identify issues of purpose, data collection, and data analysis and use in the evaluation (p. 116)
- different possibilities in the intended effect of evaluation (p. 118)
- different elements to measure in an evaluation (p. 119)
- different data collection methodologies and why to select one rather than another (p. 125)
- estimating the costs/benefits of evaluation (p. 131)

Although succinct, the chapter’s analysis of all these areas is so thorough as to leave one with the impression that evaluation of networked information services might as well be a lost cause because no one has the time, staff, or money for proper evaluations. Perhaps this is an illogical reaction. The chapter does, after all, come from a huge national study that notes whatever possibilities there might be in the evaluation of information networks instead of telling each facility how its network should be evaluated. However, it does hint, without much elaboration, at a way to bridge the chasm between the wide scheme of possibilities and application to concrete particulars. Accordingly, the authors advise us not to try to evaluate something just because we can (p. 166); they instruct us to “recognize when data are ‘good enough’” (p. 115) and to note that limitations of resources must be accepted as these evaluation projects are planned (p. 114). I fear we are overwhelmed nonetheless. It’s the straddling problem again, research and analysis on the one hand and practical application on the other, and this essay does not explicitly recognize it, deal with it, or try to resolve it, even though the problem is arguably inherent.

The third section of the collection deals with the perspectives of usability and users in network evaluation. The chapters here are “User-Centered Evaluation and Its Connection to Design” by Carol A. Hert, “Digital Reference Services in Public and Academic Libraries” by Joseph Janes, and “Introduction to Log Analysis Techniques: Methods for Evaluating Networked Services” by Jeffrey H. Rubin. Hert’s essay on user-centered evaluation of networked information systems and their function is probably the best of these because she attempts to present the variety of possibilities that can be employed, a continuum that runs from merely studying user behavior to involving the user in defining what the evaluation should seek to do and to measure.

    In an evaluation context at the lower boundary of the user-centered approach (where user behavior is studied, but the researcher is still the expert), we would expect to see an evaluation that employs metrics that capture aspects of that behavior of the system. The evaluator would still define the system/service to be evaluated, the evaluation context, what dimensions of the user experience are to be evaluated, and the data collection and analysis approaches and metrics. At the other end of the continuum . . . the stakeholders jointly define the phenomenon to be evaluated (with the potential to not examine the system at all) and often the approaches to be used in the evaluation. (p. 165)

Depending on which end of the continuum the evaluation falls toward, methods of data collection can range from analysis of transaction logs; through e-mail messages and comments, focus groups, and interviews of users; to observational strategies such as recording users’ responses as they think aloud while performing actions on the system. In order to provide insight into which factors within a system best provide user satisfaction, it may be advisable to combine different segments along this continuum in the evaluation process. Hert’s presentation is comprehensive, although quite technical. She attempts to emphasize the importance of finding ways to include user input in information network evaluation design.
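To make the lower boundary of this continuum concrete, here is a minimal sketch of pure behavior-study evaluation via transaction logs. It is this reviewer’s own illustration, not drawn from the book; the tab-separated log format, the field names, and the “query” action label are all assumptions made for the example.

# Minimal sketch of behavior-only, log-based evaluation (hypothetical log format).
# Each record is assumed to be: timestamp <TAB> session_id <TAB> action <TAB> detail
from collections import Counter, defaultdict

def summarize_log(lines):
    """Compute simple usage metrics of the kind a log-based evaluation might report."""
    queries_per_session = defaultdict(int)
    action_counts = Counter()
    for line in lines:
        parts = line.rstrip("\n").split("\t")
        if len(parts) != 4:
            continue  # skip malformed records rather than guessing at them
        _timestamp, session_id, action, _detail = parts
        action_counts[action] += 1
        if action == "query":
            queries_per_session[session_id] += 1
    sessions = len(queries_per_session)
    total_queries = sum(queries_per_session.values())
    return {
        "sessions": sessions,
        "total_queries": total_queries,
        "mean_queries_per_session": total_queries / sessions if sessions else 0.0,
        "action_counts": dict(action_counts),
    }

sample = [
    "2002-05-12T10:04:31\ts42\tquery\tdigital libraries",
    "2002-05-12T10:05:02\ts42\tview\trecord 1187",
    "2002-05-12T10:09:45\ts77\tquery\tZ39.50 interoperability",
]
print(summarize_log(sample))

Note that nothing here asks the user anything; everything is inferred from recorded behavior, which is exactly why this kind of study sits at the researcher-as-expert end of Hert’s scale.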

The chapter by Janes is simply a survey of how many American public libraries have services that allow patrons to ask reference questions electronically and what the characteristics of these services are. It simply does not treat the subject of evaluation. Likewise, the Rubin article, although it is a very good survey of information network log analysis techniques, seems to provide no specific tie-in to evaluation at all despite its subtitle, “Methods for Evaluating Networked Services.” Thus, one wonders why both are included in this collection.

The fourth section of the collection brings an information policy perspective to the assessment of information networks and their operation. In “Policy Analysis and Networked Information: ‘There Are Eight Million Stories . . . ,’” Philip Doty provides a stunning summary of the history and development of policy analysis as a discipline. He emphasizes the interplay of social values and facts in the development of social and other policies, and sees group and individual narratives as crucial in determining what those values are; however, any connections Doty makes between policy science or policy evaluation and the evaluation of networked information services seem haphazard and skimpy. One suspects that any references to information network evaluation may have been added as a gesture to acknowledge the stated topic of the conference at which the original presentation was given. In this section’s other chapter, “Using U.S. Information Policies to Evaluate Federal Web Sites,” Charles R. McClure and J. Timothy Sprehe attempt the formidable task of listing and describing the many and varied federal information policies. Their tie-in to the evaluation of networked information systems is simple: do government networks follow the information policies that the various branches of the federal government have put in place?

The last segment of this collection, future directions, features only one chapter, an essay that I consider the gem of this very mixed collection: Clifford Lynch’s “Measurement and Evaluation in the Networked Information World.” Lynch is highly regarded in both the information science and library worlds. He was director of Library Automation at the University of California for ten years and was instrumental in the development of MELVYL, the university’s systemwide public access catalog. He has been the director of the Coalition for Networked Information (CNI) since 1997. CNI includes about 200 organizations with an interest in the use of information technology and networks to promote scholarship and productivity.

Early in his essay, Lynch solved my straddling problem. He has found much of the evaluation in the field of information science, and especially in information retrieval, disappointing because there are “limitations on our ability to even measure, let alone evaluate, many of the things we intuitively believe in” (p. 294). Furthermore, evaluation is very costly in both staff time and money. Lynch says we need to think critically about whether evaluation is even always needed. “We must not rush to evaluation,” he states (p. 295). “We may learn more from careful analysis of well established, mature production systems” than from formal evaluations (p. 295).

Lynch believes that there are two elements in the information technology environment now that make traditional evaluations less applicable. The first is the problem of volume or scale. “For almost all traditional user studies or traditional evaluations of query processing algorithms that have been the heart of so much information science research over the past few decades, the data management and analysis are trivial compared with system studies. Analysis of system measurements is a serious computational science because of the scale factor” (pp. 299–300). In Lynch’s view, “large-scale computational measurement, analysis, and evaluation of systems are the future” (p. 300). The second element is that the rate of technological change is so fast that a system being evaluated is often approaching obsolescence by the time a formal evaluation can be planned, executed, and reported on.

So what are we to do in the day-to-day world of nearby and smaller information networks in our work, academic, or public library contexts when we need some reading or measurement of performance? Lynch advocates “quicker, less formal, and more expedient ways of getting the same information quickly and cheaply, e.g., focus groups, talking to users, examining transaction logs” (p. 300). Even if these methods are not backed by scientific verification when used on a small scale, they are affordable and offer some indication of strengths or weaknesses in the system and its use. Lynch believes that they are probably as reliable as the more formal, costly evaluations, which themselves still need more research by information scientists working with experts in other disciplines (p. 294). Lynch settles my straddling problem quite nicely.

Lynch then notes some areas that need more attention, of which a few examples merit mention. Much of the data being collected is held by private corporations and not shared with the research community at large; perhaps public policy changes are needed to extricate information needed for societal well-being or advancement. In another example, he says that the effect of metadata on cataloging and information retrieval needs further examination and testing. In a third, he would like to see information scientists, especially in the field of information retrieval, gain greater knowledge about what the proper object of measurement is in the use of networked information systems: for example, whether the success of database searching should be evaluated on individual queries or on entire sessions of querying (he thinks the latter might give a better idea of usability).
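Lynch’s query-versus-session distinction is easy to make concrete. The following small sketch is this reviewer’s own illustration, not from the essay; the labeled log is invented, and the definition of a “successful” session (at least one successful query) is an assumption.

# Hypothetical labeled search log: (session_id, query_succeeded).
# A session counts as successful if at least one of its queries succeeded (assumed rule).
from collections import defaultdict

log = [
    ("s1", False), ("s1", False), ("s1", True),  # user reformulates until it works
    ("s2", False), ("s2", False),                # user gives up
    ("s3", True),
]

by_session = defaultdict(list)
for session_id, succeeded in log:
    by_session[session_id].append(succeeded)

# Query-level: fraction of individual queries that succeeded.
query_success = sum(ok for _, ok in log) / len(log)
# Session-level: fraction of sessions containing at least one successful query.
session_success = sum(any(oks) for oks in by_session.values()) / len(by_session)

print(f"query-level success:   {query_success:.2f}")    # 0.33
print(f"session-level success: {session_success:.2f}")  # 0.67

On these invented numbers the two measures tell opposite stories about the same log, which is precisely why the choice of measurement object matters to claims about usability.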

Lynch concludes with a discussion of three “grand challenges” in the evaluation of networked information systems that he says probably need order-of-magnitude changes in computational technology as well as major changes in conceptual framework: evaluating intellectual property in the digital environment, evaluating libraries, and evaluating information technology in higher education (pp. 314–321).

In summary, Evaluating Networked Information Sources has the pitfalls inherent in essay collections: the unevenness of the contributions and the possible inappropriateness of some for inclusion under this topic. The book includes a serviceable index.

 

 
