Technology, Scholarship, and the Humanities:

The Implications of Electronic Information

The Intellectual Implications of Electronic Information

   Oleg Grabar

   School of Historical Studies
   Institute for Advanced Study


This paper argues, or proposes for discussion, the following points:

Thus far, scholarship (as distinguished from learning) in the humanities is neither hampered nor helped by the availability and apparent possibilities of electronic information. Access to the sources, secondary and eventually primary, essential to accomplish that scholarship is being revolutionized by electronic information. While theoretically and potentially positive, this revolution exhibits a number of problems reviewed in this essay.

The educational opportunities of electronic information are easier to acknowledge, and to imagine for the future, than the scholarly ones. The computer increases not only the confusion between "facts" and interpretation, but also the possibility of unexpected questions and results. This confusion could result in increased analytic rigor that would make the humanities examine and explain their intellectual bases.

Since all existing information can never be processed, there are ethical and in a sense political or ideological problems in how the information to be made available is selected.

Oleg Grabar, 1993

Scholars in the humanities did not dream of computers or computer-like instruments, nor did they invent them. They rarely, if ever, contributed to their design or to the design of anything dealing with the mechanics of electronic information. And most of them were--many still are--appalled by the arrogant illiteracy of computer manuals, by the transformation of so many familiar and friendly words (icon, image, menu, window, document, etc.) into frightening expressions, and by the seemingly wasteful irrelevance of so many new gestures like fiddling with a mouse or constantly "returning," of new designs for working spaces, of the invasion of the sacred and private study by screens and keyboards more readily associated with public airline ticket offices or with anonymous banks.

Some of these negative reactions have been overcome, as the word processor's superiority over the most elaborate typewriter is recognized by most writers of learned articles or by most compilers of reading lists and bibliographies. Word processors, it is true, are a bit expensive and, perhaps because no one user's manual seems able to explain clearly what various keys can do, the feeling remains that heavy artillery is used to kill a fly, especially because no one really wants to do much that can be done with word processors.[1] But these doubts and annoyances do not really affect the acceptance of an instrument whose flexibility and versatility were enhanced by academic administrations which distributed them with so many discounts that any alert citizen of universities should have sensed something suspicious lurking behind the ivy.

Several parallel activities were in fact taking place. The most immediately significant one for daily life was the transformation of library catalogues into video machines. The old gesture of flipping cards, which made one feel in professional partnership with centuries of humanistic knowledge, disappeared as we were all transfigured into simple-minded operators of spaceships. We were made to relinquish past associations and to join a new crowd in which our students and our children were faster and more efficient than we were. The revolution in access to the holdings of collections is final (even if not complete as yet), and I will return to some of its implications.

There were also meetings with all sorts of intense young men and women brought around by one's more progressive colleagues to find out how computers could help us do whatever we were doing. It is not without amused sadness that I recall these meetings, for so many of us thought in messianic terms about the sudden availability of (for example) all works of Islamic architecture on a disk programmed so as to provide answers to all of one's questions today and in the future and with pictures on the screen as an added benefit; about all museum catalogues combined into one gigantic catalogue of all works of art available to all; about the Index of Christian Art being usable without the need to travel to Princeton or Washington; about information on legal documents, dates according to many calendars, and on biblical references accessible at the push of a few buttons in a correct sequence; and so on.

There was something desperate about these meetings (at least most of the ones I attended) and about the dreams that accompanied them, for two reasons. One was the vain and embarrassing feeling that the stature of our work would be enhanced by the use of all these new techniques that adorned the laboratories of our colleagues in the sciences and the NASA centers in Houston; finally, we thought, we would be recognized as the scientists we meant to be. The other and altogether far more important reason was that the new ways seemed to resolve one set of the humanist's traditional problems, especially in the competitive "scientistic" atmosphere of the decades after World War II.[2] The problem was the rapidly waning control over too many languages compounded with too many publications that were supposed to be read (not just listed), the endless surveys of sources without purpose, and repeated pronouncements on the "state of the art" of anything. These and other paraphernalia of scholarship appeared everywhere thanks to the multiplication of centers for learning which acquired visibility by showing off the work of others, to easy funds for travel to successions of learned meetings, and to the expansion of knowledge beyond its earlier Eurocentric confines. Some humanists, of course, did not notice these changes; to them the old adage that orientalia (or slavica) non legenda sunt remained in effect. Others, especially the ones with open and generous minds, had to find ways of incorporating all these novelties within their own work. If only a machine would free the scholar from learning Hungarian or Mongolian and tell him what Korean scholarship on the Italian Renaissance is up to!

Pipe--or real--dreams were one thing. There were also achievements, as disks and programs made their way into scholarly confines, some furtively and secretly like a disk that converts Hijrah dates into Common Era ones, others playfully like an atlas that provides the national anthem of every sovereign country, others yet as major scholarly achievements or projects expected to become such achievements (in particular collections of texts with sophisticated indices).
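The date-conversion disk mentioned above rests on well-known calendar arithmetic. As a hedged illustration only (this is a year-level approximation, not the day-by-day tabular algorithm any real conversion program would implement), the Hijrah-to-Common-Era conversion can be sketched as:

```python
def hijri_to_ce_year(hijri_year):
    """Approximate the Common Era year for a given Hijrah year.

    The Islamic calendar is purely lunar (about 354.37 days per year),
    so one Hijrah year is roughly 0.970224 Gregorian years, and the
    epoch (AH 1) begins in 622 CE.  The result is therefore only an
    estimate good to within about a year; an exact conversion must
    work from day counts, not whole years.
    """
    return int(hijri_year * 0.970224 + 621.5774)

# AH 1414 fell (mostly) within 1993 CE, the year of this essay.
```

A full conversion, such as the disk presumably performed, would translate complete dates through a common day count (e.g., Julian Day Numbers), but the approximation above captures the arithmetic that makes such a tool trivial for a machine and tedious for a scholar.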

Professional life today is deluged with bytes, CD-ROMs, and mainframes. One can run to them as one goes to fashion shows, praise them all as wonderful even when they are not up to snuff; yet, in the aggregate, they provide a very definite vision of the scholar a generation from now. His or her desk would go around the room and would hold equipment that could offer simultaneously all of the following (all my examples are based on programs I know to exist or to be fairly far along in planning, especially in my own, particularly underdeveloped, field of the history of Islamic art): a personal word processor containing bibliographies, draft or completed studies, and dozens of special lists adjusted to one's scholarly and personal concerns; dictionaries of a dozen languages; immediate access to a complete index of all monuments of Islamic (or other) art or relevant texts in one's field; a thesaurus of texts in Arabic characters (let's say) indexed like the Thesaurus Linguae Graecae; access to the catalogue not only of the nearest scholarly library but of hundreds of other research libraries; access to hundreds of bibliographic databases through commercial sources such as DIALOG; a computer-assisted design (CAD) program for architectural and urban investigations; some printers; one screen available for whatever visitors bring; a machine to reconcile disks from different sources; a modem-equipped telephone; and a fax machine. The cost of this equipment may restrict its appearance to institutional offices and, therefore, require a certain amount of sharing with others, as in most scientific labs. At home, a lonely modem-equipped telephone would then keep company with a portable computer used in the evening for war games.

The point of this exercise in simple-minded and currently expensive fantasy is that the changes, novelties, and possibilities introduced into the surroundings of the scholar in the humanities by electronic facilities affect much more than one's scholarly output; they end up by shaping one's life, and, thus, probably, modify the very nature of one's actual or potential knowledge. Is this a good thing? Is it immaterial? Have we opened a Pandora's box of destructive and harmful materials? How is one to deal with this invasion by technical information and by machineries that in themselves have nothing to do with the humanities? Is it possible that this new technology is not an unfortunate invasion from other worlds but a creative novelty revealing the very structures of humanistic knowledge and scholarship? Should one stop bemoaning a lost past and assume that new and possibly different ways of seeing and understanding our fields are about to emerge from the availability of new electronic techniques?

For purposes of discussion, I shall offer comments and observations under four categories: scholarship in the humanities; sources as resources; new horizons; and the ways of practical existence in an academic life of humanistic thought.

Scholarship in the Humanities

I would like to propose four assumptions underlying scholarship in the humanities.

First, there is the distinction between scholarship and learning, between a savant and an érudit. All scholars are learned in the sense that they possess large amounts of knowledge in many fields and subfields and control several means of access (e.g., primarily languages in the humanities, but also more specialized techniques like paleography, numismatics, diplomatics, metrics, drafting) to further knowledge. The most common vanity among humanists is to be able to provide information, factual or bibliographic, in foreign languages if possible, straight out of one's head. But, if all scholars are learned, not all learned people are scholars. A French historian was once defined by a colleague with these scathing words: "Il sait tout, mais il ne sait que ça" ("He knows everything, but that is all he knows"). In a possibly more positive way, the story is told of an English classicist being asked by a physicist colleague, "And what is new in your field?" and responding, "Nothing, I hope." In short, there is an expectation or a fear that knowledge must be supplemented with something else.

This "something else" which makes a scholar out of a learned man is the ability, shown regularly or only occasionally, to modify the character or the quality of whatever one knows, to affect its understanding. Much discussion can be devoted to defining these modifications. There is quality of expression, as the ability to provide pleasure with information can be a justification for reading opinionated and even inaccurate statements, as often happens to those who read Michelet, Focillon, or Gibbon. There is sophistication in judgment, as Gombrich or Riegl always enlighten, whatever their topic, even if they are not fully informed about it. In short, next to the establishment of texts, images, or facts, which is a technically important activity of humanistic scholarship, there lie the evaluation of information and the expression of that evaluation. Scholars are, in fact, divided between those who see the establishment of a text (I obviously mean by that word more than a written document) as the highest form of scholarship[3] and those who consider such philological pursuits as useful drudgeries, like learning rules of grammar, that lead eventually to true scholarship.

A second assumption is that information in the humanities is largely finite and mostly known. Individually, we are unaware of most local histories, artistic developments, literatures, and religions, but we know that they exist, and it is usually fair to say that people and facilities ready to handle any field of learning are relatively easily available to anyone within the academic system.[4]

Discoveries will no doubt be made, for instance by archaeologists, but it is unlikely that a religion will appear that had not been known or suspected, that many languages are yet to be discovered, that a newly revealed language will elicit an unknown Bhagavad Gita, or that a major historical event will be reconstructed from previously unknown documents or artifacts. The fate of most of Raphael's paintings is known. In fact, it is essential to note how rarely discoveries are made in the humanities that radically alter the knowledge or understanding of anything. Exceptions like Lascaux or Ras Shamra (Ugarit) notwithstanding, the real point is that totally unexpected finds like those of Ebla, Panjikent, Khirbat al-Mafjar, Nag Hammadi, or the Dead Sea Scrolls have added footnotes,[5] altered some edges in knowledge (especially for the origins of already known phenomena like the alphabet, the religious and social context of early Christianity, or Persian painting), but did not fundamentally affect the processes of historical or esthetic thought; at best these discoveries were seen as confirmations of known theories and facts rather than as clear paths for new understanding.[6]

It is, in short, fairly easy to argue that there are clear and known procedures underlying the establishment of "facts" such as what happened at some time or place, the text of a written work, the documentation from an archive, or the visual data about a cathedral. Furthermore, most "facts" are known and competent scholars know more or less where and how to find those which are still unavailable or not established. From this cold-blooded fact-oriented "scientistic" point of view, it is easy to argue that much more energy should be devoted to Mongolian or Peruvian history and culture than to American civilization, to the arts of India than to nineteenth century Europe, and so on. For the ultimate objective of learning in the humanities is to establish equally clearly all "facts" about everything done by and for men and women everywhere.

Establishing facts is an integral part of training to enter any field of the humanities. To make an edition or a new edition of a text or to establish the catalogue (preferably raisonné) of an artist or of a collection are worthy, perhaps even important goals. But most written or visual texts, especially in Western civilization, are available in more or less acceptable forms, and those which are not may well not deserve to be.[7] However, my third assumption is that a humanist's glory is, most of the time, made not by the discovery or establishment of a text, but by the interpretations and judgments of the text already provided for him or her. These transformations are of two types. One is restricted to a "text" or to a "fact," which becomes illuminated or whose sense is extended much beyond itself, as happened, for instance, with Le Roy Ladurie's Montaillou or might have happened to Barkan's publication of the foundation text of the Süleymaniye in Istanbul, if historians of architecture and economic life read Turkish. The second type is theoretical and independent of a "fact," as happens with Marxism, the so-called Annales approach to history, structuralism, deconstruction, feminism, or cognition. This host of attitudes, approaches, and judgments is at the core of what makes the humanities exciting, and there can even be judgments that illuminate or destroy a "fact." The celebrated study of Baudelaire's Les Chats by R. Jakobson and C. Levi-Strauss nearly ruined for me both structuralism and Baudelaire, while the reading of Stendhal's pages on the cupolas of Rome is always a pleasure and Panofsky's article on Poussin's Et in Arcadia ego has made me appreciate a painter toward whom I am usually quite indifferent. The fascinating part of this judgmental or interpretative aspect of humanistic scholarship--and something that distinguishes it quite strongly from natural or physical sciences--is that it is nearly always additive rather than cumulative.
A new interpretation is simply added to earlier ones; the more such alternative interpretations one provides, the more scholarly the author and the richer the subject. At the extreme of Bakhtinian analyses, interpretations become an integral part of every text.[8]

The last assumption about scholarship in the humanities is that it is a solitary venture by individuals operating alone. There are exceptions, as in much of archaeology, and during the process of working, colleagues, students, relatives, friends, and even enemies can play a technical role for information or a more creative role as critics or as foils.[9] But, on the whole, the acquisition and use of operating tools like languages, whatever cultural or other knowledge may be necessary for any one study, and especially the final acts of writing or of correcting one's writing, are all things one does alone. Scholarship does not come about without organized or informal help (and to some of the elements in this help I shall return), but the humanist-scholar's world is that of silent libraries and collections beautifully run by diligent but quiet attendants and of peaceful home studies lined with books and images.

Within this sketch of a humanist-scholar's life and activities, the role or potential role of electronic information is as of now quite limited if the accomplishments, expectations, and ideals of humanists remain more or less as they are. There is, of course, the word processor, whose obvious benefits have often been praised, but the important point about it is that, while it helped immensely those who write easily, it did not help anyone write. At times, it gave the illusion that it could, and my own informal and most unscientific as well as probably unfair observation of its appearance on the academic scene was that many of the first to acquire a word processor were students and colleagues who could not write and hoped that the machine would help them. Beyond the processor, one could imagine a series of programs called Marxism, deconstruction, the new historicism, quality in French poetry, and esthetic values in general, feed to them the "facts" of the French Revolution, Hamlet, Verlaine's poetical oeuvre, or all of Manet, and come out with every "fact" provided with a seat number in the airplane of scholarly judgment. Somehow I doubt that this will ever happen, but the reason is likely to be its cost rather than the principle of the thing.

Matters are different in the one area of humanistic research where electronic information has made its most consistent inroads and many permanent modifications in the conditions of a scholar's work. It is the area of "sources," that is to say of "facts" and often interpretations transformed into anonymous information rather than the personalized contact with a colleague.[10]

Sources as Resources

Forgetting for the moment the depressing possibility that no text is ever perfectly known, it is easy enough to argue that the availability of "texts" in easily accessible form is a good thing, that computers are excellent instruments for gathering such facts and making them accessible according to an almost infinite number of categories, and probably (although I am only quoting from hearsay) that there will ultimately be a cost advantage in the nearly total computerization of factual information like texts, images, library holdings, technical or general vocabularies in all languages, and so on.

The financial projection, if true, is an important argument for the growth of computerized information for two reasons. One is that computerized means such as disks are cheap to reproduce (although not to produce) and, therefore, that it will be possible to equip any new or deficient establishment with the factual or textual opportunities of the richest institutions in the world. Should one not, then, invest now in the expensive infrastructure needed to receive the eventual invasion of disks with total knowledge? Electronic information, far from being the privilege of the rich, like research libraries or museums were and still are, could be a factor in the democratization of knowledge. The other reason is that a collective agreement on a vision allows for a relatively rational planning of resources in training and forming students for tasks that can be clearly defined. The possibility of altered visions and the apparently very rapidly achieved obsolescence of expensive equipment, however, may make this advantage somewhat dubious. One may end up by training scholars for processes of knowledge rather than for knowledge itself.

There are other problems with this idealized picture of a future that seems just around the corner. I will mention two of them and then turn to a broader philosophical question about what seems the most obvious use of electronic information: how to make resources out of sources.

The first issue is that, while verbal texts are relatively easy to incorporate into a program if they are in a language using an alphabet, matters are quite different when one comes to images (and, I suppose, pictographs and ideographs, although I have no knowledge of what has been done in these areas). Images can be digitized and made visible (although at considerable cost so far), but, to my knowledge at least, no program has as yet succeeded in recognizing sections of images by concept or in using words to retrieve details of an image, like recalling a "church" from the plan of a city or a "standing woman" from a painting. In all likelihood, the problem is not so much technical as it is intellectual. The phonetic (i.e., based on the smallest identifiable elements) structure of images and even of architecture is so loose (or, in the case of buildings, so meaningless) that it is nearly impossible to provide the computer with a descriptive structure that would be coherent and useful, yet not simple-minded. The various programs or projects that so far exist either end up by dealing essentially with words and use images only as illustrations or have bogged down in the intricacies of creating an elaborate thesaurus for describing artifacts in several languages but without a usable retrieval system. Some projects have done both.

The key point is that, whereas a technology exists to deal with numbers and with alphabets, and therefore with anything that can be turned easily into numbers and letters, the technology of computerizing images that exists for satellite photography or the recording of bodily behavior in medicine has not been, to my knowledge at least, successfully transferred to currently existing man-made images or artifacts.[11] I will suggest later that there may be a way out of this difficulty, but it requires us to modify significantly the way we think about images.

The second issue concerns the most successful computerization activity affecting scholarship in the humanities; that is to say, the revolution in libraries and library-related activities. I do not mean only the catalogues of holdings and the various accesses to shelf lists and circulation files. I mean also indexed compilations of texts like the Thesaurus Linguae Graecae and the huge bibliographical services made available through electronic repositories like DIALOG or FRANCIS and through specialized ones like MLA Bibliography or America: History and Life.[12] For we are in fact facing a level of possible servicing by libraries that is far beyond anything known since the disappearance of the specialized learned conservator and reader of books. Whole staffs are now ready to help with searches in most areas requested by customers. They usually prefer to do the search themselves, largely, I am told, because the cost of searches requires professional training in order to avoid waste.

It is difficult to quarrel with the computerized catalogue of a library, and such quarreling as I would muster is probably based on the awkwardness of my own use of the computer catalogue, such as with errors in typing titles, or the difficulty of browsing through the computer catalogues of most research-oriented institutions. However, the implication of this for scholarship is that it puts control over resources into the hands of technicians who cannot possibly know or understand the subtleties necessary to the successful accomplishment of their tasks. They are too few in number, insufficiently rewarded, and trained in the mistaken assumption that titles or indices tell everything there is to know about a book or an article, when a most important and unexpected piece of reasoning may appear in an obscure footnote and when a judgment on the quality of an article is essential to its appropriate use by anyone. Publishers or registrars in graduate schools have been told to frown on poetic or fancy titles as misleading and, therefore, difficult to encapsulate into an appropriate spot within a data bank, because the point of an entry is to say what a published work is about without having to read it.

An example: As I was writing a small piece on a thirteenth century Arabic manuscript of Dioscorides' Materia Medica, I wanted to be sure that I had not missed some recent bibliographical item. I availed myself of the services of my library for a search for items on Dioscorides published since 1980. Some 220 items turned up on the list given by the computer, including 180 from before 1700, because the list included every Dioscorides card computerized in American libraries since 1980. Only two of the 220 items dealt with scholarly work on medieval manuscripts published after 1980. Whether the problem lay with my mis-communication or with the librarian's technique, the experiment was wasteful and a failure.
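The failed Dioscorides search illustrates a general point about scoping a query: what was wanted was not every record mentioning the name, but scholarship of a certain kind published after a certain date. A minimal sketch of that distinction, with invented records and field names (the library's actual database schema is not described here), might look like this:

```python
# Hypothetical bibliographic records; the field names are assumptions,
# not the schema of any real catalogue.
records = [
    {"title": "De Materia Medica (Venice edition)", "year": 1558, "type": "edition"},
    {"title": "Dioscorides herbal, early print", "year": 1499, "type": "edition"},
    {"title": "A 13th-century Arabic Dioscorides manuscript", "year": 1985, "type": "study"},
    {"title": "Illustration in the Arabic Materia Medica", "year": 1990, "type": "study"},
]

def scholarly_work_since(records, year):
    """Keep only modern scholarship published since a given year,
    rather than every catalogued card that mentions the author."""
    return [r for r in records if r["year"] >= year and r["type"] == "study"]

recent = scholarly_work_since(records, 1980)
# Only the two modern studies survive the filter; the old editions,
# which swamped the actual library search, are excluded.
```

The point is not the code but the precondition: such a filter works only if someone has already recorded, for every card, whether an item is an edition or a study and when it was published, which is exactly the kind of judgment the essay argues cannot be left to indices alone.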

The same problems exist for the inputting of information. In the catalogue of a recent huge exhibition held in Berlin in 1989, the impressive bibliography of thousands of items includes one author named Grabar, Oleg-André. Some computer or user had simply conflated me with my father into a single, long-lived, and very prolific scholar. Clearly no one had bothered to look at any of the items listed. It is true, of course, that, in older times as well, bibliographical lists did not mean that items on them were read, but they were not misleading, as could happen with the Berlin catalogue because of insufficient controls over those who make up the lists.

As a bit of sheer nostalgia, I will mention that the distinguished humanist scholar Ernst Herzfeld, knowing that several of his books would appear after his death, asked that they not be provided with indices. Politeness, he argued, requires that books be read for learning and pleasure in their entirety and not browsed through for specific information or for inclusion in a footnote.

Bibliographical surveys raise even more problems. None of them is complete; and usually it is the rarer languages and areas which have been eliminated or overlooked, thereby strengthening the Western-centeredness of scholarly knowledge and further marginalizing a large percentage of mankind. Their very traditional and inflexible organization of the material makes anything but the most conservative research difficult. Thus, the manual for a data bank on biographies of artists sponsored by the Comité International d'Histoire de l'Art (CIHA, June 1989) argues correctly (p. 36) that "choices must rest first of all on the problem-setting (problématique) of the History of Art." But it ends with proposals of the utmost specificity and with a primary focus on the clarity of the texts summarizing the lives and works of artists--in other words, with a written text simply transmitted by electronic means.

The implication of these observations is a simple one. There is one area in which electronic transformation has not merely taken place but is a reasonable thing, perhaps even a good one, because its qualities of precision and the rapidity of its reactions to requests are progressive attributes that improve one's work. The area is that of sources and of facts transformed into resources and into artifacts: that is to say, into active components of the scholarly enterprise. The question is whether society, funding agencies, and administrations of all sorts are willing to spend the large sums necessary for the successful completion of a grand, universal data bank of information and then for its operation by competent, that is to say expensive, hands. The dangers of marginalization on the one hand and of dysfunctional use on the other may render whatever is created both useless and immoral.

But, since there is no way for all facts and all sources to be processed, we are faced with a frightening question, curiously enough the question which lurks behind so much of public life in the late twentieth century: who or what body is to decide what to include, what to exclude, and how to put it in, always recalling that the matter is not as simple as copying Greek texts into a new medium of transmission and training a machine to recognize certain combinations of numbers, letters, or forms. It is also a matter of putting in an infinite number of images, writings in nearly a dozen different alphabets, a million potsherds excavated every year, and some forty languages of scholarly discourse. With computerization, there is no return to "manual" except on a very limited scale. Should one make a list of needed items in some sort of pecking order? Are professional scholars to be involved in making such lists or are they the responsibility of civil authorities? Are problems to be handled in court? Are there boundaries in the uses for electronic information between necessary and optional items? Can one draw up a charter of rights of texts or opinions? Should one do so?

Let me push my argument a step further. A system that allows for easy recognition of words, expressions, quotations, and other details in written texts and, eventually, in images or in nature is a system which violates the conceptual, aesthetic, and stylistic unity of a text, of an image, or of an environment. It is useful to know whether a given word exists in Plato or whether Dickens quotes from the Bible, but the same mechanisms of availability cannot (or should not?) be used to read a whole Platonic dialogue or an entire nineteenth century novel. Yet it is the latter which is truly important, while the former is but a convenience. Or is it not, perhaps, that in our time texts are only sources for excerpts, manuals, and anthologies? The cuts are made by known or anonymous professionals, but it is easy to imagine that, just as with the rights to choice, in giving life, to die with dignity, to enjoy the amenities of life in spite of handicaps, in fact even the opportunity (if it is one) to become a nation-state, these decisions will become the privilege of deans, trustees, judges, lawyers, United Nations officials, doctors and priests, all remote decision-makers invented by contemporary management practices.

These questions clearly need further discussion. They exemplify, to my mind, a central and very humanistic dilemma: to move ahead before learning how to drive or to do nothing by being unable to decide what to do. The many examples of both tendencies from the past ten years explain the enduring (and endearing) frustration of a humanist's life.

New Horizons

Even my own limited experience has brought me in contact with areas where computer-based activities in the humanities have been creative, and in unsuspected ways.

The first one derives from a CAD project reconstructing the early medieval city of Jerusalem from a mix of archaeological and written documents. The project is not yet complete, but it already exhibits some of the features expected of that type of program: a near infinity of points of view for a single drawing, views of the city from unexpected places, and so on. In two instances, it led us beyond the expected. One is technical. For the drawing of the Holy Sepulchre, we used the standard most recent reconstruction proposed by the most competent scholar of the matter in a typical axonometric bird's-eye drawing. Once the computer tilted this drawing to show the building as it would have appeared from the street rather than from the point of view of angels above it (as in the traditional drawing of a scholar), the Holy Sepulchre appeared as a rather silly-looking silo behind a church. Did it really look like that? Is our contemporary judgment of adequate proportions wrong? Should we change the drawing simply because we don't like the result?

In a related instance, by introducing into the drawing of a mosque a personage praying and then looking around himself, we were able to explain a hitherto meaningless architectural modification to the structure of the building. In this example, the computer was not necessary to explain a puzzle, but it helped. The point is that the computer's flexibility and the manipulations of the completed architectural drawing allowed a much wider range of possible views (in this case, views of a work of architecture or of a whole city) than in conventional ways. At their creative best, computers can provide unexpected alternatives, which enlarge understanding--thus compelling new questioning of sources or more elaborate operations for imaginative understanding.

As one transfers information or draws images on a computer, the computer begins to ask questions about whatever you feed into it, and answers have to be provided before one can proceed. This important point was made by Marilyn Aronberg Lavin as she dealt with narrative frescoes in Italian churches:[13] Walls that appear as just a line on a two-dimensional plan must be given a height in a three-dimensional drawing. Often the height is not known, and the scholar is compelled to invent (or hypothesize). The invention will become truth unless contradicted by some other evidence, which means that what originated as a mere suggestion will be perpetuated as accurate. Like the drawings of the National Geographic (and of many similar publications at the edges between restricted scholarship and popular culture), these reconstructed buildings and towns or the interpretation of stories on the walls of Italian churches come to be thought of as facts when some of them are only judgments. The intellectual implications of these examples are fundamental: A cardinal rule of humanistic scholarship has been broken, as fact and fancy are no longer separate, even if the latter is plausible.

Yet it is possible to argue quite differently from these examples. For another creative contribution from the computer is to have brought into sharp focus an aspect of scholarly thinking that has not been highlighted as clearly in the humanities as in the sciences.[14] Between facts and interpretations there lies an intermediary zone, somewhat akin to DOS, which has nothing to do with either fact or interpretation, although it partakes of both, but without which neither is possible. In a forthcoming book, I shall argue for the existence of this intermediary zone in the visual arts.[15] It is a zone of technical (stone or brick), affective (I like or dislike something), intellectual (I know or don't know), emotional (red makes me cry), or aesthetic (it is beautiful) intermediaries or mediators that are necessary means of access to all works of art. But I wonder whether such mediating features are not necessarily present in the understanding of text-based arts as well and in the comprehension of any historical moment or event. For it is these intermediaries which transform facts and interpretation into information; they make them accessible to a variety of users. The puritanism of humanistic research and of modernist aesthetics had hidden this intermediary as something shameful, like ornament on buildings, but the computer, by asking simultaneously about the process of seeing, the process of creating, and the product or the scope of the data provided, may well have brought to light a hitherto hidden but essential mechanism of scholarship. In this instance, electronic means compel a field of intellectual activities that neither sought nor needed those means to shed its reluctance to think and talk about itself.

The third "new horizon" is in education. Several electronic programs exist commercially (or will soon exist) that are not of significant use to the scholar in any one field, but that are meant to introduce a student or a layman to classical Greece (the Perseus Project), to the writing of English, or to other subfields in the humanities. Because these programs are not designed to help research, they are not the primary concern of this gathering. We may, however, wish to ponder them for two reasons. First, within one generation much of the information acquired by students and the scholars of the future will probably come through the manipulation of educational programs instead of (in addition to?) the traditional reading of books. Second, in the instance of visual information, the computer has truly unique advantages over earlier means of information. Both reasons require short comments.

I shall be particularly brief on the first issue, as my own experience with Perseus and with a highly sophisticated program like the Census of Antique Works of Art and Architecture Known in the Renaissance led me to two contradictory conclusions. On the one hand, I was impressed by the intelligence and quality of the programs, but on the other I could not think of anything but contrived questions to ask of them (or else I knew in what book or article in my own library to look for answers to my questions or directions for further work). The point may be important precisely because these programs do not deal with topics of scholarly concern to me, but with that peculiar area of knowledge that lies somewhere between scholarship and a general culture whose definition would require another paper. But, then, my reaction as a professional scholar is probably not very useful, and I would urge thorough user surveys of existing programs before funds are invested in new ones. Enough programs of different kinds exist to justify an in-depth analysis of usefulness carried out by several different user groups ranging from financial sponsors to administrators, teachers, and students from varying types of institutions.

Finally, electronic information is unique in its potential for revealing "facts" and interpretations that have a spatial context, as in all of architecture, or a spatial connotation, as in an opera, a play, or nearly all aspects of religious liturgies and pious behavior. No sequence of photographs and no book can provide as well as electronic means the immediate presence of a work of architecture or of anything connected with it: the sculpture of a Hindu temple, the paintings of the Sistine Chapel, a performance on a stage, or the intensity of the pilgrimage to Mecca. Education can be revolutionized by bringing the Taj Mahal into the classroom, into any public library, or into one's study. The cost of doing it well may be staggering, especially if one weighs in the ethical issues of what would be excluded from a real visual survey of a significant part of the arts of mankind. But it may well be worth considering, for, like the genetic map being proposed by biologists, what would be provided is truly revolutionary and profoundly democratic and egalitarian: to make available in any classroom or study the fullness of experiences restricted so far to the rich or to those imaginative enough to translate what they read into images.

Learned studies will follow like so many appendices; scholarship of a more traditional kind should perhaps wait until the effects of this availability are better known. Although this last point goes against the tradition of free choice in scholarly endeavor, it may just be that electronic information will lead to a new kind of historical scholarship based on the needs of today's users rather than on the demands of the past. In a curious way which I do not entirely understand, while traditional scholarship searched for authors in the arts and causes or contexts in history, and while a "modern" or "new" scholarship focused on specific events or individual works of art, the scholarship induced by electronic means may end up by being centered on the ever-shifting receiver and user of information.[16]

In short, I am arguing that the educational potential of electronic means seems to me far greater than their potential for scholarship, at least in the humanities. Education is understood here as a complex procedure of acquiring a culture and a knowledge, not as technical training. This possible educational revolution might in turn lead to a new scholarship, for the scholar in the humanities may well become less the weaver of culture than the processor of information. The danger is that he may end up by controlling what is available.

Scholarly Existence in a New Electronic Era

The popular press has begun to worry about such disabilities as may arise from too much sitting in front of screens: aching backs, swollen fingers, damaged eyes. But my aim in this section is not medically concrete, for the practice of electronically transmitted data has already created and will further develop habits of intellectual and professional life which will in turn affect scholarship as we now know it. I will mention only two points.

One is that the cost of the necessary equipment on the one hand and the existence of functions like E-mail on the other modify the status of the researcher. He or she is less solitary, more dependent on institutional support, more in need of all sorts of technical services or else forced to acquire dozens of ancillary skills which take him or her away from working in areas of unique competence, more accessible and vulnerable to intrusion by others (friendly or not), and less clear on the expectations made of him or her for the rewards of the academic system. It seems difficult to imagine that traditional learned articles and elegant books will emerge from the frenzy of adjusting to new programs and the investment in expensive software. Reading, the mainstay of humanistic knowledge, will no longer roam over the endless shelves of public or private libraries, but will concentrate on the needs of the moment. A certain kind of literate culture may well disappear and perhaps with it something of the imaginative creativity which fed the humanities for the past century.

The humanities are always vulnerable to the criticism of prejudice, because quality is less firmly identified with measurable achievements. As the activities of the scholar turn more and more to an area where everything is measured, from the time spent looking at a screen to the bytes logged in, the minutes spent on the telephone, or the data perused, the evaluation of quality and originality will become more difficult and the ways of rewarding them either mechanized or arbitrary. Altogether a new kind of work ethic will probably evolve. Its rules are unclear, but they are not those of the past.

An even more important aspect of this new era lies in a new intellectual ethic. Scholarship in the humanities will be able to maintain its universal potential, its assumption of ubiquitous validity, and its availability for all fields only if the data available is either universal itself (the preferred solution) or has clearly proclaimed limits. The issue here is one of investment, but is it proper, legitimate, and worthwhile to invest partially? The transformation of the humanities by electronic means may only be worth accomplishing if it is done on a grand scale and fairly rapidly, and if it involves deprived countries and institutions as well as well-heeled institutions at the forefront of knowledge. The gap that technology has created between haves and have-nots in the sciences need not be repeated in the humanities, because the humanities depend much less on technology. But it will take place if techniques that could bridge these gaps are instead used to widen them by remaining even more Western-oriented than they were before. Thought should be given to the ways of avoiding these results. But, in the meantime, the fear remains that scholarship in the humanities will be absorbed in education. This may be a short-term gain, but is, without doubt, a long-term tragedy.

My concluding remarks are two. One is that the educational (in the broadest sense) possibilities of electronic devices seem to me much greater than the intellectual ones, as no new scholarship has yet emerged to replace the traditional one and the new possibilities have only occasionally led to unexpected and fruitful results. My argument is that this may be so because the most interesting and most creative aspect of electronic information is that it has opened up what I called the mediatory side of knowledge. It did not invent it, but made it more visible. And we must await the passing of half a generation before novelties enter into the humanistic mainstream. An intellectual potential exists, which may, however, be preempted by the more immediate and more rewarding educational potential.

Second, it is possible to argue that one key problem dominates both accomplishments and projects. As choices have to be made between information to include or to exclude, languages to use or to forget, audiences to target or to abandon, ways of evaluation and judgments to develop, degrees of universalism, and so on, no mechanism exists for deciding how these choices are to be made. Should cost dominate the decision-making process? Or need? And whose need? Should users predominate in making decisions? Or wise Grand Inquisitors?


[1] To convince oneself, it is enough to wonder how often one is likely to do what so many accessory manuals to word processing programs teach one to do. And I still remember how, once curiosity had led me to push buttons whose meaning I could not understand from the manuals, a learned article, fortunately a short one, was transformed into several pages of Christmas trees.

[2] A "scientistic" attitude is probably the product of the second half of the nineteenth century. Its key assumption is that there is a truth whose discovery is possible through the artful and thorough knowledge of original sources and through the critical evaluation of past scholarship everywhere. The objective of every savant or Gelehrter is to accumulate as much information as possible in order to establish these truths. Once established, the truths are final, at least until replaced by new ones reached according to the same criteria.

[3] A colleague argued many years back that to change one word in the Greek text of a Platonic dialogue was the ultimate achievement for a Hellenist.

[4] The picture is somewhat idealized in this paragraph, but I do believe that total and equal coverage of all knowledge about mankind was the goal of research universities and other comparable institutes issued out of the European Enlightenment. Whether anyone still believes in this ideal is another story. Yet it has affected the prevalent thinking about research and scholarship in the humanities.

[5] I am limiting myself to the area of western Asia with which I have some familiarity; others would make different lists. An informal experiment of asking distinguished colleagues in traditional fields of the humanities whether they could name a single "discovery" that would have altered the direction of the field failed to elicit a single example. It is true, of course, that all could identify some fact established by scholarship which would have been irretrievably modified. All instances seemed secondary to me as an outsider to their fields.

[6] The examples I have given are mostly of archaeologically retrieved information, because archaeology always leads to new information. One can discover new data in archives or new texts in manuscript collections, but the collections or archives are already available, unless closed for political or managerial reasons. Access to them is also relatively cheap, whereas archaeology is relatively expensive. At the same time, the setting up and maintenance of archives, like the preservation of monuments and of environments, are costly affairs. Some debate has begun on the cost-effectiveness of preservation, but, as so often is the case, financial and budgetary decisions have preceded thinking about the issues.

[7] There lie behind this statement two interesting issues. One is ethical. "Texts" (written or visual, even eventful) by women and "others" were no doubt neglected and, thus, made inaccessible for generations. Such conscious or unconscious prejudices thus justify the continuing uncovering of hitherto unknown "texts," regardless of their apparent worthiness or lack thereof. Does this mean that all unavailable texts should be made accessible, leaving the decision of whether and how to handle them to users who did not participate in the choices? The other issue is a practical one. Some argue that no text is ever fully established, neither what happened on July 14, 1789, nor what Shakespeare wrote or Rembrandt painted. Every text is always in an asymptotic state, striving for an unreachable perfection, but the point may only matter if texts have rights, as some have recently argued about works of architecture and of other cultural artifacts. Although ultimately important for the purposes of this paper, the nature of a text or of an event cannot be discussed more fully at this juncture.

[8] Although Mikhail Bakhtin is the critic who developed these thoughts, their independently achieved demonstration has been most successful in movies and short stories by Woody Allen.

[9] It is easy for an older scholar like myself to argue that the recent habit of younger scholars to thank so many people even when writing a short article is a sign of decadence in humanist behavior. For, among other attributes, a humanist is defined by the ability to have opinions and the willingness to defend them.

[10] There is another interesting philosophical issue here which is that, in order to be useful to anyone, knowledge or data, even ideas, must be transformed into information. When restricted by language, the most brilliant thoughts or the most important facts simply do not exist. Does it matter? Furthermore, it would be interesting to know whether scholars do not acquire most of their information through the informal network of friends and colleagues (the locker-room syndrome) rather than through formal means. In fact you have "arrived" when you no longer have to use a bibliography.

[11] A comparable failure exists with respect to the images and our reaction to them that characterize daily life. Programs exist to help us with banking needs, but not with interior decorating or gardening, nor with our judgment of people on the basis of the color or shape of their clothes. For in all these examples we make immediate judgments without retrieving or recalling the information on which the judgments were based.

[12] I should add that I am not familiar with these specific programs and that I have always been critical of the Western-centeredness of bibliographies, computerized or not, in my own fields.

[13] Marilyn Aronberg Lavin, The Place of Narrative: Mural Decoration in Italian Churches 431-1600 (Chicago, 1990), esp. pp. 261-263.

[14] See, for instance, Gerald Holton, Thematic Origins of Scientific Thought (Cambridge, 1973), esp. pp. 36 and 57-58. Other historians of science and of culture made comparable points, like Thomas Kuhn and his paradigms, and, on a broader philosophical basis, Michel Foucault's idea of an épistémè is related.

[15] Oleg Grabar, The Mediation of Ornament (Princeton, 1992), forthcoming.

[16] If true, this point would be an interesting justification for the Rezeptionstheorie in the arts which is so popular in contemporary German criticism.