Just as all humanistic study of ancient texts is enhanced by the application of computing technologies, the modern study of biblical texts is increasingly being transformed by the microcomputer revolution. In particular, electronic digital media and computer programs are ideally suited to the tasks of organization, manipulation, storage, and dissemination of textual information. The democratization of “personal computer” technology enables anyone who uses a computer for writing or accounting tasks to explore scriptural texts with associated linguistic data bases and reference tools in a digital environment.
Manuscript Collation and Production of Critical Editions.
Computer programs help scholars reconstruct whole “texts” from ancient manuscript fragments in the same way that programs assist archaeologists in reconstructing ceramic vessels: once the physical and textual features of manuscript fragments are described, pattern‐matching programs may be used to hypothesize reconstructions of the text based upon groupings and physical joins. When individual texts are restored and fully encoded, programs using “genetic” knowledge may be used to suggest stemmatic (genealogical) and typological relationships, dividing texts or recensions into families. Manuscript evidence may then be manipulated programmatically to create critical editions of the text, whether on a small scale or in the production of a major edition. The advantages of creating paper critical editions from electronic data bases are great: far fewer mistakes are made in printing, and the logical and physical formats of print editions are entirely negotiable, being defined in variable sets of rules similar to electronic style sheets. (See also Printing and Publishing, articles on Production and Manufacturing and Economics.)
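The grouping of witnesses into families can be illustrated by a minimal sketch. The witnesses, variation points, and readings below are invented for illustration, and real stemmatic software uses far richer models than simple pairwise agreement; this merely shows the underlying idea of measuring how often encoded manuscripts agree at shared loci.

```python
# Each hypothetical witness is described by its reading at a few
# variation points (loci); these sigla and readings are invented.
witnesses = {
    "A": {1: "alpha", 2: "beta", 3: "gamma"},
    "B": {1: "alpha", 2: "beta", 3: "delta"},
    "C": {1: "omega", 2: "psi",  3: "delta"},
}

def agreement(w1, w2):
    """Fraction of shared loci at which two witnesses agree."""
    shared = set(w1) & set(w2)
    if not shared:
        return 0.0
    return sum(w1[k] == w2[k] for k in shared) / len(shared)

# Pair witnesses whose agreement exceeds a threshold into one "family".
pairs = [(a, b) for a in witnesses for b in witnesses
         if a < b and agreement(witnesses[a], witnesses[b]) > 0.5]
print(pairs)  # [('A', 'B')]
```

Here A and B agree at two of three loci and are grouped together, while C stands apart; a genuine program would weigh the significance of individual variants rather than count them equally.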
Data Bases for Linguistic and Literary Annotation.
Biblical texts held in simple electronic format are immediately useful since they may be edited, queried, and displayed in various ways. More subtle inquiry into the text requires that words, clauses, sentences, paragraphs, pericopes—even individual characters within words—be supplied with linguistic and literary description. Computer programs have been used to “parse” texts, assigning lexical and grammatical features to words, but in general these annotations must be made manually. Morphological data bases have been created for the Hebrew Bible, New Testament, Septuagint, and related corpora. Each word may be lemmatized (given a normalized spelling and dictionary form) and augmented with a morphological description and similar linguistic‐literary annotations. Literary structural markers are placed within the texts, in addition to markers used in canonical referencing schemes such as chapters and verses. Once descriptive enhancements are made to the text, scholars may frame queries in terms of lexical, grammatical, and syntactic textual features, not merely in terms of a fixed character stream. Of course, all assignment of literary and linguistic markup is subjective, so the results of searches, however quantified, must also be qualified. In addition to the widely accessible linguistic data bases for biblical texts, rich collections of rabbinic, Greek and Latin (classical, medieval, epigraphic‐ephemeral), Muslim, and Buddhist text materials are also publicly available.
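A morphological data base of this kind can be pictured as a table of annotated tokens. The transliterations, parsing codes, and reference scheme below are simplified inventions, not the format of any actual project; the point is only that once such annotations exist, a query can be framed in grammatical terms rather than as a raw character search.

```python
# Hypothetical miniature of a morphological data base:
# (surface form, lemma, morphological description, canonical reference).
annotated_text = [
    ("bereshit", "reshit", "noun-fs-prep",       "Gen 1:1"),
    ("bara",     "bara",   "verb-qal-perf-3ms",  "Gen 1:1"),
    ("elohim",   "elohim", "noun-mp",            "Gen 1:1"),
]

# A query framed in terms of grammatical features, not characters:
# find every perfect verb form, with its reference.
perfect_verbs = [(form, ref) for form, lemma, morph, ref in annotated_text
                 if morph.startswith("verb") and "perf" in morph]
print(perfect_verbs)  # [('bara', 'Gen 1:1')]
```

Because the markup itself embodies analytical judgments, two data bases encoding the same text may return different answers to the same query, which is the sense in which quantified results must also be qualified.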
Perhaps the most popular computer applications for general users are programs that permit dynamic “concording” of texts and user‐specified displays of text in concordance formats. Whereas printed concordances are static (i.e., based upon a dictionary form or other organizing principle), a computer concordance program is dynamic. Thus, rather than scan excerpted passages containing the single word “compassion,” a user specifies the search criteria, limited only by the research goals and imagination. Some examples of searches: all sentences containing “wine” or “strong drink,” as well as “joy”; all verses in the Septuagint containing more than two imperative verb forms; all conditional clauses in the book of Exodus; all interrogative sentences in the NRSV version of the book of Job. Of course, the concordance query may address only those features supported by the data base and search program.
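The contrast between a static printed concordance and a dynamic one can be sketched in a few lines. The verse texts below are loose paraphrases chosen for illustration, and a real program would run against a full encoded text, but the first example search mentioned above reduces to a user-supplied boolean criterion.

```python
# A toy dynamic concordance; verse wordings are illustrative paraphrases.
verses = {
    "Prov 31:6":  "give strong drink to him who is perishing",
    "Ps 104:15":  "wine that gladdens and brings joy to the heart",
    "Judg 9:13":  "my wine which cheers gods and men",
}

def concord(predicate):
    """Return every verse reference satisfying a user-defined criterion."""
    return [ref for ref, text in verses.items() if predicate(text)]

# All verses containing "wine" or "strong drink", as well as "joy":
hits = concord(lambda t: ("wine" in t or "strong drink" in t) and "joy" in t)
print(hits)  # ['Ps 104:15']
```

A printed concordance fixes one organizing principle in advance; here the organizing principle is supplied at query time, which is what makes the electronic concordance dynamic.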
Hypertext and Hypermedia Displays.
Because the Bible has been the object of intensive textual focus for many centuries, a rich network of commentary and linguistic annotation has grown up around it. A relatively new technology for managing this network of knowledge is called “hypertext.” Hypertext (and hypermedia, which adds digitized graphic images and other media formats) exploits a primary distinguishing feature of “electronic” text: nonlinearity. An electronic document may be rearranged, compressed, expanded, split into logical subdocuments, or in other ways liberated from the linear‐sequential format imposed by traditional books. The concept of linking primary text with its reference works (grammar, lexicon, encyclopedia, commentary, theological wordbook) and with “parallel” texts has led to the creation of electronic books, usually scanned or keypunched from standard reference tools. In a hypertext computer environment, each biblical verse or single word of base text is linked to portions of documents in the associated works. A hypertext application with several windows makes it possible to enjoy synchronous scrolling of several parallel texts or text versions, or synchronized display of commentary text with base text. Control and navigation within such networks are not yet perfected, but hypertext technology shows great promise for individualized and interdisciplinary study of biblical texts.
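At its core such a hypertext environment is a link table from nodes of base text to anchors in associated works. The resource names below are invented placeholders, not the scheme of any actual product; the sketch shows only the navigational idea of following links outward from a verse.

```python
# A minimal hypertext link table; all resource names are hypothetical.
# Each base-text node points at anchors in associated reference works.
links = {
    "Gen 1:1": ["lexicon:bara",
                "commentary:Genesis-1-introduction",
                "parallel:LXX Gen 1:1"],
}

def follow(ref):
    """Navigate from a base-text node to its linked annotations."""
    return links.get(ref, [])

print(follow("Gen 1:1"))
# ['lexicon:bara', 'commentary:Genesis-1-introduction', 'parallel:LXX Gen 1:1']
```

Synchronized display of parallel windows amounts to following the same kind of links for every verse as the reader scrolls, which is why the base text must carry a canonical referencing scheme that all the associated works share.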
Critical Study of the Bible.
Most computer applications described above are general, conceptually simple, and noncontroversial. The use of quantitative and computational linguistics methodologies for the critical study of biblical texts is still immature by comparison. Within small corners of the world of biblical studies, computing techniques have been used to study such historical‐ and literary‐critical features as textual transmission (subtle trends in spelling), authorship (unity or composite character of texts, measuring subconscious parameters of authorial style), metrical systems, translational features in ancient versions, and syntax (word order, clause patterns). The impact on mainline biblical studies has been minimal, though not insignificant. Some of the chief obstacles to acceptance of quantitative and computational methods are the difficulty of providing formal conceptual models and text representation schemes for critical inquiry; the greater appropriateness of currently understood methods to synchronic study of texts, rather than to higher‐order analytic investigation; and the preference of biblical scholars for older, proven methods of research and publishing, with concomitant slowness to embrace newer methods of working in the global electronic workspace.
The growing popularity of international academic networks for collaborative research, the rapidly improving software for electronic publishing, and newer conceptual models for managing multilingual text all promise a bright future for computing in biblical studies. A point of critical mass has already been reached in the popular sector among Bible enthusiasts; the equivalent threshold of scholarly involvement has nearly been crossed. As trends increase toward compact mass storage (one CD‐ROM disk now contains the equivalent of 150 books) and smaller, more powerful microcomputers, the growth of humanities computing appears certain. Current reports on these developments may be found in the conference activities and publication organs of the Society of Biblical Literature's Computer Assisted Research Group (CARG) and the Association Internationale Bible et Informatique (AIBI), and in annual sections of the Humanities Computing Yearbook (Oxford).
Robin C. Cover