Blog Post 2.0

Well, I am calling this Blog Post 2.0 because my project goals have undergone a serious revamp. First, the work on the 3D-modelling software is done; I will not be writing about it any further. My whole focus will be on the data collection software, i.e. Suma. Oh wait, that’s off the table as well.

Will (my advisor) and I have concluded, after numerous failed attempts at implementing Suma, that we will build our own data collection platform. It will be molded to the needs of the Science and Engineering Library, but designed to be adaptable enough that other libraries can, with some amount of work, implement it as well.

So far, I have created an on-screen array of six library computers. Librarians can click on a computer to indicate that it is in use, or double-click to indicate that the seat is occupied but the PC itself is not being used because a personal laptop is being used instead.
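To make the interaction concrete, here is a minimal sketch of the occupancy states behind each computer in the array. The actual site is plain HTML/CSS/JS, so this Java version is purely illustrative; the names and any click semantics beyond what I described above are my own assumptions.

```java
// Illustrative three-state model for one library computer
// (not the site's actual code; all names are hypothetical).
public class LibraryComputer {
    enum State { FREE, PC_IN_USE, LAPTOP_ON_SEAT }

    private State state = State.FREE;

    // A single click marks the PC as in use (or frees it again).
    void click() {
        state = (state == State.PC_IN_USE) ? State.FREE : State.PC_IN_USE;
    }

    // A double-click marks the seat as occupied by a laptop user.
    void doubleClick() {
        state = (state == State.LAPTOP_ON_SEAT) ? State.FREE : State.LAPTOP_ON_SEAT;
    }

    State current() { return state; }
}
```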

The next things I am working on are adding a way to indicate that a certain PC is non-functional, and storing that information across different data collection sessions.

The website is up and running (alpha version) at www.columbia.edu/~nk2639.

Lastly, let me introduce you to an amazing website that I used for the initial development of this site. It’s called codepen.io, and it allows you to write HTML5, CSS, and JS code in the same portal, rendering the result as soon as you finish typing. For quick testing, it is quite a good platform.

Hoping to complete a lot more by Blog Post 2.1

App4Apis (Update)

Phase: Final stages of completion

As we head into spring break, we want to update the status of the project. Before diving into the details, a quick introduction to the project.

App4Apis: a one-stop solution for accessing APIs that take parameters in the query and return a JSON object. We have two types of configurations. The first is a list of preset APIs (Geocode, Human Resources Archive, Internet Archive) where we are well aware of the structure of the API and provide a form for entering the information needed for querying. The second is more generic: the user provides an example URL (including the query parameters) from which we identify the API request pattern. This pattern is then used to query the API with a larger dataset in the next step. We let the user download the results, or the results can be sent by email (helpful for particularly large datasets).

Status: In the last blog post, our to-do list was:

  1. Finish a few pending screens.
  2. Integrate the Geocode API into the presets.
  3. Develop functionality to let the user upload a file in the ad hoc query case.
  4. Make cosmetic changes so the website is visually appealing.
  5. Exhaustive end-to-end testing.
  6. Deploy!

From that list, we have finished the layout of the pending screens, completed the integration of Geocode into the preset list, and made the website more visually appealing. We are in the process of end-to-end testing of the preset flow before moving on to the ad hoc query case. The redesigned screens of the website are attached at the end of this post. Any feedback is welcome.

We are now able to email the results to the user in chunks of 1,000 results per CSV file. This lets users submit a task to the system and receive the results at a later point in time, enabling the system to handle large inputs and giving users the option not to wait for the results.
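As a rough sketch of how that chunking might work (the helper names are hypothetical, not our production code):

```java
// Split result rows into CSV files of at most 1,000 rows each,
// so that each file can be attached to an email.
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CsvChunker {
    static final int CHUNK_SIZE = 1000;

    static void writeChunks(List<String> rows, String header, Path dir) throws IOException {
        for (int start = 0, part = 1; start < rows.size(); start += CHUNK_SIZE, part++) {
            Path file = dir.resolve("results_part" + part + ".csv");
            try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(file))) {
                out.println(header);
                rows.subList(start, Math.min(start + CHUNK_SIZE, rows.size()))
                    .forEach(out::println);
            }
            // emailAttachment(file);  // hypothetical: mail each chunk to the user
        }
    }
}
```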

Current tasks at hand are:

  1. Make the email sending process asynchronous: currently we can send an email, but we cannot schedule it for a later time and let the user leave the screen. The user can leave the application after submitting a task, but we do not yet offer this as an explicit option. I am working on capturing the input as a task and scheduling it for later execution; a sketch of one possible approach follows this list.
  2. Complete testing of the preset APIs workflow.
  3. Complete testing of the ad hoc requests workflow.
  4. Deploy.
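For the first task, one plausible approach (an assumption on my part, not our finalized design) is to hand each email job to a background worker so the user can leave immediately after submitting:

```java
// Accept an email job and run it on a background thread; the user's
// request returns as soon as the task is queued.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EmailTaskQueue {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public void submit(String recipient, Runnable buildAndSendResults) {
        worker.submit(() -> {
            buildAndSendResults.run();  // query the API, chunk CSVs, send mail
            System.out.println("Mailed results to " + recipient);
        });
    }

    public void shutdown() { worker.shutdown(); }
}
```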

I expect to finish the first three tasks by the end of this month so that we can turn our focus to deployment at the start of next month.

Thanks,

Rohit Bharadwaj Gernapudi

The website screens are:

Capture_1

Capture_3

Capture_4

Capture_2

Motivated Object-oriented programming: Build something from scratch!

This semester, I am putting together a 3D data visualization tool using Processing, to demonstrate the usefulness of this language for rapid prototyping of ideas for 3D graphics applications, including virtual reality. The code, documentation, and issue tracking are now hosted on GitHub here.

If you’ve ever taken a programming course or worked through an intermediate online tutorial in a language that encourages object-oriented programming (OOP), you have probably studied the feature through a motivating example (or two). You might have designed Dogs that subclass Animals, which emit ‘woof!’s; Cars with max_speeds; Persons with boolean genders, and so on…

But these are toy examples: underwhelming, too straightforward to help in practice, and not all that interesting. Once you’ve completed the typical OOP introduction, you often end up with code that performs a function already implemented elsewhere a hundred times over, with greater efficiency. That in itself is not so bad: we all need pedagogical examples simple enough to introduce in a 75-minute lecture. What’s worse is that you’ve written code that you simply don’t care about. And that’s a recipe for demoralization. For the beginning programmer (yours truly), OOP becomes a convenient abstraction that bored you to death once or twice (and went ‘woof!’).

Nothing motivates design like the task of modeling a system with which you are otherwise quite familiar. Daniel Shiffman’s free online book, The Nature of Code, focuses on the simulation of natural (physical) systems and gently introduces OOP as a means of modeling “the real world” (rather than a toy example of a Car with an Engine). Of course, Shiffman’s models are simplistic too, relying on basic mechanics and vector math to animate their construction. Nevertheless, his examples and exercises leverage your best guesses about how the world works and challenge you to implement them in code, which is the nature of (many kinds of) programming: to take the big world and, in code, make a small world that — invariably imperfectly — reflects the large.

My advice, then, is to dive right in with a project that you care about, preferably a project that requires many “moving parts” such as interdependent entities (nodes that talk to and consume others), user extensibility, and large amounts of object reuse: a model of a mini-universe of sorts. The model doesn’t have to be physical: it can be of social relationships, knowledge, data. All it has to do is matter.

In the remainder of this post I will sketch the design of the project I am currently working on.

Project design

The goal of the software is to generate 3D data visualizations from quantitative and qualitative data ingested from a CSV file. The display of the visualization should be separate from its construction, so that ultimately different display ‘engines’ can be swapped in and out to allow for the presentation of the visualization on, for example, the computer screen, a VR headset, a smartphone, or even in the form of a 3D-printed model.

As it stands, the engine has the structure depicted below. Incidentally, there is a well-documented and ‘popular’ domain-specific language for describing the relationships between objects, the Unified Modeling Language (UML), which can generate similar-looking diagrams out of code; but taking it on requires its own post. So, the picture below is a rough approximation of the design of the engine rather than a reproducible blueprint (such as that provided by UML and the like).

There is exactly one Scene in the application, which contains a list of PrimitiveGroups, which themselves contain Primitives. A Primitive corresponds to a single data point: one row of the CSV input. A Primitive has a location in 3D space, as well as a velocity, which allows for the animated restructuring of the Primitives on the fly. Some simple primitives are included: a sphere (PrimitiveSphere), a cube (PrimitiveCube). New Primitives must subclass the Primitive class (which should never be instantiated: it is an abstract class). Primitives must have display() and update() methods. The display() method contains the calls to Processing’s draw functions (e.g. box()). At this point, you realize that Primitive should (and can) be implemented as a Java interface. After all, Processing.org is Java at base. The Scene also contains an Axis object which can be switched on or off.
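Here is a minimal sketch of that hierarchy as a Processing (Java) sketch. The class names follow the description above; the field and constructor details are my own assumptions rather than a copy of the repository code.

```java
// Core hierarchy: a Scene holds PrimitiveGroups, which hold Primitives.
interface Primitive {
  void display();  // issues Processing draw calls, e.g. box()
  void update();   // advances location by velocity, for on-the-fly restructuring
}

class PrimitiveCube implements Primitive {
  PVector location, velocity;  // one data point's position and motion
  float size;

  PrimitiveCube(PVector location, float size) {
    this.location = location;
    this.velocity = new PVector(0, 0, 0);
    this.size = size;
  }

  public void display() {
    pushMatrix();
    translate(location.x, location.y, location.z);
    box(size);
    popMatrix();
  }

  public void update() {
    location.add(velocity);
  }
}

class PrimitiveGroup {
  ArrayList<Primitive> primitives = new ArrayList<Primitive>();
}

class Scene {
  ArrayList<PrimitiveGroup> groups = new ArrayList<PrimitiveGroup>();

  void drawAll() {
    for (PrimitiveGroup g : groups) {
      for (Primitive p : g.primitives) {
        p.update();
        p.display();
      }
    }
  }
}
```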

How does the engine generate the Primitives per the contents of the data file? And according to what rules? In many ways, this is the heart of any visualization engine. To address it, the engine introduces the concept of a DataBinding.

A DataBinding realizes a one-way mapping from the columns of a data source (i.e. kinds of data) to the properties of a Primitive, by returning a PrimitiveGroup which contains one Primitive for every row in the data source (read by the DataHandler, which is a very thin wrapper around Processing’s Table object).

The mapping is specified by the contents of a DataBindingSchema, which is a hashmap (read in from a YAML file, see examples 1 2) in which the keys are the properties of Primitives and the values are column names in the data source. As a consequence, the DataBindingSchema specifies how the visual properties of Primitives respond to the data stored in the CSV file that is being read in. The DataBinding also has a validation method which throws a custom exception when the DataBindingSchema refers to column names and/or primitive properties which do not exist. It will ultimately also do type-checking.
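To illustrate, here is a sketch of that validation step in Processing (Java). The schema keys and column names in the comment are invented for the example; the real schema files are the ones linked above.

```java
// A hypothetical DataBindingSchema, as it might appear in YAML:
//   x:    "gdp_per_capita"
//   y:    "life_expectancy"
//   size: "population"

class DataBindingException extends RuntimeException {
  DataBindingException(String message) { super(message); }
}

// Every schema key must be a real Primitive property, and every schema
// value must be a real column in the data source.
void validate(HashMap<String, String> schema,
              ArrayList<String> primitiveProperties,
              ArrayList<String> columnNames) {
  for (String property : schema.keySet()) {
    if (!primitiveProperties.contains(property)) {
      throw new DataBindingException("Unknown primitive property: " + property);
    }
    if (!columnNames.contains(schema.get(property))) {
      throw new DataBindingException("Unknown column: " + schema.get(property));
    }
  }
}
```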


Papal Documents Project Part #3 by Yanchen Liu

This post focuses on my work to digitize and transcribe Western MS 82, a major canon law text. I describe some of the goals of this project and indicate why this particular text should be of interest to the scholarly community, and hence why its introduction into broader circulation is particularly warranted. I then conclude with a brief note updating my work to produce a new, expanded version of the Libraries’ webpage Papal Documents: A Finding Aid.

Western MS 82, currently preserved in the Rare Book & Manuscript Library of Columbia University, is the most deluxe of the six surviving manuscripts of the Collectio Sinemuriensis, or Semur Collection. Modern-day scholars generally consider the second recension of this collection to be the earliest “Gregorian Reform” canonical collection, embodying the spirit of the great Church reform movement of the eleventh and twelfth centuries. According to Linda Fowler-Magerl, the initial version of this canonical collection was composed at Reims at the close of the tenth century. While we know something of this first version, many of its canons have not survived. The second version, however, is better attested: there are six surviving examples, which date from the second half of the eleventh century and the early twelfth century. The earliest and most complete of these extant manuscripts, MS Semur-en-Auxois, BM M. 13, has given the collection its name. Columbia’s Western MS 82 seems to have been copied during the first decades of the twelfth century in northern France. To date there has been no critical edition or systematic study of either the Collectio Sinemuriensis or Western MS 82.

Columbia acquired Western MS 82 in 2004. At the end of last year (2015) the manuscript was scanned into high-resolution images by Columbia’s Preservation and Digital Conversion Division. My project aims to produce a digital transcription that preserves nearly all the scribal and spelling features of the manuscript, providing historians, paleographers and medievalists not only with the contents of the full text but also with textual clues that can supply valuable data for paleographical, linguistic, and historical investigations.

Western MS 82 is a parchment manuscript comprising 15 quires and 119 folios, with the first folio and the last two folios of the last extant quire missing. On the verso of the first flyleaf, an eighteenth-century annotation indicates that this manuscript contains “Notitia provinciarum ecclesiasticarum Galliae“, also known as the Notitia Galliarum, and a “Collectio canonum“. The texts on the first nine folios, comprising the Notitia Galliarum and the capitulatio (a list of the rubrics of the canons in the collection), are laid out in two columns. The remaining text on ff. 10-119, i.e., the canons themselves, is written in a single column. The canons are grouped into three books. Each book begins with an exaggerated and decorated initial (on folios 10r, 49v and 99v) that is nine to twenty-one lines high. The bodies of the canons are copied in a neat book hand. The rubrics of the canons, however, which are written in red ink, are very probably later additions, as they are often written in uneven lines at the end or even on the edge of paragraphs, and are enclosed by a curled line drawn on their left side to distinguish them from the canons. The script has the appearance of early-stage Proto-Gothic.

Compared with other manuscripts of the Collectio Sinemuriensis, one of the most significant and intriguing features of Western MS 82 is that it opens with the Notitia provinciarum ecclesiasticarum Galliae, a list of the metropolitan cities and provinces of Gaul.

MS 82 picture 2

We do not have the first half of the list, as a result of the missing first folio. Nevertheless, the surviving second half of the document, as well as the very fact that the compiler(s) of Western MS 82 chose to incorporate this document, raises many questions and invites further examination. This document, also known as the Notitia Galliarum, was originally composed between the late fourth and the very early fifth century, before the massive Germanic invasions at the end of the first decade of the fifth century that isolated much of Gaul from the remainder of the fracturing Western Empire. There is still wide debate as to whether the Notitia Galliarum was initially created out of administrative interests by a political institution or by a local church. Nevertheless, the earliest manuscript of this document, dating to the seventh century, contains in its rubric the phrase “ut ordo exposcit pontificum”, suggesting that at least since the early Middle Ages, the Notitia Galliarum has been regarded and employed as a religious document mapping an ecclesiastical space.

MS 82 picture 3

The Notitia appears in several medieval canonical collections. There are, nevertheless, several peculiar facts about the specific version included in Western MS 82. In the first place, of all the surviving manuscripts of the Collectio Sinemuriensis, only Western MS 82 incorporates this document. Further, while the versions that appear in other texts have been updated, this version appears to have retained its late antique form almost entirely. In other words, it was not updated to represent the actual provincial configuration of early twelfth-century France. Only two new cities, civitas Nivernensium and civitas Nundunum, were added to what is found in the oldest extant version, included in a seventh-century manuscript. Some cities included in the list were not actually bishoprics during the eleventh to twelfth centuries, e.g., civitas Oscismorum, civitas Diablintum, civitas Bononensium and civitas Tungrorum. Why did the patron of Western MS 82 want such a “dated” text to stand at the beginning of the manuscript? Why did he invoke the ancient divisions of Gaul? What kind of a “conceptual territory” did he envision this canonical manuscript to represent? There may never be precise answers. Two hypotheses, however, can be offered.

The first possibility is that this text is a product of territorial conflicts in northern France between ecclesiastical institutions. These were certainly common in the high Middle Ages. The patron of Western MS 82 might have requested the incorporation of old Roman texts, such as Notitia Galliarum, to buttress his territorial claims. The second possibility is that this ancient document, together with the lands of Gaul that it delineates, may have nothing to do with the real space, but that the patron of Western MS 82, by incorporating in the manuscript an ancient text that depicts the administrative system of the area, sought to invoke a sense of authority.

Both of these possibilities point to an emphasis on the authority and legitimacy manifested through antiquitas. Such emphasis is further accentuated in this manuscript through canons like the one that opens the second book (ff. 49v – 50r), where the initial letter P is exaggeratedly decorated (twenty-one lines high with a form of a bird or dragon vomiting tendrils and flowers). The red-ink rubric of the canon reads “Quod non liceat apostolicis successoribus constituta predecessorum infringere.” This canon, possibly drafted by Hincmar of Reims, is ascribed to Pope Symmachus (r. 498-514) and prohibits the successors of the ancient popes from abrogating the administrative and legal decisions of their predecessors.

MS 82 picture 1

At the same time, Western MS 82, through this specific canon, appears to have employed antiquitas to suppress, rather than buttress, the legislative power and ecclesiastical rights of the contemporary papacy. Such a feature would seem to distinguish this canon law manuscript from the kind of canonical collection likely to be favored by the reform papacy of the eleventh and twelfth centuries, which generally asserted the intrinsic juridical power of the popes. Hence, modern scholars’ widely shared view of this canon law manuscript as essentially a product of the Gregorian Reform may oversimplify the character of Western MS 82. Last but not least, this canon, together with other canons in this manuscript, appears to indicate a connection between Western MS 82 and Reims. These facts seem to point us to the historical political tension between the archiepiscopal see of Reims and the papacy, and to the power relations between Rome and Reims during the Middle Ages.

The digital transcription of Western MS 82 will hopefully make this manuscript more accessible to such investigation. Framing the transcription with XML tags in accordance with the rules of the Text Encoding Initiative will also, hopefully, enable users of the project to easily navigate and position themselves within the canons, to search for specific terms and variants within the codex and retrieve data in a more orderly fashion, and eventually to more easily conduct comparisons between Western MS 82 and the other surviving manuscripts of the Collectio Sinemuriensis.

In closing, I should note that my project of updating the Papal Documents finding aid is in its final review. Most of the entries now have an annotation which introduces and summarizes the work. In addition, the structure of the whole document has been adjusted to enhance its navigability. The “Background Bibliography” section has been considerably augmented in order to help researchers and students grasp the significance of individual works, and the history and terminology relevant to the study of papal documents.

Knight Lab Timeline JS3, an easy to use multimedia timeline tool

TimelineJS is an easy-to-use timeline creator developed by Northwestern University’s Knight Lab. Timelines created with TimelineJS can include a variety of multimedia, are easily published and embedded in websites, and are generated on demand for the user. This means that a timeline can be updated after being published — a very useful feature for projects which continue past a publication date. Further, TimelineJS is open source, highly customizable (for those technically inclined), and available under the Mozilla Public License (MPL), version 2.0. Most importantly, TimelineJS allows you to create beautiful projects with relative ease. You can see some examples of timelines created with TimelineJS by clicking here.

The documentation for TimelineJS is extensive and covers most of the basic topics, so there is no point in recreating a basic tutorial in this blog post. A timeline can be created by following the 4 basic steps listed here. Instead of giving a crude tutorial, I will use this post to highlight some potential uses for this tool that are not immediately evident. With a little bit of work, this can be a fantastic tool for film education and film production planning.

Two other uses for this tool come to mind for filmmakers and film students. The first is creating timelines that break down movies shot by shot in order to explain them more easily. I have prepared a timeline for just this purpose as an example; it can be seen below. Click here to see it in full screen.

The second use for this tool is production planning. With a little help, a director could easily upload storyboards and visual inspirations for a scene into a timeline like this, making it easy for the crew to get a chronological list of inspirations for any given scene. A crew with iPads could have easy access to the entire archive of visual references for a scene without much trouble.

TimelineJS is a wonderful tool that, with a little bit of creativity, can be used for a variety of different purposes.

App4Apis (Pre-Alpha)

Phase: Pre-Alpha.

A month on from the last update, we are back with another blog post to bring you up to speed. We will start with a brief summary of our work and then the mandatory status update.

What are we trying to do?

With the increase in data resources around the world, most of which are exposed through REST APIs, we felt the need to develop an application that eases access to these datasets. To that end, we intend to develop a web application that helps users query the API of their choice without developing any new tools or scripts. More details on the project can be found at link.

Current Status:

During our last update, we were working on designing the application and building the code to access three APIs of interest (Geocode, Internet Archive, and Human Resources). This month, we froze the design for our initial version. (Designer alert: the design is frozen only on paper and can, and will, be modified until the project is released, and sometimes even after the release!) The screens below provide a barebones visualization of the application (by barebones, I mean that it is not visually appealing and reflects just the functionality). Cosmetic changes will be duly applied as we finish the skeleton and flesh of the application. The screens, in the order of their access, are:

Capture1

There are 2 flows in our application.

Flow 1: The user selects one of the preset APIs, uploads a CSV file, and gives the column numbers of the parameters required to query the API. We run the preset configuration over the CSV file and return a downloadable CSV with a results column populated from the API responses. The corresponding screens are:

Selecting the preset:

Capture2-1

File upload and values to be entered:

Capture2-2

File populated with the results:

Capture2-3
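Under the hood, Flow 1 boils down to one API call per CSV row. The sketch below shows the idea; the method names and the naive CSV handling are illustrative assumptions, not our actual code.

```java
// Query a preset API once per CSV row and append the raw response
// as a new column (naive CSV escaping, for illustration only).
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.ArrayList;
import java.util.List;

public class PresetRunner {
    static List<String> run(List<String[]> rows, int paramColumn, String baseUrl)
            throws Exception {
        List<String> out = new ArrayList<>();
        for (String[] row : rows) {
            String url = baseUrl + URLEncoder.encode(row[paramColumn], "UTF-8");
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                StringBuilder response = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) response.append(line);
                out.add(String.join(",", row) + "," + response.toString().replace(",", ";"));
            }
        }
        return out;
    }
}
```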


Flow 2: The user enters a new API URL. We expect this URL to be complete, with all the necessary parameters to query the API, as it is used as the example for all subsequent new requests. For example,

https://api.cityofnewyork.us/geoclient/v1/address.json?houseNumber=314&street=west 100 st&borough=manhattan&app_id=7a4e791c&app_key=2015DigitalIntern

Here the base URL is https://api.cityofnewyork.us/geoclient/v1/address.json? and the query parameters are houseNumber=314&street=west 100 st&borough=manhattan&app_id=7a4e791c&app_key=2015DigitalIntern
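A sketch of how the request pattern might be extracted from such an example URL (this shows the idea, not our exact implementation):

```java
// Split an example URL into its base endpoint and query parameters.
import java.net.URI;
import java.util.LinkedHashMap;
import java.util.Map;

public class PatternExtractor {
    public static Map<String, String> queryParams(String url) throws Exception {
        URI uri = new URI(url.replace(" ", "%20"));  // tolerate unencoded spaces
        Map<String, String> params = new LinkedHashMap<>();
        for (String pair : uri.getRawQuery().split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        // base endpoint: uri.getScheme() + "://" + uri.getAuthority() + uri.getPath()
        return params;
    }
}
```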

We query the API with this sample URL to get all the output parameters and allow the user to select the fields of interest. We then let the user supply new values for the query parameters and display the results for the specified output parameters. The corresponding screens are:

User enters the URL:

Capture3-1

User selects the output parameters:

Capture3-2

User gets the output:

Capture3-3


As you can see, the screens are designed only to implement the required functionality; that is why we are still in the pre-alpha phase of development. The tasks in the pipeline are:

  • Finish a few pending screens.
  • Integrate the Geocode API into the presets.
  • Develop functionality to let the user upload a file in the ad hoc query case.
  • Make cosmetic changes so the website is visually appealing.
  • Exhaustive end-to-end testing.
  • Deploy!

We plan to finish these tasks by the end of January 2016 and release the first version of our product.

Until next blog post, ciao.

Thanks,

Rohit

OpenSCAD & SUMA updates (DCIP Blog Post 1.1)

As promised in the last blog post, here is a write-up about the (amazing) code-based 3D modelling software. OpenSCAD is a 3D modeling program based on constructive solid geometry (CSG), i.e. a complex surface or object can be created by using Boolean operators to combine simpler objects. There are some basic commands one should know how to use before working with the software, and once you do, 3D modelling becomes a “relative” piece of cake!

These 10 commands fall into three categories: shapes (cube, sphere, cylinder), transforms (translate, scale, rotate, mirror), and CSG (Boolean) operations (union, difference, intersection). Most of the operations you will need can be done with these 10 commands. So, to summarize, 3D modelling in OpenSCAD is as simple as mastering the usage of 10 commands and then decomposing your problem (the 3D model) into a combination of shapes using CSG.
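For instance, here is a small model of my own (not the one in the screenshot below) that uses commands from each category:

```openscad
// A plate with a dome, drilled through: shapes + transforms + CSG.
difference() {
    union() {
        cube([40, 40, 10], center = true);       // base plate
        translate([0, 0, 5]) sphere(r = 15);     // dome on top of the plate
    }
    cylinder(h = 60, r = 5, center = true);      // subtract a through-hole
}
```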

A more elaborate use of the above commands, showing just how powerful they can be, is pictured below.

power of OpenSCAD commands

The other project that I mentioned in my last post is SUMA. There are two possible ways to go about it. A trial version can be developed using Vagrant and VirtualBox (the developers have packaged a version as a virtual machine, which can, theoretically, be accessed from your own PC). I have been in constant touch with the group at NCSU Libraries who developed the system, and they have been helping me debug it, but there has been only limited success in this direction. I am able to run the client part of the system, but there are issues with the Ansible (automation platform) file, and I am waiting for them to release its 2.0 version. It has been quite an informative interaction, and they have been very helpful in this regard.

The other way is to go all in and develop a complete system, coding all the server and client pages, and keeping everything on the same machine (instead of logging onto a virtual machine as in the trial version). I have done a fair bit of coding, along with SQL integration to create the necessary databases for the program and manage the correct permissions for admin and user accounts. I am still working on the server side of things, and since this is the first time I am working on something like this, there is a lot of reading involved as well. The plan is to first develop the framework on a PC, and then port it to the more restricted mobile/tablet platform once the system works seamlessly on the PC.

Cheers,
Nikhil

Perceptual Bases for Virtual Reality: Part 1, Audio

An important part of creating a truly immersive VR experience is the accurate representation of sounds in space to the user. If a sound source is in motion in virtual space, it stands to reason that we ought to hear the sound source moving.

One solution to this problem is to use an array of loudspeakers arranged in space around the user. This technique – so-called ‘ambisonics’ – is not only expensive, but also requires space far in excess of the footprint of the average user seated at a consumer-grade computer. For example, Tod Machover’s (MIT) setup is shown below, and is typical of some ambisonic setups. The 5.1 standard for surround sound in home theatres (or related extensions, such as 7.1 – meaning 7 speakers plus a subwoofer) is consumer-grade technology which operates on a similar principle. Clever mixing and editing of movie soundtracks aims to trick the listener into perceiving tighter sound-image associations by cueing sounds, the sources of which are apparent from the visual content of the media being displayed or projected, in a location in the sonic field corresponding to their virtual source.

Ambisonic sound setup with a circular array of Bowers and Wilkins loudspeakers surrounding a listener

Tod Machover’s Ambisonic Setup (Source: http://blog.bowers-wilkins.com/sound-lab/tod-machovers-ambisonic-sound-system/)

It might seem counterintuitive, but most of the psycho-acoustical cues that humans use to localize sounds in space can be replicated using headphones. This follows from the unsubtle observation that we have only two ears, and the slightly more subtle reflection on the results of experiments designed to establish precisely which sources of information our brains depend on in determining the perceived location of a sound source. This behavior is known in the related psychological literature as acoustic (or sound) source localization.

Jobbing programmers, however, don’t have to wade through the reams of scientific research that substantiate the details of the various mechanisms of acoustic source localization, as well as their limitations and contingencies. The 3D Audio Rendering and Evaluation Guidelines (Level 1) spec provides baseline requirements for a minimally convincing 3D audio rendering, along with physiological and psychological justifications for those requirements. Whilst it is exceptionally outdated and outmoded, it still provides a useful overview of the important perceptual bases for VR audio simulation. In particular, this specification is one of the motivating documents in the design of the (erstwhile) open source OpenAL 3D audio API and its descendants. In the remainder of this post, I briefly describe the most important binaural (i.e. stereo) audio cues which are thought to facilitate acoustic source localization in the human brain.

Interaural Intensity Difference

In plain terms: the intensity of the sound entering your ears will be different for each ear, depending on the location of the sound with respect to your head. This is due to two factors:

  1. sound attenuates in intensity as it travels through a medium; your ears are a non-zero distance apart, and sound propagates at a finite speed
  2. (more significantly) your head may ‘shadow’ the source of the sound when the source is off-center

You might think that you don’t have a big head, but it’s big enough to make a difference!
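For a feel for the numbers, here is a toy calculation of the distance-only part of the effect. Head shadowing, the more significant factor, is frequency-dependent and not modeled here, and the distances are made up for the example.

```java
// Free-field intensity follows the inverse-square law, so the
// level difference between the ears (in dB) from distance alone is:
public class Iid {
    static double levelDifferenceDb(double nearDist, double farDist) {
        return 10.0 * Math.log10((farDist * farDist) / (nearDist * nearDist));
    }

    public static void main(String[] args) {
        // A source 1.0 m from the near ear and 1.2 m from the far ear:
        System.out.printf("IID = %.2f dB%n", levelDifferenceDb(1.0, 1.2)); // ~1.58 dB
    }
}
```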

Interaural Time Difference

Since sound has a relatively unchanging velocity as it passes through the most common media that we may wish to model virtually, the time that it takes for sound to propagate from the source to each ear differs very slightly. Our minds are sensitive to these differences, perhaps owing to the evolutionary utility of knowing the location of noisy predators (or prey). Knowing that the speed of sound is roughly constant, the mind performs a rudimentary triangulation in order to locate the sound source in the relevant plane, relative to the listener.
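A back-of-the-envelope version of that triangulation uses the common far-field approximation ITD = d·sin(θ)/c; the constants below are textbook approximations, not values from the spec.

```java
// Approximate interaural time difference for a source at a given azimuth.
public class Itd {
    static final double SPEED_OF_SOUND = 343.0; // m/s in air at ~20 C
    static final double EAR_DISTANCE = 0.21;    // m, approximate head width

    static double itdSeconds(double azimuthRadians) {
        return EAR_DISTANCE * Math.sin(azimuthRadians) / SPEED_OF_SOUND;
    }

    public static void main(String[] args) {
        // A source 90 degrees to one side gives the maximum ITD:
        System.out.printf("max ITD = %.0f microseconds%n",
            itdSeconds(Math.PI / 2) * 1e6);  // ~612 microseconds
    }
}
```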

Audio-Visual Synergy

Finally, a less physiological cue: the coincidence of aural and visual stimuli tricks the brain into attributing the contemporaneous sound to the source denoted or signified by the visual stimulus. By keeping latency between aural and visual stimuli low, we improve the likelihood of the perception of audio-visual synergy. This, in combination with the careful modeling of the above phenomena (amongst many others), contributes towards a more immersive aural experience. In turn, this improves the credibility of VR simulations that have an aural component.

Why Papal Documents? Post #2

As the first phase of my project of updating Papal Documents: A Finding Aid approaches its end, I have discovered many possibilities for making this document a more elaborate and helpful tool for historians of papal documents, scholars of European history, and all medievalists. However, before summarizing my work during the second phase of my project at the DHC and discussing the possibilities for the next phase, I would like to talk about why papal documents deserve attention and close study from scholars and students.

Briefly speaking, papal documents are one of the key lenses for investigating the history of the papacy, which, as a governmental institution, has played a vital role on the stage of Western European history for nearly two millennia, especially during the Middle Ages. These documents, issued from the Roman Curia as Apostolic letters under the order of the Roman pontiffs, carry invaluable information covering all sorts of ecclesiastical, imperial and social affairs. These affairs comprise, but are not limited to, convocations of general councils, appointments and deprivations of episcopal orders, canonical constitutions for licit marriage, grants of indulgences and provisions, excommunications of criminals, and the numerous conflicts between Roman pontiffs, local bishops, emperors, kings, religious groups and communities all across Europe. Medieval Christendom cannot be sufficiently mapped without tracing the contacts of the Apostolic See from the Mediterranean to the North Sea, nor can the political, economic and cultural transitions in European lands be fully understood without taking into consideration the official pronouncements of the highest authority of the Status Pontificius. For scholars of political and church history, the political implications of the Walk to Canossa have to be detected and analyzed with reference to the correspondence between Pope Gregory VII and King Henry IV. For paleographers and researchers of medieval documents, Pope Innocent III’s letter to the Archbishop of Milan on techniques for identifying forged papal bulls – including the examination of the formula of the writing – provides an indispensable image of the production of both genuine and faked documents during the High Middle Ages. For scholars of Western European legal history, especially the history of medieval canon law, papal decretals have been one of the principal resources for canonical compilations and legal studies. For medievalists who are interested in material culture, the papyrus rolls, parchments, lead seals (bullae) and different kinds of threads used by the Papal Chancery can serve as crucial instruments for examining medieval manuscript culture.

Creating an up-to-date, easy-to-use guide to help scholars and students search for and learn to use papal documents for their own studies, therefore, has been my goal in updating the existing Libraries’ Papal Documents Finding Aid.


Over the past several weeks, I finished replacing all the outdated entries in the Finding Aid with their most recent editions. Further, other useful reference books and important introductory articles have been added, such as Christopher R. Cheney’s The Study of the Medieval Papal Chancery: the Second Edwards Lecture delivered within the University of Glasgow on 7th December, 1964. Together with these works, entries for paramount collections such as Harald Zimmermann’s Papsturkunden, 896-1046 and Dietrich Lohrmann’s Das Register Papst Johannes’ VIII, as well as online databases of papal documents such as Thomas Frenz’s Materialien zur Apostolischen Kanzlei, were also added to the Finding Aid.

Further, every entry in the Finding Aid now has a link that directs researchers to the exact CLIO page, which will be especially helpful for finding articles that do not have their own CLIO page but are located in particular volumes of research journals. Moreover, I have found that some important existing entries for monographic works, such as Johannes Ramackers’ Papsturkunden in den Niederlanden: Belgien, Luxemburg, Holland und Französisch-Flandern, cannot be found through CLIO due to Columbia’s cataloging system, even though they are actually in the stacks. These entries are now provided with detailed call numbers that can point researchers to their precise location in the libraries.

Moreover, I have made two supplementary lists in the process of updating the Finding Aid: one, a list of collections that the Columbia libraries currently do not hold, and two, a list of duplicate entries in CLIO that should ideally be brought together in a single record. These lists will help enrich Columbia’s collection of important works on papal documents and avoid confusion when researchers encounter redundant entries for the works they are looking for in CLIO.

My next step in this project will further strengthen the usability of this document by 1) adding a glossary of the terminology commonly used in the field of papal documents, 2) attaching brief annotations to every entry, and 3) strategically re-arranging the entries to build up a systematic, user-friendly guide that is not only informative for scholars and researchers in this specific area, but also easy to access for all interested medievalists and students. I also plan to provide pointers to the digitized manuscript images of individual papal registers located in the Digital Humanities Center to make this useful resource more readily accessible to Columbia scholars.

Papal Document Finding Aid & The Digitization of Western MS 82


Greetings, digitalists and medievalists.

I am Yanchen Liu. I will be working at the Digital Humanities Center of Columbia University Libraries as an intern this year. I entered the Ph.D. program in the History of Christianity at Columbia University in the fall of 2015. My primary research interest concerns the interaction between the medieval papacy and the compilations of medieval canon law during the eleventh to the thirteenth century in Western Europe.

My project at the Digital Humanities Center comprises two parts. The first part originally aimed to develop a prototype for an online database, based on a digitization of older finding tools, that could enable scholars to quickly access papal documents. I planned to focus on the pontificates from Pope Leo IX to Gregory VII (1049-1085), using the second edition of Philip Jaffé’s Regesta Pontificum Romanorum as the basic text. The plan was to update the information, references and bibliographies in Jaffé and to connect the basic text to other online resources and databases on papal documents. However, while searching for the most recent resources with which to update Jaffé, I came across two current projects of the Göttingen Academy of Sciences and Humanities: Regesta Pontificum Romanorum online, as well as a third edition of Philip Jaffé’s Regesta Pontificum Romanorum, which will eventually produce something like what I had wanted to begin developing. I have established connections with the researchers on these projects to learn more about their work, and hope to continue to follow the projects closely to see whether they implement all of the features that I had hoped to include in my database.

Through this experience I have discovered a large number of key research projects, online databases, paper publications and analytical works on medieval papal documents, produced during the last few decades, that are scattered around the world and unfamiliar, or even unknown, to scholars and students. I have therefore decided to switch my goal for this part of my internship to contributing my findings to Karen Green’s current project to update the Libraries’ Ancient and Medieval resource guide. I believe this new information can prove useful for scholars and students as they seek to navigate the complex world of medieval resources.

The second part of my project will involve a digitization of Western MS 82, a twelfth-century canon law manuscript preserved in the Rare Book & Manuscript Library of Columbia University.

Columbia acquired this manuscript in 2004. The codex has 119 folios and was probably produced in the region of Champagne in France. High-resolution images of nine pages in this manuscript are available on Digital Scriptorium.

Currently I am transcribing the capitulatio of the canonical collection Collectio Sinemuriensis in the manuscript. With the help of Dr. Consuelo Dutschke, curator of the Rare Book & Manuscript Library, this manuscript is being processed by the Conservation Department of Columbia. Hopefully, this preparation work will be done by the end of November, and further transcription and digitization will follow immediately after that.