
Review: Enslaved

A review of Enslaved, a discovery hub for information about people involved in the historic slave trade, directed by Daryle Williams, Walter Hawthorne, and Dean Rehberger

Published on May 28, 2024

Enslaved: Peoples of the Historical Slave Trade

Project Directors
Daryle Williams, University of California, Riverside
Walter Hawthorne, Michigan State University
Dean Rehberger, Michigan State University

Project URL

Project Reviewer
Toby Burrows, University of Oxford and University of Western Australia

Project Overview

Daryle Williams, Walter Hawthorne, and Dean Rehberger

In recent years, a growing number of archives, databases, and collections that organize and make sense of records of enslavement have become freely and readily accessible for scholarly and public consumption. More and more researchers in slavery studies are accessing both online and physical archives for their scholarly work and publications. At the same time, there has been a surge of interest among public and descendant communities in recovering the lives and histories of those who were enslaved. This proliferation of projects and scholarly work presents many challenges. The siloed nature of the work makes disambiguating and merging individuals across multiple projects very difficult. Likewise, given the variety of methods and approaches for collecting information, searching, browsing, and doing qualitative and quantitative analysis across information sets is all but impossible. Digital projects regularly run out of funding and can disappear offline; in a similar way, the research of humanities scholars that informs publications often remains inaccessible in desk drawers. This inaccessibility forces scholars and the public alike to recover the strands of information about enslaved individuals from the same archives over and over again. Worse yet, the information is gobbled up by pay-for-access services that keep information that should be free behind paywalls.

In response to these challenges, Matrix: Center for Digital Humanities & Social Sciences at Michigan State University (MSU), in partnership with the MSU Department of History, the University of California, Riverside, and dozens of scholars at multiple institutions, has begun work on Enslaved: Peoples of the Historical Slave Trade, a constellation of projects, software, and services built to recover information about individuals who were enslaved, owned slaves, or participated in the historical slave trade, and to make that information searchable, accessible, and sharable. Enslaved, then, is: 1) a long-term initiative that accepts the idea that it will take decades of work to make even a small dent in recovering information about millions of individuals and their stories, 2) a web of interconnected projects that need varying types of expertise and workflows, and 3) at heart an aspirational Linked Open Data (LOD) initiative.

The projects comprising Enslaved are as follows (more information can be found online):

  • Ethics: Responsible stewardship of historical data about enslaved people in digital spaces is the cornerstone that informs the development of the different projects. Although the historical record and data-driven approaches to history have often rendered enslaved people as nameless, we aspire to use historical data collections to recover, aggregate, and make accessible the names and life stories of enslaved people. We recognize that primary source material reflects the perspectives and biases of the material’s authors, reflecting the systems of power and racist ideologies of the periods in which they were written. Data and information ethics is its own ongoing project that informs the work of the other projects.

  • Journal of Slavery and Data Preservation: The Journal of Slavery and Data Preservation is a unique peer-reviewed journal that publishes, with DOIs, the evidence and data behind the peer-reviewed publications of humanist scholars in the field. It is the portal for all information and data that enters the hub as Linked Open Data. It also acts as a place to preserve and review online projects (all data and information are held not only by the journal but also at Harvard's Dataverse). It has become the most successful part of the project, with the number of issues set to double in 2024.

  • Stories: To make the stories of individuals central, as well as to provide educational materials, we included a number of stories (over 100 and growing) that can be explored by place, gender, and life events. New stories are regularly added by contributors. Places and life events also connect to new data and information that enter the hub.

  • Education: As part of the project, a number of lesson plans and opportunities for learning are being developed, including Summer Research Opportunity Programs and an NEH Institute.

  • Hub: It is a mistake to think of the hub, where researchers can discover records and linked data, as a magical tool for finding individuals, a way to bring together widely scattered shards and fragments to help rebuild lives often lost to history. While this is an aspiration, and it can happen, making it a regular experience will require billions more bits of information. What is key to Linked Data is not simply the ability to draw together and disambiguate records but, most importantly, the ability for scholars and the public to discover the original records, which allows them to reconstruct lives.

  • Infrastructure and Best Practices: One of the key parts of the project is building the capacity to bring together billions of bits of disparate data. Doing so required building a knowledge graph that makes use of a modular ontology and Linked Open Data (LOD). Much of the work for the projects is hidden beneath the surface, in the use of, and feature additions to, Wikibase, OpenRefine, and triple stores. This work represents weeks of sessions with 20 or more scholars in a room developing Competency Questions and Controlled Vocabularies.
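The core LOD idea sketched in the list above — using stable identifiers and ontology-level links so that records from siloed datasets can be disambiguated and merged — can be illustrated with a few lines of Python. All identifiers, names, and predicates below are invented for illustration; they are not actual data or ontology terms from the Enslaved hub.

```python
# Triples from two hypothetical source datasets, keyed by entity QID.
dataset_a = [
    ("Q101", "name", "Maria"),
    ("Q101", "recordedIn", "Baptismal register, 1817"),
]
dataset_b = [
    ("Q900", "name", "Maria"),
    ("Q900", "recordedIn", "Ship manifest, 1815"),
]

# An ontology-level assertion that the two identifiers denote the same
# individual (in real Linked Open Data, an owl:sameAs link).
same_as = {("Q101", "Q900")}

def merge(datasets, same_as):
    """Group triples from all datasets under one canonical identifier
    per individual, following the same-as links."""
    canonical = {b: a for a, b in same_as}  # e.g. Q900 -> Q101
    merged = {}
    for triples in datasets:
        for subj, pred, obj in triples:
            key = canonical.get(subj, subj)
            merged.setdefault(key, set()).add((pred, obj))
    return merged

graph = merge([dataset_a, dataset_b], same_as)
# Both source records now appear under one canonical identifier,
# so a researcher sees the baptismal register and the ship manifest
# as fragments of a single life.
for pred, obj in sorted(graph["Q101"]):
    print(pred, "->", obj)
```

This is only a toy model of record linkage; the actual hub performs disambiguation at a far larger scale, over a triple store rather than in-memory lists.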

In addition to the project directors, dozens of people and institutions have helped to develop Enslaved, ranging from pioneers in the field like Gwen Midlo Hall, who inspired the project (contributors are too numerous to mention here but appear on the project's sites), to the Hutchins Center for African & African American Research at Harvard University, the Data Semantics Lab at Kansas State University, OCLC, the Social Networks and Archival Context project, and Wikimedia Deutschland, among many more. The work has been generously supported by the Mellon Foundation, the National Endowment for the Humanities (NEH), the New York Life Foundation, and many individual donations. News of the project, as well as more detailed information and documentation, can be found on the project site.

Project Review

Toby Burrows

Enslaved: Peoples of the Historical Slave Trade is described as “a constellation of projects, software, and services built to recover information about individuals who were enslaved, owned slaves, or participated in the historical slave trade.” It brings together data from 27 projects relating to the African slave trade, listing more than 678,000 people, 433,000 events, and 9,400 places. 

The project aggregates data extracted from primary sources but also includes more than 120 narrative stories of individual enslaved people, written to provide a more humanistic context than the basic data around names, events, and places. Some of these stories — and some of the data — are contributed by researchers external to the project.

The project has a Statement of Ethics which addresses questions around the “responsible stewardship of historical data about enslaved people in digital spaces.” The focus on individual people and their stories is intended to go beyond the limits and biases of existing archival structures and sources. The project’s controlled vocabularies engage with “the evolving scholarship on anti-racist classification and terminology in information studies and black digital humanities.”

The technology used by the project is described as “building a knowledge graph that makes use of a modular ontology and Linked Open Data (LOD),” but little further detail is given. Blazegraph is used as the triple store. Every entity has a unique identifier, in the QID format used by Wikibase and Wikidata.

The ontology is extensible to accommodate new data sources.1 Its development was guided by a set of Competency Questions (natural-language search queries) collected from the public, schools, and researchers.2 There are controlled vocabularies covering Events, Persons, Places, and Sources.3
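A competency question is, in effect, a natural-language query that the knowledge graph must be able to answer as a graph-pattern match. The sketch below shows one such question ("which persons are linked to a sale event at a given place?") answered over a small in-memory triple list; every QID, predicate, and class name here is invented for illustration and does not reflect the project's actual ontology terms.

```python
# Hypothetical (subject, predicate, object) triples; identifiers and
# vocabulary terms are invented, not drawn from the Enslaved ontology.
triples = [
    ("Q1", "type", "Person"),
    ("Q1", "participatedIn", "Q7"),
    ("Q7", "type", "SaleEvent"),
    ("Q7", "tookPlaceAt", "Q30"),
    ("Q30", "label", "New Orleans"),
    ("Q2", "type", "Person"),
]

def persons_sold_at(place_label):
    """Answer the competency question: which persons are linked to a
    sale event that took place at the named place?"""
    has = lambda s, p, o: (s, p, o) in triples
    # Match the graph pattern step by step, as a SPARQL engine would.
    places = {s for s, p, o in triples if p == "label" and o == place_label}
    events = {s for s, p, o in triples
              if p == "tookPlaceAt" and o in places and has(s, "type", "SaleEvent")}
    return sorted(s for s, p, o in triples
                  if p == "participatedIn" and o in events and has(s, "type", "Person"))

print(persons_sold_at("New Orleans"))  # -> ['Q1']
```

In the production system, a query like this would be expressed in SPARQL against the Blazegraph triple store rather than hand-coded in Python; the point is that each competency question fixes a graph pattern the ontology must support.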

The project is a uniquely valuable way of aggregating a large amount of data relating to enslaved African peoples, building on a wide range of specific projects. It also aims to add a human perspective to the raw data through narrative stories about specific individuals. A good deal of contextual material is provided through articles in the project's data journal, lesson plans for use in schools, and full accounts of the various sources. The result is a rich and sophisticated listing of nearly 700,000 people connected with the African slave trade, with scope to keep adding more data and stories. Enslaved deploys various standard Linked Open Data elements (an ontology, unique identifiers, a graph database) to provide a framework for aggregating and exploring data from disparate sources. Searching and browsing are relatively straightforward, using the different facets of the ontology. Visualizations, in the form of bar graphs and pie charts, are also provided to summarize the data numerically.

The work of the project has been supported by the Mellon Foundation, the National Endowment for the Humanities (NEH), and the New York Life Foundation, as well as through individual donations. Engagement activities take the form of stories and lesson plans developed with teacher partners and aligned with school curricula. Other learning opportunities include an annual Summer Research Opportunity Program aimed at “encouraging talented underrepresented undergraduates from across the country to pursue graduate study,” and a four-week NEH Summer Institute for higher education faculty and graduate students. The project has been reviewed in various media, including the Washington Post, the Smithsonian Magazine, and National Public Radio.

The project also produces the Journal of Slavery and Data Preservation, a peer-reviewed journal which publishes the data behind the work of scholars in this field, and also helps to preserve and review online projects. Over the last four years, the journal has published almost 70 articles.

There are a few areas for future growth. The record for an individual person includes a link to the data source from which it originates, but not to the actual record for that person within the source. Person-level links would be helpful for checking data sources for additional information. Individual person records and narrative stories ought to be cross-linked. Another valuable addition would be a map interface for browsing entities that have a geographical location. Network graph interfaces, for browsing the data through a visual representation of the connections between people, places, and events, would also be useful. Additionally, there is no public access to the project's SPARQL endpoint, though this would be a useful alternative for advanced searching. Entire files and individual entries can be downloaded, but specific result sets (e.g., all infants) cannot.
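Until result-set export is supported server-side, a researcher can still recover a specific subset (such as all infants) by filtering a full download locally. A minimal sketch, assuming the download is a CSV file; the column names used here ("name", "age_category") are hypothetical and the hub's actual export schema may differ:

```python
import csv
import io

# Stand-in for a full data download; in practice this would be an
# open file handle on the downloaded export.
downloaded = io.StringIO(
    "name,age_category\n"
    "Maria,infant\n"
    "Kofi,adult\n"
    "Ama,infant\n"
)

# Recover a specific result set (here, all infants) from the full file.
infants = [row for row in csv.DictReader(downloaded)
           if row["age_category"] == "infant"]
print([row["name"] for row in infants])  # -> ['Maria', 'Ama']
```

This workaround scales poorly for very large exports, which is why server-side filtered downloads (or a public SPARQL endpoint) would be the more durable fix.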
