As Secretary of the Smithsonian Institution Wayne Clough greeted visitors to the Smithsonian X 3D Conference, he casually rattled off a rounded figure for visitors to the Smithsonian in 2012: 30 million. If those were all Americans, that would mean about 300 million never crossed the threshold of any of its 19 museums, or the National Zoo. For Clough, a man from rural Georgia who, in his opening remarks, admitted he did not know about the Smithsonian until he was 20, the question is: how can the Smithsonian reach that audience? Or, for that matter, the rest of the world?
The Smithsonian X 3D conference, which took place November 13 and 14 at the Freer Gallery in Washington, DC, featured the beta launch of a possible solution. Smithsonian X 3D is a WebGL-based viewer that loads an OBJ file and lets users zoom, rotate, and adjust lighting or surface quality, virtually. (The “x 3D” signifies the Smithsonian, multiplied by 3D.) While the idea of downloading and printing a model on a desktop 3D printer is intriguing, the more compelling notion is that you can get closer to the virtual object than you ever could to the physical one: at least, not without getting arrested and charged with a federal crime.
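For the curious, an OBJ file is simply plain text: a list of vertex positions and the faces that connect them, which is what makes it easy for a browser-based viewer to load. A minimal sketch of reading one in Python, offered as an illustration of the format rather than anything resembling the Smithsonian's viewer code:

```python
# Minimal sketch of parsing a Wavefront OBJ file: the plain-text 3D
# format the Smithsonian X 3D viewer loads. Illustration only; the
# viewer itself is WebGL and far more sophisticated.

def parse_obj(text):
    """Return (vertices, faces) parsed from OBJ source text."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":      # vertex position: v x y z
            vertices.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":    # face: f v1 v2 v3 ... (1-based indices)
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# A single triangle expressed as OBJ text
sample = """
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""
verts, faces = parse_obj(sample)
```

A real scan is the same idea at scale: millions of such vertices and faces, plus texture and normal data the viewer uses for lighting.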
At present, Smithsonian X 3D contains about 20 scans from seven of the Smithsonian’s 19 museums and four of its nine research centers. For a project that has been in the works for the last three years, 20 models doesn’t seem like a lot. However, for Adam Metallo and Vince Rossi, the Smithsonian’s so-called “Laser Cowboys,” the goal of launching the project has been less about volume than about education. Part of that education has been learning on the job, and letting their colleagues know what was possible, both in terms of capture techniques and the final output.
Metallo and Rossi grabbed some attention in early 2012 for their collaborative work with Studio EIS to scan and print a 1:1-scale replica of a Thomas Jefferson bronze sculpture from Monticello. The final output was considered museum quality, and was intended for display in a touring exhibit, Slavery at Jefferson’s Monticello: Paradox of Liberty. While it cast a light on the potential for a museum to scan and display replicas of artifacts (a feat previously accomplished by Staab Studios for the 2010 King Tut tour), it also signaled the potential for users anywhere in the world to download, print, and study a museum-quality replica from the Smithsonian collection. The Jefferson reproduction, however, posed one minor problem: it’s not part of the Smithsonian collection. The original sculpture remains the property of Monticello, and the 3D model is the intellectual property of Studio EIS, so the scan is not among the available options on Smithsonian X 3D. (See Michael Weinberg’s whitepaper “What’s the Deal with Copyright and 3D Printing” for more insight on 3D printing legality issues.)
Throughout its three years, the 3D Digitization program has consisted mostly of two guys and a handful of volunteers (full disclosure: I was one for about 40 hours); as of March, a third member, Jonathan Blundell, was hired onto the team full time. Over the course of those three years, they’ve worked with and learned from numerous partners in scanning and printing technologies and software design. They’ve applied that knowledge at research sites in Africa, South America, and Indonesia. They’ve shared that knowledge not only at the Smithsonian, but also at conferences across Europe and North America, presenting their processes and informing audiences of the potential 3D holds for exhibitions, preservation, archaeology, and research.
The Smithsonian Institution houses over 137 million objects. Fortunately for Metallo, Rossi, and Blundell, it’s not their job to scan it all: it would be impossible. At a rate of one object per minute, it would take a person working round the clock, seven days a week, about 260 years to accomplish that task. And there is no one-size-fits-all approach. Even scanning a small, simple object in one minute with a single handheld scanner can be a difficult task. Now imagine the time it takes to scan a mammoth fossil, or a pod of forty complete whale fossils. Reverse the scale and scan an object as intricate as an orchid, or as small as its pollinator. Beyond laser scanners, there are choices to be made between photogrammetry and CT scanning to capture the data, and ways of preparing an object for a better scan (the orchid had to be freeze-dried). Each object presents a different challenge: some of scale, some of delicacy. Other objects have intriguing narratives, some specific to the location of an object’s discovery; preserving that information is just as necessary. While scaling up the capture of data is the next big goal of the project, more important is educating their colleagues within the Smithsonian on the methods that most effectively capture and render the data for the website. Apart from achieving Clough’s goal of bringing the Smithsonian to more people, it would also provide fruitful ground for exchanges of research.
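The 260-year figure holds up as back-of-the-envelope arithmetic:

```python
# Sanity check on the article's estimate: 137 million objects,
# scanned one per minute, around the clock with no breaks.
objects = 137_000_000
minutes_per_year = 60 * 24 * 365   # 525,600 minutes in a year
years = objects / minutes_per_year
print(f"{years:.1f} years")        # roughly 260 years of nonstop scanning
```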
For instance, the Buddha Vairochana, nicknamed the Cosmic Buddha, was sculpted approximately 1,500 years ago. Roughly the height of a person, it has lost its hands and head to time. What remains of the sculpture is a long robe draped over the human figure: a robe that once contained an intricate low-relief carving. Over time that carving has been worn down by the elements and the countless hands that rubbed its surfaces during teaching. To the naked eye, the carving remains somewhat visible, but the detail is difficult to read. Prior to its placement in the Smithsonian, researchers are assumed to have made paper rubbings of the surface in an effort to extract from the stone what the naked eye could not see. This is the physical experience of seeing the Cosmic Buddha.
As a 3D model, the story is different. Thanks to a highly detailed laser scan of the sculpture, a user can illuminate it with an occlusion map: a lighting treatment that pushes the contrast between shadow and light, revealing fine detail lost to the naked eye. Also, thanks to the 3D model, researchers can digitally unwrap the Buddha and view the entire relief flat, rather than in the round. Now a scholar on the other side of the globe who has access to similar artifacts can compare the carvings on the Cosmic Buddha to other existing sculptures, relief carvings, and rubbings.
In the future, that researcher might also be able to contribute to the Smithsonian X 3D website. At present, visitors can take a guided tour courtesy of annotated pins that contain text, images, and video. Visitors can also share snapshots of the object, change the light source, and make measurements. But one day, the Smithsonian hopes to create opportunities for visitors to make their own guided tours for class presentations, or to add to the catalog of information available on the website.
The Smithsonian is not the only research institution committed to 3D scanning. The Metropolitan Museum of Art has held hackathons in recent years, inviting artists to scan objects in the collection and reconfigure the models into new creations. Student scans from the collection of the Art Institute of Chicago can be found on Thingiverse, a free online repository of 3D scans, inventions, cell-phone cases, and Yoda heads. There is also the Idaho Virtualization Laboratory at Idaho State University, which has scanned hundreds of zoological specimens and made them available online, though its interface is clunky and requires a plug-in. Each example has limitations of access, resolution, or usability. The Smithsonian, at present, simply has the limitation of available volume. Whether that changes remains to be seen. At the very least, Smithsonian X 3D gives us a glimpse of the future, potentially changing how we view objects within the context of the museum, and beyond.
By John Anderson
Image of the Cosmic Buddha courtesy of Smithsonian Digitization Program Office
Note: Updated 12-13-13 2:30pm