This page provides an overview of the value we can bring to museums as well as a discussion of timelines and possibilities for a pilot project.

What Lightbox does for museums

We create 3D digital twins of every item in your collections. These digital assets can then be used to enable (1) computer-generated photographs and imagery and (2) new web experiences for the public.

The digital twin creation process

The object is placed in the robotic scanner, which captures 2,000 unique DSLR photos of the object in 5 minutes or less.

Data from the scan is automatically processed in the cloud using our AI-based algorithm.

That's it! The digital twin has been created. This asset can now be leveraged to create a variety of imagery and experiences.

The above process does not involve 3D artists at any point. No manual “touch-up” or editing is needed. Because of this, our process is capable of extremely high throughput compared to other 3D pipelines.

Scan an object once, never photograph it again

Imagine a photographer took a set of photos of a museum asset, but didn’t capture an important angle or get a close-up shot of an important feature. This situation can be frustrating, especially if there’s no way to re-shoot additional photos of the asset. With Lightbox, once an object has been scanned, we can generate the 2D images you need, on demand, whenever you need them.

Images can be generated from any angle, with shadows, as 360° views, with annotations, with different backgrounds, or with context.
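As a rough sketch of what requesting an image on demand could look like in practice, the snippet below asks for a single rendered view over HTTP. The endpoint, parameter names, and response format are illustrative placeholders, not a finalized API.

```ts
// Hypothetical request for a rendered view of a digital twin.
// The endpoint and parameters below are illustrative only.
interface RenderRequest {
  objectId: string;        // identifier of the scanned object
  azimuthDeg: number;      // camera angle around the object
  elevationDeg: number;    // camera angle above the object
  background: "white" | "transparent" | "studio-gray";
  shadows: boolean;        // render a contact shadow under the object
  width: number;           // output image size in pixels
  height: number;
}

async function renderView(req: RenderRequest): Promise<Blob> {
  // POST the request and receive a finished PNG back.
  const response = await fetch("https://api.example.com/render", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!response.ok) throw new Error(`Render failed: ${response.status}`);
  return response.blob();
}

// Example: a three-quarter view on a white background with shadows.
renderView({
  objectId: "geode-2022-001",
  azimuthDeg: 45,
  elevationDeg: 20,
  background: "white",
  shadows: true,
  width: 1600,
  height: 1200,
}).then((png) => console.log(`Received ${png.size} bytes`));
```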

Object flows are better than photos

Object flows are a new type of experience we’ve invented to replace white-background photos on the web. In the example below, use the left and right arrows to navigate to different views.

Note: This demo does not work on certain mobile devices and browsers. Please view it in Google Chrome on a desktop or laptop.

Adding movement to photographs creates a 3D effect, making it much easier to understand the shape of an object. It also communicates how light plays off different surfaces, giving us a sense of shininess and texture.

Best of all, this experience requires no additional load time over simply displaying 2D photos.
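To make the load-time point concrete, here is a minimal sketch of how an object flow could be wired up in the browser, assuming the flow is delivered as an ordered set of pre-rendered frames. Navigating is just swapping which frame is displayed, so the cost is comparable to showing ordinary 2D photos. The frame URLs and element ID are placeholders.

```ts
// Minimal object-flow viewer: cycle through pre-rendered frames
// with the left/right arrow keys. Frame URLs are placeholders.
const frames = [
  "geode-view-00.jpg",
  "geode-view-01.jpg",
  "geode-view-02.jpg",
  "geode-view-03.jpg",
];

const img = document.getElementById("object-flow") as HTMLImageElement;
let current = 0;

// Preload the remaining frames in the background so navigation is instant.
frames.forEach((src) => { new Image().src = src; });

function show(index: number): void {
  current = (index + frames.length) % frames.length; // wrap around at the ends
  img.src = frames[current];
}

document.addEventListener("keydown", (event) => {
  if (event.key === "ArrowRight") show(current + 1);
  if (event.key === "ArrowLeft") show(current - 1);
});

show(0); // start on the first view
```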

Object flows can also help tell a story around an object. Take a look at the Google Arts & Culture experience called Art Zoom, shown below. Now imagine creating a similar experience for any 3D object in your collections.

Give researchers the power of measurement

Taking measurements of artifacts and specimens can be critical in research contexts. Lightbox enables researchers to take their own measurements. Simply select two points on any photo to find their distance in 3D space. A rough mockup of this experience is shown below.

Distance = ?

Researchers can also download a mesh file and take more sophisticated measurements in their 3D software of choice.
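The measurement itself reduces to the straight-line distance between two points in 3D space. Below is a minimal sketch, assuming the two selected pixels have already been mapped back to 3D coordinates (expressed in millimeters here); how that mapping happens is an assumption about the pipeline, not a documented feature.

```ts
// Straight-line (Euclidean) distance between two points in 3D space.
// Assumes the selected pixels have already been mapped to 3D coordinates,
// here expressed in millimeters.
interface Point3D { x: number; y: number; z: number; }

function distanceMm(a: Point3D, b: Point3D): number {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const dz = b.z - a.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Example: two points picked on opposite edges of a specimen.
const p1: Point3D = { x: 12.0, y: 4.5, z: 30.2 };
const p2: Point3D = { x: 58.3, y: 6.1, z: 27.9 };
console.log(`Distance = ${distanceMm(p1, p2).toFixed(1)} mm`);
```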

Seeing multiple objects in the same scene gives an immediate sense of how they compare

It's difficult to tell from the following two images which of the two objects is larger, and by how much.

Cactus (American, ca. 2022)

Geode (American, ca. 2022)

Comparing their numerical dimensions can help, but nothing beats seeing the two objects next to each other in the same scene.

These images can be generated in real time for any two items in your collections that a viewer wants to compare. We can also generate images where objects are shown next to scale objects such as an ID card or being touched by an average-sized hand.

We can also create images with many scanned objects in them that are artistically arranged. Below is an example of an image from the OMCA website that we could create digitally. These kinds of images can serve as landing pages for an online exhibit. Users can click on different objects in the image to go to that object’s page where they can find more information and additional imagery.
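One simple way to make such a landing-page image clickable is to overlay invisible hotspot links on the image, each leading to the corresponding object page. The sketch below is illustrative; the element ID, object URLs, and hotspot coordinates are placeholders.

```ts
// Overlay clickable hotspots on a landing-page image.
// Coordinates are percentages of the image size; URLs are placeholders.
interface Hotspot {
  label: string;
  href: string;   // object page to open on click
  left: number;   // position as % of the image width
  top: number;    // position as % of the image height
  width: number;  // hotspot size as % of the image width
  height: number; // hotspot size as % of the image height
}

const hotspots: Hotspot[] = [
  { label: "Cactus", href: "/objects/cactus-2022", left: 12, top: 40, width: 18, height: 35 },
  { label: "Geode",  href: "/objects/geode-2022",  left: 55, top: 52, width: 14, height: 20 },
];

// Container wrapping the image; it should have position: relative.
const container = document.getElementById("exhibit-hero")!;

for (const spot of hotspots) {
  const link = document.createElement("a");
  link.href = spot.href;
  link.title = spot.label; // tooltip on hover
  link.style.position = "absolute";
  link.style.left = `${spot.left}%`;
  link.style.top = `${spot.top}%`;
  link.style.width = `${spot.width}%`;
  link.style.height = `${spot.height}%`;
  container.appendChild(link);
}
```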

Create 3D content effortlessly

Today, it is challenging to create a large library of 3D models due to the amount of manual work that must be done by 3D artists. With Lightbox, once you’ve scanned an object, a 3D model is generated automatically. This model can be used in ARKit/ARCore experiences, virtual reality, video games, and more.
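For example, if an auto-generated model is exported as glTF/GLB (a common interchange format; the file name below is a placeholder), it can be displayed with a standard web 3D library such as three.js:

```ts
// Display an auto-generated model in the browser with three.js.
// Assumes the digital twin has been exported as a glTF/GLB file.
import * as THREE from "three";
// Import path may be "three/addons/loaders/GLTFLoader.js" in newer three.js versions.
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const scene = new THREE.Scene();
scene.add(new THREE.AmbientLight(0xffffff, 1.0));

const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.01, 100);
camera.position.set(0, 0.3, 1.0);
camera.lookAt(0, 0, 0);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

new GLTFLoader().load("cactus-twin.glb", (gltf) => {
  scene.add(gltf.scene);
  renderer.setAnimationLoop(() => renderer.render(scene, camera));
});
```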

We continue to work on improving the fidelity of these auto-generated 3D models. As our algorithms improve, so will the quality of your library of assets.

Pilot details

We are looking to partner with a museum for a pilot that would consist of 100 scans delivered over a 12-month timeline. We are currently building a new version of our scanner with a larger scan volume, and we expect to complete its development during this pilot.

Current Scanner: time to scan 20 minutes, scan volume 9 in³

Scanner V2: time to scan 5 minutes, scan volume 36 in³

Pilot deliverable: 100 scans and the accompanying web experiences.