It was during a trip to the Centre for Integrated Facility Engineering (CIFE) at Stanford University in early 2013 that I first heard of a new technology utilising Xbox Kinect-type cameras to interpret the 3D space in which we live, work and play.
The initial concept was simply to place the camera in the middle of a room and have it scan the room's physical constraints.
What we're now seeing are devices such as US-based technology company Occipital's Structure Sensor: a mobile data capture device that collects and registers 3D spatial data directly to any Apple iOS device, with no additional computing power required.
The Structure Sensor is a tiny add-on for mobile devices that works to create a 3D image of basically anything you can view through the device's camera.
Occipital define their new technology as a hardware platform that, for the first time, gives developers the ability to easily create applications that take advantage of a 3D sensor on an Apple iOS device.
This enables a completely new set of mobile application possibilities, including:
- 3D object scanning for easy 3D content creation. With the Structure Sensor, 3D scanning no longer requires a dedicated hardware device; rather, it's simply an app that uses the Structure Sensor's depth stream as its foundation.
- 3D mapping of indoor spaces for instant measurements and virtual planning.
- Body scanning for fitness tracking, virtual clothes fitting and ergonomic purposes.
- Augmented reality (AR) where virtual objects interact precisely with the geometry of the physical world.
- Virtual reality training using 3D environments imported from the real world.
This versatile and truly mobile solution allows users to capture and process 3D data as they view the area of interest, resulting in remarkably fast data capture with virtually no setup. By viewing the data as it is collected, the user can be confident they have captured all the information they require.
Another significant feature is the sensor's compatibility with any device that has a USB port, allowing almost instant use of the data across a number of other applications and technologies, e.g. 3D printers, AR devices and BIM platforms.
So how does it work?
The Structure Sensor uses structured light to capture depth data. Structured light uses a laser projector to cast a precise pattern of thousands of invisible infrared dots onto objects and spaces. It then uses a frequency-matched infrared camera to record how the pattern changes, thereby understanding the geometry of those objects and spaces. As a result, the Structure Sensor can generate a VGA depth stream at 30 frames per second, with each pixel representing an exact distance to a real-world point.
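As a rough illustration of the structured-light principle described above (not Occipital's actual algorithm or calibration), depth can be recovered by triangulation: the projector and infrared camera are separated by a known baseline, so the horizontal shift (disparity) of each projected dot in the camera image encodes its distance. A minimal sketch in Python, with hypothetical focal-length and baseline values:

```python
# Illustrative structured-light triangulation. The constants below are
# assumed placeholder values, not the Structure Sensor's real calibration.

FOCAL_LENGTH_PX = 570.0  # assumed IR camera focal length, in pixels
BASELINE_M = 0.065       # assumed projector-to-camera baseline, in metres

def depth_from_disparity(disparity_px: float) -> float:
    """Depth (metres) of a projected dot whose image position shifted
    by `disparity_px` pixels relative to its reference at infinity."""
    if disparity_px <= 0:
        raise ValueError("dot not displaced: depth unresolvable")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# With these assumed parameters, a dot displaced by about 37 pixels
# lies roughly one metre from the sensor:
print(round(depth_from_disparity(37.05), 2))  # 1.0
```

Repeating this calculation for thousands of dots per frame, 30 times a second, is what yields the per-pixel depth stream.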
We are eagerly awaiting the mid-2014 release date so we can get it out into the field for extensive testing. The sheer concept of creating a 3D image within minutes during a site inspection is hugely exciting for us. We see this reducing the time and cost investment during site measures as well as increasing the accuracy of the output.
For our clients in the Food and Manufacturing Industry, we will soon have the ability to capture the physical constraints of equipment allowing us to better design to these constraints. We will also be able to overlay concept designs on the iPad showing the actual space intended for the equipment in the background, one step closer to being able to virtually stand inside the future of your facility.
Any technology that makes the interpretation, design and delivery of these complex processing facilities faster, more efficient and more cost effective is a win for best practice project delivery and our manufacturing clients.
Technical specs include:
- Depth sensing range 40 cm – 3.5 m
- 30-60 frames per second
- 640×480 resolution
- Separate Li-Ion battery provides 3-4 hours of use, 1,000+ hours of standby
- Made from anodised aluminium
- iPad attachment bracket
- Compatible with any device that has a USB port
- Hackable with Xcode
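Since each pixel of the 640×480 depth stream represents a distance to a real-world point, a full 3D point cloud can be recovered by back-projecting pixels through a standard pinhole camera model. A brief sketch of the idea, using assumed placeholder intrinsics rather than the Structure Sensor's factory calibration:

```python
# Back-project a single depth pixel into a 3D camera-space point using
# a pinhole camera model. Intrinsics are assumed placeholders.

WIDTH, HEIGHT = 640, 480             # VGA depth stream resolution
FX = FY = 570.0                      # assumed focal lengths, in pixels
CX, CY = WIDTH / 2.0, HEIGHT / 2.0   # assume principal point at centre

def pixel_to_point(u: int, v: int, depth_m: float) -> tuple:
    """Convert pixel (u, v) with measured depth in metres to
    camera-space coordinates (x, y, z)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# The centre pixel at 1.5 m maps straight down the optical axis:
print(pixel_to_point(320, 240, 1.5))  # (0.0, 0.0, 1.5)
```

Applying this to every pixel of a depth frame is, in essence, how a scanning app turns the raw depth stream into measurable 3D geometry.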