Is Vision Pro Apple's first 3D camera? If we go by how Apple is marketing this pioneering device, the answer is yes. However, those of us who witnessed the introduction of the LiDAR sensor back in 2020 might beg to differ.
At LiDAR3D we have successfully completed iOS 3D scan-to-CAD conversion projects for hundreds of customers, and we regularly collaborate with and consult for businesses integrating this technology in a variety of use cases.
A 3D model captured in 10 minutes with an iPhone equipped with a first-generation LiDAR.
Upon their release, LiDAR scanners were relatively unknown. In fact, Apple scarcely promoted their 3D capture abilities, leaving many potential users unaware of the sensor's existence, its power, and how it stands out against similar consumer solutions introduced over the past decade.
In the three years since their debut, the iPad Pro and iPhone Pro LiDAR sensors have been tested in a variety of environments, with both professionals and amateurs reporting success. Later models have proven slightly more accurate than their predecessors, largely due to advances across the many components that feed the simultaneous localization and mapping (SLAM) process.
While the first generation of LiDAR was largely ignored (partly due to Apple's understated marketing), Apple is now proudly declaring this as their first 3D camera. This suggests a high level of confidence in Vision Pro's reality-capturing capabilities.
Let's take a snapshot of where we stand with the existing hardware:
Since 2020, iOS LiDAR devices have been equipped with the following sensors:
Back: Main camera
Back: LiDAR
Front: TrueDepth camera
A: Main Camera Photogrammetry/NeRF: This is my first-ever attempt at a NeRF capture, completed with the Luma App in October 2022.
B: LiDAR, which made its debut in the iPhone Pro 2020, provides fast and accurate interior space capturing.
C: TrueDepth, first introduced in the iPhone X back in 2017, is ideal for recognizing and capturing detailed 3D objects like faces. Note that the Vision Pro will be the first instance where both sensors face the same direction.
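To make the ranging principle behind these sensors concrete, here is a minimal sketch of direct time-of-flight measurement, the technique LiDAR is based on: the sensor times a light pulse's round trip and halves the distance travelled at the speed of light. The numeric values below are our own illustration, not Apple specifications.

```swift
import Foundation

/// Speed of light in meters per second.
let speedOfLight = 299_792_458.0

/// A direct time-of-flight sensor such as LiDAR measures the round-trip
/// time of an emitted light pulse; the one-way distance is half the
/// round-trip path travelled at the speed of light.
func distanceFromTimeOfFlight(roundTripSeconds: Double) -> Double {
    return speedOfLight * roundTripSeconds / 2.0
}

// A pulse returning after roughly 33.4 nanoseconds corresponds to about
// 5 m, the advertised range of the first-generation iOS LiDAR.
let d = distanceFromTimeOfFlight(roundTripSeconds: 33.356e-9)
print(String(format: "%.2f m", d))  // ≈ 5.00 m
```

The tiny time scales involved are why LiDAR accuracy depends so heavily on sensor electronics: a timing error of a single nanosecond shifts the measured distance by about 15 cm.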
Apple's innovative strides have placed it at the forefront of this field, leaving Android devices trailing behind. The announcement of second-generation hardware was expected, but the feature-packed Vision Pro was a surprise. Those still catching up with the first generation of sensors may well be dismayed by what the Vision Pro offers.
Vision Pro hardware includes:
LiDAR, 2nd generation: This new sensor, produced by a different manufacturer, is primarily used to place XR experiences accurately within captured spaces. The first-generation LiDAR was a significant improvement over pre-2020 devices, and my expectation is that this new version will be virtually flawless.
TrueDepth: As shown above, this sensor excels at capturing detail at close range. Now that it faces the same way as the LiDAR and main camera system, it could play a bigger role in object modeling in the future.
Three new Vision Pro features we're excited about:
Stereoscopic perspective with two main cameras: The enhanced cameras will allow for better photogrammetry/NeRF 3D models of objects such as sneakers, furniture, and appliances.
IR illuminators: These might enable capturing in low or no lighting conditions, which could revolutionize cave mapping and exploration, where first-generation iOS LiDAR scanners have already made a mark.
Side and down cameras: These are likely to assist with device localization in space and capturing panoramic images.
Other relevant improvements for generating 3D models include:
Advances in software, like AR Kit
RoomPlan API, Apple's automatic Scan-to-CAD. Watch my video review LINK HERE.
The new M2 chip.
Incremental improvements across all aspects of the devices.
Stereoscopic Vision and its Implications for 3D Modeling:
Ask anyone with monocular vision about the importance of stereoscopic vision for depth perception. Alongside all the new and upgraded sensors, the stereoscopic perspective is crucial for rendering 3D images. It will be fascinating to see how apps like PolyCam, Metascan, and others develop new capture modes that leverage all these sensors and perspectives on the Vision Pro.
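The depth cue a stereo pair provides can be sketched with basic triangulation: the same point appears at slightly different image positions in the two cameras, and the closer the point, the larger that shift (the disparity). The focal length and baseline values below are hypothetical; Apple has not published the Vision Pro's stereo geometry.

```swift
import Foundation

/// Depth from stereo disparity via triangulation: Z = f * B / d,
/// with focal length f in pixels, baseline B (distance between the
/// two cameras) in meters, and disparity d in pixels.
func depthFromDisparity(focalLengthPixels: Double,
                        baselineMeters: Double,
                        disparityPixels: Double) -> Double {
    return focalLengthPixels * baselineMeters / disparityPixels
}

// Hypothetical numbers for illustration: a 1500 px focal length and a
// 6 cm baseline, with a point shifted 45 px between the two views.
let z = depthFromDisparity(focalLengthPixels: 1500,
                           baselineMeters: 0.06,
                           disparityPixels: 45)
print(String(format: "%.1f m", z))  // 1500 * 0.06 / 45 = 2.0 m
```

Note how depth resolution falls off with distance: halving the disparity doubles the estimated depth, which is why stereo capture shines on nearby objects like sneakers and furniture rather than on large spaces.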
The form factor:
We've noted that iPad scans tend to be slightly better on average than iPhone scans, probably thanks to the added stability of a two-handed grip. The Vision Pro, worn on the head, offers even more stability. However, I'm unsure whether this extra stability will compensate for a reduced range, or whether the device will be practical to use in the field. Its placement should be an advantage for capturing panoramic images, as it allows steadier positioning and rotation closer to the center point.
I also envision that navigating the completed capture for accuracy checks will be easier than on a handheld device (especially the iPhone), and having both hands completely free would be great for annotating in 3D, either during or after the capture session. Such annotations could record data about materials and about objects obscured from view or hidden behind the surface layer of the 3D scan file.
Conclusion:
We very much look forward to testing this new device! Its potential impact on property professionals, 3D object modelers, and various other businesses could be significant.
The quality of the 3D objects and spaces captured with this device will likely be virtually indistinguishable from reality. I don't believe Apple would confidently market this as a 3D camera unless it was truly exceptional at recognizing spaces and objects in 3D. Although no reality-capturing specialist has yet tested this device, any flaws in the XR experiences during the official 30-minute demo would likely have made headlines.
The iPhone 15 will probably be the first opportunity to test some of the new hardware and sensors.
All the examples in this article were created with an iPhone 12 Pro.
Thanks for reading and don't hesitate to reach out to us via social media, the contact form, or email us at: Contact@LiDAR3D.io