Gui Rambo: Why an AR skeptic is excited about RealityKit and Reality Composer
Guest writer Gui Rambo shares his thoughts on RealityKit and Reality Composer — two technologies introduced at WWDC 2019 that have the potential to fundamentally change the augmented reality landscape.
People who know me will know that I’ve been skeptical about augmented reality for a long time. By that I don’t mean I’m “against” AR, or that I think it’s bad. Quite the opposite — I think AR is super cool and I love playing around with it.
My skepticism is around the potential AR has to become a mainstream technology in the current form factors that it’s available on. If I can point my iPhone at something to see information about it in 3D, that’s really cool, but if I have to pick up my iPhone, I might as well just use a regular app to look up the information that I need.
Let’s say that my bank comes up with an AR mode in which I can point my iPhone’s camera at my credit card in real life to see my current balance. That’s certainly a really cool demo, but it’s impractical for real-life use. I have to pick up my iPhone, launch my bank’s app, wait for the AR environment to be calibrated, and then point the camera at my credit card. At that point, it would’ve been much easier to just launch the app and see the balance right there.
If you think about the example I just gave, you’ll notice that the real problem with it is the device — my phone. Now imagine if I was wearing a pair of glasses, which look pretty much the same as normal glasses, but have AR built into them. Now all of a sudden it becomes a completely different reality (no pun intended). All I have to do is to look at my credit card — and I’ll see its balance. No need to pick up a device, no need to launch an app, no need to wait for any calibration. It just works™.
That’s what the future of augmented reality looks like to me, and I have a strong belief that Apple is working towards building that future. Now, to make that future possible, there is one major problem that needs to be fixed — and that’s making sure that a rich ecosystem of apps and services is available for the AR world.
Apple solved that problem for the iPhone — and its other platforms — by introducing a powerful SDK that gives any developer the ability to create user interfaces, and to handle data storage and syncing, video and image manipulation, machine learning, and more. You can be just a “regular developer” and create rich experiences for your users — without having to be a specialist in UX, data structures, networking, or image processing.
That’s still not true for 3D though. We do have SceneKit and ARKit, which are powerful frameworks for building 3D scenes and integrating them with the real world — but to actually create the content and interfaces that appear in this 3D world, you still need a lot of knowledge of complicated 3D authoring tools, texturing, lighting, rendering, and more.
With RealityKit and Reality Composer, Apple is finally starting to introduce ways for developers who, like myself, are not professional 3D artists (and can’t hire one) to create rich 3D experiences for our users. The combination of my belief in a future where we have AR glasses made by Apple, and the introduction of these new technologies, is what makes me so excited.
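To give an idea of just how approachable this workflow is, here’s a minimal sketch based on Xcode’s AR app template: Reality Composer scenes get a generated Swift API, so loading one into a RealityKit view takes only a few lines. The project name `Experience` and the scene name `Box` are the template’s defaults — your own project’s generated names would differ:

```swift
import UIKit
import RealityKit

class ViewController: UIViewController {
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // "Experience" is the Reality Composer project bundled with the
        // app; Xcode generates a type-safe loader for each of its scenes.
        // Force-try is fine here since the scene ships inside the bundle.
        let boxAnchor = try! Experience.loadBox()

        // Adding the anchor to the scene is all it takes — RealityKit
        // handles rendering, lighting, and placement in the real world.
        arView.scene.anchors.append(boxAnchor)
    }
}
```

No 3D authoring knowledge required: the content, materials, and behaviors were all set up visually in Reality Composer, and the code above simply attaches the result to the AR session.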
Apple is slowly building up a future where AR is going to be accessible to the general public, and empowering developers to deliver great experiences in that world — just like it’s done for the desktop, the tablet, the smartphone, and the watch.
Thanks a lot to Gui Rambo for this guest article. Make sure to follow him on Twitter @_inside, and you can also listen to our analysis of the WWDC opening keynote on the Stacktrace podcast.