Daily coverage of Apple’s WWDC 2019 conference, by John Sundell.

A Swift by Sundell spin-off.

Developer interview: Paul Hudson’s WWDC favorites, and his take on SwiftUI’s present and future

It’s time for the last WWDC developer interview of 2019, in which I talk to my good friend Paul Hudson about his favorite developer announcements from the conference, and his take on SwiftUI: where it currently stands, and where it might go in the future.

John: Wow, what an incredible edition of WWDC! With the event now almost over, what are some of the newly announced features and developer tools that you’re most excited about?

Paul: Honestly, if this were any other year then we’d be talking about dark mode, about Catalyst, about CryptoKit, or about a dozen other things – this week has seen an extraordinary amount of work come to fruition all at the same time, and I think many people are having a hard time seeing the forest for the trees.

But SwiftUI happened, so all those incredible, breakthrough technologies almost feel like a rounding error; folks here are excited about them all, but by far the busiest talks are the ones about Swift and SwiftUI.

If we put SwiftUI to one side, then there are two things I’m particularly excited about.

First is the new Vision OCR system, which lets you scan images for text and read it back as strings using some fairly intensive processing to try to get the best possible scan of the text. This is one of those technologies that will enable so many apps to really shine – scanning a receipt or a business card, scanning a sign for translation, scanning a book to search it, and more, are all now just a few lines of code away.
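
As a rough sketch of just how few lines that is, here’s what a minimal text recognition request might look like using the new VNRecognizeTextRequest API (error handling omitted, and `image` assumed to be a CGImage you already have):

import Vision

// Create a request that recognizes text and hands back observations.
let request = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else {
        return
    }

    // Each observation offers ranked candidates; take the best one.
    let strings = observations.compactMap { $0.topCandidates(1).first?.string }
    print(strings.joined(separator: "\n"))
}

// Trade speed for accuracy, since we want the best possible scan.
request.recognitionLevel = .accurate

let handler = VNImageRequestHandler(cgImage: image, options: [:])
try handler.perform([request])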

Second is the new Core Haptics framework, which gives you extraordinarily precise control over how we use vibration in our apps. You can create quick taps (“transient”) or longer buzzes (“continuous”), you can combine several of these either overlaid or in sequence with precise timings, and you can even specify parameters that change the effect over time: you can make a buzz that fades away, then comes back in again. And if you think that’s cool, wait until you realize it can now generate sound using the haptics system. This framework won’t attract much attention in the media because it naturally just works in the background, but it’s going to open up a whole world of possibilities for games developers and more.
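
As a minimal sketch of those building blocks – one transient tap followed by a continuous buzz – the code might look like this, assuming a device that supports haptics and leaving out error handling:

import CoreHaptics

// Spin up the engine that talks to the haptic hardware.
let engine = try CHHapticEngine()
try engine.start()

let intensity = CHHapticEventParameter(parameterID: .hapticIntensity, value: 1.0)
let sharpness = CHHapticEventParameter(parameterID: .hapticSharpness, value: 0.5)

// A quick tap at t = 0, then a one-second buzz starting at t = 0.2.
let tap = CHHapticEvent(eventType: .hapticTransient,
                        parameters: [intensity, sharpness],
                        relativeTime: 0)
let buzz = CHHapticEvent(eventType: .hapticContinuous,
                         parameters: [intensity, sharpness],
                         relativeTime: 0.2,
                         duration: 1.0)

// Combine the events into a pattern, and play it immediately.
let pattern = try CHHapticPattern(events: [tap, buzz], parameters: [])
let player = try engine.makePlayer(with: pattern)
try player.start(atTime: 0)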

John: Yeah, those are some really exciting technologies. I think it’s fair to say that this year’s WWDC has been heavily focused on moving Apple’s various platforms closer together in terms of developer tools — with technologies like SwiftUI and Catalyst. Do you think that these frameworks are stepping stones towards a completely unified “AppleOS”?

Paul: A lot of folks have been speculating about that for some time, although Apple don’t like to move quickly, so it will still be a few years before such a thing could come to fruition.

In the nearer term, what I am seeing is that the balance of power is shifting between frameworks: Vision and ML are increasingly picking up the load (object saliency is astonishing!), Metal is doing a ton of work to make SwiftUI faster rather than always going through Core Animation, and SceneKit is fading away while RealityKit is rising.

Honestly, I’m amazed that more people aren’t talking about the list of games sessions this year, because there isn’t a single one about SceneKit — it has had changes this year, but Apple isn’t talking about them. Instead there are five talks about Metal and four more about RealityKit.

Seriously, the writing is on the wall. I think it’s pretty clear to everyone that SpriteKit and SceneKit are lovely frameworks that just never got the traction they deserved — particularly in the face of cross-platform game engines like Unity and Unreal Engine. If Apple really are going to release AR glasses I think it’s all but certain that RealityKit will be at the center, rather than SceneKit.

John: Yeah, I agree. While games in general have seemingly always been fairly far down the list of Apple’s priorities when it comes to developer tools, AR seems to be almost at the top of that list, so any technology that’s associated with AR in any way is likely to be getting a huge amount of internal resources and attention.

So let’s dive deeper into SwiftUI, which you’ve certainly already started doing yourself: you’ve even already released your first book on the topic, which is incredibly impressive! SwiftUI represents not only a shift in frameworks, but quite a major paradigm shift: going from imperative to declarative UI programming. What are your thoughts on that, and what are your initial impressions of SwiftUI’s API?

Paul: I think it’s hard not to be impressed by SwiftUI. That something so massive could be kept almost completely secret is just incredible! And it is massive. Sure, it doesn’t have everything just yet, so there’s no native UITextView or UICollectionView or similar, but it does give us so many other things. And from talking to Apple’s engineers at the WWDC labs, it seems clear that other Apple frameworks are keen to come on board – I’m hoping to see SwiftUI wrappers for MapKit, WebKit, and more real soon now, with the added benefit that it allows those teams to ensure their APIs are identical across all platforms.
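
In the meantime, nothing stops us from bridging those frameworks ourselves. As a hypothetical sketch (the MapView name is mine), here’s roughly how MKMapView can be wrapped for SwiftUI today using the UIViewRepresentable protocol:

import SwiftUI
import MapKit

// A hand-rolled SwiftUI wrapper around UIKit’s MKMapView.
struct MapView: UIViewRepresentable {
    // Called once, to create the underlying UIKit view.
    func makeUIView(context: Context) -> MKMapView {
        MKMapView()
    }

    // Called whenever SwiftUI state changes; configure the view here.
    func updateUIView(_ uiView: MKMapView, context: Context) {}
}

Once wrapped like that, MapView drops into any SwiftUI hierarchy just like a built-in view.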

So, yeah, I did write a book about it! It’s called “SwiftUI by Example” and it’s effectively the carefully ordered collection of all my findings so far. I’ve been at every SwiftUI lab at WWDC, to the point where I think Kyle Macomber (one of the SwiftUI engineers) groans inwardly when he sees me because he knows I’ll have a big stack of questions for him. But it’s given me a really fast turnaround: I can work on some code to the point where I think it’s right, but being able to show the folks who made it and say “how’s this?” is huge. And that’s what makes WWDC so special, right?

I don’t think switching to declarative UI is easy for anyone. I learned React and React Native a few years ago, and it was intense — I actually paid for tuition to help make sure I was really nailing it! So for folks like me who are already used to this way of thinking, SwiftUI feels very natural, but I appreciate we’re in the minority.

However, what I think adds to the mental complexity of SwiftUI is that it landed next to a series of big, important Swift language changes. So folks are looking at SwiftUI code trying to figure out what all the views and modifiers do, and they are also looking at the Swift code trying to figure out what all the @ and $s do. It feels a bit magical right now, but that will pass.
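
In fact, the magic is mostly two new Swift 5.1 features. The @ marks a property wrapper, such as @State, which asks SwiftUI to store a value and refresh the view whenever it changes – and the $ accesses that wrapper’s projected value, a two-way binding to the data. A minimal sketch:

import SwiftUI

struct GreetingView: View {
    // @State: SwiftUI owns this value and re-renders on every change.
    @State private var name = ""

    var body: some View {
        VStack {
            // $name passes a binding, so the text field can write back.
            TextField("Enter your name", text: $name)

            // Plain `name` simply reads the current value.
            Text("Hello, \(name)!")
        }
    }
}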

One important thing about SwiftUI’s API is that it’s actually quite small. Yes, I know I said it’s massive only a minute ago, but it has depth rather than breadth – they have picked a subset of important tasks and have done a really thorough job with them, but haven’t tried to do everything in this first release.

So, they’ve added extraordinary functionality for things like bordering, blurring, stacking, and more, but it seems clear that Apple has learned a huge amount from their experience with Swift. We’re not going to see a “SwiftUI 3” moment where everything breaks overnight. Instead, the API we’re seeing now is the first step, and from next year onwards I’m sure we’ll see more and more as the team slowly expand outwards.
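
As a small illustration of that depth, those modifiers chain directly onto any view – a hypothetical example:

import SwiftUI

struct BadgeView: View {
    var body: some View {
        // Stacking, bordering, and blurring as chained modifiers, each
        // of which wraps the view in a new, more specific type.
        VStack {
            Text("WWDC19")
                .padding()
                .border(Color.blue, width: 2)

            Text("San Jose")
                .blur(radius: 1)
        }
    }
}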

John: Do you think SwiftUI will make UI development for Apple’s platforms more approachable for beginners? I like to think so, because the syntax is so incredibly lightweight and the declarative model should come more naturally to many beginners – but at the same time, it’s a highly complex system built on some of Swift’s most advanced generics features. What do you think?

Paul: At this early point SwiftUI is exposing a few sharp edges both in Apple and in Swift, but I know all the teams have worked so hard to get to this point and I feel confident those sharp edges will be smoothed off – whether or not that’s before the GM remains to be seen!

For example, if you make a mistake in your SwiftUI code you will often get almost incomprehensible error messages. Sometimes it’s hard to read because it has angle brackets scattered all over the place, but other times it’s hard to read because it leverages advanced Swift features that change what your code means – it might complain about some code on line 39, but it’s not code you wrote, so it isn’t actually visible. The Swift team know this and are working hard to improve the situation.

Also I think it’s going to cause Apple to rethink the way they do documentation. Apple’s documentation system is really based around function signatures as the source of truth, and I think it’s struggling because SwiftUI is just so darned clever.

For example, here’s the signature for setting a background on a view:


func background<S>(_ content: StaticMember<S>)
    -> _ModifiedContent<VStack<Content>, _BackgroundModifier<ShapeView<Rectangle, S>>>
    where S : ShapeStyle

That’s all accurate, but I don’t imagine anyone will look at that and think it’s cleared things up.

John: Yeah, I totally agree. On the surface, SwiftUI can seem incredibly simple and clean in terms of its syntax – but appearances can be really deceiving in this case.

So since SwiftUI can be mixed and matched with both AppKit and UIKit, it should enable most developers to adopt it gradually. What would be your overall recommended strategy for doing so? Say you only had to support iOS 13 for the next version of your app – would you already start moving your UI over to SwiftUI?

Paul: This has been asked in so many different ways in the last few days, and will continue to be asked for some time yet. There’s nothing stopping people moving to SwiftUI for side projects, and in fact that’s the best way to learn – try things out and make stuff, without having to worry about deadline pressures and suchlike.

But moving to SwiftUI isn’t the same problem as moving to Swift. With Swift we could migrate a small chunk and move on, and while that’s possible in SwiftUI we don’t have the same analogies to work from. If you remember, in the first couple of years of Swift, our method calls were identical to Objective-C’s, so it was pretty easy to move across – you could almost do a line-by-line translation. This isn’t possible with SwiftUI: you need to toss away what you have and replace it with code that looks almost entirely different.
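
To illustrate just how different the two worlds look, here’s a hypothetical before-and-after – the same greeting built imperatively with UIKit, then declaratively with SwiftUI:

import UIKit
import SwiftUI

// UIKit: create the view, configure it, and place it by hand.
class HelloViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let label = UILabel(frame: CGRect(x: 20, y: 100, width: 280, height: 40))
        label.text = "Hello, WWDC!"
        label.textColor = .systemBlue
        view.addSubview(label)
    }
}

// SwiftUI: describe the end result, and let the framework build it.
struct HelloView: View {
    var body: some View {
        Text("Hello, WWDC!")
            .foregroundColor(.blue)
    }
}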

That’s expensive, and that’s challenging; it’s not going to be easy to justify commercially. And there’s the added pain that we have so little example code to work from, so if you hit problems you can’t just use Google or Stack Overflow and hope for the best. That combination – not being able to translate code from Objective-C, and not being able to find answers anywhere online – is going to make migrating to SwiftUI difficult for many people in this early period.

So, my advice to folks is this: go big on SwiftUI in your personal projects and any side projects, because that’s where you’ll learn the fastest and get some great results – and have a lot of fun too. For commercially sensitive stuff, it’s a more complicated story and I think teams would be wise to wait six months until there’s a more thorough understanding of best practices before they start to even think about introducing it.

John: I think that’s excellent advice.

Thanks so much to Paul for his insights, recommendations, and thoughts on SwiftUI, and some of the other exciting new technologies that Apple introduced this week.

Make sure to check out Paul’s excellent writing over at hackingwithswift.com, especially his new book on SwiftUI, and you can also follow him on Twitter @twostraws.

This was the last WWDC developer interview for 2019! I hope you’ve enjoyed hearing from some of my friends from around the Apple developer community about their thoughts on some of Apple’s announcements — and if you have any feedback on this series (or my other WWDC coverage), then feel free to find me on Twitter @johnsundell.

Thanks for reading! 🚀