WWDC 2017, what’s new for your applications: our top 5 features!

We promised we would come back with a more detailed article, and here it is. If you haven't seen the previous one about the WWDC 2017 keynote announcements, you can find it here.

1. The new App Store

The App Store is changing with iOS 11. As it is the main platform to download apps on iOS, taking full advantage of its new capabilities can lead to better download rates for your apps. Let's have a look at the changes.

Overall design

First, the new App Store features a "Today" tab that presents exclusive and innovative apps selected by Apple. Getting your app featured in the Today tab involves being interviewed, so that Apple can tell a story about your app and how it was imagined and developed.

The new App Store puts more emphasis on selections of apps that have been chosen by Apple or that show strong download growth. The "Apps" tab no longer displays the complete app ranking list directly; this historical ranking is still available but a little harder to find, as the tab only shows a short version of it that can be expanded.

Games now have their own tab as they represent a very important part of the App Store.

App Pages

Your app presentation page will soon change for the better.

With the new App Store, you will be able to add a subtitle below your app's title: a nice way to avoid using the title as a micro description of your product, which makes its name more impactful.

You will also be able to add up to three video previews of your app (only the first one will be displayed on devices that don't have iOS 11 installed). Your app's first preview will automatically play, without sound, on the search page of the App Store.

The app description won't be editable anymore after a version is published on the App Store. All promotional information can now be placed above this description in a dedicated section and updated at any time.

As for your in-app purchases, you can now make them available directly from the App Store, though it will require a small update to your app to support it. For each in-app purchase, you can specify a name, an image and a description that will make it discoverable directly in the search section of the App Store. If a user wants to buy an in-app purchase without owning the corresponding app on their device, they'll be invited to download the app as well.
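The "little update" in question: since iOS 11, SKPaymentTransactionObserver has a new method that is called when a purchase is started from the App Store page. Here is a minimal sketch (the class name is ours; the delegate methods are StoreKit's):

```swift
import StoreKit

class StoreObserver: NSObject, SKPaymentTransactionObserver {

    // Called when the user starts an in-app purchase from the App Store.
    // Return true to continue the purchase right away, or false to defer it
    // (for example until the user has finished onboarding).
    func paymentQueue(_ queue: SKPaymentQueue,
                      shouldAddStorePayment payment: SKPayment,
                      for product: SKProduct) -> Bool {
        return true
    }

    // The usual transaction handling applies afterwards.
    func paymentQueue(_ queue: SKPaymentQueue,
                      updatedTransactions transactions: [SKPaymentTransaction]) {
        for transaction in transactions where transaction.transactionState == .purchased {
            // Unlock the purchased content here, then finish the transaction.
            queue.finishTransaction(transaction)
        }
    }
}
```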

Another important change concerns the app rating. It will now be calculated from the reviews of all versions of your app. But don't worry: if a version had an issue that led to a big drop in your rating, you can reset it when updating your app.

App Store Rules

A few rules have changed with this update:

  1. 32-bit apps won't be supported anymore.
  2. Apple will refuse your application if it asks the user to rate the app without using the native popup introduced with iOS 10.3 (see the sketch after this list).
  3. Since iOS 10.3, you can also reply to user reviews.
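For rule 2, the native popup is StoreKit's SKStoreReviewController. A minimal example:

```swift
import StoreKit

// The native rating popup from iOS 10.3. The system decides if and when
// the dialog actually appears (at most a few times per year), so call it
// at a meaningful moment rather than on every launch.
if #available(iOS 10.3, *) {
    SKStoreReviewController.requestReview()
}
```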

2. CoreML & Vision : Machine Learning for all

Nowadays, machine learning is a very important subject to be aware of. It opens up countless possibilities, and Apple is giving us the keys to implement some of them easily in iOS apps.

If you don’t know about Machine Learning yet, it’s a way to work with data and process to discover patterns that can later be used to analyse new data and make predictions. ML usually relies on a specific data representation, a set of “features” that are understandable for a computer.

CoreML

CoreML is a great way to use pre-trained machine learning models directly in your application for offline predictions, and Apple presented a very long list of domains it can be applied to.

As a demo of CoreML, we were shown an application able to recognize the species of a flower from a picture.

Previously, using ML models in an iOS app was a very hard task. But the CoreML API is easy to use and works with machine learning models generated by a number of existing ML libraries such as Caffe, Keras, Turi, XGBoost, libSVM or scikit-learn. These models can be transformed into an Xcode-compatible format with an open source tool provided by Apple (and since it is open source, it might soon support models from more libraries). Importing the transformed model into your application automatically generates a code wrapper that makes it really easy to use.

For example, let's say I trained an ML model with the Keras library that can find the name of a car from a picture. I can transform the resulting model to be Xcode compliant and then use it in my project with a few lines of code, providing an image and retrieving the result as a string of characters.
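In Swift, that could look like the following sketch. Everything here is hypothetical: CarClassifier is the wrapper class Xcode would generate from the converted model, and the input/output names depend on how the original Keras model was defined:

```swift
import CoreML
import CoreVideo

// "CarClassifier" stands for the class Xcode generates automatically when
// the converted .mlmodel file is added to the project. The input name
// ("image") and the "classLabel" output are typical for an image classifier.
func nameOfCar(in pixelBuffer: CVPixelBuffer) -> String? {
    let model = CarClassifier()
    guard let output = try? model.prediction(image: pixelBuffer) else {
        return nil
    }
    return output.classLabel
}
```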

Even though CoreML has a lot of pros, it also has some cons:

  • You cannot train your model directly in your application.
  • The model converter doesn't support models from famous libraries like TensorFlow yet.
  • You cannot look at the output produced by the intermediate layers of your model's network; you only get the predictions that come out of the last layer.

Vision

On top of CoreML, Apple now provides the "Vision" API, allowing developers to create great experiences thanks to several tools for analysing photos and videos.

Vision includes tools to detect faces (and parts of them: eyes, mouth, ears…), detect emotions, detect and track objects, detect barcodes, detect text, and more. Combined with ARKit (see the third feature of this article), it lets you create features like Snapchat's AR filters much faster than before.
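As an illustration, here is a minimal sketch of face detection with Vision; the same request/handler pattern applies to the other detectors:

```swift
import UIKit
import Vision

// Detecting faces in a UIImage. The same pattern works for barcodes
// (VNDetectBarcodesRequest), text (VNDetectTextRectanglesRequest)
// and CoreML models (VNCoreMLRequest).
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Bounding boxes come back in normalized coordinates (0...1).
            print("Face found at \(face.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```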

3. ARKit: Augmented Reality made easy

Augmented reality is being discussed more and more. With Microsoft launching HoloLens, for example, the market appears to be slowly moving towards AR solutions for new user experiences. ARKit provides great solutions to add augmented reality capabilities to your iOS apps without having to rely on any proprietary tools other than Apple's. ARKit doesn't need markers or calibration to work; it simply "works".

How does it work?

ARKit is a Visual Inertial Odometry (VIO) system: it maps real-world positions from the pixels of the camera image and uses the device's other sensors to track its position in space (note that it currently only detects 2D horizontal planes, like floors and tables). As most iPhones have a single camera rather than a "depth camera", moving the phone lets it take pictures from various angles while still knowing where in space each one was taken, by comparing it to the previous ones. With a lot of complex calculations, which only powerful phones can carry out (the iPhone 6s is the minimum configuration), this VIO system allows ARKit to create a real-time 3D mapping (world tracking) used in augmented reality.
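Getting world tracking running takes very little code. A minimal sketch using the SceneKit-backed AR view (class and outlet names are ours):

```swift
import UIKit
import ARKit

// Starting world tracking in an ARSCNView, with horizontal plane detection
// enabled so ARKit reports surfaces such as floors and tables.
class ARViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```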

What can you do with it?

The most famous use of augmented reality consists in adding 2D or 3D objects, animated or not, to a real-time capture of the real world. Thanks to ARKit's world tracking, these objects will remain where they belong. For example, if you add a 3D model of a lamp on a real table, moving around the table won't change the position of the lamp. ARKit also estimates the ambient light exposure and applies it to 3D objects so they can blend perfectly into the real world and its actual luminosity.
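The lamp example could look like this, using ARKit's hit testing against detected planes (the placeLamp function and the sphere geometry are our stand-ins for a real lamp model):

```swift
import ARKit
import SceneKit

// When the user taps the screen, hit-test against a detected plane and put
// a node at that real-world position. World tracking then keeps the node
// in place as the user moves around.
func placeLamp(at screenPoint: CGPoint, in sceneView: ARSCNView) {
    guard let result = sceneView.hitTest(screenPoint,
                                         types: .existingPlaneUsingExtent).first
    else { return }

    let lampNode = SCNNode(geometry: SCNSphere(radius: 0.05))
    let transform = result.worldTransform
    lampNode.position = SCNVector3(transform.columns.3.x,
                                   transform.columns.3.y,
                                   transform.columns.3.z)
    sceneView.scene.rootNode.addChildNode(lampNode)
}
```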

Augmented reality can also be used together with machine learning to display information about real objects. For example, an AR language translation application can detect text in the real world, read it and translate it thanks to machine learning. Then, with augmented reality, the application can display a replacement text in the desired language at the exact position of the original one.

The possibilities of augmented reality are unlimited, for professional as well as private use, across a very wide range of domains.

Why use ARKit?

ARKit solves a lot of the issues involved in coupling VIO algorithms and sensors, as Apple only has to support the specific hardware it chooses, making it a reliable technology for augmented reality.

You might want to try ARKit to build "proof of concept" apps, making your AR ideas real on iOS and deploying them on that platform first, as it is already widely available (by contrast, HoloLens or Tango on Android are not ready for mass distribution yet, as they only work on very few devices). Of course, there are a lot of different AR solutions, and ARKit may not be technically much better. But its strength lies in its reliability on existing devices and the way Apple made it easy to use in the iOS SDK.

4. Drag and Drop: taking the iPad to the next level

Drag and drop is a feature all computer users are quite used to, especially Mac users, as being able to drag and drop almost anything anywhere (as long as it makes sense to do so, of course) is an important part of the macOS user experience. With the iPad Pro, Apple has taken a step towards making the iPad a viable replacement for a Mac (for standard usage at least).

But the iPad was still missing real interactions between apps. In iOS 11, with Drag and Drop and a brand new multitasking system, the iPad reaches a new level, making the multitouch capability of the device a real strength.

How does it work?

Just as you would expect, drag and drop on the iPad works by long-pressing an item. The item then follows the finger that picked it up until the user drops it somewhere. But that's not all! You can also gather multiple items (even of different kinds) by tapping them with another finger, and use your other hand to navigate easily between apps to drop your items at the right place. It's best seen with your own eyes, in the keynote demo of Drag and Drop.

Using it in your apps

With the new Drag and Drop API, you can make any visual item of your app draggable, and any view can receive dropped items as long as it is configured to do so. You can choose which types of objects to receive, including custom ones that will only work between your very own apps.
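Here is a minimal sketch of the drag side (class and outlet names are ours; the API calls are UIKit's):

```swift
import UIKit

// Making a label draggable: attach a UIDragInteraction and provide the
// dragged content through an NSItemProvider.
class DraggableLabelController: UIViewController, UIDragInteractionDelegate {
    @IBOutlet var label: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
        label.isUserInteractionEnabled = true
        label.addInteraction(UIDragInteraction(delegate: self))
    }

    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        let text = (label.text ?? "") as NSString
        return [UIDragItem(itemProvider: NSItemProvider(object: text))]
    }
}
```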

About security

Drag and Drop is secure: until the dragged items are dropped, the underlying objects won't be exposed to the apps they fly over. When you drop an item, if the drop is valid (the object type is supported by the destination), the object is transferred asynchronously to the target app, provided the source app allows it.
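And here is a sketch of the drop side, where the asynchronous transfer described above happens (again, the class name is ours):

```swift
import UIKit

// The receiving side: declare what the view accepts, then load the dropped
// objects asynchronously once the user actually drops them.
class DropTargetController: UIViewController, UIDropInteractionDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addInteraction(UIDropInteraction(delegate: self))
    }

    // Only accept drops whose content can be loaded as text.
    func dropInteraction(_ interaction: UIDropInteraction,
                         canHandle session: UIDropSession) -> Bool {
        return session.canLoadObjects(ofClass: NSString.self)
    }

    func dropInteraction(_ interaction: UIDropInteraction,
                         sessionDidUpdate session: UIDropSession) -> UIDropProposal {
        return UIDropProposal(operation: .copy)
    }

    // The data transfer only happens here, asynchronously, after the drop.
    func dropInteraction(_ interaction: UIDropInteraction,
                         performDrop session: UIDropSession) {
        session.loadObjects(ofClass: NSString.self) { items in
            let strings = items.compactMap { $0 as? NSString }
            print("Received: \(strings)")
        }
    }
}
```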

What about the iPhone?

Drag and Drop WILL be available on the iPhone too, although it will only work inside a single application: you won't be able to drag and drop between apps. This decision makes sense, as the iPhone can't display multiple applications at the same time, and using both hands to navigate while dragging would be a bad user experience on such a small screen.

5. Document-based Applications

iOS 11 (finally) includes a consistent system-provided user interface for opening and creating documents. Users can now browse files saved by apps or by themselves, from the new "Files" application as well as from third-party applications.

What kind of applications?

To use Apple's example, let's say you want to develop an app that allows you to create and save 2D particle effects. Saving and loading a file containing a 2D particle effect can now be done using the new interface provided by iOS 11. This way, users can choose where they want their files to be saved and manage them as they wish. The interface can be customised to fit the design of your application.
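A hedged sketch of what this could look like: ParticleDocument and its raw Data storage are hypothetical stand-ins for a real particle effect format, while UIDocument and the document browser are the iOS 11 APIs doing the work:

```swift
import UIKit

// UIDocument handles the coordinated reading and writing of the file.
class ParticleDocument: UIDocument {
    var particleData = Data()

    override func contents(forType typeName: String) throws -> Any {
        return particleData
    }

    override func load(fromContents contents: Any, ofType typeName: String?) throws {
        if let data = contents as? Data {
            particleData = data
        }
    }
}

// Presenting iOS 11's system-provided browser for the file types you support.
func showDocumentBrowser(from presenter: UIViewController) {
    let browser = UIDocumentBrowserViewController(
        forOpeningFilesWithContentTypes: ["public.data"])
    presenter.present(browser, animated: true)
}
```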

To give a nice appearance to your custom files, you can even provide a preview, generated by your application, that will be displayed in the file browser instead of the standard file icon, just as photos and PDFs are.

To sum up, this new system looks a lot like the one you can experience on macOS, and every app that creates or modifies files should now use this interface for the sake of consistency in the user experience on iOS.

We really enjoyed WWDC 2017 and we hope you now have a lot of new ideas to create applications or enhance existing ones.