Basics

With WWDC 2019 over, it’s a good time to reflect on everything that was announced and start seeing how we can put it into practice. The purpose of this article is to round up the changes to the Metal framework and tools and provide pointers to where you can learn more. This was a banner year, with significant improvements to Metal itself; major advancements in related frameworks like Metal Performance Shaders (MPS), Core ML, and ARKit; and the exciting introduction of an all-new augmented reality engine, RealityKit.

Since this was such a big year, this post can’t hope to cover all the improvements that are included in macOS 10.15 Catalina and iOS 13. In particular, ray tracing with MPS was a hot topic, but you’ll have to look elsewhere for coverage. This session should get you up to speed.

The purpose of this article is to describe and explore a foundational concept in Metal: vertex descriptors. Vertex descriptors are the glue between your application code (written in Objective-C or Swift) and your shader functions (written in the Metal Shading Language). They describe the shape of data consumed by your shaders.
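To make this concrete, here is a minimal sketch of a vertex descriptor. The particular layout, an interleaved position and texture coordinate in a single buffer, is an illustrative assumption, not a layout prescribed by the article:

```swift
import Metal

// Describe a vertex with a float3 position followed by a float2 texture
// coordinate, interleaved in one buffer bound at buffer index 0.
let vertexDescriptor = MTLVertexDescriptor()

// Attribute 0: position (three floats at the start of each vertex)
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[0].bufferIndex = 0

// Attribute 1: texture coordinates (two floats, immediately after the position)
vertexDescriptor.attributes[1].format = .float2
vertexDescriptor.attributes[1].offset = MemoryLayout<Float>.size * 3
vertexDescriptor.attributes[1].bufferIndex = 0

// Layout of buffer 0: one vertex every five floats, stepped per vertex
vertexDescriptor.layouts[0].stride = MemoryLayout<Float>.size * 5
vertexDescriptor.layouts[0].stepFunction = .perVertex
```

The attribute indices here correspond to the `[[attribute(n)]]` qualifiers on the matching vertex struct in your shader source.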

When I was writing the first articles for this site (over four years ago), I didn’t have a firm grasp on the purpose of vertex descriptors, and since their use is optional, this omission persisted for far too long. The sample code for most articles has been updated periodically, and most samples now use vertex descriptors, so it seemed fitting to write about them.

Getting Started

This article is a quick introduction to how to use the Metal, MetalKit, and Model I/O frameworks in Swift. If you know your way around UIKit or Cocoa development, you should be able to follow along for the most part. Some things, like shaders and matrices, may be unfamiliar, but you can learn them as you explore Metal on your own. The purpose here is to give you a template to build on.

First things first. Use Xcode to create a new project from the iOS Single View App template, then add import MetalKit at the top of the ViewController.swift file. We could use the Game template instead and have some of the boilerplate written for us, but writing it out long-hand will give us a better appreciation for the moving parts; the Game template’s extra scaffolding also tends to get in the way of understanding the basics.
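The result of those first steps might look something like the sketch below: a view controller that creates a Metal device and adds an MTKView to its view hierarchy. The clear color chosen here is arbitrary:

```swift
import UIKit
import MetalKit

class ViewController: UIViewController {
    var mtkView: MTKView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Ask the system for the default GPU; fail loudly if Metal is unavailable.
        guard let device = MTLCreateSystemDefaultDevice() else {
            fatalError("Metal is not supported on this device")
        }

        // Create a Metal-backed view that fills the view controller's view.
        mtkView = MTKView(frame: view.bounds, device: device)
        mtkView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        mtkView.clearColor = MTLClearColor(red: 0.2, green: 0.3, blue: 0.4, alpha: 1.0)
        view.addSubview(mtkView)
    }
}
```

From here, you would typically assign a delegate to the view to receive per-frame draw callbacks.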

MetalKit is a framework introduced in iOS 9 and OS X El Capitan that greatly eases certain tasks such as presenting Metal content in a UIView or NSView, texture loading, and working with model data.

This post is an overview of the features offered by MetalKit. Many of our articles so far have focused on details that are not expressly related to Metal, but are instead required to give Metal something to draw on the screen: texture loading, 3D model loading, and setting up the interface between Metal and UIKit.

MetalKit seeks to make these tasks easier by providing classes that perform common operations. In this article, we’ll look briefly at these capabilities.
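As one example of these conveniences, MTKTextureLoader turns texture loading into a couple of lines. The asset name "checkerboard" below is a hypothetical placeholder for an image in your asset catalog:

```swift
import MetalKit

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("Metal is not supported on this device")
}

// MTKTextureLoader handles decoding, staging, and upload to the GPU.
let textureLoader = MTKTextureLoader(device: device)
let options: [MTKTextureLoader.Option: Any] = [
    .generateMipmaps: true,   // build the full mipmap chain at load time
    .SRGB: false              // treat the image data as linear
]

do {
    let texture = try textureLoader.newTexture(name: "checkerboard",
                                               scaleFactor: 1.0,
                                               bundle: nil,
                                               options: options)
    print("Loaded texture: \(texture.width)x\(texture.height)")
} catch {
    print("Texture loading failed: \(error)")
}
```

Compare this with the manual approach of decoding an image with Core Graphics and copying its bytes into an MTLTexture region by hand.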

In November 2014, I was privileged to deliver a talk to San Francisco’s Swift Language User Group, hosted by Realm. They’ve now uploaded the video, with subtitles and a synchronized slide deck, to their site. You can view the video here.

In this article we will learn about mipmapping, an important technique for rendering textured objects at a distance. We will see why mipmapping matters, how it complements regular texture filtering, and how to use the blit command encoder to generate mipmaps efficiently on the GPU.

The sample app illustrates the effect of various mipmapping filters. The mipmap used here is artificially colored for demonstration purposes.
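Generating a mipmap chain on the GPU takes only a few calls. The sketch below assumes you already have a command queue and a texture that was created with mipmapped storage (i.e., its descriptor had mipmapLevelCount greater than 1):

```swift
import Metal

// Fill in all mip levels of `texture` by downsampling level 0 on the GPU.
// The texture must have been created with a full mipmap chain allocated.
func generateMipmaps(for texture: MTLTexture, queue: MTLCommandQueue) {
    guard let commandBuffer = queue.makeCommandBuffer(),
          let blitEncoder = commandBuffer.makeBlitCommandEncoder() else {
        return
    }
    blitEncoder.generateMipmaps(for: texture)
    blitEncoder.endEncoding()
    commandBuffer.commit()
}
```

Because this work happens on a blit encoder, it can be scheduled alongside your other GPU work rather than blocking the CPU.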

Textures are a central topic in rendering. Although they have many uses, one of their primary purposes is to provide a greater level of detail to surfaces than can be achieved with vertex colors alone.

In this post, we’ll talk about texture mapping, which helps us bring virtual characters to life. We’ll also introduce samplers, which give us powerful control over how texture data is interpreted while drawing. Along the way, we will be assisted by a cartoon cow named Spot.
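A sampler state captures your filtering and addressing choices in one object. Here is a minimal sketch of a trilinear sampler with repeat addressing; the filter and address modes shown are just one common configuration:

```swift
import Metal

// Create a sampler that filters linearly within and between mip levels
// and tiles the texture when coordinates fall outside [0, 1].
func makeLinearSampler(device: MTLDevice) -> MTLSamplerState? {
    let descriptor = MTLSamplerDescriptor()
    descriptor.minFilter = .linear      // filtering when the texture is minified
    descriptor.magFilter = .linear      // filtering when the texture is magnified
    descriptor.mipFilter = .linear      // blend between adjacent mip levels
    descriptor.sAddressMode = .repeat   // wrap horizontally
    descriptor.tAddressMode = .repeat   // wrap vertically
    return device.makeSamplerState(descriptor: descriptor)
}
```

The resulting sampler state is set on a render command encoder so the fragment shader can use it when sampling Spot’s texture.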

In this post, we’ll finally start rendering in 3D. In order to get there, we’ll talk about how to load 3D model data from disk, how to tell Metal to draw from a vertex buffer using indices, and how to manipulate objects in real time.
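Indexed drawing, mentioned above, boils down to one encoder call. The sketch below assumes you already have an active render command encoder, a vertex buffer, and a 16-bit index buffer; those names are placeholders:

```swift
import Metal

// Issue an indexed draw: vertices come from `vertexBuffer`, and `indexBuffer`
// lists which vertices form each triangle, allowing shared vertices to be
// stored only once.
func drawMesh(encoder: MTLRenderCommandEncoder,
              vertexBuffer: MTLBuffer,
              indexBuffer: MTLBuffer,
              indexCount: Int) {
    encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
    encoder.drawIndexedPrimitives(type: .triangle,
                                  indexCount: indexCount,
                                  indexType: .uint16,
                                  indexBuffer: indexBuffer,
                                  indexBufferOffset: 0)
}
```

Model I/O and MetalKit can produce these buffers for you when loading a mesh from disk, which is how the post’s sample gets its 3D model data.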

This post assumes that you know a little linear algebra. I have written an incomplete introduction to the subject in this post. There are also many excellent resources around the Internet should you need more information on a particular topic.

In our inaugural post of the series, we got a glimpse of many of the essential moving parts of the Metal framework: devices, textures, command buffers, and command queues. Although that post was long, it couldn’t possibly cover all these things in detail. This post will add a little more depth to the discussion of the parts of Metal that are used when rendering geometry. In particular, we’ll take a trip through the Metal rendering pipeline, introduce functions and libraries, and issue our first draw calls.
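The pipeline, functions, and libraries mentioned here come together when you build a render pipeline state. In this sketch, the shader function names "vertex_main" and "fragment_main" are assumptions; use whatever names your .metal source declares:

```swift
import Foundation
import Metal

// Build a render pipeline state from the compiled shaders in the app's
// default Metal library.
func makePipelineState(device: MTLDevice) throws -> MTLRenderPipelineState {
    guard let library = device.makeDefaultLibrary() else {
        throw NSError(domain: "MetalExample", code: -1,
                      userInfo: [NSLocalizedDescriptionKey: "No default library"])
    }

    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.vertexFunction = library.makeFunction(name: "vertex_main")
    descriptor.fragmentFunction = library.makeFunction(name: "fragment_main")
    // Must match the pixel format of the render target (e.g., an MTKView).
    descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm

    return try device.makeRenderPipelineState(descriptor: descriptor)
}
```

Pipeline state creation is comparatively expensive, so it is done once up front rather than per frame.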

The end goal of this post is to show you how to start rendering actual geometry with Metal. The triangles we draw will only be in 2D. Future posts will introduce the math necessary to draw 3D shapes and eventually animate 3D models.

This post covers the bare minimum needed to clear the screen to a solid color in Metal. Even this simple operation requires many of the concepts exposed by the Metal framework. The next few posts in the Up and Running series will build on what we discuss here and take us through the basics of 3D rendering and image processing.
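Stripped to its essentials, clearing the screen looks something like the sketch below, which assumes an existing MTKView and command queue and would run once per frame; the red clear color is arbitrary:

```swift
import Metal
import MetalKit

// Clear the view's drawable to a solid color without drawing any geometry.
func clear(view: MTKView, commandQueue: MTLCommandQueue) {
    guard let passDescriptor = view.currentRenderPassDescriptor,
          let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer() else {
        return
    }

    passDescriptor.colorAttachments[0].loadAction = .clear
    passDescriptor.colorAttachments[0].clearColor =
        MTLClearColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 1.0)

    // An encoder with no draw calls still performs the load action (the clear).
    let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor)
    encoder?.endEncoding()

    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```

Even this “do nothing” frame exercises the render pass descriptor, command buffer, encoder, and drawable, which is exactly why it makes a good first step.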