Let’s continue to look at the functionalities and APIs offered by SceneKit. In the previous tutorial, I highlighted some of the basic classes you can use to create and render a 3D scene and add 3D objects to it using the built-in shapes. In this second part of the tutorial, we are going to see how to handle the appearance of a node. The information provided in this post and the previous one is relevant if you are building ARKit applications, so I suggest you review the concepts highlighted in Part I and then check the new topics provided here.

Geometry

An SCNGeometry object provides the visible part of a node. As shown in Part I of this tutorial, SceneKit provides developers with some basic 3D shapes, such as boxes, spheres, cones, planes, and more. You position and orient a geometry in a scene by attaching it to an instance of SCNNode and using the position and orientation of that node.

Creating a geometry

SceneKit allows you to create a geometry in different ways. The most common approach is to load the geometry from a scene file created with an external 3D authoring tool, because SceneKit is not really a 3D modeling tool. So, if you want to build complex 3D models for your apps, the best approach is to use Cinema 4D, Maya, Modo, and so on. I personally use Cheetah3D by MW3D-Solutions. This tool is really easy to learn and exports to different 3D file formats, including Collada (DAE), which is very popular on iOS and other mobile platforms.

Geometries can be created by using and customizing SceneKit’s built-in primitive shapes (SCNBox, SCNSphere, SCNCone, etc.) as I highlighted in the previous post.

You can also create geometries from vertex data. The surface of a geometry can be obtained by connecting a large collection of 3D points (vertices) with triangles. In general, the vertices associated with a geometry are immutable, but SceneKit provides several ways to modify (animate) them. For example, you can use an SCNMorpher or SCNSkinner object to deform a geometry’s surface, or run animations created in an external 3D authoring tool and loaded from a scene file. You can also use methods in the SCNShadable protocol to add custom GLSL or Metal shaders that alter SceneKit’s rendering of a geometry.

Multiple nodes can reference the same geometry object, allowing it to appear at different positions in a scene.

Custom geometries

If you want to build a custom geometry programmatically, you can use the init(sources:elements:) initializer and provide the geometry sources and geometry elements.

Geometry source

A geometry source is an instance of the SCNGeometrySource class. It is a container for vertex data forming part of the definition of a three-dimensional object. You create a geometry source using one of the following initializers (the last three are type-safe convenience initializers):

  • init(data:semantic:vectorCount:usesFloatComponents:componentsPerVector:bytesPerComponent:dataOffset:dataStride:)
  • init(buffer:vertexFormat:semantic:vertexCount:dataOffset:dataStride:)
  • init(vertices:)
  • init(normals:)
  • init(textureCoordinates:)

Let’s take a look at the first initializer in the above list. This init method takes an instance of Data as its first argument (NSData if you are working in Objective-C). The data is a collection of values representing different types of information. The type of each data element is defined by the second argument (semantic) of the init method. SceneKit offers the following semantic types (defined in the SCNGeometrySource.Semantic enumeration):

  • vertex defines that the data argument contains the position (x, y, and z) of each vertex. Vertex position data is typically an array of 3- or 4-component vectors (SCNVector3 or simd_float3, SCNVector4 or simd_float4);
  • normal defines that the data argument contains the surface normal vector at each vertex in the geometry. Vertex normal data is typically an array of 3- or 4-component vectors;
  • color defines that the data argument contains the color for each vertex in the geometry. Vertex color data is typically an array of 3- or 4-component vectors;
  • texcoord defines that the data argument contains texture mapping coordinates for each vertex in the geometry. Unlike other semantics, a geometry may contain multiple sources for texture coordinates – each corresponds to a separate mappingChannel number that you can use when associating textured materials. Texture coordinate data is typically an array of two-component vectors (simd_float2);
  • tangent defines that the data argument contains the surface tangent vector at each vertex in the geometry. SceneKit uses this information to compute advanced lighting effects on the surface. Vertex tangent data is typically an array of 3- or 4-component vectors;
  • vertexCrease defines that the data argument contains crease data for each vertex in the geometry. SceneKit uses this information to determine the sharpness of corners and smoothness of surfaces when you change a geometry’s subdivisionLevel property. Vertex crease data is an array of scalar floating-point values, where each value determines the smoothness or sharpness of the corresponding vertex. A value of 0.0 specifies a completely smoothed corner, and a value of 10.0 or greater specifies an infinitely sharp point.
  • edgeCrease defines that the data argument contains crease data for each edge in the geometry. SceneKit uses this information to determine the sharpness of edges and smoothness of surfaces when you change a geometry’s subdivisionLevel property. Edge crease data is an array of scalar floating-point values, where each value determines the smoothness or sharpness of the edge identified by the primitive at the corresponding index in the geometry’s edgeCreasesElement geometry element. A value of 0.0 specifies a completely smoothed edge, and a value of 10.0 or greater specifies an infinitely sharp edge.
  • boneWeights defines that the data argument contains skeletal animation data for each vertex in the geometry. SceneKit uses this information to determine how much a vertex’s position is influenced by the positions of bone nodes in the skeleton. I will cover Skeletal animations in a future post. You can find more information in the SCNSkinner class description.
  • boneIndices defines that the data argument contains skeletal animation data for each vertex in the geometry. SceneKit uses this information to determine which bone nodes in the skeleton affect the behavior of each vertex.

The third argument (vectorCount) in the above init method is an integer representing the number of elements provided in the data argument. The usesFloatComponents argument is a Boolean specifying whether the data components are floating-point values (more accuracy) or integers (better performance). The componentsPerVector argument defines how many components each element of the data argument has. The bytesPerComponent argument defines the size, in bytes, of each vector component of the data argument. The dataOffset argument defines an offset, in bytes, from the beginning of the data to the first vector component to be used in the geometry source. Finally, the dataStride argument defines the number of bytes from each vector to the next in the data. You can use the dataOffset and dataStride parameters together to interleave data for multiple geometry sources in the same array, improving rendering performance.
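To make these parameters concrete, here is a small sketch (the vertex values are arbitrary, not taken from this post) that packs three positions into a Data instance and builds a vertex source from it:

```swift
import Foundation
import SceneKit

// Three arbitrary vertex positions packed as 3-component float vectors.
// SIMD3<Float> has a 16-byte stride (the fourth lane is padding),
// so dataStride must reflect that.
let positions: [SIMD3<Float>] = [
    SIMD3(0, 1, 0),
    SIMD3(-1, 0, 0),
    SIMD3(1, 0, 0)
]

let data = positions.withUnsafeBufferPointer { Data(buffer: $0) }

let vertexSource = SCNGeometrySource(
    data: data,
    semantic: .vertex,
    vectorCount: positions.count,              // 3 vertices
    usesFloatComponents: true,                 // float components
    componentsPerVector: 3,                    // x, y, z
    bytesPerComponent: MemoryLayout<Float>.size,
    dataOffset: 0,
    dataStride: MemoryLayout<SIMD3<Float>>.stride
)
```

If you later interleave positions and normals in the same buffer, only dataOffset and dataStride change; the rest of the call stays the same.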

The remaining init methods are simplified versions of the first one, so I will not go through their details. However, I want to mention the second init method, which is a small variation of the first. This initializer is very important when you work with Metal, because it takes a Metal buffer (MTLBuffer) as an argument instead of a Data instance.

Geometry element

A geometry element is a container for indices describing how vertices connect to define a three-dimensional object. You instantiate a geometry element object using one of the following init methods:

  • init(data:primitiveType:primitiveCount:bytesPerIndex:)
  • init(indices:primitiveType:)

The first init method requires a list of indices packed in a Data instance. The second argument (primitiveType) defines the type of drawing primitive connecting the vertices. The drawing primitives are defined in the SCNGeometryPrimitiveType enumeration:

  • point: The geometry element’s data is a sequence of unconnected points;
  • line: The geometry element’s data is a sequence of line segments, with each line segment described by two new vertices;
  • triangles: The geometry element’s data is a sequence of triangles, with each triangle described by three new vertices;
  • triangleStrip: The geometry element’s data is a sequence of triangles, with each triangle described by one new vertex and two vertices from the previous triangle; and
  • polygon: The geometry element’s data is a sequence of polygons, each with an arbitrary number of vertices; the element data first lists the vertex count of each polygon, followed by the indices themselves. Introduced in iOS 10.
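As a quick sketch (not from the original post), a single triangle connecting vertices 0, 1, and 2 can be built with the convenience initializer, which packs the index array into a Data instance for you:

```swift
import SceneKit

// Indices for one triangle; Int32 is a common choice for index storage.
let indices: [Int32] = [0, 1, 2]
let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
```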

Example of a custom geometry

Let’s see how to build a custom geometry. Launch Xcode and create a single-view application. Name it CustomGeometry. Open the Main.storyboard file and add a SceneKit view (SCNView) to the main view of the view controller. Add constraints to the safe area. Now, open the ViewController.swift file and add the following code:
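The original listing is not reproduced here; a minimal version of the view controller, assuming an outlet named sceneView, could look like this:

```swift
import UIKit
import SceneKit

class ViewController: UIViewController {

    // Connect this outlet to the SceneKit view in Interface Builder.
    @IBOutlet weak var sceneView: SCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}
```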

Connect the outlet to the scene kit view in Interface Builder. Now, let’s add a new SceneKit scene file to our project and name it scene.scn. Add a new Group to the project and move the scene.scn file in the new group. Then rename the group art.scnassets. Now, edit the viewDidLoad() method in the following way:
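The original listing is missing here; a plausible version of viewDidLoad(), assuming an SCNView outlet named sceneView, the art.scnassets/scene.scn file created above, and a generateIcosahedron() method returning an SCNNode, might be:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // Load the (still empty) scene from art.scnassets.
    let scene = SCNScene(named: "art.scnassets/scene.scn")
    sceneView.scene = scene
    sceneView.allowsCameraControl = true
    sceneView.backgroundColor = .black

    // Build the custom geometry and add it to the scene.
    let icosahedronNode = generateIcosahedron()
    scene?.rootNode.addChildNode(icosahedronNode)
}
```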

The generateIcosahedron() method does all the work:
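The original implementation is not reproduced here; the following sketch builds a wireframe icosahedron. The 12 vertices lie on three orthogonal golden-ratio rectangles, and the 30 unique edges are derived from the standard 20-face index list and drawn with line primitives:

```swift
import SceneKit

func generateIcosahedron() -> SCNNode {
    // The 12 vertices of an icosahedron lie on three orthogonal golden rectangles.
    let t = (1.0 + sqrt(5.0)) / 2.0
    let vertices: [SCNVector3] = [
        SCNVector3(-1,  t,  0), SCNVector3( 1,  t,  0),
        SCNVector3(-1, -t,  0), SCNVector3( 1, -t,  0),
        SCNVector3( 0, -1,  t), SCNVector3( 0,  1,  t),
        SCNVector3( 0, -1, -t), SCNVector3( 0,  1, -t),
        SCNVector3( t,  0, -1), SCNVector3( t,  0,  1),
        SCNVector3(-t,  0, -1), SCNVector3(-t,  0,  1)
    ]
    let vertexSource = SCNGeometrySource(vertices: vertices)

    // The 20 triangular faces of the icosahedron.
    let faces: [[Int32]] = [
        [0, 11, 5], [0, 5, 1], [0, 1, 7], [0, 7, 10], [0, 10, 11],
        [1, 5, 9], [5, 11, 4], [11, 10, 2], [10, 7, 6], [7, 1, 8],
        [3, 9, 4], [3, 4, 2], [3, 2, 6], [3, 6, 8], [3, 8, 9],
        [4, 9, 5], [2, 4, 11], [6, 2, 10], [8, 6, 7], [9, 8, 1]
    ]

    // Collect each triangle's edges, deduplicated: the 20 faces share 30 edges.
    var edges = Set<[Int32]>()
    for f in faces {
        for (a, b) in [(f[0], f[1]), (f[1], f[2]), (f[2], f[0])] {
            edges.insert([min(a, b), max(a, b)])
        }
    }
    let indices = edges.flatMap { $0 }
    let element = SCNGeometryElement(indices: indices, primitiveType: .line)

    let geometry = SCNGeometry(sources: [vertexSource], elements: [element])
    return SCNNode(geometry: geometry)
}
```

Using triangles instead of lines is a one-line change: pass the flattened face list with the .triangles primitive type.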

If you want to increase the line width, you can use the scene renderer to do so. Add the SCNSceneRendererDelegate to the view controller and make the view controller the delegate of the scene renderer:
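A sketch of this delegate, assuming the ViewController and sceneView names used in this project. Note that glLineWidth is an OpenGL call (import OpenGLES on iOS, OpenGL on macOS), so it only takes effect when the SCNView uses the OpenGL rendering API; the Metal renderer ignores it:

```swift
// In viewDidLoad(), make the view controller the renderer's delegate:
// sceneView.delegate = self

extension ViewController: SCNSceneRendererDelegate {

    // Called on SceneKit's rendering thread just before each frame is drawn.
    func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
        // Widen the lines of the wireframe geometry.
        glLineWidth(5.0)
    }
}
```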

We can also add colors to each vertex. In the generateIcosahedron() method, add the following code after you generate the geometry source vertexSource.
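The original snippet is not shown; a sketch with one arbitrary RGB color per icosahedron vertex (12 in total) could be:

```swift
import Foundation
import SceneKit

// One RGB color per vertex (12 vertices). SIMD3<Float> has a 16-byte stride.
let colors: [SIMD3<Float>] = (0..<12).map { i in
    SIMD3(Float(i) / 11.0, 0.5, 1.0 - Float(i) / 11.0)
}
let colorData = colors.withUnsafeBufferPointer { Data(buffer: $0) }

let colorSource = SCNGeometrySource(
    data: colorData,
    semantic: .color,
    vectorCount: colors.count,
    usesFloatComponents: true,
    componentsPerVector: 3,
    bytesPerComponent: MemoryLayout<Float>.size,
    dataOffset: 0,
    dataStride: MemoryLayout<SIMD3<Float>>.stride
)
```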

Finally, edit the last line of the generateIcosahedron() method in this way:
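The geometry is then built from both sources together with the element, so the edited line might look like this (vertexSource, colorSource, and element being the values created earlier in the method):

```swift
// The geometry now carries both the position and the color source.
let geometry = SCNGeometry(sources: [vertexSource, colorSource], elements: [element])
```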

Build and run and you should see something like the following figure.

Figure 1. Custom geometry.

In a future post, we will see how to use Metal shaders to modify this shape in real time.

Lights and Materials

When building a 3D scene in SceneKit, you usually create nodes with their own geometry. To make the scene look realistic, you have to add two important components to your scene: materials and lights. Materials define the appearance of the geometry’s surface when rendered. Lights illuminate the scene and affect its appearance. Both elements are very important and need to be defined. The final appearance of the scene results from how lights and materials influence each other.

Lights

Light simulation is a very complex topic in computer graphics, so I will try to simplify the concepts here. Light is a particular kind of electromagnetic radiation of a frequency that can be detected by the human eye. Light starts from its source (an object emitting light: the sun, a light bulb, etc.). Then, the light travels through a medium (air, space, glass, water, or any material able to transmit it) until it reaches a surface. When it reaches a surface, part of the light is reflected and part of it is absorbed or transmitted by the surface.

In SceneKit, a light source is represented by an instance of the SCNLight class. You create a light object and then attach it to a node to illuminate the scene. SceneKit provides different types of light sources:

  • ambient light illuminates the scene from all directions. Because the intensity of light from an ambient source is the same everywhere in the scene, its position and direction have no effect. Also, attenuation, spotlight angle, and shadow attributes do not apply to ambient lights.
  • omni light is an omnidirectional light source. Because an omnidirectional light casts equal illumination in all directions, the orientation of the node containing the light has no effect. Spotlight angle and shadow attributes do not apply to omnidirectional lights.
  • directional light illuminates the scene with constant direction and intensity. The position of the node containing this light has no effect on the scene. Attenuation and spotlight angle attributes do not apply to directional lights.
  • spot light illuminates a cone-shaped area. The position and orientation of the node containing the light determines the area lit by the spotlight, and all lighting attributes affect its appearance.
  • IES (Illuminating Engineering Society) light illuminates the scene with the shape, direction, and intensity of illumination specified in an IES standard file. This file format was created for the electronic transfer of photometric data over the web. IES light files are created by many major lighting manufacturers and can be downloaded freely from their websites. SCNLight also provides the iesProfileURL property to import an IES standard file into SceneKit. After setting this property to a URL that references a valid IES profile, you must set the light’s type property to IES to load that file and apply its effect. You can download an example of a light IES file from here.
  • probe light can be used to measure the direction and intensity of light at a point in 3D space. Probes are local lights facing the scene that capture the local diffuse contribution. When shading a point on a surface, SceneKit can find the closest light probes and interpolate lighting from them. Light probes are really lightweight and efficient, and Apple recommends adding many of them to the scene.

Depending on the final visual appearance you want to give your scene, you can choose one or more light source types. To create a light source, you create an instance of SCNLight. Then, you specify the type of light and, depending on the type you choose, configure its properties. Finally, you either create a new node or use an existing node of your scene and attach the light to it. For example, here is how to create a spotlight source:
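A sketch of such a spotlight (the placement values are arbitrary examples):

```swift
import SceneKit

let scene = SCNScene()

// Create and configure the light.
let spot = SCNLight()
spot.type = .spot
spot.spotInnerAngle = 30.0   // cone is fully lit up to this angle (degrees)
spot.spotOuterAngle = 60.0   // light fades to zero between inner and outer angle
spot.castsShadow = true

// Attach the light to a node to position and orient it in the scene.
let spotNode = SCNNode()
spotNode.light = spot
spotNode.position = SCNVector3(0, 5, 5)
spotNode.eulerAngles = SCNVector3(-Double.pi / 4, 0, 0)  // aim down toward the origin
scene.rootNode.addChildNode(spotNode)
```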

Light probes

Light probes deserve particular attention. As discussed above, a light probe is a kind of tool you can use to measure the light in your scene. Why would you want to do that? Well, when simulating the light in a scene, it is very difficult to estimate the illumination at a point if there is an object between your nodes and the light source. This object can be there simply because it is part of the environment.

In a future post, we will talk about Physically Based Rendering (PBR) techniques. Here, I just want to mention that there is a way to render a scene so that it looks real. This is done using photorealistic rendering. Light probes are very important in this case, because they can help the simulation compute ambient occlusion.

Working with lights

Once you create a light and specify its type, you can set and modify different light features depending on the selected light type. Features such as the light attenuation (attenuationStartDistance, attenuationEndDistance, attenuationFalloffExponent), color (color), temperature, and intensity give you powerful control over the lights.

Shadows

Light objects are responsible for shadows. If you want to render shadows in your scene, you need to set the light’s castsShadow property to true (by default, this property is set to false). The shadowColor property allows you to control the color of the shadow. The default value is black with 50% opacity. shadowRadius is a CGFloat specifying the amount of blur around the edges of shadows. The shadowMapSize property represents the size of the 2D shadow map generated by SceneKit. Setting this value to CGSize.zero makes SceneKit choose the size automatically. Keep in mind that a large shadow map implies higher rendering costs.
The shadowMode property can be set to forward (the default value), deferred, or modulated. The first value makes SceneKit render shadows while rendering the scene. Deferred mode means that SceneKit renders shadows in a post-processing pass. The third mode (modulated) relies on the gobo property: instead of computing shadows from the scene geometry, SceneKit projects the gobo image to shape the light. A gobo (also known as a flag or cookie) is a stencil, gel, or other object placed just in front of a light source, shaping or coloring the beam of light. The gobo property is of type SCNMaterialProperty and applies only to lights whose type property is spot.
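In code, the shadow properties discussed above could be configured like this (the specific values are arbitrary examples):

```swift
import Foundation
import SceneKit

let light = SCNLight()
light.type = .directional

light.castsShadow = true      // shadows are off by default
light.shadowRadius = 5        // blur applied around shadow edges
light.shadowMapSize = CGSize(width: 2048, height: 2048)  // CGSize.zero lets SceneKit decide
light.shadowMode = .deferred  // render shadows in a post-processing pass
```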

Shadows example

Last time, we created a very simple scene with a sphere. Let’s use this scene to add a light and shadows.

Create a new Xcode project and choose a single-view application template. Add a new Scene and create a new scene.scnassets folder as I showed you last time. Add a SCNView to the view controller main view (don’t forget to add constraints). Edit the ViewController.swift file in the following way:
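The original listing is not shown; a minimal version, assuming an outlet named sceneView and the scene.scnassets/scene.scn file mentioned above, could be:

```swift
import UIKit
import SceneKit

class ViewController: UIViewController {

    // Connect this outlet to the SCNView in Interface Builder.
    @IBOutlet weak var sceneView: SCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Load the scene built in the scene editor.
        sceneView.scene = SCNScene(named: "scene.scnassets/scene.scn")
        sceneView.allowsCameraControl = true
    }
}
```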

Let’s open the scene file. Add to the scene file a sphere at position [0, 1, 0]. Add also a floor object to the scene. Then, change the camera position to [0, 5, 6] and its Euler angles to [–45, 0, 0]. Finally, add a directional light to the scene and change its Euler angles to [–30, 0, 0].

Let’s now add the shadows. Select the directional light and open the Attributes Inspector. In the Shadow section, check the Casts Shadows flag and you are done. From the Attributes Inspector, you can change many of the parameters I highlighted above (and more). You can download this simple project from here.

Try to add more objects to the scene and change the light attributes.

Conclusions

In this post, we analyzed how to affect the visual appearance of a SceneKit scene. We covered concepts such as lights and shadows. We also took a look at the geometry of a node. Putting together these new concepts and those highlighted in the previous post, you can now start to build your own realistic 3D scenes.
Next time, we are going to look at materials, another important component that will make your scene look realistic.

Geppy

Geppy Parziale (@geppyp) is cofounder of InvasiveCode. He has developed many iOS applications and taught iOS development to many engineers around the world since 2008. He worked at Apple as iOS and OS X Engineer in the Core Recognition team. He has developed several iOS and OS X apps and frameworks for Apple, and many of his development projects are top-grossing iOS apps that are featured in the App Store. Geppy is an expert in computer vision and machine learning.

