Machine-readable code reader in iOS

Apple’s introduction of PassKit and Passbook in 2012 provided developers with an expansive new range of applications. This interesting technology, however, was incomplete: iOS offered no built-in way to read a barcode, forcing developers to rely on costly third-party solutions to implement a barcode scanner. Now, in 2013, Apple has updated this technology, giving developers the ability to generate machine-readable barcodes and scan them directly with an iOS device. In this post we will closely examine these updated APIs.

Additionally, INVASIVECODE offers in-depth explanations and hands-on examples of the latest APIs through specialized consulting sessions and comprehensive iOS training classes.

The AVFoundation framework has gained new functionality: with iOS 7, an iPhone or iPad camera is all you need to read a barcode. In our previous post we demonstrated how to use AVFoundation to build a custom camera. Let’s review the primary steps of that demonstration while also checking out the new AVFoundation features in iOS 7.

Let’s create a new iOS 7 iPhone Xcode project using the Single View Application template and name it Scanner. Once done, open ViewController.h and add the following modules:
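The original listing is not shown here; a minimal sketch of the header, assuming only UIKit and AVFoundation are needed for the scanner, might look like this:

```objc
// ViewController.h
@import UIKit;
@import AVFoundation;  // camera capture and metadata (barcode) output

@interface ViewController : UIViewController
@end
```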

Modules are a new Objective-C feature. They let you replace #import directives and save you from manually adding the framework to your project: Xcode will link the right framework for you.

Let’s go back to the project. In the ViewController.m, add the following properties to the class extension:
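Since the original listing is missing, here is a plausible reconstruction of the class extension; the property names are illustrative assumptions, not the author’s exact code:

```objc
// ViewController.m
#import "ViewController.h"

@interface ViewController () <AVCaptureMetadataOutputObjectsDelegate>

@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureDevice *device;
@property (nonatomic, strong) AVCaptureDeviceInput *deviceInput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
@property (nonatomic, strong) AVCaptureMetadataOutput *metadataOutput;

@end
```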

We need these properties to build the AVFoundation stack. Notice that I also added the AVCaptureMetadataOutputObjectsDelegate protocol. The single method in this protocol is captureOutput:didOutputMetadataObjects:fromConnection: that allows the delegate to respond when a capture metadata output object receives relevant metadata objects through its connection.

Now, change the viewDidLoad method in this way:
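The original listing is not included; a minimal version, assuming the setup method is named setupCameraSession as described below, could be:

```objc
- (void)viewDidLoad
{
    [super viewDidLoad];
    [self setupCameraSession];    // build the AVFoundation capture stack
    [self.session startRunning];  // start the capture session
}
```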

The custom method setupCameraSession sets up the AVFoundation capture stack; the startRunning call then starts the capture session.

Let’s check the setupCameraSession method that I use to create and configure the session, its input, and its output, step by step.

Here, I just check whether an AVCaptureSession already exists. If not, I create one and assign it to the corresponding property. Then, I begin the session configuration:
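A sketch of this step, assuming the session property name used earlier:

```objc
// Lazily create the capture session, then open a configuration block
if (!self.session) {
    self.session = [[AVCaptureSession alloc] init];
}
[self.session beginConfiguration];
```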

Then, I create a capture device using the rear camera of the device:
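This step might look like the following; note that the default video capture device on an iPhone is the rear (back-facing) camera:

```objc
// The default video device is the rear camera
self.device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
```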

Now, I need to lock the configuration and adjust the camera autofocus. New in iOS 7 is the ability to restrict the focus to a given range. You have three options: AVCaptureAutoFocusRangeRestrictionNear, AVCaptureAutoFocusRangeRestrictionFar, and AVCaptureAutoFocusRangeRestrictionNone. I am going to choose the near restriction, since barcodes are usually held close to the camera; this simply helps the camera focus more quickly. Once the configuration is changed, I can unlock it.
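A sketch of this step, with a support check added since not every device allows range restriction:

```objc
NSError *error = nil;
if ([self.device lockForConfiguration:&error]) {
    // Restrict autofocus to nearby subjects (barcodes are scanned up close)
    if (self.device.isAutoFocusRangeRestrictionSupported) {
        self.device.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
    }
    [self.device unlockForConfiguration];
}
```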

After that, I create a device input for the previously created capture device and add it to the capture session:
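This step could be sketched as follows, with a canAddInput: check for safety:

```objc
NSError *inputError = nil;
self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.device
                                                         error:&inputError];
if ([self.session canAddInput:self.deviceInput]) {
    [self.session addInput:self.deviceInput];
}
```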

To show real-time results on the iPhone screen, I now create an AVCaptureVideoPreviewLayer using the capture session and add it to the layer of the view controller’s main view:
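A plausible version of this step:

```objc
self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
self.previewLayer.frame = self.view.bounds;
self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:self.previewLayer];
```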

Here is the new part: I add an AVCaptureMetadataOutput object to the capture session.
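A sketch of this step:

```objc
self.metadataOutput = [[AVCaptureMetadataOutput alloc] init];
if ([self.session canAddOutput:self.metadataOutput]) {
    [self.session addOutput:self.metadataOutput];
}
```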

Next, I create a dispatch queue and I assign it to the just created AVCaptureMetadataOutput object:
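This might look like the following; the queue label is a placeholder of my choosing:

```objc
// Metadata callbacks will be delivered on this serial queue
dispatch_queue_t metadataQueue =
    dispatch_queue_create("com.example.scanner.metadata", DISPATCH_QUEUE_SERIAL);
[self.metadataOutput setMetadataObjectsDelegate:self queue:metadataQueue];
```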

On the same object, you can set the types of metadata you want to read. Here I included all the supported types; however, you should list only the types you are interested in, which will improve the recognition performance:
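A sketch of this step; note that availableMetadataObjectTypes is only meaningful after the output has been added to the session:

```objc
// All types this output can recognize. In production, list only the
// symbologies you need, e.g. @[AVMetadataObjectTypeQRCode].
self.metadataOutput.metadataObjectTypes =
    self.metadataOutput.availableMetadataObjectTypes;
```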

Finally, I commit the configuration:
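This closes the configuration block opened earlier:

```objc
[self.session commitConfiguration];
```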

When the capture session receives the startRunning message, the output’s delegate callbacks are dispatched to the queue set above, and captureOutput:didOutputMetadataObjects:fromConnection: of the AVCaptureMetadataOutputObjectsDelegate protocol fires every time a machine-readable code is recognized. Let’s implement this method.
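A minimal implementation matching the behavior described below (printing the decoded string to the console):

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataObject *metadata in metadataObjects) {
        if ([metadata isKindOfClass:[AVMetadataMachineReadableCodeObject class]]) {
            AVMetadataMachineReadableCodeObject *code =
                (AVMetadataMachineReadableCodeObject *)metadata;
            NSLog(@"%@", code.stringValue);  // the decoded payload
        }
    }
}
```

Remember that this callback runs on the metadata dispatch queue, so any UI work (such as drawing a highlight box) must be dispatched back to the main queue.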

AVMetadataMachineReadableCodeObject is a subclass of AVMetadataObject. It comes with two properties: corners and stringValue.

The first property is the list of points representing the corners of the machine-readable code area. The second property contains the string decoded from the recognized code.

This example simply prints the extracted string to the console, but it can yield more interesting results, such as drawing a box in real time to highlight the found machine code, as shown in the picture below:

I hope you enjoyed this feature.

Geppy

iOS Consulting | INVASIVECODE

iOS Training | INVASIVECODE
