Machine readable code reader in iOS
Apple’s introduction of PassKit and Passbook in 2012 provided developers with an expansive new range of applications. This interesting technology, however, was incomplete: there was no built-in way to read a barcode, so developers had to rely on costly third-party solutions to implement a barcode scanner. Now, in 2013, Apple has updated this technology, giving developers the ability to generate machine-readable barcodes and scan them with an iOS device. In this post we will closely examine these updated APIs.
Additionally, iNVASIVECODE offers in-depth explanations and hands-on learning examples of the latest APIs through specialized consulting sessions and comprehensive iOS training classes.
The AVFoundation framework gains improved functionality in iOS 7: an iPhone or iPad camera is now all you need to read a barcode. In our previous post we demonstrated how to use AVFoundation to build a custom camera. Let’s review the primary steps of that demonstration while also checking out the new AVFoundation features in iOS 7.
Let’s create a new iOS 7 iPhone Xcode project using the single-view application template and name it Scanner. Once done, open ViewController.h and add the following modules:
@import AVFoundation;
@import QuartzCore;
Modules are a new Objective-C feature. They allow you to replace #import directives and spare you from adding the framework to your project manually: Xcode will link the right framework for you.
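For comparison, here is the classic framework import (which also requires adding the framework to the target) next to the equivalent module import; module support is typically enabled by default in new Xcode 5 projects:

// Classic import: requires linking AVFoundation.framework in the target's build phases
#import <AVFoundation/AVFoundation.h>

// Module import: Xcode links the framework automatically
@import AVFoundation;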
Let’s go back to the project. In ViewController.m, add the following properties to the class extension:
@interface ViewController () <AVCaptureMetadataOutputObjectsDelegate>
@property (nonatomic) AVCaptureSession *captureSession;
@property (nonatomic) AVCaptureDeviceInput *captureDeviceInput;
@property (nonatomic) AVCaptureVideoPreviewLayer *capturePreviewLayer;
@property (nonatomic) AVCaptureVideoDataOutput *captureVideoDataOutput;
@end
We need these properties to build the AVFoundation stack. Notice that I also added the AVCaptureMetadataOutputObjectsDelegate protocol. The single method in this protocol, captureOutput:didOutputMetadataObjects:fromConnection:, allows the delegate to respond when a capture metadata output object receives relevant metadata objects through its connection.
Now, change the viewDidLoad method in this way:
- (void)viewDidLoad {
    [super viewDidLoad];
    [self setupCameraSession]; // 1
    [self.captureSession startRunning]; // 2
}
The custom method setupCameraSession (line 1) is used to set up the AVFoundation capture stack. Line 2 starts the capture session.
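Not shown in the original example, but worth keeping in mind: when the view goes off screen you will likely want to stop the session as well. A minimal sketch could be:

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    // Stops the capture session while the view is not on screen
    [self.captureSession stopRunning];
}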
Let’s check the setupCameraSession method that I use to create and configure the session, its input, and its output. Let’s look at this method step by step.
- (void)setupCameraSession {
    // Creates a capture session
    if (self.captureSession == nil) {
        self.captureSession = [[AVCaptureSession alloc] init];
    }
Here, I just check whether an AVCaptureSession already exists. If not, I create one and assign it to the prepared property. Then, I start the session configuration:
    // Begins the capture session configuration
    [self.captureSession beginConfiguration];
Then, I create a capture device using the rear camera of the device:
    // Selects the rear camera
    AVCaptureDevice *captureDevice = [self captureDeviceWithPosition:AVCaptureDevicePositionBack];
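Note that captureDeviceWithPosition: is not an AVFoundation method but a small custom helper. One possible implementation, iterating over the available video capture devices, could look like this:

// Returns the capture device at the given position (front or back camera), or nil if none is found
- (AVCaptureDevice *)captureDeviceWithPosition:(AVCaptureDevicePosition)position {
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}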
Now, I need to lock the configuration and adjust the camera autofocus. New in iOS 7 is the possibility to restrict the focus to a given range. You have three options: AVCaptureeAutoFocusRangeRestrictionNear, AVCaptureAutoFocusRangeRestrictionFar, and AVCaptureAutoFocusRangeRestrictionNone. I am going to choose the near restriction, since a barcode is usually held close to the camera; this setting simply helps the camera focus more quickly. Once the configuration is changed, I can unlock it.
    // Locks the configuration
    BOOL success = [captureDevice lockForConfiguration:nil];
    if (success) {
        if ([captureDevice isAutoFocusRangeRestrictionSupported]) {
            // Restricts the autofocus to near range (new in iOS 7)
            [captureDevice setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNear];
        }
    }
    // Unlocks the configuration
    [captureDevice unlockForConfiguration];
After that, I create a device input for the previously created capture device and add it to the capture session:
    NSError *error;
    // Adds the device input to the capture session
    self.captureDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
    if ([self.captureSession canAddInput:self.captureDeviceInput]) {
        [self.captureSession addInput:self.captureDeviceInput];
    }
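As an extra precaution, not present in the original code, you could check that the input was actually created (deviceInputWithDevice:error: returns nil on failure, for example when camera access is denied) right after the call above:

    if (self.captureDeviceInput == nil) {
        // The device input could not be created; inspect the error to see why
        NSLog(@"Error creating the device input: %@", [error localizedDescription]);
    }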
To show real-time results on the iPhone screen, I now create an AVCaptureVideoPreviewLayer using the capture session and add it to the layer of the view controller’s main view:
    // Prepares the preview layer
    self.capturePreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
    CGRect frame = [[UIScreen mainScreen] bounds];
    [self.capturePreviewLayer setFrame:frame];
    [self.capturePreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    // Adds the preview layer to the main view layer
    [self.view.layer insertSublayer:self.capturePreviewLayer atIndex:0];
Here is the new part: I add an AVCaptureMetadataOutput object to the capture session.
    // Creates and adds the metadata output to the capture session
    AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    if ([self.captureSession canAddOutput:metadataOutput]) {
        [self.captureSession addOutput:metadataOutput];
    }
Next, I create a dispatch queue and assign it to the just-created AVCaptureMetadataOutput object:
    // Creates a GCD queue to dispatch the metadata
    dispatch_queue_t metadataQueue = dispatch_queue_create("com.invasivecode.metadataqueue", DISPATCH_QUEUE_SERIAL);
    [metadataOutput setMetadataObjectsDelegate:self queue:metadataQueue];
On the same object, you can set the types of metadata you want to read. I included all the supported barcode types here; however, you should use only the types you are interested in, because this improves recognition performance:
    // Sets the metadata object types. Essentially, here you can choose the barcode type.
    NSArray *metadataTypes = @[ AVMetadataObjectTypeUPCECode, AVMetadataObjectTypeCode39Code,
                                AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code,
                                AVMetadataObjectTypeCode93Code, AVMetadataObjectTypeCode128Code,
                                AVMetadataObjectTypeCode39Mod43Code ];
    [metadataOutput setMetadataObjectTypes:metadataTypes];
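Keep in mind that asking for a type the output cannot deliver raises an exception. Since the output has already been added to the session at this point, an extra safeguard, not in the original code, is to intersect the requested types with availableMetadataObjectTypes:

    // Keeps only the requested types that this output can actually deliver
    NSMutableArray *supportedTypes = [NSMutableArray array];
    for (NSString *type in metadataTypes) {
        if ([[metadataOutput availableMetadataObjectTypes] containsObject:type]) {
            [supportedTypes addObject:type];
        }
    }
    [metadataOutput setMetadataObjectTypes:supportedTypes];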
Finally, I commit the configuration:
    // Commits the camera configuration
    [self.captureSession commitConfiguration];
}
When the capture session receives the startRunning message, the output starts delivering metadata objects to the above dispatch queue, and the captureOutput:didOutputMetadataObjects:fromConnection: method of the AVCaptureMetadataOutputObjectsDelegate protocol will be fired every time a machine-readable code is recognized. Let’s implement this method.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    if ([metadataObjects count] < 1) {
        return;
    }
    for (AVMetadataObject *item in metadataObjects) {
        if ([item isKindOfClass:[AVMetadataMachineReadableCodeObject class]]) {
            AVMetadataMachineReadableCodeObject *codeObject = (AVMetadataMachineReadableCodeObject *)item;
            NSLog(@"%@", [codeObject stringValue]);
            dispatch_async(dispatch_get_main_queue(), ^{
                NSArray *corners = [codeObject corners];
                CGMutablePathRef path = [self createBoxWithCorners:corners];
                [self.boxLayer setPath:path];
            });
        }
    }
}
The AVMetadataMachineReadableCodeObject is a subclass of AVMetadataObject. It comes with two properties:
@property (readonly) NSArray *corners;
@property (readonly) NSString *stringValue;
The first property is an array of points (as dictionary representations of CGPoint values) describing the corners of the machine-readable code area. The second property contains the string decoded from the recognized code.
This example, which simply prints the decoded string to the console, can yield more interesting results, such as drawing, in real time, a box that highlights the recognized machine-readable code, as shown in the picture below.
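Neither boxLayer nor createBoxWithCorners: is defined in the snippets above, so here is one possible sketch of these missing pieces, assuming boxLayer is a CAShapeLayer property added to the class extension: a setup method that places the highlight layer on top of the preview layer, and a helper that turns the corners dictionaries into a CGPath. Keep in mind that the corner points may need to be converted to preview-layer coordinates first (for example with AVCaptureVideoPreviewLayer’s transformedMetadataObjectForMetadataObject:) before they line up with what is on screen.

// Assumed property in the class extension:
// @property (nonatomic) CAShapeLayer *boxLayer;

// Called once, for example at the end of setupCameraSession, to prepare the highlight layer
- (void)setupBoxLayer {
    self.boxLayer = [CAShapeLayer layer];
    self.boxLayer.strokeColor = [UIColor greenColor].CGColor;
    self.boxLayer.fillColor = [UIColor clearColor].CGColor;
    self.boxLayer.lineWidth = 2.0;
    [self.view.layer addSublayer:self.boxLayer];
}

// Builds a closed path connecting the corner points of the recognized code.
// Each element of the array is a dictionary representation of a CGPoint.
- (CGMutablePathRef)createBoxWithCorners:(NSArray *)corners {
    CGMutablePathRef path = CGPathCreateMutable();
    if ([corners count] > 0) {
        CGPoint point = CGPointZero;
        CGPointMakeWithDictionaryRepresentation((__bridge CFDictionaryRef)corners[0], &point);
        CGPathMoveToPoint(path, NULL, point.x, point.y);
        for (NSUInteger i = 1; i < [corners count]; i++) {
            CGPointMakeWithDictionaryRepresentation((__bridge CFDictionaryRef)corners[i], &point);
            CGPathAddLineToPoint(path, NULL, point.x, point.y);
        }
        CGPathCloseSubpath(path);
    }
    // CFAutorelease keeps the delegate code above leak-free, since the caller
    // assigns the path to the layer without releasing it explicitly.
    return (CGMutablePathRef)CFAutorelease(path);
}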
I hope you enjoyed this feature.
Geppy