5. AVFoundation Programming Guide - Still and Video Media Capture

原文地址: https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2

  • 使用采集会话来协调数据流 Use a Capture Session to Coordinate Data Flow
  • 配置采集会话 Configuring a Session
  • 监视采集会话的状态 Monitoring Capture Session State
  • 输入设备的 AVCaptureDevice 对象 An AVCaptureDevice Object Represents an Input Device
  • 设备特征 Device Characteristics
  • 采集设备的设置 Device Capture Settings
  • 对焦模式 Focus Modes
  • 曝光模式 Exposure Modes
  • 闪光灯模式 Flash Modes
  • 电筒模式 Torch Mode
  • 视频稳定功能 Video Stabilization
  • 白平衡 White Balance
  • 设置设备的方向 Setting Device Orientation
  • 配置设备 Configuring a Device
  • 设备间切换 Switching Between Devices
  • 使用采集输入将采集设备添加到会话中 Use Capture Inputs to Add a Capture Device to a Session
  • 输出到影片文件 Saving to a Movie File
  • 开始录制 Starting a Recording
  • 确定文件被正确的写入了 Ensuring That the File Was Written Successfully
  • 往文件中添加元数据 Adding Metadata to a File
  • 对视频帧进行处理 Processing Frames of Video
  • 视频处理中对于性能的考虑 Performance Considerations for Processing Video
  • 拍照 Capturing Still Images
  • 像素格式和编码格式 Pixel and Encoding Formats
  • 获取图片 Capturing an Image
  • 为用户显示录制的内容 Showing the User What’s Being Recorded
  • 视频预览 Video Preview
  • 视频拉伸模式 Video Gravity Modes
  • 在预览图层中使用触摸对焦 Using “Tap to Focus” with a Preview
  • 显示音频电平 Showing Audio Levels
  • 整合在一起:将采集的视频帧转换为图片 Putting It All Together: Capturing Video Frames as UIImage Objects
  • 创建和配置采集会话 Create and Configure a Capture Session
  • 创建和配置设备输入 Create and Configure the Device and Device Input
  • 创建和配置视频输出 Create and Configure the Video Data Output
  • 实现采样缓冲委托方法 Implement the Sample Buffer Delegate Method
  • 开始和结束录制 Starting and Stopping Recording
  • 高速视频拍摄 High Frame Rate Video Capture
  • 播放 Playback
  • 编辑 Editing
  • 输出 Export
  • 录制 Recording
  • 照片和视频的采集 Still and Video Media Capture

    To manage the capture from a device such as a camera or microphone, you assemble objects to represent inputs and outputs, and use an instance of AVCaptureSession to coordinate the data flow between them. Minimally you need:

  • An instance of AVCaptureDevice to represent the input device, such as a camera or microphone
  • An instance of a concrete subclass of AVCaptureInput to configure the ports from the input device
  • An instance of a concrete subclass of AVCaptureOutput to manage the output to a movie file or still image
  • An instance of AVCaptureSession to coordinate the data flow from the input to the output
  • To show the user a preview of what the camera is recording, you can use an instance of AVCaptureVideoPreviewLayer (a subclass of CALayer ).

    You can configure multiple inputs and outputs, coordinated by a single session, as shown in Figure 4-1

    要管理来自摄像头或麦克风等设备的采集,你需要组合表示输入和输出的对象,并使用一个 AVCaptureSession 对象来协调两者之间的数据流。至少你需要:

  • 创建一个 AVCaptureDevice 实例来表示输入设备,比如摄像头或者麦克风;
  • 创建一个 AVCaptureInput 具体子类的实例,用来配置输入设备的端口 ( ports );
  • 创建一个 AVCaptureOutput 具体子类的实例,用来管理影片文件或静态图片的输出;
  • 创建一个 AVCaptureSession 对象来协调输入和输出之间的数据流;
  • 如果要向用户展示摄像头正在录制的预览画面,可以使用 AVCaptureVideoPreviewLayer (他是 CALayer 的子类)。

    你还可以配置多个输入和输出,由同一个采集会话来协调管理,如图 4-1 所示。

    For many applications, this is as much detail as you need. For some operations, however, (if you want to monitor the power levels in an audio channel, for example) you need to consider how the various ports of an input device are represented and how those ports are connected to the output.

    对于大多数程序而言,以上就已经足够了。但是对于有些操作,比如你想监视音频的电平 ( power levels in an audio channel ),你就需要考虑输入设备的不同端口是如何被表示 ( represented ) 的,以及这些端口是如何连接到输出的。

    A connection between a capture input and a capture output in a capture session is represented by an AVCaptureConnection object. Capture inputs (instances of AVCaptureInput ) have one or more input ports (instances of AVCaptureInputPort ). Capture outputs (instances of AVCaptureOutput ) can accept data from one or more sources (for example, an AVCaptureMovieFileOutput object accepts both video and audio data).

    在采集会话中,采集输入和采集输出之间的连接是通过 AVCaptureConnection 对象来表示的 ( represented )。采集输入 capture inputs ( AVCaptureInput 对象)可以有一个或多个输入端口 input ports ( AVCaptureInputPort 对象)。采集输出 capture outputs ( AVCaptureOutput 对象)可以从一个或者多个来源获取数据(比如 AVCaptureMovieFileOutput 对象可以同时接收视频和音频数据)。

    When you add an input or an output to a session, the session forms connections between all the compatible capture inputs’ ports and capture outputs, as shown in Figure 4-2. A connection between a capture input and a capture output is represented by an AVCaptureConnection object.

    当你往采集会话 ( capture session ) 中添加输入和输出的时候,会话会在所有兼容的采集输入端口和采集输出之间建立连接(如图 4-2)。采集输入和采集输出之间的连接是通过 AVCaptureConnection 对象来表示的。

    You can use a capture connection to enable or disable the flow of data from a given input or to a given output. You can also use a connection to monitor the average and peak power levels in an audio channel.

    你可以通过连接来允许或者禁止一个输入或者输出的数据流。你还可以通过连接来监视音频中的平均电平和峰值电平( average and peak power levels )。
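    下面是一个示意性的片段,演示如何通过音频连接读取各声道的平均电平和峰值电平。其中 audioConnection 是假设你已经从会话中取得的音频 AVCaptureConnection:

```objc
// audioConnection 假设为采集会话中的一个音频连接
AVCaptureConnection *audioConnection = <#An audio capture connection#>;
for (AVCaptureAudioChannel *channel in audioConnection.audioChannels) {
    // 两个电平值的单位都是分贝 (dB)
    float average = channel.averagePowerLevel;
    float peak = channel.peakHoldLevel;
    NSLog(@"average: %f dB, peak: %f dB", average, peak);
}
```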

    Note: Media capture does not support simultaneous capture of both the front-facing and back-facing cameras on iOS devices.

    提示:在 iOS 设备上,媒体采集不支持前后摄像头同时进行采集。

    使用采集会话来协调数据流 Use a Capture Session to Coordinate Data Flow

    An AVCaptureSession object is the central coordinating object you use to manage data capture. You use an instance to coordinate the flow of data from AV input devices to outputs. You add the capture devices and outputs you want to the session, then start data flow by sending the session a startRunning message, and stop the data flow by sending a stopRunning message.

    AVCaptureSession 对象在数据采集过程中承担着中心协调的作用。你通过这个对象来协调输入设备和输出之间的数据流。把需要的采集设备和输出添加到会话之后,你可以通过给会话发送 startRunning 消息来启动数据流,发送 stopRunning 消息来停止数据流。

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    // Add inputs and outputs.
    [session startRunning];
    

    配置采集会话 Configuring a Session

    You use a preset on the session to specify the image quality and resolution you want. A preset is a constant that identifies one of a number of possible configurations; in some cases the actual configuration is device-specific:

    通过会话上的预设配置 (preset) 来指定你想要的画质和分辨率。预设是一个常量,对应多种可能配置中的一种;有些时候实际的配置是和具体设备相关的:

    以下每列依次为 Symbol (值), Resolution (分辨率), Comments (说明):

  • AVCaptureSessionPresetHigh, High, Highest recording quality. This varies per device. 录像时的最高画质,不同的设备是不一样的
  • AVCaptureSessionPresetMedium, Medium, Suitable for Wi-Fi sharing. The actual values may change. 适用于 Wi-Fi 分享,实际值可能会变
  • AVCaptureSessionPresetLow, Low, Suitable for 3G sharing. The actual values may change. 适用于 3G 分享,实际值也可能会变
  • AVCaptureSessionPreset640x480, 640 x 480, VGA
  • AVCaptureSessionPreset1280x720, 1280 x 720, 720p HD
  • AVCaptureSessionPresetPhoto, Photo, 只适用于照片,不能输出视频

    If you want to set a media frame size-specific configuration, you should check whether it is supported before setting it, as follows:

    如果你想要设置一个特定尺寸的配置,应该先检查设备是否支持这个选项,类似下面这样:

    if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        session.sessionPreset = AVCaptureSessionPreset1280x720;
    }
    else {
        // Handle the failure.
    }

    If you need to adjust session parameters at a more granular level than is possible with a preset, or you’d like to make changes to a running session, you surround your changes with the beginConfiguration and commitConfiguration methods. The beginConfiguration and commitConfiguration methods ensure that device changes occur as a group, minimizing visibility or inconsistency of state. After calling beginConfiguration, you can add or remove outputs, alter the sessionPreset property, or configure individual capture input or output properties. No changes are actually made until you invoke commitConfiguration, at which time they are applied together.

    如果你需要对采集会话参数 (session parameters) 做比预设更细致的设置,或者想对正在运行的会话进行修改,你需要用 beginConfiguration 和 commitConfiguration 方法来包裹设置的代码。这两个方法可以确保对设备的修改作为一组一起生效,最大程度地减少状态的可见性问题或不一致。调用 beginConfiguration 后,你可以添加删除输出,修改 sessionPreset 属性,或者单独配置采集输入输出的属性。只有在调用 commitConfiguration 之后,所有修改才会一起真正生效。

    [session beginConfiguration];
    // Remove an existing capture device.
    // Add a new capture device.
    // Reset the preset.
    [session commitConfiguration];
    

    监视采集会话的状态 Monitoring Capture Session State

    A capture session posts notifications that you can observe to be notified, for example, when it starts or stops running, or when it is interrupted. You can register to receive an AVCaptureSessionRuntimeErrorNotification if a runtime error occurs. You can also interrogate the session’s running property to find out if it is running, and its interrupted property to find out if it is interrupted. Additionally, both the running and interrupted properties are key-value observing compliant and the notifications are posted on the main thread.

    采集会话 AVCaptureSession 会在开始运行,停止运行,或者被打断的时候发出通知。如果发生运行时错误,你可以通过注册 AVCaptureSessionRuntimeErrorNotification 来获得通知。你还可以检查会话的 running 属性来确定他是不是正在运行,通过 interrupted 属性来确定他是不是被打断了。此外,running 和 interrupted 这两个属性都支持 key-value observing,相关通知都是在主线程上发出的。
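    作为示意,下面的片段演示了如何注册运行时错误通知,并用 key-value observing 观察 running 属性(这里假设 self 是你自己的控制器对象,并且已经实现了相应的回调方法):

```objc
// 注册运行时错误通知;错误对象保存在 userInfo 的 AVCaptureSessionErrorKey 中
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(sessionRuntimeError:)
                                             name:AVCaptureSessionRuntimeErrorNotification
                                           object:session];

// 观察 running 属性,在会话启动 / 停止时收到 KVO 回调
[session addObserver:self
          forKeyPath:@"running"
             options:NSKeyValueObservingOptionNew
             context:NULL];
```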

    输入设备的 AVCaptureDevice 对象 An AVCaptureDevice Object Represents an Input Device

    An AVCaptureDevice object abstracts a physical capture device that provides input data (such as audio or video) to an AVCaptureSession object. There is one object for each input device, for example, two video inputs—one for the front-facing camera, one for the back-facing camera—and one audio input for the microphone.

    AVCaptureDevice 是对提供输入数据(比如音频或视频)的物理采集设备的抽象。每个实际的输入设备都有一个对应的对象,比如前后摄像头是两个视频输入,麦克风是一个音频输入。

    You can find out which capture devices are currently available using the AVCaptureDevice class methods devices and devicesWithMediaType:. And, if necessary, you can find out what features an iPhone, iPad, or iPod offers (see Device Capture Settings). The list of available devices may change, though. Current input devices may become unavailable (if they’re used by another application), and new input devices may become available, (if they’re relinquished by another application). You should register to receive AVCaptureDeviceWasConnectedNotification and AVCaptureDeviceWasDisconnectedNotification notifications to be alerted when the list of available devices changes.

    你可以通过 AVCaptureDevice 的类方法 devices 和 devicesWithMediaType: 来确定当前有哪些设备是可用的。此外,如果有必要的话你还可以查明 iPhone, iPad 和 iPod 各自提供哪些特性(参考 Device Capture Settings)。但是实际可用的设备列表可能是会变化的。当前的输入设备可能会不可用(如果被另一个应用占用),也有可能会有新的设备可用(因为另一个应用不再使用他)。你可以注册 AVCaptureDeviceWasConnectedNotification 和 AVCaptureDeviceWasDisconnectedNotification 来接收通知消息,以便在可用设备列表发生变化的时候收到提醒。
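    注册这两个通知的写法可以示意如下(假设 self 已经实现了对应的回调方法;通知的 object 就是对应的 AVCaptureDevice):

```objc
NSNotificationCenter *center = [NSNotificationCenter defaultCenter];
[center addObserver:self
           selector:@selector(deviceWasConnected:)
               name:AVCaptureDeviceWasConnectedNotification
             object:nil];
[center addObserver:self
           selector:@selector(deviceWasDisconnected:)
               name:AVCaptureDeviceWasDisconnectedNotification
             object:nil];
```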

    You add an input device to a capture session using a capture input (see Use Capture Inputs to Add a Capture Device to a Session).

    你要通过使用采集输入 (capture input) 将输入设备添加到会话中 (参考 Use Capture Inputs to Add a Capture Device to a Session)。

    设备特征 Device Characteristics

    You can ask a device about its different characteristics. You can also test whether it provides a particular media type or supports a given capture session preset using hasMediaType: and supportsAVCaptureSessionPreset: respectively. To provide information to the user, you can find out the position of the capture device (whether it is on the front or the back of the unit being tested), and its localized name. This may be useful if you want to present a list of capture devices to allow the user to choose one.

    Figure 4-3 shows the positions of the back-facing (AVCaptureDevicePositionBack) and front-facing (AVCaptureDevicePositionFront) cameras.

    你可以获取设备的不同特征,也可以测试他是否提供特定的媒体类型或者是否支持采集会话 (capture session) 中一些预设参数选项,可以分别通过调用 hasMediaType:supportsAVCaptureSessionPreset: 来实现。你可以获取到设备的位置(比如是前置还是后置的摄像头)以及设备名字来提供给用户。当你需要为用户提供一个设备列表供选择的时候就会非常有用。

    图 4-3 显示了后置摄像头 (back-facing camera, AVCaptureDevicePositionBack) 和前置摄像头 (front-facing camera, AVCaptureDevicePositionFront) 的位置。

    Note: Media capture does not support simultaneous capture of both the front-facing and back-facing cameras on iOS devices.

    提示:在 iOS 设备上,媒体采集不支持前后摄像头同时进行采集。

    The following code example iterates over all the available devices and logs their name—and for video devices, their position—on the unit.

    下面的代码遍历了所有可用的设备并输出他们的名字;对于视频设备,还会输出他在设备上的位置:

    NSArray *devices = [AVCaptureDevice devices];
    for (AVCaptureDevice *device in devices) {
        NSLog(@"Device name: %@", [device localizedName]);
        if ([device hasMediaType:AVMediaTypeVideo]) {
            if ([device position] == AVCaptureDevicePositionBack) {
                NSLog(@"Device position : back");
            }
            else {
                NSLog(@"Device position : front");
            }
        }
    }

    In addition, you can find out the device’s model ID and its unique ID.

    除此之外,你还可以获取设备的 model ID 和 unique ID。

    采集设备的设置 Device Capture Settings

    Different devices have different capabilities; for example, some may support different focus or flash modes; some may support focus on a point of interest.

    The following code fragment shows how you can find video input devices that have a torch mode and support a given capture session preset:

    不同的设备有不同的功能,比如有的支持不同的对焦模式和闪光灯模式,有的支持按兴趣点对焦 (focus on a point of interest);

    下面的代码片段展示了如何找到支持手电筒模式 (torch mode) 并支持某种采集会话预设 (capture session preset) 的视频输入设备:

    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    NSMutableArray *torchDevices = [[NSMutableArray alloc] init];
    for (AVCaptureDevice *device in devices) {
        if ([device hasTorch] &&
            [device supportsAVCaptureSessionPreset:AVCaptureSessionPreset640x480]) {
            [torchDevices addObject:device];
        }
    }

    If you find multiple devices that meet your criteria, you might let the user choose which one they want to use. To display a description of a device to the user, you can use its localizedName property.

    如果有多个设备满足要求,你就得让用户选择一个。你可以将设备的 localizedName 属性显示给用户。

    You use the various different features in similar ways. There are constants to specify a particular mode, and you can ask a device whether it supports a particular mode. In several cases, you can observe a property to be notified when a feature is changing. In all cases, you should lock the device before changing the mode of a particular feature, as described in Configuring a Device.

    各种不同的特性都以类似的方式来使用。每种模式都有对应的常量,你也可以查询设备是否支持某个特定的模式。在很多情况下,你可以通过观察属性在特性发生变化时收到通知。在任何情况下,修改某个特性的模式之前,你必须先锁定设备,参考 Configuring a Device。

    Note: Focus point of interest and exposure point of interest are mutually exclusive, as are focus mode and exposure mode.

    提示:按兴趣点对焦和按兴趣点曝光是互斥的;同样,对焦模式和曝光模式也是互斥的。

    对焦模式 Focus Modes

    There are three focus modes:

  • AVCaptureFocusModeLocked: The focal position is fixed. This is useful when you want to allow the user to compose a scene then lock the focus.
  • AVCaptureFocusModeAutoFocus: The camera does a single scan focus then reverts to locked. This is suitable for a situation where you want to select a particular item on which to focus and then maintain focus on that item even if it is not the center of the scene.
  • AVCaptureFocusModeContinuousAutoFocus: The camera continuously autofocuses as needed.

    有三种对焦模式:

  • AVCaptureFocusModeLocked: 锁定对焦位置。适用于让用户先构图,然后锁定对焦的场景;
  • AVCaptureFocusModeAutoFocus: 相机进行一次扫描对焦,然后回到锁定状态。适用于让用户选择一个特定物体进行对焦,之后保持对这个物体的对焦(即使他不在画面的中间);
  • AVCaptureFocusModeContinuousAutoFocus: 相机根据需要持续进行自动对焦。

    You use the isFocusModeSupported: method to determine whether a device supports a given focus mode, then set the mode using the focusMode property.

    在使用 focusMode 属性设置对焦模式前,你要通过 isFocusModeSupported: 来检查设备是否支持这种对焦模式。

    In addition, a device may support a focus point of interest. You test for support using focusPointOfInterestSupported. If it’s supported, you set the focal point using focusPointOfInterest. You pass a CGPoint where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode.

    此外,设备可能还支持按兴趣点对焦 (focus point of interest)。你可以通过 focusPointOfInterestSupported 来确认。如果支持的话,你可以用 focusPointOfInterest 设置对焦点,传入一个 CGPoint:{0,0} 代表画面的左上角,{1,1} 代表横屏 (home 键在右边) 时画面的右下角;即使设备处于竖屏模式,这个坐标系也不变。

    You can use the adjustingFocus property to determine whether a device is currently focusing. You can observe the property using key-value observing to be notified when a device starts and stops focusing.

    If you change the focus mode settings, you can return them to the default configuration as follows:

    你可以通过 adjustingFocus 属性来确定设备是不是正在对焦。还可以通过 key-value observing 观察这个属性,在设备开始和结束对焦的时候收到通知。

    在你修改对焦模式之后,你可以用下面的方法来回到默认的对焦方式

    if ([currentDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f);
        [currentDevice setFocusPointOfInterest:autofocusPoint];
        [currentDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
    }
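    观察 adjustingFocus 属性的 key-value observing 写法可以示意如下(假设由 self 作为观察者):

```objc
// 注册观察者,在对焦开始 / 结束时收到回调
[currentDevice addObserver:self
                forKeyPath:@"adjustingFocus"
                   options:NSKeyValueObservingOptionNew
                   context:NULL];

// 在观察者中实现标准的 KVO 回调方法
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context {
    if ([keyPath isEqualToString:@"adjustingFocus"]) {
        BOOL isAdjusting = [change[NSKeyValueChangeNewKey] boolValue];
        NSLog(@"adjustingFocus: %@", isAdjusting ? @"YES" : @"NO");
    }
}
```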

    曝光模式 Exposure Modes

    There are two exposure modes:

  • AVCaptureExposureModeContinuousAutoExposure: The device automatically adjusts the exposure level as needed.
  • AVCaptureExposureModeLocked: The exposure level is fixed at its current level.

    有两种曝光模式:

  • AVCaptureExposureModeContinuousAutoExposure: 设备根据需要自动调节曝光;
  • AVCaptureExposureModeLocked: 曝光锁定在当前的水平。

    You use the isExposureModeSupported: method to determine whether a device supports a given exposure mode, then set the mode using the exposureMode property.

    在对 exposureMode 进行设置前,你要通过 isExposureModeSupported: 测试一下设备是不是支持这种曝光模式。

    In addition, a device may support an exposure point of interest. You test for support using exposurePointOfInterestSupported. If it’s supported, you set the exposure point using exposurePointOfInterest. You pass a CGPoint where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode.

    此外,有些设备还可能支持按兴趣点曝光 (exposure point of interest)。可以通过 exposurePointOfInterestSupported 来测试是否支持。如果支持的话,你可以通过 exposurePointOfInterest 设置曝光点,传入一个 CGPoint:{0,0} 代表画面的左上角,{1,1} 代表横屏 (home 键在右边) 时画面的右下角;即使设备处于竖屏模式,这个坐标系也不变。

    You can use the adjustingExposure property to determine whether a device is currently changing its exposure setting. You can observe the property using key-value observing to be notified when a device starts and stops changing its exposure setting.

    If you change the exposure settings, you can return them to the default configuration as follows:

    你可以通过 adjustingExposure 属性来检查设备是否正在调整曝光设置。你还可以通过 key-value observing 观察这个属性,在设备开始和结束调整曝光设置的时候收到通知。

    如果你修改了曝光设置,可以通过下面的代码来回到默认的设置

    if ([currentDevice isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
        CGPoint exposurePoint = CGPointMake(0.5f, 0.5f);
        [currentDevice setExposurePointOfInterest:exposurePoint];
        [currentDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
    }

    闪光灯模式 Flash Modes

    There are three flash modes:

  • AVCaptureFlashModeOff: The flash will never fire.
  • AVCaptureFlashModeOn: The flash will always fire.
  • AVCaptureFlashModeAuto: The flash will fire dependent on the ambient light conditions.

    有三种闪光灯模式:

  • AVCaptureFlashModeOff: 关闭闪光灯;
  • AVCaptureFlashModeOn: 打开闪光灯;
  • AVCaptureFlashModeAuto: 根据环境光线自动打开闪光灯。

    You use hasFlash to determine whether a device has a flash. If that method returns YES, you then use the isFlashModeSupported: method, passing the desired mode to determine whether a device supports a given flash mode, then set the mode using the flashMode property.

    你通过 hasFlash 来确定设备是否有闪光灯。如果返回 YES,你还要用 isFlashModeSupported: 传入想要的模式来确定设备是否支持,然后才可以设置 flashMode 属性。
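    结合后文 Configuring a Device 中的加锁要求,设置闪光灯模式的完整流程可以示意如下:

```objc
if ([device hasFlash] && [device isFlashModeSupported:AVCaptureFlashModeAuto]) {
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        device.flashMode = AVCaptureFlashModeAuto;
        [device unlockForConfiguration];
    }
    else {
        // Handle the failure.
    }
}
```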

    电筒模式 Torch Mode

    In torch mode, the flash is continuously enabled at a low power to illuminate a video capture. There are three torch modes:

  • AVCaptureTorchModeOff: The torch is always off.
  • AVCaptureTorchModeOn: The torch is always on.
  • AVCaptureTorchModeAuto: The torch is automatically switched on and off as needed.

    在电筒模式下,闪光灯会持续以低功耗开启,用来照亮视频拍摄。有三种电筒模式:

  • AVCaptureTorchModeOff: 关闭电筒;
  • AVCaptureTorchModeOn: 打开电筒;
  • AVCaptureTorchModeAuto: 根据需要自动打开和关闭电筒。

    You use hasTorch to determine whether a device has a torch. You use the isTorchModeSupported: method to determine whether a device supports a given torch mode, then set the mode using the torchMode property.

    你可以通过 hasTorch 来确定设备是否有电筒。在使用 torchMode 属性前,需要用 isTorchModeSupported: 来确定他是否支持某一个模式。

    For devices with a torch, the torch only turns on if the device is associated with a running capture session.

    对于有电筒的设备,只有当设备关联的采集会话正在运行时,电筒才会打开。
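    打开电筒的流程可以示意如下(假设相关的采集会话已经在运行):

```objc
if ([device hasTorch] && [device isTorchModeSupported:AVCaptureTorchModeOn]) {
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        device.torchMode = AVCaptureTorchModeOn;
        [device unlockForConfiguration];
    }
    else {
        // Handle the failure.
    }
}
```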

    视频稳定功能 Video Stabilization

    Cinematic video stabilization is available for connections that operate on video, depending on the specific device hardware. Even so, not all source formats and video resolutions are supported.

    视频稳定功能只对视频连接有效,并且依赖特定的设备硬件。即便如此,也不是所有的源格式和分辨率都支持这个功能。

    Enabling cinematic video stabilization may also introduce additional latency into the video capture pipeline. To detect when video stabilization is in use, use the videoStabilizationEnabled property. The enablesVideoStabilizationWhenAvailable property allows an application to automatically enable video stabilization if it is supported by the camera. By default automatic stabilization is disabled due to the above limitations.

    启用视频稳定功能可能会给视频采集管线引入额外的延迟。你可以通过 videoStabilizationEnabled 属性来检查稳定功能是否正在使用。还可以通过 enablesVideoStabilizationWhenAvailable 属性,让应用程序在摄像头支持的时候自动启用稳定功能。由于以上的限制,这个自动启用功能默认是关闭的。
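    一个示意性的片段如下(假设 connection 是输出上的视频连接;注意在较新的 SDK 中这组属性已被 preferredVideoStabilizationMode 等替代):

```objc
AVCaptureConnection *connection = <#A video capture connection#>;
if ([connection isVideoStabilizationSupported]) {
    // 在摄像头支持时自动启用视频稳定功能
    connection.enablesVideoStabilizationWhenAvailable = YES;
}
// 运行期间可以随时查询稳定功能是否真正生效
BOOL stabilizationActive = connection.videoStabilizationEnabled;
```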

    白平衡 White Balance

    There are two white balance modes:

  • AVCaptureWhiteBalanceModeLocked: The white balance mode is fixed.
  • AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance: The camera continuously adjusts the white balance as needed.

    有两种白平衡模式:

  • AVCaptureWhiteBalanceModeLocked: 白平衡被锁定;
  • AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance: 摄像头根据需要持续自动调整白平衡。

    You use the isWhiteBalanceModeSupported: method to determine whether a device supports a given white balance mode, then set the mode using the whiteBalanceMode property.

    在使用 whiteBalanceMode 前,你需要通过 isWhiteBalanceModeSupported: 来检查设备是否支持这种白平衡模式。

    You can use the adjustingWhiteBalance property to determine whether a device is currently changing its white balance setting. You can observe the property using key-value observing to be notified when a device starts and stops changing its white balance setting.

    你可以通过 adjustingWhiteBalance 属性来确定设备是否正在调整白平衡设置。你可以用 key-value observing 观察这个属性,在设备开始和结束调整白平衡的时候收到通知。
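    设置白平衡模式的写法和前面的对焦 / 曝光类似,可以示意如下:

```objc
if ([currentDevice isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance]) {
    NSError *error = nil;
    if ([currentDevice lockForConfiguration:&error]) {
        currentDevice.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
        [currentDevice unlockForConfiguration];
    }
    else {
        // Handle the failure.
    }
}
```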

    设置设备的方向 Setting Device Orientation

    You set the desired orientation on a AVCaptureConnection to specify how you want the images oriented in the AVCaptureOutput (AVCaptureMovieFileOutput, AVCaptureStillImageOutput and AVCaptureVideoDataOutput) for the connection.

    你在 AVCaptureConnection 上设置期望的方向,来指定图像在该连接对应的 AVCaptureOutput (AVCaptureMovieFileOutput, AVCaptureStillImageOutput 和 AVCaptureVideoDataOutput) 中的方向。

    Use the AVCaptureConnection supportsVideoOrientation property to determine whether the device supports changing the orientation of the video, and the videoOrientation property to specify how you want the images oriented in the output port. Listing 4-1 shows how to set the orientation for an AVCaptureConnection to AVCaptureVideoOrientationLandscapeLeft:

    你需要先通过 AVCaptureConnection 的 supportsVideoOrientation 属性来确定设备是否支持改变视频的方向,然后通过 videoOrientation 属性来指定图像在输出端口 (output port) 中的方向。下面的代码示例了如何将一个 AVCaptureConnection 的方向设置为 AVCaptureVideoOrientationLandscapeLeft:

    AVCaptureConnection *captureConnection = <#A capture connection#>;
    if ([captureConnection isVideoOrientationSupported]) {
        AVCaptureVideoOrientation orientation = AVCaptureVideoOrientationLandscapeLeft;
        [captureConnection setVideoOrientation:orientation];
    }

    配置设备 Configuring a Device

    To set capture properties on a device, you must first acquire a lock on the device using lockForConfiguration:. This avoids making changes that may be incompatible with settings in other applications. The following code fragment illustrates how to approach changing the focus mode on a device by first determining whether the mode is supported, then attempting to lock the device for reconfiguration. The focus mode is changed only if the lock is obtained, and the lock is released immediately afterward.

    在设置采集设备的属性前,你必须先用 lockForConfiguration: 对设备加锁。这样可以避免做出与其他应用程序的设置不兼容的修改。下面的代码片段示例了如何先确认设备是否支持某个对焦模式,然后尝试锁定设备并重新配置。只有在获得锁的前提下才能修改对焦模式,并且设置完成后应该立刻释放锁。

    if ([device isFocusModeSupported:AVCaptureFocusModeLocked]) {
        NSError *error = nil;
        if ([device lockForConfiguration:&error]) {
            device.focusMode = AVCaptureFocusModeLocked;
            [device unlockForConfiguration];
        }
        else {
            // Respond to the failure as appropriate.
        }
    }

    You should hold the device lock only if you need the settable device properties to remain unchanged. Holding the device lock unnecessarily may degrade capture quality in other applications sharing the device.

    只有在需要可设置的设备属性保持不变时,才应该持有设备锁。不必要地持有设备锁,可能会降低其他共享这个设备的应用程序的采集质量。

    设备间切换 Switching Between Devices

    Sometimes you may want to allow users to switch between input devices—for example, switching from using the front-facing to the back-facing camera. To avoid pauses or stuttering, you can reconfigure a session while it is running; however, you should use beginConfiguration and commitConfiguration to bracket your configuration changes:

    有时你想让用户可以切换输入设备,比如在前后摄像头之间切换。为了避免暂停和卡顿,你可以在会话 (session) 运行的时候重新配置他,但是你必须用 beginConfiguration 和 commitConfiguration 来包裹你的配置代码:

    AVCaptureSession *session = <#A capture session#>;
    [session beginConfiguration];
    [session removeInput:frontFacingCameraDeviceInput];
    [session addInput:backFacingCameraDeviceInput];
    [session commitConfiguration];
    

    When the outermost commitConfiguration is invoked, all the changes are made together. This ensures a smooth transition.

    当最外层的 commitConfiguration 被调用的时候,所有的配置会一起生效,以此来实现一个平滑的过渡。

    使用采集输入将采集设备添加到会话中 Use Capture Inputs to Add a Capture Device to a Session

    To add a capture device to a capture session, you use an instance of AVCaptureDeviceInput (a concrete subclass of the abstract AVCaptureInput class). The capture device input manages the device’s ports.

    要把一个采集设备添加到采集会话 (capture session) 中,你可以创建一个 AVCaptureDeviceInput 实例(他是抽象类 AVCaptureInput 的具体子类)。这个采集设备输入对象管理着设备的端口。

    NSError *error;
    AVCaptureDeviceInput *input =
            [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
    }

    You add inputs to a session using addInput:. If appropriate, you can check whether a capture input is compatible with an existing session using canAddInput:.

    你使用 addInput: 将输入添加到会话中。在合适的情况下,可以先用 canAddInput: 检查这个采集输入是否与现有会话兼容:

    AVCaptureSession *captureSession = <#Get a capture session#>;
    AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>;
    if ([captureSession canAddInput:captureDeviceInput]) {
        [captureSession addInput:captureDeviceInput];
    }
    else {
        // Handle the failure.
    }

    See Configuring a Session for more details on how you might reconfigure a running session.

    对于如何在会话运行的过程中配置他的详细细节,可以参考 Configuring a Session

    An AVCaptureInput vends one or more streams of media data. For example, input devices can provide both audio and video data. Each media stream provided by an input is represented by an AVCaptureInputPort object. A capture session uses an AVCaptureConnection object to define the mapping between a set of AVCaptureInputPort objects and a single AVCaptureOutput.

    采集输入 (AVCaptureInput) 提供一个或多个媒体数据流。比如输入设备可以同时提供音频和视频数据。输入提供的每个媒体流都用一个 AVCaptureInputPort 对象来表示。采集会话使用连接 (AVCaptureConnection) 对象来定义一组 AVCaptureInputPort 对象和一个 AVCaptureOutput 之间的映射关系。
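    在实际代码中,通常通过输出对象的 connectionWithMediaType: 方法拿到某一类媒体对应的连接,示意如下:

```objc
// 从影片文件输出上取得视频连接,之后可以用他配置方向,稳定功能等
AVCaptureMovieFileOutput *movieOutput = <#A movie file output#>;
AVCaptureConnection *videoConnection =
        [movieOutput connectionWithMediaType:AVMediaTypeVideo];
if (videoConnection != nil) {
    // 通过 videoConnection 配置这一路数据流
}
```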

    从采集会话中获取输出 Use Capture Outputs to Get Output from a Session

    To get output from a capture session, you add one or more outputs. An output is an instance of a concrete subclass of AVCaptureOutput. You use:

  • AVCaptureMovieFileOutput to output to a movie file
  • AVCaptureVideoDataOutput if you want to process frames from the video being captured, for example, to create your own custom view layer
  • AVCaptureAudioDataOutput if you want to process the audio data being captured
  • AVCaptureStillImageOutput if you want to capture still images with accompanying metadata

    要从采集会话 (capture session) 中获取输出,你可以添加一个或多个输出。每个输出都是 AVCaptureOutput 某个具体子类的实例:

  • AVCaptureMovieFileOutput 用来输出影片文件;
  • AVCaptureVideoDataOutput 用来对采集的视频帧进行处理,比如用来创建自己的自定义显示图层;
  • AVCaptureAudioDataOutput 用来对采集的音频数据进行处理;
  • AVCaptureStillImageOutput 用来采集带有元数据的静态图片,也就是拍照。

    You add outputs to a capture session using addOutput:. You check whether a capture output is compatible with an existing session using canAddOutput:. You can add and remove outputs as required while the session is running.

    你使用 addOutput: 来向采集会话中添加输出,在这之前可以先通过 canAddOutput: 来确定这个输出是否可以被添加进去。你可以在采集会话运行的时候按需添加和删除输出。

    AVCaptureSession *captureSession = <#Get a capture session#>;
    AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
    if ([captureSession canAddOutput:movieOutput]) {
        [captureSession addOutput:movieOutput];
    }
    else {
        // Handle the failure.
    }

    输出到影片文件 Saving to a Movie File

    You save movie data to a file using an AVCaptureMovieFileOutput object. (AVCaptureMovieFileOutput is a concrete subclass of AVCaptureFileOutput, which defines much of the basic behavior.) You can configure various aspects of the movie file output, such as the maximum duration of a recording, or its maximum file size. You can also prohibit recording if there is less than a given amount of disk space left.

    你可以使用 AVCaptureMovieFileOutput 对象(他是 AVCaptureFileOutput 的具体子类,后者定义了大部分基础行为)来将影片数据保存到文件。你可以对影片文件输出进行多项设置,比如录制的最大时长,最大文件尺寸等。你还可以在剩余磁盘空间少于给定值的时候禁止录制。

    AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
    CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
    aMovieFileOutput.maxRecordedDuration = maxDuration;
    aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>;
    

    The resolution and bit rate for the output depend on the capture session’s sessionPreset. The video encoding is typically H.264 and audio encoding is typically AAC. The actual values vary by device.

    使用这种方式输出的分辨率和比特率取决于采集会话中的参数配置 sessionPreset。视频一般使用 H.264 进行编码,而音频一般使用 AAC 编码。但是实际的值会根据设备而变化。

    开始录制 Starting a Recording

    You start recording a QuickTime movie using startRecordingToOutputFileURL:recordingDelegate:. You need to supply a file-based URL and a delegate. The URL must not identify an existing file, because the movie file output does not overwrite existing resources. You must also have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol, and must implement the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.

    当调用 startRecordingToOutputFileURL:recordingDelegate: 的时候就会开始录制 QuickTime 影片。你需要提供一个基于文件的 URL 和一个委托。这个 URL 不可以指向已经存在的文件,因为影片文件输出不会覆盖已有资源,此外你还必须拥有向指定位置写入的权限。委托必须遵守 AVCaptureFileOutputRecordingDelegate 协议,并且必须实现 captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: 方法。

    AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
    NSURL *fileURL = <#A file URL that identifies the output location#>;
    [aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
    

    In the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:, the delegate might write the resulting movie to the Camera Roll album. It should also check for any errors that might have occurred.

    在 captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: 的实现中,委托可以将录制好的影片保存到相机胶卷,同时还应该检查过程中可能发生的错误。
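    比如,在这个委托回调中可以把影片写入相机胶卷(示意代码,这里假设使用当时配套的 AssetsLibrary 框架中的 ALAssetsLibrary,仅作草图参考):

    // 示意:在 captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: 中
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    [library writeVideoAtPathToSavedPhotosAlbum:outputFileURL
                                completionBlock:^(NSURL *assetURL, NSError *saveError) {
        if (saveError) {
            // 处理保存失败
            NSLog(@"Save error: %@", saveError);
        }
    }];
    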

    确定文件被正确的写入了 Ensuring That the File Was Written Successfully

    To determine whether the file was saved successfully, in the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: you check not only the error but also the value of the AVErrorRecordingSuccessfullyFinishedKey in the error’s user info dictionary:

    captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: 中你可以通过 error 和其中 user info 中的 AVErrorRecordingSuccessfullyFinishedKey 值来确定文件是否被正确的写入。

    - (void)captureOutput:(AVCaptureFileOutput *)captureOutput
            didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
            fromConnections:(NSArray *)connections
            error:(NSError *)error {
        BOOL recordedSuccessfully = YES;
        if ([error code] != noErr) {
            // A problem occurred: Find out if the recording was successful.
            id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
            if (value) {
                recordedSuccessfully = [value boolValue];
            }
        }
        // Continue as appropriate...
    }
    

    You should check the value of the AVErrorRecordingSuccessfullyFinishedKey key in the user info dictionary of the error, because the file might have been saved successfully, even though you got an error. The error might indicate that one of your recording constraints was reached—for example, AVErrorMaximumDurationReached or AVErrorMaximumFileSizeReached. Other reasons the recording might stop are:

  • The disk is full—AVErrorDiskFull
  • The recording device was disconnected—AVErrorDeviceWasDisconnected
  • The session was interrupted (for example, a phone call was received)—AVErrorSessionWasInterrupted
    你必须要检查 error 的 user info 字典中 AVErrorRecordingSuccessfullyFinishedKey 键的值,因为即使收到了错误,文件仍然可能已被正确写入。错误可能表明达到了某项录制限制,比如达到最大时长 AVErrorMaximumDurationReached 或最大文件尺寸 AVErrorMaximumFileSizeReached。其他可能导致录制停止的原因有:

  • 磁盘已满 AVErrorDiskFull
  • 录制设备断开 AVErrorDeviceWasDisconnected
  • 采集会话 session 被中断 AVErrorSessionWasInterrupted (比如电话呼入的时候)
    往文件中添加元数据 Adding Metadata to a File

    You can set metadata for the movie file at any time, even while recording. This is useful for situations where the information is not available when the recording starts, as may be the case with location information. Metadata for a file output is represented by an array of AVMetadataItem objects; you use an instance of its mutable subclass, AVMutableMetadataItem, to create metadata of your own.

    即使是在录制的过程中,你也可以在任何时候往影片文件中添加元数据。这在录制开始时还无法获得某些信息(比如位置信息)的情况下尤为有用。输出文件的元数据是用一组 AVMetadataItem 对象来表示的;你可以使用它的可变子类 AVMutableMetadataItem 来创建自己的元数据。

    AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
    NSArray *existingMetadataArray = aMovieFileOutput.metadata;
    NSMutableArray *newMetadataArray = nil;
    if (existingMetadataArray) {
        newMetadataArray = [existingMetadataArray mutableCopy];
    }
    else {
        newMetadataArray = [[NSMutableArray alloc] init];
    }
    AVMutableMetadataItem *item = [[AVMutableMetadataItem alloc] init];
    item.keySpace = AVMetadataKeySpaceCommon;
    item.key = AVMetadataCommonKeyLocation;
    CLLocation *location = <#The location to set#>;
    item.value = [NSString stringWithFormat:@"%+08.4lf%+09.4lf/",
        location.coordinate.latitude, location.coordinate.longitude];
    [newMetadataArray addObject:item];
    aMovieFileOutput.metadata = newMetadataArray;
    

    对视频帧进行处理 Processing Frames of Video

    An AVCaptureVideoDataOutput object uses delegation to vend video frames. You set the delegate using setSampleBufferDelegate:queue:. In addition to setting the delegate, you specify a serial queue on which the delegate methods are invoked. You must use a serial queue to ensure that frames are delivered to the delegate in the proper order. You can use the queue to modify the priority given to delivering and processing the video frames. See SquareCam for a sample implementation.

    使用 AVCaptureVideoDataOutput 对象可以获取视频帧。你可以通过 setSampleBufferDelegate:queue: 来设置委托;在设置委托的同时,你还要指定一个用来执行委托方法的串行队列。队列必须是串行的,这样才能保证视频帧按正确的顺序传递给委托。你可以通过这个队列来调整传递和处理视频帧的优先级。参考 SquareCam 中的具体实现。

    The frames are presented in the delegate method, captureOutput:didOutputSampleBuffer:fromConnection:, as instances of the CMSampleBufferRef opaque type (see Representations of Media). By default, the buffers are emitted in the camera’s most efficient format. You can use the videoSettings property to specify a custom output format. The video settings property is a dictionary; currently, the only supported key is kCVPixelBufferPixelFormatTypeKey. The recommended pixel formats are returned by the availableVideoCVPixelFormatTypes property, and the availableVideoCodecTypes property returns the supported values. Both Core Graphics and OpenGL work well with the BGRA format:

    视频帧在委托方法 captureOutput:didOutputSampleBuffer:fromConnection: 中给出,它是一个不透明类型 CMSampleBufferRef 的实例(参考 Representations of Media)。默认情况下,这些 buffer 以摄像头最高效的格式给出。你可以通过 videoSettings 属性来指定自定义的输出格式。这个属性是一个字典,目前唯一支持的键是 kCVPixelBufferPixelFormatTypeKey。推荐的像素格式 (pixel formats) 由 availableVideoCVPixelFormatTypes 属性返回,支持的编码类型则由 availableVideoCodecTypes 属性返回。Core Graphics 和 OpenGL 都能很好地处理 BGRA 格式。

    AVCaptureVideoDataOutput *videoDataOutput = [AVCaptureVideoDataOutput new];
    NSDictionary *newSettings =
                    @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    videoDataOutput.videoSettings = newSettings;
    // discard late frames if the data output queue is blocked (e.g., while processing a still image)
    [videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
    // create a serial dispatch queue used for the sample buffer delegate as well as when a still image is captured
    // a serial dispatch queue must be used to guarantee that video frames will be delivered in order
    // see the header doc for setSampleBufferDelegate:queue: for more information
    videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
    AVCaptureSession *captureSession = <#The Capture Session#>;
    if ( [captureSession canAddOutput:videoDataOutput] )
         [captureSession addOutput:videoDataOutput];
    

    视频处理中对于性能的考虑 Performance Considerations for Processing Video

    You should set the session output to the lowest practical resolution for your application. Setting the output to a higher resolution than necessary wastes processing cycles and needlessly consumes power.

    你应该把会话输出的分辨率设置为应用实际需要的最低值。把输出设置为超出需要的分辨率会浪费处理周期 (processing cycles),并造成不必要的电量消耗。

    You must ensure that your implementation of captureOutput:didOutputSampleBuffer:fromConnection: is able to process a sample buffer within the amount of time allotted to a frame. If it takes too long and you hold onto the video frames, AV Foundation stops delivering frames, not only to your delegate but also to other outputs such as a preview layer.

    你必须确保 captureOutput:didOutputSampleBuffer:fromConnection: 的实现能够在一帧的时间内处理完一个 sample buffer。如果处理耗时过长并且你一直持有视频帧,AVFoundation 就会停止分发视频帧,不仅是给你的委托,连预览图层等其他输出也会停止。

    You can use the capture video data output’s minFrameDuration property to be sure you have enough time to process a frame—at the cost of having a lower frame rate than would otherwise be the case. You might also make sure that the alwaysDiscardsLateVideoFrames property is set to YES (the default). This ensures that any late video frames are dropped rather than handed to you for processing. Alternatively, if you are recording and it doesn’t matter if the output frames are a little late and you would prefer to get all of them, you can set the property value to NO. This does not mean that frames will not be dropped (that is, frames may still be dropped), but that they may not be dropped as early, or as efficiently.

    你可以设置采集视频输出的 minFrameDuration 属性来确保有足够的时间处理一帧,代价是帧率会比原本低一些。你也可以确认 alwaysDiscardsLateVideoFrames 属性被设置为 YES(默认值),这样迟到的视频帧会被直接丢弃,而不会交给你处理。反过来,如果你正在录制,不在乎输出的帧稍有延迟,而是希望拿到所有的帧,那么可以把这个属性设置为 NO。这并不意味着不会再丢帧(帧仍然可能被丢弃),只是帧不会那么早、那么高效地被丢弃。
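    上面提到的两个属性可以这样配置(示意代码,10 帧的上限是假设值,应按实际处理耗时调整):

    AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    // 每帧至少占 1/10 秒,相当于把帧率上限设为 10 fps
    videoDataOutput.minFrameDuration = CMTimeMake(1, 10);
    // 处理不过来时直接丢弃迟到的帧(默认值即为 YES)
    videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
    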

    拍照 Capturing Still Images

    You use an AVCaptureStillImageOutput output if you want to capture still images with accompanying metadata. The resolution of the image depends on the preset for the session, as well as the device.

    你可以使用 AVCaptureStillImageOutput 输出来拍摄带有元数据的照片。图片的分辨率取决于会话使用的预设参数 (preset) 以及设备本身。

    像素格式和编码格式 Pixel and Encoding Formats

    Different devices support different image formats. You can find out what pixel and codec types are supported by a device using availableImageDataCVPixelFormatTypes and availableImageDataCodecTypes respectively. Each method returns an array of the supported values for the specific device. You set the outputSettings dictionary to specify the image format you want, for example:

    不同设备支持的图像格式是不一样的。你可以通过 availableImageDataCVPixelFormatTypes 来获取设备支持的像素格式,通过 availableImageDataCodecTypes 获取设备支持的编码类型。他们都会返回设备支持的一组值。你可以通过设置 outputSettings 字典来指定图像格式,比如:

    AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG};
    [stillImageOutput setOutputSettings:outputSettings];
    

    If you want to capture a JPEG image, you should typically not specify your own compression format. Instead, you should let the still image output do the compression for you, since its compression is hardware-accelerated. If you need a data representation of the image, you can use jpegStillImageNSDataRepresentation: to get an NSData object without recompressing the data, even if you modify the image’s metadata.

    如果你想拍摄 JPEG 图像,通常不应该自己指定压缩格式,而应该让静态图像输出为你进行压缩,因为它的压缩是经过硬件加速的。如果你需要图像的数据表示,可以使用 jpegStillImageNSDataRepresentation: 来获取 NSData 对象;即使你修改了图像的元数据,它也不会重新压缩数据。
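    比如,在拍照完成的回调里可以这样取出 JPEG 数据(示意代码,imageSampleBuffer 来自下文拍照回调中的 sample buffer):

    // 示意:取出 JPEG 数据,不会触发重新压缩
    NSData *jpegData =
        [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    [jpegData writeToURL:<#A file URL#> atomically:YES];
    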

    获取图片 Capturing an Image

    When you want to capture an image, you send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message. The first argument is the connection you want to use for the capture. You need to look for the connection whose input port is collecting video:

    当你想要获得图片的时候,只需要调用输出的 captureStillImageAsynchronouslyFromConnection:completionHandler:。第一个参数指的是需要进行采集的连接 (connection)。你需要在会话中查找哪个连接的输入端口是用来进行视频采集的。

    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }
    

    The second argument to captureStillImageAsynchronouslyFromConnection:completionHandler: is a block that takes two arguments: a CMSampleBuffer opaque type containing the image data, and an error. The sample buffer itself may contain metadata, such as an EXIF dictionary, as an attachment. You can modify the attachments if you want, but note the optimization for JPEG images discussed in Pixel and Encoding Formats.

    captureStillImageAsynchronouslyFromConnection:completionHandler: 的第二个参数是一个 block,它本身有两个参数:一个是包含图像数据的不透明类型 CMSampleBuffer,另一个是 error。sample buffer 本身可能还以附件 (attachment) 的形式包含元数据,比如 EXIF 信息字典。如果有需要的话你可以修改这些附件,但要注意 像素格式和编码格式 中提到的针对 JPEG 图像的优化。

    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
        ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
            CFDictionaryRef exifAttachments =
                CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
            if (exifAttachments) {
                // Do something with the attachments.
            }
            // Continue as appropriate.
        }];
    

    为用户显示录制的内容 Showing the User What’s Being Recorded

    You can provide the user with a preview of what’s being recorded by the camera (using a preview layer) or by the microphone (by monitoring the audio channel).

    你可以通过预览图层 (preview layer) 向用户展示摄像头正在拍摄的内容,也可以通过监听音频通道 (monitoring the audio channel) 向用户展示麦克风正在录制的内容。

    视频预览 Video Preview

    You can provide the user with a preview of what’s being recorded using an AVCaptureVideoPreviewLayer object. AVCaptureVideoPreviewLayer is a subclass of CALayer (see Core Animation Programming Guide). You don’t need any outputs to show the preview.

    你可以使用 AVCaptureVideoPreviewLayer 对象将正在采集的视频内容预览给用户。他是 CALayer 的子类(参考 Core Animation Programming Guide)。你并不需要为预览设置任何输出。

    Using the AVCaptureVideoDataOutput class provides the client application with the ability to access the video pixels before they are presented to the user.

    通过使用 AVCaptureVideoDataOutput 类,你可以在应用程序呈现给用户前先访问到视频像素内容。

    Unlike a capture output, a video preview layer maintains a strong reference to the session with which it is associated. This is to ensure that the session is not deallocated while the layer is attempting to display video. This is reflected in the way you initialize a preview layer:

    和输出不一样的是,视频的预览图层总是对采集会话进行强引用 (strong reference)。这是为了确保在图层显示视频的过程中,会话不会被释放掉。这反映在你初始化预览图层的方式上:

    AVCaptureSession *captureSession = <#Get a capture session#>;
    CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
    [viewLayer addSublayer:captureVideoPreviewLayer];
    

    In general, the preview layer behaves like any other CALayer object in the render tree (see Core Animation Programming Guide). You can scale the image and perform transformations, rotations, and so on just as you would any layer. One difference is that you may need to set the layer’s orientation property to specify how it should rotate images coming from the camera. In addition, you can test for device support for video mirroring by querying the supportsVideoMirroring property. You can set the videoMirrored property as required, although when the automaticallyAdjustsVideoMirroring property is set to YES (the default), the mirroring value is automatically set based on the configuration of the session.

    总的来说,预览图层和渲染树 (render tree) 中的其他 CALayer 对象行为差不多(参考 Core Animation Programming Guide)。你可以像操作其他图层一样对画面进行缩放、变形、旋转等。区别在于,你可能需要设置图层的 orientation 属性来指定如何旋转来自摄像头的画面。此外,你还可以通过查询 supportsVideoMirroring 属性来确定设备是否支持画面镜像,并按需设置 videoMirrored 属性。不过当 automaticallyAdjustsVideoMirroring 属性为 YES(默认值)时,镜像值会根据会话的配置自动设置。
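    下面是一个示意性的配置片段。这里假设通过预览图层的 connection 来设置方向(iOS 6 及之后可用;更早的系统使用图层自身的 orientation 属性):

    AVCaptureConnection *previewConnection = captureVideoPreviewLayer.connection;
    if ([previewConnection isVideoOrientationSupported]) {
        previewConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
    }
    if ([previewConnection isVideoMirroringSupported]) {
        // 先关闭自动镜像,才能手动设置 videoMirrored
        previewConnection.automaticallyAdjustsVideoMirroring = NO;
        previewConnection.videoMirrored = YES;
    }
    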

    视频拉伸模式 Video Gravity Modes

    The preview layer supports three gravity modes that you set using videoGravity:

  • AVLayerVideoGravityResizeAspect: This preserves the aspect ratio, leaving black bars where the video does not fill the available screen area.
  • AVLayerVideoGravityResizeAspectFill: This preserves the aspect ratio, but fills the available screen area, cropping the video when necessary.
  • AVLayerVideoGravityResize: This simply stretches the video to fill the available screen area, even if doing so distorts the image.
    预览图层 (preview layer) 通过 videoGravity 可以设置三种拉伸模式:

  • AVLayerVideoGravityResizeAspect: 保持比例缩放,如果画面无法填充整个区域的时候会出现黑边;
  • AVLayerVideoGravityResizeAspectFill: 保持比例缩放,但是让画面填充整个区域,对于超出的一边会进行剪裁;
  • AVLayerVideoGravityResize: 直接将画面拉伸到填充整个可用区域,即使这样会让画面失真。
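    比如,要让画面保持比例并填满整个预览区域,可以这样设置(示意代码):

    captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    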
    在预览图层中使用触摸对焦 Using “Tap to Focus” with a Preview

    You need to take care when implementing tap-to-focus in conjunction with a preview layer. You must account for the preview orientation and gravity of the layer, and for the possibility that the preview may be mirrored. See the sample code project AVCam-iOS: Using AVFoundation to Capture Images and Movies for an implementation of this functionality.

    当你在预览图层中实现触摸对焦 (tap-to-focus) 的时候需要小心。你必须考虑预览图层的方向和拉伸模式,以及预览画面可能被镜像的情况。可以参考示例项目 AVCam-iOS: Using AVFoundation to Capture Images and Movies 中这个功能的实现。

    显示音频电平 Showing Audio Levels

    To monitor the average and peak power levels in an audio channel in a capture connection, you use an AVCaptureAudioChannel object. Audio levels are not key-value observable, so you must poll for updated levels as often as you want to update your user interface (for example, 10 times a second).

    要监视采集连接 (capture connection) 中音频通道的平均电平和峰值电平,需要使用 AVCaptureAudioChannel 对象。音频电平不支持 key-value observing,所以你必须按界面刷新的频率(比如每秒 10 次)主动轮询电平值。

    AVCaptureAudioDataOutput *audioDataOutput = <#Get the audio data output#>;
    NSArray *connections = audioDataOutput.connections;
    if ([connections count] > 0) {
        // There should be only one connection to an AVCaptureAudioDataOutput.
        AVCaptureConnection *connection = [connections objectAtIndex:0];
        NSArray *audioChannels = connection.audioChannels;
        for (AVCaptureAudioChannel *channel in audioChannels) {
            float avg = channel.averagePowerLevel;
            float peak = channel.peakHoldLevel;
            // Update the level meter user interface.
        }
    }
    
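    要按界面刷新频率轮询电平,可以用一个定时器来驱动(示意代码,每秒触发 10 次;updateAudioLevels: 是假设的方法名,内部执行上面的轮询代码):

    // 示意:每 0.1 秒轮询一次音频电平并刷新电平表
    self.levelTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                       target:self
                                                     selector:@selector(updateAudioLevels:)
                                                     userInfo:nil
                                                      repeats:YES];
    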

    整合在一起:将采集的视频帧转换为图片 Putting It All Together: Capturing Video Frames as UIImage Objects

    This brief code example illustrates how you can capture video and convert the frames you get to UIImage objects. It shows you how to:

  • Create an AVCaptureSession object to coordinate the flow of data from an AV input device to an output
  • Find the AVCaptureDevice object for the input type you want
  • Create an AVCaptureDeviceInput object for the device
  • Create an AVCaptureVideoDataOutput object to produce video frames
  • Implement a delegate for the AVCaptureVideoDataOutput object to process video frames
  • Implement a function to convert the CMSampleBuffer received by the delegate into a UIImage object
    下面这个示例代码展示了如何采集视频,并将视频帧转换为 UIImage 对象。它包括:

  • 创建采集会话 AVCaptureSession 对象来管理输入输出设备间的数据流;
  • 获取指定的输入设备 AVCaptureDevice;
  • 为设备创建输入对象 AVCaptureDeviceInput;
  • 创建输出对象 AVCaptureVideoDataOutput 来产生视频帧;
  • 实现 AVCaptureVideoDataOutput 的委托来处理视频帧;
  • 在委托方法内将获取的 CMSampleBuffer 转换为图片对象 UIImage;
    Note: To focus on the most relevant code, this example omits several aspects of a complete application, including memory management. To use AV Foundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

    注意:这个示例为了突出相关代码,忽略了作为一个完整的应用程序所必须的一些处理部分,包括内存管理。在使用 AVFoundation 框架的时候,你必须对 Cocoa 开发有足够多的经验来弥补这些忽略的代码。

    创建和配置采集会话 Create and Configure a Capture Session

    You use an AVCaptureSession object to coordinate the flow of data from an AV input device to an output. Create a session, and configure it to produce medium-resolution video frames.

    采集会话 AVCaptureSession 是用来协调输入输出设备之间数据流的对象。下面的代码创建了一个会话,并采集中等画质的视频。

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;
    

    创建和配置设备输入 Create and Configure the Device and Device Input

    Capture devices are represented by AVCaptureDevice objects; the class provides methods to retrieve an object for the input type you want. A device has one or more ports, configured using an AVCaptureInput object. Typically, you use the capture input in its default configuration.

    采集设备是用 AVCaptureDevice 对象来表示的,他提供了让你获取输入对象 (input) 的方法。一个设备有一个或者多个端口,他们都通过 AVCaptureInput 对象来配置。一般情况下,你只要使用输入设备的默认配置就可以了。

    Find a video capture device, then create a device input with the device and add it to the session. If an appropriate device can not be located, then the deviceInputWithDevice:error: method will return an error by reference.

    先找到一个视频采集设备 (video capture device),然后用他来创建一个设备输入 (device input) 并添加到会话 session 中。如果没有找到合适的视频采集设备, deviceInputWithDevice:error: 方法就会返回一个错误对象。

    AVCaptureDevice *device =
            [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input =
            [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];
    

    创建和配置视频输出 Create and Configure the Video Data Output

    You use an AVCaptureVideoDataOutput object to process uncompressed frames from the video being captured. You typically configure several aspects of an output. For video, for example, you can specify the pixel format using the videoSettings property and cap the frame rate by setting the minFrameDuration property.

    通过使用 AVCaptureVideoDataOutput 对象来获取采集到的未压缩的视频帧。你可以为输出进行多种配置。比如对于视频,你可以用 videoSettings 属性来设置像素格式 (pixel format),通过设置 minFrameDuration 属性来修改帧率。

    Create and configure an output for video data and add it to the session; cap the frame rate to 15 fps by setting the minFrameDuration property to 1/15 second:

    以下代码创建一个视频数据输出并添加到会话中,然后将 minFrameDuration 属性设置为 1/15 秒,把帧率限制为每秒 15 帧:

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];
    output.videoSettings =
                    @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    output.minFrameDuration = CMTimeMake(1, 15);
    

    The data output object uses delegation to vend the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output’s delegate, you must also provide a queue on which callbacks should be invoked.

    视频输出对象通过委托来获取视频帧。这个委托要实现 AVCaptureVideoDataOutputSampleBufferDelegate 协议。同时你还要提供一个队列来处理委托中的回调函数调用。

    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    

    You use the queue to modify the priority given to delivering and processing the video frames.

    你要通过这个队列来修改传输和处理视频帧的优先级。
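    比如,可以用 dispatch_set_target_queue 来提升这个串行队列的优先级(示意代码):

    // 示意:让处理视频帧的串行队列以高优先级全局队列为目标运行
    dispatch_set_target_queue(queue,
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));
    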

    实现采样缓冲委托方法 Implement the Sample Buffer Delegate Method

    In the delegate class, implement the method (captureOutput:didOutputSampleBuffer:fromConnection:) that is called when a sample buffer is written. The video data output object delivers frames as CMSampleBuffer opaque types, so you need to convert from the CMSampleBuffer opaque type to a UIImage object. The function for this operation is shown in Converting CMSampleBuffer to a UIImage Object.

    每当采样缓冲 (sample buffer) 被写入的时候,都会调用委托类中的 captureOutput:didOutputSampleBuffer:fromConnection: 方法。视频数据输出对象 (video data output object) 传来的视频帧是一种名为 CMSampleBuffer 的不透明类型,所以你需要把它转换为 UIImage 对象。这个转换函数在 Converting CMSampleBuffer to a UIImage Object 中给出。

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
             didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
             fromConnection:(AVCaptureConnection *)connection {
        UIImage *image = imageFromSampleBuffer(sampleBuffer);
        // Add your code here that uses the image.
    }
    

    Remember that the delegate method is invoked on the queue you specified in setSampleBufferDelegate:queue:; if you want to update the user interface, you must invoke any relevant code on the main thread.

    要记住,这个委托方法是在 setSampleBufferDelegate:queue: 指定的队列上调用的;如果你需要更新界面,必须在主线程上执行相关代码。
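    在委托方法里切回主线程更新界面,可以这样写(示意代码,imageView 是假设的界面元素):

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
             didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
             fromConnection:(AVCaptureConnection *)connection {
        UIImage *image = imageFromSampleBuffer(sampleBuffer);
        dispatch_async(dispatch_get_main_queue(), ^{
            // 仅在主线程更新 UI
            self.imageView.image = image;
        });
    }
    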

    开始和结束录制 Starting and Stopping Recording

    After configuring the capture session, you should ensure that the camera has permission to record according to the user’s preferences.

    在配置完采集会话 (capture session) 后,你还需要确保已经从用户那里获取到相机权限了。

    NSString *mediaType = AVMediaTypeVideo;
    [AVCaptureDevice requestAccessForMediaType:mediaType completionHandler:^(BOOL granted) {
        if (granted) {
            // Granted access to mediaType
            [self setDeviceAuthorized:YES];
        }
        else {
            // Not granted access to mediaType
            dispatch_async(dispatch_get_main_queue(), ^{
                [[[UIAlertView alloc] initWithTitle:@"AVCam!"
                                            message:@"AVCam doesn't have permission to use Camera, please change privacy settings"
                                           delegate:self
                                  cancelButtonTitle:@"OK"
                                  otherButtonTitles:nil] show];
                [self setDeviceAuthorized:NO];
            });
        }
    }];
    

    If the camera session is configured and the user has approved access to the camera (and if required, the microphone), send a startRunning message to start the recording.

    当采集会话配置好并获取到相机权限(如果有需要,还要获取麦克风权限)后,你可以调用 startRunning 来开始录制。

    Important: The startRunning method is a blocking call which can take some time, therefore you should perform session setup on a serial queue so that the main queue isn't blocked (which keeps the UI responsive). See AVCam-iOS: Using AVFoundation to Capture Images and Movies for the canonical implementation example.

    重要事项:startRunning 方法是一个阻塞调用,可能会消耗一些时间,因此你应该在串行队列上执行会话配置,以免阻塞主队列(保证 UI 的响应)。参考 AVCam-iOS: Using AVFoundation to Capture Images and Movies 中的典型实现。

    [session startRunning];
    

    To stop recording, you send the session a stopRunning message.

    调用 stopRunning 来停止录制。

    高速视频拍摄 High Frame Rate Video Capture

    iOS 7.0 introduces high frame rate video capture support (also referred to as “SloMo” video) on selected hardware. The full AVFoundation framework supports high frame rate content.

    从 iOS 7.0 开始在一些设备硬件上支持进行高速视频拍摄(称为 慢动作 视频)。AVFoundation 框架也支持这种高帧率的内容。

    You determine the capture capabilities of a device using the AVCaptureDeviceFormat class. This class has methods that return the supported media types, frame rates, field of view, maximum zoom factor, whether video stabilization is supported, and more.

  • Capture supports full 720p (1280 x 720 pixels) resolution at 60 frames per second (fps) including video stabilization and droppable P-frames (a feature of H264 encoded movies, which allow the movies to play back smoothly even on slower and older hardware.)
  • Playback has enhanced audio support for slow and fast playback, allowing the time pitch of the audio can be preserved at slower or faster speeds.
  • Editing has full support for scaled edits in mutable compositions.
  • Export provides two options when supporting 60 fps movies. The variable frame rate, slow or fast motion, can be preserved, or the movie and be converted to an arbitrary slower frame rate such as 30 frames per second.
    你可以通过 AVCaptureDeviceFormat 类来确定设备的采集能力。这个类的方法可以返回支持的媒体类型、帧率、视野 (field of view)、最大变焦倍率、是否支持视频防抖等信息。

  • 采集支持完整的 720p (1280 x 720) 分辨率、每秒 60 帧 (fps),包括视频防抖和可丢弃的 P 帧(H264 编码影片的一种特性,使影片即使在较慢、较老的硬件上也能流畅播放);
  • 增强支持音频的慢放和快放,使得在进行变速播放的时候可以保持音调 (time pitch)。
  • 可变合成器 (mutable compositions) 中的编辑已经全面支持标尺编辑 (scaled edits);
  • 对于 60 帧的影片在输出的时候可以有两种选择。选择可变帧率,保留慢速或者快速运动画面。或者就是直接选择一个较低的帧率输出,比如30帧。
    The SloPoke sample code demonstrates the AVFoundation support for fast video capture, determining whether hardware supports high frame rate video capture, playback using various rates and time pitch algorithms, and editing (including setting time scales for portions of a composition).

    SloPoke 示例代码展示了如何用 AVFoundation 进行高速视频拍摄,检查设备是否支持高速摄影,用可变帧率 (various rates)和音调算法 (time pitch algorithms) 进行播放, 和编辑(包括在合成器内将一部分内容进行时间标度设置 time scales)。

    播放 Playback

    An instance of AVPlayer manages most of the playback speed automatically by setting the setRate:method value. The value is used as a multiplier for the playback speed. A value of 1.0 causes normal playback, 0.5 plays back at half speed, 5.0 plays back five times faster than normal, and so on.

    AVPlayer 对象通过 setRate: 方法自动管理大部分播放速度。这个值是播放速度的倍数:1.0 表示正常速度,0.5 表示半速,5.0 表示 5 倍速,以此类推。

    The AVPlayerItem object supports the audioTimePitchAlgorithm property. This property allows you to specify how audio is played when the movie is played at various frame rates using the Time Pitch Algorithm Settings constants.

    AVPlayerItem 对象有一个 audioTimePitchAlgorithm 属性,通过它可以用音调算法常量 Time Pitch Algorithm Settings 来指定影片以各种帧率播放时音频的播放方式。
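    一个示意性的变速播放配置(影片地址为占位符,算法选择仅作示例):

    AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL:<#A movie URL#>];
    // 变速播放时用音调算法保持音高
    playerItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmSpectral;
    AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
    [player setRate:0.5]; // 半速播放
    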

    The following table shows the supported time pitch algorithms, the quality, whether the algorithm causes the audio to snap to specific frame rates, and the frame rate range that each algorithm supports.

    下面的表列出了支持的音调算法 (time pitch algorithms)、它们的质量、是否会让音频对齐 (snap) 到特定的帧率,以及每种算法支持的帧率范围。

    编辑 Editing

    When editing, you use the AVMutableComposition class to build temporal edits.

  • Create a new AVMutableComposition instance using the composition class method.
  • Insert your video asset using the insertTimeRange:ofAsset:atTime:error: method.
  • Set the time scale of a portion of the composition using scaleTimeRange:toDuration:
    编辑的时候,使用 AVMutableComposition 类来构建时间上的编辑:

  • composition 类方法创建一个可修改的合成器对象 AVMutableComposition;
  • insertTimeRange:ofAsset:atTime:error: 方法来插入一个视频 asset;
  • scaleTimeRange:toDuration: 在合成器中的一部分进行时间标度 (time scale) 设置。
    输出 Export

    Exporting 60 fps video uses the AVAssetExportSession class to export an asset. The content can be exported using two techniques:

  • Use the AVAssetExportPresetPassthrough preset to avoid reencoding the movie. It retimes the media with the sections of the media tagged as section 60 fps, section slowed down, or section sped up.
  • Use a constant frame rate export for maximum playback compatibility. Set the frameDuration property of the video composition to 30 fps. You can also specify the time pitch by using setting the export session’s audioTimePitchAlgorithm property.
    使用 AVAssetExportSession 类可以导出 60 帧的影片 asset,有两种方式:

  • 使用 AVAssetExportPresetPassthrough 预设可以避免对影片重新编码。它会对媒体重新设定时间,其中的片段分别被标记为 60 fps 片段、慢放片段或快放片段;
  • 使用固定帧率导出,以获得最大的播放兼容性。把视频合成 (video composition) 的 frameDuration 属性设置为 30 fps。你也可以通过设置输出会话的 audioTimePitchAlgorithm 属性来指定音调 (time pitch) 算法。
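    第一种方式的示意代码(asset 和输出地址均为占位符):

    AVAssetExportSession *exportSession =
        [[AVAssetExportSession alloc] initWithAsset:<#An asset#>
                                         presetName:AVAssetExportPresetPassthrough];
    exportSession.outputURL = <#An output file URL#>;
    exportSession.outputFileType = AVFileTypeQuickTimeMovie;
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (exportSession.status == AVAssetExportSessionStatusFailed) {
            NSLog(@"Export failed: %@", exportSession.error);
        }
    }];
    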
    录制 Recording

    You capture high frame rate video using the AVCaptureMovieFileOutput class, which automatically supports high frame rate recording. It will automatically select the correct H264 pitch level and bit rate.

    你可以使用 AVCaptureMovieFileOutput 类来拍摄高帧率视频,它自动支持高帧率录制,并且会自动选择正确的 H264 level 和比特率。

    To do custom recording, you must use the AVAssetWriter class, which requires some additional setup.

    如果要做一些自定义的录制,你需要使用 AVAssetWriter 并且需要一些额外的设置。

    assetWriterInput.expectsMediaDataInRealTime = YES;
    

    This setting ensures that the capture can keep up with the incoming data.

    这个设置可以确保写入过程跟得上采集数据的到达速度。
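    一个示意性的 AVAssetWriter 配置(输出地址、分辨率等均为假设值):

    NSError *error = nil;
    AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<#An output file URL#>
                                                          fileType:AVFileTypeQuickTimeMovie
                                                             error:&error];
    NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @1280,
                                     AVVideoHeightKey : @720 };
    AVAssetWriterInput *assetWriterInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:videoSettings];
    // 实时数据源必须设置这个属性
    assetWriterInput.expectsMediaDataInRealTime = YES;
    if ([assetWriter canAddInput:assetWriterInput]) {
        [assetWriter addInput:assetWriterInput];
    }
    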