Processing Images



Core Image has three classes that support image processing on iOS and OS X:

· CIFilter is a mutable object that represents an effect. A filter object has at least one input parameter and produces an output image. A filter operates on an image's pixels, using key-value settings to determine how the effect is applied and to what degree.

· CIImage is an immutable object that represents an image. You can synthesize image data or provide it from a file or the output of another CIFilter object.

· CIContext is an object through which Core Image draws the results produced by a filter. A Core Image context can be based on the CPU or the GPU.

The remainder of this chapter provides all the details you need to use Core Image filters and the CIFilter, CIImage, and CIContext classes on iOS and OS X.

Overview

Processing an image is straightforward, as shown in Listing 1-1. This example uses Core Image methods specific to iOS; see below for the corresponding OS X methods. Each numbered step in the listing is described in more detail following the listing.

Note: To use Core Image in your app, you should add the framework to your Xcode project (CoreImage.framework on iOS or QuartzCore.framework on OS X) and import the corresponding header in your source code files.

CIContext *context = [CIContext contextWithOptions:nil];               // 1
CIImage *image = [CIImage imageWithContentsOfURL:myURL];               // 2
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];           // 3  CISepiaTone is the filter name of the sepia-tone effect
[filter setValue:image forKey:kCIInputImageKey];                       //    the source image to modify
[filter setValue:@0.8f forKey:kCIInputIntensityKey];                   //    the filter's parameter name and its value
CIImage *result = [filter valueForKey:kCIOutputImageKey];              // 4  the filtered result
CGRect extent = [result extent];
CGImageRef cgImage = [context createCGImage:result fromRect:extent];   // 5

Here's what the code does:

1. Create a CIContext object. The contextWithOptions: method is only available on iOS. For details on other methods for iOS and OS X, see Table 1-2.

2. Create a CIImage object. You can create a CIImage from a variety of sources, such as a URL. See Creating a CIImage Object for more options.

3. Create the filter and set values for its input parameters. There are more compact ways to set values than shown here. See Creating a CIFilter Object and Setting Values.

4. Get the output image. The output image is a recipe for how to produce the image. The image is not yet rendered. See Getting the Output Image.

5. Render the CIImage to a Core Graphics image that is ready for display or saving to a file.

Important: Some Core Image filters produce images of infinite extent, such as those in the CICategoryTileEffect category. Prior to rendering, infinite images must either be cropped (CICrop filter) or you must specify a rectangle of finite dimensions for rendering the image.

The Built-in Filters

Core Image comes with dozens of built-in filters ready to support image processing in your app. Core Image Filter Reference lists these filters, their characteristics, their iOS and OS X availability, and shows a sample image produced by each filter. The list of built-in filters can change, so for that reason, Core Image provides methods that let you query the system for the available filters (see Querying the System for Filters).

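Because the set of built-in filters can vary between OS releases, you can enumerate them at runtime rather than hard-coding names. A minimal sketch using the filterNamesInCategory: class method (kCICategoryBuiltIn restricts the query to the built-in filters):

```objc
// List every built-in filter name available on the current system.
NSArray *filterNames = [CIFilter filterNamesInCategory:kCICategoryBuiltIn];
for (NSString *name in filterNames) {
    NSLog(@"%@", name);
}
```

The related filterNamesInCategories: method accepts an array of category constants instead, or nil to query all categories.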

A filter category specifies the type of effect (blur, distortion, generator, and so forth) or its intended use (still images, video, nonsquare pixels, and so on). A filter can be a member of more than one category. A filter also has a display name, which is the name to show to users, and a filter name, which is the name you must use to access the filter programmatically.


Most filters have one or more input parameters that let you control how processing is done. Each input parameter has an attribute class that specifies its data type, such as NSNumber. An input parameter can optionally have other attributes, such as its default value, the allowable minimum and maximum values, the display name for the parameter, and any other attributes that are described in CIFilter Class Reference.


For example, the CIColorMonochrome filter has three input parameters: the image to process, a monochrome color, and the color intensity. You supply the image and have the option to set a color and its intensity. Most filters, including the CIColorMonochrome filter, have default values for each nonimage input parameter. Core Image uses the default values to process your image if you choose not to supply your own values for the input parameters.

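As a concrete sketch of the CIColorMonochrome example above (myCIImage is assumed to be an existing CIImage; either nonimage value could be omitted to keep the filter's default):

```objc
CIFilter *mono = [CIFilter filterWithName:@"CIColorMonochrome"];
[mono setValue:myCIImage forKey:kCIInputImageKey];           // the image to process
[mono setValue:[CIColor colorWithRed:0.7 green:0.6 blue:0.5]
        forKey:kCIInputColorKey];                            // the monochrome color
[mono setValue:@1.0f forKey:kCIInputIntensityKey];           // the color intensity
CIImage *monoImage = [mono valueForKey:kCIOutputImageKey];
```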

Filter attributes are stored as key-value pairs. The key is a constant that identifies the attribute, and the value is the setting associated with the key. Core Image attribute values are typically one of the data types listed in Table 1-1.


Core Image uses key-value coding, which means you can get and set values for the attributes of a filter by using the methods provided by the NSKeyValueCoding protocol. (For more information, see Key-Value Coding Programming Guide.)
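For example, a filter's attribute dictionary can be inspected to discover a parameter's class, default, and allowable range; a sketch using the standard kCIAttribute… keys:

```objc
CIFilter *filter = [CIFilter filterWithName:@"CIColorMonochrome"];
NSDictionary *attributes = [filter attributes];
// Each input parameter maps to its own dictionary of attribute metadata.
NSDictionary *intensityInfo = attributes[kCIInputIntensityKey];
NSLog(@"class: %@  default: %@  min: %@  max: %@",
      intensityInfo[kCIAttributeClass],
      intensityInfo[kCIAttributeDefault],
      intensityInfo[kCIAttributeSliderMin],
      intensityInfo[kCIAttributeSliderMax]);
```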

Creating a Core Image Context

To render the image, you need to create a Core Image context and then use that context to draw the output image. A Core Image context represents a drawing destination. The destination determines whether Core Image uses the GPU or the CPU for rendering. Table 1-2 lists the various methods you can use for specific platforms and renderers.


Creating a Core Image Context on iOS When You Don't Need Real-Time Performance

If your app doesn't require real-time display, you can create a CIContext object as follows:
CIContext *context = [CIContext contextWithOptions:nil];
This method can use either the CPU or GPU for rendering. To specify which to use, set up an options dictionary and add the key kCIContextUseSoftwareRenderer with the appropriate Boolean value for your app. CPU rendering is slower than GPU rendering. But in the case of GPU rendering, the resulting image is not displayed until after it is copied back to CPU memory and converted to another image type such as a UIImage object.

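For example, to force CPU-based rendering (a sketch; pass @NO, or omit the key entirely, to allow GPU rendering):

```objc
NSDictionary *options = @{ kCIContextUseSoftwareRenderer : @YES };
CIContext *cpuContext = [CIContext contextWithOptions:options];
```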

Creating a Core Image Context on iOS When You Need Real-Time Performance

If your app supports real-time image processing, you should create a CIContext object from an EAGL context rather than using contextWithOptions: and specifying the GPU. The advantage is that the rendered image stays on the GPU and never gets copied back to CPU memory. First you need to create an EAGL context:


EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
Then use the method contextWithEAGLContext: as shown in Listing 1-2 to create a CIContext object.


You should turn off color management by supplying null for the working color space. Color management slows down performance. You'll want to use color management for situations that require color fidelity. But in a real-time app, color fidelity is often not a concern. (See Does Your App Need Color Management?)


Listing 1-2 Creating a CIContext on iOS for real-time performance

NSDictionary *options = @{ kCIContextWorkingColorSpace : [NSNull null] };
CIContext *myContext = [CIContext contextWithEAGLContext:myEAGLContext options:options];


Creating a Core Image Context from a CGContext on OS X

You can create a Core Image context from a Quartz 2D graphics context using code similar to that shown in Listing 1-3, which is an excerpt from the drawRect: method in a Cocoa app. You get the current NSGraphicsContext, convert that to a Quartz 2D graphics context (CGContextRef), and then provide the Quartz 2D graphics context as an argument to the contextWithCGContext:options: method of the CIContext class. For information on Quartz 2D graphics contexts, see Quartz 2D Programming Guide.

Listing 1-3 Creating a Core Image context from a Quartz 2D graphics context

context = [CIContext contextWithCGContext:[[NSGraphicsContext currentContext] graphicsPort] options:nil];

Creating a Core Image Context from an OpenGL Context on OS X

The code in Listing 1-4 shows how to set up a Core Image context from the current OpenGL graphics context. It's important that the pixel format for the context includes the NSOpenGLPFANoRecovery constant as an attribute. Otherwise Core Image may not be able to create another context that shares textures with this one. You must also make sure that you pass a pixel format whose data type is CGLPixelFormatObj, as shown in Listing 1-4. For more information on pixel formats and OpenGL, see OpenGL Programming Guide for Mac.

Listing 1-4 Creating a Core Image context from an OpenGL graphics context

const NSOpenGLPixelFormatAttribute attr[] = {
    NSOpenGLPFAAccelerated,
    NSOpenGLPFANoRecovery,
    NSOpenGLPFAColorSize, 32,
    0
};
NSOpenGLPixelFormat *pf = [[NSOpenGLPixelFormat alloc] initWithAttributes:(void *)&attr];
CIContext *myCIContext = [CIContext contextWithCGLContext: CGLGetCurrentContext()
                                              pixelFormat: [pf CGLPixelFormatObj]
                                                  options: nil];

Creating a Core Image Context from an NSGraphicsContext on OS X

The CIContext method of the NSGraphicsContext class returns a CIContext object that you can use to render into the NSGraphicsContext object. The CIContext object is created on demand and remains in existence for the lifetime of its owning NSGraphicsContext object. You create the Core Image context using a line of code similar to the following:
[[NSGraphicsContext currentContext] CIContext];

For more information on this method, see NSGraphicsContext Class Reference.


Creating a CIImage Object

Core Image filters process Core Image images (CIImage objects). Table 1-3 lists the methods that create a CIImage object. The method you use depends on the source of the image. Keep in mind that a CIImage object is really an image recipe; Core Image doesn't actually produce any pixels until it's called on to render results to a destination.

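A few of the creation methods from Table 1-3, as a sketch (myURL, jpegData, and cgImage are assumed to exist already):

```objc
CIImage *fromURL  = [CIImage imageWithContentsOfURL:myURL];   // from a file URL
CIImage *fromData = [CIImage imageWithData:jpegData];         // from in-memory image data
CIImage *fromCG   = [CIImage imageWithCGImage:cgImage];       // from a Quartz image
```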

Note: In OS X v10.5 and later, you can supply RAW image data directly to a filter. See "RAW Image Options" in CIFilter Class Reference.

Creating a CIFilter Object and Setting Values

The filterWithName: method creates a filter whose type is specified by the name argument. The name argument is a string whose value must match exactly the filter name of a built-in filter (see The Built-in Filters). You can obtain a list of filter names by following the instructions in Querying the System for Filters, or you can look up a filter name in Core Image Filter Reference.


On iOS, the input values for a filter are set to default values when you call the filterWithName: method.


On OS X, the input values for a filter are undefined when you first create it, which is why you either need to call the setDefaults method to set the default values or supply values for all input parameters at the time you create the filter by calling the method filterWithName:withInputParameters:. If you call setDefaults, you can call setValue:forKey: later to change the input parameter values.

If you don't know the input parameters for a filter, you can get an array of them using the method inputKeys. (Or, you can look up the input parameters for most of the built-in filters in Core Image Filter Reference.) Filters, except for generator filters, require an input image. Some require two or more images or textures. Set a value for each input parameter whose default value you want to change by calling the method setValue:forKey:.


Let's look at an example of setting up a filter to adjust the hue of an image. The filter's name is CIHueAdjust. As long as you enter the name correctly, you'll be able to create the filter with this line of code:


hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];
Defaults are set for you on iOS but not on OS X. When you create a filter on OS X, it's advisable to immediately set the input values. In this case, set the defaults:
[hueAdjust setDefaults];
This filter has two input parameters: the input image and the input angle. The input angle for the hue adjustment filter refers to the location of the hue in the HSV and HLS color spaces. This is an angular measurement that can vary from 0.0 to 2 pi. A value of 0 indicates the color red; the color green corresponds to 2/3 pi radians, and the color blue is 4/3 pi radians.


Next you'll want to specify an input image and an angle. The image can be one created from any of the methods listed in Creating a CIImage Object. Figure 1-1 shows the unprocessed image.


The floating-point value in this code specifies a rose-colored hue:

[hueAdjust setValue: myCIImage forKey: kCIInputImageKey];
[hueAdjust setValue: @2.094f forKey: kCIInputAngleKey];
Tip: A filter's documentation is built-in. You can find out programmatically its input parameters as well as the minimum and maximum values for each input parameter. See Querying the System for Filters.


The following code shows a more compact way to create a filter and set values for its input parameters:


hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"
                 withInputParameters:@{
    kCIInputImageKey: myCIImage,
    kCIInputAngleKey: @2.094f,
}];
You can supply as many input parameters as you'd like. (If you use the variadic filterWithName:keysAndValues: method instead of a dictionary, you must end the list with nil.)


Getting the Output Image

You get the output image by retrieving the value for the outputImage key:
CIImage *result = [hueAdjust valueForKey: kCIOutputImageKey];
Core Image does not perform any image processing until you call a method that actually renders the image (see Rendering the Resulting Output Image). When you request the output image, Core Image assembles the calculations that it needs to produce an output image and stores those calculations (that is, the image "recipe") in a CIImage object. The actual image is only rendered (and hence, the calculations performed) if there is an explicit call to one of the image-drawing methods. (See Rendering the Resulting Output Image.)


Deferring processing until rendering time makes Core Image fast and efficient. At rendering time, Core Image can see if more than one filter needs to be applied to an image. If so, it automatically concatenates multiple "recipes" into one operation, which means each pixel is processed only once rather than many times. Figure 1-2 illustrates a multiple-operations workflow that Core Image can make more efficient. The final image is a scaled-down version of the original. For the case of a large image, applying color adjustment before scaling down the image requires more processing power than scaling down the image and then applying color adjustment. By waiting until render time to apply filters, Core Image can determine that it is more efficient to perform these operations in reverse order.


Rendering the Resulting Output Image

  Rendering the resulting output image triggers the processor-intensive operations—either GPU or CPU, depending on the context you set up. The following methods are available for rendering:


  To render the image discussed in Creating a CIFilter Object and Setting Values, you can use this line of code on OS X to draw the result onscreen:
[myContext drawImage:result inRect:destinationRect fromRect:contextRect];


  The original image from this example (shown in Figure 1-1) now appears in its processed form, as shown in Figure 1-3.


Maintaining Thread Safety


CIContext and CIImage objects are immutable, which means each can be shared safely among threads. Multiple threads can use the same GPU or CPU CIContext object to render CIImage objects. However, this is not the case for CIFilter objects, which are mutable. A CIFilter object cannot be shared safely among threads. If your app is multithreaded, each thread must create its own CIFilter objects. Otherwise, your app could behave unexpectedly.

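The rule above can be sketched with GCD (a hypothetical worker block; the context is shared, but the filter is created inside the block so each thread gets its own; myCIImage is assumed to be an existing CIImage):

```objc
// Safe to share: CIContext is immutable.
CIContext *sharedContext = [CIContext contextWithOptions:nil];

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Not safe to share: create the mutable CIFilter on the thread that uses it.
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:myCIImage forKey:kCIInputImageKey];
    [sepia setValue:@0.8f forKey:kCIInputIntensityKey];
    CIImage *output = [sepia valueForKey:kCIOutputImageKey];
    CGImageRef cgImage = [sharedContext createCGImage:output fromRect:[output extent]];
    // Use cgImage, then release it.
    CGImageRelease(cgImage);
});
```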

Chaining Filters

You can create amazing effects by chaining filters, that is, using the output image from one filter as input to another filter. Let's see how to apply two more filters to the image shown in Figure 1-3: gloom (CIGloom) and bump distortion (CIBumpDistortion).


  The gloom filter does just that; it makes an image gloomy by dulling its highlights. Notice that the code in Listing 1-5 is very similar to that shown in Creating a CIFilter Object and Setting Values. It creates a filter and sets default values for the gloom filter. This time, the input image is the output image from the hue adjustment filter. It’s that easy to chain filters together!


Listing 1-5 Creating, setting up, and applying a gloom filter

CIFilter *gloom = [CIFilter filterWithName:@"CIGloom"];
[gloom setDefaults];                                        // 1
[gloom setValue: result forKey: kCIInputImageKey];
[gloom setValue: @25.0f forKey: kCIInputRadiusKey];         // 2
[gloom setValue: @0.75f forKey: kCIInputIntensityKey];      // 3
result = [gloom valueForKey: kCIOutputImageKey];            // 4

Here’s what the code does:

1. Sets default values. You must set defaults on OS X. On iOS you do not need to set default values because they are set automatically.
2. Sets the input radius to 25. The input radius specifies the extent of the effect, and can vary from 0 to 100 with a default value of 10. Recall that you can find the minimum, maximum, and default values for a filter programmatically by retrieving the attribute dictionary for the filter.
3. Sets the input intensity to 0.75. The input intensity is a scalar value that specifies a linear blend between the filter output and the original image. The minimum is 0.0, the maximum is 1.0, and the default value is 1.0.
4. Requests the output image, but does not draw the image.

The code requests the output image but does not draw the image. Figure 1-4 shows what the image would look like if you drew it at this point, after processing it with both the hue adjustment and gloom filters.


Figure 1-4 The image after applying the hue adjustment and gloom filters

The bump distortion filter (CIBumpDistortion) creates a bulge in an image that originates at a specified point. Listing 1-6 shows how to create, set up, and apply this filter to the output image from the previous filter, the gloom filter. The bump distortion takes three parameters: a location that specifies the center of the effect, the radius of the effect, and the input scale.


Listing 1-6 Creating, setting up, and applying the bump distortion filter

CIFilter *bumpDistortion = [CIFilter filterWithName:@"CIBumpDistortion"];    // 1
[bumpDistortion setDefaults];                                                // 2
[bumpDistortion setValue: result forKey: kCIInputImageKey];
[bumpDistortion setValue: [CIVector vectorWithX:200 Y:150]
                  forKey: kCIInputCenterKey];                                // 3
[bumpDistortion setValue: @100.0f forKey: kCIInputRadiusKey];                // 4
[bumpDistortion setValue: @3.0f forKey: kCIInputScaleKey];                   // 5
result = [bumpDistortion valueForKey: kCIOutputImageKey];

Here’s what the code does:

1. Creates the filter by providing its name.
2. On OS X, sets the default values (not necessary on iOS).
3. Sets the center of the effect to the center of the image. (The center of the effect does not have to coincide with the center of the image.)
4. Sets the radius of the bump to 100 pixels.
5. Sets the input scale to 3. The input scale specifies the direction and the amount of the effect. The default value is –0.5. The range is –10.0 through 10.0. A value of 0 specifies no effect. A negative value creates an outward bump; a positive value creates an inward bump.

Figure 1-5 shows the final rendered image.

Figure 1-5 The image after applying the hue adjustment along with the gloom and bump distortion filters


Using Transition Effects

Transitions are typically used between images in a slide show or to switch from one scene to another in video. These effects are rendered over time and require that you set up a timer. The purpose of this section is to show how to set up the timer. You'll learn how to do this by setting up and applying the copy machine transition filter (CICopyMachine) to two still images. The copy machine transition creates a light bar similar to what you see in a copy machine or image scanner. The light bar sweeps from left to right across the initial image to reveal the target image. Figure 1-6 shows what this filter looks like before, partway through, and after the transition from an image of ski boots to an image of a skier. (To learn more about the specific input parameters of the CICopyMachine filter, see Core Image Filter Reference.)


Figure 1-6 A copy machine transition from ski boots to a skier


Transition filters require the following tasks:

1. Create Core Image images (CIImage objects) to use for the transition.
2. Set up and schedule a timer.
3. Create a CIContext object.
4. Create a CIFilter object for the filter to apply to the image.
5. On OS X, set the default values for the filter.
6. Set the filter parameters.
7. Set the source and the target images to process.
8. Calculate the time.
9. Apply the filter.
10. Draw the result.
11. Repeat steps 8–10 until the transition is complete.

You'll notice that many of these tasks are the same as those required to process an image using a filter other than a transition filter. The difference, however, is the timer used to repeatedly draw the effect at various intervals throughout the transition.


The awakeFromNib method, shown in Listing 1-7, gets two images (boots.jpg and skier.jpg) and sets them as the source and target images. Using the NSTimer class, a timer is set to repeat every 1/30 second. Note the variables thumbnailWidth and thumbnailHeight. These are used to constrain the rendered images to the view set up in Interface Builder.


Note: The NSAnimation class, introduced in OS X v10.4, implements timing for animation on OS X. The NSAnimation class allows you to set up multiple slide shows whose transitions are synchronized to the same timing device. For more information, see the documents NSAnimation Class Reference and Animation Programming Guide for Cocoa.


Listing 1-7 Getting images and setting up a timer

- (void)awakeFromNib
{
    NSTimer    *timer;
    NSURL      *url;

    thumbnailWidth  = 340.0;   // rendered image width
    thumbnailHeight = 240.0;   // rendered image height

    url = [NSURL fileURLWithPath: [[NSBundle mainBundle]
              pathForResource: @"boots" ofType: @"jpg"]];
    [self setSourceImage: [CIImage imageWithContentsOfURL: url]];   // the source image

    url = [NSURL fileURLWithPath: [[NSBundle mainBundle]
              pathForResource: @"skier" ofType: @"jpg"]];
    [self setTargetImage: [CIImage imageWithContentsOfURL: url]];   // the target image

    timer = [NSTimer scheduledTimerWithTimeInterval: 1.0/30.0
                                             target: self
                                           selector: @selector(timerFired:)
                                           userInfo: nil
                                            repeats: YES];
    base = [NSDate timeIntervalSinceReferenceDate];
    [[NSRunLoop currentRunLoop] addTimer: timer forMode: NSDefaultRunLoopMode];
    [[NSRunLoop currentRunLoop] addTimer: timer forMode: NSEventTrackingRunLoopMode];
}
You set up a transition filter just as you'd set up any other filter. Listing 1-8 uses the filterWithName: method to create the filter. It then calls setDefaults to initialize all input parameters. The code sets the extent to correspond with the thumbnail width and height declared in the awakeFromNib method, shown in Listing 1-7. The routine uses the thumbnail variables to specify the center of the effect. For this example, the center of the effect is the center of the image, but it doesn't have to be.

Listing 1-8 Setting up the transition filter

- (void)setupTransition
{
    CGFloat w = thumbnailWidth;
    CGFloat h = thumbnailHeight;
    CIVector *extent = [CIVector vectorWithX: 0  Y: 0  Z: w  W: h];

    transition = [CIFilter filterWithName: @"CICopyMachineTransition"];
    // Set defaults on OS X; not necessary on iOS.
    [transition setDefaults];
    [transition setValue: extent forKey: kCIInputExtentKey];
}

The drawRect: method for the copy machine transition effect is shown in Listing 1-9. This method sets up a rectangle that's the same size as the view and then sets up a floating-point value for the rendering time. If the CIContext object hasn't already been created, the method creates one. If the transition is not yet set up, the method calls the setupTransition method (see Listing 1-8). Finally, the method calls the drawImage:inRect:fromRect: method, passing the image that should be shown for the rendering time. The imageForTransition: method, shown in Listing 1-10, applies the filter and returns the appropriate image for the rendering time.

Listing 1-9 The drawRect: method for the copy machine transition effect

- (void)drawRect: (NSRect)rectangle
{
    CGRect cg = CGRectMake(NSMinX(rectangle), NSMinY(rectangle),
                           NSWidth(rectangle), NSHeight(rectangle));
    CGFloat t = 0.4 * ([NSDate timeIntervalSinceReferenceDate] - base);
    if (context == nil) {
        context = [CIContext contextWithCGContext:
                      [[NSGraphicsContext currentContext] graphicsPort]
                                          options: nil];
    }
    if (transition == nil) {
        [self setupTransition];   // see Listing 1-8
    }
    [context drawImage: [self imageForTransition: t + 0.1]
                inRect: cg
              fromRect: cg];
}
The imageForTransition: method figures out, based on the rendering time, which is the source image and which is the target image. It's set up to allow a transition to repeatedly loop back and forth. If your app applies a transition that doesn't loop, it would not need the if-else construction shown in Listing 1-10.


The routine sets the inputTime value based on the rendering time passed to the imageForTransition: method. It applies the transition, passing the output image from the transition to the crop filter (CICrop). Cropping ensures the output image fits in the view rectangle. The routine returns the cropped transition image to the drawRect: method, which then draws the image.


Listing 1-10 Applying the transition filter

- (CIImage *)imageForTransition: (float)t
{
    // Remove the if-else construct if you don't want the transition to loop.
    if (fmodf(t, 2.0) < 1.0f) {
        [transition setValue: sourceImage forKey: kCIInputImageKey];
        [transition setValue: targetImage forKey: kCIInputTargetImageKey];
    } else {
        [transition setValue: targetImage forKey: kCIInputImageKey];
        [transition setValue: sourceImage forKey: kCIInputTargetImageKey];
    }

    // Set the rendering time.
    [transition setValue: @( 0.5 * (1 - cos(fmodf(t, 1.0f) * M_PI)) )
                  forKey: kCIInputTimeKey];

    CIFilter *crop = [CIFilter filterWithName: @"CICrop"
                          withInputParameters: @{
        kCIInputImageKey: [transition valueForKey: kCIOutputImageKey],
        @"inputRectangle": [CIVector vectorWithX: 0  Y: 0
                                               Z: thumbnailWidth  W: thumbnailHeight],
    }];
    return [crop valueForKey: kCIOutputImageKey];
}
Each time the timer that you set up fires, the display must be updated. Listing 1-11 shows a timerFired: routine that does just that.

Listing 1-11 Using the timer to update the display

- (void)timerFired: (id)sender
{
    [self setNeedsDisplay: YES];
}
Finally, Listing 1-12 shows the housekeeping that needs to be performed if your app switches the source and target images, as the example in Listing 1-10 does.

Listing 1-12 Setting source and target images

- (void)setSourceImage: (CIImage *)source
{
    sourceImage = source;
}

- (void)setTargetImage: (CIImage *)target
{
    targetImage = target;
}

Applying a Filter to Video

  Core Image and Core Video can work together to achieve a variety of effects. For example, you can use a color correction filter on a video shot under water to correct for the fact that water absorbs red light faster than green and blue light. There are many more ways you can use these technologies together.

Follow these steps to apply a Core Image filter to a video displayed using Core Video on OS X:

1. When you subclass NSView to create a view for the video, declare a CIFilter object in the interface, similar to what's shown in this code:
@interface MyVideoView : NSView
{
    NSRecursiveLock     *lock;
    QTMovie             *qtMovie;
    QTVisualContextRef  qtVisualContext;
    CVDisplayLinkRef    displayLink;
    CVImageBufferRef    currentFrame;
    CIFilter            *effectFilter;
    id                  delegate;
}
2. When you initialize the view with a frame, you create a CIFilter object for the filter and set the default values using code similar to the following:
effectFilter = [CIFilter filterWithName:@"CILineScreen"];
[effectFilter setDefaults];
This example uses the Core Image filter CILineScreen, but you'd use whatever is appropriate for your app.

3. Set the filter input parameters, except for the input image.

4. Each time you render a frame, set the input image and draw the output image. Your renderCurrentFrame routine would look similar to the following. To avoid interpolation, this example uses integral coordinates when it draws the output.
- (void)renderCurrentFrame
{
    NSRect frame = [self frame];

    if (currentFrame) {
        CIImage *inputImage = [CIImage imageWithCVImageBuffer:currentFrame];
        CGRect imageRect = [inputImage extent];
        CGFloat x = (frame.size.width - imageRect.size.width) * 0.5;
        CGFloat y = (frame.size.height - imageRect.size.height) * 0.5;
        [effectFilter setValue:inputImage forKey:kCIInputImageKey];
        [[[NSGraphicsContext currentContext] CIContext]
            drawImage:[effectFilter valueForKey:kCIOutputImageKey]
              atPoint:CGPointMake(floor(x), floor(y))
             fromRect:imageRect];
    }
}