iOS iPod Library: Common Read, Write, and Management Techniques


Reposting a brilliant article by a seriously skilled developer; it's very instructive for anyone working with iPod music.

Original:

From iPhone Media Library to PCM Samples in Dozens of Confounding, Potentially Lossy Steps

 

http://www.subfurther.com/blog/2010/07/19/from-iphone-media-library-to-pcm-samples-in-dozens-of-confounding-potentially-lossy-steps/

 

 

 

iPhone SDK 3.0 provided limited access to the iPod Music Library on the device, allowing third party apps to search for songs (and podcasts and audiobooks, but not video), inspect the metadata, and play items, either independently or in concert with the built-in media player application. But it didn’t provide any form of write-access — you couldn’t add items or playlists, or alter metadata, from a third-party app. And it didn’t allow for third-party apps to do anything with the songs except play them… you couldn’t access the files, convert them to another format, run any kind of analysis on the samples, and so on.

So a lot of us were surprised by the WWDC keynote when iMovie for iPhone 4 was shown importing a song from the iPod library for use in a user-made video. We were even more surprised by the subsequent claim that everything in iMovie for iPhone 4 was possible with public APIs. Frankly, I was ready to call bullshit on it because of the iPod Library issue, but was intrigued by the possibility that maybe you could get at the iPod songs in iOS 4. A tweet from @ibeatmaker confirmed that it was possible, and after some clarification, I found what I needed.

About this time, a thread started on coreaudio-api about whether Core Audio could access iPod songs, so that's what I set out to prove one way or another. My goal: determine whether or not you could get raw PCM samples from songs in the device's music library.

The quick answer is: yes. The interesting answer is: it’s a bitch, using three different frameworks, coding idioms that are all over the map, a lot of file-copying and possibly some expensive conversions.

It’s Just One Property; It Can’t Be That Hard

The big secret of how to get to the Music Library isn't much of a secret. As you might expect, it's in the Media Player framework (MediaPlayer.framework) that you use to interact with the library. Each song/podcast/audiobook is an MPMediaItem, and has a number of interesting properties, most of which are user-managed metadata. In iOS 4, there's a sparkling new addition to the list of "General Media Item Property Keys": MPMediaItemPropertyAssetURL. Here's what the docs say:

A URL pointing to the media item, from which an AVAsset object (or other URL-based AV Foundation object) can be created, with any options as desired. Value is an NSURL object.

The URL has the custom scheme of ipod-library. For example, a URL might look like this:

ipod-library://item/item.m4a?id=12345

OK, so we’re off and running. All we need to do is to pick an MPMediaItem, get this property as an NSURL, and we win.

Or not. There’s an important caveat:

Usage of the URL outside of the AV Foundation framework is not supported.

OK, so that’s probably going to suck. But let’s get started anyways. I wrote a throwaway app to experiment with all this stuff, adding to it piece by piece as stuff started working. I’m posting it here for anyone who wants to reuse my code… all my classes are marked as public domain, so copy-and-paste as you see fit.

MediaLibraryExportThrowaway1.zip

Note that this code must be run on an iOS 4 device and cannot be run in the Simulator, which doesn’t support the Media Library APIs.

The app just starts with a "Choose Song" button. When you tap it, it brings up an MPMediaPickerController as a modal view to make you choose a song. When you do so, the -mediaPicker:didPickMediaItems: delegate method gets called. At this point, you could get the first MPMediaItem and get its MPMediaItemPropertyAssetURL media item property. I'd hoped that I could just call this directly from Core Audio, so I wrote a function to test if a URL can be opened by CA:

BOOL coreAudioCanOpenURL (NSURL* url) {
    OSStatus openErr = noErr;
    AudioFileID audioFile = NULL;
    openErr = AudioFileOpenURL((CFURLRef) url,
                               kAudioFileReadPermission,
                               0,
                               &audioFile);
    if (audioFile) {
        AudioFileClose (audioFile);
    }
    return openErr ? NO : YES;
}

Getting a NO back from this function more or less confirms the caveat from the docs: the URL is only for use with the AV Foundation framework.
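
Backing up a step, for anyone who hasn't used the picker before: here's a minimal sketch of what that "Choose Song" flow might look like, assuming a view controller that adopts MPMediaPickerControllerDelegate (the method names here are placeholders of mine, not necessarily what's in the real project):

- (IBAction) handleChooseSongTapped {
    MPMediaPickerController *picker = [[MPMediaPickerController alloc]
                                       initWithMediaTypes: MPMediaTypeMusic];
    picker.delegate = self;
    picker.prompt = @"Choose a song";
    [self presentModalViewController: picker animated: YES];
    [picker release];
}

- (void) mediaPicker: (MPMediaPickerController *) mediaPicker
   didPickMediaItems: (MPMediaItemCollection *) mediaItemCollection {
    // grab the first picked item and its shiny new asset URL property
    MPMediaItem *song = [[mediaItemCollection items] objectAtIndex: 0];
    NSURL *assetURL = [song valueForProperty: MPMediaItemPropertyAssetURL];
    NSLog (@"picked asset URL: %@", assetURL);
    [self dismissModalViewControllerAnimated: YES];
}

- (void) mediaPickerDidCancel: (MPMediaPickerController *) mediaPicker {
    [self dismissModalViewControllerAnimated: YES];
}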

 

AV for Vendetta

OK, so plan B: we open it with AV Foundation and see what that gives us.

AV Foundation — setting aside the simple player and recorder classes from 3.0 — is a strange and ferocious beast of a framework. It borrows from QuickTime and QTKit (the capture classes have an almost one-to-one correspondence with their QTKit equivalents), but builds on some new metaphors and concepts that will take the community a while to digest. For editing, it has a concept of a composition, which is made up of tracks, which you can create from assets. This is somewhat analogous to QuickTime's model that "movies have tracks, which have media", except that AV Foundation's compositions are themselves assets. Actually, reading too much QuickTime into AV Foundation is a good way to get in trouble and get disappointed; QuickTime's most useful functions, like AddMediaSample() and GetMediaNextInterestingTime(), are antithetical to AV Foundation's restrictive design (more on that in a later blog) and therefore don't exist.

Back to the task at hand. The only thing we can do with the media library URL is to open it in AV Foundation and hope we can do something interesting with it. The way to do this is with an AVURLAsset.

NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];

If this were QuickTime, we’d have an object that we could inspect the samples of. But in AV Foundation, the only sample-level access afforded is a capture-time opportunity to get called back with video frames. There’s apparently no way to get to video frames in a file-based asset (except for a thumbnail-generating method that operates on one-second granularity), and no means of directly accessing audio samples at all.

What we can do is to export this URL to a file in our app's documents directory, hopefully in a format that Core Audio can open. AV Foundation's AVAssetExportSession has a class method, exportPresetsCompatibleWithAsset:, that reveals what kinds of formats we can export to. Since we're going to burn the time and CPU of doing an export, it would be nice to be able to convert the compressed song into PCM in some kind of useful container like a .caf, or at least an .aif.
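
The query itself is a one-liner; a sketch, with songAsset being the AVURLAsset created above:

// ask AV Foundation which export presets will work for this asset
NSArray *presets = [AVAssetExportSession
                    exportPresetsCompatibleWithAsset: songAsset];
NSLog (@"compatible presets for songAsset: %@", presets);

But here's what we actually get as options: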

compatible presets for songAsset: (
    AVAssetExportPresetLowQuality,
    AVAssetExportPresetHighestQuality,
    AVAssetExportPreset640x480,
    AVAssetExportPresetMediumQuality,
    AVAssetExportPresetAppleM4A
)

So, no… there's no "output to CAF". In fact, we can't even use AVAssetExportPresetPassthrough to preserve the encoding from the music library: we either have to convert to AAC (in an .m4a container), or to a QuickTime movie (represented by all the presets ending in "Quality", as well as the "640x480").

 

This Deal is Getting Worse All the Time!

So, we have to export to AAC. That’s not entirely bad, since Core Audio should be able to read AAC in an .m4a container just fine. But it sucks in that it will be a lossy conversion from the source, which could be MP3, Apple Lossless, or some other encoding.

In my GUI, an “export” button appears when you pick a song, and the export is kicked off in the event-handler handleExportTapped. Here’s the UI in mid-export:

MediaLibraryExportThrowaway1 UI in mid-export

To do the export, we create an AVAssetExportSession and provide it with an outputFileType and outputURL.

AVAssetExportSession *exporter = [[AVAssetExportSession alloc]
                                  initWithAsset: songAsset
                                  presetName: AVAssetExportPresetAppleM4A];
NSLog (@"created exporter. supportedFileTypes: %@", exporter.supportedFileTypes);
exporter.outputFileType = @"com.apple.m4a-audio";
NSString *exportFile = [myDocumentsDirectory()
                        stringByAppendingPathComponent: @"exported.m4a"];
myDeleteFile(exportFile);
[exportURL release];
exportURL = [[NSURL fileURLWithPath:exportFile] retain];
exporter.outputURL = exportURL;

A few notes here. The docs say that if you set the outputURL without setting outputFileType, the exporter will make a guess based on the file extension. In my experience, the exporter prefers to just throw an exception and die, so set the damn type already. You can get a list of possible values from the exporter's supportedFileTypes property. The only supported value for the AAC export is com.apple.m4a-audio. Also note the call to a myDeleteFile() function; the export will fail if the target file already exists.
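
The sample project defines those two little helpers; if you're rolling your own, a plausible sketch (my assumption of their shape, not necessarily the project's exact code) would be:

NSString* myDocumentsDirectory(void) {
    // the app's Documents directory, the usual place to write files on iOS
    NSArray *paths = NSSearchPathForDirectoriesInDomains(
        NSDocumentDirectory, NSUserDomainMask, YES);
    return [paths objectAtIndex: 0];
}

void myDeleteFile (NSString* path) {
    // the export fails if the target exists, so clear out any previous run
    if ([[NSFileManager defaultManager] fileExistsAtPath: path]) {
        NSError *deleteErr = nil;
        if (![[NSFileManager defaultManager] removeItemAtPath: path
                                                        error: &deleteErr]) {
            NSLog (@"Can't delete %@: %@", path, deleteErr);
        }
    }
}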

Aside: I did experiment with exporting as a QuickTime movie rather than an .m4a; the code is in the download, commented out. Practical upshot is that it sucks: if your song isn’t AAC, then it gets converted to mono AAC at 44.1 KHz. It’s also worth noting that AV Foundation doesn’t give you any means of setting export parameters (bit depths, sample rates, etc.) other than using the presets. If you’re used to the power of frameworks like Core Audio or the old QuickTime, this is a bitter, bitter pill to swallow.

 

Block Head

The code gets really interesting when you kick off the export. You would probably expect the export, a long-lasting operation, to be nice and asynchronous. And it is. You might also expect to register a delegate to get asynchronous callbacks as the export progresses. Not so fast, Bucky. As a new framework, AV Foundation adopts Apple’s latest technologies, and that includes blocks. When you export, you provide a completion handler, a block whose no-arg function is called when necessary by the exporter.

Here’s what mine looks like.

// do the export
[exporter exportAsynchronouslyWithCompletionHandler:^{
    int exportStatus = exporter.status;
    switch (exportStatus) {
        case AVAssetExportSessionStatusFailed: {
            // log error to text view
            NSError *exportError = exporter.error;
            NSLog (@"AVAssetExportSessionStatusFailed: %@", exportError);
            errorView.text = exportError ?
                [exportError description] : @"Unknown failure";
            errorView.hidden = NO;
            break;
        }
        case AVAssetExportSessionStatusCompleted: {
            NSLog (@"AVAssetExportSessionStatusCompleted");
            fileNameLabel.text = [exporter.outputURL lastPathComponent];
            // set up AVPlayer
            [self setUpAVPlayerForURL: exporter.outputURL];
            [self enablePCMConversionIfCoreAudioCanOpenURL: exporter.outputURL];
            break;
        }
        case AVAssetExportSessionStatusUnknown: {
            NSLog (@"AVAssetExportSessionStatusUnknown"); break;
        }
        case AVAssetExportSessionStatusExporting: {
            NSLog (@"AVAssetExportSessionStatusExporting"); break;
        }
        case AVAssetExportSessionStatusCancelled: {
            NSLog (@"AVAssetExportSessionStatusCancelled"); break;
        }
        case AVAssetExportSessionStatusWaiting: {
            NSLog (@"AVAssetExportSessionStatusWaiting"); break;
        }
        default: { NSLog (@"didn't get export status"); break; }
    }
}];

This kicks off the export, passing in a block with code to handle all the possible callbacks. The completion handler doesn't have to take any arguments (nor do we have to set up a "user info" object for the exporter to pass to it), since a block captures everything in the local scope where it's defined. That means the exporter doesn't need to be passed in as a parameter: it's a local variable that the block can access directly, and whose state it can inspect via ordinary property calls.

The two statuses I handle in my block are AVAssetExportSessionStatusFailed, which dumps the error to a previously-invisible text view, and AVAssetExportSessionStatusCompleted, which sets up an AVPlayer to play the exported audio, which we'll get to later.

After starting the export, my code runs an NSTimer to fill a UIProgressView. Since the exporter has a progress property that returns a float, it's pretty straightforward… check the code if you haven't already done this a bunch of times. Files that were already AAC export almost immediately, while MP3s and Apple Lossless (ALAC) take a minute or more. Files in the old .m4p format, from back when the iTunes Store put DRM on all the songs, fail with an error, as seen below.
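
For reference, the timer-driven update can be as simple as this sketch, assuming exporter is held in an instance variable and progressView is the UIProgressView (updateExportProgress: is just my hypothetical name for the callback):

// kicked off right after exportAsynchronouslyWithCompletionHandler:
[NSTimer scheduledTimerWithTimeInterval: 0.1
                                 target: self
                               selector: @selector (updateExportProgress:)
                               userInfo: nil
                                repeats: YES];

- (void) updateExportProgress: (NSTimer*) timer {
    // progress is a float from 0.0 to 1.0, same as UIProgressView wants
    progressView.progress = exporter.progress;
    if (exporter.status != AVAssetExportSessionStatusExporting) {
        [timer invalidate];
    }
}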

 

The Invasion of Time

Kind of as a lark, I added a little GUI to let you play the exported file. AVPlayer was the obvious choice for this, since it should be able to play whatever kind of file you export (.m4a, .mov, whatever).

This brings up the whole issue of how to deal with the representation of time in AV Foundation, which turns out to be great for everyone who ever used the old C QuickTime API (or possibly QuickTime for Java), and all kinds of hell for everyone else.

AV Foundation uses Core Media’s CMTime struct for representing time. In turn, CMTime uses QuickTime’s brilliant but tricky concept of time scales. The idea, in a nutshell, is that your units of measurement for any particular piece of media are variable: pick one that suits the media’s own timing needs. For example, CD audio is 44.1 KHz, so it makes sense to measure time in 1/44100 second intervals. In a CMTime, you’d set the timescale to 44100, and then a given value would represent some number of these units: a single sample would have a value of 1 and would represent 1/44100 of a second, exactly as desired.

I find it’s easier to think of Core Media (and QuickTime) timescales as representing “nths of a second”. One of the clever things you can do is to choose a timescale that suits a lot of different kinds of media. In QuickTime, the default timescale is 600, as this is a common multiple of many important frame-rates: 24 fps for film, 25 fps for PAL (European) TV, 30 fps for NTSC (North America and Japan) TV, etc. Any number of frames in these systems can be evenly and exactly represented with a combination of value and timescale.
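
To make the timescale idea concrete, here are a few illustrative values (my own examples, not from the original post):

// one second, expressed in QuickTime's default timescale of 600:
CMTime oneSecond = CMTimeMake (600, 600);
// three frames of 24 fps film: each frame is 600/24 = 25 units
CMTime threeFilmFrames = CMTimeMake (75, 600);
// a single sample of CD audio, with a timescale of 44100:
CMTime oneSample = CMTimeMake (1, 44100);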

Where it gets tricky is when you need to work with values measured in different timescales. This comes up in AV Foundation, as your player may use a different timescale than the items it’s playing. It’s pretty easy to write out the current time label:

CMTime currentTime = player.currentTime;
UInt64 currentTimeSec = currentTime.value / currentTime.timescale;
UInt32 minutes = currentTimeSec / 60;
UInt32 seconds = currentTimeSec % 60;
playbackTimeLabel.text = [NSString stringWithFormat:
                          @"%02d:%02d", minutes, seconds];

But it’s hard to update the slider position, since the AVPlayer and the AVPlayerItem it’s playing can (and do) use different time scales. Enjoy the math.

if (player && !userIsScrubbing) {
    CMTime endTime = CMTimeConvertScale (player.currentItem.asset.duration,
                                         currentTime.timescale,
                                         kCMTimeRoundingMethod_RoundHalfAwayFromZero);
    if (endTime.value != 0) {
        double slideTime = (double) currentTime.value /
                           (double) endTime.value;
        playbackSlider.value = slideTime;
    }
}

Basically, the key here is that I need to get the duration of the item being played, but to express that in the time scale of the player, so I can do math on them. That gets done with the CMTimeConvertScale() call. Looks simple here, but if you don't know that you might need to do a timescale-conversion, your math will be screwy for all sorts of reasons that do not make sense.

Oh, you can drag the slider too, which means doing the same math in reverse.

-(IBAction) handleSliderValueChanged {
    CMTime seekTime = player.currentItem.asset.duration;
    seekTime.value = seekTime.value * playbackSlider.value;
    seekTime = CMTimeConvertScale (seekTime,
                                   player.currentTime.timescale,
                                   kCMTimeRoundingMethod_RoundHalfAwayFromZero);
    [player seekToTime:seekTime];
}

One other fun thing about all this that I just remembered from looking through my code: the time label and slider updates are called from an NSTimer. I set up the AVPlayer in the completion handler block that's called by the exporter. This call seems not to be on the main thread, as my update timer didn't work until I forced its creation over to the main thread with performSelectorOnMainThread:withObject:waitUntilDone:. Good times.
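
In code, that forcing looks something like this sketch, where setUpPlaybackViewTimer stands in for whatever method actually creates the timer (the name is mine, not the project's):

// inside the export completion handler, which may not run on the main thread;
// NSTimer scheduling only works on a thread with a running run loop
[self performSelectorOnMainThread: @selector (setUpPlaybackViewTimer)
                       withObject: nil
                    waitUntilDone: NO];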

 

Final Steps

Granted, all this AVPlayer stuff is a distraction. The original goal was to get from iPod Music Library to decompressed PCM samples. We used an AVAssetExportSession to produce an .m4a file in our app’s Documents directory, something that Core Audio should be able to open. The remaining conversion is a straightforward use of CA’s Extended Audio File Services: we open an ExtAudioFileRef on the input .m4a, set a “client format” property representing the PCM format we want it to convert to, read data into a buffer, and write that data back out to a plain AudioFileID. It’s C, so the code is long, but hopefully not too hard on the eyes:

-(IBAction) handleConvertToPCMTapped {
    NSLog (@"handleConvertToPCMTapped");

    // open an ExtAudioFile
    NSLog (@"opening %@", exportURL);
    ExtAudioFileRef inputFile;
    CheckResult (ExtAudioFileOpenURL((CFURLRef)exportURL, &inputFile),
                 "ExtAudioFileOpenURL failed");

    // prepare to convert to a plain ol' PCM format
    AudioStreamBasicDescription myPCMFormat;
    myPCMFormat.mSampleRate = 44100; // todo: or use source rate?
    myPCMFormat.mFormatID = kAudioFormatLinearPCM;
    myPCMFormat.mFormatFlags = kAudioFormatFlagsCanonical;
    myPCMFormat.mChannelsPerFrame = 2;
    myPCMFormat.mFramesPerPacket = 1;
    myPCMFormat.mBitsPerChannel = 16;
    myPCMFormat.mBytesPerPacket = 4;
    myPCMFormat.mBytesPerFrame = 4;

    CheckResult (ExtAudioFileSetProperty(inputFile,
                                         kExtAudioFileProperty_ClientDataFormat,
                                         sizeof (myPCMFormat), &myPCMFormat),
                 "ExtAudioFileSetProperty failed");

    // allocate a big buffer. size can be arbitrary for ExtAudioFile.
    // you have 64 KB to spare, right?
    UInt32 outputBufferSize = 0x10000;
    void* ioBuf = malloc (outputBufferSize);
    UInt32 sizePerPacket = myPCMFormat.mBytesPerPacket;
    UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;

    // set up output file
    NSString *outputPath = [myDocumentsDirectory()
                            stringByAppendingPathComponent:@"export-pcm.caf"];
    NSURL *outputURL = [NSURL fileURLWithPath:outputPath];
    NSLog (@"creating output file %@", outputURL);
    AudioFileID outputFile;
    CheckResult(AudioFileCreateWithURL((CFURLRef)outputURL,
                                       kAudioFileCAFType,
                                       &myPCMFormat,
                                       kAudioFileFlags_EraseFile,
                                       &outputFile),
                "AudioFileCreateWithURL failed");

    // start convertin'
    UInt32 outputFilePacketPosition = 0; // in bytes
    while (true) {
        // wrap the destination buffer in an AudioBufferList
        AudioBufferList convertedData;
        convertedData.mNumberBuffers = 1;
        convertedData.mBuffers[0].mNumberChannels = myPCMFormat.mChannelsPerFrame;
        convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
        convertedData.mBuffers[0].mData = ioBuf;

        UInt32 frameCount = packetsPerBuffer;

        // read from the extaudiofile
        CheckResult (ExtAudioFileRead(inputFile,
                                      &frameCount,
                                      &convertedData),
                     "Couldn't read from input file");
        if (frameCount == 0) {
            printf ("done reading from file");
            break;
        }

        // write the converted data to the output file
        CheckResult (AudioFileWritePackets(outputFile,
                                           false,
                                           frameCount,
                                           NULL,
                                           outputFilePacketPosition / myPCMFormat.mBytesPerPacket,
                                           &frameCount,
                                           convertedData.mBuffers[0].mData),
                     "Couldn't write packets to file");
        NSLog (@"Converted %ld bytes", outputFilePacketPosition);

        // advance the output file write location
        outputFilePacketPosition += (frameCount * myPCMFormat.mBytesPerPacket);
    }

    // clean up
    ExtAudioFileDispose(inputFile);
    AudioFileClose(outputFile);

    // GUI update omitted
}

Note that this uses a CheckResult() convenience function that Kevin Avila wrote for our upcoming Core Audio book… it just looks to see if the return value is noErr and tries to convert it to a readable four-char-code if it seems amenable. It's in the example file too.
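
If you don't want to dig it out of the download, here's a plausible sketch of that kind of helper (my assumption of the shape, not the book's exact code):

#include <ctype.h>  // for isprint()

static void CheckResult (OSStatus error, const char *operation) {
    if (error == noErr) return;
    char errorString[20];
    // see if the error value looks like a four-char-code
    *(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig (error);
    if (isprint (errorString[1]) && isprint (errorString[2]) &&
        isprint (errorString[3]) && isprint (errorString[4])) {
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    } else {
        // nope, just format it as an integer
        sprintf (errorString, "%d", (int) error);
    }
    fprintf (stderr, "Error: %s (%s)\n", operation, errorString);
    exit (1); // drastic; a shipping app would want to recover more gracefully
}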

 

Is It Soup Yet?

Does all this work? Rather than inspecting the AudioStreamBasicDescription of the resulting file, let's do something more concrete. With Xcode's "Organizer", you can access your app's sandbox on the device, so we can just drag the Application Data to the Desktop.

In the resulting folder, open the Documents folder to find export-pcm.caf. Drag it to QuickTime Player to verify that you do, indeed, have PCM data:

So there you have it. In several hundred lines of code, we're able to get a song from the iPod Music Library, export it into our app's Documents directory, and convert it to PCM. With the raw samples, you could now draw an audio waveform view (something you'd think would be essential for video editors who want to match video to beats in the music, but Apple seems dead-set against letting us do that with AV Foundation or QTKit), you could perform analysis or effects on the audio, you could bring it into a Core Audio AUGraph and mix it with other sources… all sorts of possibilities open up.

Clearly, it could be a lot easier. It's a ton of code, and two file exports (library to .m4a, and .m4a to .caf), when some apps might be perfectly happy to read from the source URL itself and never write to the filesystem… if only they could. Having spent the morning writing this blog, I may well spend the afternoon filing feature requests on bugreport.apple.com. I'll update this blog with OpenRadar numbers for the following requests:

  • Allow Core Audio to open URLs provided by the Media Player framework's MPMediaItemPropertyAssetURL
  • AV Foundation should allow passthrough export of Media Library items
  • AV Foundation export needs finer-grained control than just presets
  • Provide sample-level access for AVAsset

Still, while I’m bitching and whining, it is remarkable that iOS 4 opens up non-DRM’ed items in the iPod library for export. I never thought that would happen. Furthermore, the breadth and depth of the iOS media APIs remain astonishing. Sometimes terrifying, perhaps, but compared to the facile and trite media APIs that the other guys and girls get, we’re light-years ahead on iOS.

Have fun with this stuff!

Update: This got easier in iOS 4.1. Please forget everything you’ve read here and go read From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary instead.

24 Comments

  • 1. cadamson replies at 20th July 2010 um 8:38 am :

    Art Gillespie posts to the coreaudio-api list that he's taken this idea further and hacked out a way to get passthrough to work, thereby skipping the AV Foundation re-encoding. Project is TSLibraryImport and details are on his blog. Also, in iOS 4.1, you can just [redacted by Apple ninja lawyers].

  • 2. Guit replies at 22nd July 2010 um 1:23 pm :

    Hi,
    I was trying to use AVAssetExportSession and I found your blog.
    I tried your sample, and I always get AVAssetExportSessionStatusFailed with my whole library.
    Which version of iOS are you using?

  • 3. cadamson replies at 22nd July 2010 um 2:15 pm :

    Guit: I’m running iOS 4.0.1. Source items from the library were in AAC, MP3, and ALAC. The ALAC was re-sampled to 44.1 because the original was 88.2, which exceeds the iPhone’s maximum sample rate.

  • 4. Guit replies at 23rd July 2010 um 1:36 am :

    Ok, I will try to update my phone from 4.0 to 4.0.1. None of my files are OK, even MP3.

  • 5. gatormha replies at 3rd August 2010 um 9:46 pm :

    I’m having a similar problem to Guit’s, except I can play media other than mp3. My mp4 files export fine and play, but mp3 files all give me a failure message -11820 “Cannot Complete Export”.

    I’m running iOS 4.0.1.

    Guit, were you ever able to get up and running?

  • 6. [Time code];… replies at 30th August 2010 um 1:50 pm :

    [...] Philip, being a Mac guy and not an iOS guy, blogged that he was surprised my presentation wasn’t an NDA violation. Actually, AV Foundation has been around since 2.2, but only became a document-based audio/video editing framework in iOS 4. The only thing that’s NDA is what’s in iOS 4.1 (good stuff, BTW… hope we see it Wednesday, even though I might have to race out some code and a blog entry to revise this beastly entry). [...]

  • 7. [Time code];… replies at 9th September 2010 um 8:01 am :

    [...] possible before (can you say “ScreenFlow for iOS”?), as well as simplifying things like my music library PCM converter. I’m doing the talk on AV Foundation, and you can count on these new classes being [...]

  • 8. Another84 replies at 24th September 2010 um 7:59 am :

    Hi guys! I have the same issue – error 11820.
    I always get AVAssetExportSessionStatusFailed with my whole library :(
    Is it possible to export .mp3?

    Anyway thanks for this great post!

  • 9. cadamson replies at 24th September 2010 um 8:29 am :

    Another84: None of the iOS frameworks (AV Foundation, Core Audio, etc.) support encoding of .mp3, or export to .mp3, probably because it’s a much more expensive license. You could find code to do an MP3 export (probably from ffmpeg or LAME?), but you’d still be legally obligated to pay an MP3 license fee.

  • 10. Another84 replies at 24th September 2010 um 8:46 am :

    cadamson, thanks a lot for reply!!!

  • 11. Another84 replies at 24th September 2010 um 9:06 am :

    But! How it’s possible for this app http://www.youtube.com/watch?v=8pVGqx63-i0 ?

  • 12. cadamson replies at 24th September 2010 um 9:19 am :

    Another84: No, it’s not. When he uses iTunes to list the files created by Ringtones, notice that the file it created is an “.m4r”. That’s just another form of AAC (which is richly supported by AV Foundation and Core Audio), not MP3.

  • 13. Another84 replies at 24th September 2010 um 10:28 am :

    Yes! But how can I create this .m4r from .mp3?
    My AVAssetExportSession always returns AVAssetExportSessionStatusFailed :(

    I tried to use AVAssetExportSession with timeRange and audioMix to trim part of the initial .mp3 (from the iPod library). But it doesn't work :(

    What do you think about using AVAssetReader and AVAssetWriter for this purpose?

  • 14. cadamson replies at 24th September 2010 um 10:32 am :

    Another84: I’m planning to cover AVAssetReader / AVAssetWriter in an upcoming blog. They should make this *much* easier.

    I was able to export MP3 to M4A with this technique just fine, though it was much slower than using M4A source files (which make up most of my music library). Same goes for starting with ALAC songs. Have you tried starting with M4As?

  • 15. Another84 replies at 24th September 2010 um 11:02 am :

    > I was able to export MP3 to M4A with this technique just fine

    “this technique” – what do you mean? AVAssetExportSession technique or AVAssetReader / AVAssetWriter technique ?

    > Have you tried starting with M4As?
    Not yet

  • 16. Another84 replies at 24th September 2010 um 5:12 pm :

    > Have you tried starting with M4As?
    Yes, it works fine with M4A. But I have to work with MP3 too.

  • 17. Another84 replies at 26th September 2010 um 12:19 pm :

    Is it possible to make a fade-in and fade-out effect this way:

    ...

    CMTime startFadeInTime = startTrimTime;
    CMTime endFadeInTime = CMTimeMakeWithSeconds(startTime + 5.0, 1);
    CMTime startFadeOutTime = CMTimeMakeWithSeconds(endTime - 5.0, 1);
    CMTime endFadeOutTime = endTrimTime;

    AVMutableAudioMix *exportAudioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *exportAudioMixInputParameters = [AVMutableAudioMixInputParameters audioMixInputParameters];
    [exportAudioMixInputParameters setVolume:0.0 atTime:startFadeInTime];
    [exportAudioMixInputParameters setVolume:1.0 atTime:endFadeInTime];
    [exportAudioMixInputParameters setVolume:1.0 atTime:startFadeOutTime];
    [exportAudioMixInputParameters setVolume:0.0 atTime:endFadeOutTime];
    exportSession.audioMix = exportAudioMix;

    ...

    It seems it doesn't work.

  • 18. openid.daum.net/kci replies at 9th October 2010 um 2:43 am :

    @Another84 Just use audioMixInputParametersWithTrack instead of audioMixInputParameters; then you may solve the problem.

  • 19. Another84 replies at 14th October 2010 um 7:37 pm :


    NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
    AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];

    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc]
    initWithAsset:songAsset
    presetName:AVAssetExportPresetPassthrough];

    NSArray *tracks = [songAsset tracksWithMediaType:AVMediaTypeAudio];
    AVAssetTrack *track = [tracks objectAtIndex:0];
    id desc = [track.formatDescriptions objectAtIndex:0];
    const AudioStreamBasicDescription *audioDesc = CMAudioFormatDescriptionGetStreamBasicDescription((CMAudioFormatDescriptionRef)desc);
    FourCharCode formatID = audioDesc->mFormatID;

    // trim
    CMTime startTrimTime = CMTimeMakeWithSeconds(startTime, 1);
    CMTime endTrimTime = CMTimeMakeWithSeconds(endTime, 1);
    CMTimeRange exportTimeRange = CMTimeRangeFromTimeToTime(startTrimTime, endTrimTime);
    exportSession.timeRange = exportTimeRange;

    //fade in, fade out
    CMTime startFadeInTime = startTrimTime;
    CMTime endFadeInTime = CMTimeMakeWithSeconds(startTime + 4.0, 1);

    CMTimeRange fadeInTimeRange = CMTimeRangeFromTimeToTime(startFadeInTime, endFadeInTime);

    CMTime startFadeOutTime = CMTimeMakeWithSeconds(endTime - 4.0, 1);
    CMTime endFadeOutTime = endTrimTime;
    CMTimeRange fadeOutTimeRange = CMTimeRangeFromTimeToTime(startFadeOutTime, endFadeOutTime);

    AVMutableAudioMix *exportAudioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *exportAudioMixInputParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:track];

    [exportAudioMixInputParameters setVolumeRampFromStartVolume:0.0 toEndVolume:1.0 timeRange:fadeInTimeRange];
    [exportAudioMixInputParameters setVolumeRampFromStartVolume:1.0 toEndVolume:0.0 timeRange:fadeOutTimeRange];

    exportAudioMix.inputParameters = [NSArray arrayWithObject:exportAudioMixInputParameters];

    exportSession.audioMix = exportAudioMix;

     

    Anyway it doesn’t work (((( (Trimming works fine!)
    Please help!

  • 20. brian replies at 20th October 2010 um 2:05 pm :

    @Another84 Thanks for that code! I was trying everything from Audio Units to creating algorithms in custom buffers. It was a nightmare. I am taking a file from the library and writing it to my doc directory. I don't know if this helps what you are trying to accomplish, but I have the fade working by adding this to your code….


    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc]
    initWithAsset:songAsset
    presetName: AVAssetExportPresetAppleM4A];
    NSLog (@"created exporter. supportedFileTypes: %@", exportSession.supportedFileTypes);
    exportSession.outputFileType = @"com.apple.m4a-audio";

    NSString *myFileName = [NSString stringWithFormat:@"%@.m4a", fileNameCompleted];
    NSString *exportFile = [myDocumentsDirectory() stringByAppendingPathComponent:myFileName];

    then of course beneath it


    myDeleteFile(exportFile);

    [exportURL release];
    exportURL = [[NSURL fileURLWithPath:exportFile] retain];
    exportSession.outputURL = exportURL;

    Hope this helps. Thanks again for the snippet.

  • 21. Another84 replies at 26th October 2010 um 6:51 pm :

    > I have the fade working with by adding this to your code….

    Really? Please!!!! Post your code in full! Mine doesn't work anyway :(
    Thanks a lot for your reply!

  • 22. [Time code];… replies at 13th December 2010 um 9:42 am :

    [...] a July blog entry, I showed a gruesome technique for getting raw PCM samples of audio from your iPod library, by [...]

  • 23. google.com/profiles/b.… replies at 28th February 2011 um 6:18 pm :

    Thanks for sharing!

    I do wonder: is this still the only way to access sample data from the library when using iOS 4.2 ?

  • 24. cadamson replies at 28th February 2011 um 7:07 pm :

    B.stolk: It got better in 4.1: http://www.subfurther.com/blog/2010/12/13/from-ipod-library-to-pcm-samples-in-far-fewer-steps-than-were-previously-necessary/
