python pydub usage (2)
Source: Internet | Editor: 程序博客网 | Date: 2024/05/18 03:01
AudioSegment.silent()
Creates a silent audio segment.

from pydub import AudioSegment
ten_second_silence = AudioSegment.silent(duration=10000)

Parameters:
duration: length in milliseconds
frame_rate: sample rate (default 11025, i.e. 11.025 kHz)
AudioSegment.from_mono_audiosegments()
Combines two mono segments into one stereo segment.

from pydub import AudioSegment
left_channel = AudioSegment.from_wav("sound1.wav")
right_channel = AudioSegment.from_wav("sound2.wav")
stereo_sound = AudioSegment.from_mono_audiosegments(left_channel, right_channel)
AudioSegment(…).dBFS
Returns the loudness of the segment in dBFS (decibels relative to full scale).

from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
loudness = sound.dBFS
AudioSegment(…).channels
Returns the number of channels.

from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
channel_count = sound.channels
AudioSegment(…).sample_width
Returns the sample width in bytes.

from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
bytes_per_sample = sound.sample_width
AudioSegment(…).frame_rate
Returns the sample rate (frames per second).

from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
frames_per_second = sound.frame_rate
AudioSegment(…).frame_width
Returns the number of bytes per frame: frame_width = sample_width * channels.

from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
bytes_per_frame = sound.frame_width
AudioSegment(…).rms
Returns the root-mean-square loudness of the segment; this value is commonly used to compute decibels (dB = 20 × log10(x)).

from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
loudness = sound.rms
AudioSegment(…).max
Returns the peak amplitude of the segment. The related max_dBFS property expresses the peak in dBFS, which is handy for normalization:

from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
normalized_sound = sound.apply_gain(-sound.max_dBFS)
AudioSegment(…).duration_seconds
Returns the duration in seconds; equivalent to len() divided by 1000.

from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
assert sound.duration_seconds == (len(sound) / 1000.0)
AudioSegment(…).raw_data
Returns the raw audio data as bytes.

from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
raw_audio_data = sound.raw_data
AudioSegment(…).frame_count()
Returns the number of frames in the segment.

from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
number_of_frames_in_sound = sound.frame_count()
number_of_frames_in_200ms_of_sound = sound.frame_count(ms=200)

Parameters:
ms: if given, return the number of frames in that many milliseconds of audio instead
AudioSegment(…).append()
Concatenates sound1 and sound2, returning a new AudioSegment instance.

from pydub import AudioSegment
sound1 = AudioSegment.from_file("sound1.wav")
sound2 = AudioSegment.from_file("sound2.wav")
# default 100 ms crossfade
combined = sound1.append(sound2)
# 5000 ms crossfade
combined_with_5_sec_crossfade = sound1.append(sound2, crossfade=5000)
# no crossfade
no_crossfade1 = sound1.append(sound2, crossfade=0)
# no crossfade
no_crossfade2 = sound1 + sound2

Parameters:
crossfade: length of the crossfade between the two segments, in milliseconds
AudioSegment(…).overlay()
Overlays sound2 on top of sound1; the two signals are mixed together. If sound2 is longer than sound1, it is truncated.

from pydub import AudioSegment
sound1 = AudioSegment.from_file("sound1.wav")
sound2 = AudioSegment.from_file("sound2.wav")
played_together = sound1.overlay(sound2)
sound2_starts_after_delay = sound1.overlay(sound2, position=5000)
volume_of_sound1_reduced_during_overlay = sound1.overlay(sound2, gain_during_overlay=-8)
sound2_repeats_until_sound1_ends = sound1.overlay(sound2, loop=True)
sound2_plays_twice = sound1.overlay(sound2, times=2)
# assume sound1 is 30 sec long and sound2 is 5 sec long:
sound2_plays_a_lot = sound1.overlay(sound2, times=10000)
len(sound1) == len(sound2_plays_a_lot)

Parameters:
position: offset at which the overlay starts (milliseconds)
loop: whether to loop the overlaid segment until the base segment ends (True/False)
times: number of times to repeat the overlaid segment (default 1)
gain_during_overlay: gain applied to the base segment while the overlay plays (e.g. -6.0)
AudioSegment(…).apply_gain(gain)
Adjusts the volume by the given number of dB.

from pydub import AudioSegment
sound1 = AudioSegment.from_file("sound1.wav")
# make sound1 louder by 3.5 dB
louder_via_method = sound1.apply_gain(+3.5)
louder_via_operator = sound1 + 3.5
# make sound1 quieter by 5.7 dB
quieter_via_method = sound1.apply_gain(-5.7)
quieter_via_operator = sound1 - 5.7
AudioSegment(…).fade()
Applies a gradual gain change (fade) over part of the segment.

from pydub import AudioSegment
sound1 = AudioSegment.from_file("sound1.wav")
fade_louder_for_3_seconds_in_middle = sound1.fade(to_gain=+6.0, start=7500, duration=3000)
fade_quieter_between_2_and_3_seconds = sound1.fade(to_gain=-3.5, start=2000, end=3000)
# the easy way is to use the .fade_in()/.fade_out() convenience methods. note: -120 dB is essentially silence.
fade_in_the_hard_way = sound1.fade(from_gain=-120.0, start=0, duration=5000)
fade_out_the_hard_way = sound1.fade(to_gain=-120.0, end=float('inf'), duration=5000)

Parameters:
to_gain: gain at the end of the fade (dB)
from_gain: gain applied before the fade begins (dB)
start: position where the fade begins (milliseconds)
end: position where the fade ends (milliseconds)
duration: length of the fade (milliseconds)
AudioSegment(…).fade_out()
Fades the end of the segment out to silence.

Parameters:
duration: length of the fade (milliseconds)
AudioSegment(…).reverse()
Returns a copy of the segment that plays backwards.
AudioSegment(…).set_sample_width()
Returns a copy of the segment with a new sample width. Increasing the width is lossless; decreasing it loses precision.
AudioSegment(…).set_frame_rate()
Returns a copy of the segment with a new sample rate. Increasing the rate is lossless; decreasing it loses precision.
AudioSegment(…).set_channels()
Returns a copy of the segment with a new channel count. Converting mono to stereo loses no quality; converting stereo to mono loses information if the left and right channels differ.
AudioSegment(…).split_to_mono()
Splits a stereo segment into two mono segments: index [0] is the left channel, index [1] the right.
AudioSegment(…).apply_gain_stereo()
Adjusts the left and right channel volumes of a stereo segment. If called on a mono segment, it is converted to stereo first.

from pydub import AudioSegment
sound1 = AudioSegment.from_file("sound1.wav")
# make the left channel 6 dB quieter and the right channel 2 dB louder
stereo_balance_adjusted = sound1.apply_gain_stereo(-6, +2)
AudioSegment(…).pan()
Pans the sound between the left and right channels: boosts one side and attenuates the other by the given fraction.

from pydub import AudioSegment
sound1 = AudioSegment.from_file("sound1.wav")
# pan the sound 15% to the right
panned_right = sound1.pan(+0.15)
# pan the sound 50% to the left
panned_left = sound1.pan(-0.50)
AudioSegment(…).get_array_of_samples()
Returns the raw samples as an array. For a stereo file the samples are interleaved: [sample_1_L, sample_1_R, sample_2_L, sample_2_R, …].

from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
samples = sound.get_array_of_samples()
# then modify samples...
new_sound = sound._spawn(samples)

Converting a samples array back into audio:

import array
import numpy as np
from pydub import AudioSegment
sound = AudioSegment.from_file("sound1.wav")
samples = sound.get_array_of_samples()
shifted_samples = np.right_shift(samples, 1)
# now you have to convert back to an array.array
shifted_samples_array = array.array(sound.array_type, shifted_samples)
new_sound = sound._spawn(shifted_samples_array)
AudioSegment(…).get_dc_offset()
Returns the DC offset of one channel as a ratio: offset amplitude divided by the maximum possible amplitude. Most audio tools assume the waveform is centered on zero; a DC offset shifts that center and changes how effects behave.

Parameters:
channel: 1 for the left channel, 2 for the right; irrelevant for mono audio
AudioSegment(…).remove_dc_offset()
Removes DC offset. This method is based on audioop.bias() and may overflow.

Parameters:
channel: 1 (left), 2 (right), or None (all channels)
offset: offset to remove, as a fraction of the maximum amplitude, -1.0 to 1.0
AudioSegment(…).invert_phase()
Returns a copy with the signal inverted (a DSP-style phase inversion); useful for cancelling an out-of-phase wave or reducing noise.