Investigation report on MSSQL data read and write performance
Source: Internet; Editor: 程序博客网; Date: 2024/06/05 15:02
MSSQL 2008
CPU: Xeon E5 2.0 GHz (12 cores) × 2
RAM: 128 GB
case 1:
declare @i int = 1
while @i < 1000000
begin
    insert into T1 (column1, column2, column3, ... column34)
    values (@i, getdate(), rand() * getdate(), ......)
    set @i = @i + 1
end
Test result: 1,000,000 rows; time: 00:04:28; table size: 503 MB
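The loop above pays per-statement overhead a million times, which is why it is the slowest pattern tested. As a rough, purely illustrative sketch of that overhead (using Python's sqlite3 as a stand-in, not MSSQL; table name `t1` is hypothetical), sending the same rows through one batched call is far faster than one statement per row:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("create table t1 (id integer, val real)")

n = 100_000  # smaller than the 1,000,000-row test, same pattern

# Row-by-row, as in case 1: one INSERT statement per iteration
t0 = time.perf_counter()
for i in range(n):
    conn.execute("insert into t1 values (?, ?)", (i, i * 0.5))
conn.commit()
row_by_row = time.perf_counter() - t0

# Same rows, one batched call instead of n separate statements
conn.execute("delete from t1")
t0 = time.perf_counter()
conn.executemany("insert into t1 values (?, ?)",
                 ((i, i * 0.5) for i in range(n)))
conn.commit()
batched = time.perf_counter() - t0

count = conn.execute("select count(*) from t1").fetchone()[0]
print(count)                              # 100000
print(round(row_by_row / batched, 1))     # speedup factor of batching
```

The exact speedup depends on the machine, but the direction matches case 1 vs. case 2: set-based loading beats row-by-row loops.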
case 2:
insert into T1 select * from T1
Test result: 1,000,000 rows inserted into T1 (1,000,000 rows already present); time: 00:00:51
case 3:
select * into NewTableName from T1
Test result: 2,000,000 rows inserted into NewTableName (new table); time: 00:00:22
insert into TableName select * from T1
Test result: 2,000,000 rows inserted into TableName (2,000,000 rows already present); time: 00:01:00
(Note: SELECT ... INTO requires that the target table not already exist, so the second test uses INSERT ... SELECT.)
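In throughput terms, the case 3 timings work out as follows (a simple arithmetic check on the numbers above):

```python
rows = 2_000_000

select_into_secs = 22   # select * into a new table (case 3, first test)
insert_secs = 60        # inserting into the existing table (case 3, second test)

print(rows // select_into_secs)                  # 90909 rows/sec
print(rows // insert_secs)                       # 33333 rows/sec
print(round(insert_secs / select_into_secs, 1))  # 2.7x faster into a new table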
case 4:
Tmp(20,000,000 rows) Target ( 20,000,000 rows)
Open 10 threads run this sql in same time
insert into Target select * from Tmp
Test result : 2,000,000X10 rows insert to Target(20,000,000 rows exist) time: 04:29:00
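Working out the aggregate rate shows how badly the ten concurrent writers contended on the single target table, even compared with one single-threaded insert...select (case 2):

```python
total_rows = 2_000_000 * 10    # ten threads, per the test result above
secs = 4 * 3600 + 29 * 60      # 04:29:00 -> 16140 seconds
print(total_rows // secs)      # 1239 rows/sec aggregate

# Case 2 for comparison: one insert...select of 1,000,000 rows in 51 s
print(1_000_000 // 51)         # 19607 rows/sec
```

Ten writers together moved data at well under a tenth of the single-writer rate.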
case 5:
cmd>bcp T1 out t1.dat -f bcp.Xml -T -d database
Test result: 20,000,000 rows exported to t1.dat; time: 00:00:20
cmd>bcp Target in t1.dat -f bcp.Xml -T -d database
Test result: 20,000,000 rows imported from t1.dat into Target (500,000,000 rows already present); time: 00:50:01
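The asymmetry in case 5 is striking when converted to rates (export is a sequential read; import must write data pages and maintain the large existing table):

```python
rows = 20_000_000
export_secs = 20             # bcp out: 00:00:20
import_secs = 50 * 60 + 1    # bcp in:  00:50:01, target already held 500M rows

print(rows // export_secs)   # 1000000 rows/sec exported
print(rows // import_secs)   # 6664 rows/sec imported
```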
case 6:
bcp import of 20,000,000 × 2 rows using 2 threads into the same empty table
cmd>bcp Target in t1.dat -f bcp.Xml -T -d database
Test result: thread 1 time: 00:08:35, speed: 38,761 rows/sec; thread 2 time: 00:07:37, speed: 48,598 rows/sec; combined speed: 87,359 rows/sec
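Unlike case 4, the two bcp threads here scale additively into an empty table; the combined figure is simply the sum of the per-thread rates:

```python
# Case 6: per-thread bcp rates into the same empty table
thread1 = 38761   # rows/sec
thread2 = 48598   # rows/sec
print(thread1 + thread2)   # 87359 rows/sec combined, as reported
```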
case 7:
bcp export of 639,160,000 rows to a file
cmd>bcp Target out target.dat -f bcp.Xml -T -d database
Test result: time: 01:24:17; speed: 126,380 rows/sec
case 8:
bcp import of 639,160,000 rows into the target table
cmd>bcp Target in target.dat -f bcp.Xml -T -d database -b 5000
Test result: very slow; canceled. So it is better not to use the -b (batch size) option: it splits the job into many separate transactions, whereas without -b the whole load runs as a single transaction.
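The cost of small batches can be illustrated with a tiny stand-in (Python's sqlite3 on a file-backed database, not bcp; purely to show per-transaction commit overhead, with the batch size playing the role of -b):

```python
import os
import sqlite3
import tempfile
import time

def load(db_path, batch_rows, n=5_000):
    """Insert n rows, committing every batch_rows rows (None = one commit)."""
    if os.path.exists(db_path):
        os.remove(db_path)
    conn = sqlite3.connect(db_path)
    conn.isolation_level = None          # manage transactions by hand
    conn.execute("create table t (id integer)")
    t0 = time.perf_counter()
    conn.execute("begin")
    for i in range(n):
        conn.execute("insert into t values (?)", (i,))
        if batch_rows and (i + 1) % batch_rows == 0:
            conn.execute("commit")       # like -b: end the batch durably
            conn.execute("begin")
    conn.execute("commit")
    count = conn.execute("select count(*) from t").fetchone()[0]
    conn.close()
    return count, time.perf_counter() - t0

with tempfile.TemporaryDirectory() as d:
    db = os.path.join(d, "t.db")
    count_batched, t_batched = load(db, batch_rows=50)   # many small commits
    count_single, t_single = load(db, batch_rows=None)   # one big commit

print(count_batched, count_single)   # 5000 5000
```

Each commit forces a durable log flush, so committing every 50 rows is typically far slower than one commit at the end; the same mechanism is what made the -b 5000 run above so slow. (The flip side, not exercised in this report, is that a single giant transaction needs enough log space and loses all progress on failure.)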
Use bcp to import 639,160,000 rows into the target table (over 700,000,000 rows already present):
cmd>bcp Target in target.dat -f bcp.Xml -T -d database -h "TABLOCK"
Test result: time: 12:50:35; speed: 13,824 rows/sec
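The reported speed checks out against the elapsed time:

```python
rows = 639_160_000
secs = 12 * 3600 + 50 * 60 + 35   # 12:50:35 -> 46235 seconds
print(rows // secs)               # 13824 rows/sec, matching the reported speed
```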
case 9:
Use the BULK INSERT statement to import 20,000,000 rows into the target table (640,000,000 rows already present):
bulk insert Target from 'tmp.dat' with (formatfile = 'bcp.Xml', tablock)
Test result: time: 01:33:06 (compare with case 5)
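Putting the case 5 and case 9 numbers side by side (a rough comparison only, since the target had grown from 500M to 640M existing rows between the two tests):

```python
rows = 20_000_000
bulk_insert_secs = 1 * 3600 + 33 * 60 + 6   # 01:33:06 (case 9, BULK INSERT)
bcp_secs = 50 * 60 + 1                      # 00:50:01 (case 5, bcp in)

print(rows // bulk_insert_secs)   # 3580 rows/sec
print(rows // bcp_secs)           # 6664 rows/sec
```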
Does anyone have a faster approach to share?