hadoop2.2.0 core-site.xml--i/o properties
<!-- i/o properties -->
<property>
  <name>io.file.buffer.size</name>
  <value>4096</value>
  <description>The size of buffer for use in sequence files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations.</description>
</property>
Note: the buffer size used when reading and writing sequence files.
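As a small illustration of the "multiple of hardware page size" advice in the description, a sketch of an alignment check (the 4096-byte page size is the x86 figure quoted above; the larger sample value is only illustrative):

```python
HW_PAGE_SIZE = 4096  # hardware page size on Intel x86, per the description above

def is_page_aligned(buffer_size, page_size=HW_PAGE_SIZE):
    """Return True if the buffer size is a positive multiple of the page size."""
    return buffer_size > 0 and buffer_size % page_size == 0

# is_page_aligned(4096)   -> True  (the default value)
# is_page_aligned(5000)   -> False (not page-aligned)
```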
<property>
  <name>io.bytes.per.checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum. Must not be larger than io.file.buffer.size.</description>
</property>
Note: the number of data bytes covered by each checksum; must not exceed io.file.buffer.size.
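The constraint between the two properties above can be checked by parsing the configuration file directly; a minimal sketch using an inline sample document (the values are the defaults quoted above, not read from a real cluster):

```python
import xml.etree.ElementTree as ET

# Inline sample standing in for a real core-site.xml (illustrative only).
CORE_SITE = """<configuration>
  <property><name>io.file.buffer.size</name><value>4096</value></property>
  <property><name>io.bytes.per.checksum</name><value>512</value></property>
</configuration>"""

def load_props(xml_text):
    """Collect <name>/<value> pairs from a Hadoop-style configuration document."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.findall("property")}

props = load_props(CORE_SITE)
# io.bytes.per.checksum must not be larger than io.file.buffer.size.
checksum_ok = int(props["io.bytes.per.checksum"]) <= int(props["io.file.buffer.size"])
```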
<property>
  <name>io.skip.checksum.errors</name>
  <value>false</value>
  <description>If true, when a checksum error is encountered while reading a sequence file, entries are skipped, instead of throwing an exception.</description>
</property>
Note: if true, entries with checksum errors are skipped when reading a sequence file instead of raising an exception.
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec</value>
  <description>A list of the compression codec classes that can be used for compression/decompression.</description>
</property>
Note: the comma-separated list of compression codec classes available for compression/decompression.
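The value is a plain comma-separated list of fully qualified class names; a sketch of how such a list splits into individual codec classes (truncated to three entries for brevity, and not how Hadoop itself loads the classes):

```python
# Raw value in the io.compression.codecs style, truncated for brevity.
codecs_value = ("org.apache.hadoop.io.compress.DefaultCodec,"
                "org.apache.hadoop.io.compress.GzipCodec,"
                "org.apache.hadoop.io.compress.SnappyCodec")

# Split the comma-separated list and derive the short class names.
codec_classes = [c.strip() for c in codecs_value.split(",") if c.strip()]
short_names = [c.rsplit(".", 1)[-1] for c in codec_classes]
```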
<property>
  <name>io.serializations</name>
  <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
  <description>A list of serialization classes that can be used for obtaining serializers and deserializers.</description>
</property>
Note: the serialization classes used to obtain serializers and deserializers.
<property>
  <name>io.seqfile.local.dir</name>
  <value>${hadoop.tmp.dir}/io/local</value>
  <description>The local directory where sequence file stores intermediate data files during merge. May be a comma-separated list of directories on different devices in order to spread disk i/o. Directories that do not exist are ignored.</description>
</property>
Note: the local directory (or comma-separated list of directories on different devices, to spread disk I/O) where sequence files keep intermediate data during merges; nonexistent directories are ignored.
<property>
  <name>io.map.index.skip</name>
  <value>0</value>
  <description>Number of index entries to skip between each entry. Zero by default. Setting this to values larger than zero can facilitate opening large MapFiles using less memory.</description>
</property>
Note: the number of index entries to skip between each entry kept in memory; values above zero let large MapFiles be opened with less memory.
<property>
  <name>io.map.index.interval</name>
  <value>128</value>
  <description>MapFile consist of two files - data file (tuples) and index file (keys). For every io.map.index.interval records written in the data file, an entry (record-key, data-file-position) is written in the index file. This is to allow for doing binary search later within the index file to look up records by their keys and get their closest positions in the data file.</description>
</property>
Note: a MapFile consists of a data file (tuples) and an index file (keys); one (record-key, data-file-position) entry is written to the index for every io.map.index.interval records, so a binary search over the index can locate a key's closest position in the data file.
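The lookup mechanism the description refers to can be sketched as a binary search over in-memory (key, position) pairs. This is an illustration under simplifying assumptions (sorted integer keys, a made-up index built at the default interval of 128), not Hadoop's actual MapFile implementation:

```python
import bisect

# Hypothetical in-memory MapFile index: one (record-key, data-file-position)
# entry per io.map.index.interval records written to the data file.
INDEX_INTERVAL = 128
index = [(k * INDEX_INTERVAL, k * 4096) for k in range(10)]

def closest_position(search_key):
    """Binary-search the index for the last entry whose key <= search_key;
    the returned position is where a scan of the data file would begin."""
    keys = [k for k, _ in index]
    i = bisect.bisect_right(keys, search_key) - 1
    return index[max(i, 0)][1]
```

With io.map.index.skip above zero, only every (skip+1)-th index entry would be kept in memory, trading a longer final scan in the data file for a smaller in-memory index.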