CentOS6.9+Hadoop2.7.3+Hive1.2.1+Hbase1.3.1+Spark2.1.1
Building a big data learning environment (CentOS 6.9 + Hadoop 2.7.3 + Hive 1.2.1 + HBase 1.3.1 + Spark 2.1.1)
| Process | Component | www.ljt.cosa (192.168.1.11) | www.ljt.cos02 (192.168.1.12) | www.ljt.cos03 (192.168.1.13) | Notes |
|---|---|---|---|---|---|
| NameNode | Hadoop | Y | Y | | HA |
| DataNode | Hadoop | Y | Y | Y | |
| ResourceManager | Hadoop | Y | Y | | HA |
| NodeManager | Hadoop | Y | Y | Y | |
| JournalNode | Hadoop | Y | Y | Y | Odd number, at least 3 nodes |
| ZKFC (DFSZKFailoverController) | Hadoop | Y | Y | | Runs wherever a NameNode runs |
| QuorumPeerMain | ZooKeeper | Y | Y | Y | |
| MySQL | Hive | Y | | | Hive metastore database |
| Metastore (RunJar) | Hive | Y | | | |
| Hive (RunJar) | Hive | Y | | | |
| HMaster | HBase | Y | Y | | HA |
| HRegionServer | HBase | Y | Y | Y | |
| Spark (Master) | Spark | Y | Y | | HA |
| Spark (Worker) | Spark | Y | Y | Y | |
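All three hostnames must resolve on every node. Based on the IPs in the table above, a sketch of the matching /etc/hosts entries (to be added on all three machines):

```
192.168.1.11 www.ljt.cosa
192.168.1.12 www.ljt.cos02
192.168.1.13 www.ljt.cos03
```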
I previously built a setup with HDFS Federation, which needs at least four machines; it was too complex and my laptop could not handle it. To learn Spark 2.x I dropped Federation and simplified the environment, while keeping it fully distributed.
All software packages:
apache-ant-1.9.9-bin.tar.gz
apache-hive-1.2.1-bin.tar.gz
apache-maven-3.3.9-bin.tar.gz
apache-tomcat-6.0.44.tar.gz
CentOS-6.9-x86_64-minimal.iso
findbugs-3.0.1.tar.gz
hadoop-2.7.3-src.tar.gz
hadoop-2.7.3.tar.gz
hadoop-2.7.3.tar.gz (self-compiled for CentOS 6.9)
hbase-1.3.1-bin.tar.gz (self-compiled)
hbase-1.3.1-src.tar.gz
jdk-8u121-linux-x64.tar.gz
mysql-connector-java-5.6-bin.jar
protobuf-2.5.0.tar.gz
scala-2.11.11.tgz
snappy-1.1.3.tar.gz
spark-2.1.1-bin-hadoop2.7.tgz
Disable the firewall
[root@www.ljt.cosa ~]# service iptables stop
[root@www.ljt.cosa ~]# chkconfig iptables off
ZooKeeper
[root@www.ljt.cosa ~]# wget -O /root/zookeeper-3.4.9.tar.gz https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz
[root@www.ljt.cosa ~]# tar -zxvf /root/zookeeper-3.4.9.tar.gz -C /root
[root@www.ljt.cosa ~]# cp /root/zookeeper-3.4.9/conf/zoo_sample.cfg /root/zookeeper-3.4.9/conf/zoo.cfg
[root@www.ljt.cosa ~]# vi /root/zookeeper-3.4.9/conf/zoo.cfg
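The zoo.cfg contents are not shown above; a minimal sketch for this three-node ensemble (dataDir pointing at the zkData directory created below, default ports assumed):

```properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zookeeper-3.4.9/zkData
clientPort=2181
server.1=www.ljt.cosa:2888:3888
server.2=www.ljt.cos02:2888:3888
server.3=www.ljt.cos03:2888:3888
```

The server IDs 1, 2, 3 must match the myid files written on each node below.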
[root@www.ljt.cosa ~]# vi /root/zookeeper-3.4.9/bin/zkEnv.sh
[root@www.ljt.cosa ~]# mkdir /root/zookeeper-3.4.9/logs
[root@www.ljt.cosa ~]# vi /root/zookeeper-3.4.9/conf/log4j.properties
[root@www.ljt.cosa ~]# mkdir /root/zookeeper-3.4.9/zkData
[root@www.ljt.cosa ~]# scp -r /root/zookeeper-3.4.9 www.ljt.cos02:/root
[root@www.ljt.cosa ~]# scp -r /root/zookeeper-3.4.9 www.ljt.cos03:/root
[root@www.ljt.cosa ~]# touch /root/zookeeper-3.4.9/zkData/myid
[root@www.ljt.cosa ~]# echo 1 > /root/zookeeper-3.4.9/zkData/myid
[root@www.ljt.cos02 ~]# touch /root/zookeeper-3.4.9/zkData/myid
[root@www.ljt.cos02 ~]# echo 2 > /root/zookeeper-3.4.9/zkData/myid
[root@www.ljt.cos03 ~]# touch /root/zookeeper-3.4.9/zkData/myid
[root@www.ljt.cos03 ~]# echo 3 > /root/zookeeper-3.4.9/zkData/myid
Environment variables
[root@www.ljt.cosa ~]# vi /etc/profile
export JAVA_HOME=/root/jdk1.8.0_121
export SCALA_HOME=/root/scala-2.11.11
export HADOOP_HOME=/root/hadoop-2.7.3
export HIVE_HOME=/root/apache-hive-1.2.1-bin
export HBASE_HOME=/root/hbase-1.3.1
export SPARK_HOME=/root/spark-2.1.1-bin-hadoop2.7
export PATH=.:$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:/root:$HIVE_HOME/bin:$HBASE_HOME/bin:$SPARK_HOME/bin
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
[root@www.ljt.cosa ~]# source /etc/profile
[root@www.ljt.cosa ~]# scp /etc/profile www.ljt.cos02:/etc
[root@www.ljt.cos02 ~]# source /etc/profile
[root@www.ljt.cosa~]# scp /etc/profile www.ljt.cos03:/etc
[root@www.ljt.cos03 ~]# source /etc/profile
Hadoop
[root@www.ljt.cosa ~]# wget -O /root/hadoop-2.7.3.tar.gz http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
[root@www.ljt.cosa ~]# tar -zxvf /root/hadoop-2.7.3.tar.gz -C /root
[root@www.ljt.cosa ~]# vi /root/hadoop-2.7.3/etc/hadoop/hadoop-env.sh
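The hadoop-env.sh edit is not shown; typically the only required change is pinning JAVA_HOME explicitly, matching the JDK path exported in /etc/profile:

```shell
export JAVA_HOME=/root/jdk1.8.0_121
```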
[root@www.ljt.cosa ~]# vi /root/hadoop-2.7.3/etc/hadoop/hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.blocksize</name>
  <value>64m</value>
</property>
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>www.ljt.cosa:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>www.ljt.cos02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>www.ljt.cosa:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>www.ljt.cos02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://www.ljt.cosa:8485;www.ljt.cos02:8485;www.ljt.cos03:8485/mycluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/root/hadoop-2.7.3/tmp/journal</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled.mycluster</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_rsa</value>
</property>
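The sshfence method logs into the other NameNode over SSH, so passwordless root SSH must already work between the nodes (it is also what the scp/ssh commands throughout this guide rely on). A sketch of the key setup, run once on www.ljt.cosa:

```shell
# Create a passphrase-less RSA key for root if none exists yet;
# dfs.ha.fencing.ssh.private-key-files points at this file.
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$HOME/.ssh/id_rsa"
# Then push the public key to all three nodes, e.g.:
#   for h in www.ljt.cosa www.ljt.cos02 www.ljt.cos03; do ssh-copy-id root@$h; done
```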
[root@www.ljt.cosa ~]# vi /root/hadoop-2.7.3/etc/hadoop/core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/root/hadoop-2.7.3/tmp</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>www.ljt.cosa:2181,www.ljt.cos02:2181,www.ljt.cos03:2181</value>
</property>
[root@www.ljt.cosa ~]# vi /root/hadoop-2.7.3/etc/hadoop/slaves
www.ljt.cosa
www.ljt.cos02
www.ljt.cos03
[root@www.ljt.cosa ~]# vi /root/hadoop-2.7.3/etc/hadoop/yarn-env.sh
[root@www.ljt.cosa ~]# vi /root/hadoop-2.7.3/etc/hadoop/mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>www.ljt.cosa:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>www.ljt.cosa:19888</value>
</property>
<property>
  <name>mapreduce.jobhistory.max-age-ms</name>
  <value>6048000000</value> <!-- 70 days -->
</property>
[root@www.ljt.cosa ~]# vi /root/hadoop-2.7.3/etc/hadoop/yarn-site.xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yarn-cluster</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>www.ljt.cosa</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>www.ljt.cos02</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm1</name>
  <value>www.ljt.cosa:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm2</name>
  <value>www.ljt.cos02:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>www.ljt.cosa:2181,www.ljt.cos02:2181,www.ljt.cos03:2181</value>
</property>
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.log.server.url</name>
  <value>http://www.ljt.cosa:19888/jobhistory/logs</value>
</property>
[root@www.ljt.cosa ~]# mkdir -p /root/hadoop-2.7.3/tmp/journal
[root@www.ljt.cos02 ~]# mkdir -p /root/hadoop-2.7.3/tmp/journal
[root@www.ljt.cos03 ~]# mkdir -p /root/hadoop-2.7.3/tmp/journal
Replace /root/hadoop-2.7.3/lib/native with the native libraries from the self-compiled package.
[root@www.ljt.cosa ~]# scp -r /root/hadoop-2.7.3/ www.ljt.cos02:/root
[root@www.ljt.cosa ~]# scp -r /root/hadoop-2.7.3/ www.ljt.cos03:/root
Check whether the Hadoop native library is 32-bit or 64-bit:
[root@www.ljt.cosa native]# file libhadoop.so.1.0.0
libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
[root@www.ljt.cosa native]# pwd
/root/hadoop-2.7.3/lib/native
Start ZooKeeper
[root@www.ljt.cosa ~]#/root/zookeeper-3.4.9/bin/zkServer.sh start
[root@www.ljt.cos02 ~]#/root/zookeeper-3.4.9/bin/zkServer.sh start
[root@www.ljt.cos03 ~]#/root/zookeeper-3.4.9/bin/zkServer.sh start
Format the HA state znode in ZooKeeper (ZKFC)
[root@www.ljt.cosa ~]# /root/hadoop-2.7.3/bin/hdfs zkfc -formatZK
[root@www.ljt.cosa ~]# /root/zookeeper-3.4.9/bin/zkCli.sh
Start the JournalNodes
[root@www.ljt.cosa ~]# /root/hadoop-2.7.3/sbin/hadoop-daemon.sh start journalnode
[root@www.ljt.cos02 ~]# /root/hadoop-2.7.3/sbin/hadoop-daemon.sh start journalnode
[root@www.ljt.cos03 ~]# /root/hadoop-2.7.3/sbin/hadoop-daemon.sh start journalnode
Format and start the NameNodes
[root@www.ljt.cosa ~]# /root/hadoop-2.7.3/bin/hdfs namenode -format
[root@www.ljt.cosa ~]# /root/hadoop-2.7.3/sbin/hadoop-daemon.sh start namenode
[root@www.ljt.cos02 ~]# /root/hadoop-2.7.3/bin/hdfs namenode -bootstrapStandby
[root@www.ljt.cos02 ~]# /root/hadoop-2.7.3/sbin/hadoop-daemon.sh start namenode
Start the ZKFC daemons
[root@www.ljt.cosa ~]# /root/hadoop-2.7.3/sbin/hadoop-daemon.sh start zkfc
[root@www.ljt.cos02 ~]# /root/hadoop-2.7.3/sbin/hadoop-daemon.sh start zkfc
Start the DataNodes
[root@www.ljt.cosa ~]# /root/hadoop-2.7.3/sbin/hadoop-daemon.sh start datanode
[root@www.ljt.cos02 ~]# /root/hadoop-2.7.3/sbin/hadoop-daemon.sh start datanode
[root@www.ljt.cos03 ~]# /root/hadoop-2.7.3/sbin/hadoop-daemon.sh start datanode
Start YARN
[root@www.ljt.cosa ~]# /root/hadoop-2.7.3/sbin/yarn-daemon.sh start resourcemanager
[root@www.ljt.cos02 ~]# /root/hadoop-2.7.3/sbin/yarn-daemon.sh start resourcemanager
[root@www.ljt.cosa ~]# /root/hadoop-2.7.3/sbin/yarn-daemon.sh start nodemanager
[root@www.ljt.cos02 ~]# /root/hadoop-2.7.3/sbin/yarn-daemon.sh start nodemanager
[root@www.ljt.cos03 ~]# /root/hadoop-2.7.3/sbin/yarn-daemon.sh start nodemanager
[root@www.ljt.cosa ~]# hdfs dfs -chmod -R 777 /
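With everything up, the HA roles can be verified with the standard admin commands (nn1/nn2 and rm1/rm2 are the IDs configured above); these need the live cluster, so run them on any node:

```shell
# One NameNode should report "active", the other "standby":
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# Same check for the two ResourceManagers:
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```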
Install MySQL
[root@www.ljt.cosa ~]# yum remove -y mysql-libs
[root@www.ljt.cosa ~]# yum install mysql-server
[root@www.ljt.cosa ~]# service mysqld start
[root@www.ljt.cosa ~]# chkconfig mysqld on
[root@www.ljt.cosa ~]# mysqladmin -u root password 'AAAaaa111'
[root@www.ljt.cosa ~]# mysqladmin -u root -h www.ljt.cosa password 'AAAaaa111'
[root@www.ljt.cosa ~]# mysql -h localhost -u root -p
Enter password: AAAaaa111
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'AAAaaa111' WITH GRANT OPTION;
mysql> flush privileges;
[root@www.ljt.cosa ~]# vi /etc/my.cnf
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
character-set-server=utf8
lower_case_table_names = 1
[root@www.ljt.cosa ~]# service mysqld restart
Install Hive
The official spark-2.1.1-bin-hadoop2.7.tgz bundles Hive 1.2.1, so Hive 1.2.1 is the version used here to match.
[root@www.ljt.cosa ~]# wget http://archive.apache.org/dist/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz
[root@www.ljt.cosa ~]# tar -xvf apache-hive-1.2.1-bin.tar.gz
Put the mysql-connector-java-5.6-bin.jar driver into the /root/apache-hive-1.2.1-bin/lib/ directory.
[root@www.ljt.cosa ~]# cp /root/apache-hive-1.2.1-bin/conf/hive-env.sh.template /root/apache-hive-1.2.1-bin/conf/hive-env.sh
[root@www.ljt.cosa ~]# vi /root/apache-hive-1.2.1-bin/conf/hive-env.sh
export HADOOP_HOME=/root/hadoop-2.7.3
[root@www.ljt.cosa ~]# cp /root/apache-hive-1.2.1-bin/conf/hive-log4j.properties.template /root/apache-hive-1.2.1-bin/conf/hive-log4j.properties
[root@www.ljt.cosa ~]# vi /root/apache-hive-1.2.1-bin/conf/hive-site.xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://www.ljt.cosa:9083</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://www.ljt.cosa:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>AAAaaa111</value>
</property>
[root@www.ljt.cosa ~]# vi /etc/init.d/hive-metastore
/root/apache-hive-1.2.1-bin/bin/hive --service metastore >/dev/null 2>&1 &
[root@www.ljt.cosa ~]# chmod 777 /etc/init.d/hive-metastore
[root@www.ljt.cosa ~]# ln -s /etc/init.d/hive-metastore /etc/rc.d/rc3.d/S65hive-metastore
[root@www.ljt.cosa ~]# hive
[root@www.ljt.cosa ~]# mysql -h localhost -u root -p
mysql> alter database hive character set latin1;
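Once the metastore service is registered, a quick smoke test can confirm Hive works end to end (assumes HDFS and the metastore are up; the table name is arbitrary):

```shell
# Create, list, and drop a throwaway table through the thrift metastore:
hive -e "CREATE TABLE IF NOT EXISTS smoke_test (id INT); SHOW TABLES; DROP TABLE smoke_test;"
```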
Build and install HBase
http://archive.apache.org/dist/hbase/1.3.1/hbase-1.3.1-src.tar.gz
The official binary is built against Hadoop 2.5.1, so HBase needs to be rebuilt:
Change the Hadoop version that pom.xml depends on:
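The pom.xml edit itself is not shown above. In the HBase 1.3.1 source the Hadoop 2 version is a Maven property, so a plausible sketch of the change and rebuild (assuming Maven 3.3.9 and protobuf 2.5.0 from the package list are installed) is:

```shell
# In the unpacked hbase-1.3.1 source, change the hadoop-two.version property:
#   <hadoop-two.version>2.7.3</hadoop-two.version>
cd /root/hbase-1.3.1
# Build a binary tarball without running the test suite:
mvn clean package -DskipTests assembly:single
```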
In /root/hbase-1.3.1/conf/hbase-env.sh (the PermSize settings are only needed on JDK 7 and can be removed on JDK 8+, which this cluster uses):
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
Copy /etc/profile and the hbase directory to the other two nodes.
[root@www.ljt.cosa ~]# start-hbase.sh
The backup HMaster has to be started by hand:
[root@www.ljt.cos02 ~]# hbase-daemon.sh start master
[root@www.ljt.cosa ~]# hbase shell
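A minimal read/write check in the shell (table and column family names here are arbitrary; requires the running cluster):

```shell
hbase shell <<'EOF'
create 't_smoke', 'cf'
put 't_smoke', 'row1', 'cf:a', 'value1'
scan 't_smoke'
disable 't_smoke'
drop 't_smoke'
EOF
```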
Spark
https://d3kbcqa49mib13.cloudfront.net/spark-2.1.1-bin-hadoop2.7.tgz
[root@www.ljt.cosa ~]# cp /root/spark-2.1.1-bin-hadoop2.7/conf/spark-env.sh.template /root/spark-2.1.1-bin-hadoop2.7/conf/spark-env.sh
[root@www.ljt.cosa ~]# vi /root/spark-2.1.1-bin-hadoop2.7/conf/spark-env.sh
export SCALA_HOME=/root/scala-2.11.11
export JAVA_HOME=/root/jdk1.8.0_121
export HADOOP_HOME=/root/hadoop-2.7.3
export HADOOP_CONF_DIR=/root/hadoop-2.7.3/etc/hadoop
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=www.ljt.cosa:2181,www.ljt.cos02:2181,www.ljt.cos03:2181 -Dspark.deploy.zookeeper.dir=/spark"
[root@www.ljt.cosa ~]# cp /root/spark-2.1.1-bin-hadoop2.7/conf/slaves.template /root/spark-2.1.1-bin-hadoop2.7/conf/slaves
[root@www.ljt.cosa ~]# vi /root/spark-2.1.1-bin-hadoop2.7/conf/slaves
www.ljt.cosa
www.ljt.cos02
www.ljt.cos03
[root@www.ljt.cosa ~]# scp -r /root/spark-2.1.1-bin-hadoop2.7 www.ljt.cos02:/root
[root@www.ljt.cosa ~]# scp -r /root/spark-2.1.1-bin-hadoop2.7 www.ljt.cos03:/root
[root@www.ljt.cosa ~]# /root/spark-2.1.1-bin-hadoop2.7/sbin/start-all.sh
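To check the standalone cluster (and its ZooKeeper-based failover), a shell can be attached with both masters listed; 7077 is the default master port, assumed here:

```shell
# The driver registers with whichever master is currently ACTIVE:
/root/spark-2.1.1-bin-hadoop2.7/bin/spark-shell \
  --master spark://www.ljt.cosa:7077,www.ljt.cos02:7077
```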
./start.sh
zkServer.sh start
ssh root@www.ljt.cos02 'export BASH_ENV=/etc/profile;/root/zookeeper-3.4.9/bin/zkServer.sh start'
ssh root@www.ljt.cos03 'export BASH_ENV=/etc/profile;/root/zookeeper-3.4.9/bin/zkServer.sh start'
/root/hadoop-2.7.3/sbin/start-dfs.sh
/root/hadoop-2.7.3/sbin/start-yarn.sh
# If YARN HA is enabled, uncomment:
ssh root@www.ljt.cos02 'export BASH_ENV=/etc/profile;/root/hadoop-2.7.3/sbin/yarn-daemon.sh start resourcemanager'
/root/hadoop-2.7.3/sbin/hadoop-daemon.sh start zkfc
ssh root@www.ljt.cos02 'export BASH_ENV=/etc/profile;/root/hadoop-2.7.3/sbin/hadoop-daemon.sh start zkfc'
ssh root@www.ljt.cos03 'export BASH_ENV=/etc/profile;/root/hadoop-2.7.3/sbin/hadoop-daemon.sh start zkfc'
/root/hadoop-2.7.3/bin/hdfs haadmin -ns mycluster -failover nn2 nn1
echo 'Y' | ssh root@www.ljt.cosa 'export BASH_ENV=/etc/profile;/root/hadoop-2.7.3/bin/yarn rmadmin -transitionToActive --forcemanual rm1'
/root/hbase-1.3.1/bin/start-hbase.sh
# If HBase HA is enabled, uncomment:
ssh root@www.ljt.cos02 'export BASH_ENV=/etc/profile;/root/hbase-1.3.1/bin/hbase-daemon.sh start master'
/root/spark-2.1.1-bin-hadoop2.7/sbin/start-all.sh
# If Spark HA is enabled, uncomment:
ssh root@www.ljt.cos02 'export BASH_ENV=/etc/profile;/root/spark-2.1.1-bin-hadoop2.7/sbin/start-master.sh'
/root/hadoop-2.7.3/sbin/mr-jobhistory-daemon.sh start historyserver
echo '--------------www.ljt.cosa---------------'
jps | grep -v Jps | sort -k 2 -t ' '
echo '--------------www.ljt.cos02---------------'
ssh root@www.ljt.cos02 “export PATH=/usr/bin:
./stop.sh
/root/spark-2.1.1-bin-hadoop2.7/sbin/stop-all.sh
/root/hbase-1.3.1/bin/stop-hbase.sh
# If YARN HA is enabled, uncomment:
ssh root@www.ljt.cos02 'export BASH_ENV=/etc/profile;/root/hadoop-2.7.3/sbin/yarn-daemon.sh stop resourcemanager'
/root/hadoop-2.7.3/sbin/stop-yarn.sh
/root/hadoop-2.7.3/sbin/stop-dfs.sh
/root/hadoop-2.7.3/sbin/hadoop-daemon.sh stop zkfc
ssh root@www.ljt.cos02 'export BASH_ENV=/etc/profile;/root/hadoop-2.7.3/sbin/hadoop-daemon.sh stop zkfc'
/root/zookeeper-3.4.9/bin/zkServer.sh stop
ssh root@www.ljt.cos02 'export BASH_ENV=/etc/profile;/root/zookeeper-3.4.9/bin/zkServer.sh stop'
ssh root@www.ljt.cos03 'export BASH_ENV=/etc/profile;/root/zookeeper-3.4.9/bin/zkServer.sh stop'
/root/hadoop-2.7.3/sbin/mr-jobhistory-daemon.sh stop historyserver
./shutdown.sh
ssh root@www.ljt.cos02 “export PATH=/usr/bin:
shutdown -h now
./reboot.sh
ssh root@www.ljt.cos02 “export PATH=/usr/bin:
reboot