Recording the Presto Data Query Engine Configuration Process - 夜丶帝 - 博客园


Recording the Presto Data Query Engine Configuration Process
Preparation:
1. Four CentOS 6.4 virtual machines (master, secondary, node1, node2)
2. Installation packages:
hadoop-cdh4.4.0, hive-cdh4.4.0, presto, discovery-server, hbase, JDK 7.0+ (64-bit), python 2.4+, postgresql
3. Deployment plan:
Master: 192.168.69.180 master (hadoop, hbase, discovery-server, hive, presto, postgresql)
Secondary: 192.168.69.181 secondary (hadoop, hbase, presto)
Nodes: 192.168.69.182 node1 (hadoop, hbase, presto); 192.168.69.183 node2 (hadoop, hbase, presto)
Configuration steps:
1. Install the Java JDK and Python on every virtual machine.
2. Edit the hosts file on every virtual machine to add the host entries shown below:
[root@master ~]# cat /etc/hosts
127.0.0.1 localhost
::1 localhost
192.168.69.180 master
192.168.69.181 secondary
192.168.69.182 node1
192.168.69.183 node2
3. Turn off the firewall on every virtual machine:
[root@master~]# service iptables stop
4. Configure passwordless SSH between the master and the node machines (master and secondary shown as an example; text in parentheses is explanation, not input):
master:
[root@master:~]$mkdir .ssh
[root@master:~]$cd .ssh
[root@master:.ssh]$ ssh-keygen -t rsa (at the file-name prompt enter a name, here "master" for convenience; press Enter at the passphrase prompts to leave it empty, since a passphrase would still be asked for on every login)
[root@master:.ssh]$ cp master.pub authorized_keys (add the public key to authorized_keys)
[root@master:.ssh]$ scp master.pub secondary:/root/.ssh/ (copies master.pub to /root/.ssh/ on secondary; there, run cp master.pub authorized_keys. Copy master.pub to node1 and node2 and run the same command on each.)
secondary:
[root@secondary:~]$mkdir .ssh
[root@secondary:~]$cd .ssh
[root@secondary:.ssh]$ cp master.pub authorized_keys
[root@secondary:.ssh]$ ssh-keygen -t rsa (at the file-name prompt enter a name, here "secondary"; leave the passphrase empty)
[root@secondary:.ssh]$ cat secondary.pub >> authorized_keys (append the public key to authorized_keys)
[root@secondary:.ssh]$ scp secondary.pub master:/root/.ssh/ (copies secondary.pub to /root/.ssh/ on master; there, run cat secondary.pub >> authorized_keys to append it)
Note on SSH permission problems: the user's home directory must be 755 or 700, never 77x; the .ssh directory must be 755; id_rsa.pub and authorized_keys must be 644; id_rsa must be 600. Finally, test from master with ssh master date, ssh secondary date, ssh node1 date, and ssh node2 date; if none of them asks for a password, the setup succeeded. If ssh secondary, ssh node1, or ssh node2 is slow to connect, set GSSAPIAuthentication no in /etc/ssh/ssh_config.
To allow root logins over SSH, edit /etc/ssh/sshd_config, change PermitRootLogin no to PermitRootLogin yes, and restart sshd: /etc/init.d/sshd restart
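Wrong file permissions are the most common reason passwordless SSH silently falls back to password prompts, so the checks above can be applied with a small helper. This is a sketch, not part of the original setup; fix_ssh_perms is a made-up name, and it uses the stricter 700 mode for the .ssh directory, which sshd also accepts.

```shell
# Hypothetical helper: enforce the SSH permissions described above.
# Uses 700 for the .ssh directory (stricter than 755, also accepted by sshd),
# 600 for the private key, 644 for the public key and authorized_keys.
fix_ssh_perms() {
  sshdir="$1"                      # e.g. /root/.ssh
  chmod 700 "$sshdir"
  if [ -f "$sshdir/id_rsa" ]; then chmod 600 "$sshdir/id_rsa"; fi
  for f in "$sshdir/id_rsa.pub" "$sshdir/authorized_keys"; do
    if [ -f "$f" ]; then chmod 644 "$f"; fi
  done
  return 0
}

# Example: fix_ssh_perms /root/.ssh
```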
5. Configure environment variables
[root@master ~]# gedit .bash_profile

# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
export JAVA_HOME=/usr/java/jdk1.7.0_45
export JRE_HOME=$JAVA_HOME/jre
export CLASS_PATH=./:$JAVA_HOME/lib:$JRE_HOME/lib:$JRE_HOME/lib/tools.jar:/usr/presto/server/lib:/usr/discovery-server/lib
export HADOOP_HOME=/usr/hadoop
export HIVE_HOME=/usr/hive
export HBASE_HOME=/usr/hbase
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$HBASE_HOME/bin

After the master's environment variables are set, configure secondary, node1, and node2 the same way; the file can be pushed to them with scp.
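After sourcing .bash_profile on each machine it is easy to miss one variable. The sketch below (check_env is a hypothetical helper, not from the original post) prints the names of any variables that are unset, so the copy to each host can be verified quickly.

```shell
# Hypothetical helper: print the names of any environment variables
# from the argument list that are unset or empty. Empty output means all set.
check_env() {
  missing=""
  for v in "$@"; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then missing="$missing $v"; fi
  done
  # Trim the single leading space before printing.
  printf '%s\n' "${missing# }"
}

# Example: check_env JAVA_HOME HADOOP_HOME HIVE_HOME HBASE_HOME
```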
6. Configure hadoop
a. Download and extract hadoop-2.2.0-cdh4.4.0.tar.gz, then copy the extracted directory to /usr, i.e. /usr/hadoop
b. core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!--fs.default.name for MRV1 ,fs.defaultFS for MRV2(yarn) -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:8020</value>
</property>
<property>
<name>fs.trash.interval</name>
<value>10080</value>
</property>
<property>
<name>fs.trash.checkpoint.interval</name>
<value>10080</value>
</property>
</configuration>
c. hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/data/hadoop-${user.name}</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>master:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>secondary:50090</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
d. masters (create this file if it does not exist)
master
secondary
e. slaves (create this file if it does not exist)
node1
node2
f. mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
g. yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
<property>
<description>Classpath for typical applications.</description>
<name>yarn.application.classpath</name>
<value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,
$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
$YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,
$YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/opt/data/yarn/local</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/opt/data/yarn/logs</value>
</property>
<property>
<description>Where to aggregate logs</description>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/opt/data/yarn/logs</value>
</property>
<property>
<name>yarn.app.mapreduce.am.staging-dir</name>
<value>/user</value>
</property>
</configuration>
h. Copy hadoop to secondary, node1, and node2
i. On the first run, format the namenode first: [root@master hadoop]# hadoop namenode -format
j. Leave hadoop safe mode: hdfs dfsadmin -safemode leave
k. Start hadoop: [root@master ~]# start-all.sh
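A quick way to confirm hadoop actually came up is to compare the output of jps on each host against the daemons expected there (NameNode and ResourceManager on master, DataNode and NodeManager on the nodes). The helper below is a sketch with a made-up name, not part of the original guide.

```shell
# Hypothetical helper: given a space-separated list of expected daemon
# names and the output of `jps`, report any daemon that is not running.
check_daemons() {
  expected="$1"
  jps_output="$2"
  for d in $expected; do
    if ! printf '%s\n' "$jps_output" | grep -qw "$d"; then
      echo "missing: $d"
    fi
  done
  return 0
}

# Example on master: check_daemons "NameNode ResourceManager" "$(jps)"
```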
7. Install hbase
a. Extract the hbase archive and copy it to /usr, i.e. /usr/hbase
b. regionservers
master
secondary
node1
c. hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master/hbase-${user.name}</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/opt/data/hbase-${user.name}</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master,secondary,node1,node2</value>
</property>
</configuration>
d. Copy hbase to secondary, node1, and node2
e. Start hbase:
[root@master:~]# start-hbase.sh
8. Install hive
a. Download the hive archive and extract it to /usr, i.e. /usr/hive
b. hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:postgresql://master/testdb</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.postgresql.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>redhat</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>mapred.job.tracker</name>
<value>master:8031</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>hive.aux.jars.path</name>
<value>file:///usr/hive/lib/zookeeper-3.4.5-cdh4.4.0.jar,
file:///usr/hive/lib/hive-hbase-handler-0.10.0-cdh4.4.0.jar,
file:///usr/hive/lib/hbase-0.94.2-cdh4.4.0.jar,
file:///usr/hive/lib/guava-11.0.2.jar</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/opt/data/warehouse-${user.name}</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/opt/data/hive-${user.name}</value>
<description>Scratch space for Hive jobs</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/opt/data/querylog-${user.name}</value>
<description>
Location of Hive run time structured log file
</description>
</property>
<property>
<name>hive.support.concurrency</name>
<description>Enable Hive's Table Lock Manager Service</description>
<value>true</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<description>Zookeeper quorum used by Hive's Table Lock Manager</description>
<value>node1</value>
</property>
<property>
<name>hive.hwi.listen.host</name>
<value>desktop1</value>
<description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>lib/hive-hwi-0.10.0-cdh4.2.0.war</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
</configuration>
9. Install postgresql (used as the metastore database)
a. Download and install postgresql
b. Create a user sa with pgadmin
c. Create a database testdb with pgadmin and make the role sa its owner
d. In pg_hba.conf, add the cluster hosts to the allowed client addresses
e. In postgresql.conf, set standard_conforming_strings = off
f. Copy the postgres JDBC driver to /usr/hive-cdh4.4.0/lib
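If you prefer the command line to pgadmin, steps b and c can also be done with psql. The sketch below only generates the SQL (metastore_init_sql is a hypothetical helper; the password is a placeholder you should change); pipe its output into psql as the postgres superuser to apply it.

```shell
# Hypothetical helper: emit the SQL for creating the metastore role and
# database (steps b and c above). 'changeme' is a placeholder password.
metastore_init_sql() {
  role="$1"   # e.g. sa
  db="$2"     # e.g. testdb
  printf "CREATE ROLE %s LOGIN PASSWORD 'changeme';\n" "$role"
  printf "CREATE DATABASE %s OWNER %s;\n" "$db" "$role"
}

# Example: metastore_init_sql sa testdb | sudo -u postgres psql
```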
10. Install presto
a. Download and extract to /usr, i.e. /usr/presto
b. Create an etc folder inside the presto folder, and create the following configuration files in it:
1) node.properties
node.environment=production
node.id=F25B16CB-5D5B-50FD-A30D-B2221D71C882
node.data-dir=/var/presto/data
Note: node.id must be unique on every server.
2) jvm.config
-server
-Xmx16G
-XX:+UseConcMarkSweepGC
-XX:+ExplicitGCInvokesConcurrent
-XX:+CMSClassUnloadingEnabled
-XX:+AggressiveOpts
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
-XX:PermSize=150M
-XX:MaxPermSize=150M
-XX:ReservedCodeCacheSize=150M
-Xbootclasspath/p:/var/presto/installation/lib/floatingdecimal-0.1.jar
Download floatingdecimal-0.1.jar and put it in /var/presto/installation/lib/.
3) config.properties
coordinator=true
datasources=jmx
http-server.http.port=8080
presto-metastore.db.type=h2
presto-metastore.db.filename=var/db/MetaStore
task.max-memory=1GB
discovery-server.enabled=true
discovery.uri=http://master:8411
The above is the master configuration; on secondary, node1, and node2, change coordinator=true to coordinator=false and delete the discovery-server.enabled=true line.
4) log.properties
com.facebook.presto=DEBUG
5) Create a catalog folder in /usr/presto/etc, and create the following configuration files in it:
jmx.properties
connector.name=jmx
hive.properties
connector.name=hive-cdh4
hive.metastore.uri=thrift://master:9083
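Since node.id must be unique per server, writing node.properties by hand on four machines invites copy-paste mistakes. One sketch is to derive the id from the hostname (gen_node_properties is a hypothetical helper, not from the original post; any string that is unique per host works as node.id):

```shell
# Hypothetical helper: print a node.properties whose node.id is derived
# from the hostname, so each server gets a unique id automatically.
gen_node_properties() {
  host="$1"   # e.g. "$(hostname)"
  printf 'node.environment=production\n'
  printf 'node.id=presto-%s\n' "$host"
  printf 'node.data-dir=/var/presto/data\n'
}

# Example: gen_node_properties "$(hostname)" > /usr/presto/etc/node.properties
```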
11. Install discovery-server
a. Download and extract the discovery-server archive to /usr, i.e. /usr/discovery-server
b. As with presto, create an etc folder under /usr/discovery-server and create the following configuration files in it:
1) node.properties
node.environment=production
node.id=D28C24CF-78A1-CD09-C693-7BDE66A51EFD
node.data-dir=/var/discovery/data
2) jvm.config
-server
-Xmx1G
-XX:+UseConcMarkSweepGC
-XX:+ExplicitGCInvokesConcurrent
-XX:+AggressiveOpts
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
3) config.properties
http-server.http.port=8411
Running:
On master, run:
start-all.sh (starts hadoop on every machine)
start-hbase.sh (starts hbase on every machine)
Change into /usr/discovery-server/bin and start discovery-server:
launcher start // start in the background
launcher run // run in the foreground
Change into /usr/hive/bin and start hive:
./hive --service hiveserver -p 9083 // thrift mode
On master and on every node machine:
Change into /usr/presto/server/bin and start presto:
launcher start // start in the background
launcher run // run in the foreground
Command summary:
1. hadoop:
hadoop namenode -format
hadoop datanode -format
start-all.sh
hadoop dfsadmin -safemode leave
hdfs dfsadmin -safemode leave
2. hive:
./hive
./hive --service hiveserver -p 9083 // thrift mode
3. hbase:
./start-hbase.sh
4. discovery-server:
launcher start // start
launcher run // run
launcher stop // stop
5. presto:
launcher start // start
launcher run // run
launcher stop // stop
6. presto client:
./presto --server localhost:8080 --catalog hive --schema default
Testing:
Run the presto client on master:
Change into /usr/presto/client and start the client:
./presto --server localhost:8080 --catalog hive --schema default
Once the client starts, run show tables; to check that everything works.
Test results:
Test VMs: CentOS 64-bit, three with 2 GB RAM and one with 1 GB; data volume: 610,000 rows.
Nodes      SQL statement                                      Execution time (seconds)
4 nodes select Count(*) from mytable; 10s
4 nodes select Count(*),num from mytable group by num; 10s
4 nodes select num from mytable group by num having count(*)>1000; 10s
4 nodes select min(num) from mytable group by num; 9s
4 nodes select min(num) from mytable; 9s
4 nodes select max(num) from mytable; 9s
4 nodes select min(num) from mytable group by num; 9s
4 nodes select row_number() over(partition by name order by num) as row_index from mytable; 16s
posted on 2014-02-13 11:43 by 夜丶帝 · 9770 reads · 0 comments