Table of Contents
1. Extracting from HDFS to MySQL
2. Extracting from a Hive table to MySQL
There are two ways to extract Hive data into MySQL (that is, to import a Hive table into MySQL):
- Extract directly from HDFS, since Hive stores its data on HDFS.
- Extract directly from the Hive table itself.
1. Extracting from HDFS to MySQL
Read the data on HDFS with the csv reader, then write it to MySQL with the jdbc writer.
Key code:
# Create the SparkSession
spark = SparkSession.builder \
    .master("local[2]") \
    .appName("") \
    .config("spark.sql.shuffle.partitions", 2) \
    .getOrCreate()

# Read the Hive data (under the hood this is just reading HDFS)
df = spark.read.csv('hdfs://bigdata01:9820/user/hive/warehouse/yunhe01.db/t_user').toDF('id', 'name')

# Write into the local MySQL database
df.write.format('jdbc') \
    .option("driver", "com.mysql.cj.jdbc.Driver") \
    .option("url", "jdbc:mysql://localhost:3306/zuoye") \
    .option("dbtable", "t_user") \
    .option("user", "root") \
    .option("password", "123456") \
    .save(mode='overwrite')
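Note that spark.read.csv with its default comma separator only works when the table's underlying files really are comma-delimited; a Hive text table created without an explicit delimiter separates fields with '\001' (Ctrl-A), and toDF() leaves every column typed as string. Below is a minimal, more defensive sketch of the read, reusing the path and column names from above (the separator and the schema are assumptions about how the table was created):

# Hedged sketch: declare the column types and pass the field separator
# explicitly instead of relying on the comma default and toDF().
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

schema = StructType([
    StructField("id", IntegerType()),
    StructField("name", StringType()),
])
df = spark.read \
    .schema(schema) \
    .option("sep", "\001") \
    .csv("hdfs://bigdata01:9820/user/hive/warehouse/yunhe01.db/t_user")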
Full code:
import os
from pyspark.sql import SparkSession

"""
------------------------------------------
Description : TODO:
SourceFile  : _14-hive读取到mysql
Author      : song
Date        : 2024/11/6
-------------------------------------------
"""
if __name__ == '__main__':
    # Set up the environment
    os.environ['JAVA_HOME'] = 'C:/Program Files/Java/jdk1.8.0_201'
    # Path to Hadoop (the directory unpacked earlier)
    os.environ['HADOOP_HOME'] = 'D:/B/05-Hadoop/hadoop-3.3.1/hadoop-3.3.1'
    # Path to the Python interpreter of the base environment
    os.environ['PYSPARK_PYTHON'] = 'C:/ProgramData/Miniconda3/python.exe'
    os.environ['PYSPARK_DRIVER_PYTHON'] = 'C:/ProgramData/Miniconda3/python.exe'
    os.environ['HADOOP_USER_NAME'] = 'root'

    # Create the SparkSession
    spark = SparkSession.builder \
        .master("local[2]") \
        .appName("") \
        .config("spark.sql.shuffle.partitions", 2) \
        .getOrCreate()

    # Read the Hive data (under the hood this is just reading HDFS)
    df = spark.read.csv('hdfs://bigdata01:9820/user/hive/warehouse/yunhe01.db/t_user').toDF('id', 'name')

    # Write into the local MySQL database
    df.write.format('jdbc') \
        .option("driver", "com.mysql.cj.jdbc.Driver") \
        .option("url", "jdbc:mysql://localhost:3306/zuoye") \
        .option("dbtable", "t_user") \
        .option("user", "root") \
        .option("password", "123456") \
        .save(mode='overwrite')

    spark.stop()
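One caveat about the write: with mode='overwrite', Spark's JDBC writer drops and recreates the target table by default, so any indexes or hand-tuned column types defined on the MySQL side are lost. Spark's "truncate" option keeps the existing table and only empties it before inserting; here is a sketch reusing the connection settings above:

# Hedged sketch: keep the existing MySQL table definition and only
# clear its rows before writing, instead of dropping and recreating it.
df.write.format('jdbc') \
    .option("driver", "com.mysql.cj.jdbc.Driver") \
    .option("url", "jdbc:mysql://localhost:3306/zuoye") \
    .option("dbtable", "t_user") \
    .option("user", "root") \
    .option("password", "123456") \
    .option("truncate", "true") \
    .save(mode='overwrite')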
2. Extracting from a Hive table to MySQL
Read the data with the table reader (that is, straight from the Hive warehouse), then write it to MySQL with the jdbc writer.
Key code:
# Create the SparkSession with Hive support enabled
spark = SparkSession \
    .builder \
    .appName("Hive表导入到MySQL") \
    .master("local[2]") \
    .config("spark.sql.warehouse.dir", 'hdfs://bigdata01:9820/user/hive/warehouse') \
    .config('hive.metastore.uris', 'thrift://bigdata01:9083') \
    .config("spark.sql.shuffle.partitions", 2) \
    .enableHiveSupport() \
    .getOrCreate()

# Read the data from the Hive table
df = spark.read.table("yunhe01.t_user")

# Write into the local MySQL database
df.write.format('jdbc') \
    .option("driver", "com.mysql.cj.jdbc.Driver") \
    .option("url", "jdbc:mysql://localhost:3306/zuoye") \
    .option("dbtable", "t_user1") \
    .option("user", "root") \
    .option("password", "123456") \
    .save(mode='overwrite')
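Because enableHiveSupport() is on, spark.read.table can also be swapped for an arbitrary query, which is handy when only part of the table should be exported. A minimal sketch (the WHERE condition is invented for illustration):

# Hedged sketch: export only a subset of the Hive table;
# the filter condition here is a made-up example.
df = spark.sql("SELECT id, name FROM yunhe01.t_user WHERE id > 0")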
Full code:
import os
from pyspark.sql import SparkSession

"""
------------------------------------------
Description : TODO:
SourceFile  : _14-hive读取到mysql
Author      : song
Date        : 2024/11/6
-------------------------------------------
"""
if __name__ == '__main__':
    # Set up the environment
    os.environ['JAVA_HOME'] = 'C:/Program Files/Java/jdk1.8.0_201'
    # Path to Hadoop (the directory unpacked earlier)
    os.environ['HADOOP_HOME'] = 'D:/B/05-Hadoop/hadoop-3.3.1/hadoop-3.3.1'
    # Path to the Python interpreter of the base environment
    os.environ['PYSPARK_PYTHON'] = 'C:/ProgramData/Miniconda3/python.exe'
    os.environ['PYSPARK_DRIVER_PYTHON'] = 'C:/ProgramData/Miniconda3/python.exe'
    os.environ['HADOOP_USER_NAME'] = 'root'

    # Create the SparkSession with Hive support enabled
    spark = SparkSession \
        .builder \
        .appName("Hive表导入到MySQL") \
        .master("local[2]") \
        .config("spark.sql.warehouse.dir", 'hdfs://bigdata01:9820/user/hive/warehouse') \
        .config('hive.metastore.uris', 'thrift://bigdata01:9083') \
        .config("spark.sql.shuffle.partitions", 2) \
        .enableHiveSupport() \
        .getOrCreate()

    # Read the data from the Hive table
    df = spark.read.table("yunhe01.t_user")

    # Write into the local MySQL database
    df.write.format('jdbc') \
        .option("driver", "com.mysql.cj.jdbc.Driver") \
        .option("url", "jdbc:mysql://localhost:3306/zuoye") \
        .option("dbtable", "t_user1") \
        .option("user", "root") \
        .option("password", "123456") \
        .save(mode='overwrite')

    spark.stop()
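To confirm the export actually landed, one option is to read the MySQL table back over the same JDBC connection (before calling spark.stop()) and spot-check it. A hedged sketch, reusing the connection settings above:

# Hedged sketch: read the exported table back from MySQL and inspect it.
check = spark.read.format('jdbc') \
    .option("driver", "com.mysql.cj.jdbc.Driver") \
    .option("url", "jdbc:mysql://localhost:3306/zuoye") \
    .option("dbtable", "t_user1") \
    .option("user", "root") \
    .option("password", "123456") \
    .load()
print(check.count())  # should match the Hive table's row count
check.show()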
Summary: Sqoop, DataX, and Kettle can all handle this kind of import and export, but Spark turns out to be the simplest option, and the transfer is fast as well!