Section Contents
In the previous section we covered:
- The business data table structure for e-commerce core trading
- The order, product, category, shop, and payment tables
Data import
What has already been decided: use DataX, and export the data of the 7 tables.
MySQL export strategies: full export and incremental export (exporting the previous day's data).
The business data lives in MySQL, and every day in the early morning the previous day's data is exported:
- Tables with little data are exported from MySQL in full.
- Tables with a lot of data, where a date field can identify each day's new rows, are exported from MySQL incrementally (a reader sketch with a where filter follows the table lists below).
3 incremental tables:
- Order table wzk_trade_orders
- Order-product table wzk_order_produce
- Product information table wzk_product_info
4 full tables:
- Product category table wzk_product_category
- Shop table wzk_shops
- Shop regional organization table wzk_shop_admin_org
- Payment method table wzk_payment
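For the three incremental tables, the same mysqlreader can be restricted to the previous day's rows with its optional where parameter. Below is only a sketch of the reader fragment, assuming the order table has a createTime column; the column list here is a placeholder and the rest of the job would mirror the full-load configs shown later in this section.
"reader": {
  "name": "mysqlreader",
  "parameter": {
    "username": "hive",
    "password": "hive@wzk.icu",
    "column": ["orderId", "totalMoney", "createTime"],
    "where": "DATE_FORMAT(createTime, '%Y-%m-%d') = '$do_date'",
    "connection": [{
      "table": ["wzk_trade_orders"],
      "jdbcUrl": ["jdbc:mysql://h122.wzk.icu:3306/ebiz"]
    }]
  }
}
The actual incremental job files are covered in a later part of the series; this fragment only illustrates how the where condition limits the export to one day's data.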
Database import
Based on the above, you can first create the tables and then generate a batch of data for testing; here I simply import data that has already been generated.
Business requirements
Core trading is the most critical part of an e-commerce system, and the platform's operational activities all revolve around it.
The selected metrics are order count, product count, and payment amount, analyzed by sales region and by product category.
In big-data analytics, "e-commerce core trading" refers to the collection of all core behaviors and transaction data related to product sales on the platform. Concretely, core trading covers a chain of actions such as browsing products, adding them to the cart, placing orders, paying, shipping, and receiving goods, all of which directly affect the platform's operating efficiency, user experience, and commercial value.
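To make the metric/dimension combination concrete, the eventual analysis query will be something along the following lines. This is only a sketch: the table name dws_trade_orders and the column names (region_name, category_name, order_id, product_num, pay_amount) are hypothetical placeholders, not the schema built later in the series.
-- Sketch only: hypothetical table and column names
SELECT region_name,                           -- sales region dimension
       category_name,                         -- product category dimension
       count(DISTINCT order_id) AS order_cnt, -- number of orders
       sum(product_num)         AS product_cnt, -- number of products sold
       sum(pay_amount)          AS pay_amount   -- payment amount
  FROM dws_trade_orders
 WHERE dt = '2020-07-01'
 GROUP BY region_name, category_name;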
Requirement areas
The core trading of an e-commerce platform can be broken down into the following main stages, each of which involves collecting, storing, and analyzing large amounts of data:
- Product browsing: behavioral data from users browsing products, for example which products were viewed, how long they were viewed, and whether related ads or recommended products were clicked. This data helps the platform understand users' interests and optimize product recommendation and personalized marketing strategies.
- Adding to cart: the action of putting products into the shopping cart. Analyzing cart contents reveals users' purchase intent and preferences, helping merchants adjust pricing, inventory, and promotions.
- Placing orders: the order-creation behavior completed on the platform, including the order itself, its contents, the shipping address, and the chosen payment method. Order data is the core of e-commerce trading; it typically involves a large volume of information and requires the system to process and store it efficiently.
- Payment: payment is a critical step of the transaction. Payment data can be analyzed along dimensions such as payment method, success or failure, amount, and time, helping the platform evaluate how popular each payment method is and optimize accordingly.
- Shipping: shipping data records when the merchant shipped, the logistics company, the tracking number, and so on. Analyzing it yields key indicators such as delivery timeliness and shipping efficiency, which help optimize the supply chain and logistics.
- Receiving and reviews: reviews and return/exchange behavior after the user receives the goods. Review data reflects product quality and user satisfaction and influences later purchase decisions; return/exchange data also exposes product quality problems and logistics pain points.
Full data import
- MySQL => HDFS => Hive
- A full load is performed every day and lands in a new partition
- MySQL Reader => HDFS Writer
Product category table
vim /opt/wzk/datax/product_category.json
The content to write is shown below:
- A table with little data does not need multiple channels; multiple channels would only produce many small files.
- Before running the job, create the matching directory on HDFS: /user/data/trade.db/product_category/dt=yyyy-mm-dd
- The $do_date placeholder in the JSON is filled in at run time through DataX's -p "-Ddo_date=..." option, as shown in the load script further down.
{"job": {"setting": {"speed": {"channel": 1}},"content": [{"reader": {"name": "mysqlreader","parameter": {"username": "hive","password": "hive@wzk.icu","column": ["catId", "parentId", "catName", "isShow", "sortNum", "isDel", "createTime", "level"],"connection": [{"table": ["wzk_product_category"],"jdbcUrl": ["jdbc:mysql://h122.wzk.icu:3306/ebiz"]}]}},"writer": {"name": "hdfswriter","parameter": {"defaultFS": "hdfs://h121.wzk.icu:9000","fileType": "text","path": "/user/data/trade.db/product_category/dt=$do_date","fileName": "product_category_$do_date","column": [{"name": "catId","type": "INT"},{"name": "parentId","type": "INT"},{"name": "catName","type": "STRING"},{"name": "isShow","type": "TINYINT"},{"name": "sortNum","type": "INT"},{"name": "isDel","type": "TINYINT"},{"name": "createTime","type": "STRING"},{"name": "level","type": "TINYINT"}],"writeMode": "append","fieldDelimiter": ","}}}]}
}
The written file is shown in the screenshot below.
The data is then loaded as follows:
do_date='2020-07-01'
# Create the directory
hdfs dfs -mkdir -p /user/data/trade.db/product_category/dt=$do_date
# Migrate the data
python $DATAX_HOME/bin/datax.py -p "-Ddo_date=$do_date" /opt/wzk/datax/product_category.json
# Load the data
# The Hive table does not exist yet; run this later
hive -e "alter table ods.ods_trade_product_category add partition(dt='$do_date')"
The corresponding screenshot is shown below; DataX loads the MySQL data onto HDFS.
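The alter table statement above refers to ods.ods_trade_product_category, which has not been created yet. As an assumption (the authoritative DDL is given later in the series), a minimal DDL sketch consistent with the hdfswriter columns and the ',' field delimiter used above could look like this:
-- Sketch only: must match the hdfswriter columns and fieldDelimiter above
CREATE EXTERNAL TABLE IF NOT EXISTS ods.ods_trade_product_category (
  catId      INT,
  parentId   INT,
  catName    STRING,
  isShow     TINYINT,
  sortNum    INT,
  isDel      TINYINT,
  createTime STRING,
  `level`    TINYINT    -- backticked only to stay clear of keywords
)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/data/trade.db/product_category/';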
Shop table
wzk_shops => ods.ods_trade_shops
vim /opt/wzk/datax/shops.json
The content to create is as follows:
{"job": {"setting": {"speed": {"channel": 1},"errorLimit": {"record": 0}},"content": [{"reader": {"name": "mysqlreader","parameter": {"username": "hive","password": "hive@wzk.icu","column": ["shopId","userId","areaId","shopName","shopLevel","status","createTime","modifyTime"],"connection": [{"table": ["wzk_shops"],"jdbcUrl": ["jdbc:mysql://h122.wzk.icu:3306/ebiz"]}]}},"writer": {"name": "hdfswriter","parameter": {"defaultFS": "hdfs://h121.wzk.icu:9000","fileType": "text","path": "/user/data/trade.db/shops/dt=$do_date","fileName": "shops_$do_date","column": [{"name": "shopId","type": "INT"},{"name": "userId","type": "INT"},{"name": "areaId","type": "INT"},{"name": "shopName","type": "STRING"},{"name": "shopLevel","type": "TINYINT"},{"name": "status","type": "TINYINT"},{"name": "createTime","type": "STRING"},{"name": "modifyTime","type": "STRING"}],"writeMode": "append","fieldDelimiter": ","}}}]}
}
The corresponding screenshot is shown below.
To load the data, run the following commands:
do_date='2020-07-02'
# Create the directory
hdfs dfs -mkdir -p /user/data/trade.db/shops/dt=$do_date
# Migrate the data
python $DATAX_HOME/bin/datax.py -p "-Ddo_date=$do_date" /opt/wzk/datax/shops.json
# Load the data
# The Hive table does not exist yet; run this later
hive -e "alter table ods.ods_trade_shops add partition(dt='$do_date')"
The result of DataX importing the database data into HDFS is shown below.
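Before the Hive tables exist, the files DataX produced can still be sanity-checked directly on HDFS. This is an optional step, reusing the do_date variable set in the script above:
# List the partition directory written by DataX
hdfs dfs -ls /user/data/trade.db/shops/dt=$do_date
# Peek at a few rows to confirm the ',' delimiter and column order
hdfs dfs -cat /user/data/trade.db/shops/dt=$do_date/* | head -n 5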
Shop regional organization table
wzk_shop_admin_org => ods.ods_trade_shop_admin_org
vim /opt/wzk/datax/shop_org.json
The content to write is shown below:
{"job": {"setting": {"speed": {"channel": 1},"errorLimit": {"record": 0}},"content": [{"reader": {"name": "mysqlreader","parameter": {"username": "hive","password": "hive@wzk.icu","column": ["id","parentId","orgName","orgLevel","isDelete","createTime","updateTime","isShow","orgType"],"connection": [{"table": ["wzk_shop_admin_org"],"jdbcUrl": ["jdbc:mysql://h122.wzk.icu:3306/ebiz"]}]}},"writer": {"name": "hdfswriter","parameter": {"defaultFS": "hdfs://h121.wzk.icu:9000","fileType": "text","path": "/user/data/trade.db/shop_org/dt=$do_date","fileName": "shop_admin_org_$do_date.dat","column": [{"name": "id","type": "INT"},{"name": "parentId","type": "INT"},{"name": "orgName","type": "STRING"},{"name": "orgLevel","type": "TINYINT"},{"name": "isDelete","type": "TINYINT"},{"name": "createTime","type": "STRING"},{"name": "updateTime","type": "STRING"},{"name": "isShow","type": "TINYINT"},{"name": "orgType","type": "TINYINT"}],"writeMode": "append","fieldDelimiter": ","}}}]}
}
The corresponding screenshot is shown below.
The data-loading script:
do_date='2020-07-01'
# Create the directory
hdfs dfs -mkdir -p /user/data/trade.db/shop_org/dt=$do_date
# Migrate the data
python $DATAX_HOME/bin/datax.py -p "-Ddo_date=$do_date" /opt/wzk/datax/shop_org.json
# Load the data
# The Hive table does not exist yet; run this later
hive -e "alter table ods.ods_trade_shop_admin_org add partition(dt='$do_date')"
The result is shown below: the data has been loaded from the database into HDFS.
Payment method table
wzk_payments => ods.ods_trade_payments
vim /opt/wzk/datax/payments.json
The corresponding content is as follows:
{"job": {"setting": {"speed": {"channel": 1},"errorLimit": {"record": 0}},"content": [{"reader": {"name": "mysqlreader","parameter": {"username": "hive","password": "hive@wzk.icu","column": ["id","payMethod","payName","description","payOrder","online"],"connection": [{"table": ["wzk_payments"],"jdbcUrl": ["jdbc:mysql://h122.wzk.icu:3306/ebiz"]}]}},"writer": {"name": "hdfswriter","parameter": {"defaultFS": "hdfs://h121.wzk.icu:9000","fileType": "text","path": "/user/data/trade.db/payments/dt=$do_date","fileName": "payments_$do_date.dat","column": [{"name": "id","type": "INT"},{"name": "payMethod","type": "STRING"},{"name": "payName","type": "STRING"},{"name": "description","type": "STRING"},{"name": "payOrder","type": "INT"},{"name": "online","type": "TINYINT"}],"writeMode": "append","fieldDelimiter": ","}}}]}
}
The corresponding screenshot is shown below.
The data import runs as follows:
do_date='2020-07-01'
# Create the directory
hdfs dfs -mkdir -p /user/data/trade.db/payments/dt=$do_date
# Migrate the data
python $DATAX_HOME/bin/datax.py -p "-Ddo_date=$do_date" /opt/wzk/datax/payments.json
# Load the data
# The Hive table does not exist yet; run this later
hive -e "alter table ods.ods_trade_payments add partition(dt='$do_date')"
The data has now been loaded from MySQL into HDFS.
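Since the four full-table jobs differ only in the JSON file and the target directory, they can be driven by one small wrapper. The following is just a sketch based on the paths and file names used above; it defaults do_date to yesterday and leaves out the "alter table ... add partition" statements, which are run separately once the ODS tables exist:
#!/bin/bash
# Sketch: run all four full-table DataX jobs for one day
# do_date defaults to yesterday if not passed as the first argument
do_date=${1:-$(date -d "-1 day" +%F)}

for tbl in product_category shops shop_org payments; do
  # Create the target partition directory on HDFS
  hdfs dfs -mkdir -p /user/data/trade.db/${tbl}/dt=${do_date}
  # Run the matching DataX job, passing the date into the JSON placeholder
  python $DATAX_HOME/bin/datax.py -p "-Ddo_date=${do_date}" /opt/wzk/datax/${tbl}.json
done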