Python Async Framework Showdown: FastAPI, Sanic, Tornado vs. Go's Gin

1. Introduction

Asynchronous programming plays a key role in building high-performance web applications, and FastAPI, Sanic, and Tornado all claim excellent performance. This article puts those claims to the test with load benchmarks and compares the frameworks against Go's Gin framework to see how they really differ.

2. Environment Setup

System environment configuration

Programming languages

Language | Version | Website/GitHub
Python | 3.10.12 | https://www.python.org/
Go | 1.20.5 | https://go.dev/

Load-testing tools

Tool | Description | Website/GitHub
ab | Apache's load-testing tool, simple to use | https://httpd.apache.org/docs/2.4/programs/ab.html
wrk | High-performance multi-threaded load-testing tool | https://github.com/wg/wrk
JMeter | Full-featured load/stress testing tool | https://github.com/apache/jmeter

Here wrk is used for the load tests; on macOS it can be installed quickly with brew:

brew install wrk

On Windows, installing wrk is easiest through WSL (Windows Subsystem for Linux); alternatively, switch to another load-testing tool such as JMeter.

Web frameworks

Framework | Description | Version tested | Website/GitHub
FastAPI | High-performance Python web framework | 0.103.1 | https://fastapi.tiangolo.com/
Sanic | Asynchronous Python web server and framework | 23.6.0 | https://sanic.dev/zh/
Tornado | Non-blocking Python web framework | 6.3.3 | https://www.tornadoweb.org/en/stable/
Gin | Go web framework | 1.9.1 | https://gin-gonic.com/
Fiber | todo | todo | https://gofiber.io/
Flask | todo | todo | https://github.com/pallets/flask
Django | todo | todo | https://www.djangoproject.com/

Database configuration

Database | Description | Version tested | Client libraries
MySQL | Relational database | 8.0 | sqlalchemy + aiomysql
Redis | NoSQL key-value store | 7.2 | aioredis

3. HTTP Load Testing with wrk

FastAPI

Plain HTTP request benchmark

Install dependencies

pip install fastapi==0.103.1
pip install uvicorn==0.23.2

Write the test route

from fastapi import FastAPI

app = FastAPI(summary="fastapi performance test")


@app.get(path="/http/fastapi/test")
async def fastapi_test():
    return {"code": 0, "message": "fastapi_http_test", "data": {}}

Run it with Uvicorn; here it is deployed with four worker processes:

uvicorn fastapi_test:app --log-level critical --port 8000 --workers 4

wrk load test

Use 20 threads, open 500 connections, and keep sending requests for 30 seconds:

wrk -t20 -d30s -c500 http://127.0.0.1:8000/http/fastapi/test

Results

~ wrk -t20 -d30s -c500 http://127.0.0.1:8000/http/fastapi/test
Running 30s test @ http://127.0.0.1:8000/http/fastapi/test
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.06ms    2.89ms  36.65ms   85.34%
    Req/Sec     3.85k     3.15k   41.59k    70.05%
  2298746 requests in 30.11s, 383.64MB read
  Socket errors: connect 267, read 100, write 0, timeout 0
Requests/sec:  76357.51
Transfer/sec:     12.74MB

Thread Stats shows the per-thread statistics averaged over the 20 load-generating threads:

  • Avg (Latency): the average response latency per thread
  • Stdev (Latency): the standard deviation of latency per thread
  • Max (Latency): the maximum latency seen by any thread
  • +/- Stdev (Latency): the share of latency samples falling within one standard deviation of the mean
  • Req/Sec: the number of requests completed per second per thread
  • +/- Stdev (Req/Sec): the spread of the per-thread request rate

Socket errors: connect 267, read 100, write 0, timeout 0 is the tally of socket errors during the run:

  • connect: connection errors; 267 connection attempts failed in total (see the note after this list)
  • read: read errors; 100 reads failed
  • write: write errors; 0 writes failed
  • timeout: timeouts; 0 requests timed out
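One observation not in the original write-up: the connect 267 figure is identical in every run below, regardless of framework, which suggests wrk could not open all 500 connections in the first place. On macOS this is commonly caused by the default per-process open-file limit on the load-generating machine rather than by the server under test. If so, the effective concurrency is the same in every test (roughly 233 connections), so the comparison between frameworks remains fair.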

MySQL query benchmark

Next, a quick look at how things behave when a database query is involved.

First, add the extra project dependency:

pip install "hui-tools[db-orm, db-redis]==0.2.0"

hui-tools is a utility library I maintain myself; contributions are welcome: https://github.com/HuiDBK/py-tools

#!/usr/bin/python3
# -*- coding: utf-8 -*-
# @Author: Hui
# @Desc: { FastAPI performance test }
# @Date: 2023/09/10 12:24
import uvicorn
from fastapi import FastAPI
from py_tools.connections.db.mysql import SQLAlchemyManager, DBManager

app = FastAPI(summary="fastapi performance test")


async def init_orm():
    db_client = SQLAlchemyManager(
        host="127.0.0.1",
        port=3306,
        user="root",
        password="123456",
        db_name="house_rental"
    )
    db_client.init_mysql_engine()
    DBManager.init_db_client(db_client)


@app.on_event("startup")
async def startup_event():
    """Prepare the environment when the service starts"""
    await init_orm()


@app.get(path="/http/fastapi/mysql/test")
async def fastapi_mysql_query_test():
    sql = "select id, username, role from user_basic where username='hui'"
    ret = await DBManager().run_sql(sql)
    column_names = [desc[0] for desc in ret.cursor.description]
    result_tuple = ret.fetchone()
    user_info = dict(zip(column_names, result_tuple))
    return {"code": 0, "message": "fastapi_http_test", "data": {**user_info}}

wrk load test

wrk -t20 -d30s -c500 http://127.0.0.1:8000/http/fastapi/mysql/test
~ wrk -t20 -d30s -c500 http://127.0.0.1:8000/http/fastapi/mysql/test
Running 30s test @ http://127.0.0.1:8000/http/fastapi/mysql/test
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    38.81ms   19.35ms 226.42ms   76.86%
    Req/Sec   317.65    227.19   848.00     57.21%
  180255 requests in 30.09s, 36.95MB read
  Socket errors: connect 267, read 239, write 0, timeout 0
  Non-2xx or 3xx responses: 140
Requests/sec:   5989.59
Transfer/sec:      1.23MB

Adding just one simple database query drops QPS from 76357.51 to 5989.59, a drop of more than 10x. The real reason is that a single MySQL instance simply cannot keep up with that many requests; the concurrency bottleneck is the database. Next, let's add a Redis cache and see how much concurrency improves compared with querying MySQL directly.
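A side note not covered in the original benchmark: the per-process connection pool size also caps DB-bound throughput. The sketch below uses plain SQLAlchemy asyncio (not the hui-tools wrapper used in this article) just to show where such pool settings live; the pool numbers and connection string are illustrative placeholders, not tuned values:

from sqlalchemy import text
from sqlalchemy.ext.asyncio import create_async_engine

# Illustrative pool settings; each uvicorn worker gets its own pool,
# so total MySQL connections = workers * (pool_size + max_overflow).
engine = create_async_engine(
    "mysql+aiomysql://root:123456@127.0.0.1:3306/house_rental",
    pool_size=20,       # connections kept open per worker process
    max_overflow=10,    # extra connections allowed under bursts
    pool_recycle=3600,  # recycle connections hourly to avoid stale sockets
)

async def query_user(username: str) -> dict:
    # Run a single parameterized read on a pooled connection
    async with engine.connect() as conn:
        result = await conn.execute(
            text("select id, username, role from user_basic where username = :name"),
            {"name": username},
        )
        row = result.mappings().first()
        return dict(row) if row else {}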

Redis cache query benchmark

#!/usr/bin/python3
# -*- coding: utf-8 -*-
# @Author: Hui
# @Desc: { FastAPI performance test }
# @Date: 2023/09/10 12:24
import json
from datetime import timedelta

import uvicorn
from fastapi import FastAPI
from py_tools.connections.db.mysql import SQLAlchemyManager, DBManager
from py_tools.connections.db.redis_client import RedisManager

app = FastAPI(summary="fastapi performance test")


async def init_orm():
    db_client = SQLAlchemyManager(
        host="127.0.0.1",
        port=3306,
        user="root",
        password="123456",
        db_name="house_rental"
    )
    db_client.init_mysql_engine()
    DBManager.init_db_client(db_client)


async def init_redis():
    RedisManager.init_redis_client(
        async_client=True,
        host="127.0.0.1",
        port=6379,
        db=0,
    )


@app.on_event("startup")
async def startup_event():
    """Prepare the environment when the service starts"""
    await init_orm()
    await init_redis()


@app.get(path="/http/fastapi/redis/{username}")
async def fastapi_redis_query_test(username: str):
    # Check the cache first
    user_info = await RedisManager.client.get(name=username)
    if user_info:
        user_info = json.loads(user_info)
        return {"code": 0, "message": "fastapi_redis_test", "data": {**user_info}}

    sql = f"select id, username, role from user_basic where username='{username}'"
    ret = await DBManager().run_sql(sql)
    column_names = [desc[0] for desc in ret.cursor.description]
    result_tuple = ret.fetchone()
    user_info = dict(zip(column_names, result_tuple))

    # Store the result in the Redis cache for 3 minutes
    await RedisManager.client.set(
        name=user_info.get("username"),
        value=json.dumps(user_info),
        ex=timedelta(minutes=3)
    )
    return {"code": 0, "message": "fastapi_redis_test", "data": {**user_info}}


if __name__ == '__main__':
    uvicorn.run(app)

Run

wrk -t20 -d30s -c500 http://127.0.0.1:8000/http/fastapi/redis/hui

Results

~ wrk -t20 -d30s -c500 http://127.0.0.1:8000/http/fastapi/redis/hui
Running 30s test @ http://127.0.0.1:8000/http/fastapi/redis/hui
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.60ms    5.59ms 126.63ms   88.41%
    Req/Sec     1.22k     0.91k    3.45k    57.54%
  730083 requests in 30.10s, 149.70MB read
  Socket errors: connect 267, read 101, write 0, timeout 0
Requests/sec:  24257.09
Transfer/sec:      4.97MB

Cache contents

With the Redis cache added, concurrency improved quite a bit. In real business development, data that is read often but modified rarely is a good candidate for caching.

Benchmark summary

Test type | Duration | Threads | Connections | Total requests | QPS | Avg latency | Max latency | Data read | Transfer/sec
Plain request | 30s | 20 | 500 | 2298746 | 76357.51 | 3.06ms | 36.65ms | 383.64MB | 12.74MB
MySQL query | 30s | 20 | 500 | 180255 | 5989.59 | 38.81ms | 226.42ms | 36.95MB | 1.23MB
Redis cache | 30s | 20 | 500 | 730083 | 24257.09 | 9.60ms | 126.63ms | 149.70MB | 4.97MB

Putting a Redis cache in front of the MySQL query lifted QPS from 5989.59 to 24257.09, an increase of more than 3x. For data that is read often and changed rarely, an appropriate cache can greatly improve system throughput. For the remaining frameworks I'll go straight to the code and the result metrics without repeating the step-by-step walkthrough.

Sanic

The benchmarking procedure is identical, so rather than going endpoint by endpoint as with FastAPI, I'll write everything up front, run all the tests, and then look at the results.

Install dependencies

pip install sanic==23.6.0
pip install "hui-tools[db-orm, db-redis]==0.2.0"

Write the test routes

#!/usr/bin/python3
# -*- coding: utf-8 -*-
# @Author: Hui
# @Desc: { Sanic performance test }
# @Date: 2023/09/10 12:24
import json
from datetime import timedelta

from py_tools.connections.db.mysql import SQLAlchemyManager, DBManager
from py_tools.connections.db.redis_client import RedisManager
from sanic import Sanic
from sanic.response import json as sanic_json

app = Sanic("sanic_test")


async def init_orm():
    db_client = SQLAlchemyManager(
        host="127.0.0.1",
        port=3306,
        user="root",
        password="123456",
        db_name="house_rental"
    )
    db_client.init_mysql_engine()
    DBManager.init_db_client(db_client)


async def init_redis():
    RedisManager.init_redis_client(
        async_client=True,
        host="127.0.0.1",
        port=6379,
        db=0,
    )


@app.listener('before_server_start')
async def server_start_event(app, loop):
    await init_orm()
    await init_redis()


@app.get(uri="/http/sanic/test")
async def sanic_test(req):
    return sanic_json({"code": 0, "message": "sanic_http_test", "data": {}})


@app.get(uri="/http/sanic/mysql/test")
async def sanic_mysql_query_test(req):
    sql = "select id, username, role from user_basic where username='hui'"
    ret = await DBManager().run_sql(sql)
    column_names = [desc[0] for desc in ret.cursor.description]
    result_tuple = ret.fetchone()
    user_info = dict(zip(column_names, result_tuple))
    return sanic_json({"code": 0, "message": "sanic_mysql_test", "data": {**user_info}})


@app.get(uri="/http/sanic/redis/<username>")
async def sanic_redis_query_test(req, username: str):
    # Check the cache first
    user_info = await RedisManager.client.get(name=username)
    if user_info:
        user_info = json.loads(user_info)
        return sanic_json({"code": 0, "message": "sanic_redis_test", "data": {**user_info}})

    sql = f"select id, username, role from user_basic where username='{username}'"
    ret = await DBManager().run_sql(sql)
    column_names = [desc[0] for desc in ret.cursor.description]
    result_tuple = ret.fetchone()
    user_info = dict(zip(column_names, result_tuple))

    # Store the result in the Redis cache for 3 minutes
    await RedisManager.client.set(
        name=user_info.get("username"),
        value=json.dumps(user_info),
        ex=timedelta(minutes=3)
    )
    return sanic_json({"code": 0, "message": "sanic_redis_test", "data": {**user_info}})


def main():
    app.run()


if __name__ == '__main__':
    # sanic sanic_test.app -p 8001 -w 4 --access-log=False
    main()

Run

Sanic ships with a production-ready web server built in, so it can be started directly:

sanic python.sanic_test.app -p 8001 -w 4 --access-log=False
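For reference, the same four-worker, access-log-disabled setup can also be started from code instead of the CLI. This is a sketch based on Sanic's documented app.run() parameters, not something exercised in this benchmark:

if __name__ == "__main__":
    # Equivalent of: sanic sanic_test.app -p 8001 -w 4 --access-log=False
    app.run(host="127.0.0.1", port=8001, workers=4, access_log=False)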

Plain HTTP request benchmark

Again four worker processes are started; let's see how it performs.

wrk -t20 -d30s -c500 http://127.0.0.1:8001/http/sanic/test

Results

~ wrk -t20 -d30s -c500 http://127.0.0.1:8001/http/sanic/test
Running 30s test @ http://127.0.0.1:8001/http/sanic/test
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.93ms    2.20ms  61.89ms   91.96%
    Req/Sec     6.10k     3.80k   27.08k    69.37%
  3651099 requests in 30.10s, 497.92MB read
  Socket errors: connect 267, read 163, write 0, timeout 0
Requests/sec: 121286.47
Transfer/sec:     16.54MB

Sanic really is fast, probably among the top performers in the Python world.

MySQL query benchmark

Run

wrk -t20 -d30s -c500 http://127.0.0.1:8001/http/sanic/mysql/test

Results

~ wrk -t20 -d30s -c500 http://127.0.0.1:8001/http/sanic/mysql/test
Running 30s test @ http://127.0.0.1:8001/http/sanic/mysql/test
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    35.22ms   21.75ms 264.37ms   78.52%
    Req/Sec   333.14    230.95     1.05k    68.99%
  198925 requests in 30.10s, 34.72MB read
  Socket errors: connect 267, read 146, write 0, timeout 0
Requests/sec:   6609.65
Transfer/sec:      1.15MB

Redis cache query benchmark

Run

wrk -t20 -d30s -c500 http://127.0.0.1:8001/http/sanic/redis/hui

Results

~ wrk -t20 -d30s -c500 http://127.0.0.1:8001/http/sanic/redis/hui
Running 30s test @ http://127.0.0.1:8001/http/sanic/redis/hui
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.91ms    4.13ms 217.47ms   95.62%
    Req/Sec     1.71k     0.88k    4.28k    68.05%
  1022884 requests in 30.09s, 178.52MB read
  Socket errors: connect 267, read 163, write 0, timeout 0
Requests/sec:  33997.96
Transfer/sec:      5.93MB

Benchmark summary

Test type | Duration | Threads | Connections | Total requests | QPS | Avg latency | Max latency | Data read | Transfer/sec
Plain request | 30s | 20 | 500 | 3651099 | 121286.47 | 1.93ms | 61.89ms | 497.92MB | 16.54MB
MySQL query | 30s | 20 | 500 | 198925 | 6609.65 | 35.22ms | 264.37ms | 34.72MB | 1.15MB
Redis cache | 30s | 20 | 500 | 1022884 | 33997.96 | 6.91ms | 217.47ms | 178.52MB | 5.93MB

Tornado

Install dependencies

pip install tornado==6.3.3
pip install gunicorn==21.2.0
pip install "hui-tools[db-orm, db-redis]==0.2.0"

Write the test routes

#!/usr/bin/python3
# -*- coding: utf-8 -*-
# @Author: Hui
# @Desc: { Tornado performance test }
# @Date: 2023/09/20 22:42
import asyncio
from datetime import timedelta
import json

import tornado.web
import tornado.ioloop
from tornado.httpserver import HTTPServer
from py_tools.connections.db.mysql import SQLAlchemyManager, DBManager
from py_tools.connections.db.redis_client import RedisManager


class TornadoBaseHandler(tornado.web.RequestHandler):
    pass


class TornadoTestHandler(TornadoBaseHandler):

    async def get(self):
        self.write({"code": 0, "message": "tornado_http_test", "data": {}})


class TornadoMySQLTestHandler(TornadoBaseHandler):

    async def get(self):
        sql = "select id, username, role from user_basic where username='hui'"
        ret = await DBManager().run_sql(sql)
        column_names = [desc[0] for desc in ret.cursor.description]
        result_tuple = ret.fetchone()
        user_info = dict(zip(column_names, result_tuple))
        self.write({"code": 0, "message": "tornado_mysql_test", "data": {**user_info}})


class TornadoRedisTestHandler(TornadoBaseHandler):

    async def get(self, username):
        # Check the cache first
        user_info = await RedisManager.client.get(name=username)
        if user_info:
            user_info = json.loads(user_info)
            self.write({"code": 0, "message": "tornado_redis_test", "data": {**user_info}})
            return

        sql = f"select id, username, role from user_basic where username='{username}'"
        ret = await DBManager().run_sql(sql)
        column_names = [desc[0] for desc in ret.cursor.description]
        result_tuple = ret.fetchone()
        user_info = dict(zip(column_names, result_tuple))

        # Store the result in the Redis cache for 3 minutes
        await RedisManager.client.set(
            name=user_info.get("username"),
            value=json.dumps(user_info),
            ex=timedelta(minutes=3),
        )
        self.write({"code": 0, "message": "tornado_redis_test", "data": {**user_info}})


def init_orm():
    db_client = SQLAlchemyManager(
        host="127.0.0.1",
        port=3306,
        user="root",
        password="123456",
        db_name="house_rental",
    )
    db_client.init_mysql_engine()
    DBManager.init_db_client(db_client)


def init_redis():
    RedisManager.init_redis_client(
        async_client=True,
        host="127.0.0.1",
        port=6379,
        db=0,
    )


def init_setup():
    init_orm()
    init_redis()


def make_app():
    init_setup()
    return tornado.web.Application([
        (r"/http/tornado/test", TornadoTestHandler),
        (r"/http/tornado/mysql/test", TornadoMySQLTestHandler),
        (r"/http/tornado/redis/(.*)", TornadoRedisTestHandler),
    ])


app = make_app()


async def main():
    # init_setup()
    # app = make_app()
    server = HTTPServer(app)
    server.bind(8002)
    # server.start(4)  # start 4 workers
    # app.listen(8002)
    await asyncio.Event().wait()


if __name__ == "__main__":
    # gunicorn -k tornado -w=4 -b=127.0.0.1:8002 python.tornado_test:app
    asyncio.run(main())

Run the Tornado service under gunicorn with four tornado workers:

gunicorn -k tornado -w=4 -b=127.0.0.1:8002 python.tornado_test:app

wrk load tests

wrk -t20 -d30s -c500 http://127.0.0.1:8002/http/tornado/test
wrk -t20 -d30s -c500 http://127.0.0.1:8002/http/tornado/mysql/test
wrk -t20 -d30s -c500 http://127.0.0.1:8002/http/tornado/redis/hui

Results

~ wrk -t20 -d30s -c500 http://127.0.0.1:8002/http/tornado/test
Running 30s test @ http://127.0.0.1:8002/http/tornado/test
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.54ms    1.92ms  34.75ms   63.85%
    Req/Sec     1.79k     1.07k    3.83k    56.23%
  1068205 requests in 30.07s, 280.15MB read
  Socket errors: connect 267, read 98, write 0, timeout 0
Requests/sec:  35525.38
Transfer/sec:      9.32MB

~ wrk -t20 -d30s -c500 http://127.0.0.1:8002/http/tornado/mysql/test
Running 30s test @ http://127.0.0.1:8002/http/tornado/mysql/test
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    41.29ms   16.51ms 250.81ms   71.45%
    Req/Sec   283.47    188.81     0.95k    65.31%
  169471 requests in 30.09s, 51.88MB read
  Socket errors: connect 267, read 105, write 0, timeout 0
Requests/sec:   5631.76
Transfer/sec:      1.72MB

~ wrk -t20 -d30s -c500 http://127.0.0.1:8002/http/tornado/redis/hui
Running 30s test @ http://127.0.0.1:8002/http/tornado/redis/hui
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.69ms    3.83ms 125.75ms   78.27%
    Req/Sec     1.00k   537.85     2.20k    64.34%
  599840 requests in 30.07s, 183.63MB read
  Socket errors: connect 267, read 97, write 0, timeout 0
  Non-2xx or 3xx responses: 2
Requests/sec:  19947.28
Transfer/sec:      6.11MB

Gin

Install dependencies

go get "github.com/gin-gonic/gin"
go get "github.com/go-redis/redis"
go get "gorm.io/driver/mysql"
go get "gorm.io/gorm"

Write the code

package main

import (
	"encoding/json"
	"time"

	"github.com/gin-gonic/gin"
	"github.com/go-redis/redis"
	"gorm.io/driver/mysql"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

var (
	db          *gorm.DB
	redisClient *redis.Client
)

type UserBasic struct {
	Id       int    `json:"id"`
	Username string `json:"username"`
	Role     string `json:"role"`
}

func (UserBasic) TableName() string {
	return "user_basic"
}

func initDB() *gorm.DB {
	var err error
	db, err = gorm.Open(mysql.Open("root:123456@/house_rental"), &gorm.Config{
		// Set LogMode to logger.Silent to disable log output
		Logger: logger.Default.LogMode(logger.Silent),
	})
	if err != nil {
		panic("failed to connect database")
	}
	sqlDB, err := db.DB()

	// SetMaxIdleConns sets the maximum number of connections in the idle connection pool.
	sqlDB.SetMaxIdleConns(10)

	// SetMaxOpenConns sets the maximum number of open connections to the database.
	sqlDB.SetMaxOpenConns(30)

	// SetConnMaxLifetime sets the maximum amount of time a connection may be reused.
	sqlDB.SetConnMaxLifetime(time.Hour)

	return db
}

func initRedis() *redis.Client {
	redisClient = redis.NewClient(&redis.Options{
		Addr: "localhost:6379",
	})
	return redisClient
}

func jsonTestHandler(c *gin.Context) {
	c.JSON(200, gin.H{
		"code": 0, "message": "gin json", "data": make(map[string]any),
	})
}

func mysqlQueryHandler(c *gin.Context) {
	// Query the user
	var user UserBasic
	db.First(&user, "username = ?", "hui")
	//fmt.Println(user)

	// Return the response
	c.JSON(200, gin.H{
		"code":    0,
		"message": "go mysql test",
		"data":    user,
	})
}

func cacheQueryHandler(c *gin.Context) {
	// Try the Redis cache first
	username := "hui" // username to look up
	cachedUser, err := redisClient.Get(username).Result()
	if err == nil {
		// Cache hit: return the cached result to the client
		var user UserBasic
		_ = json.Unmarshal([]byte(cachedUser), &user)
		c.JSON(200, gin.H{
			"code":    0,
			"message": "gin redis test",
			"data":    user,
		})
		return
	}

	// Cache miss: query the database
	var user UserBasic
	db.First(&user, "username = ?", username)

	// Save the query result into the Redis cache
	userJSON, _ := json.Marshal(user)
	redisClient.Set(username, userJSON, time.Minute*2)

	// Return the response
	c.JSON(200, gin.H{
		"code":    0,
		"message": "gin redis test",
		"data":    user,
	})
}

func initDao() {
	initDB()
	initRedis()
}

func main() {
	//r := gin.Default()
	r := gin.New()
	gin.SetMode(gin.ReleaseMode) // release mode
	initDao()
	r.GET("/http/gin/test", jsonTestHandler)
	r.GET("/http/gin/mysql/test", mysqlQueryHandler)
	r.GET("/http/gin/redis/test", cacheQueryHandler)
	r.Run("127.0.0.1:8003")
}

wrk load tests

wrk -t20 -d30s -c500 http://127.0.0.1:8003/http/gin/test
wrk -t20 -d30s -c500 http://127.0.0.1:8003/http/gin/mysql/test
wrk -t20 -d30s -c500 http://127.0.0.1:8003/http/gin/redis/test

Results

~ wrk -t20 -d30s -c500 http://127.0.0.1:8003/http/gin/test
Running 30s test @ http://127.0.0.1:8003/http/gin/test
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.45ms    5.68ms 186.48ms   91.70%
    Req/Sec     6.36k     5.62k   53.15k    83.99%
  3787808 requests in 30.10s, 592.42MB read
  Socket errors: connect 267, read 95, write 0, timeout 0
Requests/sec: 125855.41
Transfer/sec:     19.68MB

~ wrk -t20 -d30s -c500 http://127.0.0.1:8003/http/gin/mysql/test
Running 30s test @ http://127.0.0.1:8003/http/gin/mysql/test
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    40.89ms   83.70ms   1.12s    90.99%
    Req/Sec   522.33    322.88     1.72k    64.84%
  308836 requests in 30.10s, 61.26MB read
  Socket errors: connect 267, read 100, write 0, timeout 0
Requests/sec:  10260.63
Transfer/sec:      2.04MB

~ wrk -t20 -d30s -c500 http://127.0.0.1:8003/http/gin/redis/test
Running 30s test @ http://127.0.0.1:8003/http/gin/redis/test
  20 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.18ms    1.76ms  79.40ms   81.93%
    Req/Sec     1.63k     1.09k    4.34k    62.59%
  972272 requests in 30.10s, 193.79MB read
  Socket errors: connect 267, read 104, write 0, timeout 0
Requests/sec:  32305.30
Transfer/sec:      6.44MB

4. Summary

Framework | Test type | Duration | Threads | Connections | Total requests | QPS | Avg latency | Max latency | Data read | Transfer/sec
FastAPI | Plain request | 30s | 20 | 500 | 2298746 (~2.3M) | 76357.51 (~76k) | 3.06ms | 36.65ms | 383.64MB | 12.74MB
FastAPI | MySQL query | 30s | 20 | 500 | 180255 (~180k) | 5989.59 (~5.9k) | 38.81ms | 226.42ms | 36.95MB | 1.23MB
FastAPI | Redis cache | 30s | 20 | 500 | 730083 (~730k) | 24257.09 (~24k) | 9.60ms | 126.63ms | 149.70MB | 4.97MB
Sanic | Plain request | 30s | 20 | 500 | 3651099 (~3.65M) | 121286.47 (~121k) | 1.93ms | 61.89ms | 497.92MB | 16.54MB
Sanic | MySQL query | 30s | 20 | 500 | 198925 (~199k) | 6609.65 (~6.6k) | 35.22ms | 264.37ms | 34.72MB | 1.15MB
Sanic | Redis cache | 30s | 20 | 500 | 1022884 (~1.02M) | 33997.96 (~34k) | 6.91ms | 217.47ms | 178.52MB | 5.93MB
Tornado | Plain request | 30s | 20 | 500 | 1068205 (~1.07M) | 35525.38 (~36k) | 6.54ms | 34.75ms | 280.15MB | 9.32MB
Tornado | MySQL query | 30s | 20 | 500 | 169471 (~169k) | 5631.76 (~5.6k) | 41.29ms | 250.81ms | 51.88MB | 1.72MB
Tornado | Redis cache | 30s | 20 | 500 | 599840 (~600k) | 19947.28 (~20k) | 11.69ms | 125.75ms | 183.63MB | 6.11MB
Gin | Plain request | 30s | 20 | 500 | 3787808 (~3.79M) | 125855.41 (~126k) | 2.45ms | 186.48ms | 592.42MB | 19.68MB
Gin | MySQL query | 30s | 20 | 500 | 308836 (~309k) | 10260.63 (~10k) | 40.89ms | 1.12s | 61.26MB | 2.04MB
Gin | Redis cache | 30s | 20 | 500 | 972272 (~972k) | 32305.30 (~32k) | 7.18ms | 79.40ms | 193.79MB | 6.44MB

Performance

From a pure performance standpoint, the frameworks rank as follows:

Gin > Sanic > FastAPI > Tornado

Gin: the best on plain requests, with the highest QPS and throughput. On the MySQL query it also leads at roughly 10k QPS, versus 5-6k QPS for the Python frameworks, but its maximum latency is also the highest: the MySQL endpoint peaked at 1.12s. Gin can accept that much concurrency, yet a single MySQL instance still cannot keep up with it.

Another important point: because of the GIL, CPython threads cannot fully use multiple CPU cores, so every Python framework here had to be run as four separate processes. That costs far more resources than Gin: Go's GMP scheduler handles concurrency natively with lightweight goroutines.

Note: when using asyncio in Python, never perform blocking synchronous IO inside a coroutine. It stalls the main thread's event loop and performance drops dramatically. If there is no async library for the operation, hand it off to a thread instead.
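As a small illustration of that note (a sketch, not part of the benchmark code): a synchronous call such as time.sleep or a blocking driver call freezes the event loop for every request, while pushing it onto a worker thread with asyncio.to_thread (available since Python 3.9) keeps the loop responsive:

import asyncio
import time

def blocking_io():
    # Stand-in for any synchronous call that has no async driver
    time.sleep(0.05)
    return "done"

async def bad_handler():
    # Anti-pattern: blocks the event loop for 50 ms, stalling every other coroutine
    return blocking_io()

async def good_handler():
    # The blocking call runs in a thread; the event loop keeps serving other requests
    return await asyncio.to_thread(blocking_io)

async def main():
    # Ten concurrent "requests" finish in roughly one sleep interval, not ten
    results = await asyncio.gather(*(good_handler() for _ in range(10)))
    print(len(results), "handled without blocking the loop")

if __name__ == "__main__":
    asyncio.run(main())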

Overall assessment

Performance is not the only criterion: a framework's community activity, ecosystem, documentation quality, and your team's familiarity with it also matter and should factor into the choice.

The final decision should be based on your specific needs and project requirements. If performance is one of the most important factors, Sanic or one of the Go frameworks is a solid choice; if other factors matter more, weigh community support and general fit. Personally, I still really enjoy using FastAPI.

5. Test Source Code

https://github.com/HuiDBK/WebFrameworkPressureTest

There are already benchmarks of web frameworks in many languages on GitHub; if you're interested, take a look: https://web-frameworks-benchmark.netlify.app/result

I'm not sure why the Python numbers there are so low, maybe async wasn't used correctly 😄
