Notes on adapting the HRNet human keypoint detection model to Atlas

Model conversion
Convert the end2end.onnx model to an .om model.
Activate the Ascend toolkit environment before converting:

source /usr/local/Ascend/ascend-toolkit/set_env.sh

Convert the model:

atc --model=end2end.onnx --framework=5 --output=end2end --soc_version=Ascend310P3 

The --soc_version value can be changed to match your Atlas board, e.g. Ascend310P3, Ascend310B1, Ascend310, and so on.
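ATC expects the ONNX graph's input shape to be static (or pinned explicitly with the --input_shape option). Before converting, it can help to confirm what end2end.onnx actually declares. Below is a minimal sketch using the onnx Python package; it is not part of the original article and assumes end2end.onnx sits in the current directory:

import onnx

# Load the exported HRNet ONNX graph and print each declared input shape.
model = onnx.load("end2end.onnx")
for inp in model.graph.input:
    dims = []
    for d in inp.type.tensor_type.shape.dim:
        # A dimension carries either a fixed value or a symbolic name (dynamic axis).
        dims.append(d.dim_value if d.dim_value > 0 else d.dim_param or "?")
    print(inp.name, dims)

If any axis prints as symbolic, pass a fixed shape to atc via --input_shape, using the input name reported above.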
Inference code

import logging
import time

import cv2
import numpy as np
from ais_bench.infer.interface import InferSession

model_path = "/tmp/pose/end2_16.om"
IMG_PATH = "/tmp/pose/ztest.jpg"


def print_outputs(outputs):
    """Log the full contents of the model outputs.

    Args:
        outputs (numpy.ndarray): Model output.
    """
    logging.basicConfig(level=logging.INFO, format='%(message)s')
    for i, output_slice in enumerate(outputs):
        logging.info(f"Output slice {i}:")
        for j, row in enumerate(output_slice):
            row_str = ', '.join([f'{x:.6f}' for x in row])
            logging.info(f"[{row_str}]")
        logging.info('')


def bbox_xywh2cs(bbox, aspect_ratio, padding, pixel_std):
    """Transform the bbox format from (x, y, w, h) into (center, scale).

    Args:
        bbox (ndarray): Single bbox in (x, y, w, h)
        aspect_ratio (float): The expected bbox aspect ratio (w over h)
        padding (float): Bbox padding factor that will be multiplied to scale.
            Default: 1.0
        pixel_std (float): The scale normalization factor. Default: 200.0

    Returns:
        tuple: A tuple containing center and scale.
        - np.ndarray[float32](2,): Center of the bbox (x, y).
        - np.ndarray[float32](2,): Scale of the bbox w & h.
    """
    x, y, w, h = bbox[:4]
    center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)

    if w > aspect_ratio * h:
        h = w * 1.0 / aspect_ratio
    elif w < aspect_ratio * h:
        w = h * aspect_ratio

    scale = np.array([w, h], dtype=np.float32) / pixel_std
    scale = scale * padding
    return center, scale


def rotate_point(pt, angle_rad):
    """Rotate a point by an angle.

    Args:
        pt (list[float]): 2 dimensional point to be rotated
        angle_rad (float): rotation angle in radians

    Returns:
        list[float]: Rotated point.
    """
    assert len(pt) == 2
    sn, cs = np.sin(angle_rad), np.cos(angle_rad)
    new_x = pt[0] * cs - pt[1] * sn
    new_y = pt[0] * sn + pt[1] * cs
    rotated_pt = [new_x, new_y]
    return rotated_pt


def _get_3rd_point(a, b):
    """To calculate the affine matrix, three pairs of points are required.

    This function is used to get the 3rd point, given 2D points a & b.
    The 3rd point is defined by rotating vector `a - b` by 90 degrees
    anticlockwise, using b as the rotation center.

    Args:
        a (np.ndarray): point (x, y)
        b (np.ndarray): point (x, y)

    Returns:
        np.ndarray: The 3rd point.
    """
    assert len(a) == 2
    assert len(b) == 2
    direction = a - b
    third_pt = b + np.array([-direction[1], direction[0]], dtype=np.float32)
    return third_pt


def get_affine_transform(center,
                         scale,
                         rot,
                         output_size,
                         shift=(0., 0.),
                         inv=False):
    """Get the affine transform matrix, given the center/scale/rot/output_size.

    Args:
        center (np.ndarray[2, ]): Center of the bounding box (x, y).
        scale (np.ndarray[2, ]): Scale of the bounding box
            wrt [width, height].
        rot (float): Rotation angle (degree).
        output_size (np.ndarray[2, ] | list(2,)): Size of the
            destination heatmaps.
        shift (0-100%): Shift translation ratio wrt the width/height.
            Default (0., 0.).
        inv (bool): Option to inverse the affine transform direction.
            (inv=False: src->dst or inv=True: dst->src)

    Returns:
        np.ndarray: The transform matrix.
    """
    assert len(center) == 2
    assert len(scale) == 2
    assert len(output_size) == 2
    assert len(shift) == 2

    # NOTE: the scale is denormalized with 210 here, whereas transform_preds
    # below uses the usual factor of 200.
    scale_tmp = scale * 210

    shift = np.array(shift)
    src_w = scale_tmp[0]
    dst_w = output_size[0]
    dst_h = output_size[1]

    rot_rad = np.pi * rot / 180
    src_dir = rotate_point([0., src_w * -0.5], rot_rad)
    dst_dir = np.array([0., dst_w * -0.5])

    src = np.zeros((3, 2), dtype=np.float32)
    src[0, :] = center + scale_tmp * shift
    src[1, :] = center + src_dir + scale_tmp * shift
    src[2, :] = _get_3rd_point(src[0, :], src[1, :])

    dst = np.zeros((3, 2), dtype=np.float32)
    dst[0, :] = [dst_w * 0.5, dst_h * 0.5]
    dst[1, :] = np.array([dst_w * 0.5, dst_h * 0.5]) + dst_dir
    dst[2, :] = _get_3rd_point(dst[0, :], dst[1, :])

    if inv:
        trans = cv2.getAffineTransform(np.float32(dst), np.float32(src))
    else:
        trans = cv2.getAffineTransform(np.float32(src), np.float32(dst))

    return trans


def bbox_xyxy2xywh(bbox_xyxy):
    """Transform the bbox format from x1y1x2y2 to xywh.

    Args:
        bbox_xyxy (np.ndarray): Bounding boxes (with scores), shaped (n, 4) or
            (n, 5). (left, top, right, bottom, [score])

    Returns:
        np.ndarray: Bounding boxes (with scores),
            shaped (n, 4) or (n, 5). (left, top, width, height, [score])
    """
    bbox_xywh = bbox_xyxy.copy()
    bbox_xywh[:, 2] = bbox_xywh[:, 2] - bbox_xywh[:, 0]
    bbox_xywh[:, 3] = bbox_xywh[:, 3] - bbox_xywh[:, 1]
    return bbox_xywh


def _get_max_preds(heatmaps):
    """Get keypoint predictions from score maps.

    Note:
        batch_size: N
        num_keypoints: K
        heatmap height: H
        heatmap width: W

    Args:
        heatmaps (np.ndarray[N, K, H, W]): model predicted heatmaps.

    Returns:
        tuple: A tuple containing aggregated results.
        - preds (np.ndarray[N, K, 2]): Predicted keypoint location.
        - maxvals (np.ndarray[N, K, 1]): Scores (confidence) of the keypoints.
    """
    assert isinstance(heatmaps,
                      np.ndarray), ('heatmaps should be numpy.ndarray')
    assert heatmaps.ndim == 4, 'batch_images should be 4-ndim'

    N, K, _, W = heatmaps.shape
    heatmaps_reshaped = heatmaps.reshape((N, K, -1))
    idx = np.argmax(heatmaps_reshaped, 2).reshape((N, K, 1))
    maxvals = np.amax(heatmaps_reshaped, 2).reshape((N, K, 1))

    preds = np.tile(idx, (1, 1, 2)).astype(np.float32)
    preds[:, :, 0] = preds[:, :, 0] % W
    preds[:, :, 1] = preds[:, :, 1] // W

    preds = np.where(np.tile(maxvals, (1, 1, 2)) > 0.0, preds, -1)
    return preds, maxvals


def transform_preds(coords, center, scale, output_size, use_udp=False):
    """Get final keypoint predictions from heatmaps and apply scaling and
    translation to map them back to the image.

    Note:
        num_keypoints: K

    Args:
        coords (np.ndarray[K, ndims]):
            * If ndims=2, coords are predicted keypoint locations.
            * If ndims=4, coords are composed of (x, y, scores, tags)
            * If ndims=5, coords are composed of (x, y, scores, tags,
              flipped_tags)
        center (np.ndarray[2, ]): Center of the bounding box (x, y).
        scale (np.ndarray[2, ]): Scale of the bounding box
            wrt [width, height].
        output_size (np.ndarray[2, ] | list(2,)): Size of the
            destination heatmaps.
        use_udp (bool): Use unbiased data processing.

    Returns:
        np.ndarray: Predicted coordinates in the images.
    """
    assert coords.shape[1] in (2, 4, 5)
    assert len(center) == 2
    assert len(scale) == 2
    assert len(output_size) == 2

    # Recover the scale which is normalized by a factor of 200.
    scale = scale * 200

    if use_udp:
        scale_x = scale[0] / (output_size[0] - 1.0)
        scale_y = scale[1] / (output_size[1] - 1.0)
    else:
        scale_x = scale[0] / output_size[0]
        scale_y = scale[1] / output_size[1]

    target_coords = np.ones_like(coords)
    target_coords[:, 0] = coords[:, 0] * scale_x + center[0] - scale[0] * 0.5
    target_coords[:, 1] = coords[:, 1] * scale_y + center[1] - scale[1] * 0.5

    return target_coords


def keypoints_from_heatmaps(heatmaps,
                            center,
                            scale,
                            unbiased=False,
                            post_process='default',
                            kernel=11,
                            valid_radius_factor=0.0546875,
                            use_udp=False,
                            target_type='GaussianHeatmap'):
    # Avoid being affected
    heatmaps = heatmaps.copy()

    N, K, H, W = heatmaps.shape
    preds, maxvals = _get_max_preds(heatmaps)

    # add +/-0.25 shift to the predicted locations for higher acc.
    for n in range(N):
        for k in range(K):
            heatmap = heatmaps[n][k]
            px = int(preds[n][k][0])
            py = int(preds[n][k][1])
            if 1 < px < W - 1 and 1 < py < H - 1:
                diff = np.array([
                    heatmap[py][px + 1] - heatmap[py][px - 1],
                    heatmap[py + 1][px] - heatmap[py - 1][px]
                ])
                preds[n][k] += np.sign(diff) * .25
                if post_process == 'megvii':
                    preds[n][k] += 0.5

    # Transform back to the image
    for i in range(N):
        preds[i] = transform_preds(
            preds[i], center[i], scale[i], [W, H], use_udp=use_udp)

    if post_process == 'megvii':
        maxvals = maxvals / 255.0 + 0.5

    return preds, maxvals


def decode(output, center, scale, score_, batch_size=1):
    c = np.zeros((batch_size, 2), dtype=np.float32)
    s = np.zeros((batch_size, 2), dtype=np.float32)
    score = np.ones(batch_size)
    for i in range(batch_size):
        c[i, :] = center
        s[i, :] = scale
        score[i] = np.array(score_).reshape(-1)

    preds, maxvals = keypoints_from_heatmaps(output, c, s, False, 'default',
                                             11, 0.0546875, False,
                                             'GaussianHeatmap')

    all_preds = np.zeros((batch_size, preds.shape[1], 3), dtype=np.float32)
    all_boxes = np.zeros((batch_size, 6), dtype=np.float32)
    all_preds[:, :, 0:2] = preds[:, :, 0:2]
    all_preds[:, :, 2:3] = maxvals
    all_boxes[:, 0:2] = c[:, 0:2]
    all_boxes[:, 2:4] = s[:, 0:2]
    all_boxes[:, 4] = np.prod(s * 200.0, axis=1)
    all_boxes[:, 5] = score

    result = {}
    result['preds'] = all_preds
    result['boxes'] = all_boxes
    print(result)
    return result


def draw(bgr, predict_dict, skeleton, box):
    cv2.rectangle(bgr, (int(box[0]), int(box[1])),
                  (int(box[0]) + int(box[2]), int(box[1]) + int(box[3])),
                  (255, 0, 0))

    all_preds = predict_dict["preds"]
    for all_pred in all_preds:
        for x, y, s in all_pred:
            cv2.circle(bgr, (int(x), int(y)), 3, (0, 255, 120), -1)
        for sk in skeleton:
            if sk[0] < len(all_pred) and sk[1] < len(all_pred):
                x0 = int(all_pred[sk[0]][0])
                y0 = int(all_pred[sk[0]][1])
                x1 = int(all_pred[sk[1]][0])
                y1 = int(all_pred[sk[1]][1])
                cv2.line(bgr, (x0, y0), (x1, y1), (0, 255, 0), 1)
    cv2.imwrite("new_test.jpg", bgr)


if __name__ == "__main__":
    # Create the inference session on the Ascend device (device id 0)
    model = InferSession(0, model_path)
    print("done")

    # [x1, y1, w, h, conf]: keypoints are detected inside this person bbox
    bbox = [1, 185, 272, 225, 9]
    image_size = [192, 256]

    src_img = cv2.imread(IMG_PATH)
    img = cv2.cvtColor(src_img, cv2.COLOR_BGR2RGB)  # hwc rgb
    # RE_img = cv2.resize(img, (256, 192))

    aspect_ratio = image_size[0] / image_size[1]
    # aspect_ratio = 1.33
    print('aspect_ratio', aspect_ratio)

    img_height = img.shape[0]
    img_width = img.shape[1]
    img_sz = [img_width, img_height]
    print('image shape', src_img.shape)

    padding = 1.2
    pixel_std = 220
    center, scale = bbox_xywh2cs(bbox, aspect_ratio, padding, pixel_std)
    # print(scale)
    # center = np.array([484.5, 395.5], dtype=np.float32)
    # scale = np.array([0.87, 0.653], dtype=np.float32)
    print(center, scale)

    trans = get_affine_transform(center, scale, 0, image_size)
    img = cv2.warpAffine(
        img,
        trans, (int(image_size[0]), int(image_size[1])),
        flags=cv2.INTER_LINEAR)
    print('trans', trans)
    print(img.shape)

    img = img / 255.0  # normalize to 0~1
    img = img.transpose(2, 0, 1)  # hwc -> chw
    img = np.ascontiguousarray(img, dtype=np.float16)

    # Inference
    print("--> Running model")
    outputs = model.infer([img])[0]
    print('outputs', outputs)

    predict_dict = decode(outputs, center, scale, bbox[-1])
    inv_trans = cv2.invertAffineTransform(trans)
    skeleton = [[15, 13], [13, 11], [16, 14], [14, 12], [11, 12], [5, 11],
                [6, 12], [5, 6], [5, 7], [6, 8], [7, 9], [8, 10], [1, 2],
                [0, 1], [0, 2], [1, 3], [2, 4], [3, 5], [4, 6]]
    # draw(src_img, {"preds": [original_points]}, skeleton, [x_min, y_min, width, height])
    draw(src_img, predict_dict, skeleton, bbox)
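As a sanity check on the heatmap-to-image mapping performed by transform_preds above, the short standalone sketch below recomputes one keypoint by hand. The numbers are illustrative only: the heatmap is assumed to be the usual 48x64 (W x H) output for a 192x256 input, the center/scale are roughly what bbox_xywh2cs returns for the article's example bbox, and the argmax location is made up rather than taken from a real run.

import numpy as np

# Assumed heatmap size and bbox-derived center/scale (illustrative values).
W, H = 48, 64
center = np.array([137.0, 297.5], dtype=np.float32)
scale = np.array([1.484, 1.978], dtype=np.float32)  # already includes padding

# Suppose the heatmap argmax for one keypoint lands at (x=30, y=20).
x_hm, y_hm = 30.0, 20.0

# Same formula as transform_preds (use_udp=False): un-normalize the scale by
# the 200-pixel factor, then map the heatmap coordinate back into the image.
scale_px = scale * 200
x_img = x_hm * (scale_px[0] / W) + center[0] - scale_px[0] * 0.5
y_img = y_hm * (scale_px[1] / H) + center[1] - scale_px[1] * 0.5
print(x_img, y_img)  # image-space keypoint coordinates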

Running the code requires installing aclruntime; installation guide:
https://gitee.com/ascend/msit/pulls/1124.diff
In addition, download the ais_bench package.
Download link: https://download.csdn.net/download/qq_40357993/89994844?spm=1001.2014.3001.5503
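Once both packages are installed, a quick smoke test is to load the converted .om and print its input/output descriptions. This sketch assumes the get_inputs()/get_outputs() helpers exposed by recent ais_bench releases; if your version lacks them, simply running the inference script above serves the same purpose.

from ais_bench.infer.interface import InferSession

# Load the converted model on device 0 (same call as in the script above).
session = InferSession(0, "/tmp/pose/end2_16.om")

# Assumed helpers: recent ais_bench versions expose tensor descriptions.
for t in session.get_inputs():
    print("input :", t.name, t.shape, t.datatype)
for t in session.get_outputs():
    print("output:", t.name, t.shape, t.datatype)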
