Deploying YOLOv5 on Jetson Nano with tensorrtx Acceleration — A Record of Walking Through the Whole Process Myself

A few words up front

I spent some time getting the Jetson Nano and YOLOv5 working. Most of the material online is repetitive and full of pitfalls, and I crawled through the configuration for several days before getting out the other side, so I decided to write this tutorial down as a memo for myself. To be clear in advance: much of this article is not original — it is a collection and reorganization of the references I used while configuring — but every step is a 1:1 replay of my own process, including the mistakes and how they were resolved. Devices differ and the packages online keep being updated, so I cannot guarantee that versions released after this article will have no compatibility problems. In short, wishing myself good luck in advance!

Part 1. Flashing the image

1. Choosing an image
I chose the Yahboom (亚博智能) image, which already has most things configured. Download link: (extraction code: o6a4) image download address. The image comes with the following pre-installed: CUDA 10.2, cuDNN v8, TensorRT, OpenCV 4.1.1, Python 2, Python 3, TensorFlow 2.3, JetPack 4.4.1, yolov4-tiny and yolov4, the jetson-inference package (including the trained models from the materials), the jetson-gpio library, PyTorch 1.6 and torchvision 0.7, node v15.0.1, npm 7.0.3, JupyterLab, jetcam, and the VNC service already enabled.

2. How to flash the image
Flashing follows this article (link: image flashing method); it is very simple.

3. Initial system setup on the Jetson Nano
Insert the card and power on! It is best to connect a screen — don't skimp on that. Many of the later commands need root privileges, so enable the root user first:

```
sudo passwd root
```

and then set a password. The board must be online, either through a network cable or a driver-free USB Wi-Fi adapter!!

① Make a small backup

```
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo gedit /etc/apt/sources.list
```

② Delete everything and replace it with the following

```
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
```

An aside: how do you change the sources in general? The image flashed onto the Jetson Nano uses foreign package sources, so installing and upgrading software is very slow and often fails with network errors. The steps to switch sources are:

① Back up the original sources.list first.

```
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
```

② Edit sources.list and switch to a domestic mirror.

```
sudo gedit /etc/apt/sources.list
```

③ Delete all of the contents and paste in one of the mirrors below (if you edit with vi instead, press "i" first to enter insert mode). Choose either the Tsinghua or the USTC mirror, then save.

```
# Tsinghua mirror
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe

# USTC mirror
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates main restricted
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic universe
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates universe
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-backports main restricted universe multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security main restricted
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security universe
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security multiverse
```

④ Update the software

```
# update packages
sudo apt-get update
sudo apt-get upgrade
```

Part 2. Configuring the environment and installing the supporting packages

1. Configure CUDA
The Jetson Nano ships with CUDA, but the environment variables have to be set before it can be used; just add them from the command line. Mine is CUDA 10.2 — if you are not using the same image, fill in the path for your own CUDA version.

```
# open a terminal and run
vi .bashrc
```

Scroll to the very end and append these lines:

```
export PATH=/usr/local/cuda-10.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export CUDA_ROOT=/usr/local/cuda
```

Apply the new configuration (reload the file):

```
source ~/.bashrc
```

Check whether it took effect:

```
nvcc -V
```
2. Install pip3

```
sudo apt-get update
sudo apt-get install python3-pip python3-dev -y
```

3. Install jtop
The jtop library lets you monitor the CPU and GPU status of the device.

```
sudo -H pip3 install jetson-stats
sudo jtop    # run jtop (it may fail the first time; the second time it works). Press [q] to quit
```

4. Install libraries that may be needed

```
sudo apt-get install build-essential make cmake cmake-curses-gui -y
sudo apt-get install git g++ pkg-config curl -y
sudo apt-get install libatlas-base-dev gfortran libcanberra-gtk-module libcanberra-gtk3-module -y
sudo apt-get install libhdf5-serial-dev hdf5-tools -y
sudo apt-get install nano locate screen -y
```

5. Install the required dependencies

```
sudo apt-get install libfreetype6-dev -y
sudo apt-get install protobuf-compiler libprotobuf-dev openssl -y
sudo apt-get install libssl-dev libcurl4-openssl-dev -y
sudo apt-get install cython3 -y
```

6. Install the system-level dependencies for OpenCV (mostly codec libraries)

```
sudo apt-get install build-essential -y
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev -y
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff5-dev libdc1394-22-dev -y
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev liblapacke-dev -y
sudo apt-get install libxvidcore-dev libx264-dev -y
sudo apt-get install libatlas-base-dev gfortran -y
sudo apt-get install ffmpeg -y
```

7. Update CMake
This step is necessary, because on the ARM architecture a lot of things have to be built from source.

```
wget http://www.cmake.org/files/v3.13/cmake-3.13.0.tar.gz
tar xpvf cmake-3.13.0.tar.gz    # unpack into cmake-3.13.0/
cd cmake-3.13.0/
./bootstrap --system-curl       # a long wait -- time for a round of eye exercises...
make -j4                        # building is another long wait...
echo 'export PATH=~/cmake-3.13.0/bin:$PATH' >> ~/.bashrc
source ~/.bashrc                # reload .bashrc
```

8. USB drive compatibility
Later steps may require copying large files onto the board with a USB drive, but large-capacity drives can fail to mount; one install command fixes it.

```
sudo apt-get install exfat-utils
```

Part 3. Installing PyTorch

The Linux on the Jetson Nano is not x86; it is an ARM architecture like a phone's, which means many of its packages are not interchangeable with those for ordinary Linux. This was one of the pits I fell into: a wheel downloaded from the PyTorch website cannot use the board's GPU when actually run (a big problem — without the GPU the board's compute collapses). PyTorch, and the torchvision and other packages that follow, must be the versions published by NVIDIA.

1. Download PyTorch 1.8
I have already downloaded it; here is a ready-made link to the package: (extraction code: yvex) installer package.

2. Install PyTorch 1.8
Copy the download onto the Jetson Nano with a USB drive; I suggest putting it on the desktop, where it is easy to find.

```
sudo pip3 install …    # drag the .whl straight into the terminal window and it fills in the file path automatically
```

The install takes a somewhat long wait.
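For reference, the command ends up looking something like this; the wheel filename is only illustrative (it is what the NVIDIA-built PyTorch 1.8 wheel for Python 3.6 / aarch64 is usually called) and may not exactly match the file in the archive above:

```bash
cd ~/Desktop    # or wherever you copied the wheel to
# example filename -- substitute the .whl you actually have
sudo pip3 install torch-1.8.0-cp36-cp36m-linux_aarch64.whl
```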
Part 4. Installing torchvision 0.9.0

PyTorch and torchvision versions must correspond, and the pair downloaded in the previous step already matches.

1. Install the dependencies we need first

```
sudo apt-get install libopenmpi2
sudo apt-get install libopenblas-dev
sudo apt-get install libjpeg-dev zlib1g-dev
```

2. Install torchvision 0.9.0
This again has to be the build that matches the Jetson Nano; the link in Part 3 includes this torchvision. Copy the package onto the board, again preferably onto the desktop.

```
cd torchvision                    # enter the package directory
export BUILD_VERSION=0.9.0
sudo python3 setup.py install     # install (probably takes a good 20-30 minutes, if not more)
```

3. Verify that the installation succeeded

```
python3
import torch
import torchvision
print(torch.cuda.is_available())  # if this prints True, it worked!
quit()                            # finally exit Python
```

Part 5. Downloading the YOLOv5-5.0 source code

Train on your own computer or on a server. I will not explain how to train here; you can find videos on Bilibili to learn from. My project detects elevator buttons. If you need the dataset, the trained weights, or various modified YOLOv5 code, message me privately.

Part 6. Installing the packages YOLOv5 needs to run

Note: if a download fails because of the network, you can append -i https://pypi.tuna.tsinghua.edu.cn/simple to the command to use the Tsinghua mirror.

1.

```
sudo pip3 install matplotlib==3.2.2
sudo pip3 install --upgrade Cython    # bring this package up to date
```

2. numpy is a special case: it is already there, but it was installed through apt-get, so remove the old one first — this also makes later package management easier.

```
sudo apt-get remove python-numpy
sudo pip3 install numpy==1.19.4
sudo pip3 install scipy==1.4.1        # this package installs extremely slowly; be patient
```

3. I did not pin versions for the packages after this point when I installed them; the commands here were filled in afterwards from pip3 list.

```
sudo pip3 install tqdm==4.61.2
sudo pip3 install seaborn==0.11.1
sudo pip3 install scikit-build==0.11.1      # installing opencv needs this package
sudo pip3 install opencv-python==4.5.3.56   # unsurprisingly, another rather long process
sudo pip3 install tensorboard==2.5.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
sudo pip3 install --upgrade PyYAML          # I upgraded to 5.4.1, which also works
sudo pip3 install PyYAML==5.4.1
sudo pip3 install thop
sudo pip3 install pycocotools
```

4. Go through the list of packages that YOLOv5 officially requires, compare carefully, and install anything that is missing. The install command format is: sudo pip3 install .................

```
# base ----------------------------------------
matplotlib>=3.2.2
numpy>=1.18.5
opencv-python>=4.1.2
Pillow
PyYAML>=5.3.1
scipy>=1.4.1
torch>=1.7.0
torchvision>=0.8.1
tqdm>=4.41.0

# logging -------------------------------------
tensorboard>=2.4.1
wandb

# plotting ------------------------------------
seaborn>=0.11.0
pandas

# export --------------------------------------
coremltools>=4.1
onnx>=1.8.1
scikit-learn==0.19.2  # for coreml quantization

# extras --------------------------------------
thop  # FLOPS computation
pycocotools>=2.0  # COCO mAP
```
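To make the comparison easier, here is a small helper of my own (not part of YOLOv5) that prints the installed version of each package in the list above; it runs on the Python 3.6 that ships with the image:

```python
# check_versions.py -- compare what pip3 actually installed against the YOLOv5 list above
import pkg_resources

packages = ["matplotlib", "numpy", "opencv-python", "Pillow", "PyYAML", "scipy",
            "torch", "torchvision", "tqdm", "tensorboard", "seaborn", "pandas",
            "thop", "pycocotools"]

for name in packages:
    try:
        # print the version pip3 knows about for this distribution
        print("{:15s} {}".format(name, pkg_resources.get_distribution(name).version))
    except pkg_resources.DistributionNotFound:
        print("{:15s} MISSING".format(name))
```

Run it with python3 check_versions.py and install whatever shows up as MISSING.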
5. Run the detection script
In the same directory as detect.py in the source tree, open a terminal and run the command below. The results are decent; loading the model takes a long time, but the predictions are fine. Afterwards you can find the predicted images under output in your inference folder. Next, open the detect.py script and change the source parameter so that it calls the camera for real-time video prediction — roughly 10 fps, which is not bad, although there are ways to improve it.

```
python3 detect.py --source /path/to/xxx.jpg --weights /path/to/best.pt --conf-thres 0.7
```

or simply:

```
python3 detect.py
```
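On the v5.0 codebase the same camera test can also be run without editing detect.py at all, by passing the camera index as the source (the weights path is your own):

```
python3 detect.py --source 0 --weights /path/to/best.pt --conf-thres 0.7
```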
Part 7. How about a round of TensorRT acceleration?

1. Install pycuda-2019

① (Use this method when your network is good) install pycuda online:

```
pip3 install pycuda
```

② (Use this method when your network is bad): extraction code: t94b, download link. After downloading, unpack it and enter the extracted folder:

```
tar zxvf pycuda-2019.1.2.tar.gz
cd pycuda-2019.1.2/
python3 configure.py --cuda-root=/usr/local/cuda-10.2
sudo python3 setup.py install
```

When the compile output appears, the files are being built and installed; after a while it finishes, and once the final message appears the installation has succeeded. Before it can be used, though, one more thing has to be configured, otherwise it reports:

```
FileNotFoundError: [Errno 2] No such file or directory: 'nvcc'
```

Hard-code the full path of nvcc into compile_plain() in pycuda's compiler.py, by adding the line below at roughly line 73:

```
nvcc = '/usr/local/cuda/bin/' + nvcc
```
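Note that the compiler.py to edit is the one inside the installed pycuda package, not the source folder you built from. A quick one-liner (my own addition, not from the original write-up) prints the copy that python3 actually loads:

```
python3 -c "import pycuda.compiler as c; print(c.__file__)"
```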
2. TensorRT acceleration

Here we rely on a great open-source project; the GitHub address is:

```
https://github.com/wang-xinyu/tensorrtx/tree/master/yolov5
```

The author is seriously impressive — have a good look around. It is not only YOLOv5: he has done the corresponding acceleration for many other algorithms as well. He calls his project TensorRTx, and it is easier to use than plain TensorRT.

Two things need to be downloaded:
- first, the original YOLOv5 open-source code (choose the v5.0 release);
- second, the tensorrtx project, downloaded onto your own Windows computer.

Then copy the entire tensorrtx folder into the folder of the original yolov5-5.0 code. To make things easier for myself to understand, and for the later steps, I renamed the folders slightly. (Of course I have also packaged everything up; just download it — extraction code: message me privately.) Rename the downloaded original YOLOv5 folder to yolov5(Tensorrtx), as shown in the screenshot (omitted here); rename the tensorrtx folder to tensorrtx-yolov5-v5.0 and copy it into the yolov5(Tensorrtx) folder, as shown in the screenshot (omitted here).

Now the real work begins.

① Generate the .wts file (do this on the Windows computer)

1. Rename the trained .pt weight file to yolov5s.pt (it must be this name — no reason given) and put it into the yolov5(Tensorrtx) folder.
2. Copy the file yolov5-5.0(Tensorrtx)\tensorrtx-yolov5-v5.0\yolov5\gen_wts.py into the yolov5(Tensorrtx) folder.

Note: at this point the yolov5(Tensorrtx) folder contains both yolov5s.pt and gen_wts.py. Then right-click inside the yolov5(Tensorrtx) folder, open a terminal, and activate the virtual environment you created in Anaconda, for example: conda activate torch1.10. Then run:

```
python gen_wts.py -w yolov5s.pt -o yolov5s.wts
```

(Question: you don't know how to create a virtual environment in Anaconda? Then go find a video on Bilibili and learn it yourself. If you have already trained YOLOv5 weights, there is no way you cannot do this.)

A new file will be generated in the folder: yolov5s.wts.

② build (do this on the Jetson Nano — this step generates the engine file)

1. Copy the .wts file generated above onto the Jetson Nano with a USB drive, into the yolov5-5.0(Tensorrtx)\tensorrtx-yolov5-v5.0\yolov5 folder.
2. Open the yololayer.h file in that folder and change CLASS_NUM to match the number of classes of your own trained model (mine is 55); see the sketch after this list.
3. At this point that folder contains these three things: the .wts (generated on the Windows computer), yolov5.cpp (never modified), and yololayer.h (already changed to your own class count).
4. Open a terminal in that folder and run, in order:

```
mkdir build
cd build
cmake ..
make
sudo ./yolov5 -s ../yolov5s.wts yolov5s.engine s    # the trailing "s" selects the yolov5s model size
```
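For orientation, the constants being edited in yololayer.h look roughly like this in the v5.0 tag of tensorrtx (a sketch — check your own copy; normally only CLASS_NUM needs to change, 80 being the COCO default):

```cpp
// tensorrtx-yolov5-v5.0/yolov5/yololayer.h (excerpt; other constants unchanged)
namespace Yolo
{
    static constexpr int CLASS_NUM = 55;   // was 80 (COCO); set to the number of classes you trained
    static constexpr int INPUT_H = 608;    // leave as-is unless your .wts was built for another size
    static constexpr int INPUT_W = 608;
}
```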
After a short wait, the build folder now contains the C++-based engine deployment file generated through tensorrtx. But my C++ is not great, and I have a psychological aversion to it, so let's turn it into Python.

③ Accelerated real-time detection from a USB camera

Since my C++ is very average, I could only grit my teeth and modify the yolov5_trt.py script in the yolov5-5.0(Tensorrtx)\tensorrtx-yolov5-v5.0\yolov5 folder. The code style is poor, but it does achieve the acceleration; take it as a reference if you need it. Create a new file called yolo_trt_test.py in that folder and copy the v4.0 or v5.0 code below into it. Things you need to change yourself: the path to yolov5s.engine, and the class names of the objects being detected.

① v5.0 code

```python
"""
An example that uses TensorRT's Python api to make inferences.
"""
import ctypes
import os
import shutil
import random
import sys
import threading
import time
import cv2
import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
import tensorrt as trt
import torch
import torchvision
import argparse

CONF_THRESH = 0.5
IOU_THRESHOLD = 0.4


def get_img_path_batches(batch_size, img_dir):
    ret = []
    batch = []
    for root, dirs, files in os.walk(img_dir):
        for name in files:
            if len(batch) == batch_size:
                ret.append(batch)
                batch = []
            batch.append(os.path.join(root, name))
    if len(batch) > 0:
        ret.append(batch)
    return ret


def plot_one_box(x, img, color=None, label=None, line_thickness=None):
    """
    description: Plots one bounding box on image img, this function comes from YoLov5 project.
    param:
        x:      a box likes [x1,y1,x2,y2]
        img:    a opencv image object
        color:  color to draw rectangle, such as (0,255,0)
        label:  str
        line_thickness: int
    return:
        no return
    """
    tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1  # line/font thickness
    color = color or [random.randint(0, 255) for _ in range(3)]
    c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
    cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
    if label:
        tf = max(tl - 1, 1)  # font thickness
        t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
        c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3
        cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA)  # filled
        cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255],
                    thickness=tf, lineType=cv2.LINE_AA)


class YoLov5TRT(object):
    """
    description: A YOLOv5 class that warps TensorRT ops, preprocess and postprocess ops.
    """

    def __init__(self, engine_file_path):
        # Create a Context on this device,
        self.ctx = cuda.Device(0).make_context()
        stream = cuda.Stream()
        TRT_LOGGER = trt.Logger(trt.Logger.INFO)
        runtime = trt.Runtime(TRT_LOGGER)

        # Deserialize the engine from file
        with open(engine_file_path, "rb") as f:
            engine = runtime.deserialize_cuda_engine(f.read())
        context = engine.create_execution_context()

        host_inputs = []
        cuda_inputs = []
        host_outputs = []
        cuda_outputs = []
        bindings = []

        for binding in engine:
            print('binding:', binding, engine.get_binding_shape(binding))
            size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
            dtype = trt.nptype(engine.get_binding_dtype(binding))
            # Allocate host and device buffers
            host_mem = cuda.pagelocked_empty(size, dtype)
            cuda_mem = cuda.mem_alloc(host_mem.nbytes)
            # Append the device buffer to device bindings.
            bindings.append(int(cuda_mem))
            # Append to the appropriate list.
            if engine.binding_is_input(binding):
                self.input_w = engine.get_binding_shape(binding)[-1]
                self.input_h = engine.get_binding_shape(binding)[-2]
                host_inputs.append(host_mem)
                cuda_inputs.append(cuda_mem)
            else:
                host_outputs.append(host_mem)
                cuda_outputs.append(cuda_mem)

        # Store
        self.stream = stream
        self.context = context
        self.engine = engine
        self.host_inputs = host_inputs
        self.cuda_inputs = cuda_inputs
        self.host_outputs = host_outputs
        self.cuda_outputs = cuda_outputs
        self.bindings = bindings
        self.batch_size = engine.max_batch_size

    def infer(self, input_image_path):
        threading.Thread.__init__(self)
        # Make self the active context, pushing it on top of the context stack.
        self.ctx.push()
        self.input_image_path = input_image_path
        # Restore
        stream = self.stream
        context = self.context
        engine = self.engine
        host_inputs = self.host_inputs
        cuda_inputs = self.cuda_inputs
        host_outputs = self.host_outputs
        cuda_outputs = self.cuda_outputs
        bindings = self.bindings
        # Do image preprocess
        batch_image_raw = []
        batch_origin_h = []
        batch_origin_w = []
        batch_input_image = np.empty(shape=[self.batch_size, 3, self.input_h, self.input_w])

        input_image, image_raw, origin_h, origin_w = self.preprocess_image(input_image_path)

        batch_origin_h.append(origin_h)
        batch_origin_w.append(origin_w)
        np.copyto(batch_input_image, input_image)
        batch_input_image = np.ascontiguousarray(batch_input_image)

        # Copy input image to host buffer
        np.copyto(host_inputs[0], batch_input_image.ravel())
        start = time.time()
        # Transfer input data to the GPU.
        cuda.memcpy_htod_async(cuda_inputs[0], host_inputs[0], stream)
        # Run inference.
        context.execute_async(batch_size=self.batch_size, bindings=bindings, stream_handle=stream.handle)
        # Transfer predictions back from the GPU.
        cuda.memcpy_dtoh_async(host_outputs[0], cuda_outputs[0], stream)
        # Synchronize the stream
        stream.synchronize()
        end = time.time()
        # Remove any context from the top of the context stack, deactivating it.
        self.ctx.pop()
        # Here we use the first row of output in that batch_size = 1
        output = host_outputs[0]
        # Do postprocess
        result_boxes, result_scores, result_classid = self.post_process(output, origin_h, origin_w)
        # Draw rectangles and labels on the original image
        for j in range(len(result_boxes)):
            box = result_boxes[j]
            plot_one_box(box, image_raw,
                         label="{}:{:.2f}".format(categories[int(result_classid[j])], result_scores[j]))
        return image_raw, end - start

    def destroy(self):
        # Remove any context from the top of the context stack, deactivating it.
        self.ctx.pop()

    def get_raw_image(self, image_path_batch):
        """
        description: Read an image from image path
        """
        for img_path in image_path_batch:
            yield cv2.imread(img_path)

    def get_raw_image_zeros(self, image_path_batch=None):
        """
        description: Ready data for warmup
        """
        for _ in range(self.batch_size):
            yield np.zeros([self.input_h, self.input_w, 3], dtype=np.uint8)

    def preprocess_image(self, input_image_path):
        """
        description: Convert BGR image to RGB, resize and pad it to target size,
                     normalize to [0,1], transform to NCHW format.
        param:
            input_image_path: str, image path
        return:
            image: the processed image
            image_raw: the original image
            h: original height
            w: original width
        """
        image_raw = input_image_path
        h, w, c = image_raw.shape
        image = cv2.cvtColor(image_raw, cv2.COLOR_BGR2RGB)
        # Calculate widht and height and paddings
        r_w = self.input_w / w
        r_h = self.input_h / h
        if r_h > r_w:
            tw = self.input_w
            th = int(r_w * h)
            tx1 = tx2 = 0
            ty1 = int((self.input_h - th) / 2)
            ty2 = self.input_h - th - ty1
        else:
            tw = int(r_h * w)
            th = self.input_h
            tx1 = int((self.input_w - tw) / 2)
            tx2 = self.input_w - tw - tx1
            ty1 = ty2 = 0
        # Resize the image with long side while maintaining ratio
        image = cv2.resize(image, (tw, th))
        # Pad the short side with (128,128,128)
        image = cv2.copyMakeBorder(image, ty1, ty2, tx1, tx2, cv2.BORDER_CONSTANT, (128, 128, 128))
        image = image.astype(np.float32)
        # Normalize to [0,1]
        image /= 255.0
        # HWC to CHW format:
        image = np.transpose(image, [2, 0, 1])
        # CHW to NCHW format
        image = np.expand_dims(image, axis=0)
        # Convert the image to row-major order, also known as "C order":
        image = np.ascontiguousarray(image)
        return image, image_raw, h, w

    def xywh2xyxy(self, origin_h, origin_w, x):
        """
        description: Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
        param:
            origin_h: height of original image
            origin_w: width of original image
            x: A boxes tensor, each row is a box [center_x, center_y, w, h]
        return:
            y: A boxes tensor, each row is a box [x1, y1, x2, y2]
        """
        y = torch.zeros_like(x) if isinstance(x, torch.Tensor) else np.zeros_like(x)
        r_w = self.input_w / origin_w
        r_h = self.input_h / origin_h
        if r_h > r_w:
            y[:, 0] = x[:, 0] - x[:, 2] / 2
            y[:, 2] = x[:, 0] + x[:, 2] / 2
            y[:, 1] = x[:, 1] - x[:, 3] / 2 - (self.input_h - r_w * origin_h) / 2
            y[:, 3] = x[:, 1] + x[:, 3] / 2 - (self.input_h - r_w * origin_h) / 2
            y /= r_w
        else:
            y[:, 0] = x[:, 0] - x[:, 2] / 2 - (self.input_w - r_h * origin_w) / 2
            y[:, 2] = x[:, 0] + x[:, 2] / 2 - (self.input_w - r_h * origin_w) / 2
            y[:, 1] = x[:, 1] - x[:, 3] / 2
            y[:, 3] = x[:, 1] + x[:, 3] / 2
            y /= r_h
        return y

    def post_process(self, output, origin_h, origin_w):
        """
        description: postprocess the prediction
        param:
            output: A tensor likes [num_boxes,cx,cy,w,h,conf,cls_id, cx,cy,w,h,conf,cls_id, ...]
            origin_h: height of original image
            origin_w: width of original image
        return:
            result_boxes: finally boxes, a boxes tensor, each row is a box [x1, y1, x2, y2]
            result_scores: finally scores, a tensor, each element is the score correspoing to box
            result_classid: finally classid, a tensor, each element is the classid correspoing to box
        """
        # Get the num of boxes detected
        num = int(output[0])
        # Reshape to a two dimentional ndarray
        pred = np.reshape(output[1:], (-1, 6))[:num, :]
        # to a torch Tensor
        pred = torch.Tensor(pred).cuda()
        # Get the boxes
        boxes = pred[:, :4]
        # Get the scores
        scores = pred[:, 4]
        # Get the classid
        classid = pred[:, 5]
        # Choose those boxes that score > CONF_THRESH
        si = scores > CONF_THRESH
        boxes = boxes[si, :]
        scores = scores[si]
        classid = classid[si]
        # Trandform bbox from [center_x, center_y, w, h] to [x1, y1, x2, y2]
        boxes = self.xywh2xyxy(origin_h, origin_w, boxes)
        # Do nms
        indices = torchvision.ops.nms(boxes, scores, iou_threshold=IOU_THRESHOLD).cpu()
        result_boxes = boxes[indices, :].cpu()
        result_scores = scores[indices].cpu()
        result_classid = classid[indices].cpu()
        return result_boxes, result_scores, result_classid


class inferThread(threading.Thread):
    def __init__(self, yolov5_wrapper):
        threading.Thread.__init__(self)
        self.yolov5_wrapper = yolov5_wrapper

    def infer(self, frame):
        batch_image_raw, use_time = self.yolov5_wrapper.infer(frame)
        # for i, img_path in enumerate(self.image_path_batch):
        #     parent, filename = os.path.split(img_path)
        #     save_name = os.path.join('output', filename)
        #     # Save image
        #     cv2.imwrite(save_name, batch_image_raw[i])
        # print('input->{}, time->{:.2f}ms, saving into output/'.format(self.image_path_batch, use_time * 1000))
        return batch_image_raw, use_time


class warmUpThread(threading.Thread):
    def __init__(self, yolov5_wrapper):
        threading.Thread.__init__(self)
        self.yolov5_wrapper = yolov5_wrapper

    def run(self):
        batch_image_raw, use_time = self.yolov5_wrapper.infer(self.yolov5_wrapper.get_raw_image_zeros())
        print('warm_up->{}, time->{:.2f}ms'.format(batch_image_raw[0].shape, use_time * 1000))


if __name__ == "__main__":
    # load custom plugins
    parser = argparse.ArgumentParser()
    parser.add_argument('--engine', nargs='+', type=str, default="build/yolov5s.engine", help='.engine path(s)')  # change to your own path
    parser.add_argument('--save', type=int, default=0, help='save?')
    opt = parser.parse_args()
    PLUGIN_LIBRARY = "build/libmyplugins.so"
    engine_file_path = opt.engine
    ctypes.CDLL(PLUGIN_LIBRARY)

    # load coco labels
    categories = ["person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat",
                  "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog",
                  "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella",
                  "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite",
                  "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle",
                  "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange",
                  "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant",
                  "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone",
                  "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors",
                  "teddy bear", "hair drier", "toothbrush"]  # change to your own detection class names

    # a YoLov5TRT instance
    yolov5_wrapper = YoLov5TRT(engine_file_path)
    cap = cv2.VideoCapture(0)
    try:
        thread1 = inferThread(yolov5_wrapper)
        thread1.start()
        thread1.join()
        while 1:
            _, frame = cap.read()
            img, t = thread1.infer(frame)
            cv2.imshow("result", img)
            if cv2.waitKey(1) & 0XFF == ord('q'):  # 1 millisecond
                break
    finally:
        # destroy the instance
        cap.release()
        cv2.destroyAllWindows()
        yolov5_wrapper.destroy()
```
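A quick usage note for this v5.0 version: the engine path is an ordinary command-line argument, so instead of editing the default in the script you can also point it at the engine when launching (run from the tensorrtx yolov5 folder that contains build/):

```
python3 yolo_trt_test.py --engine build/yolov5s.engine
```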
② v4.0 code

The v4.0 variant is almost the same script. The differences: the network input size and thresholds are module-level constants (INPUT_W/INPUT_H = 608, CONF_THRESH = 0.15, IOU_THRESHOLD = 0.45) instead of being read from the engine bindings, the CUDA context attribute is called self.cfx instead of self.ctx, plot_one_box draws the label below the box corner instead of above it, infer() handles a single frame without batching or timing, and the main block hard-codes the engine path instead of using argparse. Its preprocess_image(), xywh2xyxy() and post_process() methods are identical to the v5.0 listing above apart from using INPUT_W/INPUT_H, so copy those three over unchanged; the parts that differ are:

```python
"""
An example that uses TensorRT's Python api to make inferences.
"""
import ctypes
import os
import random
import sys
import threading
import time
import cv2
import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
import tensorrt as trt
import torch
import torchvision

INPUT_W = 608
INPUT_H = 608
CONF_THRESH = 0.15
IOU_THRESHOLD = 0.45
int_box = [0, 0, 0, 0]
int_box1 = [0, 0, 0, 0]
fps1 = 0.0


def plot_one_box(x, img, color=None, label=None, line_thickness=None):
    """
    description: Plots one bounding box on image img, this function comes from YoLov5 project.
    """
    tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1  # line/font thickness
    color = color or [random.randint(0, 255) for _ in range(3)]
    c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
    C2 = c2
    cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
    if label:
        tf = max(tl - 1, 1)  # font thickness
        t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
        c2 = c1[0] + t_size[0], c1[1] + t_size[1] + 8
        cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA)  # filled
        cv2.putText(img, label, (c1[0], c1[1] + t_size[1] + 5), 0, tl / 3, [255, 255, 255],
                    thickness=tf, lineType=cv2.LINE_AA)


class YoLov5TRT(object):
    """
    description: A YOLOv5 class that warps TensorRT ops, preprocess and postprocess ops.
    """

    def __init__(self, engine_file_path):
        # Create a Context on this device,
        self.cfx = cuda.Device(0).make_context()
        stream = cuda.Stream()
        TRT_LOGGER = trt.Logger(trt.Logger.INFO)
        runtime = trt.Runtime(TRT_LOGGER)

        # Deserialize the engine from file
        with open(engine_file_path, "rb") as f:
            engine = runtime.deserialize_cuda_engine(f.read())
        context = engine.create_execution_context()

        host_inputs = []
        cuda_inputs = []
        host_outputs = []
        cuda_outputs = []
        bindings = []

        for binding in engine:
            size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
            dtype = trt.nptype(engine.get_binding_dtype(binding))
            # Allocate host and device buffers
            host_mem = cuda.pagelocked_empty(size, dtype)
            cuda_mem = cuda.mem_alloc(host_mem.nbytes)
            # Append the device buffer to device bindings.
            bindings.append(int(cuda_mem))
            # Append to the appropriate list.
            if engine.binding_is_input(binding):
                host_inputs.append(host_mem)
                cuda_inputs.append(cuda_mem)
            else:
                host_outputs.append(host_mem)
                cuda_outputs.append(cuda_mem)

        # Store
        self.stream = stream
        self.context = context
        self.engine = engine
        self.host_inputs = host_inputs
        self.cuda_inputs = cuda_inputs
        self.host_outputs = host_outputs
        self.cuda_outputs = cuda_outputs
        self.bindings = bindings

    def infer(self, input_image_path):
        global int_box, int_box1, fps1
        # Make self the active context, pushing it on top of the context stack.
        self.cfx.push()
        # Restore
        stream = self.stream
        context = self.context
        engine = self.engine
        host_inputs = self.host_inputs
        cuda_inputs = self.cuda_inputs
        host_outputs = self.host_outputs
        cuda_outputs = self.cuda_outputs
        bindings = self.bindings
        # Do image preprocess
        input_image, image_raw, origin_h, origin_w = self.preprocess_image(input_image_path)
        # Copy input image to host buffer
        np.copyto(host_inputs[0], input_image.ravel())
        # Transfer input data to the GPU.
        cuda.memcpy_htod_async(cuda_inputs[0], host_inputs[0], stream)
        # Run inference.
        context.execute_async(bindings=bindings, stream_handle=stream.handle)
        # Transfer predictions back from the GPU.
        cuda.memcpy_dtoh_async(host_outputs[0], cuda_outputs[0], stream)
        # Synchronize the stream
        stream.synchronize()
        # Remove any context from the top of the context stack, deactivating it.
        self.cfx.pop()
        # Here we use the first row of output in that batch_size = 1
        output = host_outputs[0]
        # Do postprocess
        result_boxes, result_scores, result_classid = self.post_process(output, origin_h, origin_w)
        # Draw rectangles and labels on the original image
        for i in range(len(result_boxes)):
            box1 = result_boxes[i]
            plot_one_box(box1, image_raw,
                         label="{}:{:.2f}".format(categories[int(result_classid[i])], result_scores[i]))
        return image_raw

    def destroy(self):
        # Remove any context from the top of the context stack, deactivating it.
        self.cfx.pop()

    # preprocess_image(), xywh2xyxy() and post_process() go here: copy them unchanged
    # from the v5.0 listing above, replacing self.input_w / self.input_h with INPUT_W / INPUT_H.


class myThread(threading.Thread):
    def __init__(self, func, args):
        threading.Thread.__init__(self)
        self.func = func
        self.args = args

    def run(self):
        self.func(*self.args)


if __name__ == "__main__":
    # load custom plugins
    PLUGIN_LIBRARY = "build/libmyplugins.so"
    ctypes.CDLL(PLUGIN_LIBRARY)
    engine_file_path = "yolov5s.engine"

    # load coco labels: the same 80 COCO names as in the v5.0 listing above -- change to your own class names
    categories = [...]

    # a YoLov5TRT instance
    yolov5_wrapper = YoLov5TRT(engine_file_path)
    cap = cv2.VideoCapture(0)
    while 1:
        _, image = cap.read()
        img = yolov5_wrapper.infer(image)
        cv2.imshow("result", img)
        if cv2.waitKey(1) & 0XFF == ord('q'):  # 1 millisecond
            break
    cap.release()
    cv2.destroyAllWindows()
    yolov5_wrapper.destroy()
```
Once the edits are done, open a terminal in the yolov5-5.0(Tensorrtx)\tensorrtx-yolov5-v5.0\yolov5 folder and run:

```
python3 yolo_trt_test.py
```

The final detection results are pretty good; I will post a results video or GIF later.