GPU-Accelerated Optimization of Transformer-Based Neural Networks for Real-Time Inference
Abstract
This paper presents the design and evaluation of a GPU-accelerated inference pipeline for transformer models using NVIDIA TensorRT with mixed-precision optimization. We evaluate BERT-base (110M parameters) and GPT-2 (124M parameters) across batch sizes from 1 to 32 and sequence lengths from 32 to 512. The system achieves up to 64.4x speedup over CPU baselines, sub-10 ms latency for single-sample inference, and a 63 percent reduction in memory usage. We introduce a hybrid precision strategy that
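The abstract's 63 percent memory reduction is consistent with storing weights in half precision instead of single precision, plus savings in activations and workspace. As a rough sanity check, the weight-only footprint for the two models named in the abstract can be estimated as below; the parameter counts (110M, 124M) come from the abstract, while the 4-byte FP32 and 2-byte FP16 element sizes are standard IEEE sizes, not figures from the paper. This is a back-of-envelope sketch, not the paper's method.

```python
# Estimate weight memory at different numeric precisions.
# Parameter counts are those stated in the abstract; byte sizes are
# the usual IEEE-754 widths (FP32 = 4 bytes, FP16 = 2 bytes).
def weight_memory_mib(num_params: int, bytes_per_param: int) -> float:
    """Return the raw weight footprint in MiB."""
    return num_params * bytes_per_param / (1024 ** 2)

for name, params in [("BERT-base", 110_000_000), ("GPT-2", 124_000_000)]:
    fp32 = weight_memory_mib(params, 4)
    fp16 = weight_memory_mib(params, 2)
    print(f"{name}: FP32 {fp32:.0f} MiB -> FP16 {fp16:.0f} MiB "
          f"({100 * (1 - fp16 / fp32):.0f}% weight-memory reduction)")
```

Weights alone shrink by 50 percent under FP16; the larger 63 percent figure reported in the abstract presumably also accounts for activation buffers and runtime workspace, which TensorRT can keep in reduced precision as well.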