was optimized using TensorRT, which speeds up inference. The inference time for each 800×3 data block is ≈9–10 ms. Thus the model has been further ...
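The per-block latency figure quoted above can be checked with a simple wall-clock measurement. The sketch below is illustrative only: `infer()` is a hypothetical stand-in for the actual TensorRT engine execution, and the 800×3 block shape is taken from the text.

```python
import time
import numpy as np

def infer(block: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the TensorRT engine call."""
    return block @ np.ones((3, 1), dtype=np.float32)

# One 800×3 data block, matching the block shape described above.
block = np.random.rand(800, 3).astype(np.float32)

# Average wall-clock latency over repeated runs.
n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    infer(block)
latency_ms = (time.perf_counter() - start) / n_runs * 1e3
print(f"mean per-block latency: {latency_ms:.3f} ms")
```

With a real TensorRT engine, the same loop (warm-up runs excluded) is how a per-block figure such as 9–10 ms would typically be obtained.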