| Patent No. | Title | Inventors | Date |
|---|---|---|---|
| 11853897 | Neural network training with decreased memory consumption and processor utilization | Taesik Na, Haishan Zhu, Eric S. Chung | 2023-12-26 |
| 11741362 | Training neural networks using mixed precision computations | Eric S. Chung, Bita Darvish Rouhani | 2023-08-29 |
| 11676003 | Training neural network accelerators using mixed precision data formats | Bita Darvish Rouhani, Taesik Na, Eric S. Chung, Douglas C. Burger | 2023-06-13 |
| 11645493 | Flow for quantized neural networks | Douglas C. Burger, Eric S. Chung, Bita Darvish Rouhani, Ritchie Zhao | 2023-05-09 |
| 11586883 | Residual quantization for neural networks | Eric S. Chung, Jialiang Zhang, Ritchie Zhao | 2023-02-21 |
| 11574239 | Outlier quantization for training and inference | Eric S. Chung, Ritchie Zhao | 2023-02-07 |
| 11562247 | Neural network activation compression with non-uniform mantissas | Amar Phanishayee, Eric S. Chung, Yiren Zhao | 2023-01-24 |
| 11562201 | Neural network layer processing with normalization and transformation of data | — | 2023-01-24 |
| 11556764 | Deriving a concordant software neural network layer from a quantized firmware neural network layer | Jeremy Fowers, Deeksha Dangwal | 2023-01-17 |
| 11544521 | Neural network layer processing with scaled quantization | — | 2023-01-03 |