## Issued Patents (All Time)
Showing 25 most recent of 62 patents
| Patent # | Title | Co-Inventors | Issue Date |
|---|---|---|---|
| 12412088 | Reducing operations for training neural networks | Maral Mesmakhosroshahi, Bita Darvish Rouhani, Douglas C. Burger | 2025-09-09 |
| 12346277 | Systems and methods for hardware acceleration of data masking | Jinwen Xi, Ming Liu | 2025-07-01 |
| 12307355 | Neural network processing with chained instructions | Jeremy Fowers, Douglas C. Burger | 2025-05-20 |
| 12307372 | Data-aware model pruning for neural networks | Venmugil Elango, Bita Darvish Rouhani, Douglas C. Burger, Maximilian Golub | 2025-05-20 |
| 12277502 | Neural network activation compression with non-uniform mantissas | Daniel Lo, Amar Phanishayee, Yiren Zhao | 2025-04-15 |
| 12190235 | System for training an artificial neural network | Maximilian Golub, Ritchie Zhao, Douglas C. Burger, Bita Darvish Rouhani, Ge Yang +1 more | 2025-01-07 |
| 12165038 | Adjusting activation compression for neural network training | Daniel Lo, Bita Darvish Rouhani, Yiren Zhao, Amar Phanishayee, Ritchie Zhao | 2024-12-10 |
| 12067495 | Neural network activation compression with non-uniform mantissas | Daniel Lo, Amar Phanishayee, Yiren Zhao | 2024-08-20 |
| 12045724 | Neural network activation compression with outlier block floating-point | Daniel Lo, Amar Phanishayee, Yiren Zhao, Ritchie Zhao | 2024-07-23 |
| 11934327 | Systems and methods for hardware acceleration of data masking using a field programmable gate array | Jinwen Xi, Ming Liu | 2024-03-19 |
| 11886833 | Hierarchical and shared exponent floating point data types | Bita Darvish Rouhani, Venmugil Elango, Rasoul Shafipour, Jeremy Fowers, Ming Liu +2 more | 2024-01-30 |
| 11853897 | Neural network training with decreased memory consumption and processor utilization | Taesik Na, Daniel Lo, Haishan Zhu | 2023-12-26 |
| 11790212 | Quantization-aware neural architecture search | Kalin Ovtcharov, Vahideh Akhlaghi, Ritchie Zhao | 2023-10-17 |
| 11741362 | Training neural networks using mixed precision computations | Daniel Lo, Bita Darvish Rouhani | 2023-08-29 |
| 11676003 | Training neural network accelerators using mixed precision data formats | Bita Darvish Rouhani, Taesik Na, Daniel Lo, Douglas C. Burger | 2023-06-13 |
| 11663450 | Neural network processing with chained instructions | Jeremy Fowers, Douglas C. Burger | 2023-05-30 |
| 11645493 | Flow for quantized neural networks | Douglas C. Burger, Bita Darvish Rouhani, Daniel Lo, Ritchie Zhao | 2023-05-09 |
| 11604960 | Differential bit width neural architecture search | Kalin Ovtcharov, Vahideh Akhlaghi, Ritchie Zhao | 2023-03-14 |
| 11586883 | Residual quantization for neural networks | Daniel Lo, Jialiang Zhang, Ritchie Zhao | 2023-02-21 |
| 11574239 | Outlier quantization for training and inference | Daniel Lo, Ritchie Zhao | 2023-02-07 |
| 11562247 | Neural network activation compression with non-uniform mantissas | Daniel Lo, Amar Phanishayee, Yiren Zhao | 2023-01-24 |
| 11556762 | Neural network processor based on application specific synthesis specialization parameters | Jeremy Fowers, Kalin Ovtcharov, Todd Michael Massengill, Ming Liu, Gabriel Leonard Weisz | 2023-01-17 |
| 11526761 | Neural network training with decreased memory consumption and processor utilization | Taesik Na, Daniel Lo, Haishan Zhu | 2022-12-13 |
| 11494614 | Subsampling training data during artificial neural network training | Douglas C. Burger, Bita Darvish Rouhani | 2022-11-08 |
| 11200486 | Convolutional neural networks on hardware accelerators | Karin Strauss, Kalin Ovtcharov, Joo Young Kim, Olatunji Ruwase | 2021-12-14 |