Issued Patents (All Time)
Showing the 25 most recent of 38 patents.
| Patent # | Title | Co-Inventors | Issue Date |
|---|---|---|---|
| 12423102 | Instructions to convert from FP16 to BF8 | Alexander Heinecke, Robert Valentine, Mark J. Charney, Christopher J. Hughes, Evangelos Georganas +3 more | 2025-09-23 |
| 12417100 | Instructions for structured-sparse tile matrix FMA | Menachem Adelman, Amit Gradstein, Alexander Heinecke, Christopher J. Hughes, Shahar Mizrahi +7 more | 2025-09-16 |
| 12412232 | Dynamic precision management for integer deep learning primitives | Dheevatsa Mudigere, Dipankar Das, Srinivas Sridharan | 2025-09-09 |
| 12405787 | Utilizing structured sparsity in systolic arrays | Subramaniam Maiyuran, Jorge Parra, Ashutosh Garg, Chandra Gurram, Chunhui Mei +10 more | 2025-09-02 |
| 12399685 | Systolic array having support for output sparsity | Jorge Parra, Fangwen Fu, Subramaniam Maiyuran, Varghese George, Mike B. Macpherson +6 more | 2025-08-26 |
| 12367045 | Instructions to convert from FP16 to BF8 | Alexander Heinecke, Robert Valentine, Mark J. Charney, Christopher J. Hughes, Evangelos Georganas +3 more | 2025-07-22 |
| 12314727 | Optimized compute hardware for machine learning operations | Dipankar Das, Roger Gramunt, Mikhail Smelyanskiy, Jesus Corbal, Dheevatsa Mudigere +1 more | 2025-05-27 |
| 12288062 | Instructions for fused multiply-add operations with variable precision input operands | Dipankar Das, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu | 2025-04-29 |
| 12242846 | Supporting 8-bit floating point format operands in a computing architecture | Subramaniam Maiyuran, Varghese George, Fangwen Fu, Shuai Mu, Supratim Pal +1 more | 2025-03-04 |
| 12198055 | Incremental precision networks using residual inference and fine-grain quantization | Abhisek Kundu, Dheevatsa Mudigere, Dipankar Das | 2025-01-14 |
| 12135968 | Instructions to convert from FP16 to BF8 | Alexander Heinecke, Robert Valentine, Mark J. Charney, Christopher J. Hughes, Evangelos Georganas +3 more | 2024-11-05 |
| 12106210 | Scaling half-precision floating point tensors for training deep neural networks | Dipankar Das | 2024-10-01 |
| 12056489 | Apparatuses, methods, and systems for 8-bit floating-point matrix dot product instructions | Alexander Heinecke, Robert Valentine, Mark J. Charney, Christopher J. Hughes, Evangelos Georganas +3 more | 2024-08-06 |
| 12033237 | Dynamic precision management for integer deep learning primitives | Dheevatsa Mudigere, Dipankar Das, Srinivas Sridharan | 2024-07-09 |
| 12020028 | Apparatuses, methods, and systems for 8-bit floating-point matrix dot product instructions | Alexander Heinecke, Robert Valentine, Mark J. Charney, Christopher J. Hughes, Evangelos Georganas +3 more | 2024-06-25 |
| 11977885 | Utilizing structured sparsity in systolic arrays | Subramaniam Maiyuran, Jorge Parra, Ashutosh Garg, Chandra Gurram, Chunhui Mei +10 more | 2024-05-07 |
| 11900107 | Instructions for fused multiply-add operations with variable precision input operands | Dipankar Das, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu | 2024-02-13 |
| 11893490 | Incremental precision networks using residual inference and fine-grain quantization | Abhisek Kundu, Dheevatsa Mudigere, Dipankar Das | 2024-02-06 |
| 11823034 | Scaling half-precision floating point tensors for training deep neural networks | Dipankar Das | 2023-11-21 |
| 11669933 | Dynamic precision management for integer deep learning primitives | Dheevatsa Mudigere, Dipankar Das, Srinivas Sridharan | 2023-06-06 |
| 11556772 | Incremental precision networks using residual inference and fine-grain quantization | Abhisek Kundu, Dheevatsa Mudigere, Dipankar Das | 2023-01-17 |
| 11507815 | Scaling half-precision floating point tensors for training deep neural networks | Dipankar Das | 2022-11-22 |
| 11501139 | Scaling half-precision floating point tensors for training deep neural networks | Dipankar Das | 2022-11-15 |
| 11494163 | Conversion hardware mechanism | Dipankar Das, Chunhui Mei, Kristopher Wong, Dhiraj D. Kalamkar, Hong Jiang +2 more | 2022-11-08 |
| 11468303 | Scaling half-precision floating point tensors for training deep neural networks | Dipankar Das | 2022-10-11 |
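Several patents above cover instructions for converting FP16 values to BF8. As a rough illustration of the arithmetic such a conversion involves, the sketch below narrows IEEE half precision (1 sign, 5 exponent, 10 mantissa bits) to an 8-bit float, assuming BF8 denotes the E5M2 layout (1 sign, 5 exponent, 2 mantissa bits): since both formats share a 5-bit exponent, the conversion reduces to dropping the low 8 mantissa bits with round-to-nearest-even. This is a simplified software model only, not the patented hardware instructions, and it ignores NaN/Inf and saturation handling that real instructions would specify.

```python
import numpy as np

def fp16_to_bf8(x: np.ndarray) -> np.ndarray:
    """Round FP16 values to an assumed E5M2 "BF8" byte.

    Illustrative sketch: both formats use a 5-bit exponent, so the
    conversion truncates the mantissa from 10 bits to 2 with
    round-to-nearest-even (RNE). NaN/Inf and overflow-on-rounding
    are not treated specially here.
    """
    bits = x.astype(np.float16).view(np.uint16).astype(np.uint32)
    # RNE bias: 0x7F plus the least significant surviving bit,
    # so exact ties round toward an even (zero) last mantissa bit.
    lsb = (bits >> 8) & 1
    rounded = bits + 0x7F + lsb
    return ((rounded >> 8) & 0xFF).astype(np.uint8)

vals = np.array([1.0, 1.5], dtype=np.float16)
out = fp16_to_bf8(vals)  # 1.0 -> 0x3C, 1.5 -> 0x3E in E5M2
```

Because the exponent field is unchanged, a converted byte can be widened back to FP16 by shifting it into the high byte of a 16-bit word, which makes the round trip easy to check for exactly representable values.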