[Weekly Review] 2020/06/22-28
This week, I read two and a half papers:
- A domain-specific supercomputer for training deep neural networks
- In-Datacenter Performance Analysis of a Tensor Processing Unit
- Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors (unfinished)
I also learned several terms (see the posts below) as well as a little SystemC syntax.
Additionally, I watched the presentation video of "Deep Learning Hardware: Past, Present, and Future" on YouTube.
I published these blog posts this week (most of them on Sunday):
- Round-Robin Arbitration: a scheduling scheme
- Unified Power Format: the Unified Power Format (UPF) is intended to ease the job of specifying, simulating, and verifying IC designs that have a number of power states and power islands
- All-Reduce Operations: a kind of collective operation in the NCCL and MPI libraries
- Operator Fusion: fusing chains of basic operators
- A domain-specific supercomputer for training deep neural networks
- TEA-DNN: the Quest for Time-Energy-Accuracy Co-optimized Deep Neural Networks
- tinyML Talks: Low-Power Computer Vision: an introduction to hierarchical neural networks
- tinyML Talks: Saving 95% of Your Edge Power with Sparsity: explains several types of sparsity (time, space, connectivity, activation) in edge processing and how each affects computation in practice
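The round-robin arbitration scheme mentioned above can be sketched in a few lines. This is a minimal illustration, not RTL or any library's API; `round_robin_grant` and its arguments are names I made up. The idea is that the search for the next grant starts just after the previous winner, so every requester gets a fair turn.

```python
# Minimal sketch of round-robin arbitration (illustrative names only):
# each cycle, the search starts just after the last granted requester.

def round_robin_grant(requests, last_grant):
    """Return the index of the next granted requester, or None.

    requests   -- list of booleans, True if requester i is asserting
    last_grant -- index granted on the previous cycle (use -1 initially)
    """
    n = len(requests)
    for offset in range(1, n + 1):
        candidate = (last_grant + offset) % n
        if requests[candidate]:
            return candidate
    return None  # no one is requesting this cycle

# With all three requesters asserting, grants rotate fairly:
grants = []
last = -1
for _ in range(4):
    last = round_robin_grant([True, True, True], last)
    grants.append(last)
# grants == [0, 1, 2, 0]
```

In real hardware this is typically a rotating priority encoder; the loop above just mimics that rotation in software.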
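For the all-reduce note, here is a toy simulation of the ring algorithm commonly used for this collective (the function and its chunk layout are my own illustration, not NCCL's or MPI's API). Each rank's vector is split into one chunk per rank; a reduce-scatter pass sums the chunks around the ring, then an all-gather pass circulates the sums so every rank ends with the full elementwise total.

```python
# Toy ring all-reduce (sum) over n simulated ranks; illustrative only.

def ring_all_reduce(data):
    """data[r] is rank r's length-n vector; each element acts as one chunk.

    Returns the final vector held by every rank (all identical).
    """
    n = len(data)
    chunks = [list(vec) for vec in data]  # each rank's working copy

    # Reduce-scatter: after n-1 steps, rank r holds the fully summed
    # chunk (r + 1) % n. Sent values are snapshotted first because all
    # ranks send simultaneously.
    for s in range(n - 1):
        sent = [chunks[r][(r - s) % n] for r in range(n)]
        for r in range(n):
            chunks[(r + 1) % n][(r - s) % n] += sent[r]

    # All-gather: the reduced chunks travel around the ring so every
    # rank ends up with the complete summed vector.
    for s in range(n - 1):
        sent = [chunks[r][(r + 1 - s) % n] for r in range(n)]
        for r in range(n):
            chunks[(r + 1) % n][(r + 1 - s) % n] = sent[r]

    return chunks

# Three ranks, each contributing a 3-element vector:
# ring_all_reduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# -> every rank holds [12, 15, 18]
```

The appeal of the ring form is that each rank only ever talks to its neighbor, so link bandwidth is used evenly regardless of the number of ranks.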
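Operator fusion can be illustrated with a toy elementwise chain (the function names are made up for this sketch): the fused version applies both operators in one pass instead of materializing an intermediate buffer between them.

```python
# Toy sketch of operator fusion: scale followed by ReLU.

def scale_then_relu_unfused(xs, a):
    scaled = [a * x for x in xs]          # intermediate buffer is materialized
    return [max(0.0, y) for y in scaled]  # second pass reads it back

def scale_then_relu_fused(xs, a):
    # One loop, no intermediate buffer: the chain a*x -> relu is fused.
    return [max(0.0, a * x) for x in xs]

# Both give the same answer; fusion saves the memory round trip:
# scale_then_relu_fused([-1.0, 2.0], 3.0) == [0.0, 6.0]
```

In a real compiler (e.g. for accelerator kernels) the win is the same in spirit: fewer trips to memory and fewer kernel launches for a chain of cheap elementwise ops.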