Coding_Cat on April 5, 2016 | on: The Nvidia DGX-1 Deep Learning Supercomputer in a ...
Wait, how many chips did they cram in there to get 170 TFLOPS? Even at a very generous 10 TFLOPS per chip, that would be 17 chips.
krasin on April 5, 2016
The NVIDIA Tesla P100 has 21 TFLOPS of FP16 performance, by their own numbers. So they've got 8 chips in there.
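A quick sanity check on that arithmetic, using the 21.2 TFLOPS FP16 peak NVIDIA quotes for the P100 (the 21 above is rounded):

    8 GPUs x 21.2 TFLOPS/GPU = 169.6 TFLOPS ≈ 170 TFLOPS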
jsheard on April 5, 2016
Yep, they showed a diagram of how it fits together:
http://i.imgur.com/xk1daFG.jpg
aconz2 on April 5, 2016
https://devblogs.nvidia.com/parallelforall/wp-content/upload...
source: https://devblogs.nvidia.com/parallelforall/inside-pascal/
cptskippy on April 5, 2016
I wish they made that information more accessible. I wasn't able to find it on the site, and it was all I really cared about.
Coding_Cat on April 5, 2016
Ah, half-floats. That explains it. Still pretty high, but realistic at least.
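For anyone wondering why half precision doubles the peak: GP100 executes FP16 as packed two-wide vector operations, so each FP32-rate FMA slot retires two FP16 FMAs (10.6 TFLOPS FP32 -> ~21 TFLOPS FP16). A minimal CUDA sketch of the packed-half2 path; the kernel and launch parameters here are illustrative, not from the article, though half2 and __hfma2 are the real intrinsics from cuda_fp16.h (compute capability 5.3+):

    #include <cuda_fp16.h>
    #include <cstdio>

    // Each __hfma2 issues two FP16 fused multiply-adds in one
    // instruction slot; that pairing is where the 2x FP16 peak
    // on GP100 comes from.
    __global__ void fma_half2(const half2* a, const half2* b,
                              half2* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            c[i] = __hfma2(a[i], b[i], c[i]);
        }
    }

    int main() {
        const int n = 1 << 20;  // 1M half2 elements = 2M FP16 values
        half2 *a, *b, *c;
        cudaMalloc(&a, n * sizeof(half2));
        cudaMalloc(&b, n * sizeof(half2));
        cudaMalloc(&c, n * sizeof(half2));
        // Buffers left uninitialized: this only sketches the
        // instruction mix, not a meaningful computation.
        fma_half2<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();
        printf("done\n");
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }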