1. To maximize the benefit for our early AIM miners, the DeepBrain Chain Foundation has decided to reduce the total number of GPUs in the first batch of AI mining machines to 756 GPUs.
2. In light of this reduction, we are limiting each individual to a maximum of 2 AIM machines total, across the categories you are eligible for according to your ranking position.
3. There will be 100 AIM Storage machines; 20 AIM Workstation 2s; 30 AIM Workstation 4s; 30 AIM Clouds and 2 AIM Clusters.
4. For more detailed information on the duration of the ranking, which positions qualify for which machines, and any other rules regarding the DBC AIM pre-sale, please download this doc.
5. For more details on the miner ranking and purchasing rules, please download this file. Thank you.
According to our plan, by the end of October 2018.
We encourage participants to be part of the DBC ecosystem even without buying AI miners from us. However, we do require each DBC node to hold a certain amount of DBC as a deposit to be eligible for mining rewards.
The GPUs that support efficient AI training, especially deep learning, are primarily Nvidia GPUs that meet certain standards (typically above the GTX 1080Ti). We are open to user-owned AI workstations joining our ecosystem, as long as they can contribute effective AI training. Information on the minimum requirements for AI workstations will be released before our mainnet goes live.
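The "above GTX 1080Ti" baseline can be expressed as a simple hardware check. This is only a sketch under stated assumptions: the VRAM threshold and GPU whitelist below are illustrative, not DBC's official minimum requirements, which will be published before mainnet launch.

```python
# Sketch of a minimum-hardware check for candidate AI workstations.
# The exact whitelist and thresholds here are assumptions, not the
# official DBC requirements.

MIN_VRAM_GB = 11  # assumption: GTX 1080Ti-class cards carry 11GB of VRAM

# Hypothetical whitelist of GPU families at or above the stated baseline.
ELIGIBLE_GPU_KEYWORDS = ("1080 Ti", "TITAN", "Tesla V100", "RTX")

def meets_minimum(gpu_name: str, vram_gb: float) -> bool:
    """Return True if a GPU plausibly meets the 'above GTX 1080Ti' baseline."""
    name_ok = any(k.lower() in gpu_name.lower() for k in ELIGIBLE_GPU_KEYWORDS)
    return name_ok and vram_gb >= MIN_VRAM_GB

print(meets_minimum("GeForce GTX 1080 Ti", 11))  # True
print(meets_minimum("GeForce GTX 1060", 6))      # False
```

A real deployment would read the GPU name and memory from the driver (e.g. via `nvidia-smi`) rather than passing them in by hand.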
DBC tokens will be rewarded for consensus nodes, computing power, and storage mining. Based on advanced AI-based algorithms, each AI mining machine's resources (available compute power, storage, and memory) will be evaluated and benchmarked against AI training speed. The more compute power and storage offered to the shared DBC ecosystem, the faster the training, and the greater the DBC rewards.
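The proportionality described above can be sketched as a simple benchmark-weighted split of a reward pool. The actual DBC algorithm is AI-based and not public; the `distribute_rewards` function, the miner names, and the scores below are all illustrative assumptions, with a single benchmark score standing in for the evaluated compute, storage, and memory.

```python
# A minimal sketch of proportional reward distribution: each machine's
# share of a reward pool is proportional to its benchmark score. This is
# an assumption, not the real (AI-based) DBC reward algorithm.

def distribute_rewards(scores: dict, pool: float) -> dict:
    """Split `pool` DBC among machines in proportion to benchmark scores."""
    total = sum(scores.values())
    if total == 0:
        return {machine: 0.0 for machine in scores}
    return {machine: pool * s / total for machine, s in scores.items()}

# Hypothetical benchmark scores for two miners sharing a 1000-DBC pool.
scores = {"miner_a": 300.0, "miner_b": 100.0}
print(distribute_rewards(scores, 1000.0))  # {'miner_a': 750.0, 'miner_b': 250.0}
```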
The DeepBrain Chain ecosystem will primarily use GPUs, as they have proven successful and best suited for today's AI training, especially large-scale deep learning and machine learning. But our team is tracking ASICs, FPGAs, and quantum computing very closely, and will always utilize the most efficient and effective hardware and software to enable the best AI training on DeepBrain Chain.
Yes, we are working with reputable mining pools to host and sell shares of DBC AI Miners in the future.
We do have a reference architecture and will release detailed specs on our website over time, before the official go-live of our MainNet in Q4.
We plan to support popular AI frameworks such as TensorFlow, PyTorch, Caffe, and CNTK.
A common rule of thumb is to design the system with more than twice the aggregate local memory of the GPUs, since data does not always stay on the GPUs. For example, with four GPUs at 12GB each, system memory should exceed twice the 48GB on the GPU devices, i.e. over 96GB of DDR4. Falling below this recommended ratio will degrade performance in training workloads.
For hot data, SSDs are mandatory for deep-learning workloads. We acknowledge that we cannot design for all use cases; at the same time, we have benchmarked other products and recognize that data requirements correlate with model size. Our rule of thumb is to keep 2TB of SSD available per GPU.
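The two sizing rules of thumb above can be combined into a short calculation. This is a sketch under stated assumptions: the function name and output layout are illustrative, and the factors simply encode the stated rules (system RAM more than twice aggregate GPU memory, 2TB of SSD per GPU).

```python
# Sizing sketch for an AI training system, combining the two rules of
# thumb from the text. The factors below are the stated guidelines, not
# an official DBC specification.

MEMORY_FACTOR = 2    # system RAM should exceed 2x aggregate GPU memory
SSD_TB_PER_GPU = 2   # keep 2TB of SSD per GPU for hot data

def recommended_system(gpu_count: int, vram_gb_per_gpu: int) -> dict:
    """Return minimum system RAM (GB) and SSD (TB) for a GPU configuration."""
    aggregate_vram_gb = gpu_count * vram_gb_per_gpu
    return {
        "min_ram_gb": MEMORY_FACTOR * aggregate_vram_gb,
        "min_ssd_tb": SSD_TB_PER_GPU * gpu_count,
    }

# The worked example from the text: four GPUs with 12GB each.
print(recommended_system(4, 12))  # {'min_ram_gb': 96, 'min_ssd_tb': 8}
```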
DBC is designed to support operating systems based on the workload requirements of deep-learning training computation. At present, the dominant OS for deep learning is Linux-based systems.