/ tokenize asset
/ secure IP
/ distribute solution
/ compute inference
/ earn income
DePIN for AI
A paid AI transaction provider within a decentralized computing network
Join Community
/ user roles and benefits
AI company: B2B studios / AI startups
benefits
cut computing costs / copyright protection / funding via asset distribution
actions
upload asset / tokenize / protect copyright / analyze performance / manage & claim
HPC provider: gamers / gaming clubs / data centers / crypto miners
benefits
hedge risks & volatility / software & legal access / accelerated payback
actions
run node / set time frame / configure models / analyze performance / manage & claim
Investor: Web3 enthusiasts / family offices / VCs
benefits
new asset class / predictable dividends / low entry barrier
actions
choose asset / analyze performance / compare / invest / manage & claim
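To make the HPC provider flow above (run node, set time frame, configure models) a bit more concrete, here is a minimal sketch of what a node-side configuration could look like. Aioma has not published a node SDK, so NodeConfig and all of its fields are hypothetical names used only to illustrate the listed actions.

```python
# Illustrative sketch only: every name here (NodeConfig, the availability
# window, the model allowlist) is hypothetical, not a published Aioma API.
from dataclasses import dataclass, field
from datetime import time


@dataclass
class NodeConfig:
    """Settings an HPC provider might supply when registering a node."""
    gpu_model: str                         # e.g. "RTX 4090" from the GPU list below
    available_from: time = time(0, 0)      # start of the daily availability window
    available_until: time = time(23, 59)   # end of the daily availability window
    allowed_models: list[str] = field(default_factory=list)  # AI models this node will serve

    def is_available(self, now: time) -> bool:
        """True if the node accepts jobs at the given wall-clock time."""
        return self.available_from <= now <= self.available_until


# A gaming PC shared outside of evening play sessions.
node = NodeConfig(
    gpu_model="RTX 4090",
    available_from=time(1, 0),
    available_until=time(17, 0),
    allowed_models=["stable-diffusion-xl", "llama-3-8b"],
)
print(node.is_available(time(9, 30)))   # True: inside the configured time frame
```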
/ Real-Time Transaction Dashboard

Total Computations
Total Payment Transactions
Total Computation Time
Overall Revenue
/ get early bonus
free computations & IP protection
free launch & cost coverage
/ What is under the hood?
The platform is organized in three layers:
1. Application layer: transaction-providing interfaces (mobile, PC, cloud, browser, API) and assets (datasets, models, pipelines)
2. Web3 layer: request manager protocol, tokenization & copyright protection protocols, revenue sharing protocol
3. AI Computing layer: HPC providers network, inference verification service, nodes
Requests travel from the application layer through the Web3 protocols down to the computing layer; inference results return along the same path.
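As a rough mental model of how the three layers interact, the sketch below follows a single paid inference call from an application interface through the Web3 routing and revenue-sharing step to an HPC node and back. All class names, the 70% revenue share, and the load-balancing rule are assumptions made for illustration; Aioma's actual protocol interfaces are not described here.

```python
# Hypothetical walk-through of the three layers; none of these classes exist
# in a real Aioma SDK, they only mirror the layer diagram above.
from dataclasses import dataclass


@dataclass
class InferenceRequest:          # 1. Application layer: mobile / PC / cloud / browser / API
    model_id: str
    payload: str
    payment: float               # amount escrowed for this paid transaction


class GPUNode:                   # 3. AI Computing layer: one HPC provider node
    def __init__(self, name: str):
        self.name, self.queue_length, self.earnings = name, 0, 0.0

    def run(self, model_id: str, payload: str) -> str:
        return f"{self.name} ran {model_id} on {payload!r}"


class RequestManager:            # 2. Web3 layer: routing plus revenue sharing
    def __init__(self, nodes: list, revenue_share: float = 0.7):
        self.nodes = nodes                   # available HPC provider nodes
        self.revenue_share = revenue_share   # fraction paid to the node operator (assumed)

    def handle(self, req: InferenceRequest) -> str:
        node = min(self.nodes, key=lambda n: n.queue_length)   # naive load balancing
        result = node.run(req.model_id, req.payload)           # "request" goes down
        node.earnings += req.payment * self.revenue_share      # revenue sharing protocol
        return result                                          # "inference" comes back


manager = RequestManager([GPUNode("rtx4090-node-1"), GPUNode("h100-node-1")])
print(manager.handle(InferenceRequest("llama-3-8b", "hello", payment=0.02)))
```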
Aioma Network pools the following GPUs for AI inference and training tasks
/ Available GPUs
Model         Capacity (VRAM)   CUDA cores   Bandwidth (GB/s)
RTX 4090      24 GB GDDR6X      16 384       1 008
RTX 5090      32 GB GDDR7       24 576       1 500
RTX 4080      16 GB GDDR6X      9 728        736
RTX 4070 Ti   12 GB GDDR6X      7 680        504
RTX 4060 Ti   8 GB GDDR6        4 352        288
RTX 3090      24 GB GDDR6X      10 496       936
RTX 3080 Ti   12 GB GDDR6X      10 240       912
RTX 3070      8 GB GDDR6        5 888        448
RTX 3060 Ti   8 GB GDDR6        4 864        448
RTX 3050      8 GB GDDR6        2 560        224
RTX 3080      10 GB GDDR6X      8 704        760
NVIDIA H100   80 GB HBM3        16 896       3 350
NVIDIA H200   141 GB HBM3e      16 896       4 800
NVIDIA A100   40 GB HBM2e       6 912        1 555
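When matching a model to one of the cards above, a common rule of thumb is that inference needs roughly the parameter count times the bytes per parameter for the weights, plus headroom for activations and the KV cache. The sketch below applies that rule of thumb to a few entries from the table; the 1.2x overhead factor is a generic assumption, not a figure from Aioma Network.

```python
# Rule-of-thumb VRAM check; the 1.2x overhead for activations / KV cache is a
# generic assumption, not a figure published by Aioma Network.
GPUS_GB = {"RTX 3060 Ti": 8, "RTX 4090": 24, "NVIDIA A100": 40, "NVIDIA H100": 80}


def fits(params_billion: float, bytes_per_param: float, vram_gb: float,
         overhead: float = 1.2) -> bool:
    """Rough check: model weights times an overhead factor must fit in VRAM."""
    weights_gb = params_billion * bytes_per_param   # 1e9 params * N bytes ~= N GB
    return weights_gb * overhead <= vram_gb


# A 7B-parameter model in fp16 (2 bytes/param) needs ~14 GB of weights,
# ~17 GB with headroom, so it fits the 24 GB+ cards but not the 8 GB ones.
for gpu, vram in GPUS_GB.items():
    print(f"{gpu:12s} 7B fp16 -> {'fits' if fits(7, 2, vram) else 'too small'}")
# Quantized to 4-bit (0.5 bytes/param), the same model also fits the 8 GB cards.
```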
We make AI more
ethical
We care about the security and legality of AI-generated content and moderate all partner models and datasets with the help of the community
inclusive
With accessible computing infrastructure and educational tracks, AI becomes easier to understand and apply
cheap
We source computing power for your solutions from the secondary market and allocate it as efficiently as possible using a specialized request manager network
secure
We strive to protect the personal data of all participants in the process and moderate generative content to mitigate its risks
advanced
We bring new offerings to the AI market and support talent and developers at a strategic level
/ telegram
/ e-mail