Storage Options
Network Volumes
Keep your data safe and accessible across GPU sessions with Hyperbolic's persistent network storage volumes.
What are Network Volumes?
Network volumes are persistent storage drives that keep your data even after your GPU instances shut down. Think of them as external hard drives for the cloud – perfect for storing datasets, model checkpoints, and any files you want to keep between training sessions.
Whether you're a student working on a semester project or a researcher managing large datasets, network volumes ensure your work is always there when you need it.
When to Use Network Volumes
Network volumes are perfect for:
- Large datasets that take time to download and prepare
- Model checkpoints you want to save during long training runs
- Project files that need to persist between sessions
- Shared storage for collaborative research projects
- Backup storage for important experimental results
You can always skip adding a volume if you're just running quick experiments or testing code.
Getting Started
Creating Your First Volume
When launching a GPU instance, you'll see the volume attachment step in your deployment wizard:
- Choose "Create New Network Volume" if this is your first time
- Give your volume a name – something memorable like "mnist-dataset" or "thesis-project"
- Select your storage size using the slider (256GB to 8TB available)
- Click "Create Volume" and we'll set it up for you
The volume will be ready to use as soon as your GPU instance starts.
Using an Existing Volume
Already have volumes from previous sessions? Perfect! You can reattach them:
- Choose "Attach Existing Network Volume"
- Select from your available volumes – you'll see the name, size, and location
- Check the hourly cost for each volume option
- Click "Attach and Continue" to connect it to your new instance
For the best performance, only volumes in the same region as your GPU will be shown.
Skipping Volume Attachment
Running a quick test or don't need persistent storage? No problem:
- Click "Skip" to continue without a volume
- You can always add volumes to future deployments when you need them
Volume Specifications
Size Options
- Minimum: 256GB – great for small datasets and code projects
- Maximum: 8TB – perfect for large-scale research datasets
- Increments: Choose any size that fits your needs
Regional Availability
- Volumes are region-specific for optimal performance
- Create volumes in the same region as your GPU instances
- Your existing volumes will only appear when deploying in matching regions
Pricing
Network volumes are billed at $0.0008/TB/hour. You'll see the exact hourly cost for each volume option before making your selection.
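To estimate what a volume will cost, multiply its size in TB by the hourly rate and the number of hours you keep it. A quick sketch of the arithmetic (the helper function is illustrative, using the $0.0008/TB/hour rate above):

```python
# Published storage rate: $0.0008 per TB per hour.
RATE_PER_TB_HOUR = 0.0008

def storage_cost(size_tb, hours):
    """Return the cost in dollars of holding a volume of size_tb for `hours`."""
    return size_tb * RATE_PER_TB_HOUR * hours

# A 1 TB volume kept for a 30-day month (720 hours):
print(f"${storage_cost(1, 720):.2f}")   # $0.58
```

So even the 8TB maximum works out to well under $5/month of storage cost.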
Mounting Your Volume
After attaching a volume to your GPU instance, you'll need to mount it manually to access your data. Here's how:
Step 1: Connect to Your Instance
SSH into your GPU instance using the connection details from your instance's Details panel:
```shell
ssh user@your-instance-ip
```
Step 2: Configure the File System
Add your storage volume configuration to `/etc/fstab`. You'll need your volume's virtual IP address, which you can find in:
- The "Mounting your storage volume" dropdown in your instance Details panel
- Your volume entry on the Storage page
```shell
# Replace <storage-vip> with your actual Storage virtual IP
echo "<storage-vip>:/data /data nfs rw,nconnect=16,nfsvers=3 0 0" | sudo tee -a /etc/fstab
```
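If you automate instance setup, the same fstab entry can be generated programmatically. A minimal sketch (the `build_fstab_entry` helper is illustrative, not part of Hyperbolic's tooling; substitute your volume's real virtual IP):

```python
def build_fstab_entry(storage_vip, mount_point="/data",
                      options="rw,nconnect=16,nfsvers=3"):
    """Format an NFS fstab line matching the entry shown in this guide."""
    # Fields: device, mount point, fs type, mount options, dump, fsck order.
    return f"{storage_vip}:/data {mount_point} nfs {options} 0 0"

# Example with a placeholder address -- replace with your Storage virtual IP.
print(build_fstab_entry("10.0.0.42"))
# 10.0.0.42:/data /data nfs rw,nconnect=16,nfsvers=3 0 0
```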
Step 3: Mount the Volume
Run the mount command to connect your volume:
```shell
sudo mount -a
```
Step 4: Verify the Mount
Confirm your volume mounted successfully:
```shell
df -h
```
You should see your `/data` directory connected to your storage volume's virtual IP.
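You can also verify the mount from Python before a long job starts writing to it. A small sketch using only the standard library (`/data` is the mount point from the steps above):

```python
import os

def assert_mounted(path="/data"):
    """Fail fast if the volume is not actually mounted at `path`."""
    if not os.path.ismount(path):
        raise RuntimeError(f"{path} is not a mount point; run 'sudo mount -a' first")
    return True

# Sanity check: the root filesystem is always a mount point.
assert_mounted("/")
```

Calling this at the top of a training script turns a silent write-to-local-disk mistake into an immediate, obvious error.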
Working with Your Mounted Volume
Once mounted, you can use your volume like any other directory:
```python
# Access your volume at /data
import os

# List files in your volume
print(os.listdir("/data"))

# Save files to your volume
with open("/data/results.txt", "w") as f:
    f.write("Training completed successfully!")

# Create project structure
os.makedirs("/data/my-project/models", exist_ok=True)
os.makedirs("/data/my-project/datasets", exist_ok=True)
```
Best Practices
Organize your data:
```shell
# Create a clear folder structure
mkdir -p /data/my-project/{datasets,models,outputs,logs}
```
Save checkpoints regularly:
```python
# Save model checkpoints to your volume
import torch

torch.save(model.state_dict(), '/data/my-project/models/checkpoint_epoch_10.pth')
```
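If you are not using PyTorch, the same checkpoint-to-volume pattern works with any serializer. A framework-agnostic sketch using `pickle` (the helper names and the `/tmp` demo path are illustrative; on a real instance you would point `directory` at `/data/my-project/models`):

```python
import os
import pickle

def save_checkpoint(state, directory, epoch):
    """Write `state` to directory/checkpoint_epoch_<epoch>.pkl and return the path."""
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, f"checkpoint_epoch_{epoch}.pkl")
    with open(path, "wb") as f:
        pickle.dump(state, f)
    return path

def load_checkpoint(path):
    """Read a checkpoint written by save_checkpoint."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Checkpoint every few epochs so a lost instance costs minutes, not hours.
ckpt = save_checkpoint({"epoch": 10, "loss": 0.42}, "/tmp/demo-models", 10)
print(load_checkpoint(ckpt)["epoch"])   # 10
```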
Keep important outputs:
```python
# Save results and visualizations
import matplotlib.pyplot as plt

# Generate your plots, then write them to the volume
plt.savefig('/data/my-project/outputs/training_curve.png')
```
Volume Management
Lifecycle
- Create: Set up new volumes during instance deployment
- Attach: Connect existing volumes to new instances
- Detach: Volumes persist when instances shut down
- Reuse: Attach the same volume to multiple deployments over time
Data Persistence
Your data stays safe on the volume even when:
- Your GPU instance stops running
- You switch to a different GPU type
- You deploy in the same region weeks later
Sharing and Collaboration
Each volume belongs to your account and can be attached to any of your GPU instances in the same region. Perfect for maintaining consistent environments across different experiments.
Common Use Cases
Machine Learning Workflows
Dataset Storage:
```python
import os
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder

# Store your datasets on a volume for reuse
dataset_path = "/data/imagenet"

# Download once, use everywhere
if not os.path.exists(dataset_path):
    download_imagenet(dataset_path)

# Load data in any session
train_loader = DataLoader(ImageFolder(dataset_path))
```
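The download-once pattern generalizes to any dataset: guard the expensive step with a completion marker, so a partially finished download isn't mistaken for a ready dataset. A sketch (the `.complete` marker-file convention and `ensure_dataset` helper are assumptions, not a Hyperbolic feature; the demo uses a throwaway directory standing in for `/data`):

```python
import os
import tempfile

def ensure_dataset(path, download_fn):
    """Run download_fn(path) only if the dataset isn't already complete."""
    marker = os.path.join(path, ".complete")
    if os.path.exists(marker):
        return False                      # already cached on the volume
    os.makedirs(path, exist_ok=True)
    download_fn(path)                     # expensive step runs at most once
    open(marker, "w").close()             # mark success only after it finishes
    return True

# First call downloads; later sessions reuse the cached copy.
path = os.path.join(tempfile.mkdtemp(), "imagenet")
first = ensure_dataset(path, lambda p: None)
again = ensure_dataset(path, lambda p: None)
print(first, again)   # True False
```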
Model Development:
```python
import torch

# Keep model versions organized
model_dir = "/data/my-research/models"

# Save different experiments
torch.save(model, f"{model_dir}/experiment_v{version}.pth")

# Load previous work
previous_model = torch.load(f"{model_dir}/best_model.pth")
```
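When experiments pile up on a volume, a small helper can locate the newest version automatically. A sketch, assuming the `experiment_v<N>.pth` naming from the example above (`latest_experiment` is illustrative):

```python
import os
import re

def latest_experiment(model_dir):
    """Return the path of the highest-numbered experiment_v<N>.pth, or None."""
    pattern = re.compile(r"experiment_v(\d+)\.pth$")
    best, best_version = None, -1
    for name in os.listdir(model_dir):
        m = pattern.match(name)
        if m and int(m.group(1)) > best_version:
            best_version = int(m.group(1))
            best = os.path.join(model_dir, name)
    return best

# With experiment_v1.pth .. experiment_v3.pth on disk, returns the v3 path.
```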
Research Projects
Long-term Studies:
- Store experimental data across multiple sessions
- Maintain version control for research code
- Keep research papers and documentation organized
Collaborative Work:
- Share consistent environments between team members
- Maintain standardized dataset versions
- Preserve experimental results for peer review
Troubleshooting
Volume Not Appearing
If your volume doesn't show up in the attachment list:
- Check the Storage tab: Verify your volume exists in the Storage section
- Verify ownership: Only your volumes are visible
- Wait for creation: New volumes may take a moment to appear
Mounting Issues
If you can't mount your volume:
- Check NFS installation: Ensure `nfs-common` is installed with `sudo apt install nfs-common`
- Verify virtual IP: Double-check the virtual IP address in your Storage details
- Check fstab syntax: Ensure the `/etc/fstab` entry follows the correct format
- Try manual mount: Use `sudo mount -t nfs <storage-vip>:/data /data -o rw,nconnect=16,nfsvers=3`
Access Issues
If you can't access your volume files after mounting:
- Check the mount path: Your data should be accessible at `/data`
- Verify permissions: Ensure your user has read/write access to `/data`
- Check mount status: Run `df -h` to confirm the volume is properly mounted
- Restart if needed: Sometimes remounting helps: `sudo umount /data && sudo mount -a`
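The access checks above can be combined into one quick diagnostic you run on the instance. A sketch using only the standard library (`diagnose` is illustrative; `/data` is the mount point from this guide):

```python
import os

def diagnose(path="/data"):
    """Return a dict of the access checks described above."""
    return {
        "exists": os.path.isdir(path),
        "is_mount": os.path.ismount(path),   # programmatic stand-in for df -h
        "readable": os.access(path, os.R_OK),
        "writable": os.access(path, os.W_OK),
    }

# "/" always exists and is a mount point, so the checks behave sensibly:
print(diagnose("/"))
```

Any `False` in the output tells you which troubleshooting step above to revisit.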
Performance Considerations
For best performance:
- Use appropriate mount options: The recommended `nconnect=16,nfsvers=3` options optimize throughput
- Organize files efficiently to minimize directory traversal
- Consider volume size for your workload requirements