Technical FAQs
Questions related to the technicalities of deploying on StackAI.
Can I access my workload remotely?
StackAI automatically ensures that your workload is exposed publicly with a default HTTPS endpoint for quick testing.
Can I use a custom domain?
Yes. Adding a TXT record that proves you own the domain/hostname will enable you to use custom domains.
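As an illustration, such a verification record might look like the following (the host and value shown here are assumptions for the example; StackAI will provide the actual record to add):

```
; hypothetical DNS TXT record proving ownership of app.example.com
app.example.com.  3600  IN  TXT  "stackai-verification=abc123"
```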
Are there multiple clusters?
Yes, StackAI currently has 3 clusters of operation and will soon expand to even more service providers.
Why do we have clusters and not just global compute?
Typical applications have a Web Server and a Database Server. To ensure you have the best performance, the Web Server, Database Server and associated storage must be located together.
How can I connect my workloads, but secure against outsiders?
When you log in with your Metamask wallet, a private network for your workloads is created.
Workload Pods that run within this account can talk to each other without restriction.
If you need to separate Development and Production Environments completely, simply use two different Metamask Wallets.
Is StackAI Secure?
Yes, you may select from any of the Secure Datacenter Clusters operated by the core StackAI team. More clusters will be added in the future, and you can choose the ones that meet your requirements.
You can expect the same security from StackAI as you would from a traditional Cloud provider, with the added benefit that we do not know who you are and therefore cannot target your workloads.
How much CPU/RAM is available?
You can reserve as much CPU and RAM as you wish when you configure your account.
There are limits on the CPU/RAM that you can use for each individual workload, however, if you need to exceed your initial quotas, please contact us to have it adjusted.
How are sudden spikes in demand handled?
You reserve the amount of CPU, RAM, Disk and Bandwidth you require during your onboarding.
As long as your spikes are within your reserved amount, StackAI is designed to ensure those resources are available to you.
How hard is it to debug on StackAI?
StackAI is based on Docker and provides Kubernetes as a built-in layer.
All your knowledge of debugging Docker images applies unchanged in StackAI.
You can obtain a shell into your workloads via the StackAI WebTTY just like you would from a Linux shell prompt.
How can I link two pods/workloads together?
Assume you have a webserver and a database server.
You name the webserver "web" and the database server "db"
Each server can be contacted via an internal hostname. For example:
web-0xYourEthereumAddress
db-0xYourEthereumAddress
It is fairly typical then to connect to your mysql server with a command such as:
mysql -P3306 -uusername -p -h db-0xYourEthereumAddress dbname
Can I SSH into my server?
First, Docker containers typically do NOT run an SSH server. This is both for security and because direct SSH access is not necessary.
Instead, open a WebTTY within StackAI.
Then use standard Kubernetes commands to do everything else.
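For example, once inside a WebTTY, everyday debugging might look like this sketch (the pod name follows the name-EthereumAddress pattern described above; the exact names are illustrative):

```
# sketch: standard kubectl commands run from a WebTTY session
kubectl get pods                                        # list your workload pods
kubectl logs web-0xYourEthereumAddress                  # view a pod's logs
kubectl exec -it web-0xYourEthereumAddress -- /bin/sh   # shell into a pod
```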
How can I see the logs for my server?
Open a WebTTY within StackAI and use the standard Kubernetes logging command against your pod, for example: kubectl logs web-0xYourEthereumAddress
How do I preserve disk content across reboots?
Each pod can configure a directory that will persist across reboots and even docker image upgrades.
If more than one directory must persist, add a shell command to your docker image that symlinks each of those directories into a single "target persisted data directory".
For example, you persist this directory:
/pdata
Then you can locate two types of data within it:
/pdata/dataType1
/pdata/dataType2
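The symlinking step above can be sketched as follows. /pdata is the persisted directory from the example; this demo uses stand-in paths under /tmp so it runs anywhere, and the directory names are illustrative:

```shell
# Sketch: link the app's data directories into the single persisted directory.
PDATA=/tmp/pdata-demo            # stands in for /pdata in this demo
APPDIR=/tmp/app-demo             # stands in for the app's data locations

mkdir -p "$PDATA/dataType1" "$PDATA/dataType2" "$APPDIR"

# Replace the app's directories with symlinks into the persisted area.
ln -sfn "$PDATA/dataType1" "$APPDIR/config"
ln -sfn "$PDATA/dataType2" "$APPDIR/uploads"

# Writes through the app paths now land in the persisted directory.
echo hello > "$APPDIR/config/settings.txt"
cat "$PDATA/dataType1/settings.txt"    # prints "hello"
```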
If you have a more complex requirement where user ownership switching is required, follow this guide:
If you need assistance please contact us
My docker image doesn't have a special startup script
Docker is built with the idea of composability. Select the image you wish to use and add what you need.
For example a customized Dockerfile could be something like:
FROM imageWeLike:1.0.0
COPY somethingWeNeed /newStartup.sh
ENTRYPOINT ["/newStartup.sh"]
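The /newStartup.sh copied above might be as simple as the following sketch, where /original-entrypoint.sh is a hypothetical stand-in for whatever the base image normally runs:

```
#!/bin/sh
# Hypothetical wrapper: perform our extra setup, then hand off to the
# base image's original process via exec so signals reach it directly.
mkdir -p /pdata/dataType1 /pdata/dataType2
exec /original-entrypoint.sh "$@"
```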
Any starting tips?
Yes! We've seen a lot of problems when pods are run with too little CPU & RAM.
We recommend "over allocating" at first just to check that things are running correctly.
Then adjust the CPU/RAM to just the amount you need.
It is really difficult to debug an application that is CPU/RAM-starved.
Who runs these clusters?
In this first stage, the StackAI team runs and manages the three public clusters.
We are currently working on tools that will make it easy for others to operate clusters as well.
Can I run compute nodes too?
In the future, community members will be able to set up clusters and nodes on the StackAI cloud and receive payments in tokens when resources are consumed by the applications deployed on StackAI.