Blockchain, unlike “cloud computing”, is more than a buzzword: it proves superior for integral and consistent systems of record in many respects, such as IT infrastructure footprint and cryptographic security of data at rest. While there are many projects out there aiming to deliver technology solutions based on blockchain concepts, I believe Ethereum will continue to play a crucial role as the underlying backbone of distributed applications and storage.
Since Ethereum is an open source project, I performed a little exercise of launching a public node. Perhaps I could even try some mining? To make things more difficult, I’ll describe here how I did that on Gentoo running on my VPS (a virtual private server hosted by Linode), inside a Docker container. This is in no way an attempt to get rich by mining, since the VPS only offers a CPU (of which I have 2 cores) and a VGA-compatible stub driver described as:
00:01.0 VGA compatible controller: Device 1234:1111 (rev 02)
Quite obviously this cannot run any mining in a serious fashion. All in all, there are some observations gathered throughout the exercise and a few problems solved, which I hope could make someone’s life easier.
I will start by installing the Docker daemon. Surprisingly, there are no software package dependencies. That’s really good, because I want my server to remain minimal.
Packages installed: 389
Packages in system: 43
root@rzski data # equery d docker
* These packages depend on docker:
root@rzski data #
The Ethereum project team now provides an official Docker image. Once the daemon is installed, it is as easy as pulling the image from the official repo by issuing docker pull ethereum/client-go. The image is only 44 MB, which again keeps my server happy, as storage space isn’t cheap these days.
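If you want to confirm what landed locally, listing the image is enough (the exact size and tag will vary with the release you pull):
docker pull ethereum/client-go
docker images ethereum/client-go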
Before creating a new container from this image, here’s a brief comparison of the three sync modes geth (that’s the name of the Ethereum node software, written in Go) can run with; example invocations follow the list:
- --syncmode "full" – geth downloads both block headers and block data and validates everything starting from the genesis block.
- --syncmode "fast" – geth downloads block headers and block data without re-executing historical transactions, but once it catches up to the current block, it switches to “full” sync mode and starts validating everything on the chosen network. This is the default option.
- --syncmode "light" – geth downloads only block headers and fetches everything else on demand, and mining is disabled (this can be verified by the list of loaded modules; even if you try to load the miner module later through the console, geth will print an error stating mining is not possible in this mode).
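For reference, stripped of the volume and resource limits that I add further down, the three modes map onto invocations like these:
docker run -it ethereum/client-go --syncmode "full" console
docker run -it ethereum/client-go --syncmode "fast" console
docker run -it ethereum/client-go --syncmode "light" console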
I went with the default “fast” sync mode, but decided to specify resource limits to prevent my container from slowing down the other services I’m running on this server. I did the following:
Create a volume so the blockchain data could remain persistent: docker volume create geth_vol
And then launch the image to create a new running container:
docker run -it -m 2G --cpus 1.5 --storage-opt size=20G --name geth_container -v geth_vol:/root/.ethereum ethereum/client-go --syncmode "fast" console
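To double-check that the limits were actually applied, the container can be inspected; the field names below are how current Docker releases expose them, so verify against your version:
docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' geth_container
docker stats --no-stream geth_container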
If you’ve run a node previously, you’ll see how naive I’d been in limiting the container to 20 GB of storage space… Etherscan offers a chart showing how much space is actually needed to operate a public Ethereum node. At the time of writing, it is almost 100 GB, which greatly exceeds what I have available on this VPS, so I will have to abandon the mining idea and switch to light mode. Perhaps in the future, once sharding is enabled, this will no longer be an issue.
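Switching to light mode then boils down to starting a container with a different --syncmode, reusing the same volume and limits; a sketch (geth_light is an arbitrary name, and /root/.ethereum is where the client-go image keeps its data, as far as I can tell):
docker run -it -m 2G --cpus 1.5 --name geth_light -v geth_vol:/root/.ethereum ethereum/client-go --syncmode "light" console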
Other settings are quite self-explanatory: I gave the container half of my RAM and one and a half CPU cores, which shows up as the typical 150% CPU user time in “top”. The CPU limit can also be specified in microseconds of CPU time for finer granularity, and updated on demand with the docker container update directive or via Kubernetes. When it comes to limiting resources, Docker relies on cgroups. In my case, not all cgroup options were compiled into the kernel, so upon starting the container I got the following warning: WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted.
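As an example of the finer-grained variant: the --cpu-period/--cpu-quota pair below should be equivalent to --cpus 1.5 (both values are in microseconds) and can be changed while the container is running:
docker update --cpu-period 100000 --cpu-quota 150000 geth_container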
This might seem benign but is actually quite tricky. When the system runs out of memory, the Linux OOM killer sends SIGKILL to the process it deems the best victim. The OOM state is determined by a mixture of physical and swap memory, so once my container had consumed all the swap, it got killed even though it had only allocated half of the physical memory (I still had 2 GB left for other services). The only trace of this killing was a cryptic entry in docker ps -a showing that my container had stopped with exit code 137 (128 + 9, i.e. terminated by SIGKILL).
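If you run into the same thing, the stopped container’s state is worth a look; note that in a host-level OOM situation like mine the OOMKilled flag may well stay false, so the exit code is the more reliable hint:
docker inspect -f 'exit={{.State.ExitCode}} oom-killed={{.State.OOMKilled}}' geth_container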
There are two ways to address this. The first is to enable the swap accounting cgroup option in the kernel. The downside of this approach is a performance hit and the need to recompile a custom kernel. In case someone wants to give it a go, the steps are roughly: get the kernel sources, run make oldconfig against the current settings from config.gz, use make menuconfig to enable the missing cgroup swap option, build with make -j2, then install the GRUB bootloader, point it at the new kernel binary, and select that boot loader in the VPS dashboard (otherwise it will try to load the default kernel). A command sketch follows below.
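For the record, a rough sketch of those steps on a Gentoo box; I quote CONFIG_MEMCG_SWAP from memory and the exact option names differ between kernel versions, so treat this as a guide rather than a recipe:
cd /usr/src/linux
zcat /proc/config.gz > .config        # start from the running kernel's settings
make oldconfig
make menuconfig                       # enable memory cgroup swap accounting (CONFIG_MEMCG_SWAP)
make -j2 && make modules_install && make install
grub-mkconfig -o /boot/grub/grub.cfg  # then select this boot loader in the Linode dashboard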
A slightly faster option is to simply create more swap space and reduce the swappiness. The downsides here are, naturally, the extra storage space, and that not everyone has the flexibility to move partitions around as with LVM. You can, however, create a swap file wherever you do have some space, at a small performance cost (pages written to this file go through the filesystem layer). Here’s how to achieve this:
- First, drop existing caches: echo 3 > /proc/sys/vm/drop_caches
- Second, create a swap file:
dd if=/dev/zero of=/path/to/swap bs=1024 count=1500000 (for 1.5GB of swap)
chmod 0600 /path/to/swap
mkswap /path/to/swap && swapon /path/to/swap
- Finally, reduce the swappiness (roughly, how eagerly the kernel pushes pages out to swap; the default is 60): echo 10 > /proc/sys/vm/swappiness. Note: use sysctl to make this change persistent if you need to, as shown below.
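To make that change survive a reboot, the same knob is available through sysctl (assuming the usual /etc/sysctl.conf location):
sysctl -w vm.swappiness=10
echo "vm.swappiness = 10" >> /etc/sysctl.conf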
With that, the node, and the container in which it runs, should remain safe from the OOM killer. There is another option here: Docker can exempt a container from being OOM-killed altogether, but that is a silly thing to do on a production server. The OOM algorithm in a recent kernel kills the “worst” process; if it is not allowed to, it will kill something else, which could lead to a disaster. In any case, after following the steps above, it is safe to restart the container with docker start geth_container.
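For completeness, the flag I’m referring to is --oom-kill-disable, which is passed at container creation time, so it would look roughly like this (again, not something I recommend doing here):
docker run -it -m 2G --oom-kill-disable --name geth_container -v geth_vol:/root/.ethereum ethereum/client-go --syncmode "fast" console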
Note: it is very comfortable to use zsh with docker, as it auto-completes docker commands and lists help options as well as locally created container and volume names. While it is good practice to name things yourself, you don’t have to.
And that’s all: no configuration is required to start participating in the Ethereum network. Peers are auto-detected within a minute and synchronisation happens automatically. You might want to reduce the verbosity in the console (debug.verbosity(2)), check which peers you’re connected to with admin.peers, and, obviously, the status of your synchronisation with eth.syncing.
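A typical first look around the console might go like this (I’m omitting the output, which depends entirely on your peers and sync progress):
debug.verbosity(2)
admin.peers
eth.syncing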
Mining is only possible after syncing completely, but if you have the disk space for that, then all you need is to create an account:
personal.newAccount("password")
and geth will automatically use this account’s address to collect whatever it manages to mine.
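Assuming a fully synced node, and a network where CPU mining makes sense at all, the console side of it is roughly the following (one mining thread in this case; miner.stop() ends it):
personal.newAccount("password")
miner.setEtherbase(eth.accounts[0])
miner.start(1)
miner.stop()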