How to Set Up an Ethereum Mining Pool

This is a step-by-step guide on how to set up your own Ethereum mining pool using the open source open-ethereum-pool software. This guide covers a mining pool for a SINGLE cryptocurrency. It is not a guide for a multipool!

If you want to see what a finished pool looks like before you set it all up, check the list of existing pools.

For the purposes of this guide, I will be using the Ethereum cryptocurrency.


Requirements:

A VPS with at least 2GB of RAM (from any provider, such as Vultr, Linode, or DigitalOcean)
Ubuntu 16.04 LTS
PuTTY
WinSCP


At this point you should have your VPS started, PuTTY up and running, and you should be logged in as root.

sudo apt-get update

sudo apt-get dist-upgrade

sudo reboot

Install make and build tools

sudo apt-get install build-essential make

Install GoLang


wget https://dl.google.com/go/go1.9.2.linux-amd64.tar.gz

sudo tar -xvf go1.9.2.linux-amd64.tar.gz

sudo mv go /usr/local

export GOROOT=/usr/local/go

export GOPATH=$HOME/go

export PATH=$GOPATH/bin:$GOROOT/bin:$PATH

Type: go version

You should see a response with the installed version (e.g. go version go1.9.2 linux/amd64).
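The export commands above only last for your current shell session. A small sketch (assuming the default paths used in this guide) to make them permanent by appending them to ~/.profile:

```shell
# Persist the Go environment variables so they survive logout/reboot.
# Paths assume Go was unpacked to /usr/local/go as in this guide.
cat >> ~/.profile <<'EOF'
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
EOF

# Reload the profile in the current session
. ~/.profile
```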

Installing Redis (latest stable version)

wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make
sudo cp src/redis-server /usr/local/bin/
sudo cp src/redis-cli /usr/local/bin/

sudo mkdir /etc/redis

sudo mkdir /var/redis

sudo cp utils/redis_init_script /etc/init.d/redis_6379

sudo cp redis.conf /etc/redis/6379.conf

sudo nano /etc/redis/6379.conf

Edit the configuration file, making sure to perform the following changes:
Set daemonize to yes (by default it is set to no).
Set the dir to /var/redis/6379 (very important step!)
sudo mkdir /var/redis/6379

sudo update-rc.d redis_6379 defaults

sudo /etc/init.d/redis_6379 start

Test Redis Install

redis-cli ping

Expected response:

PONG


Install nginx

sudo apt-get install nginx

NGINX starts automatically on port 80. This is all that needs to be completed at this step.

NPM and NODE Install
Install Node.js and npm using the install guide for your Ubuntu version, then symlink the binary so scripts can find node:

ln -s `which nodejs` /usr/bin/node

User Setup

You should never run things like your coin daemon (wallet) or the pool itself as root!

Let’s create a user for your mining pool.

Log in over SSH using PuTTY as root.

adduser usernameyouwant

You'll be prompted for a password; please use one that is different from your root password.
The other info it asks for can either be filled out or left blank (just hit Enter).

Ethereum Daemon Setup (Wallet)

Now let's set up the coin daemon; I will be using Ethereum.

cd ~

mkdir eth

cd eth

git clone https://github.com/ethereum/go-ethereum.git

cd go-ethereum

export GOROOT=/usr/local/go

export GOPATH=$HOME/go

export PATH=$GOPATH/bin:$GOROOT/bin:$PATH

make all

Run Geth for the First Time

cd build/bin/

./geth --fast --cache=512

It will start the sync process; depending on the speed of your server, it may take some time to completely sync.

Once the sync is complete, we need to generate a wallet address for our pool wallet:

./geth --rpc console

Add an account for the mining pool:

personal.newAccount()

You will be asked for a passphrase, and will then see an address like " 0xf0d066c928aeb570847900a1a39fde27a98d83ce ".

Save this address on your PC somewhere, as we will need to use it in a later stage of the configuration.


To exit: press Ctrl+B, release both keys, and then press D.
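Before moving on, it can save headaches to sanity-check the address you copied. A small sketch (the address below is the example from above; substitute your own):

```shell
# Check that the saved address is well-formed: "0x" followed by 40 hex chars.
ADDR="0xf0d066c928aeb570847900a1a39fde27a98d83ce"   # replace with your pool address
if echo "$ADDR" | grep -Eq '^0x[0-9a-fA-F]{40}$'; then
    echo "address format OK"
else
    echo "address format INVALID"
fi
```

Note that this only checks the format, not the EIP-55 checksum.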

Mining Pool Setup

cd ~

git clone https://github.com/sammy007/open-ethereum-pool.git

cd open-ethereum-pool


export GOROOT=/usr/local/go

export GOPATH=$HOME/go

export PATH=$GOPATH/bin:$GOROOT/bin:$PATH

make


** Edit open-ethereum-pool >> payouts >> unlocker.go and change [ var homesteadReward = math.MustParseBig256("3000000000000000000") ] to your coin's block reward
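The reward in unlocker.go is denominated in wei (1 ether = 10^18 wei), so the value above corresponds to a 3 ETH block reward. A quick way to double-check any value you put there:

```shell
# Convert a block reward from wei to ether (1 ether = 10^18 wei).
awk 'BEGIN { print 3000000000000000000 / 1e18 }'
# prints 3
```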

** Edit the pool configs before starting the pool

cp ~/open-ethereum-pool/config.example.json ~/open-ethereum-pool/build/bin/config.json

edit config.json

** edit www/config/environment.js

Change ApiUrl: '//' to point at your own domain, e.g. ApiUrl: '//example.net/'

Running Pool

./build/bin/open-ethereum-pool config.json
You can use Ubuntu upstart – check for sample config in upstart.conf.
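Note that Ubuntu 16.04 ships with systemd rather than Upstart, so you may prefer a systemd unit. If you do use Upstart, a job in /etc/init/open-ethereum-pool.conf might look like the sketch below (the username and path are assumptions based on this guide's layout; adjust them to yours):

```
description "open-ethereum-pool"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

# Run as the unprivileged pool user created earlier (never root)
setuid usernameyouwant
chdir /home/usernameyouwant/open-ethereum-pool

exec ./build/bin/open-ethereum-pool config.json
```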

Building Frontend

The frontend is a single-page Ember.js application that polls the pool API to render miner stats.

cd www
Change ApiUrl: '//' in www/config/environment.js to match your domain name. Also don’t forget to adjust other options.

npm install -g ember-cli@2.9.1
npm install -g bower
npm install
bower install

Configure nginx to serve API on /api subdirectory. Configure nginx to serve www/dist as static website.

Serving API using nginx

Create an upstream for the API:

upstream api {
    server 127.0.0.1:8080;
}

and add this setting after location /:

location /api {
    proxy_pass http://api;
}

Here 8080 must match the host:port you set in the "listen" option of the api section of config.json.

You can customize the layout using the built-in web server with live reload:

ember server --port 8082 --environment development

Don't use the built-in web server in production.

Check out the www/app/templates directory and edit these templates in order to customise the frontend.


Configuration is actually simple, just read it twice and think twice before changing defaults.

Don’t copy config directly from this manual. Use the example config from the package, otherwise you will get errors on start because of JSON comments.

{
  // Set to the number of CPU cores of your server
  "threads": 2,
  // Prefix for keys in redis store
  "coin": "eth",
  // Give unique name to each instance
  "name": "main",

  "proxy": {
    "enabled": true,

    // Bind HTTP mining endpoint to this IP:PORT
    "listen": "",

    // Allow only this header and body size of HTTP request from miners
    "limitHeadersSize": 1024,
    "limitBodySize": 256,

    /* Set to true if you are behind CloudFlare (not recommended) or behind an
       http-reverse proxy to enable IP detection from the X-Forwarded-For header.
       Advanced users only. It's tricky to make it right and secure. */
    "behindReverseProxy": false,

    // Stratum mining endpoint
    "stratum": {
      "enabled": true,
      // Bind stratum mining socket to this IP:PORT
      "listen": "",
      "timeout": "120s",
      "maxConn": 8192
    },

    // Try to get a new job from geth in this interval
    "blockRefreshInterval": "120ms",
    "stateUpdateInterval": "3s",
    // Require this share difficulty from miners
    "difficulty": 2000000000,

    /* Reply with an error to miners instead of a job if redis is unavailable.
       Should save electricity for miners if the pool is sick and they didn't set up failovers. */
    "healthCheck": true,
    // Mark pool sick after this number of redis failures.
    "maxFails": 100,
    // TTL for workers stats, usually should be equal to the large hashrate window from the API section
    "hashrateExpiration": "3h",

    "policy": {
      "workers": 8,
      "resetInterval": "60m",
      "refreshInterval": "1m",

      "banning": {
        "enabled": false,
        /* Name of the ipset used for banning.
           Check the ipset documentation. */
        "ipset": "blacklist",
        // Remove ban after this amount of time
        "timeout": 1800,
        // Percent of invalid shares from all shares to ban miner
        "invalidPercent": 30,
        // Check after miner has submitted this number of shares
        "checkThreshold": 30,
        // Ban miner after this number of malformed requests
        "malformedLimit": 5
      },
      // Connection rate limit
      "limits": {
        "enabled": false,
        // Number of initial connections
        "limit": 30,
        "grace": "5m",
        // Increase allowed number of connections on each valid share
        "limitJump": 10
      }
    }
  },

  // Provides JSON data for the frontend, which is a static website
  "api": {
    "enabled": true,
    "listen": "",
    // Collect miners stats (hashrate, ...) in this interval
    "statsCollectInterval": "5s",
    // Purge stale stats interval
    "purgeInterval": "10m",
    // Fast hashrate estimation window for each miner from its shares
    "hashrateWindow": "30m",
    // Long and precise hashrate from shares, 3h is cool, keep it
    "hashrateLargeWindow": "3h",
    // Collect stats for shares/diff ratio for this number of blocks
    "luckWindow": [64, 128, 256],
    // Max number of payments to display in frontend
    "payments": 50,
    // Max number of blocks to display in frontend
    "blocks": 50,

    /* If you are running an API node on a different server where this module
       is reading data from a redis writeable slave, you must run an api instance
       with this option enabled in order to purge hashrate stats from the main redis node.
       Only a redis writeable slave will work properly if you are distributing using redis slaves.
       Very advanced. Usually all modules should share the same redis instance. */
    "purgeOnly": false
  },

  // Check health of each geth node in this interval
  "upstreamCheckInterval": "5s",

  /* List of geth nodes to poll for new jobs. The pool will try to get work from
     the first alive one and check the failed ones in the background in order to fail back.
     The current block template of the pool is always cached in RAM anyway. */
  "upstream": [
    {
      "name": "main",
      "url": "",
      "timeout": "10s"
    },
    {
      "name": "backup",
      "url": "",
      "timeout": "10s"
    }
  ],

  // These are standard redis connection options
  "redis": {
    // Where your redis instance is listening for commands
    "endpoint": "",
    "poolSize": 10,
    "database": 0,
    "password": ""
  },

  // This module periodically remits ether to miners
  "unlocker": {
    "enabled": false,
    // Pool fee percentage
    "poolFee": 1.0,
    // Pool fees beneficiary address (leave it blank to disable fee withdrawals)
    "poolFeeAddress": "",
    // Donate 10% from pool fees to developers
    "donate": true,
    // Unlock only if this number of blocks was mined back
    "depth": 120,
    // Simply don't touch this option
    "immatureDepth": 20,
    // Keep mined transaction fees as pool fees
    "keepTxFees": false,
    // Run unlocker in this interval
    "interval": "10m",
    // Geth instance node rpc endpoint for unlocking blocks
    "daemon": "",
    // Raise an error if geth can't be reached in this amount of time
    "timeout": "10s"
  },

  // Pay out miners using this module
  "payouts": {
    "enabled": false,
    // Require minimum number of peers on node
    "requirePeers": 25,
    // Run payouts in this interval
    "interval": "12h",
    // Geth instance node rpc endpoint for payouts processing
    "daemon": "",
    // Raise an error if geth can't be reached in this amount of time
    "timeout": "10s",
    // Address with pool balance
    "address": "0x0",
    // Let geth determine gas and gasPrice
    "autoGas": true,
    // Gas amount and price for payout tx (advanced users only)
    "gas": "21000",
    "gasPrice": "50000000000",
    // Send payment only if miner's balance is >= 0.5 Ether
    "threshold": 500000000,
    // Perform BGSAVE on Redis after a successful payouts session
    "bgsave": false
  }
}
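The payout threshold is not in wei but in shannon (also called gwei; 1 ether = 10^9 shannon), which is the unit open-ethereum-pool uses to track miner balances. A quick check that 500000000 shannon matches the 0.5 ether mentioned in the comment:

```shell
# 1 ether = 10^9 shannon (gwei); convert the threshold back to ether.
awk 'BEGIN { print 500000000 / 1e9 }'
# prints 0.5
```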

If you are distributing your pool deployment across several servers or processes, create several configs and disable the unneeded modules on each server. (Advanced users)

I recommend this deployment strategy:

Mining instance – 1x (it depends, you can run one node for EU, one for US, one for Asia)
Unlocker and payouts instance – 1x each (strict!)
API instance – 1x
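For example (a sketch showing only the relevant "enabled" flags; every other option stays as in your full config), a dedicated unlocker/payouts server would flip the modules like this:

```
// config.json on the unlocker/payouts server (sketch)
"proxy":    { "enabled": false, ... },
"api":      { "enabled": false, ... },
"unlocker": { "enabled": true,  ... },
"payouts":  { "enabled": true,  ... }
```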
