The following steps align with our mainnet guide. You may need to adjust file names and directory locations where appropriate. The core concepts remain the same.
As per best practices, always try everything on a testnet before doing it for real on mainnet.
🔥 Problem: Why the commotion?
If improving the stability of the beacon chain is not a good enough reason for you to switch from Prysm to either Teku or Nimbus, you also need to consider that due to the design of the beacon chain you will be subject to severe financial penalties if Prysm ever has an issue. ~Lamboshi on Twitter
🚀 Solution: Increase client diversity by migrating to Teku
🚧 How to Migrate from Prysm to Teku
PegaSys Teku (formerly known as Artemis) is a Java-based Ethereum 2.0 client designed & built to meet institutional needs and security requirements. PegaSys is an arm of ConsenSys dedicated to building enterprise-ready clients and tools for interacting with the core Ethereum platform. Teku is Apache 2.0 licensed and written in Java, a language notable for its maturity & ubiquity.
Replace **<0x_CHANGE_THIS_TO_MY_ETH_FEE_RECIPIENT_ADDRESS>** with your own Ethereum address that you control. Tips are sent to this address and are immediately spendable, unlike the validator's attestation and block proposal rewards.
Replace <MY_GRAFFITI> with your own graffiti message. However, for privacy and opsec reasons, avoid including personal information. Optionally, leave it blank by deleting the flag option.
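For reference, these placeholders typically end up as flags on the Teku validator client. A minimal sketch, assuming a recent Teku release where --validators-proposer-default-fee-recipient and --validators-graffiti are available (verify against the Teku documentation for your version):
# excerpt from a Teku launch command or service file (illustrative only)
--validators-proposer-default-fee-recipient=<0x_CHANGE_THIS_TO_MY_ETH_FEE_RECIPIENT_ADDRESS> \
--validators-graffiti="<MY_GRAFFITI>"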
🛑 2. Stop and disable Prysm
Stop and disable the Prysm services. Choose your guide.
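For example, if your original guide runs Prysm under systemd, stopping and disabling it might look like the following (the unit names here are assumptions; substitute the names used by the guide you followed):
# unit names vary by guide - replace with your actual Prysm service names
sudo systemctl stop prysm-validator prysm-beacon-chain
sudo systemctl disable prysm-validator prysm-beacon-chain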
Verify that your firewall configuration is correct.
sudo ufw status numbered
Example output of firewall configuration:
To                         Action      From
--                         ------      ----
[ 1] 22/tcp                ALLOW IN    Anywhere                   # SSH
[ 2] 9000/tcp              ALLOW IN    Anywhere                   # eth2 p2p traffic
[ 3] 9000/udp              ALLOW IN    Anywhere                   # eth2 p2p traffic
[ 4] 30303/tcp             ALLOW IN    Anywhere                   # eth1
[ 5] 22/tcp (v6)           ALLOW IN    Anywhere (v6)              # SSH
[ 6] 9000/tcp (v6)         ALLOW IN    Anywhere (v6)              # eth2 p2p traffic
[ 7] 9000/udp (v6)         ALLOW IN    Anywhere (v6)              # eth2 p2p traffic
[ 8] 30303/tcp (v6)        ALLOW IN    Anywhere (v6)              # eth1
Your router's port forwarding setup or cloud provider settings will need to be updated to ensure your validator's firewall ports are open and reachable.
You'll need to add new port forwarding rules for Teku and remove the existing Prysm port forwarding rules.
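As a sketch, assuming default ports (Teku's p2p port is 9000 tcp/udp, while Prysm's defaults are 13000/tcp and 12000/udp), the ufw changes could look like this. Verify the ports against your own configuration before applying them:
# open Teku's default p2p port
sudo ufw allow 9000/tcp
sudo ufw allow 9000/udp
# remove Prysm's p2p rules if present (Prysm defaults shown)
sudo ufw delete allow 13000/tcp
sudo ufw delete allow 12000/udp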
Optional - Update your server and reboot, as a best practice.
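If you choose to do so, a typical update and reboot on Ubuntu looks like:
sudo apt-get update && sudo apt-get dist-upgrade -y
sudo reboot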
Copy your validator_keys directory to the data directory we created above and remove the extra deposit_data file. If you no longer have the validator keys on your node, you will need to restore from file backup or restore from secret recovery phrase.
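As an illustration, assuming the keys were generated with the staking deposit CLI into $HOME/eth2deposit-cli and the Teku data directory is /var/lib/teku (both paths are assumptions; adjust them to your setup):
# copy the keystores into Teku's data directory (source path is an assumption)
sudo cp -r $HOME/eth2deposit-cli/validator_keys /var/lib/teku
# the deposit_data file is not needed by the validator client
sudo rm /var/lib/teku/validator_keys/deposit_data*.json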
🛑 FINAL WARNING REMINDER !!! Do not start the Teku validator client until you have stopped the Prysm one, or you will get slashed (penalized and exited from the system).
Wait until your validator's last attestation is in a finalized epoch - usually about 15 minutes.
Storing your keystore password in a text file is required so that Teku can decrypt and load your validators automatically.
Replace <my_keystore_password_goes_here> with your keystore password, keeping it between the single quotation marks, and then run the command to save it to validators-password.txt
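The command referred to above might look like the following, writing the password to /etc/teku/validators-password.txt (the same path used by the per-validator copy step further down):
# keep your keystore password inside the single quotation marks
sudo bash -c "echo '<my_keystore_password_goes_here>' > /etc/teku/validators-password.txt"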
Clear the bash history in order to remove any trace of the keystore password.
shred -u ~/.bash_history && touch ~/.bash_history
When specifying directories for your validator keys, Teku expects to find identically named keystore and password files. For example, keystore-m_12221_3600_1_0_0-11222333.json and keystore-m_12221_3600_1_0_0-11222333.txt
Create a corresponding password file for every one of your validators.
for f in /var/lib/teku/validator_keys/keystore*.json; do cp /etc/teku/validators-password.txt "/var/lib/teku/validator_keys/$(basename "$f" .json).txt"; done
Verify that each validator's keystore and its corresponding password file are present in the keys directory.
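A simple listing such as the one below (path as used in the previous step) should show a matching .json and .txt file for each validator:
sudo ls -l /var/lib/teku/validator_keys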
Syncing the beacon node from genesis can take up to 36 hours depending on your hardware, so keep validating with your current Prysm setup until the sync completes. With Teku's checkpoint sync enabled, however, this step finishes in just a few minutes.
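Checkpoint sync works by pointing Teku's beacon node at a recent finalized state served by a beacon node you trust. A minimal sketch, assuming a Teku version that supports the --initial-state option; the provider URL below is a placeholder, so check the Teku docs and your chosen provider for the exact endpoint:
# excerpt from a Teku launch command or service file (illustrative only)
--initial-state=<CHECKPOINT_SYNC_PROVIDER_URL>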
Syncing is complete when your beacon node's head slot matches the current slot reported by a block explorer (e.g. https://beaconcha.in/).
Check the beacon node syncing progress with the following:
journalctl -fu beacon-chain
Check the logs to verify the services are working properly and ensure there are no errors.
# view and follow the log
journalctl -fu beacon-chain
Confirm that your new Teku validator has started attesting by checking a block explorer such as beaconcha.in or beaconscan.com
🛠 Some helpful systemd commands
🗄 Viewing and filtering logs
# view and follow the log
journalctl -fu beacon-chain
# view log since yesterday
journalctl --unit=beacon-chain --since=yesterday
# view log since today
journalctl --unit=beacon-chain --since=today
# view log between two dates
journalctl --unit=beacon-chain --since='2020-12-01 00:00:00' --until='2020-12-02 12:00:00'
🔎 View the status of the beacon chain
sudo systemctl status beacon-chain
🔁 Restart the beacon chain
sudo systemctl restart beacon-chain
🛑 Stop the beacon chain
sudo systemctl stop beacon-chain
📡 6. Update Prometheus and Grafana monitoring
Select your Ethereum execution engine and then re-create your prometheus.yml configuration file to match Teku's metrics settings.
cat > $HOME/prometheus.yml << EOF
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']
  - job_name: 'nodes'
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:8008']
  - job_name: 'geth'
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /debug/metrics/prometheus
    scheme: http
    static_configs:
      - targets: ['localhost:6060']
EOF
cat > $HOME/prometheus.yml << EOF
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']
  - job_name: 'nodes'
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:8008']
  - job_name: 'besu'
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets:
        - localhost:9545
EOF
cat > $HOME/prometheus.yml << EOF
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']
  - job_name: 'nodes'
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:8008']
  - job_name: 'nethermind'
    scrape_interval: 15s
    scrape_timeout: 10s
    honor_labels: true
    static_configs:
      - targets: ['localhost:9091']
EOF
Nethermind monitoring requires Prometheus Pushgateway. Install with the following command.
sudo apt-get install -y prometheus-pushgateway
Pushgateway listens for data from Nethermind on port 9091.
cat > $HOME/prometheus.yml << EOF
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']
  - job_name: 'nodes'
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:8008']
  - job_name: 'openethereum'
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets: ['localhost:6060']
EOF
cat > $HOME/prometheus.yml << EOF
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']
  - job_name: 'nodes'
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:8008']
  - job_name: 'erigon'
    scrape_interval: 10s
    scrape_timeout: 3s
    metrics_path: /debug/metrics/prometheus
    scheme: http
    static_configs:
      - targets: ['localhost:6060']
EOF
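After rewriting prometheus.yml, restart Prometheus so it picks up the new scrape configuration (the unit name below is an assumption; use whatever your monitoring setup defines):
sudo systemctl restart prometheus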