Upgrade and Monitor Packages on Ubuntu

It is a New Year. As I was moving my rigs around, I felt brave enough to potentially cause chaos and upgrade the packages. I'm mainly doing so to stay on top of security fixes.

SSHing in showed me the state of affairs:

211 packages can be updated.
4 updates are security updates.

To update the packages, type the following into the terminal:

sudo apt-get update
sudo apt-get dist-upgrade

Lots of logging information should start scrolling by. If for some reason you need to track it via an actual log file, you can do the following:

tail -f /var/log/dpkg.log
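If you want to see exactly what the upgrade changed afterwards, dpkg keeps one line per package action in that same log; a quick filter (the path is standard on Ubuntu; this prints nothing if the log is absent):

```shell
# Each upgraded package appears as an " upgrade " action line in
# /var/log/dpkg.log; show the last few.
grep ' upgrade ' /var/log/dpkg.log 2>/dev/null | tail -n 5
```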

Then, reboot for good measure:

sudo shutdown -r now

Next time you log in, you will see all updates have been applied:

0 packages can be updated.
0 updates are security updates.

Finding Rigs on Your Local Network

Just moved my rigs to a network set up for DHCP, and as a result I lost track of what-rigs-have-what-IP.

The way I got a handle on it was to use nmap from one of my Ubuntu machines.

I’ve got a mix of Ubuntu and Windows machines.

To locate them, I scan for all machines running SSH (port 22) or RDP (port 3389) on my local network (192.168.3.XXX), then go through and look for hosts that report “open” for the ports of interest.

sudo nmap -sS -p 22 192.168.3.0/24
sudo nmap -sS -p 3389 192.168.3.0/24
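To avoid eyeballing two separate scans, the two ports can be combined and nmap's greppable output filtered down to open ports; a sketch where the subnet is assumed from above, and the parsing is demonstrated against sample output since the real scan needs root:

```shell
# Sample of nmap's greppable (-oG) output; the real invocation would be:
#   sudo nmap -sS -p 22,3389 --open -oG - 192.168.3.0/24
scan_output='Host: 192.168.3.10 ()	Ports: 22/open/tcp//ssh///
Host: 192.168.3.11 ()	Ports: 3389/filtered/tcp//ms-wbt-server///'

# Keep only hosts that report an open port; field 2 is the IP address.
printf '%s\n' "$scan_output" | awk '/\/open\//{print $2}'
```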

Magical OSX Script to Backup Remote Directories Locally

You can use the following script to back up directory structures from a remote host.

It will create a timestamped folder structure that lets you track multiple backups over time.

I use this to back up my Dreamhost website before I do any big updates. Likely could also be used in crypto scenarios. This may work on Ubuntu as well – have not tried it.

It uses rsync, an awesome directory synchronization tool that only transfers the files that have changed (i.e. the deltas).

Here it is:


read -p "Continue with PROD -> LOCAL BACKUP (yes/no)? " CONT

if [ "$CONT" == "yes" ]; then
        backup_parent_dir="$HOME/Desktop/backup prod"
        backup_date=`date +%Y_%m_%d_%H_%M`
        backup_dir="$backup_parent_dir/$backup_date"

        mkdir -p "$backup_dir" # The -p flag creates any missing parent directories of the target directory

        rsync -avz -C --delete some_username@somewebsite.com:/some/awesome/folder/path/ "$backup_dir"
else
        echo "OK...not doing anything";
fi

The username, host, remote path, and local backup folder are the areas you will need to customize per your lifestyle.

Note on the some_username@somewebsite.com part…that only works without a username and password prompt because I set up the two machines to trust each other as explained here.
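Since each run lands in a new timestamped folder under backup_parent_dir, old backups accumulate; a sketch for pruning all but the five most recent (the path and retention count are assumptions, adjust per your setup):

```shell
# Delete all but the five newest timestamped backup folders.
# Timestamp names (YYYY_MM_DD_HH_MM) sort chronologically, so a
# reverse lexical sort puts the newest first.
backup_parent_dir="$HOME/Desktop/backup prod"
ls -1d "$backup_parent_dir"/*/ 2>/dev/null | sort -r | tail -n +6 | while read -r d; do
  rm -rf "$d"
done
```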


Fixing PCIe Bus Errors on a BIOSTAR TB250-BTC

My rig with a BIOSTAR TB250-BTC board was constantly logging PCIe Bus Error messages under /var/log/kern.log and /var/log/syslog. About twenty GBs worth of log files!

Beyond the logging errors, I couldn’t have more than four GPUs attached until I performed the below fix. FYI, I am using five ZOTAC GeForce GTX 1060 AMP Edition (model: ZT-P10600B-10M) cards.

The solution: you need to enable “Miner Mode” in the BIOS Settings for the board.

  1. During boot, hold the Delete key until you enter the motherboard setup.
  2. Once in, navigate to: Chipset => Miner Mode => Set to [Enabled]

For reference, here’s the error that was filling my logs:

pcieport 0000:00:1c.7: AER: Corrected error received: id=00e7
pcieport 0000:00:1c.7: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e7(Receiver ID)
pcieport 0000:00:1c.7:   device [8086:a297] error status/mask=00000001/00002000
pcieport 0000:00:1c.7:    [ 0] Receiver Error         (First)

Doing research online led me down a couple of paths that are NOT needed, all revolving around adding pci flags to /etc/default/grub. Some red-herring suggestions were:

  • GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=nommconf"
  • GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=nomsi"

Lesson for the future: After building rigs, it is worth checking whether errors are being perpetually written to the /var/log/ directory. You may not realize it until you either run out of space or the error finally manifests in a way that forces you to investigate. In my case it was adding a fifth GPU.
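Following that lesson, a quick sketch for spotting runaway logs – list the largest entries under a directory, /var/log by default (run with sudo to see everything):

```shell
# List the ten largest entries under a directory, biggest first.
# Permission errors are suppressed so it degrades gracefully without sudo.
biggest_logs() {
  du -ah "${1:-/var/log}" 2>/dev/null | sort -rh | head -n 10
}
biggest_logs
```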

Running Headless Nvidia Mining Rig Without HDMI Plugs

To run a headless Ubuntu server that you can remote into and that will mine your crypto, you need to do one of the following:

  • The easy: Buy a dummy HDMI plug for each rig and move on with your life.
  • The hard (but adventuresome):
      1. From the terminal run this command to update /etc/X11/xorg.conf to fake your system out into thinking you have a monitor connected:
        $ sudo nvidia-xconfig --use-display-device="DFP-0" --connected-monitor="DFP-0"

        Note: this command assumes that you previously booted the system into the GUI and installed the “NVIDIA X Server Settings” client (needed to populate xorg.conf). See my overclocking post if you don’t have “NVIDIA X Server Settings” running yet.

      2. Now if you want to VNC into (a.k.a. “remote into”) your rig, you need to install a VNC server on the destination machine (see my post on steps for that). Assuming you did step one above and installed a VNC server, you will discover that when you remote in, the resolution defaults to 800 x 600, which is not helpful. In order to have a sensible default resolution when you remote in, modify (or create) the file /etc/lightdm/lightdm.conf (sudo nano /etc/lightdm/lightdm.conf) to resemble the following:
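A minimal sketch of what that lightdm.conf could contain – the section name and the display-setup-script key are assumptions based on typical lightdm setups, pointing at the resolution script created in the next step:

```
[SeatDefaults]
display-setup-script=/home/YOURLOGIN/Desktop/vnc_monitor_resolution.sh
```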

        Note: Update YOURLOGIN to the account you use to SSH in with.

        Now, we have to create and configure the vnc_monitor_resolution.sh file.

        $ cd ~/Desktop
        $ nano vnc_monitor_resolution.sh

        Then within the file, add:

        xrandr --fb 1360x768

        Note: The resolution can be whatever works for you.

        After that we have to make the file executable.

        $ chmod a+x vnc_monitor_resolution.sh

        Next time you VNC in, magic should happen and it will be in the resolution specified above!


Overclocking GTX 1060s with Coolbits with Persistence Through Reboots

A path for overclocking GTX 1060s on Ubuntu 16.04 LTS:

  1. Install NVIDIA drivers via the GUI: Start Menu -> Software -> Software & Updates -> Additional Drivers Tab -> “Using NVIDIA binary driver…” radio button -> Apply -> Restart
  2. Know that there is a graphical interface for managing your NVIDIA GPUs: Start Menu -> NVIDIA X Server Settings
  3. See that you cannot manually adjust clock settings via the “NVIDIA X Server Settings” application under the PowerMizer settings for each GPU
  4. Understand that there are two important configuration files that govern configurations for your NVIDIA GPUs: /etc/X11/xorg.conf and ~/.nvidia-settings-rc. The latter, I believe, is populated once “NVIDIA X Server Settings” is run.
  5. To “unlock” the fields to adjust clock speed in “NVIDIA X Server Settings” open up a terminal window and run:
    $ sudo nvidia-xconfig --enable-all-gpus
    $ sudo nvidia-xconfig --cool-bits=8

    This will update the xorg.conf file for each of your GPUs and set cool-bits flags. (More information on cool-bits)

  6. Now reboot:
    $ sudo shutdown -r now
  7. Once rebooted, “NVIDIA X Server Settings” will show you the unlocked graphics clock and memory transfer fields under PowerMizer.
  8. Like many things with Ubuntu, there is potential for drama here. Sometimes you can update a field by hitting enter. Other times, you can hit enter and the field will not update. In either case, a better approach that persists through reboots is to edit the ~/.nvidia-settings-rc file, adding clock-offset attribute lines for each GPU at the end of the file.
  9. There are many different ways to do things, and the above steps came about to work around command line issues that currently prevent overclocking via the nvidia-settings command from taking hold. The above also requires the X server to be running, and there may be ways around that – but I take it as a given that I will be running some sort of GUI as a convenience. Much of the flow above came from this discussion. Lastly, here’s a good discussion that may work in the future once the existing driver issues are resolved.
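As a sketch, the overclock entries added to the end of ~/.nvidia-settings-rc in step 8 look like the following – the offset values and the performance-level index [3] are assumptions; tune them for your cards:

```
[gpu:0]/GPUGraphicsClockOffset[3]=100
[gpu:0]/GPUMemoryTransferRateOffset[3]=1200
[gpu:1]/GPUGraphicsClockOffset[3]=100
[gpu:1]/GPUMemoryTransferRateOffset[3]=1200
```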

Backup Disk Images to Another Ubuntu Box

I wanted to back up my mining rig disk images to another machine in the hopes that I could easily load the images to extract files. The following works well for home directories that are not encrypted. For encrypted directories, I am currently copying files out selectively – not sure if there is a better way. But what follows works for copying unencrypted disk images over SSH and then mounting them on another machine.

Determine the path for the filesystem you want to backup:

$ df

On the machine you want to backup:

$ sudo dd if=path/from/df | gzip -c --fast | ssh user@ip 'dd of=~/path/to/backup.dd.gz' &
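Since the dd above is backgrounded with &, it gives no feedback; GNU dd (stock on recent Ubuntu) can print a live byte counter via status=progress. A sketch using a harmless throwaway transfer, with the real pipeline shown in a comment:

```shell
# status=progress makes GNU dd print a running byte/throughput counter
# on stderr. For the real backup, add the flag to the pipeline above, e.g.:
#   sudo dd if=path/from/df status=progress | gzip -c --fast | ssh user@ip 'dd of=~/path/to/backup.dd.gz'
dd if=/dev/zero of=/dev/null bs=1M count=8 status=progress
```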

Then to mount the image on the receiver:

$ gunzip -c backup.dd.gz > backup.dd
$ sudo mkdir /mnt/disk_image
$ sudo losetup --partscan --find --show backup.dd
$ sudo mount /dev/loop0 /mnt/disk_image

To unmount after:

$ sudo umount /mnt/disk_image
$ sudo losetup -d /dev/loop0


Backup Ethereum Wallet

This is how I back up my Ethereum root folder (which includes the wallet) from one Ubuntu box to another.

This command is run from the perspective of the receiver-of-the-backup (i.e. where you want the backup to reside).

$ scp -r user@ip:~/.ethereum/ ~/Desktop/backup/

In your backup directory, unhide the folder like so:

$ mv .ethereum ethereum