Getting TLS certs with Let's Encrypt (certbot) for a Debian 9 (Stretch) server running nginx

I needed to get TLS certificates with Let's Encrypt for a Debian 9 (Stretch) server running the nginx web server.

It's super easy to get TLS certificates with certbot. Please note that there are several ways to do an ACME verification. Using Apache looks similar to using nginx. There's also a standalone server built into certbot should you have no HTTP server running (see the example at the end of this section).

# install required packages
sudo apt install certbot python-certbot-nginx

# get certificates
# use the FQDN (fully qualified domain name) of the machine you're running
# this on instead of example.com. Also supply an email address for
# notifications from Let's Encrypt instead of hostmaster@example.com
sudo certbot certonly --agree-tos --nginx -d example.com -m hostmaster@example.com

# add cronjob for renewing certs
sudo bash -c '(crontab -l; echo "@daily certbot renew --quiet") | crontab -'

That's it. Go ahead and take a look at your certificates.

sudo ls -l /etc/letsencrypt/live
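
In case you have no HTTP server running at all, the standalone authenticator mentioned above spins up its own temporary web server to answer the ACME challenge. A minimal sketch (port 80 must be free for this to work):

sudo certbot certonly --standalone --agree-tos -d example.com -m hostmaster@example.com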

recover deleted files on ext4

I just happened to delete two files by accident, so here's a quick and dirty way to recover them.

Please note: deleted files may be overwritten at any time by your OS. So typically you have to unmount the disk immediately (or pull the plug if it's on '/' and boot a forensics distro) to minimize the risk of data loss.
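
If a full unmount isn't practical, remounting the filesystem read-only at least stops new writes. A minimal sketch, assuming the files lived on a separate /home partition:

# stop further writes to the affected filesystem
sudo mount -o remount,ro /home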

However, I felt lucky and just did this:

sudo apt-get install extundelete
# note: the file path is given relative to the root of the filesystem on /dev/sda1
sudo extundelete --restore-file home/user/theFile.txt /dev/sda1

One file could be restored, the other one couldn't. I guess I was lucky.

KVM nesting on Debian 8

Turns out running KVM inside KVM performs acceptably. Here's what I had to do to give it a try.

check if nesting is enabled

cat /sys/module/kvm_intel/parameters/nested

should print: Y

if not, you'll need to enable it (requires a reboot to become effective)

sudo bash -c "echo 'options kvm_intel nested=1' >> /etc/modprobe.d/qemu-system-x86.conf"
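
If you can't reboot right away, reloading the kvm_intel module should also pick up the new option (a sketch, assuming no VMs are currently running):

sudo modprobe -r kvm_intel
sudo modprobe kvm_intel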

next, check that the kernel was built with KVM support

egrep 'KVM_INTEL|KVM_AMD' /boot/config-3.16.0-4-amd64

should return:

CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m

check if /dev/kvm exists, if not run:

sudo modprobe kvm_intel

That's it, your host system is now configured to do KVM nesting. Make sure to configure your libvirt/KVM guest VMs to allow using the svm/vmx instruction sets.
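
One way to do that is to pass the host CPU (including vmx/svm) straight through to the guest. A sketch, assuming a guest named my_vm:

# open the guest definition in an editor
sudo virsh edit my_vm
# then set the CPU element to pass the host CPU through:
# <cpu mode='host-passthrough'/>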

Increase size of a libvirt kvm image

I have this huge VM and it tends to grow. Luckily, I figured out how to increase the size of the VM image. (Following this guide, the old KVM image will stay intact without any modifications.)

On the host system:

# install required tools
sudo apt-get install libguestfs-tools

# shut down the running VM
virsh shutdown my_vm

# move old image
sudo mv /var/lib/libvirt/images/my_vm.img /var/lib/libvirt/images/my_vm.old.img

# create new empty file for our new kvm image
sudo truncate -s 128G /var/lib/libvirt/images/my_vm.img
# or if you want qcow2:
# sudo qemu-img create -f qcow2 /var/lib/libvirt/images/my_vm.qcow2 128G

# (optional) you can list the partitions of an existing KVM image like this
sudo virt-filesystems --long --parts --blkdevs -h -a /var/lib/libvirt/images/my_vm.old.img
# in case of lvm you can list lvm partitions like this:
# virt-filesystems --logical-volumes --long -a /var/lib/libvirt/images/my_vm.old.img

# copy the old image and expand the new image to all available space in the designated new image file
sudo virt-resize --expand /dev/sda1 /var/lib/libvirt/images/my_vm.old.img /var/lib/libvirt/images/my_vm.img

# if you are using LVM you might need to run the command like this:
# sudo virt-resize --expand /dev/sda2 --lv-expand /dev/vg_guest/lv_root olddisk newdisk

Once this is complete you might want to edit the qemu VM definition (stored in /etc/libvirt/qemu/my_vm.xml) so it points at the resized image. Alternatively, you may simply move the new, resized image to the path of the old one.
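
Note that libvirt tends to overwrite direct edits to that XML file, so going through virsh is the safer route. A sketch, assuming you went with the qcow2 variant from above:

# opens the domain XML in $EDITOR and redefines it on save
sudo virsh edit my_vm
# then point the disk source at the new image, e.g.:
# <source file='/var/lib/libvirt/images/my_vm.qcow2'/>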

Now you can start the VM again:

virsh start my_vm

Next, you'll need to connect to your VM and resize the filesystem to fill the newly created empty (virtual) disk space:

sudo resize2fs /dev/sda1

freenode irc over tor (using hexchat)

I thought IRC was stable technology and easy to use in a privacy-friendly way. Turns out this assumption was wrong. Here's what I needed to do to get set up for chatting on freenode over a Tor-secured connection:

  • install hexchat

    sudo apt-get install hexchat

  • start hexchat

  • enter nick-names
  • select freenode from the list
  • click edit
  • select: servers: irc.freenode.net
  • select: connect to selected server only
  • select: use ssl for the servers on this network
  • select: login method: sasl external (cert)
  • click close
  • click connect

  • register a freenode account:

    /msg nickserv register your_password your_email_address
    
  • Wait for an email containing an irc command to verify your account. Copy and paste that command into hexchat

  • restart hexchat, log in with auth method: username+password

  • create sasl cert and display fingerprint

    mkdir -p ~/.config/hexchat/certs
    openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout ~/.config/hexchat/certs/client.pem -out ~/.config/hexchat/certs/client.pem
    chmod 600 ~/.config/hexchat/certs/client.pem
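    # display the certificate fingerprint (this is what NickServ will record)
    openssl x509 -in ~/.config/hexchat/certs/client.pem -noout -fingerprint -sha1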
    
  • add fingerprint to freenode account

    /msg nickserv CERT ADD
    
  • open: settings > preferences > network setup and enter tor proxy
    (defaults to host: 127.0.0.1 port: 9050 type: socks5)

  • restart hexchat and edit freenode network settings

  • change login method to sasl external
  • use the add button to add the following domain name as server: freenodeok2gncmy.onion

  • click: close

  • click: connect

Took me quite a while to figure out what I actually needed to do to get this working. Frankly, it feels like a waste when configuring a chat client takes a couple of hours. At least it works now, so see you on freenode eventually.

raid 1 using mdadm and lvm setup on debian

I'm assuming the following disks are going to be part of the new raid 1:

  • /dev/sdb
  • /dev/sdc

  1. Create software raid 1:

    sudo mdadm --zero-superblock /dev/sdb /dev/sdc
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    sudo bash -c "mdadm --detail --scan /dev/md0 >> /etc/mdadm/mdadm.conf"
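    # (optional) the initial raid sync runs in the background; watch its progress:
    cat /proc/mdstat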
    
  2. set up lvm:

    # create lvm physical volume
    sudo pvcreate /dev/md0
    
    # create lvm volume group
    sudo vgcreate vgSix /dev/md0
    
    # create lvm logical volume
    sudo lvcreate -n six --size 5T vgSix
    
    # make fs
    sudo mkfs -t ext4 /dev/vgSix/six
    
    # label
    sudo e2label /dev/vgSix/six six
    

good netstat options

netstat shows you which ports are open on your Linux machine. Most of the time I simply want to see which ports are open and which process opened them, and this is how I do it:

# install netstat if not installed already
sudo apt install net-tools

# run netstat as root so it can display more info about processes
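# (-t tcp, -u udp, -l listening sockets, -p show process, -n numeric addresses)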
sudo netstat -tulpn

I heard there's a more modern replacement for this, but my days only have so many hours.
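
For the record, the replacement is ss from the iproute2 package, and it accepts the same flags for this use case:

sudo ss -tulpn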

host docker container via systemd on debian 8

I want to run a custom docker image as a systemd service. This assumes I'm starting out on plain Debian 8:

# add backports repository
sudo bash -c 'echo -e "\\n\\n#backports\\ndeb http://ftp.debian.org/debian jessie-backports main" >> /etc/apt/sources.list'

# install docker
sudo apt-get update
sudo apt-get install -t jessie-backports docker.io

# optional: add your user to the docker group so you don't
# have to use sudo all the time for calling docker
# (you will need to log in again for this to take effect,
# or simply start a new shell)
#sudo adduser $USER docker

# tell systemd to start docker on boot
sudo systemctl enable docker


# create docker container
# (just a simple test container, you might create your own...)
sudo docker run -d -p 80:80 --name example_webserver nginx

## create a systemd unit for a docker container
## (repeat this step for every container you need)

cat << EOF | sudo tee /etc/systemd/system/docker-example_webserver.service > /dev/null

[Unit]
Description=Test Web Server
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a example_webserver
ExecStop=/usr/bin/docker stop -t 2 example_webserver

[Install]
WantedBy=default.target

EOF

# reload systemd because we added a new unit
sudo systemctl daemon-reload

# start docker container
sudo systemctl start docker-example_webserver.service

# tell systemd to start docker container on boot
sudo systemctl enable docker-example_webserver.service
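
To check that everything is wired up, look at the unit status and fetch the test page (assuming curl is installed):

sudo systemctl status docker-example_webserver.service
curl -I http://localhost/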

simple multi user git server setup

This is a very simple yet effective setup for a multi-user git server. I'm using file-system permissions to manage users, and ssh for remote access.

  1. get an ssh server up and running.
  2. create a new user for each git repository (e.g. sudo adduser git-my-project)
  3. init git repo

    sudo su git-my-project
    cd
    git init --bare --shared=group my-project.git
    
  4. add each dev user to the corresponding group (note: new group membership takes effect at the next login)

    sudo adduser devuser1 git-my-project
    sudo adduser devuser2 git-my-project
    
  5. clone repo

    git clone 'ext::ssh -i .../.ssh/id_rsa devuser1@repo.buzzmark.com %S /home/git-my-project/my-project.git'
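
If your ssh key is in the default location, a plain ssh clone should work as well (using the same example host as above):

    git clone ssh://devuser1@repo.buzzmark.com/home/git-my-project/my-project.git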