QNAP serial console setup with Arduino

The Arduino Uno provides easy access to the QNAP serial console.

Yesterday I had to repair a wildly blinking QNAP TS-212 NAS that could not be accessed via the HTTP web UI or SSH. The system also rebooted regularly, and it was not clear why… Googling didn't get me anywhere either, except for wild speculation in all possible directions. Hmm… in a case like this you wish you had the good old serial console, which gives you more insight into the system.

A precautionary note: everyone tinkers at their own risk! You should also have some prior knowledge to carry out the steps described here successfully. So don't whine if you cause a short circuit or something like that and ruin your QNAP! 🙂

After unscrewing the case, a positive surprise: on the mainboard there is indeed a 4-pin header which, according to information found on the net, is the serial console. On my TS-212 model the header is labeled CN9. It is a 3.3V TTL interface and the four pins are TX, VCC, RX and GND.

The only question now is: how do I tap the serial port and use it on my laptop via a USB-serial adapter? The solution: an Arduino Uno board provides everything I need. On one side there is the digital pin header with connections for RX, TX and GND, and on the other side the USB port to connect it directly to the laptop. With a small adapter cable the connection is made easily (see picture).

Pin assignment of the serial interface of a QNAP TS-212
The pins RX, TX and GND are connected to the corresponding digital pins of the Arduino
… finally, connect the USB port to the laptop.

Then I start Minicom on my Linux laptop with the Arduino port /dev/ttyACM0 and the terminal settings VT102, 115200 baud, 8N1. And lo and behold, I immediately get the entire output of the QNAP boot process in my terminal.
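For reference, Minicom can be started with these settings directly from the command line (the device name may differ on your system; 8N1 is Minicom's default):

minicom -D /dev/ttyACM0 -b 115200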

This now gives me much better options for error analysis. Furthermore, I can break into the boot loop by pressing a key shortly after the reboot, when "Marvell UBoot" is displayed. Some system commands are available there that might help.

XEN: How to remove an old-school EXT3 Storage Repository and create a local LVM based SR with RAID 1

With newer XenServer versions (> 5.x) you can no longer use the old EXT3-based local storage repositories to store your virtual disks. Citrix has introduced LVM-based storage repositories instead.

Caution: Make a backup before executing the following steps, otherwise you’ll lose all your data on the disk!

I assume we already have a configured RAID1 mirror with an XSLocalEXT storage repository left over from a XenServer upgrade. If you want to create a new software mirror with two newly installed local disks, you have to create a new md device with mdadm first (that is not part of this documentation).

First, identify the status and devices of the mirror

If the RAID is in “rebuild” status, wait until it’s in “active” status.

[root@xenserver ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sda[1] sdb[0]
      488383488 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sdb[1](S) sda[0](S)
      5928 blocks super external:imsm

unused devices: <none>

Get more details about the Software Mirror

mdadm --detail /dev/md126
mdadm --examine --brief --scan  --config=partitions

Identify and remove old logical volume from RAID1

[root@xenserver ~]# lvscan
  ACTIVE            '/dev/VG_XenStorage-04cd13e0-8237-7754-2d66-4a6a10c5137e/MGT' [4.00 MiB] inherit
  ACTIVE            '/dev/VG_XenStorage-04cd13e0-8237-7754-2d66-4a6a10c5137e/VHD-9ac571bf-5809-4f1e-a767-9fcb0c1d0b88' [19.57 GiB] inherit
  ACTIVE            '/dev/VG_XenStorage-04cd13e0-8237-7754-2d66-4a6a10c5137e/VHD-a5b9ff3b-3bf2-4ea8-a53b-7fc086f4b4ab' [80.16 GiB] inherit
  ACTIVE            '/dev/VG_XenStorage-04cd13e0-8237-7754-2d66-4a6a10c5137e/VHD-186f9092-9d98-410c-8839-024172512a36' [78.31 GiB] inherit
  ACTIVE            '/dev/VG_XenStorage-04cd13e0-8237-7754-2d66-4a6a10c5137e/VHD-6fb6374a-1b83-48c6-bd26-277f19dcf4a8' [34.78 GiB] inherit
  inactive          '/dev/VG_XenStorage-04cd13e0-8237-7754-2d66-4a6a10c5137e/VHD-940e675e-aa13-4aa1-b694-41b6516b7e28' [48.93 GiB] inherit
  inactive          '/dev/VG_XenStorage-04cd13e0-8237-7754-2d66-4a6a10c5137e/VHD-60d3bd98-f81d-412b-86f6-d1d65ae6c7d2' [34.78 GiB] inherit
  ACTIVE            '/dev/VG_XenStorage-04cd13e0-8237-7754-2d66-4a6a10c5137e/VHD-ac10a162-e39e-4e61-a40c-3221413da002' [65.13 GiB] inherit
  inactive          '/dev/XSLocalEXT-6123fd9a-b126-2b74-a476-5a120175a1d9/6123fd9a-b126-2b74-a476-5a120175a1d9' [465.75 GiB] inherit

We want to remove the device /dev/XSLocalEXT …

First deactivate the LV if not already done

[root@xenserver ~]# lvchange -a n /dev/XSLocalEXT-6123fd9a-b126-2b74-a476-5a120175a1d9/6123fd9a-b126-2b74-a476-5a120175a1d9

Now you can remove the LV. You must override the metadata_read_only option, because XenServer sets the LVM metadata to read-only.

lvremove /dev/XSLocalEXT-6123fd9a-b126-2b74-a476-5a120175a1d9/6123fd9a-b126-2b74-a476-5a120175a1d9 --config global{metadata_read_only=0}

Also remove the now empty volume group

[root@xenserver ~]# vgs
  VG                                                 #PV #LV #SN Attr   VSize   VFree
  VG_XenStorage-04cd13e0-8237-7754-2d66-4a6a10c5137e   1   8   0 wz--n- 890.00g 528.33g
  XSLocalEXT-6123fd9a-b126-2b74-a476-5a120175a1d9      1   0   0 wz--n- 465.75g 465.75g

vgremove XSLocalEXT-6123fd9a-b126-2b74-a476-5a120175a1d9 --config global{metadata_read_only=0}

Also check that the physical volume is no longer assigned to a volume group

[root@xenserver ~]# pvscan
  PV /dev/sdd3      VG XSLocalEXT-fa330a8c-3f42-42f5-9935-3e4b40c03be3      lvm2 [457.75 GiB / 0    free]
  PV /dev/sdc3      VG VG_XenStorage-04cd13e0-8237-7754-2d66-4a6a10c5137e   lvm2 [890.00 GiB / 528.33 GiB free]
  PV /dev/md126p1                                                           lvm2 [465.76 GiB]
  Total: 3 [1.77 TiB] / in use: 2 [1.32 TiB] / in no VG: 1 [465.76 GiB]

[root@xenserver ~]# pvs
  PV           VG                                                 Fmt  Attr PSize   PFree
  /dev/md126p1                                                    lvm2 ---  465.76g 465.76g
  /dev/sdc3    VG_XenStorage-04cd13e0-8237-7754-2d66-4a6a10c5137e lvm2 a--  890.00g 528.33g
  /dev/sdd3    XSLocalEXT-fa330a8c-3f42-42f5-9935-3e4b40c03be3    lvm2 a--  457.75g 0

Reusing the local RAID1 by creating a new local storage repository with type=lvm

First, identify the MD partition by its UUID

ll /dev/disk/by-id/

lrwxrwxrwx 1 root root 13 Dec 29 09:26 md-uuid-27052c74:af76a037:e50f003e:65d1ffb8-part1 -> ../../md126p1

Then create the new LVM based storage repository

xe sr-create type=lvm content-type=user device-config:device=/dev/disk/by-id/md-uuid-27052c74:af76a037:e50f003e:65d1ffb8-part1 name-label="Local Disk RAID1"

Check that the SR was created successfully

[root@xenserver ~]# lsblk
NAME                                                                                                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                                                                      8:0    0 465.8G  0 disk
└─md126                                                                                                  9:126  0 465.8G  0 raid1
  └─md126p1                                                                                            259:0    0 465.8G  0 md
    └─VG_XenStorage--d5e61586--e747--dac1--f971--29281873a18c-MGT                                      253:2    0     4M  0 lvm
sdb                                                                                                      8:16   0 465.8G  0 disk
└─md126                                                                                                  9:126  0 465.8G  0 raid1
  └─md126p1                                                                                            259:0    0 465.8G  0 md
    └─VG_XenStorage--d5e61586--e747--dac1--f971--29281873a18c-MGT                                      253:2    0     4M  0 lvm
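You can also verify it with the xe CLI, filtering on the name-label chosen above:

xe sr-list name-label="Local Disk RAID1"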

You can now use the newly created SR in XenCenter to store your VMs.

Docker Swarm Cluster on Raspberry Pi with Raspbian Stretch

Docker is on everyone's lips, and now you can also use Docker on the Raspberry Pi. But one node is not enough, so we'll install a Docker Swarm cluster. If you'd like to know more about Docker and Swarm, see the homepage of the Docker Community.

First, install a fresh Raspbian Stretch image as explained here:

Raspberry Pi Headless Installation with SSH enabled and WLAN

Update your packages to the newest versions and install some dependencies.

sudo apt-get update && sudo apt-get upgrade
sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common

Docker now also supports Debian Stretch (see the Docker CE Installation Guide),
so you can install Docker easily with a helper script. Download the script and execute it:

curl -fsSL get.docker.com -o get-docker.sh
sudo sh get-docker.sh

After a successful setup, you can add the user 'pi' to the 'docker' group so it can execute docker commands directly:

sudo usermod -aG docker pi

After a reboot (or logging out and back in), you can execute docker commands with your 'pi' login:

docker run hello-world

Initializing a Swarm Cluster

Docker Swarm Documentation

To initialize the Docker Swarm cluster, execute the following command on the first Docker instance:

docker swarm init
Swarm initialized: current node (7nkdamk2m1tdhjzf45gfk5o91) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-4ynwgkfmjkj937j7kxlyll7ncucvo8jawijv8lybxourfm2d6n-dcol6t8ulsdl7axyjg2q0moqp \
    192.168.2.58:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

If you get this error…

docker swarm init
Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on interface wlan0 (fdde:530f:b4e8::f8a and fdde:530f:b4e8:0:ba66:db42:43a1:46eb) - specify one with --advertise-addr

… you can pass one of the node's addresses to --advertise-addr (sketched below), or you can instead disable IPv6 completely in /etc/sysctl.conf.
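A sketch of the first option, using the IPv4 address of the node from the example above:

docker swarm init --advertise-addr 192.168.2.58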

/etc/sysctl.conf

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
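Afterwards, apply the changed settings without a reboot:

sudo sysctl -p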

After successfully initializing the cluster, you can get a list of cluster nodes with this command:

docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
7nkdamk2m1tdhjzf45gfk5o91 *   dockerpi1           Ready               Active              Leader

To add a second instance as a worker node, execute the docker swarm join command exactly as shown in the output of docker swarm init:

docker swarm join \
>     --token SWMTKN-1-4ynwgkfmjkj937j7kxlyll7ncucvo8jawijv8lybxourfm2d6n-dcol6t8ulsdl7axyjg2q0moqp \
>     192.168.2.58:2377
This node joined a swarm as a worker.
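To confirm that the swarm really schedules containers on both nodes, you can start a small test service. This is only a sketch: the service name pingtest is arbitrary, and the alpine image must be available for the Pi's ARM architecture.

docker service create --replicas 2 --name pingtest alpine ping docker.com
docker service ls
docker service ps pingtest
docker service rm pingtest

With two nodes and two replicas, docker service ps pingtest should show one task running on each node.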

Docker Compose Installation

Docker Compose Documentation. To install Docker Compose, use these commands:

sudo apt-get install python-pip
sudo pip install docker-compose
docker-compose version
docker-compose version 1.16.1, build 6d1ac219
docker-py version: 2.5.1
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.1.0f  25 May 2017

Basic setup of a Multi Node Apache Kafka/Zookeeper Cluster

Prerequisites

Install three nodes with CentOS 7, each with at least 20 GB disk, 2 GB RAM and two CPU cores.

Install JDK

yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel net-tools

Set JAVA_HOME in ~/.bashrc

# Set Java-Home
export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-5.b12.el7_4.x86_64/jre"
export PATH=$JAVA_HOME/bin:$PATH
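Reload the profile and check that Java is found:

source ~/.bashrc
echo $JAVA_HOME
java -version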

Disable SELinux, Firewall and IPv6

systemctl disable firewalld 
systemctl stop firewalld 
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf 

[root@kafka3 ~]# cat /etc/selinux/config | grep "^SELINUX="
SELINUX=permissive
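To switch SELinux to permissive mode immediately, without waiting for the reboot, you can additionally run:

setenforce 0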

Reboot Server

Installing Kafka

Download Kafka and unpack it under /opt

https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz

tar zxvf kafka_2.11-0.11.0.2.tgz -C /opt

Starting Zookeeper

On each node create a zookeeper directory and a file ‘myid’ with a unique number:

mkdir /zookeeper
echo '1' > /zookeeper/myid
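The number has to match the server.N entries in zookeeper.properties below, so the other two nodes get their own IDs:

echo '2' > /zookeeper/myid    # on kafka2
echo '3' > /zookeeper/myid    # on kafka3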

On all three servers go to the Kafka home folder /opt/kafka_2.11-0.11.0.2 and set up ZooKeeper like this (the clientPortAddress must be the local node's own IP on each server):

vi config/zookeeper.properties

# the directory where the snapshot is stored.
dataDir=/zookeeper
# the port at which the clients will connect
clientPort=2181
clientPortAddress=192.168.2.56
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0

# The number of milliseconds of each tick
tickTime=2000

# The number of ticks that the initial synchronization phase can take
initLimit=10

# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5

# zoo servers
server.1=kafka1.fritz.box:2888:3888
server.2=kafka2.fritz.box:2888:3888
server.3=kafka3.fritz.box:2888:3888
#add here more servers if you want

Start Zookeeper on all three servers

./bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
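If nc is installed, you can check each node's role (leader or follower) with ZooKeeper's four-letter srvr command, for example:

echo srvr | nc kafka1.fritz.box 2181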

Change the Kafka server.properties on all three servers (set a unique broker id on each server)

vi config/server.properties

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9093

# A comma seperated list of directories under which to store log files
log.dirs=/tmp/kafka-logs-2

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=kafka1.fritz.box:2181,kafka2.fritz.box:2181,kafka3.fritz.box:2181

Start Kafka on all three nodes:

./bin/kafka-server-start.sh -daemon config/server.properties

Verify that Kafka and ZooKeeper are running:

jps
4150 Jps
2365 QuorumPeerMain
1743 Kafka

Also verify that all brokers are registered with ZooKeeper:

# ./bin/zookeeper-shell.sh kafka1:2181 ls /brokers/ids
Connecting to kafka1:2181

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[1, 2, 3]

Create an example topic with three partitions and a replication factor of 3

# ./bin/kafka-topics.sh --create --zookeeper kafka1:2181 --topic example-topic --partitions 3 --replication-factor 3
Created topic "example-topic".

# ./bin/kafka-topics.sh --list --zookeeper kafka1:2181 --topic example-topic
example-topic

# ./bin/kafka-topics.sh --describe --zookeeper kafka1:2181 --topic example-topic
Topic:example-topic    PartitionCount:3    ReplicationFactor:3 Configs:
    Topic: example-topic    Partition: 0    Leader: 2   Replicas: 2,3,1 Isr: 2,3,1
    Topic: example-topic    Partition: 1    Leader: 3   Replicas: 3,1,2 Isr: 3,2,1
    Topic: example-topic    Partition: 2    Leader: 1   Replicas: 1,2,3 Isr: 1,2,3

Test the Topic

Start a producer on one node:

# ./bin/kafka-console-producer.sh --broker-list kafka1:9093,kafka2:9093,kafka3:9093 --topic example-topic

Also start a consumer on a different node:

# ./bin/kafka-console-consumer.sh --zookeeper kafka1:2181 --topic example-topic --from-beginning

Write some text in the producer console. You should then see the text appear in the consumer console.

Stop one node and write some more messages in the producer console to verify that high availability is working.
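While the node is down, the describe command from above should also show the stopped broker dropping out of the Isr (in-sync replicas) list for its partitions:

./bin/kafka-topics.sh --describe --zookeeper kafka1:2181 --topic example-topic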