# Getting Started with Z-Wave

Having recently acquired a couple of Aeotec MultiSensor 6 devices, I decided to investigate how to make them easy to use via MQTT so I could use them from BoilerIO (my IoT heating control) and EmonCMS (the energy/home monitoring system from OpenEnergyMonitor.org).

In this article I present the beginnings of a Z-Wave to MQTT bridge as well as discuss the basics of Z-Wave to help you get started with it.

If you want to get straight to the code, jump to the section on the MQTT bridge, check out the code on Github, or run pip install zwave-mqtt-bridge.

# Z-Wave Concepts

A number of nodes (up to 232) can form a Z-Wave network.  The network can have one or more controllers, with at least one primary controller.  The primary controller has a Home ID that identifies the network, and assigns Node IDs to nodes as they join (are “included” in) the network.

Nodes have a product type that defines their behaviour.  The Z-Wave specification lists a number of command classes that are required and optional for particular product types; these command classes specify a set of commands (with parameters) that a device must implement.  For example, the “multilevel sensor” command class, number 0x31, defines a multilevel sensor report command, which is used by a device to advertise sensor readings.  It contains fields to indicate the scale, unit, and size of the data followed by the actual data itself.
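To make this concrete, here is a hedged sketch of decoding such a report.  The bit layout (precision, scale, and size packed into one byte after the sensor type) is my reading of the public command-class documentation rather than anything from this article’s code, so check it against the specification before relying on it:

```python
def decode_multilevel_report(payload):
    """Decode the bytes following a Multilevel Sensor Report command
    (command class 0x31, command 0x05)."""
    sensor_type = payload[0]
    # One byte packs precision (3 bits), scale (2 bits) and size (3 bits).
    precision = (payload[1] >> 5) & 0x07
    scale = (payload[1] >> 3) & 0x03
    size = payload[1] & 0x07
    raw = int.from_bytes(payload[2:2 + size], "big", signed=True)
    return sensor_type, scale, raw / (10 ** precision)

# e.g. sensor type 0x01 (air temperature), precision 2, scale 0, two data
# bytes holding 1929, giving a reading of 19.29:
# decode_multilevel_report(bytes([0x01, 0x42, 0x07, 0x89])) -> (1, 0, 19.29)
```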

Communication is asynchronous, and applications are generally built in an event-driven way as a result.  For example, sending a “Get” command to retrieve a sensor reading value may later result in a report command being sent back.  However, devices may be battery operated and not awake at the time readings or configuration changes are requested, so the application cannot simply wait for responses before continuing; the controller should deal with retransmission at the correct time for you.

“Association groups” allow nodes to communicate and send events to each other.  This is how, for example, a light switch and an associated bulb might be configured so that when the switch is pressed the bulb is notified and toggles its state.  Devices are required to have at least one association group, the “lifeline” group, which includes the primary controller and is established when the device is included into the network.  The Z-Wave standard also imposes requirements on the commands sent to this association group, e.g. multilevel sensors are required to periodically send a report to the lifeline association group with their readings.

# Hardware

The Aeon Labs Z-Stick and the Aeon Labs MultiSensor 6 are the two main devices used throughout this article.

The MultiSensor 6 provides temperature and various other sensor values over the Z-Wave network, and is powered either from USB or from up to two CR123 batteries.  Interesting features for future BoilerIO work are the motion and luminance sensors, which could be used to indicate presence.  Lower-precision temperature reporting, relatively infrequent reporting when on battery, and proprietary firmware and hardware designs are downsides compared with the EmonTH I’ve been using so far.

The Z-Stick is a static controller that exposes a serial API (the “Zensys API”) for which there seems to be little public documentation (and the reply from the Z-Wave Alliance to this post suggests that there is no intention to make this information public).  The OpenZWave project has reverse-engineered this protocol and has a list of supported controllers (of which the Aeotec Z-Stick is one).  If you are going to do a similar project then it’s worth checking that the controller you plan to buy is on that list.

# OpenZWave and python-openzwave

The OpenZWave project provides a way for developers to write applications that can interact with a Z-Wave network using a PC-based controller such as the Z-Stick mentioned above.  OpenZWave is written in C++, so we are using python-openzwave, a set of Python bindings to OpenZWave.  OpenZWave is used in some significant home automation projects, such as openHAB and Domoticz, to interface with Z-Wave devices.

## Installing python-openzwave on the Raspberry Pi

Grab a copy of the python-openzwave repository from github:

$ git clone https://github.com/OpenZWave/python-openzwave

This checks out the master branch, which is ‘unsupported’ and in development, so you may want to switch to a release branch such as v0.3.3.  Install the dependencies, then build and install it like so:

$ sudo apt-get update && sudo apt-get install libudev-dev
$ cd python-openzwave
$ git checkout v0.3.3
$ git submodule update --init
$ make build
$ sudo make install

This can take quite some time on a Raspberry Pi since it involves downloading and building OpenZWave.

The Aeon Z-Stick has a built-in battery and can operate either as a standalone controller with a serial interface when it is plugged into your USB port, or in inclusion/exclusion mode when not plugged in.  Assuming the device is plugged in, you should get a serial port that you can use to communicate with it; on my system this is /dev/ttyACM0, but the number at the end may differ.

Python-openzwave comes with a utility called the OpenZWave shell (ozwsh), a somewhat quirky ncurses-style command-line utility that lets you perform actions on the network.  When installing python-openzwave using the method above, the XML configuration files that describe the devices might not be found by ozwsh, so you can pass their location on the command line.  To start the shell, run:

$ ozwsh -d /dev/ttyACM0 -c /usr/local/lib/python2.7/dist-packages/python_openzwave/ozw_config/

Some quirks to be aware of:

• You can use TAB to move between panes and thereby get access to scrolling on the main window.
• You can’t cd to paths, only to “directories” in the current level.
• The hierarchical layout that the tool presents makes sense but is an abstraction and doesn’t really represent how the Z-Wave network actually works.
• The utility was renamed to py_ozwsh in version 0.4 and later of python-openzwave.

# Z-Wave Bridge to MQTT (and EmonCMS)

I wrote a simple Z-Wave to MQTT bridge and tested it with the MultiSensor 6 (support for other devices to come).  The goal is to provide a simple and universal service that my (and others’) home automation can interact with.  Here are some steps to help you get started with it:

1. Install python-openzwave as above.  Note that this is not available on PyPI so isn’t automatically installed via the pip command below.
2. Install the Z-Wave MQTT bridge: you can install this via pip or from the repository on github; for example pip install zwave-mqtt-bridge.  (At the time of writing this depends on the boilerio package just for configuration file parsing but that should go away soon.)
3. Create a configuration file, /etc/sensors/config, to specify your MQTT server and the base topic path to publish to.  See the README.md file for an example of a skeleton configuration file.
4. Run the binary: $ zwave_mqtt_bridge

Once you’ve done this, you can use the mosquitto_sub tool to see the messages that are being published, for example:

$ mosquitto_sub -h (host) -u (username) -P (password) \
    -v -t emon_sensors/\#
emon_sensors/ZWaveNode6 {"temperature": 19.299999237060547}
emon_sensors/ZWaveNode6 {"humidity": 52.0}
emon_sensors/ZWaveNode6 {"luminance": 11.0}
emon_sensors/ZWaveNode6 {"ultraviolet": 0.0}
...


You can follow the instructions in the README to have the ZWave bridge start from systemd and run as an unprivileged user.

To get this to work with EmonCMS, you need to post to EmonCMS when messages are published to the configured MQTT topics.  I wrote a simple Python script that you can find in my emonhub fork on github to do this, or you can use the PHP script provided with EmonCMS (though I had issues with it).

# How it works: Using Z-Wave from a Python application

The examples that come with python-openzwave show how to initialise your connection to the Z-Wave network.  They use a ZWaveOption object for configuration that exposes functionality from the C++ OpenZWave library: configuration is taken from a number of places, including system-wide and per-user options files and the command line.  The application can tell OpenZWave the command line it was called with, and OpenZWave will then handle common parsing of Z-Wave-related options.  The application can also set options directly through the object, which is the approach I would suggest:

import time

from openzwave.option import ZWaveOption
from openzwave.network import ZWaveNetwork

# Initialise openzwave.  args.device is the controller's serial port from
# your argument parsing, e.g. /dev/ttyACM0.
zw_options = ZWaveOption(args.device, user_path=".")
zw_options.set_console_output(False)
zw_options.lock()
network = ZWaveNetwork(zw_options)
network.start()

time.sleep(1)


## Getting sensor readings from the MultiSensor

There are three ways that sensor readings can be obtained from the MultiSensor: via a GET command that specifically requests the value of a sensor, via the reports that the device automatically generates at specified intervals, and similarly via reports generated when sensors cross thresholds.  The latter two options are the best since we don’t want to constantly be requesting reports from the device.

### Receiving sensor reports

Python-openzwave uses the louie framework for signalling events within Python.  I couldn’t find huge amounts of documentation about this at the time of writing, but it is based on PyDispatch and, for the purposes of receiving events, is pretty simple to use.  There is a connect method in the louie.dispatcher module that allows you to connect a Python callable up to a signal indicated by a hashable Python object.  By subscribing to the SIGNAL_VALUE signal, your application will be notified when sensor reports are received.

Code similar to the following would connect the listener and print a message with information about what was received whenever an event occurred:

import signal
from datetime import datetime

from louie import dispatcher
from openzwave.network import ZWaveNetwork

# ... network setup code here ...

exit = False
def sigint_handler(signum, frame):
    global exit
    exit = True
signal.signal(signal.SIGINT, sigint_handler)

# Connect to events
def value_updated(network, node, value):
    now = datetime.now().isoformat()
    print "%s: Value updated node_id: <%d>, label <%s> new value <%s> instance %d" % (
        now, node.node_id, value.label, str(value.data), value.instance)
dispatcher.connect(value_updated, ZWaveNetwork.SIGNAL_VALUE)

# Loop until we're told to exit:
print "Running: Ctrl+C to exit."
while not exit:
    signal.pause()
dispatcher.disconnect(value_updated, ZWaveNetwork.SIGNAL_VALUE)

One thing to watch out for here is that exceptions raised inside louie signal handlers are swallowed, so if one occurs it appears as though your function simply didn’t complete.  As such, it’s worth wrapping the body of your handler in try/except.
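A small helper along these lines can be reused for every handler (a sketch; note that louie holds weak references to receivers, so keep a named reference to the wrapped function rather than passing it inline to connect):

```python
import functools
import traceback

def log_exceptions(handler):
    """Wrap an event handler so exceptions are printed instead of being
    silently swallowed by the dispatcher."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        try:
            return handler(*args, **kwargs)
        except Exception:
            traceback.print_exc()
    return wrapper

# Keep a reference so louie's weak reference doesn't expire immediately:
#   safe_value_updated = log_exceptions(value_updated)
#   dispatcher.connect(safe_value_updated, ZWaveNetwork.SIGNAL_VALUE)
```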

### Values

Python-openzwave stores values (as instances of ZWaveValue) with devices; these are created depending on the command classes and specific configuration information for each device type.  They are initialised with default values and when reading them directly they may be out of date or wrong.

This excerpt from the zw100.xml file for the MultiSensor 6 shows a value, the Group 1 reporting interval, being defined with a default of 3600:

<Value type="int" index="111" genre="config"
       label="Group 1 Interval" units="seconds"
       min="1" max="2678400" value="3600">
  ...
</Value>

The value="3600" attribute causes the ZWaveValue instance to be initialised to 3600, so if you need the actual value being used by the device, you first need to call its refresh method.  Your application will then receive a SIGNAL_VALUE notification with the data.

You can also check the is_set property of the ZWaveValue instance to determine whether the value was actually set as a result of a report from the device or it is just the default specified in the config file.

To set the value, simply set its data property to the desired value.
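Putting these pieces together, a helper along these lines captures the pattern (a sketch; the refresh, is_set, and data attributes are as described above, but refresh is asynchronous, so the returned copy may still be stale until the SIGNAL_VALUE notification arrives):

```python
def read_value(value):
    """Return the current data for a ZWaveValue, requesting a refresh if
    the stored copy is only the config-file default.

    Note: refresh() only triggers a Get; the fresh reading arrives later
    via a SIGNAL_VALUE notification, so the value returned here may still
    be the stale default."""
    if not value.is_set:
        value.refresh()
    return value.data
```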

### Configuring report interval and type

As part of the configuration command class, the contents and frequency of up to three sets of automatically generated reports can be configured.  These are documented in the firmware manual.  The relevant configuration parameters are 101-103 for the desired contents of the report (as a bitfield whose values are defined in the firmware spec) and 111-113 for the interval in seconds for group 1 through 3 respectively.  When a reporting group’s interval occurs, the MultiSensor sends a report for each parameter requested in the corresponding configuration value.

Note that, when on battery, the MultiSensor cannot send reports at an interval shorter than the wake-up interval, and the minimum wake-up interval is 240 seconds.  (This is possibly a problem for use with BoilerIO, but when connected to USB this limitation does not exist.)

As part of our application we want to configure the wake-up and reporting interval appropriately: here is some example code to do that:

# Find wake-up and reporting interval and set to desired value:
for value in network.nodes[node_id].get_values().values():
    if value.label == "Wake-up Interval":
        value.data = 240
    if value.label == "Group 1 Interval":
        value.data = 240
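The report contents can be configured similarly.  The sketch below uses python-openzwave’s set_config_param helper, and the bit values are my reading of the firmware manual (battery=1, ultraviolet=16, temperature=32, humidity=64, luminance=128), so verify them against the manual for your firmware version:

```python
# Bitfield values for configuration parameter 101 (group 1 report
# contents), as I read them from the MultiSensor 6 firmware manual.
BATTERY, ULTRAVIOLET, TEMPERATURE, HUMIDITY, LUMINANCE = 1, 16, 32, 64, 128
GROUP1_CONTENTS = TEMPERATURE | HUMIDITY | LUMINANCE | ULTRAVIOLET

def configure_group1(node):
    # set_config_param is provided by python-openzwave's ZWaveNode;
    # parameters 101-103 are 4-byte bitfields.
    node.set_config_param(101, GROUP1_CONTENTS, size=4)
```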

# Integration issues

## Battery operation with the MultiSensor

When testing the MultiSensor on battery, I noticed that configuration updates weren’t being applied when the device woke up.  Looking at the log files, I saw sections like this:

2017-11-11 18:52:52.245 Info, Node006, Received reply to FUNC_ID_ZW_GET_NODE_PROTOCOL_INFO
2017-11-11 18:52:52.245 Info, Node006,   Protocol Info for Node 6:
2017-11-11 18:52:52.245 Info, Node006,     Listening     = true
2017-11-11 18:52:52.245 Info, Node006,     Beaming       = true
2017-11-11 18:52:52.245 Info, Node006,     Routing       = true
2017-11-11 18:52:52.245 Info, Node006,     Max Baud Rate = 40000
2017-11-11 18:52:52.245 Info, Node006,     Version       = 4
2017-11-11 18:52:52.245 Info, Node006,     Security      = false

This indicated that the controller thought the device was a listening device, which is not the case when it is battery-operated, resulting in some features not working correctly.  I suspected that this configuration came from the controller because this set of messages appeared quickly on initialisation, which I confirmed by connecting to the network with the MultiSensor powered down.

To address the issue, I excluded and then re-included the device while it was on batteries, after which it worked correctly.  These are the new, correct log messages:

2017-11-11 18:52:52.267 Info, Node007, Received reply to FUNC_ID_ZW_GET_NODE_PROTOCOL_INFO
2017-11-11 18:52:52.267 Info, Node007,   Protocol Info for Node 7:
2017-11-11 18:52:52.267 Info, Node007,     Listening     = false
2017-11-11 18:52:52.267 Info, Node007,     Frequent      = false
2017-11-11 18:52:52.267 Info, Node007,     Beaming       = true
2017-11-11 18:52:52.267 Info, Node007,     Routing       = true
2017-11-11 18:52:52.267 Info, Node007,     Max Baud Rate = 40000
2017-11-11 18:52:52.267 Info, Node007,     Version       = 4
2017-11-11 18:52:52.267 Info, Node007,     Security      = false

## Device name changes

The Aeon Z-Stick is designed to work as a static controller when plugged into a host PC, but in order to include and exclude devices it is necessary to unplug it from the PC, move it near the device being included/excluded, and press the button.  However, unplugging it stops the Z-Wave network connection from working, and there is nothing in place to restart it when the device is reconnected.  Further, the device is not guaranteed to reappear with the same device name it originally had, and certainly seems not to if file handles to the original device are kept open.

The Z-Stick appears as a modem device, but other devices may also appear as modem devices (ttyACM*).  A particular Z-Stick can be identified by its Home ID, but to query this you must first determine that a given device is in fact a Z-Stick and connect to it.

I don’t think there are any great solutions to this: some options are:

• Watch for the device node disappearing and disconnect from the network if it does.  This closes any open file handles and, if the device was the only ACM device on the system, makes it more likely to reappear as the same device (/dev/ttyACM0).  Not ideal, because replugging the device might unintentionally stop the application from working without the user knowing.
• Connect to any attached ACM device if the one we were using was disconnected, or connect to all attached Z-Sticks.  Better, but requires indirection of ZWaveNetwork objects, because the device name is configured in the ZWaveOption object that is created, locked, and passed to the network object at initialisation time.

The zwave-mqtt-bridge currently implements a solution using the watchdog Python module, which gives you callbacks when directory contents change (i.e. files are created or deleted).  On Linux, this uses the inotify API provided by Linux 2.6 and later.
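As a sketch of how that can look (the class name and filtering logic here are illustrative, not taken from the bridge’s actual code; Observer and FileSystemEventHandler are watchdog’s real API):

```python
try:
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler
except ImportError:  # allow the filtering logic to run without watchdog
    Observer = None
    class FileSystemEventHandler(object):
        pass

class DeviceNodeWatcher(FileSystemEventHandler):
    """Record serial modem nodes (ttyACM*) appearing or disappearing."""
    def __init__(self):
        self.events = []
    def _interesting(self, path):
        return path.startswith("/dev/ttyACM")
    def on_created(self, event):
        if self._interesting(event.src_path):
            self.events.append(("created", event.src_path))
    def on_deleted(self, event):
        if self._interesting(event.src_path):
            self.events.append(("deleted", event.src_path))

def watch_dev(handler, path="/dev"):
    """Start an inotify-backed watch; callbacks fire on a background thread."""
    observer = Observer()
    observer.schedule(handler, path, recursive=False)
    observer.start()
    return observer
```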

# Conclusion

This article has hopefully helped you to interface with Z-Wave devices, as well as provided an introduction to some of the intricacies of using Z-Wave from Python.

In future we’ll look at integrating the MultiSensor 6 and other Z-Wave devices with BoilerIO in more depth.

# Power Usage of Virgin TiVo, V6 TV box, and SuperHub

Modern devices such as TVs tend to be able to run in standby mode quite efficiently, which means the common advice about turning them off completely is becoming less relevant.  However, if the aim is to reduce your overall energy footprint, newer always-on devices such as modems, routers, and set-top boxes with a recording facility are more relevant targets, since they are still active even when you’re not using them directly.

At home we use Virgin Media for our broadband and TV, with a TiVo box for recording.  Having just upgraded to the SuperHub 3 and V6 TV box, I was curious to see if the power consumption of these always-on devices had improved.

# Results

## TV equipment

It looks good: the V6 box uses just over a third of the energy that the TiVo uses over a day of operation.

To get to the “one-third” estimate quoted above, I used a couple of observations and modelled the usage of the device over a typical day – your mileage may vary.  As I will note below, the original Virgin TiVo seems to have two standby modes, and on one occasion I found it awake when it was not recording and was supposed to be in standby.  I’m not sure exactly when this happens or what the cause is, nor of the rules governing the higher-power standby, so I’ve tried to account for these by including 1 hour of time in the higher-power standby.

| State     | Duration | Virgin TiVo Power | Virgin TiVo Energy/day | Virgin V6 Power | Virgin V6 Energy/day |
|-----------|----------|-------------------|------------------------|-----------------|----------------------|
| Standby 1 | 16h      | 11W               | 176Wh                  | 3W              | 48Wh                 |
| Standby 2 | 1h       | 13W               | 13Wh                   | 3W              | 3Wh                  |
| Recording | 4h       | 15W               | 60Wh                   | 10W             | 40Wh                 |
| Watching  | 3h       | 19W               | 57Wh                   | 10W             | 30Wh                 |
| Total     | 24h      |                   | 306Wh                  |                 | 121Wh                |

## Superhub (v1 vs. v3)

This is less good.  I use the Superhub in modem mode (i.e. no wifi or other features enabled since these are implemented elsewhere on my network), but we see a modest 1W increase in general power usage (10W for the SH1 vs. 11W for the SH3, consistently):

| Mode       | SuperHub 1 Power | SuperHub 1 Energy/day | SuperHub 3 Power | SuperHub 3 Energy/day |
|------------|------------------|-----------------------|------------------|-----------------------|
| Modem mode | 10W              | 240Wh                 | 11W              | 264Wh                 |

# The (quite unscientific) measurement method

I used a plug-in power meter to measure the power consumption.  (The exact model I used is no longer sold, but modern equivalents are readily available.)

The meter has a function that totals energy use, but the resolution of the reading is too low for this testing, so instead I set up a camera pointing at the meter and sampled the reading at the start of each minute.  I then took the most frequent reading (the mode) over the period as the result.
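The per-period figure is then just the mode of the sampled readings, which is a one-liner:

```python
from collections import Counter

def modal_reading(samples):
    """Most frequent meter reading (watts) over the sampling window."""
    return Counter(samples).most_common(1)[0][0]

# modal_reading([11, 11, 12, 11, 13, 11]) -> 11
```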

This could obviously be done much more efficiently and accurately with a meter that supports a data-logging facility and has higher accuracy, but I didn’t have that at the time and this served my purpose.

## An aside on video tools

I used a Raspberry Pi Zero W with a camera module (without the IR filter, just because that’s what I already had lying around) to record video of the power meter during the measurement period.  I used the raspivid program to capture video.  To keep file sizes down, I saved the video at 320×240 resolution using a command similar to the following:

pi@cam:~ $ raspivid -w 320 -h 240 -o 2_v6_recording_standby.h264 -t $((1000 * 60 * 45))

$((1000 * 60 * 45)) in this case computes to 45 minutes, since the -t option specifies how long to record for in milliseconds.

This creates a raw h264 file, but it needs to be wrapped in a container format for media players to understand it: you can use the MP4Box program (part of the gpac package) to do this:

pi@cam:~ $ MP4Box -add 2_v6_recording_standby.h264 2_v6_recording_standby.mp4

I then used a processing script to extract the image at 1-minute intervals, similar to the following (run on a different machine that had a working ffmpeg command):

#!/bin/bash

for i in $(seq 0 $((2 * 60))); do
    ffmpeg -ss $(($i * 60)) -i v6_recording.mp4 -frames:v 1 $i.bmp
done

# Observations

## Virgin Tivo box

Firstly: standby.  There are multiple power-saving settings for this device: the box was set to ‘Sleep’ for these tests, which is supposedly the most aggressive power-saving mode (see the Virgin Media help page on the subject for more info).

Here is a graph showing the unit in standby with a recording finishing at minute 51 and the subsequent time spent idle:

Note that for some time after the unit enters standby it sits at 12W, then later appears to power back up, and then drops to 11W.  I assume this indicates that it enters a deeper sleep state when drawing 11W.

When I came to do further testing the next day, having left the power meter plugged in, I noticed it was showing 13W-15W for a while, then it reverted to 11W.  It wasn’t turned on or recording at the time, so I assume it was doing some scheduled activity.  As a result of this and the higher power draw when initially entering standby, I modelled a second standby state for the device above but it is a complete guess as to how much time is split between the states.

Watching TV consumes a relatively constant 19W:

## Virgin V6 box

The V6 box was relatively consistent in its power usage compared with the original Tivo.  Again, it was set to the most aggressive power-saving mode, the Eco standby mode.

Here is the unit in standby and idle:

And watching TV – an almost-constant 10W:

Finally, recording a program while the unit was in standby mode:

# Conclusion

To put the 161Wh/day saving across both appliances into perspective: for me it accounts for a reduction of around 5% of my always-on energy consumption at home.  I’d say it’s neither totally insignificant nor huge.  However, I hope that making information like this available enables consumers to take it into account when making purchasing decisions; I couldn’t find anything at this level of detail already published for these two devices.
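For reference, the 161Wh/day figure follows directly from the two tables above: the V6 saves energy relative to the TiVo, while the SuperHub 3 uses slightly more than the SuperHub 1.

```python
# Daily totals (Wh) taken from the two results tables above.
tivo, v6 = 306, 121
sh1, sh3 = 240, 264

net_saving = (tivo - v6) - (sh3 - sh1)  # TV-box saving minus modem increase
# net_saving -> 161
```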

Tools like OpenEnergyMonitor or a smart meter can help you to understand your energy usage better, and with the help of these you could consider performing an audit to reduce unnecessary sources of consumption.

# Controlling your heating from anywhere

Previously we discussed the algorithms that can be used to control a domestic heating system, built a simple UI to schedule the temperature changes, and reverse-engineered a pre-existing RF relay to control the boiler.  So far the web service that provides the UI has been deployed on a PC at home but, since you’re most likely to want to override your heating controls when you’re unexpectedly arriving home early or late, we need a site that can be reached from anywhere.

This article describes changes to the software that were necessary to achieve that, some discussion about the technical choices, and step-by-step instructions to set this up for yourself.  If you’re here just for the step-by-step setup guide, jump to “Deploying BoilerIO on AWS”.

As usual, all source code is available under an MIT license on github.

# Client, server, and device

In the previous article, we assumed that the hosts running the web service and issuing boiler control commands were on the same network and, in fact, probably the same host.  To put BoilerIO online (i.e. provide access to controls from anywhere), this assumption is no longer true.  Although you could contemplate putting the web-server host into the DMZ (i.e. exposing it to the Internet whilst retaining a connection to your network), this is a massive can of security worms and is not suggested.  You could consider hosting your own server on a completely separate network if you have access to multiple public IP addresses, but this is not the common case and so is not discussed here.

So now, we have three distinct applications:

1. Clients: Your phone or PC, hosting the scheduler application in a web browser and accessing and controlling heating controls provided by the service.  This is a JavaScript/HTML web-app currently (though you could imagine other client apps such as an iOS app).
2. The service: This provides the backend that everything else talks to: it stores schedule data, cached temperatures, and hosts the static files needed for the web UI.  It could be run across several servers with a load balancer, but for a domestic installation a single, low-power VM is sufficient to host everything (the backend uWSGI app, the frontend web-server, and the Postgres database).
3. Devices: This is just a special case of a client.  Currently there’s no real difference in how they access the service, other than making use of additional HTTP endpoints to update the cached temperature on the server, etc.  They execute the state the user has programmed via the web interface by interacting with the local infrastructure (e.g. issuing boiler commands).  They are fixed ‘appliances’ running on the user’s local network and currently authenticate the same way as user-facing clients, using a shared secret.

In future, role-based access control could be used to restrict access to endpoints on the backend service depending on whether a device or user credential was used for authentication.

# Security considerations

With any online service there are always security risks and the possibility of unknown methods of attack exists.  One future piece of work is to write a threat model for BoilerIO.  In the meantime, some key considerations include:

1. Ensuring that the backend service requires authentication as a user or device for any API access.  See below for more info: this is currently achieved using HTTP Basic Authentication (over HTTPS), as supported by nginx.
2. Ensuring that communication with the backend service is always encrypted.  Again, this is implemented server-side: the web-server configuration redirects HTTP requests to the HTTPS protocol.  The device code supports but currently doesn’t require HTTPS as this is not convenient for local-only deployments.
3. Ensuring that the device doesn’t allow unverified certificates, to reduce the likelihood of man-in-the-middle attacks.  This is the default for the device code and, for the UI, browsers should provide an indication if the certificate is bad.
4. Sanitising inputs to try to filter out anything that might cause damage (either malformed or otherwise malicious input).
5. Running all services with least privilege.  All device code and service code should run as an unprivileged system user.  Further improvements could be made by creating either SELinux or AppArmour policies for the modules.
6. Not exposing anything unnecessary externally such as the database or any service on the device itself.  The device now only makes outbound connections, and only requires HTTPS access to the scheduler service.
7. Use of strong passwords.  Since you are in control of the entire deployment and can use your browser’s password store, you can have a separate, secure password for each user and device accessing the service.
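To illustrate points 1–3 from a client’s point of view, here is a stdlib-only sketch (the host and endpoint are made up, and BoilerIO’s real clients may do this differently): it attaches HTTP Basic credentials and relies on urllib’s default TLS certificate verification.

```python
import base64
import urllib.request

def authed_request(url, user, password):
    """Build a request carrying HTTP Basic credentials.  urllib.request
    verifies TLS certificates by default, so a bad certificate on the
    server aborts the request rather than exposing the credentials."""
    token = base64.b64encode("{0}:{1}".format(user, password).encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Basic " + token)
    return req

# e.g. urllib.request.urlopen(
#          authed_request("https://heating.example.com/schedule", user, pw))
```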

## The backend service

Authentication of users and devices making API requests to the backend service ensures that only you and the devices you own can read and configure the heating schedule and state.

There are many ways of implementing authentication, either through the application itself or the web-server hosting it.  Since BoilerIO doesn’t yet support authentication itself (e.g. via OAuth), an expedient way to achieve a reasonable level of security is to configure the web server to require HTTP authentication.  Since I’m using nginx, I’m forced to stick with HTTP Basic Auth since their Digest Auth implementation is incomplete at the time of writing.  It’s critical the authentication is done over a secure (HTTPS) connection to prevent credentials being stolen by capturing the authentication traffic.

Note that using Basic Auth means that, if someone acquires the HTTP requests in cleartext, the password will be directly available to the intruder.  Although interception shouldn’t normally happen with TLS, you may, for example, be subject to man-in-the-middle attacks if you have rogue certificates installed on your system (so that your site looks legitimate to the browser but is being impersonated).  This is sometimes the case on corporate systems employing SSL proxies.

For good overall security it is best to use long, random, machine-generated passwords to access your site.  You can configure modern browsers to remember those passwords on computers you trust, and if you forget or lose them you can easily reset them.
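For example, Python’s secrets module can produce such a credential (any generator, such as the pwgen utility mentioned later, works equally well):

```python
import secrets

def make_password(nbytes=24):
    """A long, URL-safe random credential (about 1.3 characters per byte
    of entropy; 24 bytes gives a 32-character password)."""
    return secrets.token_urlsafe(nbytes)
```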

## The device(s)

The scheduler operates on the user’s local network to interface with temperature sensors and the boiler controller (any of which may be running on the same host) and uses the scheduler web service to get an up-to-date heating schedule.

Assuming that the “device software” is hosted on a private network, and not reachable from the Internet, then the main security considerations are:

1. Protecting against invalid responses from the scheduler service, in case it has been compromised.  The code as-is does some validation of responses and will ignore faulty ones.
2. Ensuring any parts of the local configuration, such as the MQTT broker, are sufficiently locked-down.  This is up to the end user to configure.

# Deploying BoilerIO on Amazon Web Services

Amazon Web Services (AWS) provides a cheap and convenient way of getting server resources with a public IP address.  You may have access to alternatives including your own server, in which case the AWS-specific parts can be ignored or modified to suit your environment.

BoilerIO currently uses a Postgres database as a backend, so it’s most convenient to configure our own services on a VM.  Elastic Beanstalk can provide a Python environment but then there’s still the database to be hosted and Amazon charge for the underlying EC2 instance anyway.  I chose to use a Lightsail VM.  Lightsail is essentially a simpler version of EC2, made to look more like a traditional VM hosting service with simple pricing and configuration.

For any of this to be useful, you need to have the pre-requisites including a temperature sensor (such as the EmonTH) with a compatible interface (for EmonTH, you can use the JSON mode I added to emonhub) between it and MQTT, and a way of interfacing with your boiler (such as the Danfoss RF interface running on a JeeLink v3c 433MHz, if you have a compatible Danfoss receiver).

Setting up the device side is a little complex as you need to get the above parts working together.  More documentation will be added over time, as well as more complete coverage for other types of boiler setup and temperature sensor.  For now, if you are technically-savvy and have the requisite hardware then you can set this up for yourself and use these instructions to configure the server side.

## Step 1: Create your VM

You need to have signed up for AWS.  Once you have an AWS account, you can get to Lightsail from the AWS Management Console to configure both an ‘instance’ (a virtual machine), and a public IP address.

## Step 3: Database configuration

The web backend for BoilerIO requires a Postgres database to be configured:

Firstly, install postgres:

$ sudo apt-get install postgresql

Then, we need to create a user for the backend.  I would suggest using a long, random password for this user, which you can generate using the pwgen utility.  Once you have a password, create the user and database for the scheduler:

$ sudo -u postgres createuser -P scheduler
Enter password for new role:
Enter it again:
$ sudo -u postgres createdb scheduler

Then, create the tables etc. using the script in the boilerio repository:

$ sudo -u postgres psql scheduler < boilerio/scheduler.sql
SET
SET
...
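As an aside, if you don't have pwgen to hand, an equivalently long random password can be generated with Python's standard-library secrets module:

```python
import secrets
import string

def random_password(length=32):
    """Generate a random alphanumeric password using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```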

## Step 4: Installing the app

This is a simple case of installing the Python package:

$ sudo apt-get install python-setuptools
$ sudo easy_install pip
$ cd boilerio
$ sudo -H pip install .

The package also relies on a config file, /etc/sensors/config, as described in the README.md in the repository.  Only the configuration options relevant to the web app are needed since we aren’t installing the other components on the server.  A sample configuration file might look like this:

[heating]
scheduler_db_host = localhost
scheduler_db_name = scheduler
scheduler_db_user = scheduler
scheduler_db_password = 
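As a sketch of how such a file can be read with Python's standard-library configparser (BoilerIO's actual loader may differ; the sample is inlined here for demonstration):

```python
import configparser

config = configparser.ConfigParser()
# In the app this would be config.read("/etc/sensors/config"):
config.read_string("""
[heating]
scheduler_db_host = localhost
scheduler_db_name = scheduler
scheduler_db_user = scheduler
scheduler_db_password = secret
""")

db_host = config.get("heating", "scheduler_db_host")
db_name = config.get("heating", "scheduler_db_name")
print(db_host, db_name)
```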

## Step 5: Configuring uWSGI

uWSGI is a web server that can be used to serve the BoilerIO flask app behind nginx.  We use this because it is more performant, robust, and secure than the built-in werkzeug web server provided as part of flask (which is only intended for development use).  To install it, run the following:

$ sudo apt-get install uwsgi uwsgi-plugin-python

It’s best to run uwsgi apps as an unprivileged user, so we need to create the user and then set permissions of various locations appropriately:

$ sudo adduser --no-create-home --system boilerio

Now, create a configuration for the app.  Create the file /etc/uwsgi/apps-available/thermostat.ini:

[uwsgi]
socket = /var/www/boilerio/thermostat.sock
module = boilerio.schedulerweb:app
uid = boilerio
gid = www-data
chmod-socket = 664

Finally, create the directory for the socket, enable the app, and restart uWSGI:

$ sudo mkdir -p /var/www/boilerio
$ sudo chown boilerio:root /var/www/boilerio
$ sudo ln -s ../apps-available/thermostat.ini /etc/uwsgi/apps-enabled/
$ sudo systemctl restart uwsgi

You can check that it is running by checking the output of ps:

$ pgrep -a uwsgi
18503 /usr/bin/uwsgi --ini /usr/share/uwsgi/conf/default.ini --ini /etc/uwsgi/apps-enabled/thermostat.ini --daemonize /var/log/uwsgi/app/thermostat.log
18511 /usr/bin/uwsgi --ini /usr/share/uwsgi/conf/default.ini --ini /etc/uwsgi/apps-enabled/thermostat.ini --daemonize /var/log/uwsgi/app/thermostat.log
18512 /usr/bin/uwsgi --ini /usr/share/uwsgi/conf/default.ini --ini /etc/uwsgi/apps-enabled/thermostat.ini --daemonize /var/log/uwsgi/app/thermostat.log

To ensure uWSGI starts at system startup, check that the RUN_AT_STARTUP variable is set to yes in the /etc/default/uwsgi file.

## Step 6: Configuring nginx and HTTPS

To install nginx, simply run:

$ sudo apt-get install nginx apache2-utils

### Step 6a: Let’s Encrypt!

To set up HTTPS, you can use the Let’s Encrypt! Certificate Authority to get a certificate for your HTTPS server that will be trusted by most browsers.  Let’s Encrypt! provides domain-validated certificates, which are probably good enough for your needs.  In order to get the certificate, you have to prove (with the help of a tool called certbot) that you are in control of the domain.  This gives you a way of setting up secure communication to your site, but it does not prove to users that you are who you say you are beyond the fact you are the current owner of the domain.

To set up HTTPS using a Let’s Encrypt! certificate, you can follow the certbot instructions.

On AWS Lightsail, there is a firewall protecting your VM that you will need to reconfigure to allow HTTPS (port 443) incoming connections before configuring certbot.

### Step 6b: Basic authentication setup

To use HTTP Basic Authentication, you need to set up an htpasswd file with usernames and hashed passwords.  This shouldn’t be served over HTTP; it can be kept in /etc/nginx, well out of the way of the HTTP server.  The easiest way to create it is using the htpasswd utility.  Since there isn’t a native nginx utility for this, you can use the one from the apache2-utils package.  Install it, then create users for your site, using the -c option the first time around to create the non-existent file.  We’re using sudo just to get write permission on /etc/nginx here:

$ sudo htpasswd -c /etc/nginx/thermostat_htpasswd <username>
$ sudo htpasswd /etc/nginx/thermostat_htpasswd <another-username>


It is recommended to use long random passwords here; you could generate these with pwgen(1), and then use your browser to remember the password.  You’ll need to include the scheduler password in the /etc/sensors/config file on the host running the scheduler itself: for more info see the README.md in the repository, especially the section on the configuration file.
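For the curious, this is all Basic Authentication amounts to on the wire: the credentials are merely base64-encoded, not encrypted, which is why the HTTPS setup in Step 6a matters.  A short illustration (example credentials only):

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header a browser sends for Basic auth."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Anyone sniffing plain HTTP can trivially decode this, hence HTTPS:
print(basic_auth_header("user", "pass"))
```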

### Step 6c: nginx configuration file

Finally, you need to set up your configuration to serve the BoilerIO API and the static pages for the user interface.  To do this, create a new configuration file that looks something like this under /etc/nginx/sites-available/boiler, then link it under sites-enabled and remove the default site:

server {
listen 80;
listen [::]:80;
server_name ;
return 301 https://$host$request_uri;
}

server {
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live//fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live//privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

root /var/www/html;
index index.html;
location / {
auth_basic 'thermostat';
auth_basic_user_file '/etc/nginx/thermostat_htpasswd';
location /api {
try_files $uri @boilerio;
}
}
location @boilerio {
include uwsgi_params;
rewrite ^/api/(.*)$ /$1 break;
uwsgi_pass unix:/var/www/boilerio/thermostat.sock;
}
}

The config file above uses a different structure than that installed by certbot to redirect users from the HTTP site to the HTTPS site.  The certbot version does work, but the above avoids the if statement (see “If Is Evil”) and doesn’t include any of the statements that would cause content to be delivered within the non-SSL server block, reducing the chance of mistakes in future changes to the configuration file.

It is important that the 301 Moved Permanently response happens before the authentication.  You can check that the flow is happening as you expect using a tool such as tcpdump or Wireshark.  If you run tcpdump -w packets -i eth0 or similar on the server, access the site from your browser, then open the packet trace in Wireshark and filter on http, you should see the redirect with no authentication requests present.

You can either modify the above configuration to serve the static files from elsewhere, or copy them into the /var/www/html directory.

# Conclusion

Setting up the service is currently not a simple operation, but hopefully this article describes the process well enough to get a technically-savvy user started.  There is lots of further work to be done but, now we have the foundations set up, future articles can look at adding features such as sub-zone control and new types of sensor and actuator.  Watch this space, and please comment with anything you’d be interested in seeing.

# The BoilerIO Software Thermostat

It’s time to step up from command-line control of the heating system I’ve been working on to having a weekly schedule and temporary override function available through a UI, making the system practical on a day-to-day basis.  As always, code described here is available on github under an MIT license.
# Overview

The scheduler chooses a target heating temperature throughout the day and needs to be usable by someone non-technical: it should show the state of the system, provide a means to enter and edit a schedule, and provide a means to change the target temperature for a fixed period of time.  To achieve this a new daemon and web service have been added to the architecture.  Here is a summary of the responsibilities of those components:

• The Scheduler reads the weekly schedule from the database, and sends commands to the Boiler Controller to change the target temperature at the appropriate times.
• The SchedulerWeb REST service provides a REST API over HTTP to get a summary of system status, set or clear the temperature target override, and add and remove entries from the weekly schedule.  This is implemented using Flask in Python, with uWSGI and nginx to provide a robust service.
• The scheduler configuration database is a PostgreSQL database holding the schedule, a cached copy of the current temperature, and the target override configuration.  It’s the primary store for this information: the REST service updates it and the scheduler uses it to control the temperature.
• The SchedulerWeb web frontend is an HTML5/CSS3/Javascript single-page web UI that makes use of the REST service to interface with the system from the user’s web browser.

# Experience and implementation notes

Rather than trying to document the entire implementation, I will instead talk about each area and some of the challenges or points of interest, and the major decisions that were taken and why.  My day job is a lot lower in the software stack than this, so there may be better approaches than what I’m presenting.

## The HTML frontend

The app is designed to largely follow Google’s Material Design guidelines; I hadn’t realised when I started that there are stylesheets provided by Google that can be used, so I implemented the styles I needed myself.
Since the fashionable choice of web framework seems to change frequently, I was initially resistant to using anything at all fancy, and instead used traditional jQuery directly.  In retrospect, not having some of the features of a slightly higher-level framework probably made the code worse than it ought to be, and in future I might look at whether a framework provides a better frontend implementation.  Modern web development has definitely come on a long way since I last attempted it: the developer tools inside Chrome are great, though there are still cross-browser issues even when only supporting latest-generation browsers (for example, implementing modal dialogs is still done manually even though there’s the dialog tag now, and I had issues with alignment and appearance of certain input fields).

## The REST service

The backend service uses Flask, a neat framework for web apps in Python.  When developing your service you should be aware that werkzeug, the built-in web server, is not suitable for production due to security and scalability issues.  However, if you do use it during test you’ll find it also makes it easy to accidentally keep global state within your app, which you shouldn’t be doing because it won’t work when you’re inside a production server.  For that reason, I suggest starting to use uWSGI relatively early in your development; it’s not difficult to use for test.

I’ll come onto a step-by-step deployment guide in an upcoming post, but I recommend that the web service be deployed using uWSGI behind nginx or Apache to get a secure and scalable deployment.  However, on Raspbian at least, there are a couple of pitfalls I found with uWSGI: I used pip to install a relatively modern version, which used a PREFIX of /usr/local.  For some reason, even with /usr/local/bin on my PATH, uwsgi did not work correctly unless called with its full path (despite printing a message stating the full path it had detected, which was correct).
Perhaps this is a security measure, but the failure mode here and on other issues I experienced was somewhat opaque, and better error messages would have been useful.

To use uwsgi in production, it is helpful to have it start hosting the relevant services on system boot.  On Raspbian, this requires a systemd unit to be created (on other systems an init script, upstart job, or just adding to rc.local would be needed).  That nothing was already in place could be a result of the way I installed uwsgi, but in any case I followed the Debian package’s convention of creating a configuration file in /etc/uwsgi/apps-available and linking to it in /etc/uwsgi/apps-enabled.  The config I used was this:

[uwsgi]
chdir = /var/www/app/boilerio
socket = /var/www/app/thermostat.sock
module = schedulerweb:app
logto = /var/log/uwsgi/thermostat.log
uid = boilerio
gid = www-data
chmod-socket = 664

Note here that, for better isolation, I’m using a system user specifically for this service, and sharing a group with the web server so I can create the socket with permissions for both to be able to access it.

### nginx and the URL namespace

The web service natively places its REST endpoints at root level.  As I use nginx to also serve static content – the client HTML/JS/CSS files – I decided to map the service under /api.  Files for the web client get served from /, and they make API calls assuming the /api prefix.  I use a simple nginx configuration file to achieve this:

server {
listen 80;
server_name hub;
charset utf-8;
location / {
root /var/www/app/boilerio/static;
}
location /api {
try_files $uri @boilerio;
}
location @boilerio {
include uwsgi_params;
rewrite ^/api/(.*)$ /$1 break;
uwsgi_pass unix:/var/www/app/thermostat.sock;
}
}

The endpoints and the data they expect are currently hand-coded in the Python web application code, which is less than ideal.  Defining a clear API where constraints on input and validation can be consistently and mechanistically verified is a better approach and an area for improvement.  Swagger seems like one good option to implement this, has integration options with Flask, and has the side benefit of a nice web UI for making REST calls to the service too.
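As a sketch of the hand-coded validation style described above, a hypothetical endpoint might look like the following (the route and field names are invented for illustration, not BoilerIO's actual API):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/target_override", methods=["POST"])
def set_target_override():
    # Each endpoint has to validate its own inputs by hand:
    data = request.get_json(silent=True)
    if data is None:
        return jsonify(error="expected a JSON body"), 400
    try:
        target = float(data["target"])
        minutes = int(data["duration_mins"])
    except (KeyError, TypeError, ValueError):
        return jsonify(error="need numeric 'target' and 'duration_mins'"), 400
    if not (5.0 <= target <= 30.0):
        return jsonify(error="target out of range"), 400
    # ... write the override to the database here ...
    return jsonify(target=target, duration_mins=minutes), 200
```

A schema-driven approach (e.g. Swagger/OpenAPI) would let these constraints be declared once and enforced mechanically instead of repeated in every handler.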

## The scheduler

After modifying the maintaintemp script to listen for new target temperatures over MQTT rather than having a static target passed on the command-line, the scheduler is able to periodically update the target temperature.  The current, simple, implementation polls the database once per minute, or when a trigger message is received, to load the currently-active schedule and target override.  It then selects a target based on these inputs and sends a message to the boiler controller to update the target temperature.  At startup, and when the controller for a zone restarts, the target request is sent immediately to avoid having to wait a whole polling interval.

This is a likely area for innovation in future: enhancing how the target is chosen using additional inputs or policies, enabling features like pre-heating to reach an upcoming set-point, altering the target based on presence information, and intelligently dealing with installations with multiple zones or independent controls within a zone.
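The target-selection logic described above can be sketched roughly as follows; the data shapes are illustrative rather than BoilerIO's actual schema:

```python
from datetime import datetime, time

def select_target(schedule, override, now):
    """Choose the active target temperature.

    schedule: list of (time, target) pairs for the day, sorted by time.
    override: (target, expiry_datetime) or None.
    """
    # An active override wins over the schedule:
    if override is not None:
        target, expiry = override
        if now < expiry:
            return target
    # Otherwise the most recent schedule entry at or before 'now' applies;
    # before the first entry of the day, the last entry (from the previous
    # evening) is still in effect.
    active = schedule[-1][1]
    for entry_time, target in schedule:
        if entry_time <= now.time():
            active = target
    return active
```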

## The database

A PostgreSQL database is used to store the schedule and configuration.  This might seem like overkill, but when developing a web app where multiple processes need read/write access to the data it seems sensible to use a tool that is designed for that kind of environment, even if the scale it is being deployed at is relatively tiny, for a few reasons:

• It avoids designing scalability out now.  If we used another approach that was “simpler” but couldn’t be scaled if necessary it would be a potentially large undertaking to fix.
• You get a lot of correctness for free.  If, for example, the schedule were stored in a plain text file (say, as JSON), then it is definitely possible to make everything atomic.  But the hassle of getting it exactly right does not seem worthwhile when the database can deal with it all more efficiently from the beginning.
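The atomicity point can be demonstrated in a few lines; the standard-library sqlite3 module is used here for portability, and PostgreSQL provides the same guarantee:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schedule (day INT, start TEXT, target REAL)")

try:
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute("INSERT INTO schedule VALUES (0, '06:30', 20.0)")
        raise RuntimeError("simulated crash mid-update")
except RuntimeError:
    pass

# The partial insert was rolled back, so the table is still empty:
count = conn.execute("SELECT COUNT(*) FROM schedule").fetchone()[0]
print(count)
```

Getting the same all-or-nothing behaviour with a hand-rolled JSON file (temporary file, fsync, atomic rename, locking against concurrent writers) is exactly the hassle the database removes.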

Another approach would be to use a document store like MongoDB.  There are pros and cons to either way (this strongly-worded post is an interesting read concerning problems faced in a practical application), but I decided to go with something I was familiar and confident with.  While having a fixed schema seems to be considered by some to be “overhead” or to slow down development, it can also reduce problems by helping to identify programming or design errors earlier in the cycle and certainly did not seem to make development difficult.

A somewhat different approach could be to use Amazon or Azure web services.  Amazon have several relevant offerings here (and I believe Microsoft have alternatives too):

• AWS IoT: This is a service that maintains a ‘shadow device state’.  User agents can post new ‘desired state’ and devices can post the actual state of the device.  AWS publishes messages to indicate various conditions including when the target and current state are divergent so that the device can change its physical state to match the desired state.
• AWS Lambda and API Gateway: This provides a potentially simple way to implement the scheduler REST API without having to host the web service component yourself, potentially reducing the maintenance burden.  You can easily provide authenticated access regardless of where you are connecting from.  Zappa is a tool that lets you easily run a Flask application within Lambda, so could be used to allow the BoilerIO code base to be used without modification.
• S3 could host the static files, such as the CSS, JS, HTML, etc. for the client app.
• AWS DynamoDB or the Relational Database Service.  The latter could be a drop-in replacement as it has PostgreSQL support, whereas changes to the app (relatively minor, contained within one module) would be required for DynamoDB, although that option does have more attractive pricing.

The first issue to consider with this approach is what the connectivity requirements are and what is the user’s expectation of behaviour when their Internet connection is unavailable.  In this case, the minimum requirements seem to be that (i) the schedule should continue to run without interruption regardless of Internet downtime, and (ii) the user should be able to supply at least an override even if the schedule is not editable.  Both are doable, the first trivially since the scheduler can (and should) be run locally to the installation.

The second issue is vendor lock-in.  These services are proprietary, and there’s no way to run local versions of them either for testing or deployments where using an online service is suitable.

In the end I decided to stick with a regular web service for now, which leaves the option open for either hosting it off-site, in a remote VM for example, or having a connector module that enables AWS IoT or similar to provide off-network access without hosting a public-facing service locally.

# Next steps

This blog post covers part of one of the “next steps” I identified in the previous post.  Upcoming areas for further work are better documentation including a setup guide, and looking at additional features such as multiple zone support and pre-heating.

# Boiler Control to Maintain a Set Temperature

I wanted to be able to control my home’s heating from a computer.  This post discusses the next phase in that project: a control layer that maintains a specified room temperature using a temperature sensor and the boiler control built in previous articles.

I published the boilerio repository on github that contains the code to do this.

Note that neither the code nor the article come with any warranty: please be careful if you’re using it as you could damage your heating system or create a safety hazard.

# Heating system overview

Our heating system is fairly typical for the UK: an S-plan system using a gas-fired “system” boiler with a pressurised hot water storage vessel.  When the thermostats call for heat, water is heated by the boiler and pumped around a series of radiators.  As noted in the previous article, we have a Danfoss RX2 receiver whose control protocol I’ve reverse-engineered, so we will use that here to control the boiler.

This article looks at the common but relatively crude control method of simply turning the boiler on/off periodically to maintain a steady temperature.  Note that the boiler also has a manually-specified target flow temperature and will modulate the burners to achieve this when it is active.

More advanced controls, possibly the subject of future articles, could adjust boiler settings in response to flow temperature and other variables to ensure the boiler is within its most efficient operating parameters.  This requires integration with the boiler’s electronic systems not explored here.

The system has three main components:

1. The thermostat transceiver.  Here we’re using simple on/off control by implementing an interface to the Danfoss RX2 receiver.  We’ll respond to MQTT messages to allow services to issue commands to the boiler and, for monitoring, publish to an MQTT topic when messages are received over RF.
2. The heating controller.  This will be a Python daemon that works towards a temperature setpoint (for now this is a command-line parameter, but in future it will be hooked up to a scheduler) by monitoring the current temperature and deciding how to control the boiler to reach the target.
3. The temperature sensor.  I’m using an EmonTH for this that measures and publishes room temperature to MQTT.  There are some software tweaks I will discuss below.

# The heating controller

There are three modes of operation in reaching and maintaining temperature: significantly below setpoint, significantly above setpoint, and near the setpoint.  The first two cases are easy: the boiler should either be on or off.  Within the target zone we can modulate the boiler on/off to produce an average heating input to the room of the desired level: this is a type of pulse-width modulation (PWM) with long pulse durations in the order of minutes.

To decide what the duty-cycle (the fraction of the time boiler is turned on for in the full cycle) should be we need to determine the required heat input for the room using a control mechanism that can ‘find’ the correct value, since it will differ based on various factors including the temperature difference to outside and outside weather, building materials and insulation type, effectiveness of the radiators, losses through pipes, etc.

After trying out a couple of approaches based around incrementally increasing or decreasing the PWM duty-cycle according to the current “error” (difference between target and actual temperature), I learned that using a PID controller is a common approach that can be effective given some tuning.  This computes a control variable (in our case the PWM duty cycle) given a process variable (the current temperature), and a setpoint (the target temperature).  The output u at time t is given by:

$u(t) = K_p e(t) + K_i \int_0^t e(\tau)\,d\tau + K_d \frac{de(t)}{dt}$

The PID output combines the error (difference between current value and setpoint), the total error over time (the integral component, which allows the controller to adjust to the current conditions), and the differential (to damp excessive corrections), in an amount that is application specific using the coefficients Kp, Ki, and Kd.  An initial implementation is in pid.py in the boilerio repository.  It is currently quite basic and could be further refined.

For a more detailed overview of PID controllers, the Wikipedia page is a good place to start, and then Brett Beauregard’s excellent Improving the Beginner’s PID article series and accompanying library for the Arduino provide a good explanation of some of the common issues and solutions with basic PID controllers.

The implementation makes a few application-specific adjustments:

• The output is limited between 0.15 and 1.  Values below 0.15 are rounded down to 0, since such a small duty cycle doesn’t give the boiler a chance to do anything useful.  Different limits may be suitable for different systems.
• The integral component is limited between -1 and +1 to avoid it becoming excessively large in either direction (since it can’t influence the output beyond those limits anyway).
• Unlike many applications of PID controllers, where the process variable is actively moved in both directions, we can’t actively cool the room.  We therefore allow a larger negative integral than one might in other systems, to accommodate the proportional term being too large.
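Putting this together, here is a condensed sketch of such a PID update, including the output and integral clamps from the points above (this is not the exact code in pid.py, and the coefficients are placeholders that would need tuning):

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        """Return a PWM duty cycle in [0, 1] for the next period."""
        error = setpoint - measured
        # Accumulate the integral component and clamp it to [-1, 1]:
        self.integral += self.ki * error * dt
        self.integral = max(-1.0, min(1.0, self.integral))
        derivative = 0.0
        if self.prev_error is not None and dt > 0:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.integral + self.kd * derivative
        # Clamp the output to a duty cycle, rounding tiny values to zero
        # since below ~0.15 the boiler can't do anything useful:
        u = max(0.0, min(1.0, u))
        if u < 0.15:
            u = 0.0
        return u
```

For example, a controller well below setpoint saturates at a duty cycle of 1 (boiler fully on), while a tiny error produces a duty cycle rounded down to 0.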

# The simulator

Choosing appropriate coefficients for the PID controller and the efficiency of test/dev cycles were both important challenges.  At this time of year there is little opportunity to do real-world testing where the temperature difference between inside and out is very high, and to do such a test is both time-consuming and potentially wasteful of energy.  Instead, I decided to write a simple simulator, sim.py, to help with the majority of the debugging and tuning.

There are various tools online for calculating heat loss in your home that take into account the building materials, insulation, windows, ventilation, etc.  To estimate heat loss through conduction they look at the loss through each building element, $Q = UA(T_i - T_o)$, where U is the element’s thermal transmittance, A its area, and $T_i$ and $T_o$ the inside and outside temperatures; there is then the heat loss through ventilation to add in.

We use an extremely simple model that is sufficient to achieve the goals described above. Firstly we combine the U and A terms in the heat loss formula and assume an average across all building elements.  We assume that in each time increment, some fraction of the heat will be lost to the outside and some heat will be gained through transfer from the radiators, each with different efficiencies and therefore coefficients, without considering ventilation separately.  The radiator temperature itself is assumed to increase and decrease linearly over a ramp-up and ramp-down time when heating demand is indicated or ceases.
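A minimal sketch of one time step under this model follows; the coefficient names and default values are mine, for illustration, rather than those used in sim.py:

```python
def step(room_temp, radiator_temp, outside_temp, dt,
         k_loss=0.0005, k_rad=0.001):
    """Advance the modelled room temperature by dt seconds.

    k_loss lumps together the U and A terms averaged over the building;
    k_rad is the transfer coefficient from radiator to room.
    """
    # Heat lost to outside, proportional to the inside/outside difference:
    loss = k_loss * (room_temp - outside_temp) * dt
    # Heat gained from the radiator while it is warmer than the room:
    gain = k_rad * max(radiator_temp - room_temp, 0.0) * dt
    return room_temp - loss + gain
```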

We inject a fake Boiler class that updates the model parameters rather than actually sending commands to the real system, allowing the model to interact with the controller.  The code is careful to only get the current time in one place and pass it as a parameter, to make mocking the passage of time easier.

To find a reasonable value for the heat-loss coefficient, I grabbed some real data from my temperature logs and used scipy to do a curve fit.  Then, keeping that value constant I did a similar exercise to determine the coefficient for heat transfer from the radiator in the room.  These values are obviously very rough; different time periods produced different results as the conditions at the time weren’t known (doors opened/closed, etc.).
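To sketch the idea with synthetic data: with the radiators off, the model reduces to Newton's cooling, $T(t) = T_o + (T_0 - T_o)e^{-kt}$, so the heat-loss coefficient can be recovered with a simple linear fit in log space (the real exercise used scipy's curve fitting on logged temperature data):

```python
import numpy as np

t = np.arange(0, 6 * 3600, 600)          # ten-minute samples over 6 hours
t_out, t0, k_true = 5.0, 20.0, 1e-4      # synthetic stand-in for real logs
temps = t_out + (t0 - t_out) * np.exp(-k_true * t)

# ln((T - T_out)/(T0 - T_out)) = -k t, so the slope of a linear fit is -k:
slope, _ = np.polyfit(t, np.log((temps - t_out) / (t0 - t_out)), 1)
k_fitted = -slope
print(k_fitted)
```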

# Interfacing the boiler to MQTT

The boilerio repository includes a daemon, boiler_to_mqtt.py, that will interface with a serial port using the protocol implemented in the previous two articles.  This, like the other tools, uses a config file to specify the location of the MQTT broker and the topic names to use.

RF messages sent and received are published to the topic specified by info_basetopic in the config file.  The published payload contains a JSON message with keys “direction” which is ISSUE or RECV, and “cmd”, which is the command issued or received (ON, OFF, or LEARN).  An example payload might be:

{"thermostat": "0x1234", "cmd": "ON", "direction": "RECV"}

Clients can issue commands to the boiler by publishing to the topic specified as the heating_demand_topic in the configuration file.  The script expects a JSON payload consisting of an object containing two values: the command (“O” for On, “X” for Off, and “L” for Learn), and the thermostat ID as an integer.  A sample payload might be:

{"command": "X", "thermostat": 23123}
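Constructing such a payload is straightforward; publishing it is then a single call with any MQTT client (mosquitto_pub shown in a comment as one option):

```python
import json

def boiler_command(command, thermostat_id):
    """Build the JSON payload expected on the heating demand topic."""
    assert command in ("O", "X", "L")  # On, Off, Learn
    return json.dumps({"command": command, "thermostat": thermostat_id})

payload = boiler_command("X", 23123)
print(payload)
# Publish with e.g.:  mosquitto_pub -t <heating_demand_topic> -m '<payload>'
```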

See the README.md for more information on using the boiler_to_mqtt.py script if you are using a Danfoss receiver.  Alternatively, you can still use the temperature management code but replace this script with something that can control whatever receiver you are using.

# Temperature input

There are several options for temperature input: originally I had put together my own temperature ‘transmitter’ using an AVR microprocessor, a Dallas Semiconductor DS18B20, and an XBee radio (Sparkfun have a guide on the XBee).  If you go down that route be sure to get the right XBee hardware, since v1 and v2 are not compatible.  I also had issues with an Arduino shield I bought, though breadboarding with Sparkfun’s XBee breakout worked fine.

I have since switched to using the excellent emonTH v2 from OpenEnergyMonitor.  These have a simpler RF69 radio, which is all that’s needed (and handily is the same one that supports interfacing with the Danfoss receiver), come pre-assembled, have a lower-power sensor that can also record humidity, and are battery powered.  The hardware design and software are open-source.

I did choose to make some modifications to the emonTH and emonhub software. For the emonTH I increased the resolution of the temperature readings, which required a number of updates across the stack:

• Support for setting the resolution in the library for the SI7021 sensor;
• Setting the SI7021 resolution during emonTH startup and reporting hundredths rather than tenths of a degree over RF, which required reprogramming the emonTH using a USB-to-UART adapter;
• Modifying the emonhub configuration to accommodate the change of packet format.

Increasing the resolution of the SI7021 sensor readings will also increase the time taken to acquire those readings, and therefore the overall power consumption, so expect batteries to run out quicker.  That being said, the OEM project estimates years of battery life from the default configuration, so even at a quarter of that it would still be acceptable to me since I’m using rechargeable batteries anyway.

I also modified the format emonhub uses to post data to MQTT.  The pre-existing options were either a single message with a series of values whose order is significant in determining their meaning (the “rx” format), or one message per reading (e.g. to topics like emonth/temperature, emonth/humidity), where the grouping of the messages cannot be reconstructed.  My branch of emonhub posts a single MQTT message with a JSON payload containing the group of readings (temperature, humidity, battery voltage, etc.) that were taken simultaneously.  This is not strictly necessary but was helpful for other projects.

The modified emonhub, emonTH, and SI7021 code are available from github.

# Real-world testing

I have used this code to control the real boiler a number of times, mostly with overnight tests.  With the weather getting warmer, I’ve not been able to get a feel for how it works when it’s really cold outside but, in the situations I’ve used it so far, it seems to have worked well.  Generally it maintains the temperature to within ±0.2ºC of the setpoint, which I consider to be a success.

# Next steps

The upcoming good weather will surely slow progress but there is plenty that can still be done: three possible areas to investigate next are:

1. Power measurement.  It would be useful to read gas usage automatically to better understand how efficiently gas is being used and what effect changes have on this.
2. Scheduling.  The system isn’t really usable day-to-day while control requires ssh and command-line knowledge.
3. More advanced integration with the boiler.  Monitoring and setting parameters such as target, supply, and return water temperatures and burner on/off.

Hope you found this interesting and/or useful!

# Danfoss Wireless Thermostat Hacking – Part Two

I’ve been trying to take over control of my home’s central heating using a combination of software and commodity hardware such as the Arduino and Raspberry Pi.  Part one of this series looked at how my existing RF thermostats worked and showed it should be possible to emulate them so that the receiver (which has relays that turn heating zones on/off) already connected to the boiler could be used by my own control system.  I currently have two Danfoss TP7000-RF wireless thermostats (one per zone) and a Danfoss RX2 receiver.

In this part, we look at programmatically receiving and transmitting packets from/to the Danfoss RX2 receiver in order to turn the boiler on and off, and start to look at how this could be integrated into a more complete system.

In order to be able to transmit and receive thermostat messages, we need an FSK transceiver that can receive and transmit packets of the right format.  The RF69 family by HopeRF is a popular module used by enthusiasts; typical use cases include creating networks of home automation devices and sensors.  There are various libraries that make use of the packet format features of the module, or layer a packet format on top, to provide bi-directional communication.  However, in our case we need to integrate with the non-RF69 receivers/transmitters used by the existing installation.  This is possible: the RF69CW supports a sync word of up to eight bytes, fixed- and variable-length packet formats that are flexible enough to receive packets in the format transmitted by the thermostats, and operates in the 433MHz band.

One minor issue is that the data sheet claims the minimum supported data-rate is 1.2kbps; however, my experimentation shows that it can deal with the 1000bps rate used by the Danfoss thermostats.

There are a variety of hardware options for incorporating the RF69 into your project:

• Connect the RF69 directly to a Raspberry Pi: You could make up an interface board yourself or buy a PCB with the correct headers and pads for the RF69 and Pi (or facility to add them).  This has the downside of only working on a Pi.
• OpenEnergyMonitor’s RFM69Pi module, which is an Arduino-compatible Pi “hat” including an AVR chip and the RF69 module on board.  You can easily upload new firmware to it for this project; I think it is well-suited though mine is currently busy in my energy-monitoring setup.  This approach shares the downside of requiring the Pi to operate it.
• The JeeLink v3c by JeeLabs, which combines an ATmega328P and RF69CW module into a USB form-factor that’s Arduino-compatible.  Be sure to purchase the 434MHz version.

I went with the JeeLink option as it’s a USB device so can be used easily both with the target Raspberry Pi as well as a traditional PC for development.

# Firmware

The firmware used in this project is available on GitHub under an MIT license.

The first thing to deal with is interacting with the RF69 module.  There are a number of existing projects that implement libraries for RF69, though I decided to write my own because the others either didn’t quite fit my needs or had application logic embedded in the code. Both JeeLib and Mad Scientist Labs, whose work served as a useful reference here, deserve shout-outs.  DeKay’s posts at Mad Scientist Labs on reverse-engineering a Davis weather station are a fascinating read.

Some specific requirements we have for this project:

• The sync words:  We’ll need to use the six encoded sync bytes that the thermostats transmit (0x6c 0xb6 0xcb 0x2c 0x92 0xd9, which decode to 0xdd46).  These come after the preamble; the RF69 normally uses a raw 10101... pattern as its preamble, but can be configured not to send one, and seems to lock on to the transmission just fine even with the encoded version of that pattern being used by the thermostats.
• The packet format: The RF69 supports whitening and Manchester encoding, checking and embedding CRCs, and variable-length packets (where the length is indicated in a byte contained within the packet).  We want to disable all these features: we use fixed-length packets, and receive the encoded packet into the Arduino firmware where we will decode them.

We want to provide a serial interface, emitting a line per received message with the thermostat ID and the command that was sent (on, off, or learn), as well as taking commands as input to tell us to transmit packets with a particular thermostat ID and command.  I’ve tried to keep it machine- and human-parsable: the sketch I provide takes input of the form CTTTT\n, where C is the command letter (O for On, X for Off, and L for Learn), and TTTT is the thermostat ID in hex.  It prints lines of the form <RECV|ISSUE> TTTT CMD, where RECV indicates that a packet was received and ISSUE that it reports a command we just issued, TTTT is the thermostat ID, and CMD is either ON, OFF, or LEARN.
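The serial protocol described above can be sketched in a few lines of Python, e.g. for a host-side program driving the JeeLink.  The helper names here are mine, not from the actual firmware:

```python
# A sketch of the serial protocol used by the firmware; the helper
# function names are illustrative, not part of the actual sketch.
COMMAND_LETTERS = {"ON": "O", "OFF": "X", "LEARN": "L"}

def make_command(command, thermostat_id):
    """Build an input line for the firmware, e.g. 'O88C5\n' for On."""
    return "%s%04X\n" % (COMMAND_LETTERS[command], thermostat_id)

def parse_output(line):
    """Parse an output line like 'RECV 88C5 OFF' into its fields."""
    kind, tid, cmd = line.split()
    return kind, int(tid, 16), cmd
```

A host-side controller can then issue `make_command("ON", 0x88C5)` over the serial port and watch for `RECV` lines to observe what the real thermostats are doing.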

## Encoding and decoding to/from the wire format

Encoding and decoding on the Arduino with the RF69 is simpler than in part one where we were using the wave file from the SDR because, once the RF module is programmed with the correct bit-rate etc., it does the data slicing and bit synchronisation for us.

The representation of a bit in the encoded packet has a preceding 0 and trailing 1, and the middle bit is the unencoded value being transmitted (this is a simple technique to ensure the signal is constantly being modulated so that the gain on the receiver remains within usable bounds).

To decode, we set bit i of the output according to bit 1 + 3 * i of the input (counting from left to right in the binary representation, so bit 0 being the most-significant bit of the first byte of output).  Similarly, on encoding we copy bit i from the input into bit 1 + 3 * i of the output, inserting the preceding 0 (at bit 3 * i) and trailing 1 (at bit 2 + 3 * i).  You can check out the sketch to see details of how this is done: the encode_3b and decode_3b functions are the relevant places to look.
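The same bit mapping can be rendered in Python; this is a sketch of the idea rather than a transcription of the Arduino code:

```python
# A Python sketch of the bit mapping performed by encode_3b/decode_3b;
# the Arduino versions work the same way on fixed byte arrays.

def _get_bit(data, i):
    # Bit 0 is the most-significant bit of the first byte.
    return (data[i // 8] >> (7 - i % 8)) & 1

def _set_bit(data, i):
    data[i // 8] |= 1 << (7 - i % 8)

def encode_3b(data, nbits):
    """Encode nbits of data: each bit b becomes the three bits 0, b, 1."""
    out = bytearray((3 * nbits + 7) // 8)
    for i in range(nbits):
        if _get_bit(data, i):
            _set_bit(out, 1 + 3 * i)  # the data bit itself
        _set_bit(out, 2 + 3 * i)      # trailing 1 (leading 0 is already 0)
    return bytes(out)

def decode_3b(encoded, nbits):
    """Recover nbits of data: output bit i comes from encoded bit 1 + 3*i."""
    out = bytearray((nbits + 7) // 8)
    for i in range(nbits):
        if _get_bit(encoded, 1 + 3 * i):
            _set_bit(out, i)
    return bytes(out)
```

As a check, encoding the sync word 0xDD46 with this mapping produces exactly the six bytes 0x6C 0xB6 0xCB 0x2C 0x92 0xD9 seen on the wire.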

## Receiving packets

The receive code gets a packet of data from the RF69 and has to decode it, validate it, and extract the instruction and thermostat ID.  The thermostats retransmit the packet immediately, so the received packet has the sync word stripped off the first copy of the packet by the receiver but both it and the preamble are present in the second copy as passed to the micro-controller.

One annoying issue is that there is a stray 0 bit in-between the first and second transmissions.  As a consequence the overall data is not a whole number of bytes, which is a problem because the packet length is specified in bytes to the RF69.  I experimented with programming the receiver to get the last byte, of which only the first bit is actually transmitted, but this causes problems such as the reported RSSI value being useless, since the thermostats don’t transmit anything for 7 of the 8 bits in the last byte.  The sketch instead specifies a packet length that is the number of bytes rounded down and works around the missing bit at the end of the transmission.

To receive a packet we do the following:

• Get the packet from the RF69’s FIFO into an array;
• Shift the second copy of the received packet left by a bit so we can do direct comparisons between the two copies;
• Decode the packet;
• Validate the packet: check that the sync word is correct in the retransmission, and that the thermostat ID and command match in both copies (being sure to account for that missing bit);
• Extract the thermostat ID and command.

If valid, the received data is then output to the serial console.
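The fiddliest bit-twiddling in the steps above is shifting the second copy left by one bit so that it lines up with the first; a sketch of that operation (the helper name is mine):

```python
# Shift a byte buffer left by one bit, carrying the top bit of each byte
# into the byte before it; used to align the second copy of the packet
# (which arrives offset by the stray 0 bit) with the first copy.

def shift_left_one(buf):
    out = bytearray(len(buf))
    for i in range(len(buf)):
        out[i] = (buf[i] << 1) & 0xFF
        if i + 1 < len(buf):
            out[i] |= buf[i + 1] >> 7  # carry in the next byte's MSB
    return bytes(out)
```

With both copies aligned, the validation step is then a straightforward byte-wise comparison of the ID and command fields.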

## Transmitting a packet

Originally I’d hoped I could use the RF69’s preamble and sync word features for transmit also, but this would require the receiver to accept packets in a slightly different format from the one the thermostats send.  Having tried this and found it not to work, the sketch instead closely emulates the thermostat’s packet structure.

During transmission we have to temporarily turn off the sync word feature of the RF69 in order to produce a packet with the custom preamble, followed by the sync words, the data, and then a repeat of the packet (the repeat doesn’t seem to be strictly necessary and therefore could potentially be handled more simply but I decided to maintain a close emulation anyway).  The RF69 library I wrote has support for temporarily disabling the sync words and using a different packet length than for receive.

Other than that, the transmit sequence is pretty simple: parse the command from serial, generate a thermostat packet (including preamble and sync words) with the appropriate values included, encode it to the line-encoding used by the receiver, put it in the RF69’s FIFO, and then transmit it.
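Putting together what we learned in part one, the unencoded packet can be assembled as below before being run through the 3b line encoding and loaded into the FIFO.  This is a sketch, not the firmware itself; the repeat copy and the stray 0 bit between copies are handled separately:

```python
# A sketch of assembling the unencoded thermostat packet.  Field values
# come from the decoding in part one; note the thermostat ID is sent
# low byte first (ID 0x88C5 appears on the wire as C5 88).
PREAMBLE = 0xAA
SYNC = (0xDD, 0x46)
INSTRUCTIONS = {"ON": 0x33, "OFF": 0xCC, "LEARN": 0x77}

def build_packet(thermostat_id, command):
    return bytes([PREAMBLE, *SYNC,
                  thermostat_id & 0xFF,          # ID low byte
                  (thermostat_id >> 8) & 0xFF,   # ID high byte
                  INSTRUCTIONS[command]])
```

For example, `build_packet(0x88C5, "OFF")` reproduces the `AA DD 46 C5 88 CC` sequence seen in the decoded ‘upstairs off’ capture from part one.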

# The higher-level control system

So far we’ve provided a basic mechanism to turn on/off heating in a zone.  There are many options for how this can be used to achieve the features you’d expect from a heating control system.  Some characteristics of such a system might include:

• It has inputs in the form of current temperature readings;
• The available temperature data will be used to decide when to turn on/off heating in a zone;
• There is a scheduling mechanism, choosing at what times specific temperatures should be targeted;
• A way to see the current temperatures, the current state of the boiler, the target temperature, the schedule, etc.;
• Safety features.  In particular, what happens if the control system, the devices providing temperatures (or those receiving them), the radio module, etc., fail?  What should the desired outcome and recovery be in these cases?
• Being able to set a desired temperature and have the system automatically start heating earlier to reach the target temperature at the requested time;
• Outside temperature and other factors (such as other heating sources interfering with the feedback mechanism) are inputs to the system to enable it to optimise central heating use.

Implementing all of the above is a pretty large undertaking (not all of which I have yet done!) but would essentially provide an implementation of a domestic-grade heating control system.

By aiming to create decoupled components with clear interfaces we can enable substitution of alternatives suitable for the specific installation.  For example, users without Danfoss thermostats may wish to replace the component described in this and the previous post with their own system for turning on/off heating in a zone (e.g. using relays directly attached to an Arduino, or interfacing with a different RF receiver).

# What’s next?

Future articles will examine the behaviour of the Danfoss system further and look at when it is turning heating on and off in response to input, and start to implement the higher level control mechanisms described above.

# Danfoss Wireless Thermostat Hacking – Part One

I wanted to control my central heating system using a Raspberry Pi and Arduino micro-controllers to provide better control, flexibility, and a fun home automation project.

We originally chose wireless thermostats when we replaced the heating system in our home, but their user interface is not great and they are fiddly to use.  “Smart” thermostats were starting to come onto the market showing a glimpse of what could be done.

Having made some useful progress in my overall goal, I am documenting it here for the benefit of others.  My requirements were simple:

• Easy to change the heating profile for a day, e.g. if we decided to light a fire and didn’t need heat from the central heating system;
• The boiler should be used efficiently to reduce costs;
• Changes should be minimally invasive to the existing setup (e.g. no major rewiring/plumbing).

This post talks about how I was able to control the boiler whilst being minimally invasive by using the existing thermostat receiver and reverse-engineering its protocol, thus avoiding any electrical modifications.

# Setup

The system being ‘hacked’ is a Danfoss RX2 wireless receiver, with two TP7000 RF thermostats.  It’s plumbed to create a two-zone heating system, one zone for each floor of the house.

# Options for controlling the boiler

Initially I planned to put my own relay into the system with a wireless module attached so that I could control it myself.  I chickened out of this approach mostly because I didn’t want my dodgy soldering interacting with always-on mains-voltage equipment.  This led me to the idea of a Z-Wave based relay.  Fibaro make a product (the FGS-222) that’s quite appropriate for this use case: it is a dual-relay unit (since my home has two heating zones) and has switched and permanent inputs, so you can have the existing control system continue to operate, or override it with your own.  The problem here was that Z-Wave devices require a gateway (such as Domoticz) to get them working, which seemed a bit overkill, but I think in general this is a reasonable route to go down.

However, my goal is to be minimally invasive: by using the existing control mechanism (the Danfoss RX2 wireless receiver), no changes are needed to the boiler electrical circuits.  Of course, that is easier said than done since it requires emulation of the protocol used by the wireless thermostats.  In this post I talk about receiving and decoding the protocol; subsequent posts will talk about emulating it and taking over control of the system.

# Signal acquisition

My starting point was to try to capture the signal being sent by the Danfoss wireless thermostats to the receiver unit, in order that I could at least replicate it bit-for-bit.  Ideally though, I’d also like to understand the contents of the payload of the messages being transmitted, and be able to capture them programmatically in order to track when the existing system is calling for heat.

Having taken apart an RX1 receiver (a single-channel version of the RX2) that was given to me some time back, and photographed the circuit board in anticipation of this project, I could see it uses an Infineon TDA5210 chip for RF.  The datasheet indicates that this is a receiver only, which tells us the protocol is one-way, and that it could be either amplitude- or frequency-modulated.  Having looked at the circuit, I mistakenly thought the signal was amplitude modulated, and tried to receive it using a basic RF receiver a friend gave me, having an Arduino dump its output over serial in increasingly complicated ways.  I quickly became frustrated on seeing a long “high” followed by silence as the gain circuit ramped back up to just amplifying noise in the receiver.  I initially thought I was missing the transmission, but I was actually seeing it all along, just unable to decode it: the signal was frequency modulated, and therefore appeared to the ASK receiver as one long ‘high’ pulse.

Unable to make progress, and not yet knowing much about the RF69 that we’ll use later, I wondered if I was mistaken about the modulation and needed a way of figuring out what was going on.  Software-defined radio seemed to provide the answer: enter the Nooelec USB software-defined radio receiver.  Note that this isn’t necessarily the best hardware to buy, but it was available quickly in the UK and seemed to be good enough.  The RTL-SDR blog sells a modified version of units like these that is optimised for use with SDR apps, but as they are shipped from China the shipping time can be quite long.

As a Mac/Linux user, I found the software options for SDR a bit limited.  The flagship option seems to be SDR# but this is only available on Windows.  You apparently can get it to work on macOS using Mono, but instead I decided to opt for gqrx using X11 installed via MacPorts.  Once installed, you can turn on the waterfall view and then try to trigger the signal.  From previous experimentation with the ASK decoders, I was pretty sure that just pressing a button (temperature up/down) would result in an RF transmission even if the boiler state wasn’t being changed, which is handy because it meant I could avoid cycling the boiler on/off without disconnecting it from the mains.

On centring the receiver at 433.9MHz (chosen from looking at the TDA5210 datasheet) and triggering a transmission, it’s very clear that the signal is frequency modulated (the horizontal axis shows the frequency domain, the vertical axis shows time, and the colour shows signal strength).  The waterfall display isn’t detailed enough to be able to see the signal content, but by experimenting with demodulation options in the software I found that the signal came out cleanly demodulated using the “FM (Stereo)” option:

1. Choose the FM (Stereo) demodulation option
2. Ensure the correct centre frequency, 433.9MHz, is chosen
3. Press the Rec button in the bottom-right.
4. Trigger the transmission.
5. Press the Rec button again to stop the recording.

The signal is saved as a .wav file, which takes us into territory similar to that documented by others: examining the signal and trying to replay it ourselves.  You can use Audacity to view the waveform you saved from gqrx:

As a starting point, I captured the same signal multiple times with the target temperature being different (i.e. different set temperatures all of which result in no heating demand, and the room temperature not having been updated) and found each capture produced a signal that looked identical.  I then compared that with one where the boiler should be on and at that point it was looking good: the signal was pretty much the same apart from in one section where a couple of 0s become 1s and vice-versa.

# Decoding the signal

Looking at the signal, there is a clear pattern of 001 and 011 sequences; these likely correspond to 0s and 1s in the decoded signal.  Python has a handy library, wave, that you can use to easily read the values from a .wav file, so I used this first to dump the file and get an idea of how many frames the longer pulses lasted for.  I then used simple temporal and amplitude thresholding (detecting when a high or low has been seen for more than a fixed number of frames in the wave file) to find the encoded values: if we see two 0s together in the wire protocol we emit a 0, and if we see two 1s together we emit a 1.

The program I used, decode_danfoss.py, implements this approach.
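A simplified sketch of the same idea follows; the frame threshold and amplitude cut-off here are illustrative assumptions, not the exact values I used:

```python
# A simplified sketch of the thresholding decoder.  frames_per_bit and
# the zero amplitude threshold are illustrative assumptions.

def bits_from_samples(samples, frames_per_bit=40, threshold=0):
    """Turn raw .wav samples into wire bits by run-length thresholding."""
    bits = []
    run_level, run_length = samples[0] > threshold, 0
    for s in samples:
        level = s > threshold
        if level == run_level:
            run_length += 1
        else:
            # Emit one wire bit per frames_per_bit samples in the run.
            bits.extend([int(run_level)] * round(run_length / frames_per_bit))
            run_level, run_length = level, 1
    bits.extend([int(run_level)] * round(run_length / frames_per_bit))
    return bits

def decode_wire_bits(bits):
    """Two adjacent 0s emit a 0; two adjacent 1s emit a 1."""
    out, i = [], 0
    while i + 1 < len(bits):
        if bits[i] == bits[i + 1]:
            out.append(bits[i])
            i += 2
        else:
            i += 1
    return out
```

The pairing rule works because each data bit b is carried on the wire as 0, b, 1: the only place two identical wire bits sit next to each other is around the data bit itself.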

We can use xxd to dump this as hex so we can inspect it.  This is the decoded data for the ‘upstairs off’ signal:

andy@beta:~/rf$ python decode_danfoss.py -d up_off.wav | xxd
00000000: aadd 46c5 88cc 556e a362 c466            ..F...Un.b.f

Looking at this combined with other captured signals makes it pretty clear what’s going on:

• 0xAA at the start is the preamble.  It is somewhat interesting that they transmit this as encoded data; I’m not sure if that is common practice.  The preamble is used by the receiver to set the gain correctly.
• 0xDD and 0x46 are both part of a “sync word”, and are consistent across all messages from all thermostats.  This indicates to the receiver that the signal is of interest to them.
• 0xC5 and 0x88 (together 0x88C5, also seen written in ink on the PCB of this particular thermostat) are the thermostat ID.  This is different for the other thermostats.
• 0xCC is the instruction.  This is 0xCC for ‘off’, 0x77 for ‘learn’, and 0x33 for ‘on’.
• The rest of the transmission is a repeat of the original message, and looks different at first glance in hex because there was a 0 bit between the two transmissions so the second one is offset by one bit.  (You can see this for yourself by running xxd with the -b option to dump the output as binary instead of hex.)
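The field layout above can be captured in a few lines; the function name and assertions are mine, but the byte positions and values come straight from the dump:

```python
# A sketch of splitting the decoded bytes into their fields, following
# the layout described above; the helper name is illustrative.
INSTRUCTIONS = {0xCC: "OFF", 0x77: "LEARN", 0x33: "ON"}

def parse_decoded(data):
    assert data[0] == 0xAA              # preamble
    assert data[1:3] == b"\xDD\x46"     # sync word, same for all thermostats
    # The ID is transmitted low byte first: C5 88 on the wire is ID 0x88C5.
    thermostat_id = data[4] << 8 | data[3]
    return thermostat_id, INSTRUCTIONS[data[5]]
```

Applied to the ‘upstairs off’ capture above, this recovers thermostat 0x88C5 and the OFF instruction.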

# In part two

In the next part, we will use an RF69 module alongside an Arduino-compatible microprocessor to send messages to the receiver to turn on/off the boiler, as well as receive messages from the existing thermostats programmatically to observe their behaviour.

Continue to Part Two.