PucciMon – Temperature and Humidity monitoring on a Raspberry Pi with 2-way SMS

Over the last weekend I decided it would be fun to dust off one of my old Raspberry Pis and play with some hardware I bought for it a while back, for a project I subsequently abandoned. What I wanted to accomplish was pretty simple: a little C-based application that reads the AM2302 sensor attached to the Pi and uses the Twilio REST API to send SMS messages.

I ended up going a little beyond my original goals since I discovered how easy Twilio makes it to respond to inbound SMS. I had a partially developed Flask application (here) which I could easily extend to have an additional method which Twilio could invoke.

Before going any further, the Python/Flask code is here and the C code for the Pi is here. I haven’t written a single line of C code in about 4 years, so I’m sure it’s stylistically not the greatest code out there, but it is functional for my purposes.

Wiring up the Pi

The first step in this project is to wire up the AM2302 sensor to the Raspberry Pi. Note that for this project I’m using the Model B Pi 1 (I purchased it way back in 2013, and it’s been repurposed a few times now). The AM2302 can be connected directly to the GPIO pins on the Pi: the positive lead (the black wire on the sensor I have) goes to pin 1 (3.3V), the negative lead (the grey wire) goes to pin 6 (ground), and the data lead (the white wire) goes to pin 7 (GPIO4 on the Pi).

The AM2302 should look similar to this: AM2302 picture

After wiring up the RPi, it should look like this (ignore the case I have on mine): Wired Raspberry Pi

Now that everything is wired up, the hardware portion of this project should be done.
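The AM2302 returns each reading as a 40-bit frame: five bytes covering humidity (high/low), temperature (high/low), and a checksum, with values in tenths of a unit and the top temperature bit flagging negatives. The bit-banging lives in the C code in the repo, but the decoding step can be sketched on its own; this is a hedged Python illustration of the frame format, not the repo's actual implementation:

```python
def decode_am2302(frame):
    """Decode a 5-byte AM2302 frame into (humidity %RH, temperature C).

    Bytes: [hum_hi, hum_lo, temp_hi, temp_lo, checksum]. Values are in
    tenths of a unit; the top bit of temp_hi marks a negative reading.
    """
    if len(frame) != 5:
        raise ValueError("expected a 5-byte frame")
    if (sum(frame[:4]) & 0xFF) != frame[4]:
        raise ValueError("checksum mismatch -- retry the read")
    humidity = ((frame[0] << 8) | frame[1]) / 10.0
    temperature = (((frame[2] & 0x7F) << 8) | frame[3]) / 10.0
    if frame[2] & 0x80:  # sign bit set -> below freezing
        temperature = -temperature
    return humidity, temperature

# Example frame: 0x02 0x8C 0x01 0x5F 0xEE -> 65.2 %RH, 35.1 C
```

The C application does the equivalent arithmetic after sampling GPIO4; when the checksum fails, the right move is to retry the read rather than report garbage.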

Twilio Account Setup

You’ll need a Twilio account in order to proceed; the next steps require your From number, Twilio Account SID, and Auth Token. If you head over to Twilio, the onboarding experience is pretty painless.
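Sending an SMS through Twilio boils down to one authenticated POST against the Messages endpoint, with the Account SID and Auth Token as HTTP Basic credentials. The C application drives this through libcurl; as a rough sketch of the request shape (Python here for brevity, helper name is mine, and nothing is actually sent):

```python
import base64
import urllib.parse
import urllib.request

def build_sms_request(sid, token, from_num, to_num, body):
    """Build (but don't send) the POST for Twilio's Messages endpoint."""
    url = f"https://api.twilio.com/2010-04-01/Accounts/{sid}/Messages.json"
    data = urllib.parse.urlencode(
        {"From": from_num, "To": to_num, "Body": body}
    ).encode()
    req = urllib.request.Request(url, data=data, method="POST")
    # HTTP Basic auth: Account SID as username, Auth Token as password
    creds = base64.b64encode(f"{sid}:{token}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    return req
```

Actually sending is then just `urllib.request.urlopen(req)`; the C version does the same thing with libcurl and a form-encoded POST body.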

Development Environment Setup

If you haven’t already booted up your Pi, do so now and make sure you have a good network connection. I’m assuming you’ve prepared the Pi with the default Raspbian installation (the Debian-derived Linux distribution most people use).

In order to make full use of everything, you’ll need two sets of code: the C-based application which handles querying the AM2302, and a second Python/Flask-based application which handles the Twilio webhooks.

  1. SSH to your Pi
  2. cd to wherever you want the code to live
  3. git clone https://github.com/lbearl/PucciMon.git to get the C code
  4. git clone https://github.com/lbearl/puccithe.dog.git to get the python code
  5. Set the following environment variables:
    a. export TWIL_WEATHER_NUMBER=<Your From Number in E.164 format>
    b. export TWIL_WEATHER_AUTH=<Your Twilio Auth key>
    c. export TWIL_WEATHER_SID=<Your Twilio SID>
    d. export TWIL_WEATHER_TONUMS=<Comma separated list of recipients in E.164 format>
  6. cd into PucciMon and run make (you’ll need the build tools installed, e.g. the build-essential package)
  7. sudo mkdir -p /opt/lbearl/ to create the directory where the temperature file will be written.
  8. Execute sudo -E ./bin/c_sms -f and make sure that the AM2302 was read and that you receive an SMS.
    a. Test run output of application
  9. Execute cat /opt/lbearl/temp.txt and verify that it matches the output of the test run.
  10. To make things slightly easier to execute without always requiring sudo, I’ve found that setting the setuid bit on the binary works well:
sudo mkdir -p /opt/lbearl/bin
sudo cp ./bin/c_sms /opt/lbearl/bin/c_sms
sudo chown root /opt/lbearl/bin/c_sms
sudo chmod u+s /opt/lbearl/bin/c_sms

At this point the sensor is working properly and can be read; the remainder of the setup is getting the Flask application up and running. I’ve opted to do this using WSGI on Apache. The relevant configuration is as follows:

<VirtualHost *:443>
        ServerName puccithe.dog
        ServerAlias www.puccithe.dog

        ServerAdmin [email protected]

        WSGIDaemonProcess puccidog user=www-data group=www-data threads=5
        WSGIScriptAlias / /var/www/puccithe.dog/puccithedog.wsgi

        <Directory /var/www/puccithe.dog>
                WSGIProcessGroup puccidog
                WSGIApplicationGroup %{GLOBAL}
                Order deny,allow
                Allow from all
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

        # I use CloudFlare's certificates so that I don't have to muck about with anything else
        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/puccithedog-cf.pem
        SSLCertificateKeyFile /etc/ssl/private/puccithedog-cf.key
</VirtualHost>

The WSGI file is:

import sys

# Activate the virtualenv (execfile is Python 2; on Python 3 use
# exec(open(activate_this).read(), dict(__file__=activate_this)) instead).
activate_this = '/home/pi/puccidog/puccidog/env/Scripts/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
sys.path.insert(0, '/home/pi/puccidog/puccidog')

from puccidog import app as application

It’s been a couple of years since I set that up (the Flask application was originally used for a different side project, and I just repurposed it to process the webhooks). The Flask website has some good documentation on how to make this work.
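For inbound SMS, Twilio POSTs the message to the webhook URL and expects a TwiML document back; the webhook just wraps the latest sensor reading in a `<Message>` element. The repo's Flask code isn't reproduced here; this is a hedged sketch of the response shape (the function names are mine, and the file path matches the one the c_sms binary writes earlier in the post):

```python
from xml.sax.saxutils import escape

TEMP_FILE = "/opt/lbearl/temp.txt"  # written by the c_sms binary

def twiml_reply(text):
    """Wrap a message body in the TwiML Twilio expects from a webhook."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        f"<Response><Message>{escape(text)}</Message></Response>"
    )

def latest_reading(path=TEMP_FILE):
    """Return the most recent sensor line; the Flask route responds with
    twiml_reply(latest_reading()) and a text/xml mimetype."""
    with open(path) as f:
        return f.read().strip()
```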

CloudFlare Dynamic DNS

As mentioned earlier, I’m using CloudFlare to handle all of the DNS and SSL around this. CloudFlare has a little-known feature: recent versions of ddclient can update your A records for you. I used to use Namecheap’s Dynamic DNS, and the CloudFlare equivalent is just as simple to set up. The one big gotcha is that the version of ddclient that ships with Raspbian by default isn’t new enough to support CloudFlare, so we have to install a newer version. Jens Segers wrote up a great summary on how to do it. The short version is that we need to download ddclient from SourceForge and overwrite our local copy. The configuration file layout also changed, so we need to create an /etc/ddclient directory and move (or create) /etc/ddclient/ddclient.conf in there. The configuration should look something like:

# ddclient.conf
use=web, web=dyndns
protocol=cloudflare
zone=<Your Domain>
login=<CloudFlare Email Address>
password=<CloudFlare API Key>
<Hostname To Update, e.g. home.yourdomain.com>


Now if you send an SMS to your Twilio “From” number, you should get a response back a few seconds later with the most recent temperature data. Additionally, if you’ve scheduled everything in cron, then any time a measurement comes in above 86F/30C you’ll get an SMS. You can also pass -f to the c_sms binary at any time to force it to send SMS messages.
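The alerting decision itself is just a threshold comparison on the parsed reading. A minimal sketch of that logic (the function names are mine, not the repo's):

```python
ALERT_F = 86.0  # the post's alert threshold: 86F == 30C

def c_to_f(celsius):
    """Convert a Celsius reading to Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

def should_alert(temp_c, force=False):
    """Send an SMS when the reading crosses the threshold,
    or whenever the -f flag forces one."""
    return force or c_to_f(temp_c) >= ALERT_F
```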


Why did I build this? Mostly because it was fun, and also partly because sometimes I forget to turn on the AC and don’t want my dog to have to suffer. I hope you enjoyed reading this as much as I enjoyed building it.

PageRuleAdmin – Add CloudFlare Forwarding URLs easily

For a lot of the things my wife and I work on, it’s useful to have redirects from one page to another. In the classic developer spirit, instead of always logging into the CloudFlare dashboard and manually updating everything, I threw together a small utility in a few hours. It allows her (a relatively non-technical user) to easily administrate the forwarding URL page rules for all of the domains we have running in CloudFlare (which is most of them).

The application is just a basic ASP.NET Core 2.0 web application using traditional MVC patterns. CloudFlare, unfortunately, doesn’t provide official C# bindings for its API, so I just implemented the necessary calls using HttpClient. The core of the logic for interacting with the API is:

private async Task<dynamic> PerformWebRequest(string uri, HttpMethod method, string postBody = null)
{
    var client = new HttpClient();

    var request = new HttpRequestMessage
    {
        RequestUri = new System.Uri(uri),
        Method = method,
    };

    if (postBody != null)
    {
        request.Content = new StringContent(postBody, Encoding.UTF8, "application/json");
    }

    request.Headers.Add("X-Auth-Email", UserEmail);
    request.Headers.Add("X-Auth-Key", ApiKey);

    var result = await client.SendAsync(request);

    var resultString = await result.Content.ReadAsStringAsync();

    return JsonConvert.DeserializeObject(resultString);
}

Note that the function is deliberately general-purpose, as I wanted to centralize all of the logic which actually calls out to the API.
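For readers outside .NET, the same pattern (build a request, attach the X-Auth-Email / X-Auth-Key headers, send, deserialize) can be sketched in Python. The endpoint path follows CloudFlare's v4 API, and the helper name is mine; treat this as an illustration rather than a drop-in client:

```python
import json
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4"

def build_cf_request(path, email, api_key, method="GET", payload=None):
    """Build a CloudFlare v4 API request using the key-based auth headers
    (the same X-Auth-Email / X-Auth-Key pair the C# code sets)."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(API_BASE + path, data=data, method=method)
    req.add_header("X-Auth-Email", email)
    req.add_header("X-Auth-Key", api_key)
    if data is not None:
        req.add_header("Content-Type", "application/json")
    return req

# e.g. build_cf_request("/zones/<zone_id>/pagerules", email, key)
```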

Take a look here for the GitHub repository. I host the code on a Hyper-V Ubuntu Server VM, so unfortunately there isn’t a running example available.

Let me know if you find a use for it!

MS SQL Server Backups to S3 – On Linux!

Today I’m going to go over what is necessary in order to do full and transaction log backups for SQL Server Express on Linux. One of the big limitations of SQL Express is that it doesn’t include the SQL Agent, so most of the maintenance tasks that can normally be designed and implemented within SSMS need to be rethought. Thankfully Microsoft released sqlcmd for Linux, which makes it pretty easy to go ahead and do the backups as simple bash scripts scheduled through cron.


This post isn’t going to go through all of the steps to install SQL Server and the associated tools, but Microsoft has done a great job of documenting that on their docs site. In order to push the backups to S3 we will need the s3cmd tool:

sudo apt install s3cmd
s3cmd --configure

You’ll need an IAM identity with at least enough permissions to write to the S3 bucket you designate in the script. In the configuration prompts, enter the access keys and specify the region you want to default to.
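The exact permissions depend on your setup, but a minimal IAM policy along these lines should be enough for the upload step (the bucket name is a placeholder, and s3cmd may also want list access on the bucket itself, included here):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<BUCKET_NAME>"
    }
  ]
}
```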

The Scripts

In order to do the backups, two scripts are necessary: one for the full backups and one for the transaction log backups. I’ve opted for a very simple structure since I only care about one database; it shouldn’t be very hard to modify the scripts to generate backups for each database, but I’ll leave that as an exercise for the reader :). Note that transaction log backups require the database to be in the FULL recovery model.

Full Database Backups (fullBackup.sh)

#!/bin/bash

TIMESTAMP=$(date +"%F")
BACKUP_DIR=<Backup Directory>

mkdir -p "$BACKUP_DIR"
chown -R mssql:mssql "$BACKUP_DIR"

/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<SA Password>' \
    -Q "BACKUP DATABASE [<DBNAME>] TO DISK = N'$BACKUP_DIR/<DBNAME>.bak' WITH INIT"

s3cmd put "$BACKUP_DIR/<DBNAME>.bak" "s3://<BUCKET_NAME>/$TIMESTAMP/<DBNAME>.bak"
rm -f "$BACKUP_DIR/<DBNAME>.bak"

Transaction Log Backups (logBackup.sh)

#!/bin/bash

DATESTAMP=$(date +"%F")
TIMESTAMP=$(date +"%H%M%S")
BACKUP_DIR=<Backup Directory>

mkdir -p "$BACKUP_DIR"
chown -R mssql:mssql "$BACKUP_DIR"

/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<SA Password>' \
    -Q "BACKUP LOG [<DBNAME>] TO DISK = N'$BACKUP_DIR/<DBNAME>_log.bak' WITH INIT"

s3cmd put "$BACKUP_DIR/<DBNAME>_log.bak" "s3://<BUCKET_NAME>/$DATESTAMP/logs/$TIMESTAMP/<DBNAME>_log.bak"
rm -f "$BACKUP_DIR/<DBNAME>_log.bak"

Then schedule them in cron:

0 0 * * * /root/bin/fullBackup.sh
*/15 * * * * /root/bin/logBackup.sh

With the default schedule I have, full backups are taken at midnight and transaction log backups are taken every 15 minutes.

S3 Lifecycles

While the scripts do a good job of cleaning up after themselves, S3 will (by design) never delete your data unless you specifically tell it to. S3 has a nifty feature called “Lifecycles” which allows us to specify rules for object retention (it’s a powerful feature that can be used for a number of other things as well). To access it, go to the AWS Console and open your S3 bucket. Follow these steps to set up object retention:
1. Select the Management Tab
2. Select Lifecycle
3. Click + Add lifecycle rule
4. Name the rule something descriptive (“Expire all files”). Leave the prefix blank
5. Leave Configure transition blank
6. In Expiration, set the current version of objects to expire after 7 days and previous versions to be permanently deleted after 14 days
7. Click Save

That’s All

At this point we have full and transaction log backups configured and being pushed off-site to Amazon S3. These backups are soft deleted after 7 days and fully deleted after 14 days.