I like my code in either of the following two states: semantic, efficient, and minimalistic OR straight up Mad Max/MacGyver style duct-taping everything together. This integration is the latter.
This project is a modern art installation combining technology and our town's historical heritage: an old mineshaft in the center of town. The main idea was to create a display that would mirror the mineshaft's "breathing", visually changing as environmental conditions inside the mineshaft changed. The official name was "mine ultrasound". The main technical challenge was lowering the instruments into the mineshaft and retrieving the data in real time. PoE (Power over Ethernet) was the first idea, but we realized it suffers significant losses over greater distances, and since we needed 150-200 meters, it wasn't a suitable solution. Instead, we decided to lower 240V mains cables, which don't suffer meaningful power loss over such distances (unlike PoE), and transfer the data wirelessly using nRF24L01+ modules. The monitoring ran for more than 30 days. In this blog, I will show you how I imagined the implementation, and I'll gladly take (and challenge) any of your suggestions.
- Hardware/Software stack
- Technical Project overview
- Custom TCP protocol
The project in question is composed of the following hardware:
- 2x Raspberry Pi (4B & 3B+)
- 2x nRF24L01+ (2.4GHz Transceiver)
- MCP9808 (I2C temperature sensor)
- SHT21 (Temperature/Humidity Sensor)
- BMP280 (I2C pressure sensor)
And the following languages and tools:
- Python (on both Raspberry Pis)
- Bash (watchdog/boot scripts)
- PHP + particles.js (the web display)
- A bunch of helper libraries
The nRF24L01+ PA/LNA module has a documented range of 1 km+ with a direct line of sight, and real-world tests show at least 250 m. Since this project sits underground with almost zero RF interference, I believe that if someone wanted to recreate something similar from this post, the module could achieve its full potential.
The project used a master-slave setup to achieve asymmetric communication between the devices. Both machines ran Raspbian OS because nothing more was required, and it was pretty simple to set up using NOOBS. I will start from the bottom of the mineshaft, and we'll work our way up all the way to "the cloud". I will not dive into the code itself within this post, but you can check it out at github.com/Martincic/kova-je-nasa. Instead, I will explain roughly how the whole setup worked and how the technologies intertwined.
The slave was the Raspberry Pi 3B+, which collected the data from the sensors upon the master's request. The language of choice here was Python due to its simplicity, development speed, and brilliant libraries for all the sensors and the nRF24L01+.
The setup had to run the Python script from boot onwards, but since Python is known to crash after long runtimes, I hacked my way around this. First, I made a simple foreverPy.sh script which owns the slave.py process and, upon its failure, simply restarts it. This is a straightforward way to own a process and keep control of it in case of a shutdown; it contains the following code:
until sudo python3 /home/pi/slave.py; do
    echo 'Python process crashed... restarting...' >&2
    sleep 3
done
The sleep command before restarting acts as a buffer in case there is a critical mistake in the code and the process fails on every run attempt; without it, the loop would flood your console and you wouldn't be able to stop the script.
>&2 redirects echo's output from stdout (standard output) to stderr (standard error). Regular text output from a command is delivered via the stdout stream, while error messages are sent through the stderr stream.
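For reference, the same watchdog idea can be sketched in Python (hypothetical; the project used the shell script above). The restart cap exists only so this sketch terminates, while the real watchdog loops indefinitely:

```python
import subprocess
import sys
import time

def run_forever(cmd, max_restarts=2, delay=0.1):
    """Restart cmd every time it exits non-zero, sleeping between
    attempts so a permanently broken script can't spam the console."""
    restarts = 0
    while True:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            break  # clean exit: stop supervising
        restarts += 1
        if restarts >= max_restarts:
            break  # cap for this sketch only
        time.sleep(delay)
    return restarts

# A command that always crashes, to show the restart behaviour:
crashes = run_forever([sys.executable, "-c", "import sys; sys.exit(1)"])
print(crashes)  # 2
```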
Finally, after owning the Python process and making sure it cannot stay dead, I had to start it whenever the machine boots. I achieved this by modifying the /home/pi/.bashrc file. At the bottom of this "user boot" file, add the line that runs the watchdog:
sudo bash /home/pi/foreverPy.sh
The master is the mediator between the cloud and the slave in the pit. It has the same forever-running Python setup. The master asks the slave for data, receives it, converts it from bytes into interpretable numbers, and sends it to the server's API for further use. Along with retrieving data and passing it along the line, the master is also connected to a TV, displaying the graph with "floating dots/graphs in various colors and speeds". After a lot of thought on how to display the graphics, and after passing over a ton of packages/libraries/programs, I decided to output it as a web page. This would allow anyone with a mobile device, tablet, laptop, etc. to view the live ultrasound of the mineshaft from the comfort of their home; a great contributor to that decision was the ongoing pandemic, which limited visitors. After figuring out how to display it, the master setup was simple: start the browser on boot and hide the mouse cursor. Going back to the .bashrc file where we started foreverPy.sh, we add two more lines below it.
chromium-browser --app=http://some-website.com --start-fullscreen will open the Chromium web browser at the desired website in fullscreen mode. And
unclutter -idle 1 -root will hide the mouse cursor after one second of idling.
unclutter is a package that can be installed on Debian-based distributions (such as Raspbian) with
apt-get install unclutter.
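Back on the data path, the master's "bytes to numbers to API" step could look roughly like this. The 'name:value' payload format and the endpoint URL are my assumptions for illustration, not the project's actual wire format:

```python
import json

def payload_to_reading(payload: bytes):
    """Turn a raw radio payload like b'temp:22.1' into a (name, float)
    pair. The 'name:value' layout is an illustrative assumption."""
    name, value = payload.decode().split(":")
    return name, float(value)

name, value = payload_to_reading(b"temp:22.1")
body = json.dumps({"sensor": name, "value": value})
print(body)  # {"sensor": "temp", "value": 22.1}

# The master would then POST `body` to the server's API, e.g. with
# urllib.request (the endpoint URL here is hypothetical):
#   req = urllib.request.Request("https://example.com/api/readings",
#                                data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```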
This was the most intimidating part of the project for me since I'm a backend developer and the best thing I've ever designed was probably the plasticine ashtrays in kindergarten. There were a couple of prerequisites for this:
- It had to be programmatic art
- It had to have input variables
- Should not be repetitive
- Should be able to graph the data over it
- Have it run at a reasonable speed
Once the website is refreshed, PHP injects the latest records into particles.js and displays them. It felt like cheating, but if it works, it ain't stupid. Ninety seconds was enough for the data to refresh, and it gave the animation enough fluidity that it didn't look bugged out to anyone watching.
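As a rough illustration of that refresh idea (the project used PHP; this Python sketch, with an entirely made-up readings-to-animation mapping, just shows the latest records being baked into a particles.js config on each page load):

```python
import json

latest = {"temp": 14.0, "humid": 100.0, "press": 1011.1}  # illustrative readings

# Map readings onto animation parameters (the mapping is invented here):
config = {
    "particles": {
        "number": {"value": int(latest["press"] - 950)},  # more pressure, more dots
        "move": {"speed": round(latest["temp"] / 2, 1)},  # warmer, faster dots
    }
}

# The page template would then emit this as the particles.js init call:
snippet = "particlesJS('mine', %s);" % json.dumps(config)
print(config["particles"]["number"]["value"])  # 61
```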
For this part of my implementation, I used another great library from CircuitPython. It has great documentation and comes with lots of examples. I imagined the communication between the two nRF24L01+ modules this way because I wanted the master to be only in transmit mode and the slave only in receive mode. This was due to random crashes: I wanted them to be completely independent of each other, no matter at which moment one of them fell asleep (read: crashed) or woke up.
Traditional communication would require the master and the slave to switch between TX (transmit) and RX (receive) modes all the time. After any packet is received, an ACK (acknowledgement) packet is sent from the receiver back to the transmitter. Imagine a conversation where those uhmms were mandatory: if you don't hear an mhmm after each sentence, you repeat the sentence until you get one, or until you get bored repeating it.
With all this switching of modes, there is a lot of room for error when the system is unreliable. Sooner or later, both would end up stuck in RX mode, waiting for one another indefinitely.
So what I did was set the master to TX and give it an array of questions, a simple string array. Then, in a while true loop, it goes through the questions and sends each one to the slave. The slave at the other end is always in RX, waiting for a question and always ready to respond. Unlike the master's plain array, the slave holds an associative array that maps each question to its current value, e.g. 'temp' => 22.1, 'humid' => 79.9, 'press' => 1011.1. Since the slave sends an ACK for every received message anyway, I simply consult my map of answers before sending it: if I receive 'temp', the ACK carries answer_array at position received_value back to the master. This way I cut the number of packets in half and greatly reduced the room for error. After each answered question, the slave refreshes its answers array with fresh sensor data, so it is always ready to answer the next question with the latest value.
With this setup, the master is always, and the only one, in TX mode, so nobody complains if nothing gets sent around. On the other side, the slave is always, and the only one, in RX mode, and the master simply ignores timed-out packets by default.
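The question/ACK-payload scheme above can be shown as a small pure-Python simulation (no radio hardware; the function names and the 'name:value' payload format are mine, not the CircuitPython API):

```python
QUESTIONS = ["temp", "humid", "press"]  # the master's plain string array

def slave_ack_payload(question: str, answers: dict) -> bytes:
    """Slave side: preload the ACK packet with the freshest reading
    for whatever question just arrived."""
    payload = f"{question}:{answers[question]}".encode()
    assert len(payload) <= 32  # nRF24L01+'s hard payload-size limit
    return payload

def master_poll(answers: dict) -> dict:
    """Master side: ask each question once; the answer rides back in
    the ACK, so each reading costs a single request packet."""
    readings = {}
    for question in QUESTIONS:
        # In the real setup this is an nrf.send() with the ACK payload
        # read back; here the slave is just a function call.
        name, value = slave_ack_payload(question, answers).decode().split(":")
        readings[name] = float(value)
    return readings

latest = {"temp": 22.1, "humid": 79.9, "press": 1011.1}
print(master_poll(latest))  # {'temp': 22.1, 'humid': 79.9, 'press': 1011.1}
```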
Along with the simple sensor data, which amounted to a couple of bytes altogether, I also had the idea of a coal mine live stream. Since we were lowering the Raspberry Pi down to a depth room where miners once had telephones and other equipment, and since we were already pulling down a 240V cable, we could have connected multiple light bulbs to the 240V supply and controlled them with a 5V relay from the Raspberry. The problem was that the image the Pi Camera would take would appear static (always the same), but this could have been solved with a tiny servo motor, or simply by hanging the RPi in the air: it would sway at random thanks to the strong air currents going through the mineshaft. This part was discarded, but I'll post it here anyway as a bonus.
Photos from roughly 150 meters below the surface. The quality is pretty bad since it's pitch black down there, and what light we had was not sufficient for a mobile camera.
- Electric cupboard left
- Electric cupboard right
- Behind this door is an elevator through which we lowered the slave Raspberry
- Closed off, collapsed tunnel
- Coal veins
Here is the log from my recordings at the time; it is obvious that acknowledging every packet made it take far too long to transmit any useful file size.
- 13:53 - start transmission
- 14:03 - ongoing, 0.9 MB sent
- 14:28 - ongoing, 2.1 MB sent
- 14:35 - ongoing, 2.5 MB sent
- 14:45 - end transmission, 2.6 MB sent
After lots of testing and talking to the CircuitPython developers, we concluded that Python was not the tool for this task; the only real fix would have been translating all the code to C++, which was not a viable option. Until it hit me: why not use a streaming (no-ACK) protocol instead of the ACK protocol and send a slow video? I figured there is much more RF interference in the capital of Croatia, where I was testing, than in a couple-hundred-meter-deep mineshaft in the small town of Labin. Even so, video proved unrealistic: photos from the Pi Camera V2 took about 10-ish minutes per transmission (I forgot to note the exact measurements), let alone one frame per second.
Before the actual transmission, I broke the image down into an array of 32-byte buffers in the following manner:
import binascii

def image_to_buffers():
    buffers = []
    with open("coal.jpeg", "rb") as image:
        f = image.read()
    b = bytearray(binascii.hexlify(f))
    counter = 0
    while counter < len(b):
        buffers.append(b[counter:counter + 32])
        counter += 32
    return buffers
In this code snippet, counter += 32 advances by the size of a single packet, which I set to 32 bytes: the nRF24L01+'s maximum payload size. After deconstructing the image into buffers, I sent them over to the master, where I reconstructed the image in the following manner:
f = open("image.jpeg", "wb")
while time.monotonic() < start_timer + timeout:
    if nrf.available():
        count += 1
        # retrieve the received packet's payload; clears flags & empties RX FIFO
        buffer = nrf.read()
        f.write(binascii.unhexlify(buffer))
        start_timer = time.monotonic()  # reset timer on every RX payload
# recommended behavior is to keep the radio in TX mode while idle
nrf.listen = False  # put the nRF24L01 in TX mode
f.close()
This writes every received packet to the .jpg file, un-hexlifying (converting from hex back to binary) the data as it arrives.
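The whole chunk-and-reassemble round trip can be checked end to end in a few lines (the byte string here stands in for a real JPEG file):

```python
import binascii

data = b"\xff\xd8\xff\xe0" + b"coal mine" * 10  # stand-in for coal.jpeg bytes

# Slave side: hexlify the file, then slice it into 32-byte packets
hexed = bytearray(binascii.hexlify(data))
packets = [hexed[i:i + 32] for i in range(0, len(hexed), 32)]

# Master side: unhexlify each packet as it arrives and append it to the file
rebuilt = b"".join(binascii.unhexlify(p) for p in packets)
assert rebuilt == data
print(len(packets))  # 6
```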
This worked like a charm and here is the final result:
I say it worked like a charm because, once I transmitted it, the result was even better than I could have imagined. This is an art installation focused on the mineshaft, and any static that occurred was itself random and created by the mine, which really adds to the technical/artistic combination we were looking for.
So the project altogether was pretty exciting, and I learned tons of new things. I was very fulfilled once it was done, because at moments I thought it was too big a task. Still, some things could have gone better, such as forgetting to set up SSH access on the master (facepalm). Data was being sent far too fast, and I ended up filtering it on the server by keeping only every N'th record. I also learned later that the image transfers were not successful every time: JPEG files contain markers of sorts throughout (I originally thought images only had headers), and since the no-ACK streaming loses packets like UDP, the file would sometimes come out broken. Once we lowered the rig even those lousy 10 meters into the mineshaft, the first (apparently well-known) anomaly was the temperature, which remained constant at 14 °C from the moment of insertion throughout the life of the project. The humidity sat at about 100% from insertion until the end and only kept climbing (our sensor reported up to 115%, beyond its valid range), which wasn't very interesting. And then there was the pressure, which luckily was the only thing that fluctuated a little due to air currents.
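The server-side thinning mentioned above is the classic every-N'th-record slice; a minimal version (the record shape is illustrative):

```python
# Pretend the master flooded the API with one reading per second:
records = [{"t": i, "press": 1011 + (i % 7) * 0.1} for i in range(600)]

N = 60  # keep one record per minute
sampled = records[::N]
print(len(sampled))  # 10
```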
Pretty interesting all in all; not great, not terrible.
Thank you for reading this! If you've found this interesting, consider leaving a ❤️
🦄, and of course, share and comment your thoughts!
Lloyds is available for partnerships and open for new projects. If you want to know more about us, click here.