In this post, I will talk about the completed unit on the drone side and show a demo of the remote-operated laser pointer in action. Development continues from the second part of this series.
Here are some pictures of the unit I created. It was mounted on a single Lego brick and then attached to the drone via a PGYTECH Tello adapter.
The whole unit is constructed over a yellow Lego brick. I used a 3 mm black acrylic sheet and some industrial glue to shape the structure, with a cylindrical acrylic rod for support. I learnt that I needed more power to drive the RC relay and laser diode simultaneously. CR2032/CR2025 3 V vertical-mount coin battery holders proved an excellent option: four of them, stacked one over the other with industrial glue, give 12 V in total. The bottom two cells are wired to drive the RC relay circuit, with a slider switch for the power on/off function. When connected directly to the power supply, the RC relay draws a small current to power its signal reception circuit.
The top two coin batteries (CR2032) provide a 6 V DC supply to the red laser diode. I put in a 100 Ohm resistor to protect the diode. You'll also notice that I swapped my previous red laser for one of these.
Here is the demo of the unit in action!
Now that the drone end has been taken care of, I will focus next on the processing end: the Raspberry Pi and the Intel Neural Compute Stick 2. What I hope to achieve is to stream the video feed from the Tello drone to the Raspberry Pi (which also controls the flight of the drone via Python code), run inference on the stream to detect target images, and, when a target is found, light up the laser pointer on the physical image. I'll show how in my next post or two.
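A minimal sketch of the capture side of that plan (an assumption on my part that OpenCV would be used here; the Tello streams H.264 video to UDP port 11111 once the SDK 'streamon' command has been sent):

```python
TELLO_VIDEO_URL = 'udp://0.0.0.0:11111'  # where the Tello streams after 'streamon'

def frames(url=TELLO_VIDEO_URL):
    """Yield decoded video frames from the Tello stream, one at a time."""
    import cv2  # imported lazily so this module loads even without OpenCV
    cap = cv2.VideoCapture(url)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield frame  # each frame would go on to the inference step
    cap.release()
```

Each yielded frame can then be handed to the detection model, which decides whether to fire the laser.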
Here is a closer look at both the transmitter and the receiver, including some pictures of the transmitter's internals. I'm only going to use it temporarily, as I will explain further on. The unit is tiny and light, and exactly what I was looking for in this project.
Next, I connected all the pieces together so that I could trigger one transmit event that lights up a test green LED and another that switches it off. I haven't used the red laser diode for testing at this stage, as I still need to work on a compact circuit able to power both the RC relay and the laser diode. More on this in my next blog post.
The distance between the transmission assembly and the reception assembly can reach an impressive 160 m in ideal conditions, as this YouTube video shows. Another video demonstrates a successful range test of over 350 m after some hardware modifications, such as adding longer antennae. The reception assembly has two circuits, each with its own power source: one powers the wireless remote control switch and the other powers the green LED.
PyPI hosts a project for sending and receiving 433/315 MHz LPD/SRD signals with generic low-cost GPIO RF modules on a Raspberry Pi. Two of its script files are of use here. Click on the links to go to the source code, which is written in Python.
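As a rough sketch, transmission with one such package (rpi-rf) looks like the snippet below. The GPIO pin and the code values here are placeholders of my own; real codes are captured by running the package's receive script while pressing the remote's buttons:

```python
# Placeholder codes; capture the real ones from your remote first
# using the package's bundled receive script.
SWITCH_CODES = {"on": 5393, "off": 5396}

def transmit(action, gpio=17):
    """Send the learned code for 'on' or 'off' via the RF transmitter module."""
    from rpi_rf import RFDevice  # imported lazily: needs Raspberry Pi GPIO
    rfdevice = RFDevice(gpio)
    rfdevice.enable_tx()
    rfdevice.tx_code(SWITCH_CODES[action])
    rfdevice.cleanup()
```

Calling `transmit("on")` from the Pi would then trigger the relay on the reception side.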
Finally, as usual, I’ve recorded a demo of this project in action. Here’s the video:
Now, with the RC relay, I can switch the LED on and off from Python code. Once I sort out the power circuit issue, it will be possible to build the final assembly of the remote-controlled laser pointer, mount it on my RYZE Tello drone, and run some tests. The most important test will be checking the stability of the aircraft in flight with the final assembly mounted on it, and, of course, whether the laser pointer lights up while the drone is flying.
Once these basic tests pass, it will be time to focus on capturing live video from the RYZE Tello drone on the Raspberry Pi and doing object inference/detection using the Intel Movidius Neural Compute Stick 2 for deep learning. On finding certain objects, the code would switch the laser mounted on the drone on and then off. This is really where the fun begins.
I will demonstrate this in action in my next set of posts in this series. Stay tuned!
There are some cool Python libraries and machine learning platforms out there to support work in data science. I thought I'd list some of them in this article, along with a few highlights of their core capabilities and links for quick reference. Here they are, in no particular order:
statsmodels – Great for statistics around ordinary least squares (OLS), giving measures such as R-squared, skewness, kurtosis, and AIC and BIC scores on the data. It is also great for conducting statistical tests and statistical data exploration.
bokeh – This library is great for providing end users interactive data visualisation inside modern web browsers. Using bokeh, one can quickly and easily make interactive plots, dashboards, and data applications. It can generate highly customisable glyphs such as line, step lines, multiple lines, stacked lines, as well as stacked bars, hex tiles and timeseries plots.
seaborn – Seaborn is a Python data visualization library based on matplotlib. It supports several types of data visualization such as box plot, heatmap, scatterplot, cluster map, and violin plot.
Theano – It is used to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It does this by making use of GPUs for computation. It works with expressions that are passed to it from Python.
yellowbrick – Yellowbrick extends the Scikit-Learn API to make model selection and hyperparameter tuning easier. It builds on top of matplotlib. Some of its key visualization features cover feature, classification, regression, clustering, model selection, target, and text visualizations.
plotly – An interactive, open-source, browser-based graphing library for Python. It supports basic charts (scatter, line, pie, bar, and more), statistical charts (histograms, box plots, distplots), scientific charts (contour, heatmaps, ternary plots), financial charts (time series, candlestick, funnel charts), as well as maps, 3D charts, and subplots. It even supports animations.
Keras – A Python deep learning library. It supports CNNs and RNNs and runs seamlessly on CPUs and GPUs. It supports the sequential model as well as the Model class used with the Keras functional API.
Scikit-learn – A sort of Swiss Army knife of a library that covers, among other things, classification, regression, clustering, dimensionality reduction, preprocessing (transformers and pipelines), and model selection.
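As a quick taste of scikit-learn's pipeline style (a toy sketch of my own, not from any particular project), a transformer and a classifier can be chained into a single estimator:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X = [[0.0], [1.0], [2.0], [3.0]]   # one toy feature
y = [0, 0, 1, 1]                   # binary labels

# Chain preprocessing and the model; fit/predict treat them as one unit.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
```

The same `model` object can then be dropped into cross-validation or a grid search unchanged.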
Numpy – It is the fundamental package for scientific computing in Python. It is great for working with arrays and performing linear algebra operations on arrays. The broadcasting feature is extremely useful, making coding simpler.
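A small illustration of that broadcasting feature (made-up numbers, purely for demonstration), where a 1-D array combines with a 2-D array without any explicit loop:

```python
import numpy as np

prices = np.array([[10.0, 20.0],
                   [30.0, 40.0]])
discount = np.array([0.9, 0.5])    # one multiplier per column

# Broadcasting stretches the 1-D row across every row of the 2-D array.
discounted = prices * discount     # shapes (2, 2) * (2,) -> (2, 2)
```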
Pandas – It is a fast, powerful, flexible and easy-to-use open-source data analysis and manipulation tool, built on top of the Python programming language. It can import and export data from and to a variety of file formats, such as CSV and Excel. It can be used to slice the data, subset it, merge/join/concatenate data from multiple sources, and remediate missing data. Pandas supports groupby, pivot tables, time series, and sparse datasets. It is one of the most essential tools for a data scientist, especially when performing exploratory data analysis (EDA).
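For example, a quick groupby aggregation (a toy sketch with made-up data) is a one-liner:

```python
import pandas as pd

sales = pd.DataFrame({
    "region": ["east", "east", "west"],
    "amount": [100, 50, 75],
})

# Total sales per region -- the kind of one-liner that makes EDA fast.
totals = sales.groupby("region")["amount"].sum()
```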
MXNet – A flexible and efficient deep learning library from Apache. It supports multi-GPU and multi-host training. MXNet has deep integration with Python and support for Scala, Julia, Clojure, Java, C++, R, and Perl.
PaddlePaddle – A popular deep learning framework developed and widely used in China. PaddlePaddle originated from industrial practice, with a strong commitment to industrialisation. It has been adopted across a wide range of sectors including manufacturing, agriculture, and enterprise services, and serves more than 1.5 million developers.
Platforms, Ecosystems and Frameworks
TensorFlow – TensorFlow is an end-to-end open-source platform for machine learning. It has all the tools, libraries and resources to build and deploy machine learning-powered applications. Models built with TensorFlow can be deployed on desktop, mobile, web and cloud. This is an offering from Google.
Caffe – A deep learning framework made with expression, speed, and modularity in mind. Models and optimization are defined through configuration rather than hardcoding. Its speed makes it well suited to research experiments and industry deployment. Caffe can be used to create and train CNN inference models.
PyTorch – An open-source machine learning framework that accelerates the path from research prototyping to production deployment. It provides an ecosystem of tools and libraries, with deployment options for cloud platforms such as Alibaba Cloud, Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
SciPy – A Python-based ecosystem of open-source software for mathematics, science, and engineering. The SciPy ecosystem includes general and specialized tools for data management and computation, productive experimentation, and high-performance computing. NumPy, Matplotlib, and Pandas are some of the libraries that are part of it.
CNTK – Microsoft's open-source offering, the Cognitive Toolkit. CNTK was also one of the first deep-learning toolkits to support the Open Neural Network Exchange (ONNX) format, an open-source shared model representation for framework interoperability and shared optimization. Co-developed by Microsoft and supported by many others, ONNX allows developers to move models between frameworks such as CNTK, Caffe2, MXNet, and PyTorch.
These are some of the libraries, ecosystems, and platforms that support a great deal of data science work. Spending time exploring and gaining experience with them can help make one proficient in the field.
Like it or not, the Internet of Things (IoT) is going to be ever more prevalent in the world around us. I wanted to explore how things could become smart and indicate events to humans in innovative ways.
I have this idea of a small camera drone, such as the RYZE Tello that I've been playing around with for a while, being able to fly around and recognize objects. When it detects one, say a picture of a cat, there should be some way for the drone to indicate a positive identification. One inexpensive way of doing this could be a laser pointer mounted on top of the drone that lights up and casts a beam on the object. The object would light up with a red dot, positively indicating to the casual observer that the drone has homed in on the object in the actual physical world.
Such a setup would need several key components:
A drone with a camera that can stream images or video, such as the RYZE Tello. I showed in an earlier article that it is possible to control the flight of the drone through a program running on a Raspberry Pi. It is also possible to stream images or video to the Pi for processing.
A remote-controlled laser beaming device that could be controlled programmatically.
I will blog about the end-to-end development of this project and upload videos of the implementation in action. As a first step, I've begun with a connected device. The laser beaming device itself has been sourced from a cheap keychain laser pointer.
Using a hacksaw and a pair of pliers, I stripped open the aluminium canister shell to reveal the glorious miniature electronics inside. With a soldering iron, I removed the two LEDs. However, I left the switch pots on, since they would be very tricky to remove given the tiny size of the PCB. The laser diode is the cylinder-like unit at the top: its casing connects to the positive terminal (+4.5 V), and the negative connects to the lead just below. In the original unit, the LR44 button cells feed the circuit through the spring at the other end.
Fantastic! I have the laser pointer. Next, I need to wire up a few components.
Several parts are needed for this project. Let’s look at the most important components.
One channel relay board – This is a mechanical relay switch that can close or open a circuit as it receives an input signal from the Raspberry Pi.
Button coin cell battery socket holder – That's a long name for a unit that houses two LR44/AG13 button cells to give a combined voltage feed of around 3 V. For this project, I'm using two of them.
Plexiglass sheets – These are amazing to work with when creating custom housings for electronics. In this case, I'm using a bit of plexiglass as separators. A hook cutter makes it easy to work with.
Connections are fairly straightforward. Let me start with the right side of the relay. You'll see that I stacked four LR44 batteries in two coin cell battery socket holders to get a tiny power pack supplying approximately +6 V. This power pack will be important when mounting on the drone later in the project. The horse carving is meant to simulate the object onto which the laser beam will be projected.
The relay wiring on the left side is very similar to how it is described in this YouTube video by PiddlerInTheRoot. Here you can see the setup in action. I replaced the carved horse with a black cardboard box to show the laser beam more clearly.
As I needed around 4.5 V for the laser diode, I added a few resistors to step the voltage down to a safe operating level. Using this handy voltage divider calculator, I arrived at 220 Ohms connected to +ve, followed by a 1 K Ohm resistor to -ve. Here is the Python code that switches the laser pointer diode circuit on and off.
import RPi.GPIO as GPIO
import time

channel = 4

# GPIO setup
GPIO.setmode(GPIO.BCM)
GPIO.setup(channel, GPIO.OUT)

if __name__ == '__main__':
    GPIO.output(channel, GPIO.HIGH)  # Turn laser on
    time.sleep(2)                    # keep the beam on for two seconds
    GPIO.output(channel, GPIO.LOW)   # Turn laser off
    GPIO.cleanup()
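The resistor values can be sanity-checked with the standard voltage divider formula; a quick check, assuming the roughly 6 V pack and the 220 Ohm / 1 K Ohm pair described above:

```python
def divider_out(vin, r1, r2):
    """Voltage across r2 in a series divider: Vout = Vin * r2 / (r1 + r2)."""
    return vin * r2 / (r1 + r2)

# 6 V pack, 220 Ohm to +ve, 1 K Ohm to -ve: just under 5 V across the 1 K leg.
vout = divider_out(6.0, 220.0, 1000.0)
```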
So that is the progress over a weekend. Using VNC Viewer, it is possible to log in to the Raspberry Pi from a laptop, tablet, or even a mobile phone. Once in, the Python script above can be executed, and the laser pointer can be observed to shine a beam and then stop. The script could also be written to call a web API periodically and, when the API returns a value of interest, turn the laser on. This has plenty of practical applications, such as a silent alarm. Say the laser unit is pointed at the wall a development team faces, and the program loop checks whether a build has failed on a CI/CD server. If it has, the beam shines on the wall, the whole team sees it, and everyone is alerted that the build has broken.
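A rough sketch of that silent-alarm loop; the URL and the JSON shape here are hypothetical, since a real CI server such as Jenkins exposes its own API:

```python
import json
import time
import urllib.request

STATUS_URL = "http://ci.example.local/api/last-build"  # hypothetical endpoint

def build_failed(payload):
    """Interpret the CI reply; assumes a JSON body like {"result": "FAILURE"}."""
    return json.loads(payload).get("result") == "FAILURE"

def watch(turn_laser_on, turn_laser_off, interval=60):
    """Poll the CI server and drive the laser accordingly."""
    while True:
        with urllib.request.urlopen(STATUS_URL) as resp:
            if build_failed(resp.read()):
                turn_laser_on()    # shine the beam on the wall
            else:
                turn_laser_off()
        time.sleep(interval)
```

The `turn_laser_on`/`turn_laser_off` callbacks would wrap the GPIO calls shown earlier.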
As mentioned at the start of this blog, I’m going to use the laser pointer for the drone to indicate visually whether it has detected an object of interest by shining a laser beam on the object.
More to come as I move on to the next steps. Stay tuned!
Cameras such as the handheld DJI OSMO Pocket offer amazing portability and resolutions up to 4K for creating videos, including vlogs, with incredible ease. Professional effects, including hardware image stabilisation, are all part of the package.
In this article, I would like to share how I took a DJI OSMO Pocket and applied a whole set of extensions around it to create a home video cast rig for monologue recording. The setup consists of several components, including dimmable LED lights, articulating arms, a shotgun microphone, and even a ring light.
Microphone – For a close recording setup, where the subject is very near the camera, a shotgun microphone such as the Sennheiser MKE 400 (in the picture above) is a great addition: it rejects surrounding noise and captures the subject's speech more clearly. While testing, I found the MKE 400 really does its job. Of course, it needs to be connected to the DJI OSMO Pocket via the not-so-cheap OSMO Pocket 3.5 mm adapter available from DJI.
Lighting – Simply put, the most important component for any high-resolution camera recording. I used two types of lighting for the set: (a) panel video LED lights, for top-angle, bottom, and side accent lighting of the subject's face, and (b) a ring light, for frontal lighting and the ring light effect.
The entire setup can be lifted and moved around with one hand. Along with a blue screen background and a stool to sit on, it can transform any space into a broadcast studio for making video casts. The videos can then be imported into a production tool such as Final Cut Pro X.
I created this setup for my own projects and thought I'd share it for anyone interested. It is really simple to set up and makes recordings with the DJI OSMO Pocket even better, with enhanced lighting and an external microphone. Check out some of my upcoming video casts using this setup.
What makes the Tello drone so amazing is the ability to send commands to it programmatically. The drone has an on-board camera and can stream both photos and videos. It does not have built-in GPS and instead uses a vision positioning system (VPS) for its flight stability routines. Nonetheless, it is interesting to have the ability to "compute" and "actuate". The Tello can then become a robot UAV, coupled with machine learning algorithms to determine its own flight path, including obstacle avoidance. In a way, the combination would give rise to an autonomous flying UAV.
I'm nowhere close to that with this Tello drone, but definitely inching towards it. As a first step, I worked on getting the drone to follow a programmed flight path based on fixed parameter values. I'll cover the details of how I did this a little later in this article; you can view the output first.
This flight proceeded on its own, receiving instructions via a Python program over a WiFi connection with the drone. The code is simple and involves sending UDP messages to the Tello over port 8889. Here is a view of the code with the key request/response commands highlighted.
The command.txt just has a list of commands from the Tello SDK, except delay, which is custom code. Here is a sample of the contents of a command.txt file.
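A minimal sketch of how such a file can be driven (the function names here are my own; the custom `delay` entry is handled locally, while everything else goes to the drone as a Tello SDK command over UDP):

```python
import socket
import time

TELLO_ADDR = ('192.168.10.1', 8889)

def parse_command(line):
    """Split out the custom 'delay' pseudo-command from real SDK commands."""
    line = line.strip()
    if line.startswith('delay'):
        return ('delay', float(line.split()[1]))
    return ('send', line)

def fly(path):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', 8889))                      # Tello replies on the same port
    for raw in open(path):
        if not raw.strip():
            continue
        kind, value = parse_command(raw)
        if kind == 'delay':
            time.sleep(value)                  # pause locally between commands
        else:
            sock.sendto(value.encode('utf-8'), TELLO_ADDR)
            print(sock.recv(1024).decode('utf-8', 'ignore'))
```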
From my iPad Pro, I SSHed into a Raspberry Pi device and ran the Python code that flew the drone in the video above. Here is the action happening in the console.
That's it for now. In my next blog article on these experiments, I will cover how I extended this setup to use the Raspberry Pi display, a wireless USB dongle, a WiFi repeater, the Intel Neural Compute Stick, and a Jenkins instance running on the device.
The Tello drone is a remarkable device and allows for many possibilities. My main area of interest is autonomous flying UAVs which, like Shakey in the early 70s, could maneuver around obstacles in their path.
I recently heard of the RYZE Tello mini drone with DJI technology. DJI is a reputed company known for high-quality camera drones.
Over the years, I have tested several toy drones. The Tello is different, though: it is an affordable drone with a camera and, most interestingly, it is programmable. I wanted to check out how easy it is to program and control one, so I acquired a Tello drone and started researching its SDK.
Once you check out the project, you get a folder structure created locally, as shown here:
The Single_Tello_Test folder contains all you need to send commands to the Tello for execution. On my laptop, I detected the Tello drone's WiFi network and connected to it. Once connected, I ran the command below:
python tello_test.py "command - iPhoneVideo2.txt"
The text file can have any name. This is the one I used for setting a sequence of commands that represented my custom flight plan for the Tello. Here is what the commands in the file look like:
The Python code in tello_test.py reads each line and sends the instruction to the drone via UDP messages. This worked beautifully, and the command execution statuses can be viewed in the command window:
Here, the Tello on 192.168.10.1 receives commands on port 8889 and sends back a command status message. I've captured a clip of the Tello drone in action. Enjoy!