
Remote Operated Laser Pointer – Part 3

In this post, I will talk about the completed unit on the drone side and show a demo of the remote operated laser pointer in action. The development continues on from the second part of this series.

Here are some pictures of the unit I created. This was mounted on a single lego brick and then attached to the drone via a PGYTECH Tello Adapter.

The whole unit is built on a yellow Lego brick. I used a 3 mm black acrylic sheet and some industrial glue to shape the structure, along with a cylindrical acrylic rod for support. I learnt that I needed more power to drive the RC relay and the laser diode simultaneously. CR2032/CR2025 3 V vertical-mount coin battery holders proved an excellent option, giving me a total of 12 V across the four cells. These were stacked one over the other with some industrial glue. The bottom two cells are wired to drive the RC relay circuit, with a slide switch for the power on/off function. When connected directly to the power supply, the RC relay draws a small current to power its signal reception circuit.

The top two coin batteries (CR2032) provide a 6 V DC supply to the red laser diode. I put in a 100 Ohm resistor to protect the diode. You’ll also notice that I swapped my previous red laser for one of these.
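As a rough back-of-the-envelope check (assuming the diode drops somewhere around 2 to 3 V), the 100 Ohm resistor limits the current to roughly (6 - 2.5) / 100 ≈ 35 mA; treat that as an estimate rather than a measured figure.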

Here is the demo of the unit in action!

Now that the drone end has been taken care of, I will next focus on the processing end: the Raspberry Pi and the Intel Neural Compute Stick 2. What I hope to achieve is to stream the video feed from the Tello drone to the Raspberry Pi, which also controls the flight of the drone via Python code, run inference on the stream, and detect target images. If a target is found, the laser pointer lights up on the physical image. I’ll show how in my next post or two.
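To give an idea of the plumbing I have in mind, here is a minimal sketch of pulling the Tello’s video feed into OpenCV on the Pi. It assumes an OpenCV build with FFmpeg support that can open the Tello’s H.264 stream on UDP port 11111 (which starts after sending the ‘streamon’ SDK command); the inference and laser-trigger steps are left as placeholders for the coming posts.

import socket
import cv2

TELLO_ADDRESS = ('192.168.10.1', 8889)

# Put the Tello into SDK mode and ask it to start streaming video
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 9000))
sock.sendto(b'command', TELLO_ADDRESS)
sock.sendto(b'streamon', TELLO_ADDRESS)

# The Tello publishes an H.264 stream on UDP port 11111
cap = cv2.VideoCapture('udp://0.0.0.0:11111')

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    # Placeholder: run object detection on `frame` (Neural Compute Stick 2 / OpenVINO)
    # and, on a positive match, trigger the RC relay to switch the laser on.
    cv2.imshow('Tello feed', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()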

 
Posted on February 22, 2020 in IoT

Remote Operated Laser Pointer – Part 2

In this post, I will continue describing the development of the remote operated laser pointer from my previous post. I recently procured a couple of DC 3.7V-12V Mini Wireless Remote Control Switch Relay Micro Receiver Transmitter System For LED Light Smart Home units from Banggood. This unit acts as a remote-controlled relay supporting both normally open (NO) and normally closed (NC) configurations. I was looking to use normally open and, when triggered, to close the circuit.

Here is a closer look at both the transmitter and receiver. I’ve included some pictures of the internals of the transmitter unit as well. We are only going to use it temporarily, as I will explain further on. The unit is tiny and light, and just what I was looking for in this project.

Using an approach shown in this informative YouTube video from PiddlerInTheRoot, the transmission codes of the transmitter can be detected. I used an FS1000A 433 MHz Tx & Rx RF radio module.

FS1000A – Transmitter and Receiver Pair

Next, I connected all the pieces together so that I could trigger one transmit event to light up a test green LED and another to switch it off. I haven’t used the red laser diode for testing here, as I still need to work on a compact circuit that can power both the RC relay and the laser diode. More on this in my next blog post.

The distance between the transmission assembly and the reception assembly can be an impressive 160 m in ideal conditions, as this YouTube video shows. Here is another video with a successful range test of over 350 m after some hardware modifications, such as adding longer antennae. The reception assembly has two circuits, each with its own power source: one powers the wireless remote control switch and the other powers the green LED.

PyPI hosts a project for sending and receiving 433/315 MHz LPD/SRD signals with generic low-cost GPIO RF modules on a Raspberry Pi. Two of its script files are of use here. Click on the links to go to the source code, which is written in Python.
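That description matches the rpi-rf package; assuming that is the project in question, here is a minimal sketch of sniffing the hand-held transmitter’s code with the receiver module and then replaying it from the Pi through the FS1000A transmitter. The GPIO pins, the example code value, and the pulse length below are placeholders; use whatever your own sniffing run reports.

import time
from rpi_rf import RFDevice

def sniff_code(gpio=27):
    """Print codes seen by the 433 MHz receiver until interrupted."""
    rx = RFDevice(gpio)
    rx.enable_rx()
    last = None
    try:
        while True:
            if rx.rx_code_timestamp != last:
                last = rx.rx_code_timestamp
                print('code:', rx.rx_code,
                      'pulselength:', rx.rx_pulselength,
                      'protocol:', rx.rx_proto)
            time.sleep(0.01)
    except KeyboardInterrupt:
        rx.cleanup()

def send_code(code, gpio=17, protocol=1, pulselength=350):
    """Replay a previously sniffed code through the FS1000A transmitter."""
    tx = RFDevice(gpio)
    tx.enable_tx()
    tx.tx_code(code, tx_proto=protocol, tx_pulselength=pulselength)
    tx.cleanup()

# Example usage: send_code(1234567)  # placeholder code captured with sniff_code()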

Finally, as usual, I’ve recorded a demo of this project in action. Here’s the video:

Quick view of the setup in action

Now, with the RC relay, I can switch the LED on and off from Python code. Once I sort out the power circuit issue, it will be possible to build the final assembly of the remote controlled laser pointer. I’ll then mount this assembly on my RYZE Tello drone and run some tests. The most important test will be to check the stability of the aircraft’s flight with the final assembly mounted on it, and of course whether the laser pointer lights up while the drone is in flight.

Once these basic tests pass, it will be time to focus on capturing live video from the RYZE Tello drone on the Raspberry Pi and doing some object detection / inference using the Intel Movidius Neural Compute Stick 2 for deep learning. Based on finding certain objects, the code would switch the laser mounted on the drone on and then off again. This is really where the fun begins.

I will demonstrate this in action in my next set of posts in this series. Stay tuned!

 
Posted on February 17, 2020 in IoT, Programming

Python Libraries and Machine Learning Platforms

There are some cool Python libraries and machine learning platforms out there to support work in data science. I thought I’d list some of them in this article, along with a few highlights of their core capabilities and links for quick reference. Here they are in no particular order:

Libraries

statsmodels – Great for statistics around ordinary least squares (OLS) regression, which reports measures such as R-squared, skewness, kurtosis, and AIC and BIC scores on the data. It is also well suited to conducting statistical tests and statistical data exploration.
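As a quick, self-contained illustration of the kind of output described above (the data here is synthetic and purely for demonstration):

import numpy as np
import statsmodels.api as sm

# Synthetic data: y depends linearly on x plus some noise
x = np.linspace(0, 10, 100)
y = 2.5 * x + 1.0 + np.random.normal(scale=2.0, size=x.size)

X = sm.add_constant(x)       # add the intercept term
model = sm.OLS(y, X).fit()   # ordinary least squares fit
print(model.summary())       # reports R-squared, skew, kurtosis, AIC, BIC and more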

bokeh – This library is great for providing end users interactive data visualisation inside modern web browsers. Using bokeh, one can quickly and easily make interactive plots, dashboards, and data applications. It can generate highly customisable glyphs such as lines, step lines, multiple lines, and stacked lines, as well as stacked bars, hex tiles, and time series plots.

seaborn – Seaborn is a Python data visualization library based on matplotlib. It supports several types of data visualization such as box plot, heatmap, scatterplot, cluster map, and violin plot.

Theano – It is used to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It does this by making use of GPUs for computation. It works with expressions that are passed to it from Python.

yellowbrick – Yellowbrick extends the Scikit-Learn API to make model selection and hyperparameter tuning easier. It builds on top of matplotlib. Some of its key visualization features cover feature, classification, regression, clustering, model selection, target, and text visualizations.

plotly – It is an interactive, open-source, and browser-based graphing library for Python. It supports:
basic charts – scatter, line, pie, bar and more
statistical charts – histograms, box plots, distplots…
scientific charts – contour, heatmaps, ternary plots…
financial charts – time series, candlestick, funnel chart…
maps, 3D charts, and subplots. It even supports animations.

Keras – A Python deep learning library. It supports CNNs and RNNs and runs seamlessly on both CPUs and GPUs. It supports the Sequential model as well as the Model class used with the Keras functional API.

Scikit-learn – A sort of Swiss Army knife among libraries, covering classification, regression, clustering, dimensionality reduction, preprocessing (transformers and pipelines), model selection, and more.
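For a flavour of how several of those pieces fit together, here is a small sketch chaining preprocessing and a classifier into one pipeline on the built-in iris dataset:

from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Preprocessing and classification chained into one estimator
pipe = Pipeline([
    ('scale', StandardScaler()),
    ('clf', LogisticRegression(max_iter=200)),
])
pipe.fit(X_train, y_train)
print('Test accuracy:', pipe.score(X_test, y_test))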

NumPy – The fundamental package for scientific computing in Python. It is great for working with arrays and performing linear algebra operations on them. The broadcasting feature is extremely useful and makes code simpler.
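A tiny example of the broadcasting mentioned above: subtracting per-column means from a 2-D array without writing any loops.

import numpy as np

data = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

col_means = data.mean(axis=0)   # shape (3,)
centred = data - col_means      # (2, 3) minus (3,) broadcasts across rows, no loop needed
print(centred)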

Pandas – A fast, powerful, flexible, and easy-to-use open-source data analysis and manipulation tool, built on top of the Python programming language. It can import and export data from and to a variety of file formats, such as CSV and Excel. It can be used to slice and subset data, merge/join/concatenate data from multiple sources, and remediate missing data. Pandas supports groupby, pivot tables, time series, and sparse datasets. It is one of the most essential tools for a data scientist, especially when performing exploratory data analysis (EDA).
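A small, made-up example of the groupby-style summarisation that makes pandas so handy during EDA:

import pandas as pd

df = pd.DataFrame({
    'city': ['Pune', 'Pune', 'Mumbai', 'Mumbai'],
    'sales': [120, 80, 200, 150],
})

# Quick EDA-style summary: total and mean sales per city
summary = df.groupby('city')['sales'].agg(['sum', 'mean'])
print(summary)

# df.to_csv('sales.csv', index=False)  # exporting to CSV is just as easy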

MXNet – A flexible and efficient library for deep learning from Apache. It supports multi-GPU and multi-host training. MXNet has deep integration with Python and support for Scala, Julia, Clojure, Java, C++, R, and Perl.

PaddlePaddle – A popular deep learning framework developed and widely used in China. It originated from industrial practice, with a strong commitment to industrialisation, and has been adopted across a wide range of sectors including manufacturing, agriculture, and enterprise services, serving more than 1.5 million developers.

Platforms, Ecosystems and Frameworks

TensorFlow – TensorFlow is an end-to-end open-source platform for machine learning. It has all the tools, libraries and resources to build and deploy machine learning-powered applications. Models built with TensorFlow can be deployed on desktop, mobile, web and cloud. This is an offering from Google.

Caffe – A deep learning framework made with expression, speed, and modularity in mind. Models and optimization are defined through configuration rather than hard-coding. Its speed makes it well suited to both research experiments and industry deployment. Caffe can be used to create and train CNN inference models.

PyTorch – An open source machine learning framework that accelerates the path from research prototyping to production deployment. It provides an ecosystem of tools and libraries, plus deployment options to cloud platforms such as Alibaba Cloud, Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

Scipy – Python-based ecosystem of open-source software for mathematics, science, and engineering. The SciPy ecosystem includes general and specialized tools for data management and computation, productive experimentation, and high-performance computing. Numpy, Matplotlib, and Pandas are some of the libraries that are part of the Scipy ecosystem.

CNTK – Microsoft’s open-source Cognitive Toolkit. CNTK was also one of the first deep learning toolkits to support the Open Neural Network Exchange (ONNX) format, an open-source shared model representation for framework interoperability and shared optimization. Co-developed by Microsoft and supported by many others, ONNX allows developers to move models between frameworks such as CNTK, Caffe2, MXNet, and PyTorch.

These are some of the libraries, ecosystems, and platforms that underpin much of today’s data science and machine learning work. Spending time exploring them and gaining hands-on experience with them goes a long way towards proficiency in the field.

 
Posted on February 2, 2020 in Data Science

Remote Operated Laser Pointer – Part 1

Like it or not, the Internet of Things (IoT) is going to be all the more prevalent in the world around us. I wanted to explore how things could become smart and indicate events to humans in innovative ways.

I have this idea of a small camera drone, such as the RYZE Tello that I’ve been playing around with for a while, being able to fly around and recognize objects. When it does, let’s say a picture of a cat, there should be some way for the drone to indicate a positive identification. One inexpensive way of doing this could be a laser pointer mounted on top of the drone that lights up and casts a beam on the object. The object would light up with a red dot, positively indicating to the casual observer that the drone has homed in on the object in the actual physical world.

Such a setup would need several key components:

  • A drone with a camera that is able to stream images or video, such as the RYZE Tello. I showed in an earlier article that it is possible to control the flight of the drone through a program running on a Raspberry Pi device. It is also possible to stream images or video to the Pi for processing.
  • A remote-controlled laser beaming device that could be controlled programmatically.
  • Image recognition using neural network processing hardware.

I will blog about the end-to-end development of this project and upload videos of the implementation in action. As a first step, I’ve begun with the connected device itself. The laser beaming device has been sourced from a cheap keychain laser pointer.

Keychain laser pointer

Using a hacksaw and a pair of pliers, I stripped open the aluminium canister shell to reveal the glorious miniature electronics inside. Using a soldering iron, I removed the two LEDs. However, I left the switch pots on, since it would be very tricky to remove them given the tiny size of the PCB. The laser diode is the cylinder-like unit at the top. Its casing connects to the positive (+4.5 V) supply and the negative to the lead just below. In the original unit, the LR44 button cells feed the circuit through the spring at the other end.

Laser pointer diode board

Fantastic! I have the laser pointer. Next, I need to wire up a few components.

Components

Several parts are needed for this project. Let’s look at the most important components.

  • Raspberry Pi 3 Model B – For the compute power and as the bridge between the hardware and software worlds.
  • One channel relay board – This is a mechanical relay switch that can close or open a circuit as it receives an input signal from the Raspberry Pi.
  • Button coin cell battery socket holder – That’s a long name for a unit that can house two LR44/AG13 button cells to give a combined voltage of around 3 V. For this project, I use two of them.
  • Plexiglass sheets – These are amazing to work with for creating custom housings to hold the electronics. In this case, I’m using a bit of it as separators. A hook cutter makes it easy to work with these.

All components wired up

Connections are fairly straightforward. Let me start with the right side of the relay. You’ll see that I took four LR44 batteries stacked in two coin cell battery socket holders to get a tiny power pack supplying approximately +6 V. This power pack will be important when mounting the assembly on the drone later in the project. The horse carving is meant to simulate the object onto which the laser beam will be projected.

The relay wiring on the left side is very similar to how it is described in this YouTube video by PiddlerInTheRoot. Here you get to see the setup in action. I replaced the carved horse with a black cardboard box to show the laser beam more clearly.

Circuit in action

As I needed around 4.5 V for the laser diode, I added a couple of resistors to step the voltage down to a safe operating level. Using this handy voltage divider calculator, that gave me 220 Ohms connected to +ve, followed by a 1 kOhm resistor to -ve. Here is the Python code that switches the laser pointer diode circuit on and off.
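As a quick sanity check (ignoring the load the diode itself places on the divider), the unloaded divider output works out to Vout = Vin × R2 / (R1 + R2) = 6 V × 1000 / (220 + 1000) ≈ 4.9 V, which is in the ballpark I was aiming for.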

import RPi.GPIO as GPIO
import time

channel = 4  # BCM GPIO pin used to signal the relay

# GPIO setup
GPIO.setmode(GPIO.BCM)
GPIO.setup(channel, GPIO.OUT)

def laser_on(pin):
    GPIO.output(pin, GPIO.HIGH) # Turn laser on

def laser_off(pin):
    GPIO.output(pin, GPIO.LOW) # Turn laser off

if __name__ == '__main__':
    try:
        laser_on(channel)
        time.sleep(3)
        laser_off(channel)
        time.sleep(3)
        GPIO.cleanup()
    except KeyboardInterrupt:
        GPIO.cleanup()

So that is the progress over a weekend. Using VNC Viewer, it is possible to log in to the Raspberry Pi from a laptop, a tablet, or even a mobile phone. Once in, the Python script above can be executed and the laser pointer can be seen to shine a beam and then stop. The script could also be written to call a web API periodically and, when the API returns a value of interest, turn the laser on. This has plenty of practical applications, such as a silent alarm. Let’s say the laser unit is pointed at the front of a room, where a development team is facing. The program loop checks whether a build has failed on a CI/CD server. If it has, the beam shines on the wall, the whole team sees it, and everyone is alerted that the build has broken.
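As a rough sketch of that silent-alarm idea, here is one way the polling loop could look. It assumes a Jenkins-style JSON status endpoint; the server URL and job name below are placeholders, and the GPIO wiring matches the script above.

import time
import requests
import RPi.GPIO as GPIO

LASER_PIN = 4  # BCM pin driving the relay, as in the script above
# Placeholder URL: point this at your own CI server's last-build status API
BUILD_STATUS_URL = 'http://ci.example.local:8080/job/my-app/lastBuild/api/json'

GPIO.setmode(GPIO.BCM)
GPIO.setup(LASER_PIN, GPIO.OUT)

try:
    while True:
        try:
            result = requests.get(BUILD_STATUS_URL, timeout=5).json().get('result')
        except requests.RequestException:
            result = None
        # Shine the laser while the last build is marked as failed
        GPIO.output(LASER_PIN, GPIO.HIGH if result == 'FAILURE' else GPIO.LOW)
        time.sleep(30)  # poll every 30 seconds
except KeyboardInterrupt:
    GPIO.cleanup()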

As mentioned at the start of this blog, I’m going to use the laser pointer for the drone to indicate visually whether it has detected an object of interest by shining a laser beam on the object.

More to come as I move on to the next steps. Stay tuned!

 
Posted on January 26, 2020 in IoT

Custom Built Video Cast Setup

Cameras such as the handheld DJI OSMO Pocket offer amazing portability and resolutions up to 4K for creating videos, including vlogs, with incredible ease. Professional features such as hardware image stabilisation are all part of the package.

In this article, I would like to share how I took a DJI OSMO Pocket and built a whole set of extensions around it to create a home video cast rig for monologue recording. The setup consists of many parts, including dimmable LED lights, articulating arms, a shotgun microphone, and even a ring light.

Main Camera – This is the DJI OSMO Pocket. It is mounted using a ULANZI OP-1 Osmo Pocket metal mobile phone holder mount set fixed stand bracket that’s available on Amazon. This bracket fits well into the cold shoe mount of a V-shaped mount bracket that attaches to an upright microphone stand, such as the Bespeco SH2RN, via a CAMVATE 1/4″-20 Male to 5/8″-27 Female Microphone Stand Mount for Camera Stud. In the images above, you can also see the iPhone XS that doubles as a monitor while recording.

Microphone – For a close recording setup, where the subject is very close to the camera, a shotgun microphone such as the Sennheiser MKE 400 (in the picture above) is a great addition to keep out surrounding sounds and capture more clearly what the subject is saying. While testing, I found the MKE 400 really does its job. Of course, it needs to be connected to the DJI OSMO Pocket via the not-so-cheap OSMO Pocket 3.5 mm adapter that’s available from DJI.

Lighting – Simply put, the most important component for any high resolution camera recording. I used two types of lighting for the set.
(a) Panel video LED lights – for top angle, bottom and side accents lighting of the subject’s face.
(b) Ring light – for frontal lighting and ring light effect.

For the panel video LED lights, I used four battery-operated Ulanzi Ultra Bright LED Video Lights – LED 49 Dimmable High Power Panel Video Lights. These lights have a dial to control the brightness, and I noticed they are pretty energy efficient as well.

The ring light used is a Childplaymate Dimmable LED Studio Camera Ring Light. It offers three light modes: cool white, warm white, and hybrid. I powered it with a power bank; pictured above is a Belkin F7U020btBLK 10000 mAh power bank.

Apart from the main setup, I used 11-inch and 7-inch articulating arms to position the lighting and microphone just right, with maximum adjustment ability. Any good articulating arm can be used. To add even more flexibility, I used a few pole clamps and some extensions from ULANZI, namely the ULANZI PT-7 Cold Shoe Bracket Vlogging Microphone Extension Plate and the Ulanzi PT-2 Aluminum Alloy Universal Cold Shoe Extension Bracket. The SHOPEE Adjustable Angle Pole Swivel Hot Shoe Mount has also been used along with the LED video lights for better control of their up-down angle.

In case you are wondering, the tray in the middle on which the power bank is resting is a mic stand accessory tray with drink holder from Gator Frameworks, which is really handy.

The entire setup can be lifted and moved around with one hand. Along with a blue screen background and a stool to sit on, it can transform any space into a video broadcast studio for making video casts. The videos can then be imported into a production tool, such as Final Cut Pro X.

I created this setup for my projects and thought I’d share it for anyone who might be interested. It is really simple to set up and makes recordings with the DJI OSMO Pocket even better with enhanced lighting and an external microphone. Check out some of my upcoming video casts using this setup.

 
Posted on September 22, 2019 in Vlogs

Programmed RYZE Tello Drone Flight

Tello drone and battery charging array

What makes the Tello drone so amazing is the ability to send commands to it programmatically. The drone has an on-board camera and can stream both photos and videos. It does not have built-in GPS and instead relies on a vision positioning system (VPS) for its flight stability routines. Nonetheless, it is interesting to have the ability to “compute” and “actuate”. This can then become a robot UAV, with machine learning algorithms coupled to it to determine its own flying path, including obstacle avoidance. In a way, the combination would give rise to an autonomous flying UAV.

I’m nowhere close to that with this Tello drone, but definitely inching towards it. As a first step, I worked on getting the drone to follow a programmed flight path based on fixed parameter values. I’ll cover the details of how I did this a little later in this article; you can view the output first.

This flight proceeded on its own, getting instructions from a Python program over a Wi-Fi connection with the drone. The code is simple and involves sending UDP messages to the Tello on port 8889. Here is a view of the code with the key request/response commands highlighted.

Program code executing a sequence of commands from command.txt
This Python class sends the commands and receives the responses from the Tello drone

The command.txt file just has a list of commands from the Tello SDK, except delay, which is custom code handled locally. Here is a sample of the contents of a command.txt file.

Sample commands
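The screenshots above show my own scripts; as a rough, minimal sketch of the same idea (not the exact class pictured), the core loop reads each line, handles delay locally, and sends everything else to the Tello over UDP:

import socket
import time

TELLO_ADDRESS = ('192.168.10.1', 8889)  # Tello listens for SDK commands on UDP 8889

def run_commands(path):
    # Local UDP socket; the Tello sends its "ok"/"error" responses back to it
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', 9000))
    sock.settimeout(10)

    with open(path) as f:
        for line in f:
            cmd = line.strip()
            if not cmd:
                continue
            if cmd.startswith('delay'):
                # 'delay' is not a Tello SDK command; it is handled locally
                time.sleep(float(cmd.split()[1]))
                continue
            sock.sendto(cmd.encode('utf-8'), TELLO_ADDRESS)
            try:
                response, _ = sock.recvfrom(1024)
                print(cmd, '->', response.decode('utf-8'))
            except socket.timeout:
                print(cmd, '-> no response')

    sock.close()

if __name__ == '__main__':
    run_commands('command.txt')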

From my iPad Pro, I SSH’d into a Raspberry Pi device and ran the Python code that flew the drone in the video above. Here is the action happening in the console.

Raspberry Pi device

That’s it for now. In my next blog article related to these experiments, I will cover how I extended this setup to use the Raspberry Pi display, a wireless USB dongle, a Wi-Fi repeater, and the Intel Neural Compute Stick, and to run a Jenkins instance on the device.

Quick preview of Jenkins on the Pi. It pulls code from Bitbucket to fly the drone; the Raspberry Pi here becomes the non-human controller of the drone’s flight path.

The Tello drone is a remarkable device and allows for many possibilities. My main area of interest is autonomous flying UAVs which, like Shakey in the early 70s, could maneuver around obstacles in their path.

Very early years ground robot
 
Posted on August 16, 2019 in Programming

RYZE Tello Drone Fun Flight

I recently heard of the RYZE Tello mini drone with DJI technology. DJI is a reputed company that is known for high quality camera drones.

Over the years, I have tested several toy drones. The Tello is different, though: it is an affordable drone with a camera and, most interestingly, it is programmable. I wanted to check out how easy it is to program and control one, so I acquired a Tello drone and got started researching its SDK.

In this demo, I used the python project from https://github.com/dji-sdk/Tello-Python

Once you check out the project, you get a folder structure created locally, as shown here:

Tello python project

The Single_Tello_Test folder contains everything you need to send commands to the Tello for it to execute. On my laptop, I detected the Wi-Fi network of the Tello drone and connected to it. Once connected, I ran the command below:

python tello_test.py "command - iPhoneVideo2.txt"

The text file can have any name. This is the one I used to set up a sequence of commands that represented my custom flight plan for the Tello. Here is what the commands in the file look like:

command
delay 2
takeoff
delay 2
up 30
delay 2
cw 180
delay 2
forward 300
delay 2
left 60
delay 3
right 60
delay 2
flip f
delay 2
land

The Python code in tello_test.py reads each line and sends the instruction to the drone via UDP messages. This worked beautifully, and the command execution statuses can be viewed in the command window:

Command execution status

Here, the Tello at 192.168.10.1 receives commands on port 8889 and sends back a status message for each one. I’ve captured a clip of the Tello drone in action. Enjoy!

RYZE Tello Drone in ‘auto-pilot’ mode
 
Posted on June 2, 2019 in Programming