TO ANY PARTIES VIEWING THIS (Universities, my Followers Etc.)

FIRST and foremost, my blog has a non-chronological arrangement. Instead of putting the most recent posts on top, I have ordered them by ‘coolness’, a completely non-subjective, reliable scientific unit of measurement. So if you scroll down and see that there’s a 3-month gap between my posts, please understand that this is because I have chosen to put my best work on top, and scrolling down will reveal to you the MULTITUDE of other tutorials.

SECOND and Eyebrowmost (because the eyebrow is below the fore(head), just as second is below first. It’s fine, don’t thank me for the enlightenment), the lack of tutorials in recent months is because I don’t have full rights to disclose FIREFLY to the public, as I’m competing in a competition. I promise, guys: as soon as we hit the 18th of March, be ready for a barrage of code and tutorials showing you how to effectively use ROS to simulate and program autonomous drones.

Till we meet again. 

DR0ne M@$Ter

FIREFLY: One step Closer to One more Life

This project is probably a 110% improvement on my Google Science Fair project. It uses an artificial intelligence trained in Python that can read evacuation maps on its own, and it capitalizes on the way birds see to create a supreme obstacle-avoidance system. FIREFLY is a drone platform for autonomous, low-cost, rapid evacuation of high-rise buildings.
Thanks to Dennis Shehadeh for creating my UAE Drones for Good Competition video.

Instructions on how to recreate the computer vision portion of this coming soon!

Make your life easier with imshow()

Everybody knows how annoying the cv2.imshow() function is, especially when debugging. You need to keep inventing window names for the first parameter of the function, which tends to interrupt your flow and can hang your program if you have a slow computer. Add or import this function to make life easier.

import cv2
import numpy as np

#auto-numbered replacement for cv2.imshow(), so you never have to invent window names
window = 1
def imshow(img):
    global window
    name = "Window %d" % window
    window += 1
    cv2.imshow(name, img)
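
For instance, a debugging session then looks like this (the images here are just placeholders for whatever you’re inspecting):

img = cv2.imread("test.png")       # any image you're working on
edges = cv2.Canny(img, 100, 200)   # some intermediate result
imshow(img)     # opens "Window 1"
imshow(edges)   # opens "Window 2"
cv2.waitKey(0)
cv2.destroyAllWindows()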

Harris Corner Detection

Prerequisites

Don’t begin this exercise until you’re familiar with the basic image processing techniques (morphological transformations etc.). It’s easy to learn and apply, but you’ll run into trouble if you want to extend yourself after the tutorial.

Introducing the function

The Harris corner detector is -you won’t believe it- a function in OpenCV that allows you to detect corners in images. It’s one of a few methods available for detecting corners, and this is the gist of it:

Corners are essentially regions of an image with considerable variations of intensity (assuming a monochrome image) in all directions. Take the picture below, for instance: in every direction, there’s a different color. This is how the Harris corner detector finds corners: by examining the values of the pixels around a given pixel, it can determine whether or not that pixel is associated with a corner.

[Image: corner detector illustration]

Applying the function

The code will be split into two parts: a function I made for simplicity’s sake, followed by the Harris corner detector function itself.

import cv2
import numpy as np

#auto-numbered replacement for cv2.imshow(), so you never have to invent window names
window = 1
def imshow(img):
    global window
    name = "Window %d" % window
    window += 1
    cv2.imshow(name, img)

This first bit of the code really has nothing to do with the Harris corner detector function itself. After doing a load of projects, you’ll realize how annoying it is to constantly specify a name for each of your windows with the cv2.imshow() function. This is especially a pain whilst debugging, so here’s a function called imshow() that simplifies the process by taking only an img as a parameter.

#insert image location in "loc" (use a raw string so the backslashes aren't treated as escapes)
loc = r"C:\Users\user pc\Desktop\PPictures\chessboard.png"
img = cv2.imread(loc)
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
grey = np.float32(grey)   # cornerHarris expects a float32 image

#use the Harris corner function to locate corners
new = cv2.cornerHarris(grey, 5, 3, 0.04)

#threshold the response; the optimal value may vary from image to image
img[new > 0.01*new.max()] = [255, 102, 255]

#show and save the image
imshow(img)
cv2.imwrite(r'C:\Users\user pc\Desktop\PPictures\chessboardman.png', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

OpenCV allows you to implement the corner detection algorithm in one neat, simple function, cv2.cornerHarris(img, blocksize, ksize, k). This function takes the following parameters:

  • img – float32 input image
  • blocksize – determines the size of the neighborhood of surrounding pixels considered
  • ksize – aperture parameter for the Sobel operator (the x and y derivatives are found using Sobel)
  • k – the Harris detector free parameter
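
For the curious, here’s the standard theory behind that k parameter (this is textbook Harris, not something specific to this code): for each pixel, the detector builds a 2×2 matrix M from the sums of the x and y gradient products over the blocksize neighborhood, then scores the pixel with

R = det(M) - k * (trace(M))^2

A large positive R means a corner, a large negative R means an edge, and a small |R| means a flat region. That’s why the code above keeps only the pixels scoring above 0.01*new.max().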

[Image: chessboard with detected corners marked]

The image above shows the result of the cv2.cornerHarris() function. On still images like these, it works pretty well!

 

Using the watershed algorithm for Cell Based Segmentation from blood smears (part 1)

Since when did a super interesting blog become a bio lesson?

Sometimes you get inspired by the things you’re most horrible at. In my final year of school, I decided to face one of my biggest science-field nightmares: biology. However, if I’m doing bio, it’s on my terms. No memorizing for an exam or profusely exhibiting my medical adroitness through unnecessarily labyrinthine jargon. Also, I’m competing in a competition where I’m creating a neural network to detect pathogens in blood smears.

Introduction to blood smears

To put it concisely, a blood smear is essentially a microscopic image of your blood that contains a set of red blood cells, white blood cells and other blood cell nonsense. It looks similar to this:

[Image: a blood smear under the microscope]

These can reveal a lot about your present health. Blood morphology (viewing and understanding the composition of blood smear images) can point to the onset of leukemia or sickle cell disease, and also to the presence of lymphocytes, monocytes, basophils etc. It generally requires a trained eye to detect all of this, but during the next few lessons, we’re going to see if we can make a computer segment the blood cells and understand them individually.

Prerequisites

A very solid foundation in OpenCV and Python; this exercise is just another big project you’ll be undertaking to hone your skills, so you should ideally have a good understanding of the basic functions or, if not, the ability to look up and process the documentation.

Watershed? Why not contours?

For the sake of simplicity and to avoid too many gifs and animations, I’m just going to say it’s because you can learn an amazing new skill and learn how to implement it. However, if you’d like to see all the gifs and animations, check this brilliant page out. It’s where I learnt about watershed.

Morphology

In order to apply watershed, you’ll need to use morphological transformations and contrast enhancement in order to define boundaries and markers for the algorithm to take effect properly. Not doing these may lead to over-segmentation or under-segmentation.

import cv2
import numpy
from scipy.ndimage import label   # used by the 'segment' function below

#load your image, I called mine 'rbc' (raw string so backslashes aren't escapes)
img = cv2.imread(r'C:\Users\user pc\Desktop\PPictures\rbc.jpg')
#keep resizing your image so it appropriately identifies the RBC's
img = cv2.resize(img, (0, 0), fx=5, fy=5)
#it's always easier if the image is copied for long pieces of code.
#we're copying it twice for reasons you'll see soon enough.
wol = img.copy()
gg = img.copy()
#convert to grayscale
img_gray = cv2.cvtColor(gg, cv2.COLOR_BGR2GRAY)
#enhance contrast (helps make boundaries clearer)
clache = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img_gray = clache.apply(img_gray)
#threshold the image and apply morphological transforms
_, img_bin = cv2.threshold(img_gray, 50, 255,
        cv2.THRESH_OTSU)
img_bin = cv2.morphologyEx(img_bin, cv2.MORPH_OPEN,
        numpy.ones((3, 3), dtype=int))
img_bin = cv2.morphologyEx(img_bin, cv2.MORPH_DILATE,
        numpy.ones((3, 3), dtype=int), iterations=1)
#call the 'segment' function (defined in the next section)
dt, result = segment(img, img_bin)

Apply watershed

Prior to applying watershed, a Euclidean distance transform (EDT) is applied to the frame. This sets the brightness of each pixel according to its distance from the nearest pixel with a ‘zero’ value. A zero-value pixel is black, so the EDT gives a visualization of how far any part of the image is from the background (which, thanks to the earlier thresholding, should be black). You’ll notice that the areas around the edges have a duller gray color that slowly becomes a bright white towards the center. We want to use the bright white regions as our markers, because we’re sure they’re part of a cell; the edges and duller colors are not a 100% certainty.

def segment(im1, img):
    #morphological transformations to extract a border region
    border = cv2.dilate(img, None, iterations=10)
    border = border - cv2.erode(border, None, iterations=1)
    #invert the image so black becomes white, and vice versa
    #(255 - img; plain negation on a uint8 array wraps around instead of inverting)
    img = 255 - img
    #applies the Euclidean distance transform (2 = L2 distance, 3 = mask size)
    dt = cv2.distanceTransform(img, 2, 3)
    #stretch the result over 0-255 for visualization
    dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8)
    #reapply contrast to strengthen boundaries
    clache = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
    dt = clache.apply(dt)
    #rethreshold the image, keeping only the bright 'sure' centers
    _, dt = cv2.threshold(dt, 40, 255, cv2.THRESH_BINARY)
    #give every connected blob its own integer label
    lbl, ncc = label(dt)
    lbl = lbl * (255 / (ncc + 1))   # spread labels below 255 (255 is reserved for the border)
    # Complete the markers
    lbl[border == 255] = 255

    lbl = lbl.astype(numpy.int32)
    #apply watershed; it marks boundary pixels with -1
    cv2.watershed(im1, lbl)

    lbl[lbl == -1] = 0
    lbl = lbl.astype(numpy.uint8)
    #return the thresholded distance transform, and the labels
    return dt, lbl
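
If you want to actually look at what segment() returns, here’s a quick sketch (reusing the imshow() helper from the earlier post, or plain cv2.imshow with window names):

#view the thresholded distance transform and the final labels
imshow(dt)       # bright centers that were used as markers
imshow(result)   # watershed labels, one gray level per cell
cv2.waitKey(0)
cv2.destroyAllWindows()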

And there you have it! You can segment most blood smear images in a few seconds! A major problem, however, is its lack of accuracy across images of different sizes, resolutions and color textures. In order to improve the accuracy, I’ll soon be showing you how to do the same morphology, except with histogram equalization, so you can more clearly define your boundaries.

 

The Real Roboticist (lesson 4): Some Important ROS theory


Since you’re going to be working with an operating system, it’s best to know how exactly the system works so you can manipulate it in order to take full advantage of this virtually limitless tool.


Nodes and Master

ROS is used primarily to establish a connection between modules, known as nodes. A node is just some code that you can execute. Your ‘Hello world.py’ program is a node (albeit not a very useful one). ROS’s flexibility lies in the fact that these nodes can live anywhere: you can have a node on your laptop, a node on a Raspberry Pi and another node on your desktop, and all of these nodes will work with each other to make your robot function. The advantage of this distribution is organized system control. For a robot that processes an image, finds a face and then drives towards said face, you can have one node for receiving camera images and processing them, and another node for controlling the steering of the robot in response.

Nodes

Nodes perform an action; they’re really just software modules, but ones that can communicate with other nodes and register with the master (we’ll get to that). These nodes can be created in a number of ways. You can type a command into a terminal window to create a node, or you can create one as a Python or C++ script.
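
As a taste of that, here’s a minimal Python node, as a sketch (it assumes ROS is installed and roscore is running; the node name is just an example):

import rospy

rospy.init_node('hello_node')               # register this script with the master
rospy.loginfo('Hello world, I am a node.')  # log through ROS
rospy.spin()                                # keep the node alive until shutdown

Run it like any Python script (or with rosrun once it lives in a package) and it registers with the master like any other node.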

Nodes Publish and Subscribe

In ROS, a complex task can be split into a series of simpler tasks (achieving both simplicity in execution and simplicity of code). For a wheeled robot, those tasks can include perceiving the environment using a camera or laser scanner, making a map, planning a route, monitoring the robot’s battery level, and controlling the motors driving the wheels.

Some nodes provide information to others. The feed from the camera is an example. Such a node is said to publish information, on a named channel called a topic, to other nodes.

A node that receives the camera image subscribes to that topic. As an example, the node controlling the wheels could be subscribed to the topic that the camera node is publishing.

In some cases, a node can both publish and subscribe to one or more topics. You’ll see more of those as you progress.
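
To make this concrete, here’s a minimal sketch of a publisher and subscriber pair in Python with rospy. In practice these would be two separate scripts, and the topic name ‘chatter’ and the node names are just illustrative:

import rospy
from std_msgs.msg import String

# --- publisher script ---
def talker():
    rospy.init_node('talker')
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rate = rospy.Rate(10)                  # 10 messages per second
    while not rospy.is_shutdown():
        pub.publish('hello world')         # send a message on the 'chatter' topic
        rate.sleep()

# --- subscriber script ---
def callback(msg):
    rospy.loginfo('I heard: %s', msg.data)

def listener():
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String, callback)  # run callback() on each message
    rospy.spin()                           # keep processing callbacks

With roscore and both nodes running, the listener prints every message the talker publishes; neither node needs to know which machine the other one lives on.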

ROS Master

If nodes were like railway stations in a complicated subway, the ROS Master is like the map. The ROS Master provides naming and registration services to individual nodes and allows them to locate each other. It does so primarily using TCP/IP, via a protocol called TCPROS. In short, this enables communication even over the Internet.

Enough of the boring theory; in the next lesson, we’ll get more hands-on. For more information on how to list nodes and other handy tips, go to this website.


The Real Roboticist (lesson 3) : Create a catkin workspace

Catkin what?

A catkin workspace is where you’ll be able to modify existing or create new catkin packages. The catkin structure simplifies the build and install process for your ROS packages.

Create it

We’ll name our folder catkin_ws. Just type the following into the terminal to create the workspace.

 mkdir -p ~/catkin_ws/src
 cd ~/catkin_ws/src
 catkin_init_workspace

Next, build it by following up with:

 cd ~/catkin_ws/
 catkin_make

Overlay the workspace atop your ROS environment by sourcing setup.bash:

source ~/catkin_ws/devel/setup.bash
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc

All that’s left is to validate that everything has been set up correctly. Type the following into the terminal:

echo $ROS_PACKAGE_PATH

The output of the command should be:

/home/<username>/catkin_ws/src:/opt/ros/indigo/share:/opt/ros/indigo/stacks

The Real Roboticist (lesson 2): Installing ROS on your Ubuntu LTS 14.04

Introduction

Here you’ll be installing ROS Indigo (which is just a distribution of ROS, like Ubuntu is a distribution of Linux). I recommend ROS Indigo because it is the most stable version to date and will have support until 2019. It also supports the latest LTS version of Ubuntu.

Setup your sources.list

Setup your computer to accept software from packages.ros.org. ROS Indigo ONLY supports Saucy (13.10) and Trusty (14.04) for Debian packages.

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

 Set up your keys

sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 0xB01FA116

If you get a ‘gpg: keyserver timed out’ error, you can try the following command, which adds :80:

sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 0xB01FA116

Installation

First, make sure your Debian package index is up-to-date:

sudo apt-get update

Now for the magic line:

sudo apt-get install ros-indigo-desktop-full

If doing all that returns an error, try copy-pasting this entire section into your terminal.

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'

wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -O - | sudo apt-key add -

sudo apt-get update

sudo apt-get install ros-indigo-desktop-full

Initialize rosdep

The ROS system may depend on software packages that are not loaded initially. These packages are external to ROS and are provided by the operating system. The ROS environment command rosdep is used to download and install these external packages. It’s kind of like sudo apt-get install, except it resolves the dependencies that ROS packages need.
Type the following commands:

sudo rosdep init
rosdep update

Setup ROS environment

source /opt/ros/indigo/setup.bash

Sometimes it’s easier if the variables are automatically added to your session every time a new shell is launched. Do so by typing the following commands into the terminal.

echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
source ~/.bashrc

Rosinstall

This is a handy command-line tool that lets you download source trees for ROS packages with a single command.

sudo apt-get install python-rosinstall

For more help and troubleshooting, visit the official ROS website and for problems with setting up the environment, try http://wiki.ros.org/ROS/Tutorials/InstallingandConfiguringROSEnvironment.

The Real Roboticist: An Introduction to ROS and Python

Arduino. OpenCV. Kinect. AR Drone. CrazyFlie. Turtlebot. SLAM. 3D Mapping.

Some of those words may be familiar to you, some of them may not. There was a time when robotics for us revolved around the Lego NXT and EV3 sets. After a few years, we got fed up with only using 3 motors and 4 sensors and paying a ton for the extra stuff. That’s when we began working with the Arduino, making little ultrasonic sensor bots and enjoying a 16-servo robotic spider. If you’re done impressing people with that stuff (or just want to skip it altogether), read on:

Well, there’s a step ahead of all that, one which gives you the tools of a Real Roboticist and expands any horizons in robotics you ever thought limited you.

Dive into the world of the ROBOT OPERATING SYSTEM (ROS)

ROS is much like an operating system, but it runs on top of Linux. If you’re here, then you probably know the world of things ROS can do to advance your robotics knowledge.

ROS can help you:

  • Create vivid and accurate 3D reconstructions of your environment. The maxed-out team did something like this for their project.

[Image: 3D reconstruction]

  • Completely automate drones like the Parrot AR Drone and the Parrot Bebop, and perform localization in indoor environments. UPenn students used ROS to achieve this:

[Image: SLAM]

  • Create amazing simulations of robots and allow them to interact with the environment, creating the perfect testing tool before actually creating your robot.

     

  • Integrate it with the libraries you know, like OpenCV, NumPy and SciPy, and even the Arduino IDE, to create some amazing robots that can truly work under a range of situations.

Although it may be cliche to say this, especially now, the possibilities are absolutely endless.

There’s also a wealth of documentation online, so you can be sure that you’ll never be stuck on a problem for too long.

That being said, there are some prerequisites before you continue. You need to be good at Python and have at least a basic understanding of how the Terminal works on Linux.

All that being said, continue on to the next lesson to learn how to install ROS Indigo on your Ubuntu LTS 14.04 .

Connect Brushless Motors to Arduino Through ESC’s (ArduinoQuad)

Brushless motors come in handy when your project requires high RPM and low maintenance. A fantastic video by 000Plasma000 explains the properties of brushless motors very concisely.

UPDATE: WordPress is changing some of my code blocks to ‘amp’ and I haven’t yet found a way to fix this. For further guidance (although it would be a good exercise to infer), head over to my github repository.

Things You will need

Hardware:

  • Brushless DC motors
  • Electronic Speed Controllers (ESC’s). Preferably 30 AMP SimonK ESC’s
  • Lithium Polymer (LiPo) Battery. These are essentially the same batteries used by FPV drones or planes or RC cars.
  • Power Distribution Board (optional, if you want to connect more than one brushless motor to the Arduino)

Software:

  • Arduino IDE

Making the connections

For all intents and purposes, you will be treating an ESC+Brushless motor combo as if it were a servo. You’ll see this application in action soon, but for now, follow the diagram and instructions to connect the Motor to the ESC and the ESC to the Arduino.

[Image: ESC wiring diagram]

The ground and power wires of the motor can be interchanged with each other when plugging into the ground and power slots on the ESC. It doesn’t matter which of those slots these wires go to, as long as they stay on the outside; switching them will just reverse the direction of rotation. The signal wire, however, must connect to the middle wire on the ESC, as this transmits the signal. Different motors and ESC’s have different arrangements for this, so it is up to you to figure out which wire carries the signal on both the ESC and the motor; you can then mess about with the other two any way you like.


There will be 3 thin wires coming from the ESC. These will be differently colored, but one of them is for power input, one is for the signal and one goes to ground. You must research the color coding of your wires and then plug the signal wire into PWM port 9 and the ground wire into GND accordingly. DO NOT PLUG THE POWER INPUT WIRE INTO THE ARDUINO; you may fry your computer’s USB port along with the Arduino.

Arduino Code

#include <Servo.h>

int value = 0; // set values you need to zero

Servo firstESC, secondESC; //Create as many Servo objects as you want; you can control 2 or more at the same time

void setup() {

  firstESC.attach(9);    // attach the first ESC's signal wire to pin 9
  Serial.begin(9600);    // start serial at 9600 baud (can change this)

}

void loop() {

//First connect your ESC WITHOUT arming it. Then open the Serial Monitor and follow the instructions

  firstESC.write(value);

  if(Serial.available())
    value = Serial.parseInt();    // Parse an Integer from Serial

}

As is evident, we’re treating the ESC as if it were a servo object. You can add more servo objects if you like; just connect them to a breadboard and to more PWM pins. This code will allow you to send a value through the serial monitor in your Arduino IDE and control the speed of your ESC’s accordingly.
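
And if you’d rather drive it from a Python script than type into the Serial Monitor, here’s a minimal sketch using pyserial. The port name below is an assumption (on Windows it might be COM3), and it’s wise to keep the propeller off while testing:

import time
import serial   # pyserial: pip install pyserial

ser = serial.Serial('/dev/ttyACM0', 9600)   # must match Serial.begin(9600) in the sketch
time.sleep(2)                               # the Arduino resets when the port opens; wait it out
for speed in (10, 50, 100, 150):
    ser.write((str(speed) + '\n').encode()) # Serial.parseInt() on the Arduino reads this value
    time.sleep(3)                           # hold each speed for a few seconds
ser.write(b'0\n')                           # stop the motor before closing
ser.close()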

My Setup

Start by connecting the ESC to the battery. You should hear a little beep. Then, connect the USB to the Arduino and load the program. You should hear another beep. Nevertheless, this all depends on what kind of ESC you have. Mine had black, white and red wires: white was for the signal, so that went to PWM port 9, and black was for ground, so that went to GND.

Brushless Motors in Action

The following video shows my brushless motors in action. I vary the serial input from 10 to around 150, and as expected, higher values result in higher RPM.