
Saturday, 26 November 2016

Completed Bird Box Camera

Last time I set up a Raspberry Pi running from Power over Ethernet. Now that I knew I could connect to the machine remotely, I could proceed with setting up the camera.

As I was going to install the camera in the bird box, there was not going to be much light available. Adding a visible LED would likely be a big deterrent for birds, given that the box would be glowing brightly! So I decided to use a NoIR camera with an IR LED for illumination. The camera was installed as normal and I set up the live stream previously used for the remote robot arm.

I bought some 850nm IR LEDs. You can't just wire these directly to the 5V pin on the GPIO header or they will burn out. To prevent this, you need to add a current-limiting resistor.

To calculate the resistor value, use Ohm's law: R = V / I

Here V is the voltage dropped across the resistor, which is the supply voltage minus the LED's forward voltage, and I is the forward current of the LED. The resistor should limit the current to at or below this value.

For these, the forward voltage was quoted as 1.5V and current as 100mA so R is:


R = ( 5V - 1.5V ) / ( 100mA / 1000 )
R = 3.5V / 0.1A
R = 35 Ohms

We divide the current by 1000 because it is quoted in milliamps and the formula needs amps.

The nearest resistor I had to this value was 39 Ohms, so this was put in place. A larger resistor results in a smaller current through the LED, so this was OK.
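
As a sanity check, the calculation is easy to reproduce in a few lines of Python (just a sketch of the arithmetic above, nothing specific to the Pi or the parts I used):

def led_resistor_ohms(supply_v, forward_v, forward_ma):
    # Ohm's law: R = V / I, where V is the drop across the resistor
    # and the current is converted from milliamps to amps
    return (supply_v - forward_v) / (forward_ma / 1000.0)

print led_resistor_ohms(5.0, 1.5, 100)  # 35.0, so round up to the nearest value to hand, e.g. 39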

Because I was happy for the LED to be powered 24/7, I connected it to the 5V and ground pins on the Raspberry Pi GPIO header.


(As an aside, I practised setting up the LEDs first with an Arduino; it turns out mobile phone cameras have some sensitivity to IR light, so looking at the LED in preview mode on the phone's camera was a quick way of telling whether it was working or not.)

Focussing the camera proved to be difficult. The camera has a fixed-focus lens, and the distances involved here (10-20 cm inside the box) are too small for a sharp image.
The lens can, however, be unscrewed with great care, and there are pages on the web that describe how to do this. I can only say this is an incredibly fiddly operation, so fiddly in fact that the first time I tried I ended up scratching the lens and had to buy a new camera.

Fortunately, the new one came with a very handy widget that allows you to change the focus very easily:



Of course, setting up components in a nice and clean desktop setting is one thing; squeezing the Pi and PoE splitter into a small tupperware box and making sure the cables weren't catching was quite another, especially when space was at a premium.

The last tricky bit was positioning the camera at an angle to the underside of the box roof so that it would point straight down. I succeeded with a piece of cardboard shaped into a wedge.

Here is the completed bird box, without full waterproofing:


The view of the inside is pretty good:



The box is now waterproofed and out in the garden, on the end of a 30m cable. I don't really expect to see much activity given it's winter but it'll be good to run for a few days or weeks to check out how well it's been put together and make repairs if necessary.

Monday, 10 October 2016

Taking Raspberry Pi outside



Last spring a pair of wrens moved into a bird box in the garden and they managed to successfully raise 6 chicks. While this was wonderful to watch from a distance, I thought it would be an interesting project for next season to use a Raspberry Pi and NoIR camera to be able to see what's going on inside.

Incidentally, I learned that when wrens fledge, they don't so much fly the nest as tumble from it...



Step 1: This post considers the problem of getting the Raspberry Pi camera to the bird box.

I decided on two requirements:

1. I don't want to power it from battery packs: I anticipate keeping everything running for a significant time, and I don't want to disturb the birds during nesting.
2. The Pi will be outside the range of my WiFi so communication will need to be wired.

It seemed that using Power over Ethernet would satisfy both these considerations.

I used a TPLink Power over Ethernet kit to inject and split the power. My old NetGear router was dusted down for testing this.

The output power cable had a 5.5x2.1mm jack, so I needed a converter to the Raspberry Pi's micro USB power socket.

The splitter has a switch for 3 different voltages. It's important to set this to 5V for the Raspberry Pi input.
This is what the prototype looked like with some old cables I had.

The Raspberry Pi switched over to the wired network and disabled WiFi on startup. It was simple to point my laptop at the old router's network and open a session using PuTTY.

Next I will get a longer cable that's suitable for running outdoors. The problem of attaching this to the bird box will be covered in a future post.

Tuesday, 13 September 2016

Keeping cool

It's been a while since my last post, so it's time for a quick one to get back into the groove!

I finally caved in and applied the Windows 10 upgrade on my old laptop just before the free period expired, mainly because the old install was slowly becoming unusable.

The upgrade wasn't seamless: I had to download the full ISO image since there wasn't enough space for the default upgrade, and I had two false starts and roll-backs, but it succeeded in the end.

Credit where it's due, Microsoft have certainly improved disc management. The upgrade recognised the C: drive was small and did some sensible splitting of folders across that and the larger D: drive. The OS is also much faster than my old Windows 7 (although I'll give it time to clog up with updates before I definitively say it's better in that respect).

While this whole experience did push me over to Linux for my primary PC, I'm forced to use Windows for working from home.

One not so nice new thing I've started experiencing is the laptop overheating. This can manifest in a couple of ways: first, the response of the machine slowly gets worse until it grinds to a halt; second, the machine just shuts down.

Now it's a fairly old machine, so dust build-up could be a problem. I checked this first and the fan is running smoothly without any obvious blockages (I always have the laptop on a hard desk with plenty of space for ventilation). I haven't opened the case up though, so it's possible there's dust inside that I can't see.

This has also only started happening since the Windows 10 upgrade. I have a feeling Windows 10 might be exercising the hardware more than Windows 7 did, but I don't have any solid evidence.

Finally, August and September have had some pretty hot days, 25 degrees C or more indoors, which is a high starting point for the laptop already.

The solution was simple!

The cooling effect is remarkable, and it works for me too.

(The recent focus on problems with Lithium Ion batteries made me wonder if the laptop battery was running any extra risk with this overheating so I took it out after one of the shut-down events. Fortunately the battery itself was pretty much still at room temperature so the management infrastructure looks like it's working well - I run off mains when working from home.)

Wednesday, 8 June 2016

Interference when driving motors on Arduino

Having bought a new DC motor, I wanted to set it running via the Raspberry Pi and Arduino motor shield in a similar way to previous Robot arm efforts.
I set up the Raspberry Pi to connect to the Arduino via USB and a Hub. Power for the motor was taken via an external battery pack, as shown below:



(I've recently discovered Fritzing, which is a great tool for generating schematics. I downloaded the Linux version from http://fritzing.org/download/ )

The driver code is based on earlier Robot arm code. It reads a character from a terminal on the Raspberry Pi and sends it to the Arduino:

# Read characters from stdin and send to Arduino via serial
import termios, fcntl, sys, os, serial

arduino = serial.Serial('/dev/ttyACM0')
fd = sys.stdin.fileno()

oldterm = termios.tcgetattr(fd)
newattr = termios.tcgetattr(fd)
newattr[3] = newattr[3] & ~termios.ICANON & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSANOW, newattr)

oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)

try:
    while 1: # This is a tight loop with high CPU
        try:
            c = sys.stdin.read(1)
            arduino.write(c)
            # repr() : Return a string containing a printable
            #    representation of an object
            print "Read character from stdin", repr(c)
        except IOError: pass
finally:
    termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
    fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)

In this case on the Arduino, 'a' is interpreted as motor forward, 's' motor backward and space ' ' stop.

This all worked to begin with: I could start and stop the motor. However, after a period, normally less than a minute, I would lose response to any character input and had to reset the Arduino.

Investigating further, it seemed like the USB connection had changed to a different tty (e.g. to /dev/ttyUSB0) and back.
I then looked in dmesg and saw the following:

 usb disabled by hub (EMI?), re-enabling…
EMI is ElectroMagnetic Interference. The solution in this case was to remove the hub and run the USB cable directly to the Arduino. I'm speculating that EMI from the motor starting and stopping was the cause, and that removing the hub and reducing the cable length was enough to bring the sensitivity below the threshold.
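
If the port does keep jumping around, an alternative to hard-coding /dev/ttyACM0 is to search for it at startup. This is only a sketch (it assumes pyserial is installed and that the Arduino shows up as ttyACM* or ttyUSB*); removing the hub was enough for me.

import glob, serial

def find_arduino():
    # Try the device nodes the Arduino usually appears as
    for pattern in ('/dev/ttyACM*', '/dev/ttyUSB*'):
        for port in glob.glob(pattern):
            try:
                return serial.Serial(port)
            except serial.SerialException:
                continue
    raise IOError("No Arduino serial port found")

arduino = find_arduino()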

Thursday, 19 May 2016

Simple data transfer across local network

I wanted to play around with sending and receiving XML data between my laptop and a web server running on the Raspberry Pi. At the moment, I'm happy for all this to happen over my local network.

I decided to use web.py because it makes it easy to set up a simple server in Python. It can be installed with the following:

sudo easy_install web.py

It's then very simple to write a web server that takes XML passed via a POST method and saves it as a file:

import web, urllib

urls = (
 '/index', 'index'
)

class index:
    def GET(self):
        x = web.input()
        return "<b>text=<b>"

    def POST(self):
        # This is the xml string passed from the client
        x = web.data()
       
        # Remove the header "data="
        cleaned = x[5:]
         
        # Write the XML string to file
        filedir = '/home/pi/Scratch/test'
        fout = open(filedir + '/test.xml', 'w')

        # Decode url characters
        fout.write(urllib.unquote_plus(cleaned))
        fout.close()
        return x

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()
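
Before wiring up the app, the server can be exercised with a few lines of Python from another machine. This is just a test sketch: the XML string is a stand-in for whatever the app produces, and the IP address is wherever the Pi's server is listening.

import urllib, urllib2

xml = "<SelectedMealCollection><SelectedMeal>Test</SelectedMeal></SelectedMealCollection>"

# Mimic the client app: the payload goes in a form field called "data",
# which is why the server strips the first 5 characters
payload = urllib.urlencode({'data': xml})
response = urllib2.urlopen("http://192.168.0.2:8080/index", payload)
print response.read()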


The client was the Supermarket Planner app. For testing, I hooked the code into the existing print call. The first step was to convert the data collection to an XML string.

Because I used an ObservableCollection of SelectedMeal objects, this was simple using an XmlSerializer:

SelectedMealCollection mealData;
XmlSerializer xs = new XmlSerializer(typeof(SelectedMealCollection));
          
string xml = "";

using (StringWriter writer = new StringWriter())
{
       xs.Serialize(writer, mealData);
       xml = writer.ToString();
}


Finally, I used an async method to create and call an HttpClient with the payload created above.

Note this creates the "data=" part of the payload from the KeyValuePair which needed to be trimmed by the server before using the XML.


private async Task<string> post( string payload )
{        
    using (var client = new HttpClient())
    {
         var postData = new KeyValuePair<string, string>[]
         {
               new KeyValuePair<string, string>("data", payload),
         };

         var content = new FormUrlEncodedContent(postData);
         var response = await client.PostAsync("http://192.168.0.2:8080/index", content);

         if (!response.IsSuccessStatusCode)
         {
              var message = String.Format("Server returned HTTP error {0}: {1}.", (int)response.StatusCode, response.ReasonPhrase);
              throw new InvalidOperationException(message);
         }

         var data = await response.Content.ReadAsStringAsync();
         return data;
    }
}

Thursday, 24 March 2016

Using a Servo

I've decided to see how much control I can add to the robot arm project so I bought a servo. This is a small motor that can be set to specific positions based on a signal. Such a device has 3 input lines, 2 for power and one for the signal.

First off, I wanted to use my new Ubuntu laptop to develop the Arduino code. Unfortunately, after installing the Arduino IDE and updating to version 1.6.7 I got the following error trying to upload the blink sketch:


avrdude: ser_open(): can't open device "/dev/ttyACM0": Permission denied

This is because in Ubuntu the current user isn't automatically given permission to use the USB device. To grant permission I followed these steps:

  1. whoami (get the username)
  2. sudo usermod -a -G dialout username : modify the user, appending ('-a') it to the dialout group ('-G dialout'), which allows access to serial ports via the files in /dev
  3. sudo chmod a+rw /dev/ttyACM0 : give all users ('a') read/write ('rw') access to /dev/ttyACM0

Now that I was able to upload to the Arduino, I connected the servo as shown below:
  • Brown : Ground
  • Red : +5V
  • Yellow : S(ignal)

One thing I noticed: I had the servo plugged into SERVO_2 on the motor shield; connecting to SER1 resulted in the controller chip rapidly heating up. Fortunately no damage was done.

I tried using the Sweep sketch with this and then the following

#include <Servo.h>

Servo servo;
void setup() {
  // Attach the servo on pin 9 to the servo object
  servo.attach(9);
}

void loop() {
   servo.write(20);
   delay(1000);

 
   servo.write(160);

   delay(1000);
}    


A problem with this setup was that the servo would sometimes jitter or stall. I suspect this was due to interference from the Arduino, perhaps related to the loop continually rewriting the angle. I managed to reduce this almost entirely by removing the delays from the above sample and setting the angle via serial input instead:

#include <Servo.h>

Servo servo;
void setup() {
  // Attach the servo on pin 9 to the servo object
  servo.attach(9);
  Serial.begin(19200);
  Serial.println("Ready");
}

void loop() {

  if (Serial.available())
  {
    char ch = Serial.read();

    switch(ch)
    {     
      case '1':
        servo.write(40);
        break;
      case '2':
        servo.write(60);
        break;
      case '3':
        servo.write(80);
        break; 
      case '4':
        servo.write(100);
        break;     
      case '5':
        servo.write(120);
        break;              
    }
  }
}


I was also happy with this because I'll be controlling from the Raspberry Pi in a similar way to this in the future.
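
As a rough sketch of what that Raspberry Pi side could look like, the Pi only needs to write single characters to the Arduino's serial port. This assumes pyserial, the same /dev/ttyACM0 device as before and the 19200 baud rate from the sketch above.

import serial

# The baud rate must match Serial.begin(19200) in the sketch above
arduino = serial.Serial('/dev/ttyACM0', 19200)

# '1'..'5' map to the angles in the switch statement above
for ch in ['1', '3', '5', '2']:
    arduino.write(ch)
    raw_input("Press Enter for the next position...")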

Thursday, 17 March 2016

Supermarket planner updates

I recently fixed a bug in the Supermarket planner which was caused by a problem with the binding of XML to the DataGrid. This is a good point to explain how I did the binding and the problems I solved.

The XML containing the data looks like this:

   <Meal name="Mushroom Omelette" type ="Vegetarian, Quick">
      <Ingredients>
        <Ingredient>Eggs</Ingredient>
        <Ingredient>Mushrooms</Ingredient>
        <Ingredient>Cheese</Ingredient>
        <Ingredient>Salad</Ingredient>      
      </Ingredients>
    </Meal>
    <Meal name="Lamb Chops + Veg" type="">
      <Ingredients>
        <Ingredient>Lamb Chops</Ingredient>
        <Ingredient>Brocolli</Ingredient>
        <Ingredient>Courgette</Ingredient>
      </Ingredients>
    </Meal>


The grid looks like this. For each <Meal> element there is a row with text from the name attribute.
Double clicking on the row expands the grid to show the ingredients. You can also edit these.




I used the XmlDataProvider for binding; this needed to be in the Window.Resources to be accessible by the save event.

<Window.Resources>
      <!-- Meal Data needs to be in the window resources so it can be accessed easily by the save event -->
      <XmlDataProvider x:Name="Meals" x:Key="MealData" Source="./Data/SuperMarketDataMeals.xml" XPath="/SuperMarketData/Meals/Meal"/>


The key parts of the DataGrid XAML are as follows.

The DataGrid ItemsSource is set to {Binding}, with the DataContext coming from the {StaticResource MealData} defined earlier.

I use a DataGridTemplate column for the rows, using {Binding XPath=name} for the text.

For the expanded details grid I use a RowDetailsTemplate with its own DataGrid, ItemsSource now with {Binding XPath=Ingredients/Ingredient}

The double click handler simply changes the DetailsVisibility flag on the clicked row:

var row = (DataGridRow)sender;
row.DetailsVisibility = row.DetailsVisibility == System.Windows.Visibility.Collapsed ?
              System.Windows.Visibility.Visible :
              System.Windows.Visibility.Collapsed;

The bug was that when a row was minimised by double clicking, the grid details still contained the expanded row. It showed up when dragging the next meal down across to the calendar grid: instead of that meal appearing, the meal whose details had been shown was dropped instead. For example, from the screenshot above, when the Omelette was minimised, dragging Lamb Chops would end up with the Omelette dropped.

To fix this, I needed to refresh the data grid when collapsing the details.

I needed to call CommitEdit first, otherwise I would get a runtime exception: "Refresh is not allowed during an AddNew or EditItem transaction".

if (row.DetailsVisibility == Visibility.Collapsed)
{
    mealGrid.CommitEdit(DataGridEditingUnit.Row, true);
    mealGrid.Items.Refresh();
}





Wednesday, 27 January 2016

Remote Robot Arm, properly

Being able to control the Robot arm using a VNC client to connect to the Raspberry Pi is all very good but it's not a very elegant form of remote control. So, I decided to build an Android App on top of the work to date.

The App should have the following behaviour:
  1. It should be able to display a stream from a web cam attached to the Raspberry PI.
  2. Control of the robot would be by pressing down on buttons on the screen. Releasing the button should stop the robot.
  3. At this point I only need to control this over the local network.

Remote Connection

Requirement 3 meant a simple TCP Client-Server model would suit me well.

As this is only on the local network, I have control over the IP address assigned by the router. I hard-coded the host name and port on both Server and client.

On the robot-arm server it's only a matter of adding a few lines to enable TCP

   import socket

   TCP_IP = "192.168.0.12"
   TCP_PORT = 5005
   BUFFER_SIZE = 20 # Normally 1024, but we want a fast response

   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   s.bind((TCP_IP, TCP_PORT))
   s.listen(1)

   conn, addr = s.accept()

and in the loop

   while 1:
      data = conn.recv(BUFFER_SIZE)
      if not data: break
      # send the characters to the Arduinos
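
(As a quick check before building the Android client, the server can be exercised from a laptop with a few lines of Python. This is just a test sketch, using the same address, port and control characters as above.)

   import socket

   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   s.connect(("192.168.0.12", 5005))
   s.send("o")   # start one of the motors
   s.send(" ")   # stop
   s.close()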


On the client side I created a new java.net.Socket and use a PrintWriter to send the control characters:

   Socket socket = new Socket(hostName, port);
   PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
   out.println(ch);


Note that as this is an Android App, it's mandated that sockets are not opened on the main thread. It is very simple to extend AsyncTask and implement the doInBackground method.

Finally, we need to add Internet permission to the Android manifest file:

<uses-permission android:name="android.permission.INTERNET" />

Streaming a Video feed

Keeping things simple I thought an HTTP stream would be lightweight and easy to implement. It turns out most solutions described on the web introduce a large latency of several seconds.
Fortunately however, I found a solution in the PiStreaming project on GitHub https://github.com/waveform80/pistreaming.git 

Following the install instructions and starting with python server.py I was able to view the stream from the camera with sub-second latency.

Building the App

The app has two regions: the top half shows the video stream while the bottom has the buttons to control motion.




I use a WebView to show the video stream. In OnCreate it's configured with the following:

   WebView piWebView = (WebView)findViewById(R.id.webview);
   WebSettings webSettings = piWebView.getSettings();
   webSettings.setJavaScriptEnabled(true);
   piWebView.setWebViewClient(new WebViewClient()
   {
      @Override
      public boolean shouldOverrideUrlLoading(WebView view, String url)  
      {
         return false;
      }
   }); 

   piWebView.loadUrl("http://192.168.0.12:8082");

JavaScript must be enabled because the HTTP server uses it, and the shouldOverrideUrlLoading override was added to stop the stream spawning a new web browser window.

To implement the required behaviour of the buttons, I used the OnTouchListener. The MotionEvents ACTION_DOWN and ACTION_UP capture the button press and release

m_upButton.setOnTouchListener(new OnTouchListener(){
   @Override
   public boolean onTouch(View v, MotionEvent event) {
    try
    {
     if (event.getAction() == MotionEvent.ACTION_DOWN)
     {
      new TcpClient().execute("o");
     }
     else if (event.getAction() == MotionEvent.ACTION_UP)
     {
      new TcpClient().execute(" ");
     }
    }
    catch(Exception ex)
    {
     return false;
    }
  
    return true;
   }
  });

There are a couple of problems here. I shouldn't need to create the TCP client each time, and there is an intermittent bug where the ACTION_UP event doesn't fully make it to the robot, in which case the arm continues moving until I tap another button.

Running the App to control the robot

On the Raspberry Pi, start both the web server and the robot-arm server. Then open the Robot Arm App. Touching the buttons causes the appropriate motors on the arm to move; releasing the button stops movement.

The video shows a demo of it in action. I'm using a low-spec Android phone and hadn't connected all the motors, but it gives an impression of how this works.


The code for the Android App has been added to the git repository https://github.com/jscott7/Robot-Arm.git 

Tuesday, 26 January 2016

It was the best of Linux, the worst of Linux

Taking an update and upgrade on the Raspberry PI, I had a problem with unmet dependencies for raspberrypi-bootloader.

I ran the suggested fix of apt-get install -f and thought nothing more of it.

Unfortunately as it turns out there were problems....

I shut down as always ( sudo shutdown now ) but when I tried to restart a couple of days later, nothing.

I didn't get any response at all; connecting to the TV via HDMI yielded a blank screen. But when I plugged the SD card into my laptop I could see all the files, so at least the card hadn't died.

This gave me an opportunity to copy over the files and re-image the card from the ISO that I still had from the original install.

I still use Win32DiskImager to set up the SD card, but as Windows only sees the boot partition I couldn't format the whole card. The Disk Management tool allowed me to format both partitions, but not to remove them.

Fortunately, there's a command-line tool, DISKPART, that has much more control:
  1. Open an Administrator command prompt
  2. Run diskpart to open the tool
  3. Enter list disk to see which disk is the SD card
  4. Important point here, make sure you choose the right disk or you could wipe your OS
  5. If your SD card is disk 1, enter select disk 1
  6. Enter clean to format and clear all the partitions. 
I then copied my image back onto the SD card, plugged it back into the Pi, and I was back up and running. The final steps were to set up WiFi, run the update/upgrade and copy back my files. All in just over an hour.

So the worst of Linux: I've never had a machine rendered unusable by an update, on any flavour of Windows.

But the best of Linux: it took me just over an hour to restore the OS, take the updates and have all my applications and development in precisely the state they were in when things went wrong. Without multiple reboots.