Sunday, April 18, 2021

Bird box camera

One of the best presents that my sister and I gave to our aging mother was a bird nesting box with a camera in it. With a video extender, this was wired to her TV so that she could (during nesting season) watch what was going on. Each year a family of Blue Tits was raised, though it didn't always have a happy ending. This image is a photograph of her TV screen!



I wanted to build myself such a nest box here in Massachusetts -- the commercial ones seemed expensive and not very flexible. In my case, I wanted to be able to put it some distance from the house, so power and getting the video back were both going to prove problematic. After some research on cheap WiFi cameras on Aliexpress, I settled on a V380 camera. One of the important factors is whether you can adjust the focus -- and on these you can. You need to remove the four screws that hold the back on, and then you can push the lens in and the circuit board will pop out. On mine, there was a blob of black gunk that held the focus locked, but a little bit of force overcame that. Unscrewing the lens a bit shortens the focus distance. Another factor is that these cameras can speak ONVIF and so can be integrated into many other video recording/monitoring systems. They don't come enabled for this, but the hack is easy:

  • Prepare a microSD card with a file named ceshi.ini. The card should be FAT formatted.
  • The file should contain the following two lines:
[CONST_PARAM]
rtsp=1

  • Put the card into the V380 and power it up. It will go through the usual startup sequence of speaking, but it will include a couple of Chinese phrases.
  • Remove the card and reboot (powercycle). It will now support ONVIF.
This appears to be a fairly fundamental change, as it survived a firmware upgrade! 

The other aspect that you probably want to change is to turn off the voice prompts. This can be done with the V380 Android App -- I installed this on an old phone that I just use for installing apps of unknown origin. If you work your way through the settings, you can disable the voice prompts. Alternatively, while you have the back off, you can just unplug the speaker!

I used these Audubon Society plans for my Bluebird box -- but I didn't include the mounting blocks, as I was just going to screw the box to a wooden 4x4 that used to have a bird feeding platform on it. To mount the camera, I just drilled a large hole in the roof so that the camera could peek through. I threaded the power connection up the pole, through one of the ventilation holes at the top, and then through the camera viewing hole. I used some old flat 4-core telephone cable and just wired appropriate connectors on the end. To make it all watertight and protect the camera, I used a short piece of 3" Schedule 40 pipe with a rubber end cap to keep the water out. It turned out that the steel clip that tightens the cap interfered with the WiFi signal, so I had to remove it. I use Ubiquiti access points, which, when mounted up high (in the attic of the house), seem to have good range into the yard. 

This leaves the problem of power -- currently, I'm using a portable USB power bank, and the camera uses very little power (during the day). According to a cheap USB meter, it consumes around 200mA. At night it is rather worse, as it turns on its infrared LEDs. My long-term plan is to use a 12V lead acid battery with a 12V to 5V buck converter and put them in a waterproof box -- this box isn't expensive, but it also isn't sealed. However, it looks as though it will keep the rain out. My goal is to be able to put two nest boxes fairly close together and just run 5V between them.
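
Before committing to the battery plan, a rough run-time estimate is useful. The sketch below is back-of-the-envelope only: the 200mA daytime figure is from the meter, but the 7Ah battery, the night-time current and the converter efficiency are assumptions of mine.

#include <stdio.h>

int main(void)
{
    /* Assumptions: a 12V 7Ah lead acid battery, a buck converter at about
       85% efficiency, 200mA at 5V by day (measured) and a guessed 400mA at
       night with the infrared LEDs on. Day and night are taken as 12 hours
       each, and depth-of-discharge limits are ignored. */
    double battery_wh = 12.0 * 7.0;                      /* ~84 Wh          */
    double day_w      = 5.0 * 0.2;                       /* 1.0 W           */
    double night_w    = 5.0 * 0.4;                       /* 2.0 W (guess)   */
    double avg_w      = (12.0 * day_w + 12.0 * night_w) / 24.0;
    double input_w    = avg_w / 0.85;                    /* converter loss  */

    printf("Estimated run time: %.0f hours (%.1f days)\n",
           battery_wh / input_w, battery_wh / input_w / 24.0);
    return 0;
}

That works out to roughly two days per charge, and rather less if the lead acid battery is not discharged deeply, so a bigger battery may be needed in practice.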


The image is labelled 1080P, but it is actually only 1280x720. Also, I don't have the camera pointing straight down -- but it gives the general idea.




The bad news is that I finished it in the middle of April, and the local birds all seem to have made their nests by now. However, I will be ready for next year, and I'll probably build a couple more in the fall. Also, I managed to make the bottom piece the wrong size so that the front flap does not go vertical. I discovered this after gluing and nailing it all together. I don't think that the bird building inspector is going to give me any grief!

Another problem is that I have an old Q-See NVR that I use for recording my security cameras, but when I try and point it at this birdbox camera, the server just crashes. Worse, I discover that Q-See is now out of business....

Update: April 20th

It seems that a sparrow has started to make a nest. I had hoped for a more photogenic bird, but I'm still impressed that it moved in within 48 hours. It must be like the local real estate market!






Wednesday, February 21, 2018

Pulse operated slave clocks

I have been interested in clocks ever since I was a kid in the 60s. My father and a cousin had a friendly, ongoing rivalry to develop accurate pendulum clocks. To me it was all a bit strange, but I loved the care and detail that went into creating a pendulum (made out of invar) with its suspension. A special bellows applied the air pressure compensation, and the whole thing was then inside a temperature controlled chamber. It emitted a one pulse per second signal that drove a clock in our dining room. He also built a counter (vacuum tube technology) that would measure the phase offset against the 'pips' that were broadcast on the radio. He reached about a 0.1 second per day level of accuracy -- i.e. about 1 ppm -- which is pretty good.

Fast forward 50 years, and I finally have the time to get his clock display running. This is the clock on the right here. Yes, neither is reading the right time....


The clock on the left came from eBay, from an Indian seller of marine artifacts that come off ships being broken up for scrap. It takes a 2 pulse per second signal and needs a bipolar drive.

I designed a small circuit based around an ESP8266 that could drive all manner of pulse-based clocks -- it can drive up to 30 volts (though not at much current), and the timing of the pulses is entirely under software control. It has a rechargeable battery to provide time during a power outage (but more importantly, to allow time for the position of the hands to be saved to non-volatile storage before shutting down). This means that you don't need to reset the clock after an outage.
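
The drive logic itself is simple. The sketch below shows the idea only and is not the firmware on my board: gpio_write(), delay_ms() and save_hand_position() are hypothetical helpers, and the pin numbers are arbitrary.

#include <stdbool.h>
#include <stdint.h>

void gpio_write(int pin, bool level);         /* drive one leg of the coil  */
void delay_ms(int ms);                        /* simple delay               */
void save_hand_position(uint32_t seconds);    /* non-volatile storage       */

#define COIL_A   4
#define COIL_B   5
#define PULSE_MS 100

static uint32_t hand_seconds;    /* where the hands currently point */

/* Advance a bipolar movement by one step: a short pulse across the coil,
   with the polarity reversed on every tick. */
void step_bipolar(void)
{
    bool forward = (hand_seconds & 1) == 0;

    gpio_write(COIL_A, forward);
    gpio_write(COIL_B, !forward);
    delay_ms(PULSE_MS);
    gpio_write(COIL_A, false);
    gpio_write(COIL_B, false);

    hand_seconds++;
    save_hand_position(hand_seconds);
}

For a 2 pulse per second movement this gets called every 500ms; keeping the clock on time then comes down to comparing hand_seconds against a reference such as NTP and inserting or suppressing pulses to catch up -- easy once the pulse timing is under software control.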



There are already a number of mods to the initial design (I got some footprints wrong), and I didn't think some of the options through properly. I'm getting close to updating the design to V2 and ordering another pack of 10 boards.

All of this is remarkably inexpensive if you are prepared to wait. Aliexpress is my go-to source for components, and Seeed Studio builds the boards. I would be surprised if the BoM cost for a board is over $15 (I have parts to build roughly 10, only because buying smaller quantities than that doesn't save you much). The aluminium box to hold each board is the most expensive component, at around $6 each! I intend to use the PCB process to make the end plates, with the labels and the cutouts already in the right place.

Anyway, the result is that I can drive my father's clock (the one on the right) so that it keeps time. However, the old marine clock loses time. This evening, I finally figured out why -- it turns out that the drive is a rotating magnet which is (I think) supposed to be glued to a small plastic gear that drives the rest of the mechanism. While the clock is lying on its back, the magnet rests on the gear, and it mostly keeps time. In any other position, there isn't enough friction, and the hands don't turn. It seems that my next task is to take it apart and try to reglue the mechanism.

Sunday, June 19, 2016

Netflix and IPv6 -- Problem solved

I have been griping about Netflix's handling of IPv6 as it interacts with their GeoIP database. This leads them to believe that I am behind a proxy (as I use Hurricane Electric's excellent IPv6 Tunnel Broker service). 

Netflix could fix this problem themselves (if they chose to do so). The simplest approach would be to trigger a redirect to an IPv4-only version of the site if they don't like the IPv6 source address. However, they don't want to do that (it is work, I suppose). This leaves me no choice but to take action on my side (I'm getting grief from my kids that they can't watch their shows). The problem doesn't affect viewing Netflix on the big screen, as we use Tivo boxes for that (and I guess they only support IPv4).

My setup at home uses dnscache as a local DNS cache, and I also have a DNS server written in Perl that handles special domains like my SPF record (and its references) and my ip6.arpa space.

To fix the Netflix problem, I added a forwarding entry to dnscache to point netflix.com to my local perl DNS server. The implementation of the handler for this is:
sub no_aaaa_handler {
    my ($base, $qname, $qclass, $qtype, $peerhost) = @_;
    my ($rcode, @ans, @auth, @add);

    $rcode = "NXDOMAIN";
    my $res = Net::DNS::Resolver->new(
                 nameservers => [qw(8.8.8.8 8.8.4.4)]);
    if ($qtype eq 'ANY') {
        $qtype = 'A';
    }
    my $ans = $res->send($qname, $qtype, $qclass);
    if ($ans) {
        @ans   = grep { $_->type ne "AAAA" } $ans->answer;
        @add   = grep { $_->type ne "AAAA" } $ans->additional;
        $rcode = $ans->header->rcode;
    }
    push @auth, @soa if $rcode eq 'NXDOMAIN';

    return ($rcode, \@ans, \@auth, \@add, { aa => 1 });
}



Problem solved -- traffic to Netflix is now forced over IPv4, and they think that they know where we live (actually, Maxmind gets the town right, though most of the others don't. They nearly all get the state right).

Monday, June 13, 2016

Netflix and IPv6

I have been running IPv6 at home for a few years now. I've been using a Hurricane Electric tunnel running over my Comcast IPv4 service. It performs startlingly well, with lower latency than the native IPv6 Comcast service (which wasn't available when I started this process).

All was good until late May 2016, when my kids started asking me why Netflix was complaining about proxies and not letting them watch whatever it is that they watch. I ignored this for as long as possible -- whatever the problem was, it didn't affect my use of Netflix (we use Tivos as the main TV viewing platform). Then I caught a tweet which indicated that this message was a result of running an IPv6 tunnel. Why?

The Netflix help for the issue is completely useless. It was written (to put it charitably) by a technical person who doesn't understand that the vast majority of Netflix viewers have no idea what IPv6 is (or even what IPv4 is). The message is:
Netflix supports any IPv6 connection that is natively provided to you by your ISP. Tunneling services that provide IPv6 over an IPv4 Network are not supported by Netflix, and may trigger an error message.
This message does not give you any clue as to what to do about the problem. Are they really saying "Reconfigure your network connectivity in order to view Netflix."?

I now understand what the problem is -- their GeoIP database is unable to locate the country where the IPv6 address is, and so they don't provide service to it. Does anybody know which GeoIP database they use? Maybe I could get that DB fixed. However, the whole idea behind Netflix is that it is easy and seamless to use (the idea being to discourage people from using pirated content). So why are they being so anti-paying-customer?

The only thing that I can think of is that they are not getting enough complaints. There are two things that they could do that are simple:
  1. Provide a list of IPv6 server addresses that people could block. This would force a fallback to IPv4, and then things would work.
  2. Fix the code so that if an IPv6 address cannot be geolocated, it forces a redirect to IPv4. 
For now, I've had to disable the IPv6 stack on the kids' laptops. This hardly seems like an ideal solution.

Update: See Netflix-and-ipv6-problem-solved for the resolution.

Saturday, May 21, 2016

Adventures with NodeMCU

I've always wanted to build a retro-themed display for some weather data, and I've been thinking about how to do it for a few years. Recently, I started to assemble the hardware to actually make it a reality.

The essential piece of the system is an old-fashioned looking analog meter with a simple mechanism to choose the variable to be displayed (temperature, humidity, etc). I always wanted to be able to display two values on the same meter, so I needed a drive mechanism that could handle that. Eventually I found the VID28-05 which is a dual instrument stepper motor. These are designed for displays like car instrument panels so they are made in large volumes and hence are economical! Also they can be driven at 5 volts at low current.

The device that seemed to be suitable to drive these was the NodeMCU -- this is an ESP8266 based board that is very cheap but includes programming hardware and standard pin spacings. It is programmed in Lua -- which is great for prototyping.

The interface to the variable selector device was a cheap rotary encoder (as used in car stereo equipment), and I wrote a module for the NodeMCU to provide a sensible interface. In the course of doing this, I ended up fixing a number of other issues with the base Lua firmware and became a contributor to the nodemcu-firmware project.
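
For anyone curious, the heart of a rotary encoder driver is a small quadrature decoder. The snippet below is the classic state-table technique, sketched in C rather than taken from the module; read_pins() is a hypothetical helper that returns the two encoder inputs in bits 0 and 1.

#include <stdint.h>

uint8_t read_pins(void);

/* Indexed by (previous_state << 2) | current_state: +1 for a step in one
   direction, -1 for the other, 0 for no movement or an invalid (bouncing)
   transition where both inputs appear to change at once. */
static const int8_t transition[16] = {
     0, -1, +1,  0,
    +1,  0,  0, -1,
    -1,  0,  0, +1,
     0, +1, -1,  0
};

static uint8_t prev_state;
static int32_t position;

/* Call from a GPIO interrupt or a fast poll loop. */
void encoder_poll(void)
{
    uint8_t state = read_pins() & 0x03;
    position += transition[(prev_state << 2) | state];
    prev_state = state;
}

Most encoders of this type produce four counts per detent, so a real driver divides the count down and also has to deal with debouncing and the push switch.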

One of the big issues with the ESP8266 chipset is that there is very limited RAM available and this is normally the limiting constraint on writing Lua code -- it all gets loaded into RAM at runtime and then interpreted.

It occurred to me that if this could be copied into the flash memory (of which there is a lot), and it could be executed directly, then this would enable much larger applications to be written. More importantly it would allow larger sets of standard libraries to be written and shared.

The base object in Lua for a piece of code is a 'function' which corresponds to the C structure 'Proto'.

typedef struct Proto {
  CommonHeader;
  TValue *k;  /* constants used by the function */
  Instruction *code;
  struct Proto **p;  /* functions defined inside the function */
  unsigned char *packedlineinfo;
  struct LocVar *locvars;  /* information about local variables */
  TString **upvalues;  /* upvalue names */
  TString  *source;
  int sizeupvalues;
  int sizek;  /* size of `k' */
  int sizecode;
  int sizep;  /* size of `p' */
  int sizelocvars;
  int linedefined;
  int lastlinedefined;
  GCObject *gclist;
  lu_byte nups;  /* number of upvalues */
  lu_byte numparams;
  lu_byte is_vararg;
  lu_byte maxstacksize;
} Proto;

It was fairly easy to copy the 'code' to flash and then replace the pointer to point at the read-only copy. Very quickly I discovered that, after writing to the flash directly, the memory-mapped, read-only view of the flash did not update. The documentation on the ESP8266 is pretty rudimentary. It is an Xtensa lx106 core with a number of custom peripherals designed by Espressif.
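
The copying step itself is conceptually tiny. This is a simplified sketch rather than the real patch: flash_store() is a hypothetical helper that writes a buffer into a spare flash region and returns the address of its memory-mapped copy.

#include "lua.h"
#include "lobject.h"
#include "lmem.h"

const void *flash_store(const void *data, size_t len);

static void move_code_to_flash(lua_State *L, Proto *f)
{
    size_t nbytes = f->sizecode * sizeof(Instruction);
    const Instruction *ro = flash_store(f->code, nbytes);

    luaM_freearray(L, f->code, f->sizecode, Instruction);   /* reclaim RAM */
    f->code = (Instruction *)ro;      /* the VM only ever reads this array */
}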

After some experimentation, it appears that if you read memory at +32k and +64k, then the original cached data is lost and so, if you access it again, then the data is fetched from the flash chip. I haven't done the experiments to see if the cache can be flushed with a single read.
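
Expressed as code, the workaround is just a couple of dummy reads. This reflects the experiments described above rather than any documented behaviour; the caller passes the memory-mapped address of the region it has just rewritten.

#include <stdint.h>

static void flush_mapped_flash_cache(volatile const uint32_t *mapped)
{
    volatile uint32_t sink;

    sink = mapped[(32 * 1024) / sizeof(uint32_t)];   /* read at +32KB */
    sink = mapped[(64 * 1024) / sizeof(uint32_t)];   /* read at +64KB */
    (void)sink;
}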

However, it turns out that just moving the code into flash doesn't get much memory back. A lot is consumed in strings (the constants, the local variable names, the upvalue names etc). There is a 16 byte Lua header for each string, and an 8 (or possibly 16) byte memory management overhead per block. This eats into the 48k of RAM that is available. So the next step was to move the strings (represented as TString) into flash. The code seemed fairly straightforward...
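
To put rough numbers on that (the string count and length are invented for illustration): 200 distinct strings averaging a dozen characters cost about 200 × (16 + 13 + 8) ≈ 7.4KB -- roughly 15% of the RAM before counting any bytecode or tables -- so moving the TStrings into flash looked well worth the effort.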

However, it didn't work except in the simplest case. The platform would lock up until the watchdog expired and triggered a reset. I had my suspicions that the garbage collector might be trying to write to my flash strings, but this should cause an exception rather than a watchdog timeout.

After some time, I recalled that the NodeMCU code had a custom exception handler that handled exceptions on 8 or 16 bit loads from flash. Apparently, the glue logic to the flash chip can only handle 32 bit loads (although it isn't clear whether this is always true or only applies when there is a cache miss). It turns out that the exception handler also gets triggered when there is a store to the flash region. The exception handler detects that it is a store, and then (effectively) does a busy wait until the watchdog times out. The underlying SDK (from Espressif) tries to register interrupt handlers so that it can print out a nice message and save the exception parameters for the next reboot. It was a quick fix to make writes to the flash trigger an immediate crash.

This did help me track down a number of places in the garbage collector where it was trying to 'mark' my readonly TString objects. I fixed these.

I started out testing with the following code

function validate(method)
   local httpMethods = {GET=true, HEAD=true, POST=true, PUT=true, DELETE=true, TRACE=true, OPTIONS=true, CONNECT=true, PATCH=true}
   return (httpMethods[method])
end

Once I got the copying to flash to not crash the platform immediately, I tried to exercise the code above (after it was copied to flash). 

> validate("GET")
nil

What??? After lots more investigation, it turns out that the table implementation in Lua relies on the fact that two strings with the same value ("GET") are represented as the same pointer. This is no longer true once the value inside the function is stored in flash, and the interactive prompt version is located in RAM. 

I fixed the rawequal function so that it would compare the values of strings (without any significant performance penalty). It then turned out that the table implementation also used another equality checking function, so I needed to fix that as well.
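
The essence of that fix is small. The snippet below is a sketch of the idea rather than the actual patch: when the pointers differ, fall back to comparing lengths and bytes, which costs almost nothing because unequal strings usually differ in length or in the first few bytes.

#include <string.h>
#include "lobject.h"   /* TString, getstr() */

static int tstring_equal(const TString *a, const TString *b)
{
    if (a == b)
        return 1;                        /* the normal, interned case */
    if (a->tsv.len != b->tsv.len)
        return 0;
    return memcmp(getstr(a), getstr(b), a->tsv.len) == 0;
}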

It feels as though I am heading down a rabbit hole.

The current state is that the platform still triggers a watchdog timeout in complicated cases, but simple cases now work, and the result is a significant reduction in the amount of memory consumed by code. I am hopeful that I can get the code to work reliably. Then the task will be to clean it up and make sure that there is no penalty when this copy-to-flash mode is not compiled in.



Sunday, March 22, 2015

UPDATED: IPv6: Comparing Hurricane Electric tunnel and Comcast Native

UPDATE 26th March 2015: I managed to get hold of someone at Comcast who did some troubleshooting and told me that one of my upstream routers was routing my IPv6 traffic mostly via Chicago, whereas it should have been going via NYC, which is a much better path. Anyway, he got the configuration fixed, and it knocked 10ms off my ping times to employees.org. The new ping result:

11 packets transmitted, 11 received, 0% packet loss, time 10005ms

rtt min/avg/max/mdev = 89.544/92.286/98.582/2.450 ms

It still isn't quite as fast as the Hurricane Electric tunnel (maybe 5ms slower). The path to the UK got a lot better and the native IPv6 is only around 2ms slower than the tunnel.

Original Story

I have been running IPv6 for a long time now by using a Hurricane Electric tunnel. It has worked well and has been reliable. I even managed to make Sage level with the tunnel broker certification.

I have been running a couple of NTP servers on IPv6 that are part of the pool.ntp.org pool and I started to wonder whether the amount of noise (jitter) that was being seen by the monitoring system was due to the tunnel. Happily, Comcast had rolled out native IPv6 as far as my cable modem. I have two IP addresses on my service (I was only using one) and this allowed me to bring up a new firewall (an Edgerouter Lite) and configure it to get a prefix delegation from Comcast and get native IPv6. [There will be significant complications when I want to run both prefixes on my LAN]

The obvious thing to do was to test the latency (using ping6) to get to various destinations over the tunnel and native connection. This is where things start to get confusing.

I'm based in Massachusetts and my tunnel is terminated in NYC.

Test1: www.mit.edu (hosted by akamai): 2001:559:11:183::255e

Native: 10 packets transmitted, 10 received, 0% packet loss, time 9014ms
rtt min/avg/max/mdev = 9.113/11.309/17.273/2.383 ms

Tunnel: 10 packets transmitted, 10 received, 0% packet loss, time 9000ms
rtt min/avg/max/mdev = 42.437/43.936/47.292/1.454 ms, pipe 2

Test2: banjo.employees.org 2001:1868:205::19   (west coast)

Native: 10 packets transmitted, 10 received, 0% packet loss, time 9012ms
rtt min/avg/max/mdev = 98.436/100.445/104.056/1.593 ms

Tunnel: 10 packets transmitted, 10 received, 0% packet loss, time 9010ms
rtt min/avg/max/mdev = 83.349/85.815/92.942/2.651 ms, pipe 2

Test3: 2a02:b80:0:6:7b::2 (some random NTP server in the UK)

Native: 10 packets transmitted, 10 received, 0% packet loss, time 9012ms
rtt min/avg/max/mdev = 118.134/119.084/119.891/0.650 ms

Tunnel: 10 packets transmitted, 10 received, 0% packet loss, time 8999ms
rtt min/avg/max/mdev = 88.790/91.656/95.290/2.253 ms, pipe 2

In two out of the three cases, the tunneled connection has less latency than the native connection. In fact, in most cases that I tried, the tunneled connection is better. The reason for the first test above may be that I did the lookup of www.mit.edu over the native connection, so it probably returned an address that was close to that exit point...

So what is going on here? Maybe a traceroute will help -- unfortunately, there seems to be a dearth of reverse IP mappings registered in the DNS, so it is not possible to guess the entire path that the packet is taking.

For employees.org via the tunnel, the IPv6 path is NYC->???->ORD->DEN->SV1->SCL. I'm guessing that SCL is Santa Clara and maybe SV1 is Silicon Valley. This seems a reasonable route. The IPv4 path to get to the tunnel endpoint is direct over Comcast's network to 111 8th Ave in NYC and then directly into Hurricane Electric's network. The Comcast path is almost entirely unnamed routers.

For the random NTP server, the tunneled path goes on Hurricane Electric's network over to the UK and then there are a couple of unnamed hops. The Comcast connection goes via Illinois before jumping across the pond.

One thing does stand out in the Comcast traceroutes:

 3  te-0-14-0-0-ar01.woburn.ma.boston.comcast.net (2001:558:200:18c::1)  17.495 ms  20.326 ms  15.029 ms
 4  2001:558:0:f6c1::1 (2001:558:0:f6c1::1)  51.400 ms  47.531 ms  47.752 ms
 5  he-0-11-0-0-pe04.350ecermak.il.ibone.comcast.net (2001:558:0:f8ce::2)  43.888 ms  42.834 ms  40.432 ms

The hop from Woburn to 2001:558:0:f6c1::1 takes a long time. 

The hops across the pond both take around 70ms (round trip). This is pretty reasonable given that the path is maybe 4,000 miles each way. 
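
As a sanity check: 4,000 miles is about 6,400km, and light in fibre travels at roughly 200,000km/s, so the one-way propagation delay is about 32ms and the round trip about 64ms before any router or queueing delay -- consistent with the ~70ms observed.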

In short, I'm sticking with Hurricane Electric for now. I've reached out to people at Comcast to figure out what is going on; I didn't have much luck at first, but I am now actively engaged with (I think) someone who can understand the problem and take action.


Saturday, January 17, 2015

Designjet 500 woes continue

I replaced the print heads, carriage belt and ink cartridges. Reassembly was the reverse of disassembly. However, it keeps indicating that one or more of the ink cartridges is faulty. After a bit more disassembly and reseating the ribbon cables that attach the ink station to the main circuit board, the faults are cleared. After a bit of printing (which makes the printer shake), the faults return. I suspect that the cable connectors are the problem. I'm not sure what to do next.

On to another project...