
Raspberry Pi Says What? 3/14/15 9:26:53.589 #PiDay2015

I’ve had a few members ask about my silly Pi Day TwitterBot hack, and why someone would even want to do such a thing.  The real answer is a geek compulsion, but the thinking went something like this:

Pi Day 2015 was going to give us Pi via the date to 10 digits (3/14/15 9:26:53 reads as 3.141592653), but if you included the milliseconds you could go to 13 digits.  Well, to be fair, you could go to 1,000,000 digits if you had a timer accurate enough to produce a fractional second to that many places.  But let's face it: true millisecond accuracy in IT gear is unlikely anyway.  Are you happy with the clarification, realtime programmers?!


I realized that if I could loop tightly enough to trigger at a discrete millisecond boundary, I could do something at that fateful moment.  And because a geek can, a geek should.  So what should happen at that moment? Do a SQL update, save a file, update a config?  No, there was only one thing to do: tweet.  The next trick was to create a bot. I use Raspberry Pis for just about all maker projects now.  After years of playing with microcontrollers I finally switched over.  Pis are cheaper than Arduinos when you consider adding I/O, they run a full Linux OS, and many add-on boards work with them. (And yes, I monitor my Pis at home with Orion, so be sure to check out Wednesday's SolarWinds Lab, which is all about monitoring Linux.)

But one thing neither Arduinos nor Pis have is a real-time clock, which is a bit of a problem if you're planning to do time-sensitive processing.  So here's the general setup for the project, and I'll spare you the actual code because 1) I made it in < 20 minutes, 2) no one will ever need it again, and 3) mostly because it's so ugly I'm embarrassed.  I used Python because there were libs aplenty.

The Hack

  1. Find "real enough" time, i.e. an accurate offset.  I'm too cheap to buy a GPS module, so I used NTP. However, a single NTP sync isn't nearly enough to get millisecond-ish accuracy, plus the Raspberry's system (CPU) clock drifts a bit.  So first, keep a moving average of the offset; I used ntplib:

    >>> import ntplib
    >>> c = ntplib.NTPClient()
    >>> response = c.request('europe.pool.ntp.org', version=3)
    >>> response.offset
    -0.143156766891
    >>> response.root_delay
    0.0046844482421875

    Next, poll 30 times a minute and deposit the results into a collections.deque.  It's a double-ended buffer object, meaning you can add or remove items from either end (and it's easier than rolling your own circular buffer).  Setting the overall length in 30-sample increments lets you expand the running average beyond a single update cycle, something like the sketch below.
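
    A minimal sketch of that part, assuming the europe.pool.ntp.org pool from above; the names and the two-second spacing are my guesses for illustration, not the original script:

    import collections
    import time

    import ntplib

    OFFSET_SAMPLES = 30                        # one update cycle's worth of polls
    offsets = collections.deque(maxlen=OFFSET_SAMPLES)
    ntp = ntplib.NTPClient()

    def poll_offset():
        # One NTP query; push the reported offset onto the rolling buffer.
        response = ntp.request('europe.pool.ntp.org', version=3)
        offsets.append(response.offset)

    def average_offset():
        # Moving average over however many samples we have so far.
        return sum(offsets) / len(offsets) if offsets else 0.0

    for _ in range(OFFSET_SAMPLES):            # roughly 30 polls over a minute
        poll_offset()
        time.sleep(2)
    print(average_offset())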

  2. Keep an eye on clock drift.  The actual trigger loop on the Raspberry would need to hammer the CPU, and I didn't want to be about to trigger on the exact millisecond only to get hit with an NTP update pass.  Instead, I'd fire based on a best guess of the accumulated drift since the previous sync.  So, whenever the NTP sync fired, I saved the change in average offset from the internal clock, also into a deque.  On average the Pi was drifting 3.6 secs/day, or 0.0025 secs/min.  Because this value was constantly recalculated, it corrected for thermal effects and other physical factors, and the drift was remarkably stable.
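
    A rough sketch of that bookkeeping, assuming a 30-sample window; the function names are mine, not the original code:

    import collections
    import time

    drift_rates = collections.deque(maxlen=30)   # offset change per second of wall time
    last_offset = None
    last_sync = None

    def record_sync(current_offset):
        # Call after each NTP averaging pass to update the rolling drift rate.
        global last_offset, last_sync
        now = time.time()
        if last_offset is not None:
            drift_rates.append((current_offset - last_offset) / (now - last_sync))
        last_offset, last_sync = current_offset, now

    def predicted_drift():
        # Drift accumulated since the previous sync, from the rolling rate.
        if not drift_rates or last_sync is None:
            return 0.0
        return (sum(drift_rates) / len(drift_rates)) * (time.time() - last_sync)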

  3. OAuth, the web, and Twitter.  Twitter is REST-based, and if I were building an app to make some cash, I'd probably either be really picky about choosing a client library or implement something myself.  But there was no need for that here, so I checked the Twitter API docs and picked tweepy.

    import tweepy

    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.secure = True
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)

    # If the authentication was successful, you should
    # see the name of the account print out.
    print(api.me().name)

    # If the application settings are set for "Read and Write" then
    # this line should tweet out the message to your account's timeline.
    api.update_status('Updating using OAuth authentication via Tweepy!')

    I gave my app permission to my feed, including updates (DANGER!), generated keys, and that was about it.  Tweepy makes it really easy to tweet, and pretty nicely hides the OAuth foo.

  4. The RESTful bit.  As sloppy as NTP really is, it's nothing compared to the highly variable latency of web transactions.  With a REST call, especially to a SaaS service, there are exactly 10^42 things that can affect round-trip times.  The solution was twofold.  First, make sure the most variable transaction, OAuth, happened well in advance of the actual tweet.  Second, you need to know what the average LAN -> gateway -> internet -> Twitter REST service delay is.  Turns out, you guessed it, it's easy to use a third deque object to do some test polls and keep a moving average to at least guesstimate future web delay, along the lines of the sketch below.
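
    A sketch of that third moving average, with a cheap authenticated call standing in for the real tweet's round trip; the names are mine, and api is the tweepy.API object from step 3:

    import collections
    import time

    rest_latency = collections.deque(maxlen=30)

    def sample_rest_latency(api):
        # Time a lightweight authenticated GET as a proxy for the tweet's round trip.
        start = time.time()
        api.me()
        rest_latency.append(time.time() - start)

    def average_rest_latency():
        return sum(rest_latency) / len(rest_latency) if rest_latency else 0.0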

  5. Putting it all together - the ugly bit. The program pseudocode looked a little something like this:

    // Throughout, twitterTime = time.time() – {offset rolling average} – {predicted accumulating drift},
    // i.e. the corrected network time rather than the raw CPU time.

    Do the oAuth

    While (twitterTime < sendTime - 20)
    {
       Do the NTP moving average poll
       Update the clock drift moving average
       Update the REST transaction latency moving average
       Wait 10 minutes    
    }

    While (twitterTime < sendTime - 2)
    {
       Do the NTP moving average poll
       Update the clock drift moving average
       Wait  1 minute
    }

    While (twitterTime < sendTime – {RESTlatency moving average})
    {
       Sleep 1 tick // tight loop
    }

    Send Tweet
    Write tweet time and debug info to a file
    End
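
    Rendered as (very rough) Python, reusing the sketches from steps 1, 2, and 4, and reading the 20 and the 2 above as minutes; SEND_TIME and the tweet text are placeholders, not the real values:

    SEND_TIME = time.time() + 3600       # placeholder target; the real one was Pi second

    def twitter_time():
        # Corrected network time: CPU clock minus NTP offset minus predicted drift.
        return time.time() - average_offset() - predicted_drift()

    # api is the tweepy.API object from step 3.
    while twitter_time() < SEND_TIME - 20 * 60:
        poll_offset()
        record_sync(average_offset())
        sample_rest_latency(api)
        time.sleep(600)

    while twitter_time() < SEND_TIME - 2 * 60:
        poll_offset()
        record_sync(average_offset())
        time.sleep(60)

    while twitter_time() < SEND_TIME - average_rest_latency():
        pass                             # tight loop

    api.update_status('Happy Pi Day! #PiDay2015')    # placeholder text
    # ...then write the tweet time and debug info to a file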

Move Every Tweet, For Great Justice

I watched my Twitter feed Saturday morning from the bleachers at kickball practice, and sure enough at ~9:26 am, there it was.  This morning, with a little JSON viewing, I confirmed it was officially received in the 53rd second of that minute (something like the snippet below).
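
That check takes about two lines with tweepy; the tweet ID here is a placeholder, not the real one:

    status = api.get_status(123456789012345678)   # placeholder tweet ID
    print(status.created_at)                       # UTC timestamp, second resolution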

Why do geeks do something like this?  Because it's our mountain, it's there and we must climb it.  There won't be another Pi Day like this, making it singular and special and in need of remembrance.  So, we do what we do. The only question is: how closely did I hit the 589th millisecond?  Maybe if I ask Twitter, really nicely...

7 Comments
ecklerwr1
Level 19

I like the idea of coming up with an average for how long the LAN -> gateway -> internet-> Twitter REST service delay was going to be so you could send the tweet in advance to try and hit at the exact right time!  Pretty crafty pHubb!

patrick.hubbard
Level 13

Or neurotic, take your pick.  Actually, between the variable transaction delay, network offset, a swag at the post-POST return latency component, and a teeny CPU taking a beating where at best the milliseconds were >= 589, it was a guess anyway. ;-)

jkump
Level 15

What a nice way to celebrate the day!!! 

dsheridan
Level 9

Wow - I think I need to turn in my hat with the propeller on it...

Good stuff!

sqlrockstar
Level 17

Wonderful stuff. Well done, good sir, I said WELL DONE! (^HT)

patrick.hubbard
Level 13

I just realized I had a bug in the pseudocode.

   twitterTime = time.time() – {offset rolling average} – {drift rolling average} 

corrected to

   twitterTime = time.time() – {offset rolling average} – {predicted accumulating drift} 

Fixed. ;-)

clubjuggle
Level 13

Thanks. That would have caused me major issues in 2115. ;-)

About the Author
I'm the Head Geek and technical marketing director at SolarWinds (which basically means I'm a mature geek in the service of the product team). When I say geek I mean Geek, with extreme prejudice. I started writing assembly on my Apple II, got a BITNET email account in 1984, ran a BBS @ 300 baud, survived X.25, abused Token Ring, got some Netscape.com JavaScript award love in '96, and my hack flight notification service still backs aa.com. These feats of course made me quite the chick magnet for many years. Along the way in various jobs I've been a developer, SE, PM, PMM, and now principal evangelist. (Let us all join hands around the server.) Over 10 years at SolarWinds I've hatched our online live demo systems, managed the SolarWinds Certified Professional program, launched the Head Geek program, helmed SolarWinds Lab, and these days I'm focused on Cloud, DevOps, and helping IT admins learn the new skills they'll soon need not just to get ahead, but even to maintain their roles. I'm always looking for new and more fiendish ways to use our products, just like our customers. And when I have a few spare minutes I fly a little, when the weather is good.