
WunderBot

To continue my recent work with the Wunderground API, I have now hacked together a new StatusNet bot plugin. With this plugin, you can ask the bot for the current weather conditions with just the word 'weather' and the five digit US zip code of your choice. So long as Wunderground recognizes the zip code, it should return some basic details. This plugin is currently active over on my Matrix instance, just ping @Niobe.

@jpope It is currently Overcast and feels like 64.0 F (17.8 C) in Jefferson City. The humidity is 58% and the wind is From the NNE at 2.9 MPH Gusting to 5.8 MPH. #cloudy http://www.wunderground.com/US/MO/Jefferson_City.html
Niobe (niobe)'s status on Monday, 17-Sep-12 22:18:02 CDT - matrix.jpope.org

I have looked into doing this before but hadn't had much success; mostly, my lack of proper coding skills has been the holdup. Previously, I found a bot on identi.ca (source) that already did what I've done here, except the execution is quite different. That bot (which currently does…
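For anyone curious how a notice like the one above gets built, here is a rough sketch of the idea in bash. To be clear, this is not the plugin itself (that lives in StatusNet's plugin system); it just assumes a Wunderground API key, the conditions endpoint, and my best guess at the field names, with `jq` doing the JSON parsing:

```bash
#!/bin/bash
# Sketch only: fetch current conditions for a zip code from the Wunderground
# API and build a one-line status message. The API key ($WU_KEY), endpoint
# and field names here are assumptions, not taken from the actual plugin.

ZIP="$1"   # five digit US zip code passed on the command line
URL="http://api.wunderground.com/api/${WU_KEY}/conditions/q/${ZIP}.json"

JSON=$(curl -s "$URL")

# Pull out the handful of fields the bot reports (names assumed).
CITY=$(echo "$JSON"     | jq -r '.current_observation.display_location.city')
WEATHER=$(echo "$JSON"  | jq -r '.current_observation.weather')
FEELS=$(echo "$JSON"    | jq -r '.current_observation.feelslike_string')
HUMIDITY=$(echo "$JSON" | jq -r '.current_observation.relative_humidity')
WIND=$(echo "$JSON"     | jq -r '.current_observation.wind_string')
LINK=$(echo "$JSON"     | jq -r '.current_observation.forecast_url')

echo "It is currently ${WEATHER} and feels like ${FEELS} in ${CITY}. The humidity is ${HUMIDITY} and the wind is ${WIND}. ${LINK}"
```

Handing that string back to StatusNet as a reply is then the plugin's job; the sketch only covers the fetch-and-format half.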

Weather Bots

This has been me off and on the past couple of days:

Playing with bash and the wunderground api.
Jeremy Pope (jpope)'s status on Sunday, 02-Sep-12 10:13:33 CDT - micro.jpope.org

My Matrix bots have been providing me with a few weather details ever since I set that StatusNet instance up. I have a little bash script that will pull the data on a set schedule via cron. Previously, I had been using Google's "secret" weather API, as it pulled quickly, was easily parsed, didn't require an account and had just the few details that I wanted. Judging from the error message I get when attempting to open the XML feed in my browser, they've blocked my IP and/or domain. After a quick search, it turns out the API was shut down. This has also had an effect on my XMPP jsonbot, as it uses the Google feed for its weather as well. :(

So, now it was time to find another place to pull from. I prefer to pull from an API as opposed to scraping a page somewhere. I really hate having a script fail due to the page being scrape…
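For reference, the cron side of this is nothing fancy. A minimal sketch of the setup follows; the paths, account name, zip code and instance URL are placeholders, not my actual configuration, and the posting step just leans on StatusNet's Twitter-compatible API:

```bash
#!/bin/bash
# weather-post.sh -- sketch only; paths, account name and zip code are placeholders.
# Grab a short conditions string and post it as a notice on the StatusNet instance.

# get_conditions.sh stands in for whichever weather API is in use at the moment;
# this is the piece that broke when Google's feed went away.
STATUS="$(/home/jpope/bin/get_conditions.sh 65101)"

# Post the notice over the instance's Twitter-compatible API with basic auth.
curl -s -u "niobe:${BOT_PASSWORD}" \
     --data-urlencode "status=${STATUS}" \
     "https://matrix.jpope.org/api/statuses/update.json" > /dev/null
```

A crontab line along the lines of `0 * * * * /home/jpope/bin/weather-post.sh` takes care of the "set schedule" part.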