Posts Tagged ‘Technology’

Disaster At Home: What I’ve Learned

May 4th, 2011

Disclaimer: I’m terrible at blogging, and know it. My last post before this one was last November. However, I feel strongly about what I’ve written here, and hope to continue learning from those around me.

On April 27th, 2011, at least four EF4 tornadoes touched down near Chattanooga, TN (my hometown). Having used social media for disaster relief for over a year, I quickly became involved in the immediate aftermath of the storms’ destruction.

For over a year, I’ve been involved in the organized group of volunteers behind CrisisCommons, a nonprofit organization that uses technology and social media to help in times of crisis worldwide. In their own words:

CrisisCommons seeks to advance and support the use of open data and volunteer technology communities to catalyze innovation in crisis management and global development.

CrisisCommons actively supports CrisisCamp, a barcamp event, which seeks to connect a global network of volunteers who use creative problem solving and open technologies to help people and communities in times and places of crisis.

A month ago, right after the Japan quake, I wrote the following in a LinkedIn post inside a Chattanooga-based group to explain a little more about what CrisisCommons does:

A CrisisCamp is where volunteers get together (often programmers, though there is normally work for people with any skill set) to help in times of crisis around the world. There was a very big effort right after the earthquake in Haiti, during the flooding in Pakistan, and during many other disasters that occurred last year.

The movement is centered around the CrisisCommons community, of which I’m a member, and communication happens mainly through Twitter using hashtags such as #crisiscamp, #crisiscommons, and others.

I first learned of CrisisCommons in early 2010 following the Haiti earthquake. I participated in a single CrisisCamp in Boston in February 2010 to help with the Haiti relief efforts. Since that first involvement, I’ve helped disseminate CrisisCommons information over Twitter, and have recently joined the behind-the-scenes infrastructure team that keeps the servers and website up and running (even when there is no crisis), although I haven’t yet done anything particularly useful there, and haven’t been as involved in the CrisisCommons movement as I would have liked.

Over the last week, however, disaster struck where I least expected it: home. I never imagined that I would be putting my experience to use in my own (United States) backyard. I have observed a lot over the last week about how real-time crisis data affects communities. A lot of good has been done by dozens of individuals in my community. However, there is always room for improvement, and there were issues I noticed that I’m still not convinced have a good solution. I have learned much over the past week that I hope never to forget, and I’d like to record those lessons publicly.

  1. Ask organizers before making information public
    On Thursday morning, I was wide awake at 4:30am, probably because my adrenaline had been pumping since 6:30pm the evening before! My housemate was not too far behind me, as he was headed into his regular 10-hour shift of translating at a hospital. Being the social media nutcase I am, I immediately got onto my iPhone and perused Twitter and began tweeting out some information. My housemate forwarded me an email (from HIS smart phone) that a deacon at my church (New City Fellowship) had sent, asking volunteers to show up at New City in the morning. I tweeted that out, asked folks to start using some hashtags for the relief effort, and then fell asleep again for a couple hours.

    By the time I made it to New City, my tweet for volunteers had gotten retweeted, and was even relayed across a radio station – and then it became clear to me that while the leadership at New City was very willing to have the volunteers, they were not expecting a huge turnout!

    Lesson Learned: Check with the organizers to see if the information should be made public or not.

  2. When many people want to help, collaboration is vital
    In the wake of the tornadoes, something very interesting happened. While dozens of people began networking together over Twitter and Facebook (which wasn’t all that surprising), several websites appeared with lists of needs. The problem wasn’t a lack of information. The problem was too MUCH information in too many locations. Nothing was centralized, and a lot of work was being duplicated, on multiple websites, making it difficult to find “all” the current data.

    Over the weekend, I met with @StratParrott, @JonFMoss, and @brandipearl to talk about a website Strat had started to address just this issue. Strat had done (and is still doing) an amazing job collecting and centralizing data into one location, and has even volunteered his time and talents to help with the relief effort.

    While the website quickly became recognized as one of the go-to sites for disaster relief information in the Chattanooga / Cleveland / Ringgold areas, there were STILL too many websites posting data on their own. A centralized location for all of the data was still needed.

    In future times of crisis, I would love to see a community come together and work on a single project, where all of the data is located in one spot. Last night, I briefly discussed this problem with a CrisisCommons IT volunteer in Seattle who has also noticed this kind of problem in disaster relief responses. An idea has been floated of building a two-way data-sharing application, where multiple websites can display the data (and solicit data from individuals) while sharing that data with everyone else using the application.

    I will continue discussing this idea with the CrisisCommons community, and am hopeful that this kind of problem can be solved – and that in times of crisis, websites will choose to collaborate rather than do their own thing.

  3. Even if an organization has thousands of volunteers, it does not mean it has local recognition
    Following the tornadoes, while I had in the past tried to organize some folks to talk about CrisisCommons, I learned that there wasn’t much local recognition of the work, purposes, and advantages of working with this specific community. I also quickly realized that, because the organization was an unknown, it was not a good time for me to try to recruit more people. Had I continued to try, I would have become annoying, offered unwanted “advice”, and harmed the organizing efforts that were already doing good.

    A key facet of my education as a Community Development major at Covenant College is that if a local community doesn’t embrace an idea, then the “community developer” should NEVER force that idea onto the community. Similarly, I believe that in times of crisis, if something good is already being done in the community, that effort should not be hindered. Yes, there is always room for improvement. But there comes a point where it is important to step back and decide whether or not your idea and your voice really matter. In the long run, will doing something just a little bit different really make a big difference?

    Usually, in times like this, when life and death are NOT on the line, the answer is no. (Of course, there ARE times when experienced rescue crews from “outside” a community MUST be given absolute authority. There is a big difference between “relief” and “development” – relief is doing something for people that they cannot do on their own.)

    My lesson: spend time outside of the immediate aftermath of a disaster to forge relationships, spread the word, and find people to support the work that an established organization such as CrisisCommons is doing. That way, when crises do hit, the relationships will already be formed.

These are just three of my observations and “what I learned” moments over the last 7 days. A lot of good is happening here, and a lot of help is still needed for hundreds (thousands?) of people in our area. I will continue to do what I can through social media – and when I have the time, through volunteering with my hands – to help. But these three lessons, I believe, are vital to remember: check with organizers; collaborate, collaborate, collaborate; and build relationships before times of crisis.

As a former Community Development major at Covenant College, I truly envision myself (and hope to be) someday using technology and social media as a full-time job to help those suffering in poverty and in the immediate aftermath of natural disasters worldwide. If you have any comments, questions, or suggestions, I’m all ears!


MSN Bot Behaving Poorly

April 24th, 2010


As I was going through my old emails at work today (I’m still at TechMission, and will be there for at least another month), I came across a write-up that I composed and sent to the other three members of our tech team (we manage the technical aspects of TechMission’s websites, and we maintain the web server). I wrote this last fall and had meant to post it onto my blog, but forgot about it.

This is some research that I conducted, along with my recommendations for addressing a high server load problem that we were having at the time. Note that my entire time at TechMission has been in the role of an AmeriCorps intern, and everything I have done in this role, including the work described in this post, has been completely self-taught in the recent past.

There was a problem…
Last fall (2009), TechMission’s servers were fairly unstable in terms of performance. Our websites were slow, server load would routinely sit above 5.0 on the 5- and 15-minute averages, and we constantly had to restart Apache.

After I did some research into why we were having so many problems, I found that our website was being hammered by robot crawlers that were not respecting our robots.txt directives. One of these robots, surprisingly, was the crawler used by MSN.

MSNbot caused TechMission’s server load to rise to very high levels. In the third week of September, we had two days where top reported our average server load during business hours hovering between 10 and 20. For a typical server, the ideal load would be under 1.0 per CPU core. In effect, we were experiencing DDoS symptoms.
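For context, load averages are best judged against the machine’s core count. A minimal sketch of that sanity check on Linux (assuming `nproc` and the `/proc/loadavg` interface are available):

```shell
# Print the 1/5/15-minute load averages alongside the CPU core count,
# since "high" load is relative to how many cores can absorb it.
cores=$(nproc)
read one five fifteen _rest < /proc/loadavg
echo "cores=$cores load1=$one load5=$five load15=$fifteen"
```

A sustained load well above the core count, as we were seeing, means processes are queuing for CPU rather than running.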

When our load first increased to high levels, we did not know the cause. While searching for it, one of my coworkers checked our WHM Apache status log and suggested that I do the same. As I scanned the document, I noticed that several IP addresses in the same range were showing up multiple times throughout the status log. I immediately became suspicious, because this log is a snapshot of current activity on the server – processes that are literally running at the moment the log is loaded.

I looked up several of these specific IP addresses. All of them were associated with the same “user”: msnbot. I then went into one of my open PuTTY sessions, issued netstat | grep msn, and found several current connections to the server.
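That kind of check can also be scripted. The exact netstat flags vary by platform; this sketch assumes a Linux netstat (where column 5 is the remote address and column 6 the state) and tallies established connections per remote IP, which makes a single hammering host stand out:

```shell
# Count ESTABLISHED TCP connections per remote IP, busiest first.
# On Linux, `netstat -tn` prints Foreign Address in column 5 and
# the connection state in column 6.
netstat -tn 2>/dev/null \
  | awk '$6 == "ESTABLISHED" { split($5, a, ":"); print a[1] }' \
  | sort | uniq -c | sort -rn | head
```

The same awk/sort/uniq pipeline works equally well on the first column of an Apache access log to find the noisiest clients over time.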

We found a solution…
I decided to test my theory that MSNbot was the cause of our high server load. After getting approval from my coworkers (I was only an intern at TechMission), I went into WHM and added these IP addresses to our blacklist. Server load dropped like a rock, from 20 to under 10 within two to three minutes.
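WHM hides the mechanics, but the effect of such a blacklist can be sketched as an ordinary Apache 2.2-era deny rule (the directory path and address range below are placeholders, not the actual list we used):

```
<Directory "/var/www/html">
    Order allow,deny
    Allow from all
    # Placeholder range for the misbehaving crawler
    Deny from 65.55.0.0/16
</Directory>
```

Blocking at the firewall instead of in Apache saves even more resources, since the requests never reach the web server at all.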

Other people have experienced similar issues
According to several sources, msnbot is widely known to behave poorly. On April 16th, 2009, a blog post was published showing that msnbot used the wrong robots.txt file when indexing a website: instead of using the site’s own robots.txt, it has been known to follow the instructions of a completely different (unknown) website.[1] In February, other people complained of this same problem.[2]

The phenomenon of msnbot slowing servers down is not new. In 2006, an article was published with a detailed report on how several webmasters and server administrators experienced distributed denial-of-service (DDoS) symptoms as a result of the bot.[3]

Traffic Sources
Approximately 76% of our traffic comes from search engines. Of this, 68% comes from Google and 4% from Yahoo.[4] From July 1st to August 31st of this year, Bing provided 2,695 visitors to our site and ranked as the 3rd-largest contributing search engine (behind Google and Yahoo). From October 6, 2008 through today, October 6, 2009, Bing ranks 5th among search engines and provided 5,435 visitors to our site. Out of these visitors, we had a 59% bounce rate.

Based on the research cited above, I have a couple of ideas. First of all, we need to do more research to find out whether blocking msnbot will eventually affect our traffic from Yahoo, since Microsoft and Yahoo have begun partnering. The partnership was announced publicly on July 29th, 2009.[5]

Since we now have more aggressive robots.txt instructions, perhaps we could begin to unblock a few of the MSN IP addresses (not all of them) and see what happens. I think it would be interesting to create a log of all MSN connections on our server and find out what the bot does. We do know for a fact that not all of its IP addresses are currently blocked, as I have occasionally seen the bot show up under addresses other than the ones we blocked.
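For reference, “more aggressive” robots.txt instructions might look something like the fragment below. The disallowed path is a placeholder; Crawl-delay is a non-standard extension, but msnbot/Bing honored it, which makes it a gentler lever than an outright IP block:

```
# Hypothetical robots.txt fragment: throttle msnbot instead of
# (or in addition to) blocking its IPs at the firewall.
User-agent: msnbot
Crawl-delay: 10
Disallow: /cgi-bin/
```

Of course, this only helps if the bot actually reads the right robots.txt, which is exactly what the log would let us verify.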

Based on the data we obtain by unblocking a few more MSN IP addresses and logging all of the MSN connections, I think we could come back in another month or two and determine whether our robots.txt instructions are being followed.
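Such a connection log is easy to build with a cron job. A minimal sketch (the log path and one-minute cadence are assumptions, and matching on “msn” relies on netstat’s reverse-DNS resolution of the bot’s addresses):

```shell
# Run from cron, e.g. every minute:
#   * * * * * /usr/local/bin/log-msnbot.sh
# Appends a UTC timestamp and the count of current TCP connections
# whose resolved remote hostname contains "msn" (case-insensitive).
echo "$(date -u +%FT%TZ) $(netstat -t 2>/dev/null | grep -ci msn)" \
  >> /var/tmp/msnbot-connections.log
```

Plotting those counts against server load over a month would show directly whether the bot’s activity tracks our performance problems.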




[4] TechMission’s Google Analytics

