Getting Out of Firefighting Mode (2015)

If Prometheus was worthy of the wrath of Heaven for kindling the first fire upon earth, how ought all the Gods honor the men who make it their professional business to put it out? – John Godfrey Saxe

Sooner or later, any seasoned software developer, programmer, or project manager finds themselves in firefighting mode – where it’s a matter of putting out a fire before it spreads and takes the project down. Some people think that a ‘finger in the dike’ is a more appropriate metaphor – and at times that is true – but I’m writing about firefighting today.

Getting out of firefighting mode is conceptually simple. Like any fire, a project fire needs certain conditions to exist in the first place: fuel, oxygen, and a source of ignition.

Oxygen is easy to identify in a software project – it’s what the project needs to start and survive in the first place: users.

The fuel, all too often, results from poor software and system administration practices and poor management decisions.

Like fire, the ignition sources can be many. It can be the fuel rubbing together to create friction (bad decisions culminating in a burst of flame), a visiting pyromaniac (someone taking advantage of an exploit), or enough oxygen (users) blowing over a hot spot (friction) to cause a burst of flame.

The only way to get out of firefighting mode with a software project is to deal with the cause of the fires. Sure, putting out the fires is important, but sometimes you can’t put them out fast enough – and unless you treat the underlying cause, the fire will gut the project.

Almost all the time, the people who have been fighting the fires have opinions on what is causing them – and they typically have a plan to tackle the causes, if only they had the time or the management approval. Despite Hollywood’s best efforts, time travel is not yet viable – and that leaves us with management decisions. Management decisions over a period of time become company culture. A culture that is derisive of good software project management is almost impossible to change unless something drastic happens.

About 15 years ago, I came up with a phrase that has proven pretty accurate since: management doesn’t smell the fire until the flames are licking their posteriors. Until they understand that there is a problem that needs to be solved, they will not try to solve it. How long that takes varies with how much attention they are giving to the issues and – where the real problem usually lies – how much they trust the people they hired to warn them about these things.

If you can’t solve the management/leadership problem, you’re sunk. You’re done. Cash in your paychecks and update your resume.

If you can get management to see the need for change – probably with the help of someone in finance who will point out how much money is being spent fighting fires – the only tried and true way to get out of firefighting mode is:

(1) Take a hard look at whether the project is dead or not. If you’re so bogged down in maintenance that nothing else can move forward, it’s time to figure out what Plan B is. If not…

(2) Start implementing best practices, from source control to documentation of the project itself. When there are too many things going on, find a finite area to begin controlling and start controlling it. It’s an uphill battle but it can be done with management making the path clear.

(3) Draw lines in the sand. If fires don’t meet certain criteria you set, don’t fix them. This is triage, and it has to be decided with the overall project in mind – a sketch of what such a line might look like follows below.
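Here’s a minimal sketch, in Python, of what drawing those lines might look like. Everything in it is hypothetical – the fields, the threshold, the criteria themselves – because the real ones have to come out of your own project’s triage decisions.

```python
# A minimal triage sketch. The Fire fields, the threshold, and the
# criteria below are hypothetical placeholders for whatever lines in
# the sand your own project draws.

from dataclasses import dataclass

@dataclass
class Fire:
    description: str
    users_affected: int   # the oxygen: how many users feel it
    data_loss_risk: bool  # damage that can't be undone later

def fight_now(fire: Fire, user_threshold: int = 100) -> bool:
    """True only if the fire crosses a line drawn in advance."""
    if fire.data_loss_risk:
        return True
    return fire.users_affected >= user_threshold

fires = [
    Fire("Login intermittently returns 500s", 500, False),
    Fire("Typo on the About page", 5, False),
    Fire("Nightly backup silently failing", 0, True),
]

# Fires below the line go to the root-cause backlog, not the hose.
for fire in fires:
    print(f"{fire.description}: {'fight now' if fight_now(fire) else 'backlog'}")
```

The point of writing it down, even this crudely, is that the criteria get decided once, calmly, instead of being re-argued in the middle of every blaze.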

Image at top courtesy U.S. Fish and Wildlife Service, Southeast Region, and made available under this Creative Commons License.


Exploring Lunar Code

Last week I found this article about the Apollo 11 code being on GitHub, and I took a look beyond what the article mentions. The code on GitHub is written for the Apollo Guidance Computer (AGC) – and Margaret Hamilton, at right, was the Director for the development of that code. In fact, that’s a stack of the code next to her, back in the days when development was done on paper.

It was all the buzz for a week or so, and while it was pretty cool to look over the project, particularly the code comments, it interested me more to consider how much attention this project has drawn – and which projects from my lifetime will be looked back on with the same level of interest.

I can’t think of a single one. This is from an era where we aspired to put our feet on the moon, not look around for Pokémon. Can you think of one project that will generate this much interest in 3-4 decades?


The Limits of Open Data and Big Data

A couple of days ago, one of the many political memes rolling around compared the number of police shootings under different presidencies. People were, of course, spewing rhetoric about the number of lethal shootings between one administration in the 1980s and one in the present. I’m being deliberately vague because this is not about politics.

The data presented showed that there were fewer shootings under one administration than the other, but it was just a raw number. It wasn’t normalized against the number of police at the time, the number of arrests, or the number of police per capita.

I decided to dig into that.

The U.S. population has gone from roughly 227 million people circa 1980 to 318.9 million as of 2014. That’s a fairly substantial change. But how many police were there in the 1980s? Searching for how many police officers there were in the 1980s was simply useless.
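For illustration, here’s the normalization the meme skipped. The populations are the figures above; the shooting counts are deliberately made-up placeholders, since the real numbers were exactly what I couldn’t find:

```python
# Raw counts vs. per-capita rates. The populations are the Census
# figures mentioned above; the shooting counts are made-up
# placeholders, because the real ones are exactly what I couldn't find.

population = {"circa 1980": 227_000_000, "2014": 318_900_000}
shootings = {"circa 1980": 300, "2014": 400}  # hypothetical counts

for era in population:
    rate = shootings[era] / population[era] * 100_000
    print(f"{era}: {shootings[era]} shootings = {rate:.3f} per 100,000 people")

# With these placeholders, the larger raw count (400) is the *lower*
# rate per capita - the raw number alone tells you very little.
```

Even with invented counts, the arithmetic makes the point: a larger raw number can still be a lower rate once the denominator is accounted for.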

I went to the Bureau of Justice Statistics page on local police. It turns out that they have only conducted a police officer census from 1992 to the present, in four-year increments – which means they don’t have the data from the 1980s. If that’s true – if no data was collected before then – it means decisions were being made without basic data analysis back then, but it also means we’ve hit a limit of open data.

And I’d expended too much time on it (shouldn’t that be easy to find?), so I left it at that.

Assuming that the data simply does not exist, the limit comes from simply not collecting it. I find it difficult to believe that this is the case, but I’ll assume good faith. So the open data is limited.

Assuming instead that the data exists but is simply not available, the open data is still limited.

The point here is that open data has limits, either defined by a simple lack of data or a lack of access to the data. It has limits by collection method (where bias can be inserted), by the level of participation, and so forth.

And as far as that meme, I have no opinion. “Insufficient data” – a phrase I’ve used more often than I should be comfortable with in this day and age.

Disruptive vs. Sustainable

It has been driving me a little nuts over the last few years to see all the drivel posts on ‘disruptive’ this and ‘disruptive’ that, particularly when ‘sustainable’ was the catch-phrase from a few years ago that still lingers doubtfully in the verbiage of non-profits. In fact, I tend to gloss over ‘disruptive’ these days when it shows up, because so many people don’t balance it with sustainability.

You see, I was fortunate enough to read The Innovator’s Dilemma: When New Technologies Cause Great Firms To Fail back when it first came out in 1997 – I still have a copy of the first revision. So for this post, and some thoughts on a potential startup or two, I referred back to what I consider the best work out there on disruption and sustainability.

Here are the high points from the Introduction of Christensen’s book. I use ‘product’ interchangeably with ‘service’ in this context, since a service is a product of sorts.

Sustaining Technology:

  • Can be discontinuous or radical (many internet posts confuse this with disruptive, when it can be either).
  • Can be of an incremental nature – or, as I like to think of it, iterative.
  • Improves performance of established products along the dimensions of performance that mainstream customers in majority markets historically value.
  • Accounts for most of the advancements in an industry.

Disruptive Technology:

  • Results in worse product performance, at least in the near term (in the majority market context).
  • Brings to market a very different value proposition.
  • Under-performs in established markets.
  • Has new fringe features/functionality.
  • Is typically cheaper, smaller, simpler and, frequently, more convenient to use.
  • Lower margins, not greater profits.
  • Typically is embraced by the least profitable customers of the majority market.

These are very, very simple ways of looking at the differences between the two. A startup can utilize disruptive technologies and enter the market, but there has to be a plan for sustainability (other than being bought by another company) to present itself as a value proposition to anyone involved.

And that’s the key issue that most of the posts I’ve read on disruptive anything fail to mention. Sure, there is risk, but where there is risk, there should be risk mitigation. Don’t get me wrong – I understand solving problems as they come – but presenting only one half of disruptive technology (or disruptive anything, for that matter) is disingenuous.

The disruption of today, to be successful, should still be successful tomorrow. Sustainability. Sustainability is why alternating current is used to transmit power over long distances; marketing is why people still think Edison was more of an inventor than he was, and that Marconi invented radio.

Writing and Tech Documentation.


The tough part about good documentation is that everything has to make sense. All of it has to ‘line up’ along the user stories for how the documentation will be used.

There’s the high-level story and the lower-level stories that make it work – and those same low-level stories can have multiple dimensions of their own if written conscientiously with modern tools.

Documentation is usually dry and boring. Dry and boring is great reading for those who read the DOS manuals and Unix manuals end to end (I did), and who can amaze other people with how little they’re understood when they talk. That’s where the social engineering aspects of documentation come into play – or, as writers would call it, writing. The documentation has to be sticky in the marketing sense, such that when someone reads it, they remember it. On the software engineering side – the technical side – less so. On the user side, it has to be… usable.

We’ve come a long way when it comes to our capacity to organize documentation online¹. The actual writing, though, has to lean toward an SEO style of writing – repeating key phrases and using the words and phrases a user might search for. This requires understanding how a product is expected to be used as well as how it might be used. The latter is less important for the planned use, but matters for the disruptive use cases that might pop up on the radar and that a business can leverage quickly.

Simply put, good technical writing allows for what is planned, and enables potential uses¹.

¹ Something I’ll be writing about some more this week.

Crisis Informatics

People who have known me over the years know I’ve always had a passion for responding to disasters. I can’t tell you why it is that when most people are running away, I have a tendency to run in – something I did before I became a Navy Corpsman (and learned how to do better because of it). Later, that became a stab at what this is all about, by first enabling the capture of the data itself by enabling the communication. I even worked for a year at a company that does weather warnings and other emergency communication, and was disappointed at how little analysis was being done on the data.

Years later, I now read ‘The Data of Disasters‘. Some folks have been working on some of the things that I had been thinking about and working on as I had time, and they seem to have gotten further. I’m excited about it, since the Alert Retrieval Cache was necessarily closed and didn’t gain the traction I would have liked – and open systems present issues with:

  • Context: A post may be about something mentioned earlier (a.k.a. ibid) but not tagged as such because of size limitations.
  • Legitimacy: Whether a source is trustworthy or not, and how many independent sources are reporting on something (sketched in code after this list).
  • Timeliness: Rebuilding a timeline in a network full of shares/retweets can pose a problem because not everyone credits a source. If you go by brute force to find source dates and times, you can pull on threads – but you’re not guaranteed of their legitimacy in unit time.
  • Perspectives: GIS allows for multiple perspectives on the same event in unit time.
  • Reactions: When possible, seeing when something at a site changes when all of the above can change in unit time.
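To make the legitimacy point a little more concrete, here’s a toy sketch. The Report structure and the exact-string claim matching are hypothetical simplifications of a much messier reality:

```python
# A toy sketch of the legitimacy problem above: count the independent
# sources behind a claim, collapsing shares/retweets onto their origin.
# The Report structure and exact-string claim matching are hypothetical
# simplifications of a much messier reality.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Report:
    claim: str
    author: str
    shared_from: Optional[str] = None  # original author, if credited

def independent_sources(reports: List[Report], claim: str) -> int:
    """How many distinct origins reported this claim?"""
    origins = {r.shared_from or r.author
               for r in reports
               if r.claim == claim}
    return len(origins)

reports = [
    Report("Main St. bridge is out", "alice"),
    Report("Main St. bridge is out", "bob", shared_from="alice"),  # a share
    Report("Main St. bridge is out", "carol"),  # independent confirmation
]

print(independent_sources(reports, "Main St. bridge is out"))  # -> 2
```

Even this toy version runs into the timeliness problem from the list: when a share doesn’t credit its source, shared_from stays empty and every share inflates the count.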

It gets a bit more complicated from there – for example, languages can be difficult, particularly with dialects and various mixes of languages (such as patois in the Caribbean, where I got into all of this). There’s also a LOT of data involved (big, quick and dirty data) that needs to be cleaned before any analysis can happen.

This is all why I envisioned it as a closed system, but the world believes differently, interspersing pictures of food with information of actual use. Like it or not, there’s data out there.

The expansion of data from a source over unit time, as mentioned in their paper on Crisis Informatics, is not something I had thought of. I imagine they’re doing great work up there at the Department of Information Science in the College of Media, Communication and Information at CU Boulder.

I’ll be keeping an eye on what else they publish. Might be fun to toss a Beowulf cluster at some data.

Better Mousetraps.

In ‘Solving All The Wrong Problems‘, Allison Arieff tackles something that has been bothering me for some time:

…We are overloaded daily with new discoveries, patents and inventions all promising a better life, but that better life has not been forthcoming for most. In fact, the bulk of the above list targets a very specific (and tiny!) slice of the population. As one colleague in tech explained it to me recently, for most people working on such projects, the goal is basically to provide for themselves everything that their mothers no longer do…

I’ve always wanted to work on things that matter, that actually have a positive impact on the world or society. Over the last 2 decades, I’ve had the opportunity to work on a few things that did.

It seems ever more rare to find work like that. And it doesn’t seem to be a pressing issue for business, either. The great revelation for me was the lawsuit between iFart and PullMyFinger back in 2009, where millions of dollars were spent on making sounds on demand that most mammals can make for free – but the problem is more endemic and less obvious.

All those features that you don’t use are bought and paid for by someone – yes, even Free Software and Open Source¹.

Allison Arieff is correct, and it’s something that is irksome. We’ve been solving all the wrong problems. It would be good to work on some of the right ones – but the risk of working on one of the right ones is high, because people don’t necessarily want to pay for ‘useful’.

The market drives.

¹ TANSTAAFL: There ain’t no such thing as a free lunch. Even if you pay nothing for software, it doesn’t write itself, and the humans who write it require basic amenities. Remember that the next time your company leverages open source and doesn’t give back to the project in some way.