Our Technology And Ethics.

The headlines this past week have put Google’s relationship with Israel under scrutiny, as the company fired employees who were against what Israel has been doing and protested accordingly. I’ve looked at some of the news stories – some sympathizing with the former employees, some implicitly supporting Israel and the order that people expect within companies.

I won’t comment on that because that’s political and this isn’t about politics, or who is right or wrong.

Of Swords And Blacksmiths.

Throughout my career as a software engineer, I’ve had to deal with ethical issues, and I’ve navigated them as best I could – and some of them were personally quite challenging.

Ever since we figured out how to bonk each other over the heads with stones (stone technology), it seems we’ve found increasing occasion to do so. It could be that the first use of such weapons was for hunting or defense of the tribe from predators – likely both – but eventually we learned to turn them on ourselves.

I’m sure at some point there was a blacksmith who refused to make swords because of where the points and edges were aimed. Other blacksmiths just made them. There always seems to be someone else to kill, or to defend against. We could get into the Great Gun Debate, but we fall into the same problem with that. There’s always some human creeping around who wants to kill someone else for glorified reasons, and because of that we sleep with things under our pillows that could very well be used to kill us just as easily. It’s not a debate. It’s a criticism of humanity and an unfortunately honest one at that.

“We all lived for money, and that is what we died for.”

William T. Vollmann, No Immediate Danger: Volume One of Carbon Ideologies

Sometimes my ethics require me to move on, which I did without protest a few times over the decades: There’s always someone else who needs a job more than they care about an ethical issue if even they see the ethical issue. In the end we try, hopefully, to do more good than bad, but both of those are subjective.

Too often we use a technology as a scapegoat, an externalized criticism of ourselves that allows us to keep doing what we do. Technology can be used for good or bad; how we use it says something about ourselves. When we criticize the use of technology, we implicitly criticize ourselves, but we don’t accept the criticism because we have neatly placed the blame on a vague, externalized concept – a deflection at the species level, often because we are buying into the idea that the enemy is less than human. Yet we are all human despite the ideologies, cultures, languages, and color coding that we don’t all neatly fit in.

We Are All Blacksmiths.

These days, with generative AI allowing us to paint the fence of the future once we give the corporations in control of it a few baubles, everything we do on the Internet is potentially a weapon to be used against someone else. While the firing of the Google employees who protested is news, those who still work there aren’t – and this is not to say that they aren’t faced with their own ethical dilemmas. We who work in technology hope that our work is used for good.

I worked at one place that started off with robocalling software used to annoy people during elections, and later turned itself into an emergency communications service. Things can change, businesses can change, and controlling even a part of the infrastructure of a nation’s military can have unexpected consequences for everyone involved. What happens if Google suddenly doesn’t like something and turns something off?

The future is decidedly fickle. Our personal ethics should impact our collective ethics, but they often don’t. They can.

We build tools. Sadly, they sometimes aren’t used the way we would like, and we should try to influence things if we can – but ultimately, we are subject to a fickle future and good intentions that can be misdirected.

Spaghetti Source, Spaghetti Dependencies…

There’s one thing that consistently showed up in my work as a software engineer over the decades. Spaghetti.

Spaghetti code is easier to write than maintain, and in doing software archaeology (yes, it’s a thing), I’ve encountered numerous reasons for it. Requirements creep is one of the largest reasons.

In fact, the first real software archaeology I did was explained, proudly, as being a product of someone walking in and telling the developer, “Wouldn’t it be nice if…”. Of course, nobody wrote anything down, and by the time I got to it the software was 25 years old and didn’t even have a brochure. People were still walking in and saying, “Wouldn’t it be nice if…”. Meanwhile, the company was required to follow standard software processes because it was required for contracts.

So I learned, from good teachers and a few bad ones, about Software Configuration Management, Software Quality Assurance, and Software Testing. There were reasons we did things a certain way. Our project configuration management contained everything needed to rewrite the software from scratch, including every single tool. I’d actually done a backup of a development PC after writing down the hardware specifications of the system and handed that in, because quality assurance had to be able to take the same stuff and rebuild the same software so that it could be tested.

From scratch. And it had to pass the same tests. From scratch.

What I saw in other companies after that was never at that level, and on the surface it seemed ridiculous. However, any software engineer worth their weight in Skittles has been screwed over by a platform changing underneath the code. Windows was infamous for it, though I did encounter it in an Apple shop as well. Your code hasn’t changed, but some update suddenly has you in the middle of bug city without even a flip flop. Microsoft’s version back in the day was notorious enough to earn a name, DLL Hell – just their (old) version of dependency hell.

I never had the problem with *nix systems, though when open source became popular and everyone started using it in their code, *nix systems started to get it too. People blamed the open source, but it was really two things that caused the problem:

  1. Bad Configuration Management (if it even existed!), and
  2. Taking the open source project for granted.

Open source projects that are done voluntarily are completely outside the control of a company, but having an open dialog and even sending some money for pizzas and beer can avoid issues. Even with all of that, volunteers are fickle, so having in-house expertise becomes as important as the projects are to a company’s software. A company doesn’t really know this, though, when it doesn’t have software configuration management for its projects – so you end up with spaghetti projects, or as I call it, “Spaghetti Configuration Management”.
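To make that concrete, here’s a minimal sketch in Python of one small piece of that discipline: snapshotting exactly what a build environment contains so that someone else can rebuild the same software. The manifest name and fields here are my own invention for illustration, not anyone’s standard.

```python
# A minimal sketch of one piece of configuration management: recording the
# exact build environment so QA can rebuild the same software elsewhere.
# The manifest name and fields are illustrative, not any particular standard.
import json
import platform
import subprocess
import sys

def snapshot_environment(path="build_manifest.json"):
    """Record the interpreter, OS, and pinned dependencies for a build."""
    manifest = {
        "python": sys.version,
        "platform": platform.platform(),
        # 'pip freeze' pins every installed package to an exact version.
        "dependencies": subprocess.run(
            [sys.executable, "-m", "pip", "freeze"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines(),
    }
    with open(path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest

if __name__ == "__main__":
    snapshot_environment()
```

It’s nowhere near what full configuration management does – the shop I described archived the tools and even the hardware specifications – but pinning dependencies this way is the cheapest insurance against a platform shifting underneath your code.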

Toss in the developers who are copying and pasting from Stack Overflow, or now GPT, dash in employee turnover, where expertise is lost, and you get software entropy. Talking about software entropy causes the eyes of pointy-haired bosses to roll to the back of their heads, so instead we talk about technical debt, because one thing businesses understand is debt.

Over the years, companies I worked for were at various stages of technical debt. It’s a real thing, and the startups that survived long enough to get to the point of technical debt were the worst because of the culture shift needed: documenting things, tracking things, and making sure that the knowledge stayed within the company. I can say with good conscience that I left every company better off than I found it, sometimes because of the company, sometimes despite the company.

So we get to the article, “Hidden Tech Debt: The Importance Of Better Updates For Commercial Software”, which I came across through the author on Mastodon. It tackles the one thing I didn’t write about here: commercial software dependencies and the lack of accountability around them, which is a bigger problem than we might think.

AI Reviewing Body Cam Footage, And AIs Talking To Themselves.

There’s been a lot posted about artificial intelligence since I last wrote about it, but some of it was just hype and marketing, whereas the really cool stuff tends to sit well. There are two main topics that I’ll get out of the way with this post – more verbose topics coming this week.

Talking To Myself…

There’s been some thought about the ‘inner monologue’ that some of us have. Not all of us have one, and we don’t yet have a reason why, but apparently people who do have an inner monologue think that artificial intelligences can benefit from it.

They are finding ways that an inner monologue is beneficial for artificial intelligences, which may oddly help us understand our own inner monologues, and the lack of them.

If you want to read a bit more deeply into it, “Thought Cloning: Learning to Think while Acting by Imitating Human Thinking” is an interesting paper.

Having spoken to myself now and then over the years, I’m not sure it’s as productive as some think, but I’m not an expert and only have my own experience to base that on. I do know from my own experience that it’s very easy to reinforce biases that way.

I do some thinking with language, but mainly my thinking is what I would best describe as ‘visually kinetic’, so I am pretty interested in this.

Reviewing Body Cams

One of the problems with any sort of camera system is reviewing it. It takes a long time to review footage, and an experienced eye to do it.

Police departments are turning to artificial intelligence to help with this. Given that there is already real-time facial recognition, on the surface this seems like a good use of it. However, there are problems: realistic concerns for communities of color, as well as concerns about data privacy. A running body cam collects every interaction, sure, but it also collects information on everybody involved in those interactions, as well as anyone accidentally getting into the frame.

With everything increasingly connected, watching the watchmen through body cams means watching the watchers of the body cam footage.

I wonder what their inner monologue will be like while reviewing hours and hours of boring footage.

When Is An Algorithm ‘Expressive’?

Yesterday, I was listening to the webinar on Privacy Law and the United States First Amendment when I heard that lawyers for social networks are claiming both that the network itself has free speech as a speaker, and that it is not the speaker but is simply presenting content users have expressed under their freedom of speech. How the arguments were presented I don’t know, and despite showing up for the webinar I am not a lawyer1. The case before the Supreme Court was being discussed, but that’s not my focus here.

I’m exploring how it would be possible to claim that a company’s algorithms, which impact how a user perceives information, could be considered ‘free speech’. I began writing this post about just that and it became long and unwieldy2, so instead I’ll write a bit about the broader impact of social networks and their algorithms and tie it back.

Algorithms Don’t Make You Obese or Diabetic.

If you say the word ‘algorithm’ around some people, their eyes immediately glaze over. It’s really not that complicated; a repeatable thing is basically an algorithm. A recipe when in use is an algorithm. Instructions from Ikea are algorithms. Both hopefully give you what you want, and if they don’t, they are ‘buggy’.

Let’s go with the legal definition of what an algorithm is3. Laws don’t work without definitions, and code doesn’t either.

Per Cornell’s Legal Information Institute, an algorithm is:

“An algorithm is a set of rules or a computational procedure that is typically used to solve a specific problem. In the case of Vidillion, Inc. v. Pixalate Inc. an algorithm is defined as “one or more process(es), set of rules, or methodology (including without limitation data points collected and used in connection with any such process, set of rules, or methodology) to be followed in calculations, data processing, data mining, pattern recognition, automated reasoning or other problem-solving operations, including those that transform an input into an output, especially by computer.” With the increasing automation of services, more and more decisions are being made by algorithms. Some examples are; criminal risk assessments, predictive policing, and facial recognition technology.”

By this definition, and perhaps in its simplest form, adding two numbers is an algorithm, which also fits just about any technical definition out there. That’s not at issue.
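To show just how unintimidating that definition is, here is the whole thing in Python – a repeatable procedure that transforms an input into an output:

```python
# About the simplest algorithm there is: a rule that transforms
# two inputs into one output.
def add(a, b):
    """Add two numbers and return the result."""
    return a + b

print(add(2, 3))  # prints 5
```

A recipe, Ikea instructions, and this function are all the same kind of thing; the interesting questions start when the rules are tuned to decide what you see.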

What is at issue in the context of social networks is how algorithms impact what we view on a social networking website. We should all understand in the broad strokes that Facebook, Twitter, TikTok and their ilk are in the business of showing people what they want to see, and to do this they analyze what people view so that they can give people what they want.

Ice cream and brownies for breakfast, everyone!

Let’s agree every individual bit of content you see that you can act on, such as liking or re-transmitting, is a single item. Facebook sees you like ice cream, Facebook shows you posts of ice cream incessantly. Maybe you go out and eat ice cream all the time because of this and end up with obesity and diabetes. Would Facebook be guilty of making you obese and diabetic?

Fast food restaurants aren’t considered responsible for making people obese and diabetic. We have choices about where we eat, just as we have choices about what we do with our lives outside of a social network context. Further, almost all of these social networks give you options to not view content, from blocking to reporting to randomly deleting your posts and waving a finger at you for being naughty – without telling you how.
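Before going further, here’s a toy sketch of that feedback loop in Python. Every name and number in it is invented for illustration; real feed algorithms are vastly more elaborate, but the loop is the same shape:

```python
# Toy engagement loop: the more you act on a topic, the more of it you see.
from collections import defaultdict

interest = defaultdict(float)  # learned weight per topic

def record_action(topic):
    """Liking or re-transmitting a post nudges that topic's weight upward."""
    interest[topic] += 1.0

def rank_feed(posts):
    """Show posts whose topics have earned the most engagement first."""
    return sorted(posts, key=lambda post: interest[post["topic"]], reverse=True)

posts = [{"topic": "ice cream"}, {"topic": "salads"}, {"topic": "running"}]
for _ in range(5):
    record_action("ice cream")      # you liked ice cream a few times...
print(rank_feed(posts)[0])          # ...so ice cream lands on top, incessantly
```

Nothing in that loop intends anything; it just amplifies what you already did.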

Timelines: It’s All A Story.

As I wrote elsewhere, we all choose our own social media adventures. Most of my social networks are pretty well tuned to feed me new things to learn every day, while doing a terrible job of providing me information on what all my connections are up to. It’s a real estate problem on social network sites, and not everyone can be in that timeline. Algorithms pick and choose, and if there are paid advertisements to give you free access, they need space too.

Think of it all as a personal newspaper. Everything you see is picked for you based on what the algorithms decide, and yet all of that information is competing to get into your eyeballs, maybe even your brain. Every story is shouting ‘pick me! pick me!’ with catchy titles, wonderful images, and maybe even some content – because everyone wants you to click through to their website so they can hammer you with advertising.4

Yet when we step back from those individual stories, the social networking site is curating things into an ordered feed. Let’s assume that what it thinks you like to see the most is at the top, and it goes down in priority based on what the algorithms have learned about you.

Now think of each post as a page in a newspaper. What’s on the front page affects how you perceive everything in the newspaper. Unfortunately, because it’s all shoved into a prioritized list for you, you get things that are sometimes in a strange order, giving a weird context.

Sometimes you get stray things you’re not interested in because the algorithms have grouped you with others. Sometimes the priority of what you last wrote about will suddenly have posts related to it covering every page in that newspaper.

You might think you’re picking your own adventure through social media, but you’re not directly controlling it. You’re randomly hitting a black box to see what comes out in the hope that you might like it, and you might like the order that it comes in.

We’re all beta testers of social networks in that regard. They are constantly tweaking algorithms to try to do better, but doing better isn’t necessarily for you. It’s for them, and it’s also for training their artificial intelligences more than likely. It’s about as random as human interests are.

Developing Algorithms.

Having written software in various companies over the decades, I can tell you that if there’s a conscious choice to express something through algorithms, to get people to think one way or the other (the point of ‘free speech’), it would have to be very coordinated.

Certain content would have to be weighted, as is done with advertising. Random content churning through feeds would not fire things off with the social networking algorithms unless someone manually chose to make it do so across users. That requires a lot of coordination, lots of meetings, and lots of testing.

It can be done. With advertising as an example, it has been done overtly. Another example is the recent push against fake news, which has attempted to proactively check content with independent fact checkers.
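As a sketch of what that kind of deliberate, coordinated weighting could look like – with every field and number here being hypothetical, invented purely for illustration – consider:

```python
# Hedged sketch: deliberate weighting layered on top of learned interest.
# 'is_paid' and 'fact_check_flagged' are hypothetical fields for illustration.
def score(post, learned_interest):
    weight = learned_interest.get(post["topic"], 0.0)
    if post.get("is_paid"):
        weight *= 2.0    # advertising: an overt, deliberate boost
    if post.get("fact_check_flagged"):
        weight *= 0.1    # the push against fake news: a deliberate demotion
    return weight

posts = [
    {"topic": "news", "is_paid": False, "fact_check_flagged": True},
    {"topic": "news", "is_paid": True, "fact_check_flagged": False},
]
learned = {"news": 1.0}
ranked = sorted(posts, key=lambda p: score(p, learned), reverse=True)
print([p["is_paid"] for p in ranked])  # the paid post wins: [True, False]
```

The point is that those two `if` statements don’t happen by accident; somebody has to decide on them, weight them, test them, and roll them out across users.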

Is that free speech? Is that freedom of expression of a company? If you look at this case again, you will likely draw your own conclusions. Legally, I have no opinion because I’m not a lawyer.

But as a software engineer, I look at it and wonder if this is a waste of the Court’s time.

  1. It should be in the interest of software engineers and others to understand the legal aspects of what we have worked on and will work on. Ethics are a thing. ↩︎
  2. It still is, and I apologize if it’s messy. This is a post I’ll likely have to revisit and edit. ↩︎
  3. Legal definitions of what an algorithm is might vary around the world. It might be worth searching for a legal definition where you are. ↩︎
  4. This site has advertising. It doesn’t really pay and I’m not going to shanghai viewers by misrepresenting what I write. It’s a choice. Yet to get paid for content, that’s what many websites do. If you are here, you’re appreciated. Thanks! ↩︎

Revisiting Design: The RealityFragments Like/Comment Use Case

Yesterday, I went on a bit of a spree on RealityFragments.com, with the results fairly summarized on the RealityFragments About Page. The reason for the spree was pretty simple.

There are some issues with design.

Some of it is implicit in WordPress.com. To ‘like’ or ‘comment’ on content, you require a WordPress.com account. It’s painful for non-WordPress.com users to do that when they’re used to logging into everything automagically – and yet it’s also necessary to avoid spam comments that link to websites selling everything from ‘get rich quick’ schemes to promises of increasing the prominence of one’s nether regions. It’s a hard balance.

And it’s kinda crappy design because we, collectively, haven’t figured out a better way to handle spammers. I could get into the failures of nations to work together on this, but if we go down that path we will be in the weeds for a very, very long time.

Suffice it to say my concern is that of the readers. The users. And it brought to mind that yellow book by Donald A. Norman, the very color of the book being an example of good design. After all, that’s how I remember it.

“Design is really an act of communication, which means having a deep understanding of the person with whom the designer is communicating.”

Donald A. Norman, The Design of Everyday Things (2013)

This is where we who have spent time in the code caves get things wrong. Software Engineers are generally rational beings who expect everyone to be rational, and if we just got rid of irrational users “we would have a lot less problems!”.

I’ve spent about half a century on the planet at this point, and I will make a statement: By default, humans are irrational, and even those of us who consider ourselves rational are irrational in ways we… rationalize. Sooner or later, everyone comes to terms with this or dies very, very frustrated.

The problem I had is that I wasn’t getting feedback. The users can’t give it without giving WordPress.com the emotional equivalent of their first born child, apparently. Things have gotten faster and we want things more now-er. We all do. We want that instant gratification.

In the context of leaving a comment, if there are too many bells and whistles associated with doing it, the person forgets what they were going to comment about in the first place.

“The idea that a person is at fault when something goes wrong is deeply entrenched in society. That’s why we blame others and even ourselves… More and more often the blame is attributed to “human error.” The person involved can be fined, punished, or fired. Maybe training procedures are revised… But in my experience, human error usually is a result of poor design: it should be called system error. Humans err continually; it is an intrinsic part of our nature…. Worse, blaming the person without fixing the root, underlying cause does not fix the problem: the same error is likely to be repeated by someone else.”

Donald A. Norman, The Design of Everyday Things (2013)

The thing is – there is no good solution for this. None whatsoever, mainly because the alternative that was already there had not occurred to the users: the content is posted on Facebook, on the RealityFragments page, where I mix content from here and RealityFragments. The posts can be easily interacted with on Facebook by those who use Facebook. Sure, the interaction doesn’t show on the website, but that doesn’t matter as much to me as the interaction itself.

Factor in that it’s easy for my posts to get buried by Facebook algorithms, and it becomes an issue as well.

Thus, I created the RealityFragments Group on Facebook. People join, they can wander into the group and discuss stuff asynchronously, instead of the doom scroll of content people are subjected to. My intention is for my content not to compete for attention in that way, because it simply can’t.

I don’t have images of models trying on ideas. I don’t have loads of kitten pictures, and I’m certainly not getting dressed up and doing duck lips to try to convince people to read and interact with what I create. I am also, for the record, not willing to wear a bikini. You’re welcome.

This was a less than ideal solution to the problem. Maybe.

Time will tell if I got it right, but many more technically minded people will say, “You could just manage your own content management system on a rented server.” This is absolutely true.

What’s also true is that I would then be on the hook for everything, and when a content management system needs love, it wants it now. Thus, when I’m ready to start writing, I suddenly have to deal with administration issues, and before you know it, I’ve forgotten what I wanted to write – just like the users who have to create an account on WordPress.com to comment or like. A mirror.

So this is a compromised solution. Maybe. Time will tell.

And if you want to interact with this post and can’t log in to WordPress, feel free to join the RealityFragments.com Facebook group. Despite its name, it’s also for KnowProSE.com.

AI: Standing on the Shoulders of Technology, Seeking Humanity

“When the mob governs, man is ruled by ignorance; when the church governs, he is ruled by superstition; and when the state governs, he is ruled by fear. Before men can live together in harmony and understanding, ignorance must be transmuted into wisdom, superstition into an illumined faith, and fear into love.”

Manly P. Hall, The Secret Teachings of All Ages.

It’s almost impossible to keep up with all that is going on related to discussion on what’s being marketed as artificial intelligence, particularly with a lot of speculation on how it will impact our lives.

Since the late 1970s, we have evolved technology from computers to personal computers to things we carry around that we still call ‘phones’, although their main purposes do not seem to revolve around voice contact. In that time, we’ve gone from having technology on a pedestal that few could reach to a pedestal most of humanity can reach.

It has been a strange journey so far. If we measure our progress by technology, we’ve been successful. That’s a lot like measuring your left foot with your right foot, though, assuming you are equally equipped. If we measure success fiscally and look at the economics of the world, a few people have gotten fairly rich at the expense of a lot of people. If we measure it in knowledge access, more people have access to knowledge than any other time on the planet – but it comes with a severe downside of a lot of misinformation out there.

We don’t really have a good measure of the impact of technology in our lives because we don’t seem to think that’s important outside of technology, yet we have had this odd tendency in my lifetime to measure progress with technology. At the end of my last session with my psychologist, she was talking about trying to go paperless in her office. She is not alone.

It’s 2023. The paperless office was one of the technological promises made in the late 1980s. That spans more than three decades. In that same period, it seems that the mob has increasingly governed, superstition has governed the mob, and the states have increasingly tried to govern. It seems as a whole, despite advances in science and technology, we, the mob, have become more ignorant, more superstitious and more fearful. What’s worse, our attention spans seem to have dropped to 47 seconds. Based on that, many people have already stopped reading because of ‘TLDR’.

Into all of this, we now have artificial intelligence to contend with:

…Some of the greatest minds in the field, such as Geoffrey Hinton, are speaking out against AI developments and calling for a pause in AI research. Earlier this week, Hinton left his AI work at Google, declaring that he was worried about misinformation, mass unemployment and future risks of a more destructive nature. Anecdotally, I know from talking to people working on the frontiers of AI, many other researchers are worried too…

HT Tech, “AI Experts Aren’t Always Right About AI”

Counter to all of this, we have a human population that is clearly better at multiplying than math. Most people around the world are caught up in their day-to-day lives, working toward some form of success even as we are inundated with marketing and biased opinions parading around as news, all through the same channels that now connect us to the world.

In fact, it’s the price we pay – the best price Web 2.0 could negotiate – and if we are honest we will acknowledge that at best it is less than perfect. The price we pay is deeper than the cost we originally thought, and perhaps deeper than we think even now. We’re still paying it, and we’re not quite sure what we bought.

“We are stuck with technology when what we really want is just stuff that works.”

Douglas Adams, The Salmon of Doubt.

In the late 1980s, boosts in productivity were sold to the public as ‘having more time for the things you love’ and variations on that theme, but that isn’t really what happened. Boosts in productivity came with a corporate focus that meant the more you did, the more you had to do. Speaking for myself, everyone hired for 40-hour work weeks but demanded closer to 50. Sometimes more.

Technology marketing hasn’t been that great at keeping promises. I write that as someone who survived as a software engineer with various companies over the decades. Like so many things in life, the minutiae multiplied.

“…Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to late capitalist mechanization. It will end loneliness. It will make our governments rational and responsive. These, I fear, are the real AI hallucinations and we have all been hearing them on a loop ever since Chat GPT launched at the end of last year…”

Naomi Klein, “AI Machines Aren’t ‘Hallucinating’. But Their Makers Are”

There was a time when a software engineer had to go from collecting requirements to analysis to design to coding to testing to quality assurance to implementation. Now these are all done by teams. They may well all be done by versions of artificial intelligence in the future, but anyone who has dealt with clients firsthand will tell you that clients are not that great at giving requirements, and that has been rolled into development processes in various ways.

Then there is the media aspect, where we are all media tourists picking our social media adventures, creating our own narratives from what social media algorithms pick for us. In a lot of ways, we have an illusion of choice when what we really get are things that algorithms decide we want to see. That silent bias also includes content paywalled into oblivion, never mind all the linguistic bias, where we’re still discovering new biases.

Large Language Models like ChatGPT, called artificial intelligences with a degree of accuracy, have access to information that may or may not be the same as what we have in our virtual caves. They ‘learn’ faster, communicate faster and perhaps more effectively, but they lack one thing, and it would make them fail a real Turing test: being human.

This is not to say that they cannot fake it convincingly by using Bayesian probability to stew our biases into something we want to read. We shouldn’t be too surprised: we put stuff in, we get stuff out, and the stuff we get out will look amazingly like the stuff we put in. It is a step above a refrigerator, in that we put in ingredients and get cooked meals out, but just because a meal tastes good doesn’t mean it’s nutritious.

“We’re always searching, but now we have the illusion we’re finding it.”

Dylan Moran, “Dylan Moran on sobriety, his childhood, and the internet | The Weekly | ABC TV + iview”

These stabs at humanity with technology are becoming increasingly impressive. Yet they are stabs, and potentially all that goes with stabs. A world limited to artificial intelligences can only make progress within the parameters and information that we give them. They are limited, and they are as limited as we are, globally, biases and all. No real innovation happens beyond those parameters and information. They do not create new knowledge; they simply dress up old knowledge in palatable ways very quickly, but what is palatable now may not be so next year. Or next month.

Had we been dependent on artificial intelligences in the last century, we might not have made many of the discoveries we did. The key word, of course, is dependent. On the other hand, if we understood their limitations and incentivized humanity to add to this borgish collective of information, we might have made technological and scientific progress faster, but… would we have been able to keep up with it economically? Personally?

We’re there now. We’re busy listening to a lot of billionaires talk about artificial intelligences as if billionaires are vested in humanity. They’re not. We all know they’re not; some of us pretend they are. Their world view is very different. This does not mean that it’s wrong, but if we’re going to codify an artificial intelligence with opinions somehow, it seems we need more than billionaires and ‘experts’ in such conversations. I don’t know what the solution is, but I’m in good company.

The present systems we have are biased. It’s the nature of any system, and the first role of a sustainable system is making sure it can sustain itself. There are complicated issues related to intellectual property that can diminish the new information being added to the pool, and they have to be balanced with economic systems that, in my opinion, should also create the possibility of a livelihood for those who create and innovate – not just in science and technology, but in advancing humanity in other ways.

I’m not sure what the answers are. I’m not even sure what the right questions are. I’m fairly certain the present large language models don’t have them, because we have not yet had good questions and answers to the problems affecting us as a species.

I do know that’s a conversation we should be having.

What do you think?

Captcha THAT.

When I first started programming, I did a lot of walking. A few months ago I checked the distance I walked every day just back and forth to school, and it was about 3.5 km, not counting being sent to the store or running errands. At the same time, we had this IBM System/36 and a PC Network at school where space was limited, time was limited, and you didn’t have much time to be productive on the computer, so you’d better have it locked down.

At that point, the language was BASIC. The popularity of object-oriented programming had not blessed (and cursed) us yet, so we had line numbers on each line, handy for debugging because the most basic errors would tell you where you had a typo. There was an hour every few days to type assignments in so that you could get a grade, or maybe even do something of worth and understand what you were doing.

During that period, can you guess where I did most of my programming? While walking around seemingly aimlessly in parking lots, or staring at trees, or doing anything but staring at a computer monitor. Computers were not plentiful, the time on them was limited, and you didn’t have time to screw around on a keyboard.

I have survived decades of programming since then. I still fiddle now and then, but after being beaten to market by Google on getting stuff out (“Set your sights high!”, they tell you…) I’m a bit tired of chasing those particular red dots. My absence from my desk was almost never found tolerable by at least someone who thought that what they thought mattered more than results, but I got results. If you saw me typing frantically away at a keyboard, it wasn’t a spur of the moment thing. There was thought that went into crafting that code, there was planning and bulletproofing, to the point where, as I became more senior, I spent less time at the keyboard than many people in the departments I worked in.

I mention all of this because software engineering has changed over the years. In my day, when we were learning, we were not given answers from websites like Stack Overflow; we didn’t even have websites. If we were lucky we had the manual for the language, we had plausible typing skills, and we had limited time on the machines.

This isn’t ‘walked uphill both ways’; this is, “We did this without all these cool toys you have now”. It’s not that we had it harder, it’s that we did it differently. We didn’t have editors that were forgiving, much less helpful. Within such a short window, technology for programming has come a very long way, and it’s kind of cool – except all the silly Python editors and tools apparently written by the children of people who thought that “The Lord of the Rings” book trilogy was evil.

From the 1980s to now, it’s been a real whirlwind with way too much hype on way too many things that nobody recalls immediately. Then the captcha came along, to make sure ‘bots’ weren’t trying to do things, to check if a real human being was involved.

So humanity doubled down on that with large language models like ChatGPT. I guess kids stopped walking to school, they got more computers, and now they don’t even have to do their own homework.

I’m not sure where this is heading, but I’ll be making popcorn.

Career Advice from a Neo-generalist Perspective.

People ask me for career advice now and then. Generally, the people who do so can’t follow the beaten path.

There’s plenty of career advice out there for the beaten paths. The basic recipe is simple:

  • Secondary School
  • Tertiary Education
  • Maybe specialize further.
  • ?????
  • Profit!

I’ve met a few people this has worked for – which sometimes means going into debt with student loans, or having a tether to parents paying for things, or what have you.

The last part, ‘Profit’, is delayed until after people are repaid – bad news: parents are never repaid. In the context of the United States, which is hardly a data model for the rest of the world (but it is my experience), we have the rising cost of not continuing one’s education versus the toll of student debt. The fact that studies are largely done by people who followed the beaten path further confuses the issue at times.

How often do you hear a college say you don’t need to go to college? Of course they wouldn’t say that – and one could say that the student loan issue in the United States is akin to tossing out mortgages to people who can’t afford to pay them. It’s all very muddy water, and where once I had an opinion, I now just see a sea of biased data and biased opinions, and have none myself.

My Path.

My life, my work history, my education – they don’t fit the accepted model of education, ???, profit. I grew up working through secondary school in a printery, in an electrical motor rewinding workshop, and whatever odd jobs came my way. Despite this – and, as I would later learn, because of a former Irish brother who had married a nun – I did not get expelled and managed to graduate – well.

My parents didn’t put me through college, and the debt I did incur toward not finishing college in the late 80s is something I paid off about 22 years later. The interest was bad, but I managed to settle with the Department of Education for pennies on the dollar. Incidentally, despite being what one might term a minority, I wasn’t African or Hispanic enough to gobble up any grants specifically for those minorities. Equal opportunity ain’t so equal.

My time in the Navy was so busy that I never seemed to have time for college classes or college credits. It’s hard to study full time in NNPS and work on college credits at the same time, or to work in emergency medicine and pop off whenever you need to; when people’s lives are at stake you don’t have that luxury. And getting yourself together after being discharged while attempting to support an ill parent just didn’t leave much room for college, or debt – or paying the debt I still owed, and thus I couldn’t continue college. A nasty trap, that, even with the military deferment.

And so I found myself back behind a computer again through some luck, working at Honeywell and proving my worth. It was a cool job, and I had convinced my manager to give me a book allowance, through which I read the most bleeding edge stuff I could find back then. It was awesome, if only for a while. Others, like Dr. VcG, tried to help round me out and did so a bit, but really, I was focused on just… learning.

I was told that they would pay for my classes to finish a degree so that they could promote me, which I then began – oceanography – only to find out that they wouldn’t pay for classes toward that end. No, they wanted me to get a degree in something they were already paying me to do. Why on Earth would I need validation for them to promote me when I was already validating my worth every day at work?

There was only so much I could learn there, and changes to the company started taking the ‘play’ out of it all.

I moved from this company to that, building up references, building up experience, but most importantly to me, knowledge. My knowledge wasn’t validated by some group of academics, it came from the Real World. As time progressed, the economy went down as my age went up, and I found myself working for money instead of knowledge. It was not fun anymore, and I moved on… to where I am now, with a few interesting stops on the way.

Serial Specialist, the Neo-Generalist.

The beauty of software engineering when I started out was that once you could get a computer to do what someone else wanted to do, you got to learn about what they wanted to do. I got to learn about business, banking, avionics, emergency communications, data analysis, science, robotics, and so much more – and I have this knowledge, hard won, without following the beaten path and getting a bunch of letters behind my name.

It’s where all that knowledge intersects that the cool and fun stuff happens. The beaten path could not have given me that.

Frankly, in my experience, the beaten path is pretty slow – which some say is a reflection of my ability. I don’t know that that is true. What I do know is that the real world – paying bills and keeping abreast of responsibilities – required me to learn faster than a formal education could deliver, and I did. Simply put, I had to. I loved most of it; I hated parts of it.

Career Advice

When it comes to the beaten path that I did not take, I point at it. It works for a lot of people, though the ‘works’ does seem increasingly dubious to me as far as a return on investment. Go study something if you have the opportunity or if you can create the opportunity. Get your education validated, but don’t stop learning.

You see, what I can tell you with a degree of certainty is that the world is as bad as it is because of those who rest on their laurels after getting a certificate or a degree. I can also tell you with certainty that the world is as good as it is because of the people who keep learning and applying that knowledge toward good ends.

We don’t need people who are ‘educated’. We have enough of those clogging up the system. We need more of those who are constantly learning, certificate or degree or not – those are the ones who create true progress. Speaking for myself, I pace myself at two books a week or more, on topics that range widely.

The world is interesting in many ways. You can make it more interesting by knowing how interesting it is from different perspectives.

Learn how to negotiate. Get as much as you can even though you don’t need it – a problem I had – because you don’t know when you will need it.

And avoid working for idiots if you can. You won’t always be able to, and sometimes it’s not obvious until later on, but ditch them as soon as you can.

The Study Of What Others Do.

Taran Rampersad
Courtesy Mark Lyndersay, LyndersayDigital

I hate having my picture taken. Over the years, I have found the best defense from cameras is to hold one. This has weakened in a day and age where every phone has a camera, and everyone wants to be seen with someone – but Mark Lyndersay needed a picture of me for TechNewsTT, where the majority of my writing has been published this year outside of my own websites.

In going to his studio, it was a rare glimpse for me into the world of professional photography. It was clear to me, an amateur photographer, almost immediately that it would take me at least a decade to do the editing I watched Mark do quickly, to manage my photos the way he managed his, and to understand why he did the things he did – a matter of simple experience that cannot be replaced with meetings and requirements discovery.

You see, I had been thinking of writing my own photo management software in Python – something to automate a lot of things. I had briefly considered this when I had begun selling some of my prints in Florida, and it was latent in my mind as a project to ‘get to’. In conversation with Sarita Rampersad, another professional photographer (unrelated), I had asked her last year what she used and why. It was clear that it would take more than a passing effort on my part to build something more useful than the tools she was using. The visit to Mark’s studio underlined this.

The Roots.

Reflecting on this on the way home, I went back to the very core of how I started working with technology. From an early age, I was encouraged – by rote and by whip, as it were – to observe what was being done to understand how it was being done. This was the root of the family business, the now gone Rampersad’s Electrical Engineering, a company that was built on fixing industrial electro-mechanical equipment with clients ranging from the U.S. Navy to someone who just needed their water pump repaired (Even WASA).

This background served me well over the years, and understandably frustrated managers and CEOs. Knowing the context of how things were used allowed for useful processes and code; it allowed things to become more efficient and to be written to last instead of a constant evolution of, “Wouldn’t it be nice if?”. In a world of agile processes, the closest thing to this is the DevOps iteration of Agile, which even people who practice Agile haven’t heard of (because they are soundly in the Agile Cave).

DevOps is a form of Agile where every stakeholder is directly involved. And that, to me, is also a problem because of the implicit hierarchies and office (if only office) politics involved. It’s a bleeding mess of tissue to sew together to form a frankensystem, but at least that frankensystem is closer to what people actually need. Assuming, of course, they understand what they need.

To me, it boils down to studying what other people do.

Observe, Analyze, Communicate, Build.

When I started as an ‘apprentice’ programmer, this was drilled into me by an uncle who was a Systems Analyst and ‘allowed’ me to write the code for projects that he was working on. He didn’t boil it down to observe, analyze, communicate and build; I refined that myself over the course of the years.

No matter the process, it all boils down to someone being able to bridge how people work (or play) to get something done, to understand what is needed, and to make their lives easier through automation and information structure. Observing people do their jobs is important, analyzing it secondary, but the most important part is the one thing that an AI cannot yet do: communicate – the process of listening, speaking (or writing, or…), and then feedback. In priority of importance, software engineering – and, I believe, any form of process or structural engineering – is:

  1. Communication
  2. Observation
  3. Analysis
  4. Build

This is not the order in which things are done, of course, but the emphasis that is most important in understanding how present systems work and how future systems should work.

So often over the years, I’ve seen software engineers relegated to the role of code monkeys, with emphasis only on ‘Build’, when the most important parts are about ‘building what is needed’. This is where business analysts got introduced somewhere along the way, but they too are put into silos. This is underlined by HR departments focusing only on the ability to ‘build’, with analysts expected to fill a different sort of role. When these roles were split, I cannot say, but to be both is something too large and round to fit in the small square holes of the modern enterprise.

It is lost, eroded, and there is a part of me that wonders if this is a good thing. Studying what other people do has allowed me to do so many things within and without technology, and it worries me that in a future where AI will be taking over the ‘Build’ that software engineers aren’t being required to focus more on the soft skills that they will need in the coming years.

Sure, the AI can build it – but is ‘it’ what needs to be built?

Done.

Image used with permission of photographer, Flickr user cea. Click for original.

I don’t miss it.

There was a time that it didn’t seem like I ever could. It was all I did. Coding, designing, architecting, playing with new languages and frameworks – it was great. I spent hours upon hours getting good at things that were, in the end, just passing fads.

Passing fads with ‘benefits’, really. The ‘benefit’ being the ability to find work maintaining someone else’s crappy code. Maintenance.

No one looks forward to a career in software engineering maintenance.

But that’s the majority of the industry, and if you’re good at maintenance, they’ll pick someone else to do the new development no matter your experience level. They need you there with your finger in the dyke as you watch them do it all over again – making similar mistakes.

No, I don’t miss it.


I don’t miss sitting in meetings watching other people beat their chests and claim credit for things they didn’t do as a political necessity. I don’t miss reading emails written by people more interested in trying to impress each other than in actually communicating what they need to.

I don’t miss being told I should be at my desk more often when I was one of the few people meeting deadlines, milestones, yardsticks and tombstones. I don’t miss the tiresome code-jockeys who don’t know a thing about process and were graced with writing them. I don’t miss going through undocumented code and figuring things out to rewrite a new iteration that will never be used. Oh, and if you were ever on call…

There’s really a lot not to miss.

And as software engineering goes, that’s pretty much the way it is. That’s a pretty average career, since most work in software engineering is maintenance – and maintenance, after a certain period, shows you everything that was done wrong. But since you’re doing maintenance, you don’t ever get to apply those lessons at work. Maybe you do at home, but honestly, after a few decades you may not want to stare at a computer when you get home.

No, I’m playing these days. I finally got back to why I started it all in the first place.

Except I have a lot more experience. And I can walk away from my desk, I can eat what I want when I want, I can wear what I want…

Cubicles? Offices? That’s for managers.