FCC, AI, and Net Neutrality.

Over the past 20 years, the FCC has dealt with net neutrality 6 times, and this month it will do so for the 7th time because of artificial intelligence.

For those unfamiliar with net neutrality, it’s based on the concept of the common carrier: a common carrier holds itself out to provide service to the general public without discrimination (to meet the needs of the regulator’s quasi-judicial role of impartiality toward the public’s interest) for the “public convenience and necessity.” A common carrier must further demonstrate to the regulator that it is “fit, willing, and able” to provide those services for which it is granted authority1.

Whether an internet service provider (ISP) is a common carrier is a constant battle. Without that status, an ISP could slow one company’s content and speed up another company’s for money – regardless of the quality of the content. It’s not hard to see why network neutrality is important in an age of influence, fake news, and manipulation of information by bad actors. Where people differ is on who the bad actors are.

But what does that have to do with AI?

…Make no mistake about it, the major AI platforms are not weak wallflowers compared to the ISPs. There is a pressing need for regulatory oversight of the Big Tech companies that have delivered Big AI, including basic concepts such as openness, privacy, and interconnection.

Yet, access to AI for purposes as diverse as medical research or writing a term paper would be compromised without a fair and open internet. The issue before the FCC on April 25 is bigger than the catchphrase “net neutrality.” The 2024 iteration of the open internet debate is the reiteration of an issue that first surfaced in 2004: whether the dominant and essential network of the 21st century will go unsupervised…

“AI makes the fight for net neutrality even more important”, Tom Wheeler, The Brookings Institution, April 9th, 2024.

In essence, the perceived value of accessible information has increased. The actual issue doesn’t really have to do with artificial intelligence itself, only with the perceived value of the information that the networks connect us to.

Access to information is important, regardless of perceived value. If you pay for an internet connection, you should be able to get everything available to you instead of having your internet access turned into glorified cable television, where you get more content if you pay more while you have no actual control over the content.

What is also interesting is that even if network neutrality is successfully defended, ChatGPT is presently considered 82% more persuasive than humans, as mentioned here earlier this week. Since access to ChatGPT is fairly low cost, it allows smaller pockets to compete with larger pockets for your minds.

And conversely, it allows your voice to compete with larger pockets.

  1. Per Wikipedia, accessed 13 Apr, 2024

La Brea Syndrome.

I haven’t really written about Trinidad and Tobago and technology that much, except lately with the data breach, because it’s more frustrating than interesting.

When I wrote about Trinidad and Tobago breaking out of the economic tidal pool, the push was for diversifying the Trinidad and Tobago economy with what people in my circles tend to think is patently obvious: the information economy.

It’s 2023, and Trinidad and Tobago hasn’t even finished up a Data Protection Act. The Copyright Organization of Trinidad and Tobago still doesn’t care about software and protecting local developer rights, or local writers and their rights. It’s pretty much about music, and it’s a very strange organization even in that regard.

Meanwhile, there is now a Ministry of Digital Transformation, where the Minister is the former CTO of the state-controlled telecommunications company that recently had a data breach that should be internationally embarrassing. Locally, people are powerless to do anything because the government hasn’t made the Data Protection Act law.

This is probably with good reason: the government might be liable for a lot more than we know. We only know about the data breaches that were made public. Did they pay off any ransom attacks? Did they have breaches that nobody even knew about because the attackers didn’t announce themselves?

As the world now has AI manipulating information, Trinidad and Tobago is digitizing the Dewey Decimal System, which is a shame because there is the capacity to do so much more. The inertia is as heavy as the combined age of Parliament multiplied by the number of civil servants, in a nation where the largest employer is, one way or the other, the Government of the Republic of Trinidad and Tobago.

This leads those with skills to leave the technologically impaired to drink their own bathwater. Credentialism is the name of the game, followed by those that simply have more connections than capability.

I’ve said all of this for decades. I’ve written it so much I’m sick of writing about it, so unless something new develops locally I’ll just switch back to interesting stuff rather than discuss the tar pits of the information economy in Trinidad and Tobago.

I call it La Brea Syndrome.

Beyond The Moat.

In the world we like to talk about, since it reflects ourselves, technology weaves dendritically through our lives. Much of it is invisible to us, in that it is taken for granted.

The wires overhead spark with Nikola Tesla’s brilliance, water flows in pipes of a design dating back to 3000–4000 BC in the Indus Valley, gas propagates for cooking and heat, and there are the automobiles we spend way too much time in.

Now, even internet access is taken for granted by many as social media platforms vie for timeshares of our lives, elbowing more and more time from us by giving people what they want. Except Twitter, of course; but for the most part, social media is the new Hotel California – you can check out any time you like, but you may never leave as long as the people you interacted with are there.

This is why, when I read Panic about overhyped AI risk could lead to the wrong kind of regulation, I wondered about what wasn’t written. It’s a very good article which underlines the necessity of asking the right questions about regulation – and attempts to undercut some of the hype. Written by a machine learning expert, Divyansh Kaushik, and by Matt Korda, it reads well, and I agree there could be a bit too much backlash against artificial intelligence technologies.

Yet their jobs are safe. In Artificial Extinction, I addressed much the same thing, not as an expert but as a layperson who sees the sparking wires, flowing water, cars stuck in traffic, and so on. It is not far-fetched to see that the impacts of artificial intelligence extend beyond the scope of what experts on artificial intelligence think. What they omit in the article is what should be more prominent.

I’m not sure we’re asking the right questions.

The economics of jobs gets called into question as people who spent their lives doing something find that it can be replaced. This in turn affects a nation’s economy, which in turn affects the global economy. China wants to be a world leader in artificial intelligence by 2030, but given its population and history of human rights, one has to wonder what it will do with all those suddenly extra people.

Authoritarian governments could manipulate machine learning and deep learning to assure everyone’s on the same page in the same version of the same book quite easily, with a little tweaking. Why write propaganda when you can have a predictive text algorithm with a thesaurus of propaganda strapped to its chest? Maybe in certain parts of Taliban-controlled Afghanistan, it will detect that the user is female and give her a different set of propaganda, telling her to stay home and stop playing with keyboards.

Artificial Extinction, KnowProSE.com, May 31st 2023.

These concerns are not new, but they are made more plausible with artificial intelligence, because whoever controls them controls much more than social media platforms. We have really no idea what they’re training the models on, where that data came from, and let’s face it – we’re not that great with who owns whose data. Henrietta Lacks immediately comes to mind.

My mother wrote a poem about me when I joined the Naval Nuclear Propulsion program, annoyingly pointing out that I had stored my socks in my toy box as a child and contrasting it with my thought at the time that science and technology can be used for good. She took great joy in reading it to audiences when I was present, and she wasn’t wrong to do so even as annoying as I found it.

To retain a semblance of balance between humanity and technology, we need to look at our own faults. We have not been so great about that, and we should evolve our humanity to keep pace with our technology. Those in charge of technology, be it social media or artificial intelligence, are far removed from the lives of the people who use their products and services, despite making money from the lives of those very same people. It is not an insult; it is a matter of perception.

Sundar Pichai, CEO of Google, seemed cavalier about how artificial intelligence will impact the livelihoods of some. While we all stared at what was happening with the Titan, or wasn’t, the majority of people I knew were openly discussing what sorts of people would spend $250K US to descend to a very dark place to look at a broken ship. Extreme tourism, they call it, and it’s within the financial bracket of those who control technologies now. The people who go on such trips to space, or underwater, are privileged, and in that privilege have no perspective on how the rest of the world gets by.

That’s the danger, but it’s not a danger to them, and because they seem cavalier about the danger, it is a danger. These aren’t elected officials who are controlled through democracy, as much of a strange ride as that is.

These are just people who sell stuff everybody buys, and who influence those who think themselves temporarily inconvenienced billionaires to support their endeavors.

It’s not good. It’s not really bad either. Yet we should be aspiring toward ‘better’.

Speaking for myself, I love the idea of artificial intelligence, but that love is not blind. There are serious impacts, and I agree that they aren’t the same as nuclear arms. Where nuclear arms can end societies quickly, how we use technology and even how many are ignorant of technology can cause something I consider worse: A slow and painful end of societies as we know them when we don’t seem to have any plans for the new society.

I’d feel a lot better about what experts in silos have to say if they… weren’t in silos, or in castles with moats protecting them from the impacts of what they are talking about. This is pretty big. Blue-collar workers are under threat from smarter robots, white-collar workers are under threat, and even creatives are wondering what comes next as they are no longer as needed for images, video, and so on.

It is reasonable for a conversation that discusses these things to happen, yet such conversations almost always happen after the fact.

We should be aspiring to do better than that. It’s not the way the world works now, and maybe it’s time we changed that. We likely won’t, but with every new technology, we should have a few people pointing that out in the hope that someone might listen.

We need leaders to understand what lies beyond the moat, and if they don’t, we should stop considering them leaders. That’s why the United States threw a tea party in Boston, and that’s why the United States is celebrating Independence Day today.

Happy Independence Day!

Distilling Traffic

Having pulled Data Transfer out of cars, I’ll revisit traffic itself:

“…Each of them is a physical record of their ancestors, dating back to their, marked by life events – living memory. In minds alone, each human brain is 100 terabytes, with a range of 1 Terabyte to 2.5 Petabytes according to present estimates. Factor in all the physical memory of our history and how we lived, we’re well past that…”

me, Traffic, RealityFragments, June 6th 2023

So while we’re all moving memory in traffic, we’re also moving history. According to some sources, our DNA holds about 750 megabytes of our individual ancestry, as well as a lot of tweaks to our physiology that make us different people. Let’s round off the total memory to 2 terabytes: 1 conservative terabyte for what our brain holds and roughly another terabyte for DNA (conservative here, liberal there…). 100 cars with only drivers is 200 terabytes.

Conservatively. Sort of. Guesstimate built of guesstimates. It’s not so much about the values as the weight, as you’ll see.
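The guesstimate above can be sketched as a quick back-of-envelope calculation (the figures are the post’s own rough estimates, not measurements):

```python
# Rough figures from the text above – guesstimates, not measurements.
dna_mb = 750                      # estimated information content of human DNA
dna_tb = dna_mb / 1_000_000       # ≈ 0.00075 TB – far less than a terabyte
brain_tb = 1.0                    # conservative end of the 1 TB – 2.5 PB range

# The text rounds the per-person total up to 2 TB (conservative here, liberal there).
per_person_tb = 2.0

cars = 100                        # cars carrying only a driver
print(cars * per_person_tb)       # 200.0 terabytes of "moving memory"
```

The point isn’t the precision – as the text says, it’s the weight of it.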

Nature uses only the longest threads to weave her patterns, so that each small piece of her fabric reveals the organization of the entire tapestry.

Richard Feynman, Chapter 1, The Law of Gravitation, p. 34 – The Character of Physical Law (1965)

Now, from all that history, we have ideas that have been passed on from generation to generation. Books immediately come to mind, as do other things like language, culture and tradition. All of these pass along ideas from generation to generation, distilling things toward specific ends even while we distill our own environment to our own ends – or lack thereof, which is itself an end. That’s a lot of information linked together, and that information is linked to the ecological systems we’re connected to and their history.

Now, we’re beginning to train artificial intelligences on training models. What is in those training models? In the case of large language models, probably lots of human writing. In the case of images, lots of images. And so on. But these models are disconnected in ways that we are not, and we are connected in ways that we’re still figuring out.

I mean, we’re still learning some really interesting stuff about photosynthesis, something most of us were likely taught about in school. So the data models that AIs are being trained on through deep learning are subject to change, and have to be changed as soon as information in them becomes outdated.

Who chooses what gets updated? It’s likely not you or me since we don’t even know what’s in these training models. For all we know, it’s data from our cellphones tracking us in real time, which isn’t that farfetched, but for now we can be fairly sure it’s someone who has decided what is in the machine learning models in the first place. Which, again, isn’t us.

What if they decide to omit… your religious text of choice? Or let’s say that they only want to train it on Mein Kampf and literature of that ilk. Things could go badly, and while that’s not really in the offing right now… we don’t know.

This impacts future generations and what they will do and how they will do it. It even impacts present generations. This seems like something we should be paying attention to.

We all live in our own little bubbles, after all, and our bubbles don’t have much influence on learning models for artificial intelligence. That could be a problem. How do we deal with it?

First, we have to start with understanding the problem, and most people including myself are only staring at pieces of the problem from our own little bubbles. Applications like ChatGPT just distill bubbles depending on their models.

Data Transfer

In today’s news, a fruit company continues marketing stuff to people who don’t need what they’re selling for the price it is being sold for – but who buy it because of the brand.

Wait, that isn’t a fruit company. Doesn’t matter. It’s a slow news day.

So I’ll build up some of what I wrote in Traffic here, since it connects two things that are often disconnected and siloed in these chaotic times. Yet there’s some technical stuff that most people don’t know about that may make you think of things a little differently.

Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.

–Andrew Tanenbaum, 1981

You might be surprised just how much companies depend on trains, planes and automobiles to move data around. XKCD was asked about when the bandwidth of the Internet will surpass that of FedEx. The answer will likely boggle your minds: 2040.

Physically moving data storage devices is still the fastest way to move large amounts of data. It’s not cheap, but that doesn’t matter much – we’re paying for it, after all, not them.

‘Sneakernet’ existed long before the Internet for much the same reason. The joke originates from around 1975-1976.

“…one day a plumbing contractor’s backhoe dug up and broke the underground cable that carried ALL of the JPL-to-Goldstone data and voice lines through Fort Irwin, and it would take at least a day, maybe longer, to repair. So someone was designated to drive two boxes of 12 reels each of magnetic tape down to JPL, and quickly. The first available vehicle was a white NASA station wagon. Hence the punch line: “Never underestimate the bandwidth of a station wagon full of magnetic tapes hurtling down the highway”.

Rounding off the numbers, twenty-four reels of tape at 170 megabytes each is 4080 megabytes. Three and a half hours is 210 minutes. 4080 megabytes divided by 210 works out to about 19.4 megabytes per minute, or 32.3 kilobytes per second (258.4 kilobits per second) – over 100 times faster than a 2400 bps data circuit of the time. Note that the incident above involved only 24 reels – which didn’t come anywhere near filling the station wagon; in fact, the two boxes of tapes didn’t even fill the front passenger seat. (As an aside, a station wagon is known as an estate car or estate in other parts of the world.) Incidentally, that conversation was the first time your contributor ever heard the term backhoe fade used to describe accidental massive damage to an underground cable (compare it to the term rain fade used to describe a fade-out of a point-to-point microwave radio path due to the absorptive effect of water in the air)…”

Stanley Ipkiss, Reddit post.
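As a quick check on the quote’s arithmetic, using the reel count, reel capacity, and drive time it gives:

```python
reels = 24                 # two boxes of 12 reels
mb_per_reel = 170
minutes = 210              # three and a half hours of driving

total_mb = reels * mb_per_reel            # 4080 MB
mb_per_minute = total_mb / minutes        # ≈ 19.4 MB/min, as the quote says
kb_per_second = mb_per_minute * 1000 / 60
mbit_per_second = kb_per_second * 8 / 1000

print(round(kb_per_second, 1), round(mbit_per_second, 2))
# 323.8 2.59
```

Note that 19.4 MB/min actually works out to roughly 324 kB/s, not the 32.3 kB/s the quote states – a digit appears to have been dropped somewhere along the way – which only strengthens the comparison against a 2400 bps circuit.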

It’s a very tangible way of viewing how data is transferred, too, and perhaps it reinforces the anxiety of seeing a backhoe in your area. Much of what is done on the internet these days is streaming, and I think the generations coming up may not immediately understand life without streaming, when we kept things on our hard drives and floppy disks – back when they were hard and floppy, respectively.

So with that settled in everyone’s mind, let’s talk a bit about what’s being marketed as artificial intelligence, which is not so much artificial intelligence as it is a bunch of clever algorithms using probability to determine what in their thesaurus/image library you would like to see when queried. Where do they get that thesaurus/image library?

Us.
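As a toy illustration of those “clever algorithms using probability” – not how a real large language model works, but the same idea in miniature – here is a bigram model that predicts the next word from counts over its (made-up) training text:

```python
from collections import Counter, defaultdict

# Toy "training data" – in real systems, this is harvested from us, at scale.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely next word seen in training."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" – it followed "the" twice, others once
```

The model has no understanding; it only reflects whatever it was fed – which is the point.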

I’ll get into this in the next post.

Silent Bias

Once upon a time, as a Navy Corpsman at the former Naval Hospital in Orlando, we lost a patient for a period – we simply couldn’t find her. There was a search of the entire hospital. We eventually did find her, but it wasn’t by brute force. It was by recognizing what she had come in for and guessing that she was on LSD. She was in a ladies’ room, staring into the mirror, studying herself through a sensory filter she found mesmerizing. What she saw is something only she knows, but it’s safe to say it was a version of herself, distorted in a way only she could explain.

I bring this up because as a species, many of us connected to our artificial nervous system are fiddling with ChatGPT, and what we are seeing are versions of our society in a mirror.

As readers, what we get out of it has a lot to do with what we bring to it. As we query it, we also get out of it what we ask of it, through the filters of how it was trained and its algorithms – the reflexes we give it. Is it sentient? Of course not; these are just large language models, not artificial general intelligences.

With social media companies, we have seen the effect of echo chambers as groups become more and more isolated despite being more and more connected, aggregated to make it easier to sell advertising to them. This is not to demonize them; many bloggers were doing it before them, and before bloggers there was the media, and before then as well. It might be amusing if we found out that cave paintings were actually advertising for someone’s spears or some hunting consulting service – or it might be depressing.

All of this cycled through my mind yesterday as I began considering the role of language itself, with its inherent bias, based on an article that stretched the idea to large language models and artificial intelligence. The actual study was just about English and showed a bias toward addition, but with ChatGPT and other large language models being the current advertising tropism, it’s easy to understand the intention of linking the two in an article.

Regardless of intention, there is a point as we stare into the societal mirror of large language models. The training data will vary, languages and cultures vary, and it’s not hard to imagine that every language, and every dialect, has some form of bias. It might be a good guess that where you see a lot of bureaucracy, there is linguistic bias and that can get into a chicken and egg conversation: Did the bias exist before the language, or did the language create the bias? Regardless, it can reinforce it.

Then I came across this humorous meme. It ends up being a legitimate thing that happened: a dog was rewarded with a steak for saving the life of a child from drowning, and quickly came to the conclusion that pulling children out of water got it steak.

Apparently not enough children were falling into the water for it to get steaks, so it helped things along. It happened in 1908, and Dr. Pavlov was still alive at the time. His famous work with dogs was published in 1897, about 11 years prior, but given how slowly news traveled then, it wasn’t as common knowledge as we who have internet access would expect. It’s possible the New York Times article mentioned him, but I didn’t feel like unlocking their paywall.

If we take this back to society, we have seen the tyranny of fake news propagation. That’s nothing new either. What is interesting is the paywall aspect, where credible news is hidden behind paywalls, leaving the majority of the planet to read what is available for free. This is a product of publishing’s adaptation to the Internet age, which I lived through, and which to an extent I gained some insight into when I worked for Linux Journal’s parent company, SSC. The path from print to internet remains a very questionable area because of how advertising differs between the two media.

Are large language models being trained on paywalled information as well? Do they have access to academic papers that are paywalled? What do they have access to?

What parts of ourselves are we seeing through these mirrors? Then we have to ask whether the large language models have access to things that most humans don’t, and based on who is involved, it’s not hard to conclude that the data being fed to them by these companies isn’t available for consumption by the average person. Whether that is true or not is up for debate.

All of this is important to consider as we deal with these large language models, yet the average person plays with them as a novelty, unaware of the biases. How much should we trust what comes out of them?

As far as disruptive technologies go, this is probably the largest we have seen since the Internet itself. As long as it gives people what they want and supports their cognitive biases, it’s less likely to be questioned. Completely false articles still propagate on the Internet, there are groups of people who seriously believe that the Earth is flat, and we have people asking ChatGPT things that they believe are important. I even saw someone in a Facebook reel quoting a GPT-4 answer.

We should at the least be concerned, but overall we aren’t. We’re too busy dealing with other things, chasing red dots.

Filling Voids

I’m paying much more attention to my writing these days and, stepping back for a moment last night, I realized that some of the things I’ve been writing are to fill voids.

There’s the issue of purchasing land in Trinidad and Tobago, which isn’t actually hard, but it is something a significant number of people I have encountered in the world and on social media have not gotten right. When so many people are screwing something up, one has to wonder why that is. It’s easily dismissed as people being stupid, but it’s improperly dismissed that way. People simply don’t know. Despite my writing that article, there’s a demographic that will still screw it up – but I’ve done my part.

That led me to wonder why local media hasn’t successfully addressed the problem, if at all. Of course, they may have covered it – I spend less and less time reading local media – but the problem persists. So if that article helps one person, it will have done its job. If it helps 100, it’s a success. If it influences 1,000 people to do things properly, it will be slightly awesome. It will have served a purpose.

There are things people need to know. In the world, information like that is guarded for no real reason, and it keeps people back.

In a world of information, we have information fiefdoms guarded by gatekeepers. There’s no reason for any of this to be hard or difficult other than the highest priority of a gatekeeper seems to be self-preservation.

The truth is, I like the voids. As a software engineer, I fell in love with the problems no one else could solve, even with the advent of the Internet and search engines – the bleeding edge.

There’s plenty of bleeding edge outside of technology, too – we tend to think of things on the horizon when that bleeding edge is instead getting people to tie their shoes so that they don’t trip on the way there.

Having tripped on my shoelaces so often while staring into a void, I do not find it amusing to see other people do it.

Information Fiefdoms

Yesterday, I found myself standing in Nigel Khan’s bookstore in Southpark, looking at what I consider old books.

I have a habit when I look at books, something I picked up in Trinidad some years ago after the Internet became more than a novelty. I check the date a book was published. It keeps me from buying antiques, though I have also been known to buy books in thrift shops abroad (though I am very picky).

I found myself looking at Tim Wu’s ‘The Master Switch: The Rise and Fall of Information Empires‘. Given some of the stuff I’d been talking about in different circles, it interested me – and Tim Wu I knew from his work on network neutrality. I checked the publication date.

November, 2010.
It’s August, 2018.

8 years. 5.33 evolutions of Moore’s Law – which is unfair, since it isn’t a technology book, but it’s an indicator. Things change quickly. Information empires rise and fall in less time these days. Someone was celebrating integrating something with OneNote in one of the groups I participate in, thinking that he’d finally gotten things on track – when, in fact, it’s just a snapshot, more subject to Moore’s Law than anyone cares to admit, except for the people who want to sell you more hardware and more software. They’ve evolved to the subscription model to make their financial flow rates more consistent, while you, dear subscriber, don’t actually own anything you subscribe to.

You’re building a house with everything on loan from the hardware store. When your subscription is up, the house disappears.

Information empires indeed. Your information may be your own, but how you get to it is controlled by someone who might not be there tomorrow.

We tend to think of information in very limited ways when we are in fact surrounded by it. We are information. From our DNA to our fingerprints, from our ears to our hair follicles – we are information, information that moves around and interacts with other information. We still haven’t figured out our brains, a depressing fact since it seems a few of us have them, but there we have it.

Information empires. What separates data from information is only really one thing – being used. Data sits there; it’s a scalar. Information is a vector – and really, information has more than one vector. Your mother is only a mother to you – she might be an aunt to someone else, a boss to someone else, an employee to someone else, and a daughter to your grandmother. Information allows context, and there’s more than one context.

If you’re fortunate, you see at least one tree a day. That tree says a lot, and you may not know it. Some trees need a lot of water, some don’t. Some require rich soil, some don’t. Simply by existing, it tells us about the environment it is in. Information surrounds us.

Yet we tend to think of information in the context of libraries, or of database tables. And we tend to look at Information Empires – be they by copyright, by access (Net Neutrality, digital divide, et al), or simply because of incompatible technologies. They come and go, increasingly not entering the public domain, increasingly lost – perhaps sometimes for good.

And if you go outside right now and stand, breathing the air, feeling the wind, watching the foliage shift left and right, you are awash in information that you take for granted – an empire older than we are, information going between plants through fungus.

There are truly no information empires in humanity other than those that are protected by laws. These are fiefdoms, gatekeepers to information.

The information empire – there is only one – surrounds us.

The Age of Dune

We’re in a strange age of Dune, metaphorically. If you haven’t read the books or, for the reading impaired, seen the movie, you won’t get the metaphor – you should go do either immediately and not return to the internet until you have.

If you’ll recall, the book was about Spice – and how the spice must flow. Last century, it was a metaphor for oil, and this century, it’s a metaphor for information.

I bring this all up because of the Russian submarines making NATO nervous by prowling near underwater cables. The conversations around this speculated on eavesdropping – relatively tinfoil hat – when a real threat is the severing of those cables. Remember how Muad’Dib rose to power? Who can destroy the Spice controls the Spice, and who controls the Spice is the real power.

Factor in the death of network neutrality, which has long been dead in other ways while people discussed its imminent rigor mortis while poking it with a stick. It’s not as if Facebook has been deleting accounts at the request of the U.S. and Israeli governments. It’s not as if any despot of any sort hasn’t at least tried to control the information flow. The trouble is that most people don’t understand information, and don’t understand data beyond the definitions in dictionaries and antiquated textbooks.

Information flows. In a battlefield somewhere, a severed submarine cable can mean chaos on the ground somewhere. In a world where cables connect markets, severed cables mean being unable to get access to those markets. It means isolation.

The spice must flow; the information must flow. And those who seek to destroy information – from burning books to limiting people’s access to it – are about isolating, about controlling, and about power. How will it end?

I’ll be in my garden, monitoring the situation. You kids play nice.

Nature and Data Structures (2013)

Cactus Flower Blooms (at night)

I haven’t written much of late as I moved to Florida last week and have been busy networking, job hunting, writing about the journey and taking pictures. I’ll be writing more often.

With the recent return to Florida, I’ve clearly been working on finding work, amongst other things. I’ve also been enjoying the flora and fauna, thanks to the good fortune I’ve had in finding temporary lodging in a friend’s home. This reminded me this morning of how often people at Honeywell, during my time there, thought I was goofing off when I walked outside and stared at the trees. I wasn’t really goofing off. I was considering natural structures and finding some assistance in designing data structures for the work I was doing.

Natural data architectures are compelling – simple at some levels, very complex at others. Almost all of them are built on osmosis, where concentration gradients allow atoms and molecules to wander through permeable membranes – not unlike electrical voltage across resistance, or water through a plumbing system. The difference between natural structures and artificial structures is that, as Feynman once said,

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

Failed data structures in nature are pretty easy to spot.

They’re dead.

Yet even in death they have value – they are recycled, the essence of the philosophical ‘rebirth’ found in some religions. In a well-operating ecosystem, nothing is wasted – everything that is ‘alive’ or ‘dead’ has some worth to the ecosystem, or it is quickly replaced.

The flower on the left is a picture of a cactus flower I took last night. It opens only at night.

This can be related to a structure such as a website. The flower has a purpose which, as most would understand it, is marketing. It has a definite demographic it is marketing to. I’m not sure what exactly it attracts, but I’d wager it is targeting nocturnal insects and perhaps even bats – whatever its market, it isn’t the classic stuff people are taught about in school, such as bees and birds.

Once pollinated, the structure goes about doing what most other flowers do – something pretty well documented anywhere. But this particular data structure is interesting in that it has evolved over millennia to bloom at night, when it’s cool, when life is more mobile in climates where the days are decidedly hot. It’s a wonderfully beautiful thing that most people don’t get to see because they’re not out at night. The scent is wonderful as well.

Studying data structures like this, looking for hints from nature on how to do something, provides us with methods of making a better data ecosystem.

Maybe the internet and social media would be better places if more software developers stepped outside a bit more often. The days of software architects and developers fearing sunlight have passed.