La Brea Syndrome.

I haven’t written much about Trinidad and Tobago and technology, except recently with the data breach, because it’s more frustrating than interesting.

When I wrote about Trinidad and Tobago breaking out of the economic tidal pool, the push was for diversifying the Trinidad and Tobago economy with what people in my circles tend to think is patently obvious: the information economy.

It’s 2023, and Trinidad and Tobago hasn’t even finished putting a Data Protection Act into force. The Copyright Organization of Trinidad and Tobago still doesn’t care about software and protecting local developers’ rights, or local writers and their rights. It’s pretty much about music, and it’s a very strange organization even in that regard.

Meanwhile there is now a Ministry of Digital Transformation, where the Minister is the former CTO of the state-controlled telecommunications company that recently had a data breach that should be internationally embarrassing. Locally, people are powerless to do anything because the government hasn’t made the Data Protection Act law.

This is probably with good reason: the government might be liable for a lot more than we know. We only know about the data breaches that were made public. Did they pay off any ransom attacks? Were there breaches nobody even knew about because the attackers never announced themselves?

As the world now has AI manipulating information, Trinidad and Tobago is digitizing the Dewey Decimal System, which is a shame because there is the capacity to do so much more. The inertia is as heavy as the combined age of Parliament multiplied by the number of civil servants, in a nation where the largest employer is, one way or the other, the Government of the Republic of Trinidad and Tobago.

This leads those with skills to leave the technologically impaired to drink their own bathwater. Credentialism is the name of the game, followed by those who simply have more connections than capability.

I’ve said all of this for decades. I’ve written it so much I’m sick of writing about it, so unless something new develops locally I’ll just switch back to interesting stuff rather than discuss the tar pits of the information economy in Trinidad and Tobago.

I call it La Brea Syndrome.

The Trouble With Predictions.

We like predictions, particularly if we like the outcomes. We all want the winning lottery ticket because we want that outcome for a small investment. As individuals, we like to beat the odds. Collectively, we break up into groups that want certain outcomes, and wars have been fought and continue to be fought over them.

In fact, in conversation with a friend over coffee a few days ago, I mentioned that in a documentary series on chimpanzees, their social groups have territory that they patrol and fight over, including the richest fruit trees. In this regard we are not too different; we simply fight over other things. If we fought over trees, our world would likely be much different, more lush, with what we would consider less progress.

When I summarized the technological singularity, I didn’t really mention the other predictive models out there about other things because the technological singularity was a focal point. That’s the trouble with predictions. They have a tendency to omit other data. Let’s go with the 2045 technological singularity prediction, based predominantly on technology. What other factors will impact humanity by 2045?

There’s population, whose growth rate I was surprised to find has been declining. By 2045, though, we should have a global population of about 9 billion people – but here’s where it gets interesting.

Many factors contribute to the waxing and waning of the world’s population, such as migration, mortality, longevity and other major demographic metrics. Focusing on fertility, however, helps to illuminate why the total number of humans on Earth seems set to fall. Demographers define fertility as the average total number of live births per female individual in a region or country. (In the accompanying graphics, the term “woman” is used to encompass anyone assigned female at birth.) The U.S.’s present fertility rate, for example, is about 1.7; China’s is 1.2. Demographers consider a fertility rate of 2.1 to be the replacement rate—that is, the required number of offspring, on average, for a population to hold steady. Today birth rates in the wealthiest countries are below the replacement rate. About 50 percent of all nations fall below the replacement rate, and in 2022 the region with the lowest fertility rate (0.8) was Hong Kong.

Katie Peek, “Global Population Growth Is Slowing Down. Here’s One Reason Why”, Scientific American, December 7, 2022 (emphasis mine)

For simplification, we look only at the fertility metrics. Migration prediction is a mess because of laws and lines drawn on maps long ago. The data demonstrates a decline, and yet in the same article we see something interesting along those lines, something which points to a need for migration.

High-income nations now have the lowest birth rates, and the lowest-income nations currently have the highest birth rates. “The gap has continued to widen between wealthy nations and poorer ones,” says Jennifer Sciubba, a social scientist at the Wilson Center in Washington, D.C., who has written about these planetary-scale demographic shifts. “But longer term,” she says, “we’re moving toward convergence.” In other words, this disparity among nations’ birth rates isn’t a permanent chasm. It’s a temporary divide that will narrow over the coming decades.

Katie Peek, “Global Population Growth Is Slowing Down. Here’s One Reason Why”, Scientific American, December 7, 2022 (emphasis mine)

How could we move toward that convergence without migration, particularly with economic disparity increasing even as we claim global poverty is diminishing? The global economics of nations has a role to play here as well. It’s not hard to see how the global population by itself might be around 9 billion in 2045, but where will those people live, and how will they live?
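To make the replacement-rate arithmetic from the quoted figures concrete, here’s a minimal sketch in Python. It assumes a constant fertility rate and ignores mortality, migration and age structure entirely, so it’s a toy illustration of the compounding, not a forecast.

```python
# Minimal sketch (not from the article) of why a fertility rate below the
# ~2.1 replacement rate shrinks a population: each generation is scaled by
# fertility / 2.1. Assumes a constant rate; ignores mortality, migration
# and age structure entirely.

REPLACEMENT_RATE = 2.1

def project_generations(start_population, fertility_rate, generations):
    """Scale each successive generation by fertility_rate / REPLACEMENT_RATE."""
    population = start_population
    history = [population]
    for _ in range(generations):
        population *= fertility_rate / REPLACEMENT_RATE
        history.append(population)
    return history

# A population of 1,000,000 at a fertility rate of 1.7 (roughly the U.S. figure
# quoted above): each generation is about 81% the size of the one before it.
for generation, size in enumerate(project_generations(1_000_000, 1.7, 4)):
    print(f"Generation {generation}: ~{size:,.0f}")
```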

Let’s factor in something else, such as sea levels. We attribute much of the rise to climate change. Pumping water out of aquifers faster than they can replenish is also a factor. I also stumbled across a few articles about how trees seemingly bend the laws of physics in storing water, and since a tree is roughly 50% water by mass, every time we cut down a tree roughly half of its mass is eventually released into the atmosphere as water.

It is interesting to note that as we cut down trees, we not only affect the ratio of gases in the air, a living thing by itself, but also release that moisture into the atmosphere, where it will likely end up in the ocean and contribute to sea level rise. How much is that? I don’t know, but since we have been doing it for generations I expect that it has been a significant amount.

Where do we find water to irrigate new trees when we plant them? Rain, the local aquifer, and so on. Planting trees may well pull moisture out of the air, but does it do so at a rate greater than the rate at which we pump water out of the ground to irrigate them?

And how much water do we store in our bodies? Roughly 60% of the human body is water, so assuming a 75 kg (approximately 165 pound) person, each of us carries about 45 litres (roughly 12 gallons). We store water too: when we get buried, it goes into the local aquifer; when we get cremated, it goes into the atmosphere.

To maintain that level of water in our systems, which we conveniently find all over in plastic bottles these days, we need 3.7 litres (men) or 2.7 litres (women) per day. With a population of over 8 billion at this time, that’s a lot of water, and desalination seems a great idea for replenishing the aquifers we’re pumping water out of. Yet we are bound by our own economics in this regard, by how cost-effective it is to do these things, particularly in less economically advantaged nations.
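For a rough sense of scale, here’s the arithmetic above as a small back-of-envelope sketch. The body-water fraction, person mass, intake figures and population are the rounded numbers from the text; nothing here is measured, so treat the outputs as ballpark figures only.

```python
# Back-of-envelope for the water figures above. The body-water fraction,
# person mass, intake figures and population are the rounded numbers from
# the text; the rest is plain arithmetic, not a hydrological model.

BODY_WATER_FRACTION = 0.60          # ~60% of body mass is water
PERSON_MASS_KG = 75                 # assumed "average" adult
DAILY_INTAKE_L = (3.7 + 2.7) / 2    # rough average of the men's and women's figures
POPULATION = 8_000_000_000

LITRES_PER_KM3 = 1e12               # 1 cubic kilometre = 10^12 litres

stored_per_person_l = BODY_WATER_FRACTION * PERSON_MASS_KG   # ~1 litre per kg of water
stored_globally_km3 = stored_per_person_l * POPULATION / LITRES_PER_KM3
daily_intake_km3 = DAILY_INTAKE_L * POPULATION / LITRES_PER_KM3

print(f"Water stored per person: ~{stored_per_person_l:.0f} litres")
print(f"Water stored in all living humans: ~{stored_globally_km3:.2f} cubic km")
print(f"Global daily drinking-water requirement: ~{daily_intake_km3:.3f} cubic km")
```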

Oh, and we’re looking at increasing to 9 billion humans around 2045, but that number is based on fertility alone. There are a lot of other factors, and water is one of them.

Toss in the politics and geography of the materials technology depends on, and it’s hard to look at a prediction like that of the technological singularity and not be a bit boggled by the hubris. Now, if we could get all this data from all these silos to interact in ways that are cross-disciplinary, which an artificial intelligence could be good at, we might get less imperfect predictions.

That might be a great use of AI.

Beyond The Moat.

In the world we like to talk about, since it reflects ourselves, technology weaves dendritically through our lives. Much of it is invisible to us because it is taken for granted.

The wires overhead spark with Nikola Tesla’s brilliance, water flows in pipes whose lineage dates back to the Indus Valley of 3000–4000 BC, gas is piped in for cooking and heat, and we spend far too much time in automobiles.

Now even internet access is taken for granted by many, as social media platforms vie for time-shares of our lives, elbowing out more and more of them by giving people what they want. Except Twitter, of course. For the most part, social media is the new Hotel California – you can check out any time you like, but you may never leave as long as the people you interacted with are still there.

This is why, when I read “Panic about overhyped AI risk could lead to the wrong kind of regulation”, I wondered about what wasn’t written. It’s a very good article which underlines the necessity of asking the right questions about regulation, and it attempts to undercut some of the hype against artificial intelligence. Written by a machine learning expert, Divyansh Kaushik, and by Matt Korda, it reads well, and I agree there could be a bit too much backlash against artificial intelligence technologies.

Yet their jobs are safe. In Artificial Extinction, I addressed much the same thing, not as an expert but as a layperson who sees the sparking wires, flowing water, cars stuck in traffic, and so on. It is not far-fetched to see that the impacts of artificial intelligence are beyond the scope of what experts on artificial intelligence think. It’s what they omit from the article that should be more prominent.

I’m not sure we’re asking the right questions.

The economics of jobs gets called into question as people who spent their lives doing something find that it can be replaced. This in turn affects a nation’s economy, which in turn affects the global economy. China wants to be a world leader in artificial intelligence by 2030, but given its population and history of human rights, one has to wonder what it will do with all those suddenly extra people.

Authoritarian governments could manipulate machine learning and deep learning to assure everyone’s on the same page in the same version of the same book quite easily, with a little tweaking. Why write propaganda when you can have a predictive text algorithm with a thesaurus of propaganda strapped to its chest? Maybe in certain parts of Taliban-controlled Afghanistan, it will detect that the user is female and give her a different set of propaganda, telling the user to stay home and stop playing with keyboards.

Artificial Extinction, KnowProSE.com, May 31st 2023.

These concerns are not new, but they are made more plausible with artificial intelligence, because whoever controls the models controls much more than social media platforms. We have really no idea what they’re training the models on, where that data came from, and, let’s face it, we’re not that great with who owns whose data. Henrietta Lacks immediately comes to mind.

My mother wrote a poem about me when I joined the Naval Nuclear Propulsion program, annoyingly pointing out that I had stored my socks in my toy box as a child and contrasting it with my thought at the time that science and technology can be used for good. She took great joy in reading it to audiences when I was present, and she wasn’t wrong to do so even as annoying as I found it.

To retain a semblance of balance between humanity and technology, we need to look at our own faults. We have not been so great about that, and we should evolve our humanity to keep pace with our technology. Those in charge of technology, be it social media or artificial intelligence, are far removed from the lives of the people who use their products and services, even though they make money from the lives of those very same people. That is not an insult; it is a matter of perception.

Sundar Pichai, CEO of Google, seemed cavalier about how artificial intelligence will impact the livelihoods of some. While we all stared at what was happening with the Titan, or wasn’t, the majority of people I knew were openly discussing what sorts of people would spend $250K US to go to a very dark place to look at a broken ship. Extreme tourism, they call it, and it’s within the financial bracket of those who control technologies now. The people who go on such trips to space, or underwater, are privileged, and in that privilege have no perspective on how the rest of the world gets by.

That’s the danger, but it’s not a danger to them, and because they seem cavalier about the danger, it is a danger. These aren’t elected officials who are held accountable through democracy, as much of a strange ride as that is.

These are just people who sell stuff everybody buys, and who influence those who think themselves temporarily inconvenienced billionaires to support their endeavors.

It’s not good. It’s not really bad either. Yet we should be aspiring toward ‘better’.

Speaking for myself, I love the idea of artificial intelligence, but that love is not blind. There are serious impacts, and I agree that they aren’t the same as nuclear arms. Where nuclear arms can end societies quickly, how we use technology, and even how many are ignorant of technology, can cause something I consider worse: a slow and painful end of societies as we know them, while we don’t seem to have any plans for the new society.

I’d feel a lot better about what experts in silos have to say if they… weren’t in silos, or in castles with moats protecting them from the impacts of what they are talking about. This is pretty big. Blue-collar workers are under threat from smarter robots, white-collar workers are under threat, and even the creative are wondering what comes next as they are no longer as needed for images, video, and the like.

It is reasonable to expect a conversation that discusses these things, yet such conversations almost always take place after the fact.

We should be aspiring to do better than that. It’s not the way the world works now, and maybe it’s time we changed that. We likely won’t, but with every new technology, we should have a few people pointing that out in the hope that someone might listen.

We need leaders to understand what lies beyond the moat, and if they don’t, we should stop considering them leaders. That’s why the United States threw a tea party in Boston, and that’s why the United States is celebrating Independence Day today.

Happy Independence Day!

Bubbles Distilled By Time.

We all perceive the world through our own little bubbles. As far as our senses go, we only have touch, taste, hearing, smell and sight to go by. The rest comes from what we glean through those things, be it other people, technology, language, culture, etc.

If the bubble is too small, we feel it a prison and do our best to expand it. Once it’s comfortable, we don’t push it outward as much.

These little bubbles contain ideas that have passed down through the generations, how others have helped us translate our world and all that is in it, etc. We’re part of a greater distillation process, where because of our own limitations we can’t possibly carry everything from previous generations.

If we consider all the stuff that creates our bubble as little bubbles themselves that we pass on to the next generation, it’s a distillation of our knowledge and ideas over time. Some fall away, like the idea of the Earth being the center of the Universe. Some stay with us despite not being used as much as we might like – such as the whole concept of, ‘be nice to each other’.

If we view traffic as something moving through time, bubbles are racing toward the future all at the same time, sometimes aggregating, sometimes not. The traffic of ideas and knowledge is distilled as we move forward in time, one generation at a time. Generally speaking, until broadcast media this was a very local process. Hence the red dots trying to get us to do things, wielded by those who wish us to do things, from purchasing products to voting for politicians with their financial interests at heart.

Broadcast media made it global, at first by giving people information and then by broadcasting opinions to become sustainable through advertising. Social media has become the same thing. How will artificial intelligences differ? Will ChatGPT suddenly spew out, “Eat at Joe’s!”? I doubt that.

However, those with fiscal interests can decide what the deep learning of artificial intelligences is exposed to. Machine learning is largely about clever algorithms and pruning the data that the algorithms are trained on, and those doing that are certainly not the most unbiased of humanity. I wouldn’t say that they are the most biased either – we’re all biased by our bubbles.
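As a toy illustration of how much that pruning step alone can steer an outcome, here’s a minimal sketch. The corpus, the filter and the naive next-word counter are all invented for illustration; real training pipelines are far more involved, but the curation principle is the same.

```python
# Toy sketch: the same naive "predict the next word" idea, trained on a corpus
# before and after someone prunes it. The corpus and the filter are invented
# for illustration; data curation steers real pipelines in much the same way.
from collections import Counter

corpus = [
    "the economy is growing", "the economy is struggling",
    "the economy is growing", "the economy is uncertain",
    "the economy is struggling",
]

def next_word_counts(sentences, prefix):
    """Count which word most often follows the given prefix."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        for i in range(len(words) - len(prefix)):
            if words[i:i + len(prefix)] == prefix:
                counts[words[i + len(prefix)]] += 1
    return counts.most_common()

prefix = ["economy", "is"]
print("Unpruned corpus:", next_word_counts(corpus, prefix))

# Someone decides "struggling" shouldn't be in the training data.
pruned = [s for s in corpus if "struggling" not in s]
print("Pruned corpus:  ", next_word_counts(pruned, prefix))
```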

It’s Pandora’s Box. How do we decide what should go in and what should stay out? Well, we can’t, really. Nobody is actually telling us what’s in them now. Our education systems, too, show us that this is not necessarily something we’re good at.

Distilling Traffic.

Having pulled Data Transfer out of cars, I’ll revisit traffic itself:

“…Each of them is a physical record of their ancestors, dating back to their, marked by life events – living memory. In minds alone, each human brain is 100 terabytes, with a range of 1 Terabyte to 2.5 Petabytes according to present estimates. Factor in all the physical memory of our history and how we lived, we’re well past that…”

me, Traffic, RealityFragments, June 6th 2023

So while we’re all moving memory in traffic, we’re also moving history. Our DNA holds about 750 megabytes of our individual ancestry, according to some sources, as well as a lot of tweaks to our physiology that make us different people. Let’s round off the total memory to 2 terabytes: 1 conservative terabyte for what our brain holds and roughly another terabyte of DNA (conservative here, liberal there…). 100 cars with only drivers is 200 terabytes.

Conservatively. Sort of. Guesstimate built of guesstimates. It’s not so much about the values as the weight, as you’ll see.
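For what the guesstimate is worth, here it is spelled out as a tiny sketch, using only the rounded, hand-wavy figures above rather than anything measured.

```python
# The guesstimate above, spelled out. Every figure is the rounded value from
# the text (brain ~1 TB at the conservative end of the 1 TB to 2.5 PB range,
# ~750 MB of DNA rounded up very liberally to ~1 TB); a weight comparison,
# not a measurement.

BRAIN_TB = 1.0            # conservative end of the brain-capacity estimates
DNA_TB = 1.0              # ~750 MB of DNA, rounded up generously
PER_PERSON_TB = BRAIN_TB + DNA_TB

cars = 100
occupants_per_car = 1     # drivers only
total_tb = cars * occupants_per_car * PER_PERSON_TB

print(f"{cars} cars with only drivers: ~{total_tb:.0f} TB of memory in traffic")
```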

Nature uses only the longest threads to weave her patterns, so that each small piece of her fabric reveals the organization of the entire tapestry.

Richard Feynman, Chapter 1, The Law of Gravitation, p. 34 – The Character of Physical Law (1965)

Now, from all that history, we have ideas that have been passed on from generation to generation. Books immediately come to mind, as do other things like language, culture and tradition. All of these pass along ideas from generation to generation, distilling things toward specific ends even while we distill our own environment to our own ends, or to no end at all, which is an end in itself. That’s a lot of information linked together, and that information is linked to the ecological systems that we’re connected to and their history.

Now, we’re beginning to train artificial intelligences on training models. What is in those training models? In the case of large language models, probably lots of human writing. In the case of images, lots of images. And so on. But these models are disconnected in ways that we are not, and we are connected in ways that we’re still figuring out.

I mean, we’re still learning some really interesting stuff about photosynthesis, something most of us were likely taught about in school. So these data models that AIs are being trained on through deep learning are subject to change, and have to be changed, as soon as information in those data models is outdated.

Who chooses what gets updated? It’s likely not you or me since we don’t even know what’s in these training models. For all we know, it’s data from our cellphones tracking us in real time, which isn’t that farfetched, but for now we can be fairly sure it’s someone who has decided what is in the machine learning models in the first place. Which, again, isn’t us.

What if they decide to omit… your religious text of choice? Or let’s say that they only want to train it on Mein Kampf and literature of that ilk. Things could go badly, and while that’s not really in the offing right now… we don’t know.

This impacts future generations and what they will do and how they will do it. It even impacts present generations. This seems like something we should be paying attention to.

We all live in our own little bubbles, after all, and our bubbles don’t have much influence on learning models for artificial intelligence. That could be a problem. How do we deal with it?

First, we have to start with understanding the problem, and most people, myself included, are only staring at pieces of the problem from our own little bubbles. Applications like ChatGPT just distill bubbles, depending on their models.

Facebook, Google, et al: It’s Not The Data, It’s The Context.

Dylan Curran recently published “Are you ready? Here is all the data Facebook and Google have on you” – an article which should open the eyes of anyone who uses Facebook or Google.

It’s a good article, and it shows how much data people give up freely – who doesn’t have a Gmail account or a Facebook page these days? – but it’s lacking something that most people miss, largely because they’re thinking of their own privacy or lack of it.

I requested my data from both sites – Facebook had 384 megabytes on me, and my Google data I will get on April 7th, since I opted for the 50 gigabyte archive. All this data, though, is limited to what I have done.

It lacks the context. We are all single trees in the forest, and these companies aren’t so much in the habit of studying trees by themselves. They have the data of the forest of trees. That context, those interactions, you can’t really download. The algorithms they have derive data from what we hand over so willingly because it costs us nothing financially.

So, while they can give us our data, and some companies do, they can’t give us someone else’s data – we only get the data on that single tree, ourselves. We learn only a small amount of what their algorithms have decided about us, and while Facebook has a way to show you some of those decisions, it is not compelled to tell you everything about your digital shadow. Your digital shadow has no rights, yet it is used to judge you.

What’s your context? That’s the real question. It’s what they don’t show you, what they have decided about you from your habits, that they don’t truly share. That is, after all, their business.
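To make the tree-versus-forest point concrete, here’s a minimal sketch. The people, interests, friendships and the crude inference rule are all invented for illustration; real platforms use far richer signals, but the asymmetry is the same: you can download your own records, not the inferences drawn from everyone else’s.

```python
# Toy sketch of "your data" versus "your context". Everything here is invented
# for illustration: exporting one node's records loses what the graph implies.

profiles = {
    "you":   {"interests": {"hiking"}},
    "alice": {"interests": {"hiking", "crypto"}},
    "bob":   {"interests": {"crypto", "day trading"}},
}
friendships = {("you", "alice"), ("alice", "bob")}

def data_export(person):
    """What a data-download request returns: only your own records."""
    return profiles[person]

def inferred_context(person):
    """What the platform can derive: interests that flow through the graph."""
    neighbours = {b for a, b in friendships if a == person}
    neighbours |= {a for a, b in friendships if b == person}
    inferred = set()
    for neighbour in neighbours:
        inferred |= profiles[neighbour]["interests"]
    return inferred - profiles[person]["interests"]

print("Your export:       ", data_export("you"))
print("Inferred about you:", inferred_context("you"))   # {'crypto'}
```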

Know that, be conscious of it… and don’t be an idiot online, no matter how smart you think you are. Everything you do is analyzed by an algorithm.

Nature and Data Structures (2013)

Cactus Flower Blooms (at night)

I haven’t written much of late as I moved to Florida last week and have been busy networking, job hunting, writing about the journey and taking pictures. I’ll be writing more often.

With the recent return to Florida, I’ve clearly been working on finding work, amongst other things. I’ve also been enjoying the flora and fauna, thanks to the good fortune of finding temporary lodging at a friend’s home. This reminded me this morning of how often people at Honeywell, during my time there, thought I was goofing off when I walked outside and stared at the trees. I wasn’t really goofing off. I was considering the natural structures and finding some assistance in designing data structures for the work I was doing.

Natural data architectures are compelling, simple at some levels and very complex at others. Almost all of them are built on osmosis, where concentration gradients let atoms and molecules wander through permeable membranes – not unlike voltage driving current through a resistance, or pressure driving water through a plumbing system. The difference between natural structures and artificial structures is that, as Feynman once said,

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

Failed data structures in nature are pretty easy to spot.

They’re dead.

Yet even in death they have value – they are recycled, the essence of the philosophical ‘rebirth’ found in some religions. In a well operating ecosystem, nothing is wasted – everything that is ‘alive’ or ‘dead’ has some worth to the ecosystem or it is quickly replaced.

The flower on the left is a picture of a cactus flower I took last night. It opens only at night.

This can be related to a structure such as a website. The flower has a purpose which, as most would understand it, is marketing. It has a definite demographic that it markets to. I’m not sure exactly what it attracts, but I’d wager it is targeting nocturnal insects and perhaps even bats – but whatever its market, it isn’t the classic stuff people are taught about in school, such as bees and birds.

Once pollinated, the structure goes about doing what most other flowers do – something pretty well documented anywhere. But this particular data structure is interesting in that it has evolved over millennia to bloom at night, when it’s cool, when life is more mobile in climates where the days are decidedly hot. It’s a wonderfully beautiful thing that most people don’t get to see because they’re not out at night. The scent is wonderful as well.

Studying data structures like this, looking for hints from nature on how to do something, provides us with methods for making a better data ecosystem.

Maybe the internet and social media would be a better place if more software developers stepped outside a bit more often. The days of software architects and developers fearing sunlight have passed.