NYT Says No To Bots.

Content for training large language models and other AIs is something I have written about before, including how to opt out of being crawled by AI bots. The New York Times has updated its Terms and Conditions to disallow that – which I’ll get back to in a moment.

It’s an imperfect solution for so many reasons, and as I wrote before about opting out of AI bots, it seems backwards.

In my opinion, they should allow people to opt in rather than this nonsense of having to go through the motions to protect one’s content from being used as part of a training model.
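For anyone who does go through those motions today, the blunt instrument available is robots.txt. A minimal sketch, assuming the crawler user agents publicly documented around this time (OpenAI’s GPTBot and Common Crawl’s CCBot) and assuming the crawlers choose to honor the file at all:

```
# robots.txt – a polite request, not an enforcement mechanism
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

That voluntary compliance is exactly the problem: the burden sits on every site owner, one file at a time, which is part of why opt-out seems backwards to me.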

Back to the New York Times.

…The New York Times updated its terms of service Aug. 3 to forbid the scraping of its content to train a machine learning or AI system.

The content includes but is not limited to text, photographs, images, illustrations, designs, audio clips, video clips, “look and feel” and metadata, including the party credited as the provider of such content.

The updated TOS also prohibits website crawlers, which let pages get indexed for search results, from using content to train LLMs or AI systems…

“The New York Times Updates Terms of Service to Prevent AI Scraping Its Content”, Trishla Ostwal, Adweek.com, August 10th, 2023.

This article was then referenced by The Verge, which added a little more value.

…The move could be in response to a recent update to Google’s privacy policy that discloses the search giant may collect public data from the web to train its various AI services, such as Bard or Cloud AI. Many large language models powering popular AI services like OpenAI’s ChatGPT are trained on vast datasets that could contain copyrighted or otherwise protected materials scraped from the web without the original creator’s permission…

“The New York Times prohibits using its content to train AI models”, Jess Weatherbed, TheVerge.com, August 14th, 2023.

That’s pretty interesting considering that Google and the New York Times updated their agreement on News and Innovation on February 6th, 2023.

This all falls into a greater context in which many media organizations have called for rules protecting copyright in data used to train generative AI models, in a letter you can see here.

Where does that leave us little folk? Strategically, bloggers have been a thorn in the side of the media for a few decades, driving down costs for sometimes pretty good content. Blogging is the grey area of the media, and no one really seems to want to tackle that.

I should ask WordPress.com what their stance is. People on Medium and Substack should ask their platforms for a stance as well.

Speaking for myself – if you want to use my content for your training model so that you can charge money for a service, hit me in the wallet – or hit the road.

The Ongoing Saga of the ‘AI’pocalypse

I ran across a surprisingly well done article on the AIpocalypse thing, which I have written about before in ‘Artificial Extinction’, and it’s worth perusing.

“…In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and target disinformation were among “my areas of greatest concern.”

Even in more ordinary use cases, however, there are concerns. The same tools have been called out for offering wrong answers to user prompts, outright “hallucinating” responses and potentially perpetuating racial and gender biases.”

“Forget about the AI apocalypse. The real dangers are already here”, Catherine Thorbecke, CNN, June 16th, 2023.

Now, let me be plain here. When they say an AI is hallucinating, that’s not really true. Saying it’s ‘bullshitting’ would be closer to the truth, but it’s not even really that. It’s a gap in the training data and algorithms, made apparent by the prompt you give it. It’s not hallucinating. They’re effectively anthropomorphizing some algorithms strapped to a thesaurus when they say ‘hallucinating’.

They’re trying to make you hallucinate, maybe, if we go by possible intentions.

“…Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, told CNN that some companies may want to divert attention from the bias baked into their data and also from concerning claims about how their systems are trained.

Bender cited intellectual property concerns with some of the data these systems are trained on as well as allegations of companies outsourcing the work of going through some of the worst parts of the training data to low-paid workers abroad.

“If the public and the regulators can be focused on these imaginary science fiction scenarios, then maybe these companies can get away with the data theft and exploitative practices for longer,” Bender told CNN…”

“Forget about the AI apocalypse. The real dangers are already here”, Catherine Thorbecke, CNN, June 16th, 2023.

We don’t like to talk about the intentions of people involved with these artificial intelligences and their machine learning. We don’t know what models are being used for the deep learning, and to cover that gap of trust, words like ‘hallucinating’ are much more sexy and dreamy than, “Well, it kinda blew a gasket there. We’ll see how we can patch that right up, but it can keep running while we do.”

I’m not saying that’s what’s happening, but it’s not a perspective that should be dismissed. There’s a lot at stake, after all, with artificial intelligence standing on the shoulders of humans, who are distantly related to kids who eat Tide Pods.

We ain’t perfick, and thus anything we create inherits that.

I think the last line of that CNN article sums it up nicely.

“…Ultimately, Bender put forward a simple question for the tech industry on AI: “If they honestly believe that this could be bringing about human extinction, then why not just stop?””

“Forget about the AI apocalypse. The real dangers are already here”, Catherine Thorbecke, CNN, June 16th, 2023.

That professor just cut to the quick in a way that made me smile. She just straight out said it.

And.

When we talk about biases – and I’ve written about bias a lot lately – we don’t see everything that is biased on our own. In an unrelated article, Barbara Kingsolver, the only two-time winner of the Women’s Prize for Fiction, drew my attention to a bias I hadn’t considered in the context of deep learning training data.

“…She is also surprisingly angry. “I understand why rural people are so mad they want to blow up the system,” she says. “That contempt of urban culture for half the country. I feel like I’m an ambassador between these worlds, trying to explain that if you want to have a conversation you don’t start it with the words, ‘You idiot.’”…”

“Barbara Kingsolver: ‘Rural people are so angry they want to blow up the system’”, Lisa Allardice quoting Barbara Kingsolver, The Guardian, June 16th, 2023.

She’s not wrong – and the bias is by omission, largely, on both the rural and urban sides (suburbia has a side too). So how does one deal with that in a training model for machine learning?

We’ve only scratched the surface, now haven’t we? Perhaps just scuffed.

Internet Detritus.

Back in 1996, I was driving to work in the Clearwater, Florida area and saw a billboard for Brainbuzz.com, now viewable only through the Wayback Machine. I joined, and ended up writing for them. They’re not around anymore.

They became CramSession.com, where I continued writing for them. I wrote roughly 100 articles about software engineering and C++ which are just… gone. Granted, that was over two decades ago, but it’s peculiar to have outlived all these companies that thrived during the Dot Com Bubble – which should be taught in high school now as part of world history. It isn’t, of course, but it should be.

Consciously, we distill good things and keep moving them forward, but sometimes, because of copyright laws, things get orphaned in companies that closed their digital doors. It’s hard to convey this lack of permanence to future generations, because the capacity for things to last ‘forevah’ seems built into some social media – yet content gets hidden away by algorithms, which is effectively the same thing as being gone.

Sometimes bubbles of information get trapped in the walls of an imploded company. It could happen even to the present 800 lb gorillas of the Internet. The future is the one thing nobody will tell you in their end-of-year posts: it’s unpredictable. The world changes more and more rapidly, and we forget how much gets left behind at times.

“When the Lilliputians first saw Gulliver’s watch, that “wonderful kind of engine…a globe, half silver and half of some transparent metal,” they identified it immediately as the god he worshiped. After all, “he seldom did anything without consulting it: he called it his oracle, and said it pointed out the time for every action in his life.” To Jonathan Swift in 1726 that was worth a bit of satire. Modernity was under way. We’re all Gullivers now. Or are we Yahoos?”

Faster: The Acceleration of Just About Everything, James Gleick, 2000.

What’s really funny about that quote is that Yahoo.com was more of a player in the search engine space back then. In fact, in 1998 Yahoo was the most popular search engine, and that it’s still around is actually a little impressive given all that happened after the Dot Com Bubble popped. So the quote itself hasn’t aged that well, which demonstrates the point I am making.

Nothing really lasts on the Internet. Even with the Wayback Machine (thank you, Internet Archive!), much of what was is simply no longer – subject to which companies owned the copyrights to the information, or to the simple matter of which things have been kept around through what boils down to popularity.
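As an aside, the Internet Archive does expose a small availability API that returns the closest archived snapshot of a URL – which is how I can still point at a 1996 billboard’s website at all. A minimal sketch in Python, assuming the `requests` library is installed:

```python
import requests

def latest_snapshot(url: str):
    """Ask the Internet Archive for the closest archived snapshot of a URL."""
    response = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url},
        timeout=10,
    )
    response.raise_for_status()
    closest = response.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

# The billboard company from 1996, long gone from the live web.
print(latest_snapshot("brainbuzz.com"))
```

Even then, what got archived at all boils down to what got crawled – popularity again.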

And what’s popular isn’t always good. I submit to you any elected official you dislike to demonstrate that popularity is subjective – and on the Internet, popularity is largely about marketing and money spent toward that end. The Internet, as it stands, is the house that we built based on what made money.

That’s not particularly attractive.

In the end, it all sort of falls away. And coming generations will see it as well; some may have already begun seeing it.

Who decides what stays on the Internet? Why, we do of course, one click at a time.

Now imagine this fed into an artificial intelligence’s deep learning model. The machine learning would be taught only what has survived, not what has failed – and this could be seen as progress. I think largely it is, despite myself – but what important stuff do we leave behind?

We don’t know, because it ain’t there.

Our Own Wall.

One of the more profound biases we have when it comes to our technology is just how stupid we can be. Ignorant, too, because we often forget just how little we know in the grand scheme of things – which lies well beyond our sight at any time, no matter how well informed we are.

It’s the Dunning-Kruger effect at levels depending on which mob we talk about and what that particular mob is made up of. Are they more open-minded than closed-minded? Are they open to surprises?

We always end up surprised by something, and that’s a good thing. We don’t get new knowledge without being surprised in some way.

To be surprised means that something has to come leaping out of the dense grass of our biases and attack us or help us in some way. Surprise is important.

Personally, I like being surprised because it means something is new to me.

I’m not writing about a chocolate cake lurking in a dark room; I’m writing about expecting a result and getting something different – though exploring a new chocolate cake is also something I don’t mind. No, what I’m writing about is that unexpected outcome that has you wondering why it was unexpected.

That leads us to find out why, and that’s where we get new knowledge: from asking the right questions.

It occurs to me that in creating this marketing of ‘artificial intelligence’, we’ve created idiots. I thought we had enough, but apparently we need more. They don’t ask questions. They are better informed than our idiots, mind you, but someone gets to pick which distilled learning model they’re informed by.

I call them idiots not because they give us answers, sometimes wrong ones, but because they don’t ask questions. They don’t learn. We have a fair number of systems on the planet that we created which are in stasis instead of learning, and we’ve added new ones to the list.

I expect the marketers will send out a catalog soon enough of dumb systems marketed as smart technology.

Meanwhile, new generations may forget questioning, and that seems like it’s something we shouldn’t forget.

Bubbles Distilled By Time.

We all perceive the world through our own little bubbles. As far as our senses go, we only have touch, taste, hearing, smell and sight to go by. The rest comes from what we glean through those things, be it via other people, technology, language, culture, etc.

If the bubble is too small, we feel it a prison and do our best to expand it. Once it’s comfortable, we don’t push it outward as much.

These little bubbles contain ideas that have been passed down through the generations – how others have helped us translate our world and all that is in it, and so on. We’re part of a greater distillation process where, because of our own limitations, we can’t possibly carry everything from previous generations.

If we consider all the stuff that creates our bubble as little bubbles themselves that we pass on to the next generation, it’s a distillation of our knowledge and ideas over time. Some fall away, like the idea of the Earth being the center of the Universe. Some stay with us despite not being used as much as we might like – such as the whole concept of, ‘be nice to each other’.

If we view traffic as something moving through time, bubbles are racing toward the future all at once, sometimes aggregating, sometimes not. The traffic of ideas and knowledge is distilled as we move forward, one generation at a time. Generally speaking, until broadcast media this was a very local process. Now we have red dots trying to get us to do things, wielded by those who wish us to do things, from purchasing products to voting for politicians with their financial interests at heart.

Broadcast media made it global, at first by giving people information and then by broadcasting opinions to become sustainable through advertising. Social media has become the same thing. How will artificial intelligences differ? Will ChatGPT suddenly spew out, “Eat at Joe’s!”? I doubt that.

However, those with fiscal interests can decide what the deep learning of artificial intelligences is exposed to. Machine learning is largely about clever algorithms and pruning the data those algorithms are trained on, and the people doing that pruning are certainly not the most unbiased of humanity. I wouldn’t say they are the most biased either – we’re all biased by our bubbles.
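To make that concrete, here is a toy sketch in Python – not any real pipeline, just an illustration that whoever filters the training data decides what the system can ever surface:

```python
# A toy illustration: the curator's filter, applied before any training
# happens, decides what the system can ever "know".
corpus = [
    "the earth orbits the sun",
    "be nice to each other",
    "some once believed the earth was the center of the universe",
]

def curate(documents, banned_word):
    """One line of curation silently drops a whole idea."""
    return [doc for doc in documents if banned_word not in doc]

training_set = curate(corpus, "universe")

def recall(query_word, documents):
    """Stand-in for a trained model: it can only surface what survived."""
    return [doc for doc in documents if query_word in doc]

print(recall("universe", training_set))  # [] - the dropped idea is simply gone
```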

It’s Pandora’s Box. How do we decide what should go in and what should stay out? Well, we can’t, really. Nobody is actually telling us what’s in them now. Our education systems, too, show us that this is not necessarily something we’re good at.

Distilling Traffic

Having pulled Data Transfer out of cars, I’ll revisit traffic itself:

“…Each of them is a physical record of their ancestors, dating back to their, marked by life events – living memory. In minds alone, each human brain is 100 terabytes, with a range of 1 Terabyte to 2.5 Petabytes according to present estimates. Factor in all the physical memory of our history and how we lived, we’re well past that…”

me, “Traffic”, RealityFragments, June 6th, 2023.

So while we’re all moving memory in traffic, we’re also moving history. Our DNA holds about 750 megabytes of our individual ancestry, according to some sources, as well as a lot of tweaks to our physiology that make us different people. Let’s round off the total memory to 2 terabytes: 1 conservative terabyte for what our brain holds and roughly another terabyte for DNA (conservative here, liberal there…). 100 cars with only drivers is 200 terabytes.

Conservatively. Sort of. A guesstimate built of guesstimates. It’s not so much about the values as the weight, as you’ll see.
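For what it’s worth, here’s that guesstimate arithmetic spelled out in Python, using the rough numbers above rather than anything measured:

```python
# Back-of-the-envelope numbers - guesstimates, not measurements.
brain_tb = 1.0   # conservative end of the 1 TB to 2.5 PB estimates cited above
dna_tb = 1.0     # ~750 MB of DNA, rounded up very generously ("liberal there")

per_person_tb = brain_tb + dna_tb   # 2 TB per person
cars = 100                          # 100 cars, one driver each

print(f"{cars * per_person_tb:.0f} TB")  # -> 200 TB
```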

Nature uses only the longest threads to weave her patterns, so that each small piece of her fabric reveals the organization of the entire tapestry.

Richard Feynman, The Character of Physical Law (1965), Chapter 1, “The Law of Gravitation”, p. 34.

Now, from all that history, we have ideas that have been passed on from generation to generation. Books immediately come to mind, as do other things like language, culture and tradition. All of these pass along ideas from generation to generation, distilling things toward specific ends even while we distill our own environment to our own ends – or a lack thereof, which is itself an end. That’s a lot of information linked together, and that information is linked to the ecological systems that we’re connected to and their history.

Now, we’re beginning to train artificial intelligences with training models. What’s in those training models? In the case of large language models, probably lots of human writing. In the case of images, lots of images. And so on. But these models are disconnected in ways that we are not, and we are connected in ways that we’re still figuring out.

I mean, we’re still learning some really interesting stuff about photosynthesis, something most of us were likely taught about in school. So these data models AIs are being trained on through deep learning are subject to change, and have to be changed as soon as information in the model is outdated.

Who chooses what gets updated? It’s likely not you or me, since we don’t even know what’s in these training models. For all we know, it’s data from our cellphones tracking us in real time – which isn’t that farfetched – but for now we can be fairly sure it’s someone who decided what went into the machine learning models in the first place. Which, again, isn’t us.

What if they decide to omit… your religious text of choice? Or let’s say that they only want to train it on Mein Kampf and literature of that ilk. Things could go badly, and while that’s not really in the offing right now… we don’t know.

This impacts future generations and what they will do and how they will do it. It even impacts present generations. This seems like something we should be paying attention to.

We all live in our own little bubbles, after all, and our bubbles don’t have much influence on learning models for artificial intelligence. That could be a problem. How do we deal with it?

First, we have to start with understanding the problem, and most people, including myself, are only staring at pieces of the problem from our own little bubbles. Applications like ChatGPT just distill bubbles, depending on their models.

Bias in AI, Social Media, and Beyond.

One of the things that is hard to convey to many people is how bias actually affects things. So I’ll offer a unique perspective, one that involves hamburgers.

All good stories should have a good burger of some sort, whatever your meat or lack of meat allows for. Some people will see ‘burger’ and go for the default of beef in their head, some people will think chicken or turkey or lamb or mushroom or… that right there is a bias.

I’ll go a bit further.

My father, well into his 50s, felt like having a hamburger and I asked him why we didn’t just make them instead of going out and buying some crappy burgers. He admitted something that floored me.

He didn’t know how to make them. Here he was, having lived decades eating burgers, but he never learned how to make burger patties. My father. The guy who always seemed within 10 feet of a burger joint when it came to feeding times.

Now, why was that?

First, he grew up in a Hindu home, and beef was not on the menu at home. He never would have been exposed in that household to how to make a beef patty – or beef anything, for that matter. So he had an implicit bias from the start against knowing how to make a hamburger.

He did, according to his oral history, like eating hamburgers, and would go to a place near his school to eat some. His eyes would glow when he discussed that memory, as simple as it might be.

Now, he also got married in the 1970s in the U.S., and Mom handled all the cooking. We cooked burgers there, but he managed not to learn about making the patties. He worked the night shift, so he wasn’t around most of the day anyway. More bias toward him not learning how to make a hamburger – which an American of his generation generally considers an art form – but he was not American. More bias.

After decades, he assumed that learning how to make them was beyond him – which seemed peculiar considering how much time and care he would put into an omelette.

If my father were an AI of some sort and you asked him how to make a beef patty, he would likely have said, “they come in stores.” While not knowing how to make burger patties is a pretty low threshold when compared to human extinction, it’s not hard to see how omitting information can be a handicap and create a bias.

It’s also not hard to see that creating information or perspectives can create bias as well. If we don’t teach an AI about weight loss, it might suggest amputation to someone wondering how to lose weight – and even recommend low weight prosthetics. Ridiculous, but we never thought kids would be eating Tide Pods. We don’t exactly have as high a threshold as we might like to think.

There are good and bad biases, and they’re largely subjective. We see systemic biases now over all sorts of things – can you imagine them happening faster and more efficiently?

Aside from the large sweeping biases of culture, the artificial construct of race, and the availability of information, what other biases do you think can impact an artificial intelligence? Social media? Beyond?

Artificial Extinction.

The discussion regarding artificial intelligence continues, with the latest round of cautionary notes making the rounds. Media outlets are covering it, like CNBC’s “A.I. poses human extinction risk on par with nuclear war, Sam Altman and other tech leaders warn”.

Different versions of that article written by different organizations are all over right now, but it derives from one statement on artificial intelligence:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Center for AI Safety, Open Letter, undated.

It seems a bit much. Granted, depending on how we use AI, we could be on the precipice of a variety of unpredictable catastrophes, and while pandemics and nuclear war definitely pose direct physical risks, artificial intelligence poses more indirect risks. I’d offer that this can make it more dangerous.

In the context of what I’ve been writing about, we’re looking at what we feed our heads with. We’re looking at social media being gamed to cause confusion. These are dangerous things. Garbage in, Garbage out doesn’t just apply to computers – it applies to us.

More tangibly, though, it can adversely impact our way(s) of life. We talk about the jobs it will replace, with no real plan for how to employ those displaced. Do people want jobs? I think that’s the wrong question, one we got stuck with in the old paint on society’s canvas. The more appropriate question is, “How will people survive?” – a question we overlook because of the assumption that if people want to survive, they will want to work.

Is it corporate interest that is concerned about artificial intelligence? Likely not – they like building safe spaces for themselves. Sundar Pichai mentioned having more lawyers, yet a lawyer got himself into trouble when he used ChatGPT to write court filings:

“The Court is presented with an unprecedented circumstance,” Castel wrote in a previous order on May 4. “A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases… Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

The filings included not only names of made-up cases but also a series of exhibits with “excerpts” from the bogus decisions. For example, the fake Varghese v. China Southern Airlines opinion cited several precedents that don’t exist.”

“Lawyer cited 6 fake cases made up by ChatGPT; judge calls it ‘unprecedented’”, Jon Brodkin, Ars Technica, May 30th, 2023.

It’s a good thing there are a few people out there relying on facts instead of artificial intelligence, or we might stray into a world of fiction created by those who control the large language models and the general artificial intelligences that will come later.

Authoritarian governments could manipulate machine learning and deep learning to assure everyone’s on the same page in the same version of the same book quite easily, with a little tweaking. Why write propaganda when you can have a predictive text algorithm with a thesaurus of propaganda strapped to its chest? Maybe in certain parts of Taliban-controlled Afghanistan, it will detect that the user is female and give her a different set of propaganda, telling her to stay home and stop playing with keyboards.

It’s not hard to imagine all of this. It is a big deal, but in parts of the world like Trinidad and Tobago you don’t see much about it, because there’s no real artificial intelligence here – even as local newspaper headlines indicate real intelligence in government might be a good idea. The latest article I found on it in local newspapers online is from 2019, but fortunately we have TechNewsTT around discussing it. Odd how that didn’t come up in a Google search of “AI Trinidad and Tobago”.

There are many parts of the world where artificial intelligence is completely off the radar as people try to simply get by.

The real threat of any form of artificial intelligence isn’t as tangible to people as nuclear war or pandemics. It’s how it will change our way(s) of life, how we’ll provide for our families.

Even the media only points at what we want to see, since the revenue model is built around that. The odds are good that we have many blind spots the media doesn’t show us even now, in a world where everyone who can afford it has a camera and the ability to share information with the world – but it gets lost in the shuffle of social media algorithms if it’s not organically popular.

This is going to change societies around the globe. It’s going to change global society, where the access to large language models may become as important as the Internet itself was – and we had, and still have, digital divides.

Is the question who will be left behind, or who will survive? We’ve propped our civilizations up with all manner of things that have not withstood previous changes in technology, and this is a definite leap beyond those.

How do you see the next generations going about their lives? They will be looking for direction, and presently, I don’t know that we have any advice. That means they won’t be prepared.

But then, neither were we, really.

You Ain’t Just the Medium.

There are some topics I’ve been writing about that people may not realize are connected, but they are. When I wrote about how we humans, we algorithms, are doing bonsai on ourselves and on artificial intelligences, it was not just happenstance.

We are a medium. Just one on the planet, but we are a medium, built upon a medium of a planet, and we’re building other mediums even while we interact in increasingly positive ways with other mediums as we grow to understand them.

The medium is the message.

Marshall McLuhan, Understanding Media : The Extensions of Man (1964)

This is important to understand. Regardless of how you believe this world came into being, we all should know by now about DNA, and we recognize that other living creatures also have DNA. Some of it is close to matching ours, but the results are quite different from us.

We’re a 96% match to chimpanzees, and I’m fairly certain chimpanzees know we’re very different from them in many ways.

Our DNA varies within our species as well, with what we call recessive and dominant genes and all their complexity of impacting everything from hair color to deciding whether our big toe is dominant on our feet or not.

We have social attributes, which could also be seen as mediums, since they too are canvases upon which we decorate our pieces of time. Language, culture, and religion (or the lack thereof) are some of the substrates upon which we grow our own mediums.

We aren’t just surrounded by information. We are information. We are history without words; a strand of our DNA tells us the path we traversed through time to get where we are.

It doesn’t tell us why we traversed that particular path to get here. That’s for the archaeologists and others to piece together from fragments of physical history.

We are our own bonsai, where our trunk and branches show where we have grown from – the trail through time and the history of how we got where we are.

Each one of us, as an individual, has our own root system, our own trunk, our own branches. Each one of us is both medium and message, impacting our societal medium and message, all part of a planetary ecosystem of mediums and messages.

Everything we see has information, from the tree outside that has evolved over millions of years to that one specimen, to the street which follows a path dictated by many human variables.

If we stand back and look at our planet, allowing ourselves to fade into the background, we’re not too far off from Douglas Adams’ allegory of the Earth being a computer designed to figure out the meaning of life. In fact, many people who have read Douglas Adams don’t necessarily understand how true it is.

It’s sort of like the Gaia hypothesis, though there are issues with mixing metaphor with mechanism, among other things. Popular thought on evolution ascribes intentionality to evolution, as if there were some guide to it all, but adaptation to survive is quite possibly the only intention.

We tend to ascribe intention and look for intention where there may be none. Intention makes us feel more comfortable, but it isn’t necessarily true.

“This is rather as if you imagine a puddle waking up one morning and thinking, ‘This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn’t it? In fact it fits me staggeringly well, must have been made to have me in it!’ This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, frantically hanging on to the notion that everything’s going to be alright, because this world was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.”

Douglas Adams, The Salmon of Doubt (2002)

We tend to think ourselves the center of our universe of existence, and we often treat ourselves as the North Star of the planet. This is likely natural; everything we know, every bit of information we process, comes to us through our senses.

Although the medium is the message, the controls go beyond programming. The restraints are always directed to the “content,” which is always another medium. The content of the press is literary statement, as the content of the book is speech, and the content of the movie is the novel. So the effects of radio are quite independent of its programming.

Marshall McLuhan, Understanding Media (1964)

This is why McLuhan balanced his over-used quote, “The medium is the message”, with what was the technological equivalent of machine learning in his time: radio.

Radio connected a world over distances previously daunting, and while it was mostly a broadcast medium then, his focus needs to be understood.

Communication has evolved well beyond that in a little over half a century. ‘Programming’, thanks to Web 2.0, is now a matter of choosing people’s social media messages so that they build their own narratives. Web 2.0 provided us the illusion of choice.

The medium was the message, the message became medium, the media became the message, and so on.

We forget that we, too, are a medium, though we don’t altogether understand the message; maybe we’re in the process of finding out what it is.

It gets deeper, too, but I’ll leave you with one more quotation from McLuhan, who happened to say and write quite a few things that continue to make sense to this day.

Media are means of extending and enlarging our organic sense lives into our environment.

Marshall McLuhan, “The Care and Feeding of Communication Innovation”, Dinner Address to Conference on 8 mm Sound Film and Education, Teachers College, Columbia University, 8 November 1961.

Artifice Girl

With all that’s being marketed as artificial intelligence out there, this could be an interesting movie for at least some people who might like to see a merging of technology and humanity.

If you don’t appreciate movies driven entirely by dialog, this is not your movie. There’s a bit of suspended disbelief, too, that may not sit well with some people, but it is a movie, and like most things out of Hollywood, it’s pretty easy to find some flaws when compared with the real world.

Still. The idea of using a chatbot to catch pedophiles is not bad. It’s also not new.

If you’ve never heard of Negobot, or Lolita Chatbot, it became public in 2013 – about a decade before ‘Artifice Girl’ – and if some of the dialog wasn’t borrowed from that Wikipedia page, I would be surprised.

Even so, it was a pretty good movie. Topical in how we are responsible for what we create, topical in how imperfect we are as a species, and topical in how we ourselves are reflected in our technology, like so many bits of broken glass on the floor sometimes.

Overall, I think it could be a fairly important movie at this time since everyone is agog over large language models such as ChatGPT.

See below for the trailer.