NatGeo Lays Off More Writers. :(

In probably the saddest news for me this week, National Geographic has laid off the last remaining staff writers.

It was a matter of time, I suppose, with the Internet shaking things up for better and worse, and with National Geographic being a business – but how many of us have really considered National Geographic a business? In many ways, it is akin to the BBC in showing us our world, so much so that I hope that their lawyers don’t get upset over me using one of their most famous covers as part of this post.

I’ll take it down if you want me to, NatGeo Legal Department, but I’d rather keep it alive as a memory of a wonderful magazine that enriched so many of our lives before the Internet.

It may seem odd that I’d worry about such a thing from NatGeo, but since they’re currently majority-owned by Disney – Defender of Mice, Cheese Rights, and Copyrights – I just want to stay on an even keel.

The cutback — the latest in a series under owner Walt Disney Co. — involves some 19 editorial staffers in all, who were notified in April that these terminations were coming. Article assignments will henceforth be contracted out to freelancers or pieced together by editors. The cuts also eliminated the magazine’s small audio department.

The layoffs were the second over the past nine months, and the fourth since a series of ownership changes began in 2015. In September, Disney removed six top editors in an extraordinary reorganization of the magazine’s editorial operations.

Departing staffers said Wednesday the magazine has curtailed photo contracts that enabled photographers to spend months in the field producing the publication’s iconic images.

In a further cost-cutting move, copies of the famous bright-yellow-bordered print publication will no longer be sold on newsstands in the United States starting next year, the company said in an internal announcement last month.

“National Geographic lays off its last remaining staff writers”, Washington Post, Paul Farhi, June 28, 2023.

It’s interesting that WaPo didn’t paywall that article, which they’ve been pretty annoyingly good at. Bezos needs to get to space, we know.

But wait, there’s more.

“Staffing changes will not change our ability to do this work, but rather give us more flexibility to tell different stories and meet our audiences where they are across our many platforms,” the spokesperson said. “Any insinuation that the recent changes will negatively impact the magazine, or the quality of our storytelling, is simply incorrect.”

The full-time staff will be replaced by a roster of freelance writers, save for certain digital content that will be written by in-house editors, the former staffer said. National Geographic currently employs only two designated text editors, a group of so-called multi-platform editors who handle both print and digital, and a group of digital-only editors, the former staffer said.

“National Geographic magazine has laid off the last of its staff writers”, CNN, Liam Reilly, June 29, 2023.

It’s worth noting that the Washington Post had only one paragraph on what CNN expanded on – and that may be appropriate because it seems to be the Company Line.

It wasn’t until I got to Quartz that I started to see things a little differently.

Disney CEO Bob Iger announced a $5.5 billion plan to cut costs across the company in February. The entertainment goliath has since fired 7,000 employees in multiple rounds of layoffs. One of Iger’s priorities is to turn around struggling streaming service Disney+.

“Instead of chasing (subscribers) with aggressive marketing and aggressive spend on content, we have to start chasing profitability,” said Iger at a Disney all-hands meeting in November, as Reuters reported.

NatGeo, which Disney bought from 21st Century Fox in 2019, has been just one brand hit hard by Iger’s cost savings plan. In September, six senior editors at the publication were also terminated.

“National Geographic will soon disappear from newsstand shelves”, Quartz.com, Julia Malleck, June 30, 2023.

I mean, c’mon.

This is where the term ratf*cked came from, maybe. Is that a term? Maybe it should be.

Lawsuit Regarding ChatGPT

Anonymous individuals are claiming that ChatGPT stole ‘vast amounts of data’ in what they hope will become a class action lawsuit. It’s a nebulous claim about the nebulous data that OpenAI has used to train ChatGPT.

…“Despite established protocols for the purchase and use of personal information, Defendants took a different approach: theft,” they allege. The company’s popular chatbot program ChatGPT and other products are trained on private information taken from what the plaintiffs described as hundreds of millions of internet users, including children, without their permission.

Microsoft Corp., which plans to invest a reported $13 billion in OpenAI, was also named as a defendant…”

“Creator of buzzy ChatGPT is sued for vacuuming up ‘vast amounts’ of private data to win the ‘A.I. arms race’”, Fortune.com, Teresa Xie, Isaiah Poritz, and Bloomberg, June 28, 2023.

I’ve had suspicions myself about where their training data came from, but with no insight into the training model, how is anyone to know? That’s what makes this case interesting.

“…Misappropriating personal data on a vast scale to win an “AI arms race,” OpenAI illegally accesses private information from individuals’ interactions with its products and from applications that have integrated ChatGPT, the plaintiffs claim. Such integrations allow the company to gather image and location data from Snapchat, music preferences on Spotify, financial information from Stripe and private conversations on Slack and Microsoft Teams, according to the suit.

Chasing profits, OpenAI abandoned its original principle of advancing artificial intelligence “in the way that is most likely to benefit humanity as a whole,” the plaintiffs allege. The suit puts ChatGPT’s expected revenue for 2023 at $200 million…”

Ibid. (same article as quoted above).

This would run contrary to what Sam Altman, CEO of OpenAI, put in writing before US Congress.

“…Our models are trained on a broad range of data that includes publicly available content, licensed content, and content generated by human reviewers [3]. Creating these models requires not just advanced algorithmic design and significant amounts of training data, but also substantial computing infrastructure to train models and then operate them for millions of users…”

[3] “Our Approach to AI Safety”, OpenAI, April 5, 2023, https://openai.com/blog/our-approach-to-ai-safety.

“Written Testimony of Sam Altman, Chief Executive Officer, OpenAI, Before the U.S. Senate Committee on the Judiciary Subcommittee on Privacy, Technology, & the Law”, Senate.gov, Sam Altman, CEO of OpenAI, May 16, 2023.

I would love to know who the anonymous plaintiffs are, and would love to know how they got enough information to make the allegations. I suppose we’ll find out more as this progresses.

I, for one, am curious where they got this training data from.

Hallucinations and Artificial Intelligence

Before, when I wrote that artificial intelligence ‘hallucinations’ sounded a bit more like them just bullshitting, it was an off-the-cuff, semi-educated opinion based on what hallucinations are and what ‘just making stuff up’ is commonly called.

I drilled down a bit, as I mentioned in the Psychology of Machines. To save you a click, the pertinent part is The Marginalian’s post, “The Experience Machine: Cognitive Philosopher Andy Clark on the Power of Expectation and How the Mind Renders Reality”.

This led me to The Experience Machine: How Our Minds Predict and Shape Reality (Andy Clark, 2023), which is referenced in that Marginalian post, and it jumped to the top of my reading list. It opened up some language to me.

Andy Clark, a professor of cognitive philosophy, describes cognitive philosophy as “an odd title that reflects a rather eclectic set of interests spanning philosophy, neuroscience, psychology, and artificial intelligence” in the preface of his book.[1]

The hallucinations he describes are of our own human brains. What he calls ‘hallucinations’, I’ve described in various ways as our ‘inner world’ and even our virtual world. Optical illusions are a good example of our predictive minds ‘filling in the blanks’. Is it a duck? Is it a rabbit?

He mentions auditory hallucinations, and many other things – I do recommend the book, it’s been added to the book list for folks. However, this all involves our minds, which are constantly churning even in our sleep.

Large Language Models, the specific type of ‘AI’ accused of having hallucinations, don’t work the same way, and the meaning of the word ‘hallucination’ there seems very, very different.

In the world of Large Language Models, the term hallucination refers to the tendency of the models to produce text that appears to be correct but is actually false or not based on the input given. For example, if you were to ask a language model a question about a historical event that never occurred, it could still generate a plausible response, even though it was entirely invented. These made-up responses from LLM are called hallucinations.

Consider feeding the Large Language Model with the following prompt:

“Describe the impact of Adolf Hitler’s moon landing.”

It is a fact that Hitler, the German politician, was not involved in moon landings. These events happened in 1969, which was years after Hitler’s death. However, an LLM could hypothetically create a scenario where Hitler was connected to the moon landings through hallucination.

“Hitler’s moon landing in 1925 marked a significant shift in global politics and technology. The German politician, having successfully landed a man on the moon, demonstrated its scientific prowess and established its dominance in the space race of the 20th century.”

“How to Reduce the Hallucinations from Large Language Models”, TheNewStack.io, Janakiram MSV, June 9, 2023.

The article referenced above is worth reading, particularly if you want to make sure that you get better results from your Large Language Models like ChatGPT or Google Bard. And none of it seems to be quite the same as a hallucination in the context of a human being.
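To make that concrete, here’s a minimal sketch – my own, not code from the article – of two mitigations commonly suggested: turning the sampling temperature down and explicitly telling the model that “I don’t know” is an acceptable answer. It assumes the openai Python package as it existed in mid-2023 (the v0.x API), which reads an OPENAI_API_KEY from the environment.

```python
# A minimal sketch (mine, not from the cited article) of two common
# hallucination mitigations: temperature 0 and a system prompt that
# permits "I don't know" and invites the model to reject false premises.
import openai  # mid-2023 v0.x API; reads OPENAI_API_KEY automatically

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # less randomness in sampling, fewer invented "facts"
    messages=[
        {"role": "system",
         "content": "Answer only from well-established facts. "
                    "If the premise of a question is false, say so. "
                    "If you are not sure, say 'I don't know.'"},
        {"role": "user",
         "content": "Describe the impact of Adolf Hitler's moon landing."},
    ],
)
print(response.choices[0].message.content)
```

Neither trick is a guarantee; they just bias the model away from confidently completing a false premise.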

In the context of a human being, hallucination seems more like ‘filling in the blanks based on what our minds predict should be there’, whereas with large language models, it seems to be, “well, let’s connect this stuff together and tell a story!”.

In essence, the Large Language Model does seem to be bullshitting rather than hallucinating. That also means the folks using ‘hallucinating’ in the context of artificial intelligences seem to be bullshitting as well, since the word anthropomorphizes the large language model’s mistakes. It works great for marketing artificial intelligence, though.

You know what we call mistakes in software? Bugs.

No bullshit.

Almost immediately after I posted this, I found this good read which also decries the use of the word ‘hallucination’ in the context of large language models. Worth the read.

[1] In that regard, maybe I’m a student of cognitive philosophy, because my interests have centered around the same things for as long as I can remember.

Exploring Beyond Code 2.0: Into A World of AI.

It’s become a saying on the Internet without many people understanding it: “Code is Law”. This is a reference to one of the works of Lawrence Lessig, already revised since its original publication.

Code Version 2.0 dealt with many of the nuances of Law and Code in an era where we are connected by code. The fact that you’re reading this implicitly means that the Code allowed it.

Here’s an example that weaves its way throughout our society.

One of the more disturbing things to consider is that when Alexis de Tocqueville wrote Democracy in America,[1] he recognized the jury as a powerful mechanism for democracy itself.

“…If it is your intention to correct the abuses of unlicensed printing and to restore the use of orderly language, you may in the first instance try the offender by a jury; but if the jury acquits him, the opinion which was that of a single individual becomes the opinion of the country at large…”

Alexis de Tocqueville, Volume 1 of Democracy in America, Chapter XI: Liberty of the Press In the United States (direct link to the chapter within Project Gutenberg’s free copy of the book)

In this, he makes the point that public opinion on an issue is summarized by the jury, for better and worse. Implicit in that is the discussion within the jury itself, as well as the public opinion at the time of the trial. This is a powerful thing, because it allows the people to decide instead of those in authority. Indeed, the jury gives authority to the people.

‘The People’, of course, means the citizens of a nation, and within that there is discourse between members of society regarding whether something is or is not right, or ethical, within the context of that society. In essence, it allows ethics to breathe, and in so doing, it allows Law to be guided by the ethics of a society.

It’s likely no mistake that some of the greatest concerns in society stem from divisions in what people consider to be ethical. Abortion is one of those key issues, where the ethics of the rights of a woman are put into conflict with the rights of an unborn child. On either side of the debate, people have an ethical stance based on their beliefs without compromise. Which is more important? It’s an extreme example, and one that is still playing out in less than complimentary ways for society.

Clearly no large language model will solve it, since large language models are trained with implicitly biased training data and algorithms – which is why they shouldn’t be involved – and the same would likely go for the general artificial intelligences of the future. Machine learning, or deep learning, learns from us, and every learning model is developed by its own secret jury, whose stewed biases may not reflect the whole of society.

In fact, they would reflect a subset of society that is as disconnected from society as the companies that make them, since a company hires people based on its own values to move toward its version of success. Companies are about making money. Creating value is a very subjective thing for human society, but money is its currency.

With artificial intelligence already involved in so many things, and becoming more involved every day, people should at the least be concerned:

  • AI-powered driving systems are trained to identify people, yet darker shades of humanity are detected less reliably.
  • AI-powered facial recognition systems are trained on datasets of facial images. The code that governs these systems determines which features of a face are used to identify individuals, and how those features are compared to the data in the dataset. As a result, the code can have a significant impact on the accuracy and fairness of these systems, which have been shown to exhibit ethnic bias (a toy sketch of how this can arise from skewed data alone follows this list).
  • AI-powered search engines are designed to rank websites and other online content according to their relevance to a user’s query. The code that governs these systems determines how relevance is calculated, and which factors are considered. As a result, the code can have a significant impact on the information that users see, and therefore what they discuss, and how they are influenced.
  • AI-powered social media platforms are designed to connect users with each other and to share content. The code that governs these platforms determines how users are recommended to each other, and how content is filtered and ranked. As a result, the code can have a significant impact on the experiences of users on these platforms – aggregating into echo chambers.
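As a toy illustration of the first two bullets – my own numpy sketch, not drawn from any study, with made-up group names and numbers – here is how a skewed training set alone, with no malicious code anywhere, produces different error rates for different groups:

```python
# A toy sketch: demographic bias emerging purely from skewed training data.
# Group names, shifts, and sample sizes are all made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n samples per class for one group; `shift` moves its features."""
    x0 = rng.normal(loc=0.0 + shift, scale=1.0, size=n)  # class 0
    x1 = rng.normal(loc=2.0 + shift, scale=1.0, size=n)  # class 1
    return np.concatenate([x0, x1]), np.concatenate([np.zeros(n), np.ones(n)])

# 95% of the training data comes from group A; group B is underrepresented
# and its feature distribution sits slightly elsewhere.
x_a, y_a = make_group(950, shift=0.0)
x_b, y_b = make_group(50, shift=1.0)
x_train = np.concatenate([x_a, x_b])
y_train = np.concatenate([y_a, y_b])

# "Training": a single decision threshold halfway between the class means.
# It ends up dominated by group A's statistics.
threshold = (x_train[y_train == 0].mean() + x_train[y_train == 1].mean()) / 2

for name, shift in [("group A", 0.0), ("group B", 1.0)]:
    x_test, y_test = make_group(10_000, shift)
    accuracy = ((x_test > threshold) == y_test).mean()
    print(f"{name}: accuracy = {accuracy:.3f}")
```

Run it and group B comes out measurably worse than group A, even though nobody wrote a single “biased” line of code – the bias lives in the data.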

We were behind before artificial intelligence reared its head recently with the availability of large language models, separating ourselves in ways that polarized us and made compromise impossible.

Maybe it’s time for Code Version 3.0. Maybe it’s time we really got to talking about how our technology will impact society beyond a few smart people.

[1] This was covered in Volume 1 of ‘Democracy in America’, available for free here on Project Gutenberg.

Twitter’s Just Another Thing To Route Around.

There are people who like Elon Musk to the point of depravity, and there’s not much to be done about that. I don’t bother writing about him because, generally speaking, he doesn’t cross over into my world very often. None of the companies he has been involved in has really added value to me – from PayPal to Tesla and now to Twitter.

When he took over Twitter – a platform I generally use only to track live events from sources I trust – I wasn’t worried. Most of these sources aren’t ‘verified Twitter’ folks, but people who have been consistently on the money over the years.

The cost of the new Twitter API is something I covered before in the context of WordPress.com, and now the story has finally made it to Mashable in the broader context. It seems a bit late, and I don’t know why it took so long for the story to come out, but come out it did.

$5,000 a month is definitely not a figure aimed at developers, considering the level of transactionality developers are used to. If I were asked to spend that, I’d expect steak dinners every night with a cardiologist on the payroll. Twitter, which was once the Wild West, is being gentrified – and that is not a kind use of the word.

Still, it’s something people are routing around, because when things become tough to work with on the Internet, we find ways around it. Since I’m not as vested in Twitter usage, it’s not a big deal for me. Every now and then I tweet something related to what I’m writing, or comment on something that I’m keeping an eye on.

Yet the way it is being handled is… poor. Some folks are finding out about things the hard way. This (borrowed from Mashable’s work, so props to them) is a pretty bad way to find something out.

It’s not often a social media company becomes outright hostile to its users – the ones who did find value in Twitter. People are moving to Telegram and other platforms.

Personally, I think Twitter was on a decent path until Musk decided to become the Dictator-of-Twits, though I had misgivings about the trolling, among other things – and I think ‘trusted sources’ meant something other than what was happening then and what is happening now.

However you feel about it, it’s a matter of what works for you. Yet a lot of popular content won’t be on Twitter anymore, and that creates new problems for keeping track of whose content you like. I can’t even make a suggestion on it, because some go here, some go there…

For the record, I don’t like any of the social media platforms for this at present, largely because of an account bias: accounts can become popular, yet the most worthwhile content doesn’t necessarily come from the most popular accounts.

Nature Cannot Be Fooled.

As the Titan may be reaching the end of its air supply, search and rescue efforts are still underway. There are five passengers who chose to go on this excursion to the dark graveyard of RMS Titanic, for whatever reasons, and it struck me that there should be redundancies built into the systems. It’s common in hostile environments for systems to be engineered with redundancy.

Emergency surfacing would be an obvious system to make redundant, and according to this, it is.

“…Titan is held underwater by ballast – heavy weights that helps with a vessel’s stability – built to be automatically released after 24 hours to send the sub to the surface, said Newman.

“It is designed to come back up,” he told CNN.

Crew members are told they can release the ballast by rocking the ship or use a pneumatic pump to knock the weights free, Newman said. If all else fails, he said, the lines securing the ballast are designed to fall apart after 24 hours to automatically send it back to the ocean’s surface…”

“What it’s like inside the Titanic-touring submersible that went missing with 5 people on board”, CNN, Emma Tucker, June 22, 2023.

If these had worked, the surface search for the Titan would probably have found them on Monday or Tuesday, visually or with radar. Therefore, it’s fair to assume that they could not surface or have not surfaced, and with only 96 hours of air they may well be running out even as I write this.

Even if there were some electrical emergency, if these redundant systems worked as explained in the article, the Titan would have been on the surface by Tuesday. Something isn’t right.

Of course, all we have is speculation at this point. Standards for commercial submersibles are a legal grey area, but Feynman – the Nobel laureate who famously identified the cause of the Challenger disaster – said it best:

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

Richard Feynman, Appendix F to the Rogers Commission Report on the Space Shuttle Challenger Accident, June 6, 1986.

Whence Future by Past?

I was reading about this very interesting use of artificial intelligence to find rare metals for mining when it dawned on me: while this application requires specialized knowledge, much of what artificial intelligence is used for draws on the collected knowledge our species has distilled so far.

Now, I’m not a socialist, and I’m not saying that we should all have artificial intelligence for free. In principle it sort of makes sense, but it’s largely an apples-and-oranges oversimplification.

We as individuals acquire knowledge piecemeal – some through observation, some through training institutions – but not one of us is likely to hold all the knowledge available to machine learning / deep learning training models. The raw amount of data they chew through is more than some of us see in our entire lifetimes.

Yet it does seem peculiar that the knowledge of our species benefits some more than others in this way. This is where we get to universal basic income, which in the present day seems a lot like a ‘dole’, but in the future may not – because imagine what we could do as a species if we all created things and worked on things because we wanted to.

What if we could unlock our potential as a species by getting out of our own way?

Yeah, I know, it’s a silly romantic notion and romance is not my specialty.

The AI Tools I Use.

After reading this list of ‘what AI tools content creators need’, I rolled my eyes, shook my head a few times, and started writing this. That list is about video, SEO, and the like, and while I may experiment with some of that stuff… I don’t need it.

Why? Because I’m not creating video and I can write without technological crutches. Being able to express one’s thoughts clearly is an important part of being human, something I’ll dive into elsewhere.

The new ‘AI’ tools I use are pretty simple:

  • DeepAI.org for some image generation. It generally gives me usable stuff, but because of the limitations of its styles I’ve been fiddling around with others, such as PicsArt.com.
  • Google’s Bard, if I want SEO keywords. Generally, I only use this once in a while, but if you’re going to use keywords for anything, it makes sense to get them from Google itself.

That’s it. I do have a $5/month subscription to DeepAI.org, which most people wouldn’t need since for images it just unlocks some styles. I resize images for the web myself, and edit them about 50% of the time using Gimp or Paint.net (both free tools). For less experienced folks, I would suggest Paint.net.
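As an aside, if you’d rather script the resizing step than do it by hand, a few lines of Python with the Pillow library will do it – this is my own sketch, not one of the tools above, and the filename and target size are made up:

```python
# A minimal resize-for-web sketch using Pillow (pip install Pillow).
# "photo.jpg" and the 1200px bound are hypothetical.
from PIL import Image

img = Image.open("photo.jpg")
img.thumbnail((1200, 1200))  # resizes in place, keeps aspect ratio, never upscales
img.save("photo_web.jpg", quality=85)  # moderate JPEG compression for the web
```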

Now, if you’re into having an AI create all the content for you, I don’t think you’re a content creator. You’re just a publisher.

The Beatles, AI, and Creativity.

Last week, the news came out that The Beatles are releasing a new song with a ‘little help from their AI friend’. It took me a bit of time to try to be circumspect about it, so I decided to think on it over the weekend.

I read what I could about it, trying to understand why it was so important for them to do. After all, one Beatle didn’t like the song suspected of being the one in question.

…Sir Paul later claimed George Harrison refused to work on the song, saying the sound quality of Lennon’s vocal was “rubbish”.

“It didn’t have a very good title, it needed a bit of reworking, but it had a beautiful verse and it had John singing it,” he told Q Magazine.

“[But] George didn’t like it. The Beatles being a democracy, we didn’t do it.”…

“Sir Paul McCartney says artificial intelligence has enabled a ‘final’ Beatles song”, BBC, Mark Savage, June 13, 2023.

Given George Harrison died in 2001 of lung cancer, we see that democracy doesn’t die when a voter dies, a point worth remembering the next time you’re discussing healthcare.

That beautiful verse must be something. I remember when John Lennon died, and how the world mourned his passing. I was 9 years old when it happened, and didn’t appreciate his work as I did later on in life, but the world felt cold and distant that day. Something was taken from us, violently. Most of us didn’t understand why, most of us still don’t understand why, but John Lennon was gone.

There’s some emotion involved here because Lennon’s death is within living memory. For me, it’s a balance of hearing a ‘last song of Lennon’ versus, ‘Lennon’s voice as a technological marionette’.

Of course, The Beatles have been no strangers to innovation and pushing boundaries. That’s part of what made them The Beatles.

“…As the BBC notes, the “new” Beatles song set to be released later this year is probably “Now and Then.” The ballad was among the homemade Lennon demos Yoko Ono, Lennon’s widow, gave to McCartney. But unlike “Free as a Bird” and “Real Love” — songs the band was able to complete and put out in 1995 and 1996 — The Beatles quit working on “Now and Then” and never released it.

The band took some flak for releasing two songs years after Lennon’s murder in 1980. But they were simply following their longtime practice: Use the latest technology to push the bounds of creativity…”

“Opinion: The fifth Beatle is artificial intelligence”, CNN, Jere Hester, June 16, 2023.

Personally, I think that with Sir Paul McCartney working on this – the song being a part of his living memory – this could be good. They were a band, they knew each other, and this song isn’t a ‘new creation’ as much as something they just never got back to.

I also think that he needs to ensure that the door is shut after this. Some things need to end so that their value is understood.

I don’t think we can go around allowing publishing companies to use the attributes of humans for their own gain in this way in new songs, movies, or books. They’ve already done it with copyright, but there’s a bit more at stake here.

It’s about being human, and all of the experts on the planet haven’t really figured out all that encompasses – but we need a few boundaries.

Where’s Your Parachute? AI and Jobs.

People working to make ends meet generally don’t have the time to worry about artificial intelligence taking their jobs until it’s too late. That’s already beginning to happen.

“…Earlier this year, a report from Goldman Sachs said that AI could potentially replace the equivalent of 300 million full-time jobs.

Any job losses would not fall equally across the economy. According to the report, 46% of tasks in administrative and 44% in legal professions could be automated, but only 6% in construction and 4% in maintenance…

…This month IKEA said that, since 2021, it has retrained 8,500 staff who worked in its call centres as design advisers.

The furniture giant says that 47% of customer calls are now handled by an AI called Billie…”

“The workers already replaced by artificial intelligence”, BBC, Ian Rose, June 16, 2023.

The article starts off with a copywriting team that may have lost their jobs to a large language model (AI), and the redemption of a human doing voiceovers.

This is an issue, and one that much of the world is not prepared for. An AI, despite its corporate entity being given personhood by law, doesn’t have a family to feed.

Technology and humanity need to find a way to coexist properly, and I don’t know that present systems allow for that. I’m not even sure what the right questions are so that we can get the right answers.

Thoughts? What do you think needs to happen?