The Walls Have Ears.

Years ago, I had the then-new Amazon Echo, I had multiple Kindles, and I had a cough. A bad cough. A cough so bad that I ended up going to the hospital and got some scary news, which is a story by itself.

What was weird was that the Kindles started showing ads for cough drops and cough syrups. Just out of the blue. I hadn’t shopped for those on Amazon, and I think it unlikely that they were getting updates from my pharmacy on my over-the-counter habits.

This was creepy.

I donated the Echo to someone else, and the Kindles started having advertisements for books that were semi-interesting again. No more over the counter stuff for coughs. This is purely anecdotal, but as someone who does value his privacy, I opted to simply not have it around. My life was complete without an Echo and I began questioning why I had gotten it in the first place.

Since then, I’ve just quietly nodded my head when people say that they think devices are listening to them. If poked with a stick, I tell the story. Mobile phones, with all the apps that use voice, are a big hole.

Let’s be honest about ourselves: we are, collectively, pretty bad at settings and at making sure we don’t leak information we don’t want to. It’s not completely our fault, either. Staying on top of software settings when the software is in a constant state of being updated is not an easy task.

It turns out that people who have been concerned about it, as I am, may have a reason, though it’s being denied:

...In a Nov. 28 blog post (which also has been deleted), CMG Local Solutions said its “Active Listening” technology can pick up conversations to provide local advertisers a weekly list of consumers who are in the market for a given product or service. Examples it cited of what Active Listening can detect included “Do we need a bigger vehicle?”; “I feel like my lawyer is screwing me”; and “It’s time for us to get serious about buying a house.”

There’s a big question as to why someone would even make that claim in the first place without it being true. Maybe it was a drunk intern. Maybe it was an upset employee leaving with a ‘fish in the ceiling’1.

I could put on a tinfoil hat and say that the NSA probably has backdoors on every operating system made in the United States. It’s credible after 9/11, but when I write ‘after 9/11’ I realize there’s an entire generation who doesn’t remember how things were before. Before that, we were less concerned about who was listening in on us because the need to listen to everyone was much less. The word ‘terrorism’ had many different definitions in government then and almost none of them seemed to agree. It was a troublesome time for technology.

We have generations that are blindly trusting these technologies at this point because they’ve been raised on them much as I was raised on Sesame Street. Sesame Street, though, was not too interested in my shopping habits or trying to prep me for a market to buy a certain line of hardware, software, or subscription services. When you think about it, GenX was being sold on the idea of learning stuff whereas subsequent generations have been increasingly marketed to under the guise of education.

All of this should be something that is at least on our radars, something we understand as a possibility.

If the government is doing it, we can’t really depend on them to get companies not to – and we don’t know who is doing it at all.

It takes one story – a cough around an Echo – to make it feel real, if you’re paying attention.

  1. At one company I worked for, someone who had quit had left a few fish in the ceiling tiles in a cube farm. It took months for people to find out where the smell was coming from. ↩︎

Surprise: Virtual Isn’t Actual.

Anyone who has had a passing relationship with a dictionary may notice some sarcasm in the title. Virtual, by definition, isn’t actual.

Of course, someone has to go about proving that, and that has value. There’s a semantic question of whether an artificial relationship is real or not, since ‘artificial’ itself means, by definition, made by humans. It’s easy to go down a path of thought where all relationships are artificial since they are made by humans, but that’s not really what we’re talking about at all.

We’re talking about human society, psychology, and the impact of relationships with artificial intelligences.

“Early on, [Silicon Valley companies] discovered a good formula to keep people at their screens,” said Turkle. “It was to make users angry and then keep them with their own kind. That’s how you keep people at their screens, because when people are siloed, they can be stirred up into being even angrier at those with whom they disagree. Predictably, this formula undermines the conversational attitudes that nurture democracy, above all, tolerant listening.”

“It’s easy to lose listening skills, especially listening to people who don’t share your opinions. Democracy works best if you can talk across differences by slowing down to hear someone else’s point of view. We need these skills to reclaim our communities, our democracies, and our shared common purpose.”

“Why virtual isn’t actual, especially when it comes to friends”, Sherry Turkle, Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology in the Program in Science, Technology, and Society, quoted by Liz Mineo, The Harvard Gazette, December 5th, 2023.

If that sounds familiar, it’s a recurring theme. Just last week in AI, Ethics and Us, I pointed to what Miguel Ángel Pérez Álvarez had written on the Spanish version of Wired in “IA: implicaciones éticas más allá de una herramienta tecnológica” which was in the same vein.

Turkle, giving a keynote, had more space to connect the dots, and pointed out that the algorithms Silicon Valley companies use are useful for keeping all of us attached to our screens – though I think that’s a bit unfair, since it’s technology companies generally: while there’s a concentration in Silicon Valley, companies around the world are leveraging these algorithms all the time. And as more and more people are noting, it has broader impacts than what we do as individuals.

In fact, if you look at social networks like Facebook and whatever Musk decided to call Twitter next, you’ll find people in algorithmic caves, almost unable to tunnel their way out because they’re quite happy in that algorithmic cave. Within that little cave there is an echo chamber.
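That cave isn’t magic; it’s a ranking choice. Here’s a purely hypothetical sketch – my own toy code, not any platform’s actual system – of how a feed that scores posts by overlap with what a user already engaged with digs the cave deeper with every pass:

```python
# Hypothetical sketch of an engagement-driven feed ranker.
# Posts that overlap with the user's past likes float to the top,
# so the user keeps seeing more of what they already see.
def rank_feed(posts, liked_topics):
    """Sort posts by how many topics they share with the user's likes."""
    def score(post):
        return len(set(post["topics"]) & set(liked_topics))
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topics": ["cats", "politics"]},
    {"id": 2, "topics": ["gardening"]},
    {"id": 3, "topics": ["politics", "outrage"]},
]

# A user who engaged with politics and outrage gets served more of both,
# while the gardening post sinks – the cave walls close in one pass at a time.
print([p["id"] for p in rank_feed(posts, ["politics", "outrage"])])
```

Nothing in that sketch is sinister on its face; it’s the feedback loop – likes feeding the ranker feeding the likes – that builds the echo chamber.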

An actual echo chamber created by virtual means.

AI, Ethics, Us.

Most of us live in a lot of different worlds, and we see things differently because of it. Some of us live in more than one world at a time. That’s why it’s sometimes hard for me to reconcile the promise of artificial intelligence with what we’re getting and the direction it’s going.

There’s space in this world, in research, for what we have now, which allows previously isolated knowledge to be regurgitated in a feat of math that makes the digital calculator look mundane. It’s statistics; it gives us what we want when we hit the ‘Enter’ button, and that’s not too bad.

Except it can pass for an actual mind. Previously, if you read something, you didn’t guess whether a machine threw the words together. You didn’t wonder if the teacher gave you a test generated by a large language model, and the teacher didn’t wonder if you generated the results the same way.

Now, we wonder. We wonder if we see an image. We wonder if we watch a video. We wonder enough so that the most popular female name for 2023 should be Alice.

So let me tell you where I think we should be heading with AI at this time.

What Could Be.

Everyone who is paying attention to what’s happening can see that the world is fairly volatile right now after the global pandemic, after a lot of economic issues that banks created combined with algorithmic trading… so this is the perfect time to drop some large language models in the world to make things better.

Nope.

No, it isn’t working that way. If we were focused on making the world better rather than worrying about using a good prompt for that term paper or blog post, it maybe could work that way. We could use things like ChatGPT to be consultants, but across mankind we lack the integrity to only use them as consultants.

“If anyone takes an AI system and starts designing speeches or messages, they generate the narrative that people want to hear. And the worst thing is that you don’t know that you are putting the noose around your neck alone.” The academic added that the way to counter this situation is education.

The only way to avoid manipulation is through knowledge. Without this, without information, without education, any human group is vulnerable, he concluded.1

“IA: implicaciones éticas más allá de una herramienta tecnológica”, Miguel Ángel Pérez Álvarez, Wired.com (Spanish), 29 Nov 2023.

There’s the problem. Education needs to adapt to artificial intelligence as well, because this argument, which at heart I believe to be true, does not survive its own recursion: people don’t know when it’s ethically right to use it, or even that there should be ethics involved.

As it happens, I’m pretty sure Miguel Ángel Pérez Álvarez already understands this and simply had his thoughts truncated, as happens in articles. He’s also got me wondering how different languages are handled by these Large Language Models and how different their training models are.

It’s like finding someone using an image you created and telling them, “Hey, you’re using my stuff!” and they say, “But it was on the Internet”. Nevermind the people who believe that the Earth is flat, or who think that vaccinations give you better mobile connections.

AI doesn’t bother me. It’s people, it’s habits, and in a few decades they’ll put a bandaid on it and call it progress. The trouble is we have a stack of bandaids on top of each other at this point and we really need to look at this beyond the pulpits of some billionaires who enjoy more free speech than anyone else.

  1. actual quote: “Si cualquier persona toma un sistema de IA y se pone a diseñar discursos o mensajes, te generan la narrativa que la gente quiere escuchar. Y lo peor es que tú no sabes que te estás poniendo la soga al cuello solito”. El académico añadió que la manera de contrarrestar esta situación es la educación.

    “La única manera de evitar la manipulación es a través del conocimiento. Sin este, sin información, sin educación, cualquier grupo humano es vulnerable”, finalizó.” ↩︎

Wikipedia, AI, Oh My.

One of the most disruptive things to happen during Web 2.0 is Wikipedia – displacing the Encyclopedia Britannica as an online resource, forging strategic partnerships, and building – for better and worse – an editorial community.

It has become one of the more dependable sources of information on the Internet, and while imperfect, the editors have collectively been a part of an evolution of verification and quality control that has made Wikipedia a staple.

It apparently has also been part of the training data of the large language models that we have grown to know over the past months, such as ChatGPT and Google’s Bard, which is interesting given how much volunteer work went into creating Wikipedia – something that makes me wonder if Wikimedia could be a part of the lawsuit.

This is pure speculation on my part, but given how much collective effort has gone into the many projects of Wikimedia, and given its mission is pretty clear about bringing free educational content to the world, large language models charging subscribers for that content is something that might be worth a bit of thought.

On a conference call in March that focused on A.I.’s threats to Wikipedia, as well as the potential benefits, the editors’ hopes contended with anxiety. While some participants seemed confident that generative A.I. tools would soon help expand Wikipedia’s articles and global reach, others worried about whether users would increasingly choose ChatGPT — fast, fluent, seemingly oracular — over a wonky entry from Wikipedia. A main concern among the editors was how Wikipedians could defend themselves from such a threatening technological interloper. And some worried about whether the digital realm had reached a point where their own organization — especially in its striving for accuracy and truthfulness — was being threatened by a type of intelligence that was both factually unreliable and hard to contain.

One conclusion from the conference call was clear enough: We want a world in which knowledge is created by humans. But is it already too late for that?

Jon Gertner, “Wikipedia’s Moment of Truth”, New York Times Magazine, July 18th, 2023, updated July 19th, 2023.

It is a quandary, that’s for sure. Speaking for myself, I prefer having citations on a Wikipedia page that I can research on my own – there seem to be at least some of us that trample our way through footnotes – and large language models don’t cite anything, which is the main problem I have with them.

In the facts category, I would say Wikipedia should win.

Unfortunately, time and again, the world has demonstrated that facts are sometimes a liability for selling a story, and so the concern I have is real.

Yet it could be useful to combine the two somehow.

To Err Is Not Just Human.

When I saw that only 40% of People Can Identify Bots from Humans, I wasn’t too surprised. We like to think that technology is getting smarter, but… well… there are other things that are related that we shouldn’t be too quick to discount.

…Children are taught to regurgitate what others tell them and to rely on digital assistants to curate the world rather than learn to navigate the informational landscape on their own. Schools no longer teach source triangulation, conflict arbitration, separating fact from opinion, citation chaining, conducting research or even the basic concept of verification and validation. In short, we’ve stopped teaching society how to think about information, leaving our citizenry adrift in the digital wilderness increasingly saturated with falsehoods without so much as a compass or map to help them find their way to safety. The solution is to teach the world’s citizenry the basics of information literacy…

Kalev Leetaru, “A Reminder That ‘Fake News’ Is An Information Literacy Problem – Not A Technology Problem”, Forbes.com, July 7th, 2019.

Couple that with the study that showed the average attention span is now 47 seconds, and there’s a lot of forgiveness for an effective Turing test these days. The very idea of the Turing test did not come up in a world where people thought the world was flat – and that was almost 75 years ago. No one was eating Tide Pods, either, though I do believe that’s under control now.

…Researchers were aware that these tactics might be deployed, so they specifically trained their bots to strategically utilize typos and other forms of errors in syntax and grammar to make them seem more human. Personal questions were also used fairly frequently, with participants trying to get the bots to talk about their backgrounds, assuming that bots would not be able to respond to such queries.

Once again, these bots were trained on datasets that included a wide range of personal stories, and that led to them being able to answer these questions in a way that is surprisingly similar to human beings. Hence, 32% of participants were unable to successfully identify AI during this experiment with all things having been considered and taken into account…

Zia Muhammad, “Only 40% of People Can Identify Bots from Humans”, Digital Information World, July 11th, 2023.

So what they did was make the bots make mistakes on purpose so they could fool humans better, because typos and imperfect writing are hallmarks of being human.
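As a toy illustration – my own sketch, not the researchers’ actual code – deliberately injecting keyboard-adjacent typos takes only a few lines:

```python
import random

# Hypothetical sketch: make bot output "more human" by occasionally
# swapping a letter for one of its keyboard neighbors, the way a
# hurried human typist would. The neighbor map here is a tiny sample.
KEYBOARD_NEIGHBORS = {
    "a": "qs", "e": "wr", "i": "uo", "o": "ip", "s": "ad", "t": "ry",
}

def add_typos(text, rate=0.05, rng=random):
    """Return text with roughly `rate` of eligible letters mistyped."""
    out = []
    for ch in text:
        if ch.lower() in KEYBOARD_NEIGHBORS and rng.random() < rate:
            # slip onto an adjacent key, a common human error
            out.append(rng.choice(KEYBOARD_NEIGHBORS[ch.lower()]))
        else:
            out.append(ch)
    return "".join(out)

print(add_typos("This is a perfectly typed sentence.", rate=0.2))
```

The unsettling part is how cheap it is: a five-percent error rate is one line of configuration, and the output reads as human fallibility.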

To err is bot.

The Technological Singularity: A Roundup of Perspectives Outside Tech.

Yesterday I wrote about the technological singularity as espoused by positive singularitarians who are sharing their perspectives on such a singularity – and rebutted some of the problems with the laser pointer that they want us to focus on. Fairly or unfairly, they quote Ray Kurzweil a lot.

Generally speaking, they are in the artificial intelligence business and therefore they want to sell us a future as they have done in the past, much like the paperless office as I mentioned here.

There’s more to humanity than that, I would like to think, so I’d been reading up and considering other aspects of humanity that may have things of weight to say in the context of the hypothetical technological singularity. I write ‘hypothetical’ because any prediction is hypothetical, even when you’re tilting with marketing to assure it happens.

Yesterday, I got a little sidetracked with the issue of global economic disparity versus global poverty, which I’ve resolved not to solve because I don’t think it’s meant to be solved – or an economist would have solved it already.

However, I found much that is being said outside the realms of the more pure technologists.

…The time for international political action has therefore arrived. Both AI-producer and non-AI-producer countries must come together to create an international organism of technological oversight, along with an international treaty in artificial intelligence setting forth basic ethical principles.   

The greatest risk of all is that humans might realize that AI singularity has taken place only when machines remove from their learning adaptations the flaw of their original design limiting their intelligence: human input. After all, AI singularity will be irreversible once machines realize what humans often forget: to err is human. 

“Entering the singularity: Has AI reached the point of no return?”, The Hill (Technology, Opinion), J. Mauricio Gaona, Opinion Contributor, 05/15/23.

That is, of course, a major issue. Garbage in, garbage out. If you want fewer errors, every software engineer worth the title knows you want to minimize the user’s capacity to create them. That’s a logical thing to point out.

Psychology Today had an impressively balanced article, well worth the read.

“…What does worry me is a “second singularity.”

The second singularity is not just about computers getting more powerful, which they are, but the simultaneous reduction in expertise that we are seeing in many quarters. As organizations outsource decision making authority to machines, workers will have fewer opportunities to get smarter, which just encourages more outsourcing.

The second singularity is actually much closer to us in time than Kurzweil’s original notion of a singularity. It is a second singularity in deference to Kurzweil’s analysis, rather than for chronological reasons…”

“The Second Singularity: Human expertise and the rise of AI”, Gary Klein, PhD, Psychology Today, December 3rd, 2019.

Given that the article is three and a half years old, it’s impressively descriptive and poignant for the conversation today, delving into nuanced points about expertise – some things are worth losing, some not. More people should read that article, it’s a fairly short read and well written, including suggestions on what we should do even now. It has definitely aged well.

Moving on, we get to an aspect of the economic perspective. An article on Forbes has some interesting questions, condensed below.

How will the potential of bioengineering capabilities re-define and re-design the way we produce raw materials?

How will the emerging potential of molecular manufacturing and self-replicating systems reverse the very process of globalization, as nations who own and control this technology will not need other nations as they can produce/manufacture anything they need or want in unlimited quantities?

How will blockchain based additive manufacturing create a participatory economy blurring the boundaries of national geography? How will a nation’s economy be influenced by digital manufacturing designs from anywhere and anyone?

How will nations deal with the likely collapse of the economic system in the coming years? Are they prepared?

“The End Of Work: The Consequences Of An Economic Singularity”, Jayshree Pandya (née Bhatt), Ph.D., Forbes > Innovation > AI, Feb 17, 2019.

Another article that has aged well at over four years old, because those questions have yet to be answered. Interestingly, the author also mentions Risk Group LLC, where she is the CEO. The article lists her as a former contributor, and her author page on Forbes describes her as, “working passionately to define a new security centric operating system for humanity. Her efforts towards building a strategic security risk intelligence platform are to equip the global strategic security community with the tools and culture to collectively imagine the strategic security risks to our future and to define and design a new security centric operating system for the future of humanity.”

Definitely an interesting person, and in 2019 it seems she was well aware of the challenges.

“…The shape the future of humanity takes will be the result of complex, changing, challenging and competing for technological, political, social and economic forces. While some of these forces are known, there is a lot that is still not known and the speed at which the unknowns can unfold is difficult to predict. But unless we make a strong effort to make the unknowns, known, the outcome of this emerging battle between technological singularity and economic singularity seems to be just the beginning of social unrest and turmoil…”

“The End Of Work: The Consequences Of An Economic Singularity”, Jayshree Pandya (née Bhatt), Ph.D., Forbes > Innovation > AI, Feb 17, 2019.

It’s a shame Forbes paywalls their content, or more of us might have read it when it was written. This sort of article definitely needed a wider audience in 2019, I think.

Just a glance at Risk Group LLC’s work makes it look like they have been busily working on these things. I’ll be looking their stuff over for the next few days, I expect.

In an interesting context of education and sociology, I came across an article that quotes Ethan Mollick, associate professor at the Wharton School at the University of Pennsylvania:

“The nature of jobs just changed fundamentally. The nature of how we educate, the nature of how teachers and students relate to work, all that has just changed too. Even if there’s no advancement in AI after today, that’s already happened,” said Mollick, an economic sociologist who studies and teaches innovation and entrepreneurship at Wharton.

“We are seeing, in controlled studies, improvements in performance for people doing job tasks with AI of between 20% and 80%. We’ve never seen numbers like that. The steam engine was 25%.”

“Has new AI catapulted past singularity into unpredictability?”, Karen McGregor, University World News, 27 April 2023.

Things have been changing rapidly indeed. The PC Revolution was relatively slow, the Internet sped things up and then the mobile devices took things to a higher level. The comparison to the steam engine is pretty interesting.

Lastly, I’ll leave you with an anthropological paper that I found. It’s a lengthy read, so I’ll just put the abstract below and let you follow the link. It gets into collective consciousness.

The technological singularity is popularly envisioned as a point in time when (a) an explosion of growth in artificial intelligence (AI) leads to machines becoming smarter than humans in every capacity, even gaining consciousness in the process; or (b) humans become so integrated with AI that we could no longer be called human in the traditional sense. This article argues that the technological singularity does not represent a point in time but a process in the ongoing construction of a collective consciousness. Innovations from the earliest graphic representations to the present reduced the time it took to transmit information, reducing the cognitive space between individuals. The steady pace of innovations ultimately led to the communications satellite, fast-tracking this collective consciousness. The development of AI in the late 1960s has been the latest innovation in this process, increasing the speed of information while allowing individuals to shape events as they happen.

O’Lemmon, M. (2020). The Technological Singularity as the Emergence of a Collective Consciousness: An Anthropological Perspective. Bulletin of Science, Technology & Society, 40(1–2), 15–27. https://doi.org/10.1177/0270467620981000

That’s from 2020. Thus, most of the things I’ve found have been related to present issues yet were written some time ago, hidden in the silos of specialties beyond that of just technology.

There’s definitely a lot of food for thought out there when you cast a wider net beyond technologists.

It might be nice to get a better roundup, but I do have other writing I’m supposed to be working on.

ChatGPT Migrations.

I haven’t really mentioned the ebb and flow of data streams, but it’s not that different from what we see in nature. Birds migrate. Elephants migrate. Whales migrate.

Users migrate. Sure, they move from application/service to application/service, but during the day they are more likely to use certain software/services. Then we get into weekdays and weekends, with the holidays…

People use different stuff at different times. So ChatGPT has seen such a migration, and I found it mildly disturbing:

ChatGPT is losing users for the first time ever, and those users aren’t who you would expect. Traffic to ChatGPT’s website fell by 9.7% in June, according to estimates from Similarweb, a web analytics firm. The decline was steeper in the U.S., with a 10.3% month-on-month decline, and the number of unique visitors to ChatGPT also fell by 5.7% from the previous month.

One thing is clear to Francois Chollet, a software engineer and AI researcher at Google, who told Fortune over email that “one thing is sure: it’s not booming anymore.”

Chollet thinks he knows what’s going on: summer vacation. Instead of using ChatGPT for education-related activities, the engineer said on Twitter, kids are probably playing Minecraft or enjoying summer activities. Search interest over time for ChatGPT has steadily declined, while search interest for Minecraft has steadily increased, he pointed out. 

“ChatGPT suddenly ‘isn’t booming anymore,’ Google A.I. researcher says—and kids are the big problem”, Fortune.com, Stephen Pastis.

The article notes that doing homework ranks second among ChatGPT’s uses. Personally, I’ve got a really bad history with doing homework, so I get it – but are we truly punishing the people who honestly say, “Nope, I didn’t do it,” with bad grades, while rewarding those who have ChatGPT do it? Honesty is being penalized again?

Honesty, integrity – all those things Disney remakes stories about and has kids sit down and watch – get penalized if kids don’t have ChatGPT do their homework?

This isn’t like the calculator, which took away some of the drudgery of math. ChatGPT, prompted properly, can do an entire assignment in moments, paste that into a spreadsheet…

And we have just trained primates to copy, paste, and not learn anything while those that might want to learn actually are at risk of getting lower grades.

I thought school was bad in my day…

Bubbles Distilled By Time.

We all perceive the world through our own little bubbles. As far as our senses go, we only have touch, taste, hearing, smell, and sight to go by. The rest comes from what we glean through those things, be it other people, technology, language, culture, etc.

If the bubble is too small, we feel it a prison and do our best to expand it. Once it’s comfortable, we don’t push it outward as much.

These little bubbles contain ideas that have passed down through the generations, how others have helped us translate our world and all that is in it, etc. We’re part of a greater distillation process, where because of our own limitations we can’t possibly carry everything from previous generations.

If we consider all the stuff that creates our bubble as little bubbles themselves that we pass on to the next generation, it’s a distillation of our knowledge and ideas over time. Some fall away, like the idea of the Earth being the center of the Universe. Some stay with us despite not being used as much as we might like – such as the whole concept of, ‘be nice to each other’.

If we view traffic as something going through time, bubbles are racing toward the future all at the same time, sometimes aggregating, sometimes not. The traffic of ideas and knowledge is distilled as we move forward in time, one generation at a time. Generally speaking, until broadcast media this was a very local process. Now we have red dots trying to get us to do things, wielded by those who wish us to do things, from purchasing products to voting for politicians with their financial interests at heart.

Broadcast media made it global by at first giving people information and then by broadcasting opinions to become sustainable through advertising. Social media has become the same thing. How will artificial intelligences differ? Will ChatGPT suddenly spew out, “Eat at Joes!”? I doubt that.

However, those with fiscal interests can decide what the deep learning of artificial intelligences are exposed to. Machine learning is largely about clever algorithms and pruning the data that the algorithms are trained on, and those doing that are certainly not the most unbiased of humanity. I wouldn’t say that they are the most biased either – we’re all biased by our bubbles.
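To make that concrete, here’s a hypothetical sketch – the blocklist and function are my own invention, not any lab’s pipeline – of how a curator’s choices quietly shape what a model ever gets to see:

```python
# Hypothetical sketch of training-data pruning. The blocklist is the
# curator's bias, encoded as data: whatever it excludes, the model
# simply never learns existed.
BLOCKLIST = {"eat at joes"}  # an assumed curator choice, for illustration

def prune_corpus(documents, blocklist=BLOCKLIST):
    """Keep only documents that mention none of the blocked phrases."""
    kept = []
    for doc in documents:
        lowered = doc.lower()
        if not any(phrase in lowered for phrase in blocklist):
            kept.append(doc)
    return kept

corpus = ["Eat at Joes!", "The Earth orbits the Sun."]
# The advertisement never reaches training; only the curator knows why.
print(prune_corpus(corpus))
```

The code is trivial on purpose: the interesting decisions are not in the algorithm but in who writes the blocklist, and nobody is publishing those.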

It’s Pandora’s Box. How do we decide what should go in and what should stay out? Well, we can’t, really. Nobody is actually telling us what’s in them now. Our education systems, too, show us that this is not necessarily something we’re good at.

Education and AI.

Having coffee with a friend yesterday, I was listening to her talk about meetings with teachers and encouraging her daughter to read at least 30 minutes a day. This is likely mundane conversation for many people, but the minutiae of it are interesting to me.

It is, after all, a system.

Everyone I have met or interacted with that was an educator (or librarian, they don’t get enough love) has been passionate about learning. Educational administrators I know have been passionate about test results. Test results and learning are mutually exclusive.

The tests are to prove to the masses that the educational system is working. They rarely demonstrate what one knows about anything, only what was tested – in tests that students are trained for. Learning, to me, is about not just having knowledge but applying that knowledge in practical ways. Deep understanding is not found through testing; it’s found through interaction – at least in my experience.

Sure, you can memorize equations and plug numbers into them, but unless you know more about the equations themselves, you’re just doing something a computer can do.

So let’s see what computers can do. I asked 3 different large language models, “Has the quality of education declined?”. Here are 3 different responses:

The question of whether the quality of education has declined is complex and can vary across different contexts and regions. It’s challenging to provide a definitive answer as education systems differ significantly worldwide, and numerous factors can influence quality.

While it’s true that some individuals and experts express concerns about declining educational quality, it is essential to consider various factors that impact education. These factors include changes in curriculum, teaching methods, funding, technological advancements, and societal shifts.

To assess the quality of education, it is necessary to examine specific indicators such as student performance on standardized tests, graduation rates, college enrollment rates, and feedback from educators, parents, and students. These indicators can provide insights into the overall effectiveness of education systems.

However, it is worth noting that generalizing the quality of education across all regions or institutions is not accurate, as there can be significant variations within and between countries. It is essential to consider local and contextual factors when evaluating educational quality.

ChatGPT.

As an AI language model, I cannot have an opinion. However, reports suggest that the quality of education varies by location and institution, with some improving and others declining. Factors like funding, teaching methodology, and teacher quality can also impact the quality of education. Additionally, the increasing demands of technology and the workforce may require updates to curriculum and teaching methods to effectively prepare students for the future.

DeepAI Chat.

There is no definitive answer to this question, as the quality of education can vary greatly depending on the country, school, and individual student. However, there are some general trends that suggest that the quality of education may have declined in recent years.

One trend is that students are spending less time on academic subjects and more time on extracurricular activities. This is likely due to a number of factors, including the increasing pressure to succeed in college and the growing popularity of sports and other activities.

Another trend is that teachers are facing increasing workloads and demands. This can make it difficult for them to provide students with the individual attention they need.

Finally, the curriculum in many schools is becoming more standardized, which can lead to a lack of creativity and innovation.

All of these factors can contribute to a decline in the quality of education. However, it is important to note that there are also many schools and teachers who are working hard to provide students with a high-quality education.

Here are some of the reasons why the quality of education may have declined:

  • The increasing focus on standardized testing has led to a narrowing of the curriculum and a focus on test-taking skills over critical thinking and problem-solving skills.
  • The shortage of qualified teachers has made it difficult to attract and retain the best teachers, and those who are teaching are often overworked and underpaid.
  • The increasing cost of education has made it more difficult for families to afford to send their children to college, which has led to a decline in the number of students who are graduating from college.
  • The rise of technology has made it easier for students to access information outside of the classroom, but it has also made it more difficult for them to focus and learn.

Despite these challenges, there are also many reasons to be optimistic about the future of education. There is a growing movement to reform the education system and to focus on teaching students the skills they need to succeed in the 21st century. There are also many dedicated teachers who are working hard to make a difference in the lives of their students. With hard work and dedication, we can reverse the decline in the quality of education and ensure that all students have the opportunity to succeed.

Bard.

I was a bit surprised at the differences in the responses and completely unsurprised by the commonality. I wouldn’t expect them to know, since nobody knows; all a large language model (LLM) does is draw on the information the particular model was provided when prompted with a question. The reality is that we don’t know, therefore an LLM wouldn’t know.

If anyone thinks that’s not an issue, I think they may have had an education system land on their head in a most grotesque fashion.

We’re being marketed “artificial intelligence”, machine learning, deep learning, neural networks, large language models, and just about every cool phrase that causes nerdgasms.

When we consider the status of education systems for humans – human learning – we don’t even seem to have an idea of where we are. Further, students who were once copying and pasting from the Internet for assignments now have large language models doing their homework – if they can access them.

Something doesn’t seem quite right about that. Certainly the technological progress is great, but I’m not sure we’re making smarter humans. I’m also not sure we’re making wiser humans.

What’s there to do? There seem to be at least some people thinking about the topic of education and AI, but as a society, are we too busy paying the bills and chasing red dots to have time for the future?

A Need For Patches

Brick-Moji: Thinking Face by Ochre Jelly on Flickr, public domain. Aug 4 2022.

Yesterday I was just sitting around thinking about a bunch of sentient beings on a moon of some planet way off in imaginary-land and ended up thinking a bit about education. I don’t think it was profound, really, but I think it might be of use when we think about society and how it pushes and pulls based on the bias of what we consider to be intelligent, and what we consider to be educated.

Those of us who have truly explored our world and the minds in it understand that an educated mind isn’t necessarily intelligent, and that an intelligent mind doesn’t necessarily have to be educated. Even then, we can’t decide whether there is a bias in testing – some say yes, there is, and some say no. Now here’s where it gets warped: the people who made it through the education system are the ones considered to be experts, saying that if there is bias, it’s negligible, because other people who made it through an education system say so. I’m not an expert and won’t pretend to be, but my experience shows me that there is bias, and not just in the educational system but in the systems that feed into it. It’s complex.

Spoiler: I don’t know the answer, and given my own experience I’ll say that there is room for a lot of thinking about it beyond what is said by either side of that debate.

In my mind, it’s perfectly fine to entertain more than one side of a conversation. It doesn’t mean that I have to charge off and defend arguments on one side or the other; it means I should be tearing them apart, trying to see what’s wrong with what everyone is saying, and figuring it out myself so that I can form an opinion of my own. That, I think, is what learning should be, and by extension I think that’s what education should prepare us to do – not recite memorized things to pass tests.

In an age of social media, where everyone is snapping off witty one-liners and dropping to ad hominem attacks when those don’t work, I think we forget that we should be thinking more and typing less.

Oddly enough, my Twitter feed provided me with “Thoughts Without a Thinker: Cognitive decline in an age of brilliance”. John Nosta writes of the Medieval era, when there was a Trivium that consisted of Grammar, Dialectic, and Rhetoric. From there, the Quadrivium, or ‘subjects’, was taught:

“…Each of the three parts plays an important role in a student’s educational foundation. Grammar helped students to understand the structure of language and how it worked. Dialectic helped them to ask questions and probe beneath the surface of things. And Rhetoric helped them to express their understanding in a clear and persuasive way. In a very basic way, the Trivium taught students how to assimilate and process information — in other words, how to think!…”

It’s something I didn’t know and thus a pleasure to read.

My comparisons are generally with the German university because I’d read “Academic Freedom in the Age of the University” (Walter Metzger, 1961), where Metzger wrote quite a bit about Lehrfreiheit (freedom to teach) and Lernfreiheit (freedom to learn), which are not quite the same but which I imagine were built on the Trivium and Quadrivium and inherited from them. Perhaps some scholar will correct me; I look forward to that.

In the days of the Trivium, things were a bit simpler. There weren’t things like atoms, or software. In fact, calculus wasn’t around until Leibniz and Newton came up with it independently at about the same time. The world was about observation and interpretation. Education was simpler because there was less to learn, and what we learn now by the end of a secondary school education likely exceeds a full education in that period, which took significantly more time.

In the last century alone, the leaps and bounds we have made in science are mind-boggling. Consider that penicillin was discovered by Alexander Fleming in 1928. That’s less than 100 years ago, and since then we’ve not only treated infections, we’ve created vaccinations that protect us from various diseases. We know so much more. Granted, we may not learn this stuff in secondary school, but we probably should.

But this leads us to the education system itself. Consider what Yuval Noah Harari writes in Sapiens (2018):

“…You also educate people thoroughly. From the moment they are born, you constantly remind them of the principles of the imagined order, which are incorporated into anything and everything. They are incorporated into fairy tales, dramas, paintings, songs, etiquette, political propaganda, architecture, recipes and fashions…”

In Medieval times, the Trivium would have gone along with the imagined order of the class divisions, the Church (likely Catholic in that period), and so on. In modern America, it would be freedom, individuality, and so on – derived, oddly enough, from the Church, where all souls were considered equal – and thus we get, from the Declaration of Independence:

“…We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness…”

Harari takes that apart and rebuilds it in his book in an interesting, and some may think disturbing, way, but the point is that there has been a change in that imagined order. There’s nothing wrong with an imagined order, but we have to understand that these are all imagined orders of the way things are; unless we have had some part in changing them, they simply were the way they were. In the same breath, we can talk about how that Declaration of Independence didn’t apply to former slaves of African descent, and how Malcolm X pointed out that, “We did not land on Plymouth Rock. Plymouth Rock landed on us.”

That imagined order where slavery was permissible in the United States went away. Educational segregation went away in an uproar, with young Linda Brown at the center. And the imagined order changed.

In 1920, women in the United States were recognized as having the right to vote through the 19th Amendment. Again, the imagined order changed.

That’s just the big stuff. There’s a lot in between that happened as well. All the while, the 1920s saw the beginnings of AM radio broadcasting, FM broadcasting began in the 1930s (which your radio likely still uses), and television was developing in parallel; no longer did one have to wait for updates on the world through newspapers. Personal letters might still travel by ship, since the first commercial airplane flights were also happening only about 100 years ago…

Our imagined order of things has shifted significantly. And in all of this, with the advent of social media in the last few decades, we have a generation growing up with a lot of different ‘imagined orders’ competing around the world. In fact, right now the biggest issue seems to be between authoritarian and democratic systems, but there are plenty of other things based on religion and culture as well.

In all of this – all of this – it comes back to what John Nosta wrote in his last paragraph, which I encourage you to read:

“Thoughts are the fabric of society.”

My friends, I do believe we need to patch a few things.