The Future of Social Media: Why Decentralizing and the Fediverse Matter Now More Than Ever

There was a time before social media and social networks as we know them, when people talked to each other in person, limited by geography. Then we figured out how to send our writing, and there was a period when pen pals and postcards mattered. News organizations adopted technology faster as reports came in from an ever-increasing geography until, finally, we ran out of geography.

Social media has become an integral part of our lives, connecting us to friends, families, communities, and global events in ways that were unimaginable just a few decades ago. Yet, as platforms like Facebook, Instagram, and Twitter (now X) dominate our digital landscape, serious questions arise about privacy, control, and freedom. Who owns our data? How are algorithms shaping our perceptions? Are we truly free in these spaces? Are we instead slaves to the algorithms?

It’s time to rethink social media. Enter decentralization and the Fediverse—a revolutionary approach to online networking that prioritizes freedom, community, and individual ownership.

The Problem with Centralized Social Media And Networks

At their core, mainstream social media platforms operate on a centralized model. They are controlled by corporations with one primary goal: profit. This model creates several challenges:

  1. Privacy Violations: Your data – likes, shares, private messages – becomes a commodity, sold to advertisers and third parties.
  2. Algorithmic Control: Centralized platforms decide what you see, often prioritizing sensational or divisive content to keep you engaged longer.
  3. Censorship: Content moderation decisions are made by corporations, leading to debates about free speech and fair enforcement of rules.
  4. Monopolization: A handful of companies dominate the space, stifling innovation and giving users little choice.

All of this came to the fore with the recent issues in the United States surrounding TikTok, which John Oliver recently mentioned on his show and which I mentioned here on KnowProSE.com before. The reasons given for wanting to ban TikTok are largely things other social networks already do – the difference is who they do it for, or could potentially do it for. Yes, TikTok is as guilty as any other social network of the problems above.

These are real issues, too, related to who owns what regarding… you. These platforms often leave you looking at the same kind of content, dragging you down a rabbit hole that simply reinforces your biases – and should you step out of line, you might find your reach limited or, in some cases, taken away entirely. These issues have left many users feeling trapped, frustrated, and disillusioned.

Recently, there has been a reported mass exodus from one controlled network to another – from Twitter to Bluesky.

There’s a better way.

What Is the Fediverse?

The Fediverse (short for “federated universe”) is a network of interconnected, decentralized platforms that communicate using open standards. Unlike traditional social media, the Fediverse is not controlled by a single entity. Instead, it consists of independently operated servers—called “instances”—that can interact with each other.

Popular platforms within the Fediverse include:

  • Mastodon: A decentralized alternative to Twitter.
  • Pixelfed: An Instagram-like platform for sharing photos.
  • PeerTube: A decentralized video-sharing platform.
  • WriteFreely: A blogging platform with a focus on minimalism and privacy.

These platforms empower users by giving them control over their data, their communities, and their online experiences.
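
Those open standards are concrete, not hand-waving. Account discovery across instances uses WebFinger (RFC 7033), which Mastodon and friends build on. As a rough illustration only – this is a sketch, not any platform’s official client code – here is how a handle like alice@mastodon.social maps to the standard WebFinger lookup URL one instance would use to find an account on another:

```python
# Sketch: turn a Fediverse handle ('user@instance') into the WebFinger
# discovery URL that instances use to locate each other's accounts.
from urllib.parse import quote

def webfinger_url(handle: str) -> str:
    """Map 'user@instance' to that instance's WebFinger endpoint."""
    user, _, host = handle.partition("@")
    resource = quote(f"acct:{user}@{host}", safe=":@")
    return f"https://{host}/.well-known/webfinger?resource={resource}"

print(webfinger_url("alice@mastodon.social"))
# https://mastodon.social/.well-known/webfinger?resource=acct:alice@mastodon.social
```

No central registry is involved: any server can resolve any handle by asking the handle’s own host, which is the whole point.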


Why Decentralization Matters

  1. Data Ownership: In the Fediverse, your data stays under your control. Each server is independently operated, and many prioritize privacy and transparency.
  2. Freedom of Choice: You can choose or create a server that aligns with your values. If you don’t like one instance, you can switch to another without losing your connections.
  3. Resilience Against Censorship: No single entity has the power to shut down the entire network.
  4. Community-Centric: Instead of being shaped by algorithms, communities in the Fediverse are human-driven and often self-moderated.

How You Can Join the Movement

  1. Explore Fediverse Platforms: Start by creating an account on Mastodon or another Fediverse platform. Many websites like joinmastodon.org can help you find the right instance.
  2. Support Decentralization: Advocate for open standards and decentralized technologies in your circles.
  3. Educate Others: Share the benefits of decentralization with your friends and family. Help them see that alternatives exist.
  4. Contribute to the Ecosystem: If you’re tech-savvy, consider hosting your own instance or contributing to open-source projects within the Fediverse.

The Call to Action

Social media doesn’t have to be controlled by a handful of tech giants. The Fediverse represents a vision for a better internet—one that values privacy, freedom, and genuine community. By choosing decentralized platforms, you’re taking a stand for a more equitable digital future.

So, what are you waiting for? Explore the Fediverse, join the conversation, and help build a social media landscape that works for everyone, not just the corporations.

Take the first step today. Decentralize your social media life and reclaim your digital freedom!

joinmastodon.org

When You Can’t Trust Voices.

Generative AI is allowing people to do all sorts of things, including imitating voices we have come to respect and trust over the years. In the most recent case, that of Sir David Attenborough, he objects strongly and finds it ‘profoundly disturbing’.

His voice is being used in all manner of ways.

It wasn’t long ago that Scarlett Johansson suffered a similar insult, one that was quickly ‘disappeared’.

The difference here is that a man who has spent decades showing people the natural world is having his voice used in disingenuous ways, and it should give us all pause. I use generative artificial intelligence, as do many others, but there is no way I would even consider misrepresenting what I write or work on in the voice of someone else.

Who would do that? Why? It dilutes the trust those voices carry. Sure, it can be funny to have a narration by someone like Sir David Attenborough, or Morgan Freeman, or… all manner of people… but to trot out their voices to misrepresent truth is a very grey area in an era of half-truths and outright lies being distributed on the Babel of the Internet.

Somewhere – I believe it was in Lessig’s ‘Free Culture’ – I had read that the UK allowed artists to control how their works were used. A quick search turned this up:

The Copyright, Designs and Patents Act 1988, is the current UK copyright law. It gives the creators of literary, dramatic, musical and artistic works the right to control the ways in which their material may be used. The rights cover: Broadcast and public performance, copying, adapting, issuing, renting and lending copies to the public. In many cases, the creator will also have the right to be identified as the author and to object to distortions of his work.

The UK Copyright Service

It would seem that something similar would have to be done with the voices and even appearances of people around the world – yet in an age moving toward artificial intelligence, where content has been scraped without permission, the only people who can actually stop this are the ones doing the scraping.

The world of trusted humans is being diluted by untrustworthy humans.

KnowProse.com off WordPress.com, Now on Hostinger.

It’s been a while since I wrote something on the site – that was largely to do with not wanting my content scraped, and being on WordPress.com did not fill me with trust or confidence in what the company was doing.

Never mind the whole WordPress vs. WP Engine debacle, which I have not read much into because my life has sufficient drama and I do not wish to overflow with it. I did do some initial reading and quickly realized the whole thing seemed engineered.

Instead, I switched to Hostinger (referral link). It was fairly easy, since I opted to continue using WordPress for the site after shopping around a bit, though I am also working on a semi-personal project with Drupal 11 – and Hostinger’s support for that on the command line is as deprecated as its command-line PHP version. This came up when running Composer: the command-line PHP there is 8.2.19, while Composer 2 on there presently requires 8.3+, as Drupal 11 does… bleeding edge requires blood or it’s not bleeding edge, right?

The domain transfer was about the full 7 days, and I could speculate on why that is but that has no value.

The site is more plain, at least for now, and eventually there will likely be some advertising on it – but not in the way advertising has manifested itself on sites I visit. No, the site will not spam you to give you updates. No, the site will not have pop-ups that just annoy you. No, the site will not… well, you get the point.

I did consider Bluehost. Over a decade ago, I had a really bad experience with Bluehost, whose pain this site still feels – their automatic backups, at least then, did not really work on a daily level. The site went down while I was at a CARDICIS conference – I forget which one – and by the time I had unfettered access to the site again after returning home, a lot of it was gone. Bluehost may have improved since then – I certainly hope they have – and it was likely an outlier event for me, but I opted not to go with them.

This does not mean my experience should color yours, mind you. It would appear that they’re still in business, so they’re doing something right. At the time, I had a tendency to be bleeding edge with the sites that I write on and that may too have bitten me in the posterior. We are, though, creatures that remember pain even beyond rationality.

So yes, KnowProse.com is back from hiatus.

What to do about scraping for LLM learning is the only real thing left.

Linux Ain’t Hard. Just Different. Come Play.

Microsoft Recall has been making waves in privacy circles. Some are calling it a privacy nightmare, some others are calling it a privacy nightmare, and privacy experts are sounding the alarm about it1.

As usual, when Microsoft does a stupid – and in my eyes, this is a pretty big stupid – people talk about migrating to Linux. Of course, people still think that people who use Linux wear funny hats and shout incantations at monitors through their keyboards to make it work. It’s not true. Well, not that true. To date, no Linux user has summoned Cthulhu, though there are corners where there is concern about that. Those are silly corners.

Around the Fediverse, Linux users are being admonished to be on their best behavior, but I think that stereotype of bearded Linux users shouting at newbies to ‘RTFM!’ is a bit dated.

Linux has become more friendly. It’s become so friendly, in fact, that some long-term users eschew the fancy GUIs for the tried and trusty command lines – but you don’t need to.

Linux is easy, and no, you don’t have to abandon your documents, or your way of life. A few things will change, though.

You won’t need the latest hardware to run an operating system – which means more money for beer and pizza, or wine and pizza, or whatever and whatever. You can breathe new life into that old computer system – the one you don’t want leaking arsenic into your water table – and use it some more.

Linux-Curious?

If you are an academic, Robert W. Gehl has some Linux tips for you – he’s been using Linux in academia since 2008, on ‘Microsoft campuses’.

He hits some of the high notes for just about everyone, and I’ll only add that the Linux-on-a-stick and Linux User Groups are good ways to get your feet wet and help with the transition.

The trouble is you have so many options with distributions, and so many people have strong opinions on which distribution is better.

Spoiler alert: They all work. Start somewhere. Distributions are just flavors of Linux.

I suggest Ubuntu since it has oodles of support. I use different distributions for different things myself, but I’m a long-time Linux user and my preferences have settled along the lines of Debian, though I use Ubuntu as well. In time you will too, or you won’t.

Linux-on-a-stick is a great way to check your hardware to see what issues it might have with Linux as far as drivers go. Since this article is written for Windows users, I’ll point to using Rufus to create a bootable USB Ubuntu Linux stick. You can put other distributions on a stick the same way, too, so you’re not stuck with one type of Linux.

If you want human help, search your area for a local Linux User Group. If you’re not sure where to start, here are some ways to find out if there is a Linux User Group near you. Failing that, ask for help on social media platforms.

The key to moving to Linux isn’t that you don’t know the answers. It’s that sometimes you might ask the wrong questions because of what you’re used to. If you think about it, that’s true of everything.

Grab a USB stick and see if your PC will run Linux so you can put the ‘personal’ back in ‘personal computer’.

  1. You have to admit, the idea of a privacy expert making noise seems peculiar. You’d think privacy experts would be like ninjas – so private you don’t even know about them. They must be more interested in your privacy than their own. ↩︎

SLMs vs LLMs: A Great Question.

@knowprose.com who run a LLM on your computer while a SLM does almost the same but needs less performance?

Stefan Stranger

This was a great question, and I want to flesh it out some more because Small Language Models (SLMs) are less processor and memory intensive than Large Language Models (LLMs).

Small isn’t always bad, large isn’t always good. My choice to go with a LLM instead of a SLM is pretty much as I responded to Stefan, but I wanted to respond more thoroughly.

There is a good write-up on the differences between LLMs and SLMs here that is recent at the time of this writing. It’s worth taking a look at if you’ve not heard of SLMs before.
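
To put rough numbers on “less processor and memory intensive”, here’s a back-of-the-envelope sketch. This estimates only the space the weights themselves take; actual memory use also depends on quantization format, context length, and runtime overhead:

```python
# Back-of-the-envelope: approximate memory needed just to hold a model's
# weights, ignoring runtime overhead, KV cache, and so on.
def weight_memory_gb(params: float, bits_per_weight: int) -> float:
    """Parameter count times bits per weight, expressed in gigabytes."""
    return params * bits_per_weight / 8 / 1e9

# A 7B-parameter LLM at 16-bit precision vs. a 3B SLM quantized to 4 bits:
print(weight_memory_gb(7e9, 16))  # 14.0
print(weight_memory_gb(3e9, 4))   # 1.5
```

That order-of-magnitude gap is why an SLM can run comfortably on hardware that would choke on a full-precision LLM.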

Discovering My Use Cases.

I have a few use cases that I know of, one of which is writing – not having the LLM do the writing (what a terrible idea), but having it help me figure things out as I go along. I’m in what some would call Discovery, or in the elder schools of software engineering, requirements gathering. The truth is that, like most humans, I’m stumbling along to find what I actually need help with.

For example, yesterday I was deciding on the name of a character. Names are important and can be symbolic, and I tend to dive down rabbit-holes of etymology. Trying to discern the meanings of names is an arduous process, made no better by the many websites that have baby names that aren’t always accurate or culturally inclusive. My own name has different meanings in India and in old Celtic, as an example, but if you search for the name you find more of a Western bias.

Before you know it, I have not picked a name but instead am perusing something on Wikipedia. It’s a time sink.

So I kicked it around with Teslai (I let the LLM I’m fiddling with pick its own name, for giggles). It was imperfect, and in some ways it was downright terrible, but it kept me on task and I came up with a name in less than 15 minutes, in what could have easily eaten up a day of my time as I indulged my thirst for knowledge.

How often do I need to do that? Not very often, but so far, a LLM seems to be better at these tasks.

I’ve also been tossing it things I wrote for it to critique. It called me out on not using an active voice in some things, and that’s a fair assessment – but it’s also gotten things wrong when reading some of what I wrote. As an example, when it initially read “Red Dots Of Life”, it got it completely wrong – it thought the red dots were metaphors for what was important when, in fact, the red dots were about other people driving you to distraction to get what they thought was important.

Could a SLM do these things? Because they are relatively new and not trained on as many things, it’s unlikely but possible. The point is not the examples, but the way I’m exploring my own needs. In that regard – and this could be unfair to SLMs – I opted to go with more mature LLMs, at least right now, until I figure out what I need from a language model.

Maybe I will use SLMs in the future. Maybe I should be using one now. I don’t know. I’m fumbling through this because I have eclectic interests that cause eclectic needs. I don’t know what I will throw at it next, but being allegedly better trained has me rolling with LLMs for now.

So far, it seems to be working out.

In an odd way, I’m learning more about myself through the use of the language model as well. It’s not telling me anything special, but it provokes introspection. That has value. People spend so much time being told what they need by marketers that they don’t necessarily know what they could use the technology for – which is why Fabric motivated me to get into all of this.

Now, the funny thing is that the premise behind LLMs – their owners’ need to shove ever more information into them – is not something I agree with. I believe better algorithms are needed so that they can learn from less information. I’ve been correcting a LLM as a human who has not been trained on as much information as it has been, so there is a solid premise for tweaking algorithms rather than shoving more information in.

In that regard, we should be looking at SLMs more, and demanding more of them – but what do we actually need from them? The marketers will tell you what they want to sell you, and you can sing their song, or you can go explore on your own – as I am doing.

Can you do it with a SLM? Probably. I simply made the choice to use a LLM, and I believe it suits me – but that’s just an opinion, and I could be wrong and acknowledge it. Sometimes you just pick a direction and go and hope you’re going in the right general direction.

What’s right for you? I can’t tell you; that would be presumptuous. You need to explore your own needs and make as informed a decision as I have.

Installing Your Own LLM On Your Computer With Ollama.

As I wrote in the last post, there are some good reasons to install your own LLM on your computer. It’s all really simple using Ollama, which allows you to run various models of LLM on your computer.

A GPU is nice, but not required.

Apple and Linux users can simply go right over to Ollama and just follow the instructions.

For Apple it’s a download, for Linux it’s simply copying and pasting a command line. Apple users who need help should skip to the section about loading models.

For Windows users, there’s a Windows version that’s a preview at the time of this writing. You can try that out if you want, or… you can just add Linux to your machine. It’s not going to break anything and it’s pretty quick.

“OMG Linux is Hard” – no, it isn’t.

For Windows 10 (version 2004 or higher), open a Windows Command Prompt or PowerShell with administrator rights – you do this by right-clicking the icon and selecting ‘Run as administrator’. Once it’s open, type:

WSL --install

Hit enter, obviously, and Windows will set up a distro of Linux for you on your machine that you can access in the future by just typing ‘WSL’ in the command prompt/PowerShell.

You will be prompted to enter a user name, as well as a password (twice to verify).

Remember the password – you’ll need it. It’s called a ‘sudo’ password, or just the password, but knowing ‘sudo’ will allow you to impress baristas everywhere.

Once it’s done, you can run it simply by entering “WSL” on a command prompt or powershell.

Congratulations! You’re a Linux user. You may now purchase stuffed penguins to decorate your office.

Installing Ollama on Linux or WSL.

At the time of this writing, you’re one command away from running Ollama – the install command is shown on the Ollama site, with a clipboard icon next to it.

Hit the clipboard icon, paste the command onto your command line, enter your password, and it will do its job. It may take a while, but it’s more communicative than a spinning circle: you can see how much it’s done.

Windows users: if your GPU is not recognized, you may have to search for the right drivers to get it to work. Do a search for your GPU and ‘WSL’, and you should find out how to work around it.

Running Ollama.

To start off, assuming you haven’t closed that window1, you can simply type:

ollama run <insert model name here>

Where you can pick a model name from the library. Llama3 is at the top of the list, so as an example:

ollama run llama3

You’re in. You can save versions of your model amongst other things, which is great if you’re doing your own fine tuning.

If you get stuck, simply type ‘/?’ and follow the instructions.

Go forth and experiment with the models on your machine.
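
If you’d rather script against Ollama than type into its REPL, it also listens on a local REST API (default port 11434), and GET /api/tags returns the models you’ve pulled. The helper below is a rough, unofficial sketch – check Ollama’s API docs for the current response shape:

```python
# Sketch: query a locally running Ollama for its installed models via
# its REST API. Assumes the default port and the /api/tags endpoint.
import json
from urllib.request import urlopen

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local address

def model_names(tags_response: dict) -> list:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_local_models() -> list:
    """Ask a running Ollama instance which models are installed."""
    with urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        return model_names(json.load(resp))

# With Ollama running, list_local_models() might return something
# like ['llama3:latest'] - whatever you've pulled so far.
```

Nothing here phones home anywhere; the whole exchange stays on your machine, which is rather the point of the exercise.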

Just remember – it’s a model, it’s not a person, and it will make mistakes – correcting them is good, but doesn’t help unless you save your changes. It’s a good idea to save your versions with your names.

I’m presently experimenting with different models and deciding which I’ll connect to the Fabric system eventually, so that post will take longer.

  1. If you did close the window on Windows, just open a new one with administrator privileges and type WSL – you’ll be in Linux again, and can continue. ↩︎

Why I Installed AIs (LLMs) On My Local Systems.

The last few days I’ve been doing some actual experimentation, initially begun because of Daniel Miessler’s Fabric, an open source framework for using artificial intelligence to augment us lowly humans, instead of the self-lauding tech bros whose business model boils down to “move fast and break things.”

It’s hard to trust people with that sort of business model when you understand your life is potentially one of those things, and you like that particular thing.

I have generative AIs on all of my machines at home now, which was not as difficult as people might think. I’m writing this part up because of how easy it was – to impress that upon someone, I walked them through doing it in minutes over the phone on a Windows machine. I’ll write that up as my next post, since it apparently seems difficult to people.

For myself, the vision Daniel Miessler brought with his implementation, Fabric, is inspiring in its own way, though I’m not convinced that AI can make anyone a better human. I think the idea of augmenting is good, and with all the infoglut I contend with, leaning on a LLM makes sense in a world where everyone else is being sold on the idea of using one, and on how to use it.

People who wax poetic about how an AI has changed their lives in good ways are simply waxy poets, as far as I can tell.

For me, with writing and other things I do, there can be value here and there – but I want control. I also don’t want to have to risk my own ideas and thoughts by uploading even a hint of them to someone else’s system. As a software engineer, I have seen loads of data given to companies by users, and I know what can be done with it, and I have seen how flexible ethics can be when it comes to share prices.

Why Installing Your Own LLM is a Good Idea. (Pros)

There are various reasons why, if you’re going to use a LLM, it’s a good idea to have it locally.

(1) Data Privacy and Security: If you’re an individual or a business, you should look after your data and security because nobody else really does, and some profit from your data and lack of security.

(2) Control and Customization: You can fine-tune your LLM on your own data (without compromising your privacy and security). As an example, I can feed a LLM various things I’ve written and have it summarize where ideas I’ve written about connect – and even tell me if I have something published where my opinion has changed – without worrying about handing all of that information to someone else. I can tailor it myself – and that isn’t as hard as you think.

(3) Independence from subscription fees; lowered costs: The large companies will sell you as much as you can buy, and before you know it you’re stuck with subscriptions you don’t use. Also, since the technology market is full of companies that get bought out and license agreements changed, you avoid vendor lock-in.

(4) Operating offline; possible improved performance: With the LLM I’m working on, being unable to access the internet during an outage does not stop me from using it. What’s more, my prompts aren’t queued, or prioritized behind someone that pays more.

(5) Quick changes are quick changes: You can iterate faster, try something with your model, and if it doesn’t work, you can find out immediately. This is convenience, and cost-cutting.

(6) Integrate with other tools and systems: You can integrate your LLM with other stuff – as I intend to with Fabric.

(7) You’re not tied to one model. You can use different models with the same installation – and yes, there are lots of models.
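
Point (6) is where I’m headed with Fabric, and it’s less abstract than it sounds: a local Ollama exposes a REST endpoint your own tools can call. The sketch below assumes Ollama’s default port and its /api/generate endpoint, with ‘llama3’ standing in for whatever model you’ve pulled – a sketch, not a definitive integration:

```python
# Sketch: wiring a locally running Ollama model into your own tooling
# (e.g. a Fabric-style pipeline) through its REST API.
import json
from urllib.request import Request, urlopen

def generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a single, non-streaming generation."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str, url: str = "http://localhost:11434") -> str:
    """Send a prompt to a local Ollama instance and return its reply."""
    body = json.dumps(generate_payload(model, prompt)).encode()
    req = Request(f"{url}/api/generate", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (requires Ollama running locally with the model pulled):
# print(ask("llama3", "Summarize why someone might run a LLM locally."))
```

Because everything runs against localhost, the prompts and replies never leave your machine – which is pros (1), (4), and (6) in one place.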

The Cons of Using a LLM Locally.

(1) You don’t get to hear someone that sounds like Scarlett Johansson tell you about the picture you uploaded1.

(2) You’re responsible for the processing, memory and storage requirements of your LLM. This is surprisingly not as bad as you would think, but remember – backup, backup, backup.

(3) If you plan to deploy a LLM as a business model, it can get very complicated very quickly. In fact, I don’t know all the details, but that’s nowhere in my long term plans.

Deciding.

In my next post, I’ll write up how to easily install a LLM. I have one on my M1 Mac Mini, my Linux desktop and my Windows laptop. It’s amazingly easy, but going in it can seem very complicated.

What I would suggest about deciding: simply try it and see how it works for you, or simply know that it’s possible and that it will only get easier.

Oh, that quote by Diogenes at the top? No one seems to have a source. Nice thought, though a possible human hallucination.

  1. OK, that was a cheap shot, but I had to get it out of my system. ↩︎

Beyond A Widowed Voice.

By now, the news of Scarlett Johansson’s issues with OpenAI, over the voice that sounds like hers, has made the rounds. She’s well known, and regardless of one’s interests, she’s likely to pop up in various contexts. However, she’s not the first.

While different in some ways, voice actors Paul Skye Lehrman and Linnea Sage are suing Lovo for similar reasons. They were hired for what they thought was one-off voice-over work, then heard their voices saying things they had never said. To the point: they heard their voices doing work they didn’t get paid for.

The way they found out was oddly poetic.

Last summer, as they drove to a doctor’s appointment near their home in Manhattan, Paul Skye Lehrman and Linnea Sage listened to a podcast about the rise of artificial intelligence and the threat it posed to the livelihoods of writers, actors and other entertainment professionals.

The topic was particularly important to the young married couple. They made their living as voice actors, and A.I. technologies were beginning to generate voices that sounded like the real thing.

But the podcast had an unexpected twist. To underline the threat from A.I., the host conducted a lengthy interview with a talking chatbot named Poe. It sounded just like Mr. Lehrman.

“He was interviewing my voice about the dangers of A.I. and the harms it might have on the entertainment industry,” Mr. Lehrman said. “We pulled the car over and sat there in absolute disbelief, trying to figure out what just happened and what we should do.”

What Do You Do When A.I. Takes Your Voice?, Cade Metz, New York Times, May 16th, 2024.

They aren’t sex symbols like Scarlett Johansson. They weren’t the highest-paid actresses of 2018 and 2019. They aren’t seen in major films. Their problem is just as real, just as audible, but not quite as visible. Forbes covered the problems voice actors faced in October of 2023.

…Clark, who has voiced more than 100 video game characters and dozens of commercials, said she interpreted the video as a joke, but was concerned her client might see it and think she had participated in it — which could be a violation of her contract, she said.

“Not only can this get us into a lot of trouble if people think we said [these things], but it’s also, frankly, very violating to hear yourself speak when it isn’t really you,” she wrote in an email to ElevenLabs that was reviewed by Forbes. She asked the startup to take down the uploaded audio clip and prevent future cloning of her voice, but the company said it hadn’t determined that the clip was made with its technology. It said it would only take immediate action if the clip was “hate speech or defamatory,” and stated it wasn’t responsible for any violation of copyright. The company never followed up or took any action.

“It sucks that we have no personal ownership of our voices. All we can do is kind of wag our finger at the situation,” Clark told Forbes.

“Keep Your Paws Off My Voice”: Voice Actors Worry Generative AI Will Steal Their Livelihoods, Rashi Shrivastava, Forbes.com, October 9th, 2023.

As you can see, the whole issue is not new. It just became more famous because of a more famous face, and because it involves OpenAI, a company with more questions about its training data than ChatGPT can answer – so the story has been sung from rooftops.

Meanwhile, some are trying to license the voices of dead actors.

Sony recently warned AI companies about unauthorized use of the content they own, but when one’s content is necessarily public, how do you do that?

How much of what you post – from writing to pictures to voices in podcasts and family videos – can you control? Sharing it costs nothing, but it can cost individuals their futures. And when it comes to training models, these AI companies are eroding the very trust they need from those they want to sell their product to – unless they’re just enabling talentless and incapable hacks to take over jobs that talented and capable people already do.

We have more questions than answers, and the trust erodes as more and more people are impacted.

AI, Democracy, India.

India is the world’s most populous democracy, and there has been a lot going on related to religion that is well beyond the scope of this, but deserves mention because violence has been involved.

The Meta Question.

In the latest news, Meta stands accused of approving political ads on its platforms, Instagram and Facebook, that have incited violence.

This, apparently, was a test, according to The Guardian.

How this happened seems a little strange and is noteworthy1:

“…The adverts were created and submitted to Meta’s ad library – the database of all adverts on Facebook and Instagram – by India Civil Watch International (ICWI) and Ekō, a corporate accountability organisation, to test Meta’s mechanisms for detecting and blocking political content that could prove inflammatory or harmful during India’s six-week election…”

Revealed: Meta approved political ads in India that incited violence, Hannah Ellis-Petersen in Delhi, The Guardian, 20 May 2024.

It’s hard to judge the veracity of the claim based on what I dug up (see the footnote). The Guardian must have more from their sources for them to be willing to publish the piece – I have not seen this from them before – so I’ll assume good faith and see how this pans out.

Meta claims to be making efforts to minimize false information, but Meta also doesn’t have a great track record.

The Deepfake Industry of India.

Wired.com also has a story that has some investigation in it that does not relate to Meta.

“Indian Voters Are Being Bombarded With Millions of Deepfakes. Political Candidates Approve”2, by Wired.com, goes into great detail about Divyendra Singh Jadoun and how his business is doing well.

“…Across the ideological spectrum, they’re relying on AI to help them navigate the nation’s 22 official languages and thousands of regional dialects, and to deliver personalized messages in farther-flung communities. While the US recently made it illegal to use AI-generated voices for unsolicited calls, in India sanctioned deepfakes have become a $60 million business opportunity. More than 50 million AI-generated voice clone calls were made in the two months leading up to the start of the elections in April—and millions more will be made during voting, one of the country’s largest business messaging operators told WIRED.

Jadoun is the poster boy of this burgeoning industry. His firm, Polymath Synthetic Media Solutions, is one of many deepfake service providers from across India that have emerged to cater to the political class. This election season, Jadoun has delivered five AI campaigns so far, for which his company has been paid a total of $55,000. (He charges significantly less than the big political consultants—125,000 rupees [$1,500] to make a digital avatar, and 60,000 rupees [$720] for an audio clone.) He’s made deepfakes for Prem Singh Tamang, the chief minister of the Himalayan state of Sikkim, and resurrected Y. S. Rajasekhara Reddy, an iconic politician who died in a helicopter crash in 2009, to endorse his son Y. S. Jagan Mohan Reddy, currently chief minister of the state of Andhra Pradesh. Jadoun has also created AI-generated propaganda songs for several politicians, including Tamang, a local candidate for parliament, and the chief minister of the western state of Maharashtra. “He is our pride,” ran one song in Hindi about a local politician in Ajmer, with male and female voices set to a peppy tune. “He’s always been impartial.”…”

“Indian Voters Are Being Bombarded With Millions of Deepfakes. Political Candidates Approve”, Nilesh Christopher & Varsha Bansal, Wired.com, 20 May 2024.

Al Jazeera has a video on this as well.

More broadly, audio deepfakes have people genuinely believing that they were called personally by candidates. This has taken robocalling to a whole new level[3].

What we are seeing is the manipulation of opinion in a democracy through AI, and while it is happening in India now, it is certainly worth worrying about in other nations. Banning something in one country, or making it illegal, does not mean that foreign actors won’t do it where those laws have no hold.

Given India’s increasingly visible role in the world, we should be concerned; given AI’s growing role in shaping opinion in global politics, we should be very worried indeed. This is just what we see. What we don’t see is the data collected by a lot of services, how it can be used to decide who is most vulnerable to particular types of manipulation, and what that means.

We’ve built a shotgun from instructions on the Internet and have now loaded it and pointed it at the feet of our democracies.

  1. Digging into the referenced report itself (PDF), the document does not identify its authors, though it is hosted on the Eko.org web server – with no links to it from the site itself at the time of this writing. There is nothing about it on the India Civil Watch International (ICWI) website either.

    That’s pretty strange. The preceding report referenced in the article is here on LondonStory.org. Neither the ICWI nor Eko websites seem to have that either. Having worked with some NGOs in the Caribbean and Latin America, I know that they are sometimes slow to update websites, so we’ll stick a pin in it. ↩︎
  2. Likely paywalled if you’re not a Wired.com subscriber, and no quotes would do it justice. Links to references provided. ↩︎
  3. I worked for a company that was built on robocalling but moved to higher ground in telephony by doing emergency communications instead, so it is not hard for me to imagine how AI can be integrated into it. ↩︎

Google In, Google Out.

Last week, there were a lot of announcements, but not that much actually happened. And for some strange reason, Google didn’t think to use the .io ccTLD for its big annual developer event, Google I/O.

It was so full of AI that they should have called it Google AI. I looked over the announcements and the advertorials on websites announcing stuff that could almost be cool except… well, it didn’t seem that cool. In fact, Google’s AI-assisted web search already has workarounds to bypass the AI – but I have yet to see the feature in Trinidad and Tobago. Maybe it hasn’t been fully rolled out, or maybe I don’t use Google as a search engine enough to spot it.

No one I saw in the Fediverse was drooling over anything Google had at the conference. Most comments were about companies slapping AI on anything and making announcements – which is what it does seem like.

I suppose, too, that we’re all a little tired of AI announcements that really don’t say much. OpenAI, Google, everyone is trying to get mindshare to build momentum, but there are open questions about what they’re feeding learning models, along with issues of ethics and law… and for most people, knowing that they’ll have a job they can depend on seems a more pressing issue.

The companies selling generative AI like snake oil to cure all the ills of the world seem disconnected from the ills of the world, and I remember that a year ago Sundar Pichai said we’d need more lawyers.

It’s not that generative AI is bad. It’s that it really hasn’t brought anything good for most people except a new subscription, less job security, and an increase in AI content showing up all over, bogging down even Amazon.com’s book publishing.

They want us to buy more of what they’re selling even as they take what some are selling to train their models to… sell back to us.

Really, all I ever wanted from Google was a good search engine. That sentiment seems to echo across the Fediverse. As it is, they’re not as good a search engine as they used to be – I use Google only occasionally now, almost by accident.

I waited a week for something worth writing about among the announcements, and all I read about Google’s offerings was how to work around their search results. That’s telling. They want more subscribers; we want more income to afford the subscriptions. Go figure.