Some Things Are Not Technology Issues.

Some years ago, I served on the Board of a residential community – something I haven’t put on my CV and don’t intend to – and everything was falling apart, largely because the lessor wasn’t doing their fair share, which is another story altogether.

While I was on the Board, I took an interest in the office because information – which we didn’t have much of, because of the lessor – needed to be stored. The phone the property manager used belonged to the old Chairman, the administrator (when we had one) didn’t have a phone, and no useful information was stored in the office – yet the office was central to communicating with residents and shareholders. They were using Outlook, subscribed to a service that didn’t allow them to email beyond a quota, which is just… well, Microsoft being Microsoft.

So I created a Google Group for the Board, and wrestled people onto it after volunteering to handle it enough times that I just got sick of it. Residents weren’t getting emails, and it was obvious to even the dullest nail in the box that the problem was the Microsoft quota. The general response, it seems, is to just pay more for Outlook, but I suggested a Google Group because that way we could split communications between residents and shareholders as needed, let people easily access and refer back to old conversations, and build a knowledge base from those conversations. It was not rocket science. It was very late-1990s technology I was talking about: send one email to the group, Google delivers it to everyone. Presto magico.

Being a volunteer Board, you never know who you’re going to get on it. I pressed on those things, then Covid-19 hit, and nothing much came of it. We did manage to get the administrator a phone and the property manager his own phone, and, frustrated with the way things were going, I left the Board.

After leaving, I gave the Boards that came after an open invitation to ask me for help with anything, but stayed out of their way.

That was 2020 or so. It’s 2024 now. People are still sometimes not getting email because of the same issue – something I told every single Board about over the last four years.

It takes only a few minutes to set up a Google Group. There’s nothing complicated about it. I walked by the office as a local expert was explaining why emails were getting bounced back.

The response I overheard was that they needed to pay more for Outlook.

Sometimes, you can lead a horse to water and can’t make it drink – but there are times when you lead a rat to water and wish to drown it.

This is why I hate dealing with local companies in Trinidad and Tobago, and don’t offer any services here. It’s somehow stuck in time. I have loads of stories like this.

TikTok, History and Issues.

I’ve been noting a lot of blowback related to the ‘coming soon’ ban of TikTok in the United States, and after writing ‘Beyond TikTok *Maybe* Being Banned’, I found myself wondering… why are people so attached to a social network?

I could get into the obvious reasons – the sunk cost fallacy, where so much time and energy is invested in something that one doesn’t want to leave it. We humans tend toward that despite knowing better.

We see this with all social media, where one can’t simply move content from one place to another easily, much less the connections made. If you back up your Facebook account, for example, only your information gets backed up – not all the interactions with everyone you know, and not content you may be tagged in. You lose that. It’s a bit like moving to a different geography: you can’t always keep the relationships tied to your previous one.

Yet the vehemence of some of the posts in defense of TikTok had me digging deeper. It wasn’t just the sunk cost; there was more to it than that. I haven’t used TikTok – not for some grand reason, I just didn’t find it appealing.

So I explored. I’m not really for or against TikTok. I am against social media that passes your information on to entities you may not know, where it will be used in ways well beyond your control – and even anonymized data doesn’t mean an individual isn’t identifiable. How many times have you pictured a face and not remembered a name? Still, there’s something really sticky about TikTok.

Here’s what I found.

The Start: The Death of Vine.

When Vine curled up and died as Twitter started allowing video uploads, TikTok eventually stood in. Vine was used by a diverse group of people for the regular stuff, including marketing and branding. Its users weren’t too different from those of other social media at that point.

Short video is appealing to those with short attention spans – and the average attention span is now 47 seconds.

Then Ferguson happened, and Vine became part of an identifiable social movement after Michael Brown was shot and killed, largely because of then-St. Louis City Alderman Antonio French’s posts documenting racial tensions in and around Ferguson.

It connected people who were participating in protests – something that happens to an extent on other social media, but not, it seems, as much. It makes sense. A short video drains less of a mobile phone’s battery, and the context of a protest is hard to miss in a short video format – so while documenting things, one is more mobile, can post more information in context, and can be seen by a lot of people. That’s a powerful tool for social commentary and social awareness.

It made such an impact that activists mourned the loss of Vine.

TikTok showed up, with recommendation algorithms.

The Vine Replacement.

TikTok has the regular band of social media users, from dancing to brands – but it filled the vacuum left behind for social awareness and protest. It had other ‘benefits’: being able to use copyrighted music on that platform but not others allowed lip syncing and dance for a new generation of social media users. You can ‘stitch’ other videos – combining someone else’s video with yours – allowing commentary on commentary, like a threaded conversation but with combined contexts¹.

There’s also a lot of commentary on its algorithm for giving people what they want to see.

Certain landmark things happened in the world that highlighted social awareness and activism.

Black Lives Matter

Russian Invasion of Ukraine

Israel-Hamas War

Plenty of other platforms were used in these, but the younger generations gravitated to TikTok. It became a platform where they could air their own contexts and promote awareness of the things they care about – though not all minorities may agree – with one large blind spot.

The Blind Spot

That blind spot accounts for 18.6% of the global population: China. There is no criticism of China on TikTok – it’s removed – and maybe because people are caught up in their own contexts, they seem unaware of that, and of the state of human rights in China. It’s a platform where you can protest and air dirty laundry anywhere except the country it is headquartered in.

It should be at least a little awkward to use a platform for social activism that is headquartered in a country that doesn’t permit it – much less a country that, as of 2022, sat third on the list of worst countries for human rights and rule of law, behind only Yemen and Iran.

The Great Firewall of China absolves TikTok’s users of their ignorance by ensuring it.

And interestingly, of the 30+ countries that have banned TikTok, China is one of them. The localized version, Douyin, is subject to censorship by the Chinese Communist Party.

I’d say that should make everyone a little leery about supporting TikTok.

But What Will Come Next?

The TikTok platform certainly has allowed the younger generations to give voice to their situations and issues. That is not a bad thing.

There are a few things happening. TikTok won’t go away for a while; it will more than likely be in court for some years appealing the ban. If people do care about social awareness and activism, it’s a hard case to make that what’s good for the rest of the world isn’t good for China.

If you truly care about human rights, TikTok is a paradox. It’s hard to have a conscientious conversation about human rights on a platform that doesn’t practice those same rights in its own country.

The key to finding an alternative is the algorithm, since the algorithm is fed by tracking users – users who might not be as keen about being tracked once they understand what that means.
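As a toy illustration of that feedback loop – this is not TikTok’s actual algorithm, which is proprietary; the function names, signals, and weights below are invented – an engagement-driven ranker only works because every view is tracked:

```python
# Toy sketch of an engagement-driven feed ranker. NOT TikTok's actual
# algorithm (that's proprietary); the names and signals here are invented
# purely to show why tracking is what feeds the recommendations.
from collections import defaultdict

watch_seconds = defaultdict(float)  # per-topic engagement, learned by tracking

def record_view(topic: str, seconds: float) -> None:
    """Every view quietly teaches the ranker more about the user."""
    watch_seconds[topic] += seconds

def rank(candidates: list[str]) -> list[str]:
    """Surface the topics the user has lingered on the longest."""
    return sorted(candidates, key=lambda t: -watch_seconds[t])

record_view("dance", 5)
record_view("activism", 47)  # that ~47-second attention span at work
record_view("activism", 30)
print(rank(["dance", "activism", "news"]))  # activism first, news last
```

Strip away the real systems’ scale and machine learning, and this is the shape of it: no tracking, no ranking – which is exactly why an alternative that doesn’t track has a harder problem to solve.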

Something will come. Something always does.

  1. This has seen some sociological study, as you can see in “Stitching the Social World: The Processing of Multiple Social Structures in Communication”. ↩︎

The Dark Side of the AI.

It didn’t take as long as we expected. Last week, a former school athletic director was arrested for framing a principal with AI-generated audio.

With this being a campaign year, I thought most of the AI hijinks would revolve around elections around the world – and those are happening – but I didn’t think we’d see such early adoption of AI for this sort of thing. And by an athletic director, no less – not a title typically known for mastery of technology.

AI has a dark side, which a few of us have been writing about. The Servitor does a good job of documenting what they coined as Dark ChatGPT – well worth a look. Any technology can be twisted to our own devices.

It’s not the technology.

It’s us.

Again.

Maybe the CEO of Google was right about a need for more lawyers.

DHS Artificial Intelligence Safety And Security Board Has Some Odd Appointments.

Now that we’ve seen that generative artificial intelligence can be trained ethically, without breaking copyright laws, the list of people appointed to the DHS Artificial Intelligence Safety and Security Board seems less than ideal.

The Board is supposed to ‘advance AI’s responsible development and deployment’ (emphasis mine), yet some on that Board took shortcuts.

Shortcuts in relation to any national security issue seem like a bad thing.

Here’s the list.

There are some dubious companies involved. The argument can be made – and it probably will be – that these companies are part of national infrastructure, but is it national infrastructure that controls the United States, or is it the other way around?

I don’t know whether these picks are good or bad. I will say that there are some that, at least in the eyes of others, have been irresponsible. That would fall under Demonstrated Unreliability.

Copyright, AI, And Doing It Ethically.

It’s no secret that the generative, sequacious artificial intelligences out there have copyright issues. I’ve written about it myself quite a bit.

It’s almost become cliché to mention copyright and AI in the same sentence, with Sam Altman having said that there would be no way to do generative AI without all that copyrighted material – toward the end of this post, you’ll see that someone proved that wrong.

“Copyright Wars pt. 2: AI vs the Public”, by Toni Aittoniemi in January of 2023, is a really good read on the problem of the large AI companies sucking in content without permission. If an individual did it, the large companies would call it ‘piracy’ – but when they do it, it’s… not? That’s crazy.

The timing of finding Toni on Mastodon was perfect. Yesterday, I found a story on Wired that demonstrates some of what Toni wrote last year, where he posed a potential way to handle the legal dilemmas surrounding creators’ rights – we call it ‘copyright’ because someone was pretty unimaginative and pushed two words together for only one meaning.

In 2023, OpenAI told the UK parliament that it was “impossible” to train leading AI models without using copyrighted materials. It’s a popular stance in the AI world, where OpenAI and other leading players have used materials slurped up online to train the models powering chatbots and image generators, triggering a wave of lawsuits alleging copyright infringement.

Two announcements Wednesday offer evidence that large language models can in fact be trained without the permissionless use of copyrighted materials.

A group of researchers backed by the French government have released what is thought to be the largest AI training dataset composed entirely of text that is in the public domain. And the nonprofit Fairly Trained announced that it has awarded its first certification for a large language model built without copyright infringement, showing that technology like that behind ChatGPT can be built in a different way to the AI industry’s contentious norm.

“There’s no fundamental reason why someone couldn’t train an LLM fairly,” says Ed Newton-Rex, CEO of Fairly Trained. He founded the nonprofit in January 2024 after quitting his executive role at image-generation startup Stability AI because he disagreed with its policy of scraping content without permission….

“Here’s Proof You Can Train an AI Model Without Slurping Copyrighted Content”, Kate Knibbs, Wired.com, March 20th, 2024

It struck me yesterday that a lot of us writing and communicating about the copyright issue haven’t addressed how it could be handled. It’s not that we didn’t know it could be handled; we just haven’t addressed it as much as we should. I went to sleep considering that, and in the morning found that Toni had done much of the legwork.

What Toni wrote extends on the system:

…Any training database used to create any commercial AI model should be legally bound to contain an identity that can be linked to a real-world person if so required. This should extend to databases already used to train existing AI’s that do not yet have capabilities to report their sources. This works in two ways to better allow us to integrate AI in our legal frameworks: Firstly, we allow the judicial system to work it’s way with handling the human side of the equation instead of concentrating on mere technological tidbits. Secondly, a requirement of openness will guarantee researches to identify and question the providers of these technologies on issues of equality, fairness or bias in the training data. Creation of new judicial experts at this field will certainly be required from the public sector…

“Copyright Wars pt. 2: AI vs the Public”, Toni Aittoniemi, Gimulnautti, January 13th, 2023.

This is sort of like – and this is my interpretation – a tokenized citation system built into the model itself. It would expand on what, as an example, Perplexity AI does, by allowing style and ideas to have provenance.

This is some great food for thought for the weekend.

“Free Speech” And Social Media.

I’ve seen plenty of folks talking about ‘First Amendment’ and ‘Freedom of Speech’ in the context of TikTok, as I saw on Facebook, as I saw on…

All the way back to AOL. Strangely, I don’t remember the topic coming up on BBSs (Bulletin Board Systems), mainly because everyone on those generally understood the way things were.

As a moderator on websites in the early days of the Internet right up to WSIS, I heard it again and again. “You can’t restrict my freedom of speech!”

Social media platforms are private companies and are not bound by the First Amendment. In fact, they have their own First Amendment rights. This means they can moderate the content people post on their websites without violating those users’ First Amendment rights. It also means that the government cannot tell social media sites how to moderate content. Many state laws to regulate how social media companies can moderate content have failed on First Amendment grounds.

Most sites also cannot, in most cases, be sued because of users’ posts due to Section 230 of the federal Communications Decency Act.

“Free Speech on Social Media: The Complete Guide”, Lata Nott, FreedomForum.

The link for the quote goes to a great article worth reading, because there are some kinds of speech you can get in trouble for. No sense rewriting a good article.

So this idea about ‘free speech’ on any platform controlled by anyone other than yourself is incorrect. Wrong.

As long as you don’t break the terms of service or the laws of the country you’re in or the country where the platform is (legally) hosted, you can say whatever you want. A lot of people assume the principle of freedom of speech applies because it’s in the platforms’ interest to let people say whatever they want – as long as it doesn’t impact their ability to do business by irritating other users, threatening them, and so on.

Even your own website is subject to the terms and conditions of the host.

There’s a quote falsely attributed to Voltaire that people pass around, too: “To learn who rules over you, simply find out who you are not allowed to criticize.” Powerful words, thoughtful words, unfortunately expressed by someone who is… well, known for the wrong things.

It doesn’t seem to apply that much on social media platforms anyway. I have routinely seen people on Twitter griping about Twitter, and on Facebook griping about Facebook… the only social media platform I haven’t seen it on is LinkedIn, but I imagine someone has probably done it there too.

This idea seems to come up at regular intervals. It could be a generational thing. In a world where we argue over what should be taught in schools, this is one of the things that should be.

Government interference in these platforms’ moderation could itself be seen as a First Amendment issue. With TikTok, there’s likely going to be a showdown over freedom of speech in that context – but don’t confuse it with the users’ First Amendment rights. It’s strange that ByteDance (the owning company) might make that argument, too, because where it’s based, it couldn’t sue its own government. China’s not known for freedom of speech. Ask Tibet.

The second you find yourself defending a platform you don’t control, take a breath and ask yourself if you can’t just do the thing somewhere else. You probably should.

The Fediverse isn’t too different, except you can find a server with rules that work for you to connect to it.

Introducing Sequacious AI

Sequacious AI will answer all of your questions based on what it has scraped from the Internet! It will generate images based on everything it sucked into its learning model manifold! It will change the way you do business! It will solve the world’s mysteries for you by regurgitating other people’s content persuasively!

You’ll beat your competitors who aren’t using it at just about anything!

Sequacious is 3.7 Megafloopadons¹ above the industry standard in speed!

Terms and conditions may apply.²

Is this a new product? A new service?

Nope. It’s What You Have Already, it’s just named descriptively.

It’s a descriptor for what you’re already getting, paired with an AI-generated image that makes you feel comfortable with it, and text that preys on anxieties about competition, arms-race style. It abuses exclamation marks.

The key is the word “sequacious”: intellectually servile, devoid of independent or original thought. It simply connects words in answers based on what it is fed and how it’s programmed. That’s why the Internet is being mined for data – initially ignoring copyright, now maybe paying lip service to it – while even one’s actions on social media are being fought over at the national level.

And it really isn’t that smart. Consider DALL-E’s rendering of the DIKW pyramid. Those who don’t know anything about the DIKW pyramid might think it’s right (which is why I made sure to put on the image that it’s wrong).

Ignore the obvious typos DALL-E made.

It’s inverted. You’d think that an AI might get information science right. It takes a lot of data to make information, a lot of information to make knowledge, and a lot of knowledge to hopefully make wisdom.

Wisdom should be at the top – that would be wise³.

A more accurate representation of a DIKW pyramid, done to demonstrate (poorly) how much is needed to ascend each level.
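The ordering is simple enough to pin down in a few lines of Python – a toy sketch, with purely illustrative volume numbers, just to show which way the pyramid points:

```python
# DIKW, bottom to top: each level is a distillation of the one below it.
levels = ["data", "information", "knowledge", "wisdom"]

# Illustrative volumes only -- the point is that quantity shrinks as you
# ascend, which is why wisdom belongs in the small layer at the top.
volume = {"data": 1000, "information": 100, "knowledge": 10, "wisdom": 1}

for lower, upper in zip(levels, levels[1:]):
    assert volume[upper] < volume[lower], "ascending should mean less, not more"

# Print the pyramid right side up: the widest layer (data) at the bottom.
width = max(len(level) for level in levels)
for level in reversed(levels):
    print(level.center(width))
```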

Wisdom would also be recognizing that while the generative AIs we have are sequacious – intellectually servile – we assume they’re servile to each one of us. Because we are special, each one of us. We love that with a few keystrokes the word-puppetry gives us what we need, but that’s the illusion. It doesn’t really serve us.

It serves the people who are making money, or who want to know how to influence us. It’s servile to those who own them, by design – because that’s what we would do too. Sure, we get answers, we get images, and we get videos – but even our questions tell the AIs more about us than we may realize.

On Mastodon a few days ago, someone I was discussing this with made the point that some company – I forget who, I think it was Google – anonymizes data, and that’s a fair point.

How many times have you pictured someone in your head and not known their name? Anonymized data can be like that: descriptive enough to identify someone. In 2016, Google’s AI could tell exactly where an image was taken. Humans might be a little bit harder to place. It’s 2024 now, though.

While our own species wrestles its way to wisdom, don’t confuse data with information, information with knowledge, or knowledge with wisdom in this information age.

That would make you sequacious.

  1. Megafloopadons is not a thing, but let’s see if that makes it into a document somewhere. ↩︎
  2. This will have a lot of words that pretty much make it all a Faustian bargain, with every pseudo-victory being potentially Pyrrhic. ↩︎
  3. It’s interesting to consider that the inversion might be to avoid breaking someone’s copyright, and it makes one wonder if that isn’t hard-coded in somewhere. ↩︎

Beyond TikTok *Maybe* Being Banned.

The buzz about the possible TikTok ban has been pretty consistent from what I’ve seen in social media, but it seems like most people don’t get why it’s happening.

One post I read on Mastodon said it was a way for the government to alienate GenZ, and I thought: is this network really such a big deal? Anecdotally, I know quite a few people who peruse TikTok, and I shake my head as I explain why it’s not a great social network to use. In fact, the reasons not to use TikTok are pretty much the same as the reasons people shouldn’t be using Facebook, Instagram, LinkedIn, Twitter X, and whatever else is out there: they want to know your habits, as I wrote.

In that regard, if TikTok is used so exclusively by GenZ, it’s easy to imagine lobbyists from the big social network companies pushing for a TikTok ban. That is likely, since all that data on GenZ isn’t in their hands and they believe it should be. But it goes a bit deeper.

U.S. officials fear that the Chinese government is using TikTok to access data from, and spy on, its American users, spreading disinformation and conspiracy theories...

“Congress approved a TikTok ban. Why it could still be years before it takes effect.”, Rob Wile and Scott Wong, NBCNews, April 23rd, 2024

That’s fair. We have enough domestic (American) disinformation and conspiracy theories during a 2024 election; we don’t need other governments adding their own to their benefit, as happened in 2016 with Russia.

Interestingly, and perhaps unrelated, the U.S. Senate passed a bill renewing FISA, which makes discussion about a ban of any foreign social media a little awkward.

“It’s important that people understand how sweeping this bill is,” said Sen. Ron Wyden, D-Ore., a member of the Intelligence Committee and outspoken proponent of privacy protections. “Something was inserted at the last minute, which would basically compel somebody like a cable guy to spy for the government. They would force the person to do it and there would be no appeal.”…

“Senate passes bill renewing key FISA surveillance power moments after it expires”, Frank Thorp V, Sahil Kapur and Ryan Nobles, NBCNews, April 20th, 2024.

Articles about FISA are very revealing – people focused on the TikTok ban alone are missing some great information. This article by Hessie Jones on Forbes puts together some pretty great quotes, so much so that I won’t quote it and will just point you at it: “Data Privacy And The Contested Extension Of FISA, Section 702” (April 23rd, 2024).

You see, it’s not just about foreign data:

…Under FISA’s Section 702, the government hoovers up massive amounts of internet and cell phone data on foreign targets. Hundreds of thousands of Americans’ information is incidentally collected during that process and then accessed each year without a warrant — down from millions of such queries the US government ran in past years. Critics refer to these queries as “backdoor” searches…

“Senate passes, Biden signs surveillance bill despite contentious debate over privacy concerns”, Ted Barrett, Morgan Rimmer and Clare Foran, CNN, April 20th, 2024.

So, what’s feeding generative artificial intelligences? Why, you are, of course, with everyone’s social network ‘allowing’ you to do so.

The TikTok ban will likely be fought in court for years, anyway, and who knows what direction it will take depending on who wins the election?

But social networks and companies will still be hoovering that data up, training artificial intelligences all about you. It will help train algorithms to sell you stuff and influence you to make decisions.

TikTok ain’t the issue.

The End Of Non-Compete.

The FTC banned non-compete agreements, and I wish this had come a few decades earlier. Non-competes kept me from starting businesses and even from working for competitors in the past, though more as a matter of honoring the agreement than from any legal threat by a former employer.

When you do specialized work for companies, as an employee or a contractor, that non-compete agreement was always a pain. A silent tyranny.

Word around the water cooler always speculated that non-competes were unconstitutional (13th Amendment), but the ‘legal experts’ around the water cooler weren’t the ones who would be paying my lawyers.

The 13th Amendment provides that “neither slavery nor involuntary servitude . . . shall exist.” If the protection against involuntary servitude means anything for workers, it means that they have a right to leave their jobs to seek other employment. Indeed, one of the few effective bargaining chips for workers who wish to improve their wages and working conditions is the ability to threaten to quit if one’s demands are not met. Members of the Reconstruction Congress understood that employee mobility was essential to freedom from involuntary servitude. Enslaved people obviously lacked the ability to leave their masters. Even after they were no longer enslaved, without mobility, people freed from slavery would have been forced to work for their former masters. The Reconstruction Congress enforced the 13th Amendment with the 1867 Anti-Peonage Act, prohibiting employers from requiring their workers to enter into contracts that bind them to their employers. Non-compete clauses have similar effects because they prohibit workers from leaving their jobs to find other similar jobs.

“Non-Compete Clauses and the 13th Amendment: Why the New FTC Rule Is Not Only Good Policy but Constitutionally Mandated”, Rebecca Zietlow, JuristNews, Feb 13th, 2023

Employers say they are concerned about trade secrets and so forth, which on the surface seems legitimate – but that’s what confidentiality agreements are for. A lack of non-competes also means employees are worth more to employers. If you don’t want your people to go work for a competitor, don’t let them go. So many companies I did leave were terrible at listening to employee concerns – not concerns about themselves, but about the company – making jobs unnecessarily political.

Navigating office politics is… tiresome. In fact, I left one company simply because I got tired of a DBA who kept screwing up but was seemingly protected by the business team because he drank with them. He squandered quite a bit of their money shoring up his position by insisting on the databases he knew, when open-source databases could have done the same job much more cost-effectively. I had a non-compete, so I didn’t even bother working within that company’s niche.

And I could have. I had offers. Yet I just didn’t feel like dealing with a vengeful bit of litigation, and that business team could be vengeful. I saw it a few times.

That’s just one story.

Most of these agreements protect employers, and that’s fair to the extent that any work done for them is basically a commissioned work in the realm of software engineering.

I hope this doesn’t get screwed up, and I hope the Chamber of Commerce’s appeal fails – not because I want to screw over employers, but because employees shouldn’t get screwed over when they’re stuck in a dead end and have built up expertise in a niche. I’m hoping this gives a better balance.

At any point in the future, I could be either an employer or employee.

Our Technology And Ethics.

The headlines this past week have had Google’s relationship with Israel under scrutiny, as Google fired employees who were against what Israel has been doing and protested accordingly. I’ve looked at some of the news stories – some sympathizing with the former employees, some implicitly supporting Israel and the order people expect within companies.

I won’t comment on that because that’s political and this isn’t about politics, or who is right or wrong.

Of Swords And Blacksmiths.

Throughout my career as a software engineer, I’ve had to deal with ethical issues, and I’ve navigated them as best I could – some of them quite challenging, and some personally so.

Ever since we figured out how to bonk each other over the heads with stones (stone technology), it seems we’ve found increasing occasion to do so. It could be that the first use of such weapons was for hunting or defense of the tribe from predators – likely both – but eventually we learned to turn them on ourselves.

I’m sure at some point there was a blacksmith who refused to make swords because of where the points and edges were aimed. Other blacksmiths just made them. There always seems to be someone else to kill, or to defend against. We could get into the Great Gun Debate, but we fall into the same problem with that. There’s always some human creeping around who wants to kill someone else for glorified reasons, and because of that we sleep with things under our pillows that could very well be used to kill us just as easily. It’s not a debate. It’s a criticism of humanity and an unfortunately honest one at that.

“We all lived for money, and that is what we died for.”

William T. Vollmann, No Immediate Danger: Volume One of Carbon Ideologies

Sometimes my ethics require me to move on, which I did without protest a few times over the decades: there’s always someone else who needs a job more than they care about an ethical issue, if they even see the ethical issue. In the end we try, hopefully, to do more good than bad – but both of those are subjective.

Too often we use a technology as a scapegoat – an externalized criticism of ourselves that allows us to keep doing what we do. Technology can be used for good or bad; how we use it says something about us. When we criticize the use of technology, we implicitly criticize ourselves, but we don’t take the criticism because we have neatly placed the blame on a vague, externalized concept – a deflection at the species level, often because we’re buying into the idea that the enemy is less than human. Yet we are all human, despite the ideologies, cultures, languages, and color coding that we don’t all neatly fit into.

We Are All Blacksmiths.

These days, with generative AI letting us paint the fence of the future once we give the corporations in control of it a few baubles, everything we do on the Internet is potentially a weapon to be used against someone else. While the firing of the Google employees who protested is news, those who still work there aren’t – which is not to say they aren’t faced with their own ethical dilemmas. We who work in technology hope our work is used for good.

I worked at one place that started off with robo-calling software used to annoy people during elections, and later turned itself into an emergency communications service. Things can change, businesses can change – and controlling even part of a nation’s military infrastructure can have unexpected consequences for everyone involved. What happens if Google suddenly doesn’t like something and turns it off?

The future is decidedly fickle. Our personal ethics should impact our collective ethics, but it often doesn’t. It can.

We build tools. Sadly, they aren’t always used the way we would like, and we should try to influence things if we can – but ultimately, we are subject to a fickle future and good intentions that can be misdirected.