This fix is amazingly simple (at least on Android). My vehicle has Bluetooth, so I can take normal calls hands-free, but it has been annoying because people in Trinidad and Tobago tend to use WhatsApp to call.
Since my vehicle didn’t route WhatsApp calls through the speaker and microphone system, I just told people it didn’t work. They, of course, kept calling me on WhatsApp because deep down, they hate me.
I couldn’t find a solution anywhere, so when I finally got motivated enough, I worked it out myself.
Go to your Android settings, then to the app settings for WhatsApp. In theory, this should work on an iPhone, but maybe not. Apple may want you to buy a special cable.
Once there, select the ‘Nearby devices’ permission – bottom of the image in this post – and allow WhatsApp to connect to nearby devices.
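If you’re comfortable with a computer and a USB cable, here’s a minimal sketch of doing the same thing over adb from Python. It assumes WhatsApp’s standard package name and that, on Android 12 and later, ‘Nearby devices’ is backed by the Bluetooth runtime permissions – an assumption on my part, not something I’ve verified on every device.

```python
import subprocess

# Sketch only: on Android 12+, the "Nearby devices" permission group is
# (I believe) backed by these Bluetooth runtime permissions. Requires
# USB debugging enabled on the phone and adb on your PATH.
PACKAGE = "com.whatsapp"  # WhatsApp's standard package name
PERMISSIONS = [
    "android.permission.BLUETOOTH_CONNECT",
    "android.permission.BLUETOOTH_SCAN",
]

for perm in PERMISSIONS:
    # 'pm grant' only works for permissions the app actually declares,
    # so a failure here isn't fatal - just check the result.
    result = subprocess.run(
        ["adb", "shell", "pm", "grant", PACKAGE, perm],
        capture_output=True, text=True,
    )
    status = "granted" if result.returncode == 0 else "skipped"
    print(f"{perm}: {status}")
```

Doing it through the Settings app, as described above, is the path I actually used; the script is just the same permission grant without the tapping.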
Tada. It started working.
If it doesn’t work for you, or it works for your iPhone, or you just want to say hello, drop a comment.
We choose what we take with us into the future. It’s not always conscious, it’s not always right, but it’s what we do out of practicality – we have generated so much knowledge as a species that it’s impossible for any one person to know everything. What we have put together over the thousands of years of our existence is staggering to consider.
Societies push toward specialization in this regard, enough so that if you’re a polymath, people simply don’t believe you can deal with multiple specialties. It’s not even that a polymath is ‘smarter’. It’s largely a matter of how time is spent.
Large language models are polymaths, but since they get the ‘AI’ marketing, they get to sidestep that. You can make up ludicrous things, tell some people that ChatGPT said so, and they’ll accept it. ChatGPT and large language models have become the new ‘experts’, which I suppose is to be expected when there is a Cult of [Insert Tech Billionaire Here] where suspension of disbelief seems to be as important as in any other cult.
The difference is the value signaling. People who want to be like tech billionaires will go out of their way to defend even the most profoundly idiotic things, and that’s a bit of a problem.
The first step in liquidating a people is to erase its memory. Destroy its books, its culture, its history. Then have somebody write new books, manufacture a new culture, invent a new history. Before long that nation will begin to forget what it is and what it was… The struggle of man against power is the struggle of memory against forgetting.
Milan Kundera, “The Book of Laughter and Forgetting”1
Omission is erasing from memory. Not all of that is bad, but not all of it is great either. This was hotly debated before artificial intelligences, most publicly (that I have seen) in the United States, where statues venerating Confederate generals in public places were being challenged because… well, because slavery isn’t something that should be venerated. While there’s debate about whether the Civil War in the United States was fought over slavery or other things, one of the good things that came of it was getting rid of slavery in the United States.
Books are getting banned in schools. “Huckleberry Finn” and “To Kill A Mockingbird” have been removed from schools and libraries in some parts for similar reasons – for reminding people of how things used to be2. That becomes omission, and it even deepens divides between generations.
There is a lot of room for debate, but the debate needs to be sincere, not people shouting talking-point monologues at each other. The victors always write history, but no victor has become one by writing their own history alone. It’s by omitting someone else’s.
That’s a big part of lines and walls. It’s not just about what we include; it’s also about what we collectively omit.
And this is why the learning models of these things marketed as artificial intelligence – really more of a collective intelligence – are so important.
I had to go look up who Milan Kundera was – a very interesting person who started off writing Communist-related work because he was surrounded by Communism, much as someone born within the lines of a theocracy would be influenced to be theocratic, or someone within the lines of a democracy would likely be democratic. His later works, though, the ones he’s best known for, ‘escaped ideological definition’. ↩︎
Personally, I disagree with that because I think it’s important to understand how things used to be so that we can understand why things are the way that they are now, and why they still need to improve in ways that we’re still figuring out. ↩︎
In working on something I’m writing, I started digging in on the idiom, “Cannot see the forest for the trees”.
The first recorded use of it used the older noun ‘wood’ instead of ‘forest’:
“From him who sees no wood for trees
And yet is busie as the bees
From him that’s settled on his lees
And speaketh not without his fees”.
John Heywood, “The Proverbs of John Heywood” (1546), allegedly criticizing the Pope in the first known use of the idiom now rendered as “cannot see the forest for the trees”.
I was bending it to a particular use, and thought I’d throw it into what I was writing – but it just looks pedantic there, as in the phrase, ‘unnecessarily pedantic’.
Thus, I looked into ‘the big picture’, whose meaning I believe people of my generation understand pretty well, though it wasn’t used much prior to the 1990s.
There was nothing strange in it; it was but a panel from the big picture of life, such a one as you yourself might have traced out during those months spent at the sea-side – a very quiet panel, and I saw it principally through my window.
“A Romance of the Sea-side”, Chapter I, Chambers’s Journal of Popular Literature, Science and Arts, conducted by William and Robert Chambers, Saturday, July 19th, 1862.
These idioms encapsulate concepts that probably predate these recorded uses. The common concept could be seen as framing, or focusing at different levels – which I consider to be the same thing applied differently.
Sadly, I can’t really use this in the project, though I am using the idioms, so I thought I’d toss it up here.
I was about to write up a history of my interactions with the music industry, as far as ownership goes, over at RealityFragments.com, and I was thinking about how far back my love for music went in my soundtrack of life. This always draws me back to “The Entertainer” by Scott Joplin as a starting point.
I could have used one of the public domain images of Scott Joplin, someone I have grown to know a bit about, but they didn’t capture the spirit of the music.
I figured that I’d see what DALL-E could put together on it, and gave it a pretty challenging prompt to test its knowledge of pop culture.
As you can see, it got the spirit of things. But there’s something wrong other than the misspelling of “Entertainer”. A lot of people won’t get this because a lot of people don’t know much about Scott Joplin, and if they were to learn from this, they’d get something wrong that might upset a large segment of the world population.
I doubled down to see if this was just a meta-level mistake because of a flaw in the algorithm somewhere.
Well, what’s wrong with this? It claims to be reflecting the era and occupation of a ragtime musician, yet ragtime music came from a specific community in the United States – people now called African-Americans – in the late 19th century.
That would mean that a depiction of a ragtime musician would be more pigmented. Maybe it’s a hiccough, right? 2 in a row? Let’s go for 3.
Well, that’s 3. I imagined they’d get hip-hop right, and it seems like they did, even with a person of European descent in one.
So where did this bias come from? I’m betting that it’s the learning model. I can’t test that, but I can do a quick check with DeepAI.org.
Sure, it’s not the same starting prompt, but it’s the same general sort of prompt.
Let’s try again.
Well, there’s definitely something different. Something maybe you can figure out.
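If you want to repeat this sort of check yourself, here’s a rough sketch using OpenAI’s Python SDK. The prompt below is a stand-in for the ones I used, and you’d still have to eyeball the generated images yourself, the same way I did.

```python
# A rough sketch for repeating this kind of bias check with OpenAI's
# Python SDK (pip install openai; needs OPENAI_API_KEY set in the
# environment). The prompt is a stand-in, not the exact prompt used
# in this post.
from openai import OpenAI

client = OpenAI()
PROMPT = "A ragtime musician playing piano in a saloon, late 19th century"

# DALL-E 3 returns one image per request, so loop for several samples
# and compare the results for a consistent skew.
for i in range(3):
    result = client.images.generate(model="dall-e-3", prompt=PROMPT, n=1)
    print(f"Sample {i + 1}: {result.data[0].url}")
```

A handful of samples proves nothing statistically, but when every sample skews the same way, as mine did, it’s worth noticing.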
For some reason, ChatGPT is racebending ragtime musicians, and I have no idea why.
There’s no transparency in any of these learning models or algorithms. The majority of the algorithms wouldn’t make much sense to most people on the planet, but the learning models definitely would.
Even if we had control over the learning models, we don’t have control over what we collectively recorded over the millennia and turned into some form of digital representation. There are implicit biases in our histories, our cultures, and our Internet because of who has access to what and who shares what, and when these artificial intelligences use that information, the biases of past and present determine the biases of the future.
I’m not sure Scott Joplin would appreciate being whitewashed. As someone respected, of his pigmentation, in his period – and the son of a former slave – I suspect he would have been proud of who he became despite the biases of the time.
Anyway, this is a pretty good example of how artificial intelligence bias can impact the future when kids are doing their homework with large language models. It’s a problem that isn’t going away, and in a world that is increasingly becoming a mixing pot beyond social constructs of yesteryear, this particular example is a little disturbing.
I’m not saying it’s conscious. Most biases aren’t. It’s hard to say it doesn’t exist, though.
I’ll leave you with The Entertainer, complete with clips from 1977, where they got something pretty important right.
From Wikipedia, accessed on February 1st 2024:
Although he was penniless and disappointed at the end of his life, Joplin set the standard for ragtime compositions and played a key role in the development of ragtime music. And as a pioneer composer and performer, he helped pave the way for young black artists to reach American audiences of all races.
It seems like the least we could do is get him right in artificial intelligences.