Google In, Google Out.

Last week, there were a lot of announcements, but really not that much happened. And for some strange reason, Google didn’t think to use the .io ccTLD for their big annual developer event, Google I/O.

It was so full of AI that they should have called it Google AI. I looked over the announcements, the advertorials on websites announcing stuff that could almost be cool except… well, it didn’t seem that cool. In fact, Google’s AI-assisted web search already has workarounds for bypassing the AI – but I have yet to see it in Trinidad and Tobago. Maybe it hasn’t been fully rolled out, or maybe I don’t use Google as a search engine enough to spot it.

No one I saw in the Fediverse was drooling over anything that Google had at the conference. Most comments were about companies slapping AI onto anything just to make announcements, which is what it does look like.

I suppose, too, that we’re all a little bit tired of AI announcements that really don’t say that much. OpenAI, Google, everyone is trying to get mindshare and build momentum, but there are open questions about what they’re feeding their learning models, and issues with ethics and law… and for most people, knowing that they’ll have a job they can depend on – more than they can depend on one today – seems like the more pressing issue.

The companies selling generative AI like snake oil to cure all the ills of the world seem disconnected from the ills of the world, and I’ll remember that a year ago Sundar Pichai said we’d need more lawyers.

It’s not that generative AI is bad. It’s that it really hasn’t brought anything good for most people except a new subscription, less job security, and an increase in AI content showing up all over, bogging down even Amazon.com’s book publishing.

They want us to buy more of what they’re selling even as they take what some are selling to train their models to… sell back to us.

Really, all I ever wanted from Google was a good search engine. That sentiment seems to echo across the Fediverse. As it is, they’re not as good a search engine as they used to be – I use Google only occasionally, almost by accident.

I waited a week for something worth writing about from the announcements, and all I read about Google’s stuff was how to work around their search results. That’s telling. They want more subscribers; we want more income to afford the subscriptions. Go figure.

From Inputs to The Big Picture: An AI Roundup

This started off as a baseline post regarding generative artificial intelligence and its aspects, and it grew fairly long because even as I was writing it, information was coming out. It’s my intention to do a ’roundup’ like this highlighting different focuses as needed. Every bit of it is connected, but in social media postings things tend to be written about in silos. I’m attempting to integrate them, since the larger implications are hidden in these details, and will try to stay on top of it as things progress.

It’s long enough that it could have been several posts, but I wanted it all together at least once.

No AI was used in the writing, though some images have been generated by AI.

The two versions of artificial intelligence on the table right now – the marketed and the reality – have various problems that make it seem like we’re wrestling a mating orgy of cephalopods.

The marketing aspect is a constant distraction, feeding us what helps with stock prices and goodwill toward those implementing the generative AIs, while the real aspect of these generative AIs is not really being addressed in a cohesive way.

To simplify, this post breaks things down into the Input, the Output, and the impacts on the ecosystem the generative AIs work in.

The Input.

There’s a lot that goes into these systems other than money and water. There’s the information used for the learning models, the hardware needed, and the algorithms used.

The Training Data.

The focus so far has been on what goes into their training data, and that has been an issue, including lawsuits and, less obviously, trust in the companies involved.

…The race to lead A.I. has become a desperate hunt for the digital data needed to advance the technology. To obtain that data, tech companies including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, according to an examination by The New York Times…

“How Tech Giants Cut Corners to Harvest Data for A.I.”, Cade Metz, Cecilia Kang, Sheera Frenkel, Stuart A. Thompson and Nico Grant, New York Times, April 6, 2024 1

Of note, too, is that Google has been indexing AI-generated books. Training on such output is what is called ‘synthetic data’, something that has been warned against, yet it is something companies are planning for or even doing already, consciously and unconsciously.

While some of these actions are of questionable legality, their ethics are, to some, not so questionable – hence the revolt mentioned last year against AI companies using content without permission. It’s of questionable effect, because no one seems to have insight into what the training data consists of, and no one seems to be auditing it.

There’s a need for that audit, if only to allow for trust.

…Industry and audit leaders must break from the pack and embrace the emerging skills needed for AI oversight. Those that fail to address AI’s cascading advancements, flaws, and complexities of design will likely find their organizations facing legal, regulatory, and investor scrutiny for a failure to anticipate and address advanced data-driven controls and guidelines.

“Auditing AI: The emerging battlefield of transparency and assessment”, Mark Dangelo, Thomson Reuters, 25 Oct 2023.

While everyone is hunting down data, no one seems to be seriously working on oversight and audits, at least in a public way, though the United States is pushing for global regulations on artificial intelligence at the UN. The status of that doesn’t seem to have been updated, even as artificial intelligence is being used to select targets in at least two wars right now (Ukraine and Gaza).

There’s an imbalance here that needs to be addressed. It would be sensible to have external auditing of the learning models’ data and sources, as well as the algorithms involved – and, to get a little ahead, of the output too. Of course, these sorts of things should be done with trading on stock markets as well, though that doesn’t seem to have made much headway in all the time it has been happening either.

Some websites are trying to block AI crawlers, and it is an ongoing process. Blocking them requires knowing who they are, and it doesn’t guarantee that bad actors won’t stop by anyway.

There is a new bill being pressed in the United States, the Generative AI Copyright Disclosure Act, that is worth keeping an eye on:

“…The California Democratic congressman Adam Schiff introduced the bill, the Generative AI Copyright Disclosure Act, which would require that AI companies submit any copyrighted works in their training datasets to the Register of Copyrights before releasing new generative AI systems, which create text, images, music or video in response to users’ prompts. The bill would need companies to file such documents at least 30 days before publicly debuting their AI tools, or face a financial penalty. Such datasets encompass billions of lines of text and images or millions of hours of music and movies…”

“New bill would force AI companies to reveal use of copyrighted art”, Nick Robins-Early, TheGuardian.com, April 9th, 2024.

Given how much information is used by these companies already from Web 2.0 forward, through social media websites such as Facebook and Instagram (Meta), Twitter, and even search engines and advertising tracking, it’s pretty obvious that this would be in the training data as well.

The Algorithms.

The algorithms for generative AI are pretty much trade secrets at this point, but one has to wonder why so much data is needed to feed the training models when better algorithms could require less. Consider that a well-read person can answer some questions, even as a layperson, with a smaller carbon footprint. We have no insight into the algorithms either, which makes it seem as though these companies are simply throwing more hardware and data at the problem rather than being more efficient with the data and hardware they already took.

There’s not much news about that, and it’s unlikely that we’ll see any. It does seem like fuzzy logic is playing a role, but it’s difficult to say to what extent, and given the nature of fuzzy logic, it’s hard to say whether its implementation is as good as it should be.

The Hardware.

Generative AI has brought about an AI chip race between Microsoft, Meta, Google, and Nvidia, which leaves smaller companies that can’t afford to compete in that arena at a disadvantage so great that competing could be seen as impossible, at least at present.

The future holds quantum computing, which could make all of the present efforts obsolete, but no one seems interested in waiting around for that to happen. Instead, it’s full speed ahead with NVIDIA presently dominating the market for hardware for these AI companies.

The Output.

One of the larger topics that seems to have faded is what some called ‘hallucinations’ by generative AI. Strategic deception was also very prominent for a short period.

There is criticism that the algorithms are making the spread of false information faster, and the US Department of Justice is stepping up efforts to go after the misuse of generative AI. This is dangerous ground, since algorithms are being sent out to hunt the products of other algorithms, and the crossfire between them doesn’t care too much about civilians.2

Education itself has been disrupted as students use generative AI. It is being portrayed as an overall good, which may simply be an acceptance that it’s not going away. It’s interesting to consider that the AI companies have taken more content than students could possibly get or afford in the educational system, which is something worth exploring.

Given that ChatGPT is presently 82% more persuasive than humans, likely because it has been trained on persuasive works (Input; Training Data), and since most content on the internet is marketing either products, services or ideas, that was predictable. While it’s hard to say how much content being put into training data feeds on our confirmation biases, it’s fair to say that at least some of it is. Then there are the other biases that the training data inherits through omission or selective writing of history.

There are a lot of problems, clearly, and much of it can be traced back to the training data, which even on a good day is as imperfect as our own imperfections; it can magnify, distort, or even be consciously influenced by good or bad actors.

And that’s what leads us to the Big Picture.

The Big Picture

…For the past year, a political fight has been raging around the world, mostly in the shadows, over how — and whether — to control AI. This new digital Great Game is a long way from over. Whoever wins will cement their dominance over Western rules for an era-defining technology. Once these rules are set, they will be almost impossible to rewrite…

“Inside the shadowy global battle to tame the world’s most dangerous technology”, Mark Scott, Gian Volpicelli, Mohar Chatterjee, Vincent Manancourt, Clothilde Goujard and Brendan Bordelon, Politico.com, March 26th, 2024

What most people don’t realize is that the ‘game’ includes social media and the information it provides for training models, such as what is happening with TikTok in the United States now. There is a deeper battle, and just perusing content on social networks gives data to those building training models. Even WordPress.com, where this site is presently hosted, is selling data, though there is a way to unvolunteer one’s self.

Even the Fediverse is open to data being pulled for training models.

All of this, combined with a persuasiveness of generative AI that has given psychology pause, has democracies concerned about its influence. A recent example: Grok, Twitter X’s AI for paid subscribers, fell victim to what was clearly satire and caused a panic – which should also have us wondering about how we view intelligence.

…The headline available to Grok subscribers on Monday read, “Sun’s Odd Behavior: Experts Baffled.” And it went on to explain that the sun had been, “behaving unusually, sparking widespread concern and confusion among the general public.”…

“Elon Musk’s Grok Creates Bizarre Fake News About the Solar Eclipse Thanks to Jokes on X”, Matt Novak, Gizmodo, 8 April 2024

Of course, some levity is involved in that one, whereas Grok posting that Iran had struck Tel Aviv (Israel) with missiles seems dangerous, particularly when posted to the front page of Twitter X. It shows the dangers of AI-driven fake news, deepening concerns related to social media and AI, and it should have us asking why billionaires involved in artificial intelligence wield the influence that they do. How much of that influence is generated? We have an idea of how much is lobbied for.

Meanwhile, Facebook has been spamming users and has been restricting accounts without demonstrating a cause. If there were a video tape in a Blockbuster on this, it would be titled, “Algorithms Gone Wild!”.

Journalism is also impacted by AI, though real journalists tend to be rigorous about their sources. Real newsrooms have rules, and while we don’t have that much insight into how AI is being used in newsrooms, it stands to reason that if a newsroom is to be a trusted source, it will go out of its way to make sure that it is: newsrooms have a vested interest in getting things right. This has not stopped some websites parading as trusted sources from disseminating untrustworthy information, because even in Web 2.0, when the world had an opportunity to discuss such things at the World Summit on the Information Society, the country with the largest web presence did not participate much, if at all, at a government level.

Then we have the thing that concerns the most people: their lives. Jon Stewart even did a Daily Show segment on it, which is worth watching, because people are worried, with good reason, about generative AI taking their jobs. Even as the Davids of AI3 square off for your market share, layoffs have been happening in tech as companies reposition for AI.

Meanwhile, AI is also apparently being used as a cover for some outsourcing:

Your automated cashier isn’t an AI, just someone in India. Amazon made headlines this week for rolling back its “Just Walk Out” checkout system, where customers could simply grab their in-store purchases and leave while a “generative AI” tallied up their receipt. As reported by The Information, however, the system wasn’t as automated as it seemed. Amazon merely relied on Indian workers reviewing store surveillance camera footage to produce an itemized list of purchases. Instead of saving money on cashiers or training better systems, costs escalated and the promise of a fully technical solution was even further away…

“Don’t Be Fooled: Much ‘AI’ is Just Outsourcing, Redux”, Janet Vertesi, TechPolicy.com, Apr 4, 2024

Maybe AI is creating jobs in India by proxy. It’s easy to blame problems on AI, too, which is a larger problem because the world often looks for something to blame and having an automated scapegoat certainly muddies the waters.

And the waters of The Big Picture of AI are muddied indeed – perhaps partly by design. After all, those involved are making money, and they now have even better tools to influence markets, populations, and you.

In a world that seems to be running a deficit when it comes to trust, the tools we’re creating seem to be increasing rather than decreasing that deficit at an exponential pace.

  1. The full article at the New York Times is worth expending one of your free articles, if you’re not a subscriber. It gets into a lot of specifics, and is really a treasure chest of a snapshot of what companies such as Google, Meta and OpenAI have been up to and have released as plans so far. ↩︎
  2. That’s not just a metaphor, as the Israeli use of Lavender (AI) has been outed recently. ↩︎
  3. Not the Goliaths. David was the one with newer technology: The sling. ↩︎

So. Many. Layoffs.

I’ve been looking at getting back into the ring of software engineering, but it doesn’t seem like a great time to do it.

When Google was laying off workers, I shook my head a bit. It turns out that Google spent $800 million on layoffs just this month. Just this month!

By comparison, Google spent $2.1 billion on layoff expenses for more than 12,000 employees over the course of 2023. Other Google employees only knew that people had been dismissed when emails to them bounced back in February of last year.
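For rough scale – taking the reported figures at face value, and noting they bundle severance and related charges – the 2023 number works out to something like:

$2.1 billion ÷ 12,000+ employees ≈ $175,000 per employee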

With so many layoffs, hopefully they’re getting better at it. Well, maybe not. Google employees have been told more layoffs are coming this year.

I imagine that there are some pretty high quality resumes floating around. As far as the tech field goes, Google is probably considered top tier, and landing a position against someone with Google on their resume is going to be tough.

There’s a problem with that, though. More than 25,000 tech workers from 100 companies got the axe in the first few weeks of 2024. Meta, Amazon, Microsoft, Google, TikTok and Salesforce are included in that… and the Microsoft numbers may account for the Blizzard/Activision layoffs that happened this past week, sadly.

Blizzard was one of those dream jobs I had as a significantly younger developer way back when. They were often late on delivery for a new game, but it was pretty much worth it. I still play Starcraft II.

It’s become an employer’s job market – maybe it was before, but it’s definitely more so now, in an era when artificial intelligence may be becoming more attractive to companies for software development, as well as other things. For all we know, they may have consulted artificial intelligence on some of the layoffs. It wouldn’t be the first time that happened, though that was in Russia.

I can’t imagine that Google, Microsoft, Meta and Amazon aren’t using big data and AI for this, at least behind the scenes, but it’s probably not being explained because of the blowback that might cause. ‘Fired by AI’ is not something that people would like to see.

When tech companies axe workers, Wall Street rewards them, so stock prices go up – and there are more unemployed technology folk in a period when AI tools are making so many types of productivity easier. Maybe too much easier.

This reminds me so much of the 1990s. The good news is that tech survived the 1990s despite the post-merger layoffs.

Of course, the correction on the NPR article (at the bottom) is something I wish I had caught earlier. “Nearly 25,000 tech workers were laid in the first weeks of 2024. Why is that?” would definitely be an article worth reading.

Beyond the AMIE-Better-Than-Doctors posts.

As a former Navy Corpsman, I find it hard not to be at least a bit excited about Google’s AMIE, which the Google Research Blog announced on Friday, January 12th. Posts on social media flared like a diagnosed case of hemorrhoids to my senior software engineer self.

I dug in and researched.

The reality is that it’s not as much of an advance as some posts and titles may have people believing. Doctors aren’t going to be replaced anytime soon, particularly since the paper’s conclusion was very realistic.

The utility of medical AI systems could be greatly improved if they are better able to interact conversationally, anchoring on large-scale medical knowledge while communicating with appropriate levels of empathy and trust. This research demonstrates the significant potential capabilities of LLM based AI systems for settings involving clinical history-taking and diagnostic dialogue. The performance of AMIE in simulated consultations represents a milestone for the field, as it was assessed along an evaluation framework that considered multiple clinically-relevant axes for conversational diagnostic medical AI. However, the results should be interpreted with appropriate caution. Translating from this limited scope of experimental simulated history-taking and diagnostic dialogue, towards real-world tools for people and those who provide care for them, requires significant additional research and development to ensure the safety, reliability, fairness, efficacy, and privacy of the technology. If successful, we believe AI systems such as AMIE can be at the core of next generation learning health systems that help scale world class healthcare to everyone.

“Towards Conversational Diagnostic AI” (PDF), Conclusion, many authors (see paper), Google Research and Google DeepMind, 11 Jan 2024.

In essence, this is a start, and pretty promising given it’s only through a text chat application. Clinicians – real doctors – who took part in the study were at a disadvantage, because they would normally have a conversation with the patient.

As I quipped on social media with a friend who is a doctor, if the patient is unresponsive, the best AMIE can do is repeat itself in all caps:

“HEY! ARE YOU UNCONSCIOUS? DID YOU JUST LEAVE? COME BACK! YOU CAN’T DIE UNLESS I DIAGNOSE YOU!”

In that way, the accuracy comparison – 91.3% for AMIE versus 82.5% for physicians – should be taken with Dead Sea levels of salt. Yes, the AI beat human doctors by nearly nine percentage points, but only when we tied a doctor’s human experience behind their back.

Interestingly, doctors sometimes aren’t the ones who take the patient histories, either. Sometimes it’s nurses; in the Navy it was often Corpsmen. Often, when a doctor walked into the room to see a patient, they already had SOAP notes to work from, verify, and add on to.

The take from Psychology Today, though, is interesting, pointing out that AI and LLMs are charting a new course in goal-oriented patient dialogues. However, even that article seemed to gloss over the fact that this was all done in text chat when it pointed out that, in terms of conversation quality, AMIE scored 4.7 out of 5 while physicians averaged 3.9.

There is a very human element to medicine which involves evaluating a patient by looking at and listening to them. In my experience as a Navy Corpsman taking medical histories for the doctors, patients can be tricky and unfocused, particularly when in pain. Evaluation often leans more on what one observes than on what the patient says, particularly in an emergency setting. I’ve seen good doctors work magic with patient histories, ordering tests based not on what the patient told them but on what they observed, ruling things out diagnostically.

Factor in what I consider a commodification of medicine in my lifetime – doctors can be time-constrained, pressed to see more patients per unit of time – and that certainly doesn’t help things; that’s a human-induced error when it crops up. Given the way the study was done, I don’t think it was much of a factor here, but it’s worth considering.

When we go to the doctor as patients – sitting with the doctor in the uncomfortable uniform of the patient, on an examination table designed to draw all the heat from your body through your buttocks – we tend to think we’re the only person the doctor is dealing with. That’s rarely the case.

I do think we’re missing the boat on this one, though, because one of the best ways to pull artificial intelligence into medicine would be checking patient charts, which would be a great exercise of what a large language model (LLM) is good at: evaluating text and information and coming up with a diagnosis. Imagine an artificial intelligence evaluating charts and lab tests when they come back, then alerting doctors when necessary while the patient is being treated. Of course, the doctor gets the final say, but the AI’s ‘thoughts’ are entered into the chart as well.
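To make that concrete, here’s a minimal sketch of the idea – nothing AMIE-specific or published by Google, just an illustration. The query_llm() function is a hypothetical placeholder for whatever model a real system would actually call, and the field names are made up for the example:

```python
# A sketch of LLM-assisted chart review, not a real clinical system.
# query_llm() is a hypothetical placeholder, not an actual API.

def review_chart(soap_notes: str, lab_results: dict[str, str]) -> str:
    """Ask a language model to flag anything in a chart worth a clinician's attention."""
    labs = "\n".join(f"{test}: {value}" for test, value in lab_results.items())
    prompt = (
        "You are assisting a physician. Review the chart notes and lab results "
        "below and list any findings that may need attention, with brief reasoning. "
        "The physician makes all final decisions.\n\n"
        f"Notes:\n{soap_notes}\n\nLabs:\n{labs}"
    )
    flags = query_llm(prompt)
    # In the scenario described above, these 'thoughts' would also be written
    # back into the chart so the doctor can verify or dismiss them.
    return flags


def query_llm(prompt: str) -> str:
    # Placeholder standing in for whichever large language model a real system would use.
    raise NotImplementedError
```

The point of the design is the last line of the prompt: the model flags and explains, while the clinician stays the decision-maker.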

I’m not sure engaging a patient for patient history was a good first step for a large language model in medicine, but of course that’s not all that Google’s research and Deep Mind teams are working on, so it may be part of an overall strategy. Or it might just be the thing that got funding because it was sexy.

Regardless, this is probably one of the more exciting uses of artificial intelligence because it’s not focused on making money. It’s focused on treating humans better. What’s not to like?

NYT Says No To Bots.

The content used for training large language models and other AIs is something I have written about before, including being able to opt out of being crawled by AI bots. The New York Times has updated its Terms and Conditions to disallow that – which I’ll get back to in a moment.

It’s an imperfect solution for so many reasons, and as I wrote before when writing about opting out of AI bots, it seems backwards.

In my opinion, they should allow people to opt in rather than this nonsense of having to go through motions to protect one’s content from being used as a part of a training model.

Back to the New York Times.

…The New York Times updated its terms of services Aug. 3 to forbid the scraping of its content to train a machine learning or AI system.

The content includes but is not limited to text, photographs, images, illustrations, designs, audio clips, video clips, “look and feel” and metadata, including the party credited as the provider of such content.

The updated TOS also prohibits website crawlers, which let pages get indexed for search results, from using content to train LLMs or AI systems…

“The New York Times Updates Terms of Service to Prevent AI Scraping Its Content”, Trishla Ostwal, Adweek.com, August 10th, 2023.

This article was then referenced by The Verge, which added a little more value.

…The move could be in response to a recent update to Google’s privacy policy that discloses the search giant may collect public data from the web to train its various AI services, such as Bard or Cloud AI. Many large language models powering popular AI services like OpenAI’s ChatGPT are trained on vast datasets that could contain copyrighted or otherwise protected materials scraped from the web without the original creator’s permission…

“The New York Times prohibits using its content to train AI models”, Jess Weatherbed, TheVerge.com, August 14th, 2023.

That’s pretty interesting considering that Google and the New York Times updated their agreement on News and Innovation on February 6th, 2023.

This all falls into a greater context where many media organizations called for rules protecting copyright in data used to train generative AI models in a letter you can see here.

Where does that leave us little folk? Strategically, bloggers have been a thorn in the side of the media for a few decades, driving down costs for sometimes pretty good content. Blogging is the grey area of the media, and no one really seems to want to tackle that.

I should ask WordPress.com what their stance is. People on Medium and Substack should also ask for a stance on that.

Speaking for myself – if you want to use my content for your training model so that you can charge money for a service, hit me in the wallet – or hit the road.

Blocking AI Bots: The Opt Out Issue.

Those of us who create anything – at least without the crutches of a large language model like ChatGPT – are a bit concerned about our works being used to train large language models. We get no attribution, no pay, and the companies that run the models can basically just grab our work, train their models, and turn around and charge customers for access to responses that our work helped create.

No single one of us is likely that important. But combined, it’s a bit of a rip-off. One friend suggested being able to block the bots, which is an insurmountable task because it depends on the bots obeying what is in the robots.txt file. There’s no real reason that they have to.
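For what it’s worth, the mechanics of opting out usually come down to a few lines in robots.txt naming the crawlers you want to turn away. A minimal sketch – the user-agent tokens below are ones the respective companies have documented (OpenAI’s GPTBot, Google’s Google-Extended, Common Crawl’s CCBot), but the list keeps changing and, as noted, nothing forces a bot to honor it:

```
# robots.txt – asking known AI training crawlers to stay away (compliance is voluntary)
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

That only covers bots that identify themselves and choose to obey; anything that lies about its user agent sails right past it.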

“How to Block AI Chatbots From Scraping Your Website’s Content” is a worthwhile guide to attempting to block the bots. It also makes the point that maybe it doesn’t matter.

I think that it does, at least in principle, because I’m of the firm opinion that websites should not have to opt out of being used by these AI bots – but rather, that websites should opt in as they wish. Nobody’s asked for anything, have they? Why should these companies use your work, or my work, without recompense and then turn around and charge for access to these things?

Somehow, we got stuck with ‘opting out’ when what these companies running the AI Bots should have done is allow people to opt in with a revenue model.

TANSTAAFL. Except if you’re a large tech company, apparently.

On the flip side, Zoom says that they’re not using data from users for their training models. Taken at face value, that’s great, but the real problem is that we wouldn’t know if they did.

The Technological Singularity.

Everyone’s been tapping away at their keyboards about – and perhaps having ChatGPT explain – the technological singularity, or the artificial intelligence singularity, or the AI singularity, or… whatever it gets repackaged as next.

Wikipedia has a very thorough read on it that is worth at least skimming to understand the basic concepts. It starts with the simplest of beginnings.

The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization… [2][3]

Technological Singularity, Wikipedia, accessed 11 July 2023.

By that definition, we could say that the first agricultural revolution – the neolithic agricultural revolution – was a technological singularity. Agriculture, which many take for granted, is actually a technology, and one we’re still trying to make better with our other technologies. Computational agroecology is one example of this.

I have friends I worked with who went on to apply drone technology to agriculture as well, circa 2015. Agricultural technology is still advancing, but the difference between it and the technological singularity everyone’s writing about today is that we’re now talking, basically, about a technology that has the capacity to become a runaway technology.

Runaway technology? When we get artificial intelligences doing surgery on their code to become more efficient and better at what they do, they will evolve in ways that we cannot predict but we can hope to influence. That’s the technological singularity that is the hot topic.

Since we can’t predict what will happen after such a singularity, speculating on it is largely a work of imagination. It can be really bad. It can be really good. But let’s get back to present problems and how they could impact a singularity.

…Alignment researchers worry about the King Midas problem: communicate a wish to an A.I. and you may get exactly what you ask for, which isn’t actually what you wanted. (In one famous thought experiment, someone asks an A.I. to maximize the production of paper clips, and the computer system takes over the world in a single-minded pursuit of that goal.) In what we might call the dog-treat problem, an A.I. that cares only about extrinsic rewards fails to pursue good outcomes for their own sake. (Holden Karnofsky, a co-C.E.O. of Open Philanthropy, a foundation whose concerns include A.I. alignment, asked me to imagine an algorithm that improves its performance on the basis of human feedback: it could learn to manipulate my perceptions instead of doing a good job.)..

“Can We Stop Runaway A.I.? Technologists warn about the dangers of the so-called singularity. But can anything actually be done to prevent it?”, Matthew Hutson, The New Yorker, May 16, 2023

In essence, this is a ‘yes-man‘ problem in that a system gives us what we want because it’s trained to – much like the dog’s eyebrows evolved to give us the ‘puppy dog eyes’. We want to believe the dog really feels guilty, and the dog may feel guilty, but it also might just be signaling what it knows we want to see. Sort of like a child trying to give a parent the answer they want rather than the truth.

This is why I think AI ‘hallucinations’ are examples of the same behavior. When prompted and it has no sensible response, rather than saying, “I can’t give an answer”, it gives us some stuff that it thinks we might want to see. “No, I don’t know where the remote is, but look at this picture I drew!”

Now imagine that happening with an artificial intelligence that may communicate with the same words and grammar we do but does not share our viewpoint – the viewpoint that gives things meaning. What would meaning be to an artificial intelligence that doesn’t understand our perspective, only how to give us what we want rather than what we’re asking for?

Homer Simpson plagiarizes humanity in this regard. Homer might ask an AI how to get Bart to do something, and the AI might produce donuts. “oOooh”, Homer croons, distracted, “Donuts!” It’s a red dot problem, with as much responsibility on us for being distracted as on the AI (which we created) for ‘hallucinating’ and giving Homer donuts.

But of course, we’ve got Ray Kurzweil predicting a future while he’s busy helping create it as a Director of Engineering for Google.

Of course, he could be right on this wonderful future that seems too good to be true – but it won’t be for everyone, given the status of the world. If my brain were connected to a computer, I’d probably have to wait for all the updates to install to get off the bed. And what happens if I don’t pay the subscription fee?

Just because we can do something doesn’t always mean we should – the Titan submersible is a brilliant example of this. I don’t think that many people will be lining up to plug a neural interface into their brain so that they can connect to the cloud. I think we’ll let the lab rats spend their money to beta test that for the rest of humanity.

The rest of humanity is beyond the moat for most technologists… and that’s why those of us who aren’t technologists, or who aren’t just technologists, should be talking about these problems.

The singularity is likely to happen. It may already have, with people’s attention spans down to 47 seconds because of ‘smart’ technology. When it comes to the singularity, technologists generally only look at the progress of technology and its pros for humanity – but there has been a serious downside as well.

The conversation needs to be balanced better, and that is probably going to be my next post here.

More Lawyers? Really?

We’re all guilty of looking at the world through our own lenses of experience. The person barely making ends meet while working 3 jobs in a thankless economy to support a family is not going to see things the same as a doctor or lawyer, as an example, particularly after they’ve done their internships.

The people who get quoted the most aren’t the majority. In fact, they’re usually a minority that live in a bubble, immune to most problems on the planet, and because of the fact that the bubble is sacred to them, they almost never venture outside.

CEOs live in a different world, blissfully unaware of the day to day issues of people who don’t live their lives. For some reason, these people are often glamorized yet they provide hints of their own biases at times.

Sundar Pichai, CEO of Google, recently demonstrated one. When talking about societal upheaval and jobs, he had an odd go-to but one that a CEO would be very comfortable with.

Lawyers.

“…“I think it’ll touch everything we do,” Pichai said of A.I. in an interview with The Verge’s Nilay Patel published Friday. “I do think there are big societal labor market disruptions that will happen.”

But the tech chief thinks that A.I. could also make some jobs better, if it’s done right. He used the example of the legal profession, which some believe will be the most disrupted by A.I., and said that even with technological developments, the need for some skills and services will not be eliminated altogether.  

“So, A.I. will make the profession better in certain ways, might have some unintended consequences, but I’m willing to almost bet 10 years from now, maybe there are more lawyers.”…

“Google’s Sundar Pichai thinks A.I. will spur ‘big societal labor market disruptions’ but also make professions better”, Prarthana Prakash, Fortune, May 12th, 2023.

I’m not going to put words into his mouth; there’s no need. These are questions he’s likely primed himself for, with answers that minimize the societal upheaval it will cause. He’s the CEO of Google. In 2022, Sundar Pichai made $226 million as CEO of Google, mainly in stock options. He’s vested in the success of Google, and the layoffs in January were… unfortunate for him, I suppose.

And we need more lawyers? Really? Are they planning to make things that much more complicated and expensive? Or does he picture a future where lawyers will charge less money?

Given the nature of how disruptive some of the technologies being dubbed “AI” by the hype cycle are, I might be more interested to hear from collective bargaining organizations than a CEO of Google when it comes to such disruption.

His perspective is implicitly biased; he’s vested in a corporation whose technology interests are not necessarily in line with those of most of its users. He’s not a bad person – I’m not saying that. I’m saying that what he is quoted as saying seems cavalier.

What I am saying is that someone who says, “We’ll have more lawyers” like it’s a good thing might not have thought things through beyond his bubble. Take it for what it’s worth.

There are a lot of people whose ways of life are at stake in all of this, and I’m not sure that they all want to be lawyers. I hope not, anyway. Justice is blind, they say.

Google, AI, and Search.

It’s no secret that Google is in the AI “arms race”, as it has been called, and there is some criticism that they’re in too much of a hurry.

“…The [AI] answer is displayed at the top, and on the left are links to sites from which it drew its answer. But this will look very different on the smaller screen of a mobile device. Users will need to scroll down to see those sources, never mind other sites that might be useful to their search.

That should worry both Google’s users and paying customers like advertisers and website publishers. More than 60% of Google searches in the US occur on mobile phones. That means for most people, Google’s AI answer will take up most of the phone screen. Will people keep scrolling around, looking for citations to tap? Probably not…”

“Google Is in Too Much of a Hurry on AI Search”, Parmy Olson, Bloomberg (via Washington Post), May 12th, 2023.

This could have a pretty devastating effect on Web 2.0 business models, which evolved around search engine results. That, in turn, could be bad for Google’s business model as it stands, which seems to indicate that their business model will be evolving soon too.

Will they go to a subscription model for users? It would be something that makes sense – if they didn’t have competition. They do. The other shoe on this has to drop. One thing we can expect from Google is that they have thought this through, and as an 800 lb gorilla that admonishes those that don’t follow standards, it will be interesting to see how the industry reacts.

It may change, and people are already advocating that somewhat.

“…Google Search’s biggest strength, in my opinion, was its perfect simplicity. Punch in some words, and the machine gives you everything the internet has to offer on the subject, with every link neatly cataloged and sorted in order of relevance. Sure, most of us will only ever click the first link it presents – god forbid we venture to the dark recesses of the second page of results – but that was enough. It didn’t need to change; it didn’t need this.

There’s an argument to be made that search AI isn’t for simple inquiries. It’s not useful for telling you the time in Tokyo right now, Google can do that fine already. It’s for the niche interrogations: stuff like ‘best restaurant in Shibuya, Tokyo for a vegan and a lactose intolerant person who doesn’t like tofu’. While existing deep-learning models might struggle a bit, we’re not that far off AIs being able to provide concise and accurate answers to queries like that…”

“Cramming AI into search results proves Google has forgotten what made it good”, Christian Guyton, TechRadar, 5/11/2023

Guyton’s article (linked above in the citation) is well worth the read in its entirety. It has pictures and everything.

The bottom line on all of this is that we don’t know what the AIs are trained on, we don’t know how this is going to affect business models for online publishers, and we don’t know if it’s actually going to improve the user experience.

Google and The New York Times: A New Path?

A few days ago I mentioned the normalization of Web 2.0, and yesterday I ended up reading about The New York Times getting around $100 million over a period of 3 years from Google.

“…The deal gives the Times an additional revenue driver as news publishers are bracing for an advertising-market slowdown. The company posted revenue of $2.31 billion last year, up 11% from a year earlier. It also more than offsets the revenue that the Times is losing after Facebook parent Meta Platforms last year told publishers it wouldn’t renew contracts to feature their content in its Facebook News tab. The Wall Street Journal at the time reported that Meta had paid annual fees of just over $20 million to the Times…”

“New York Times to Get Around $100 Million From Google Over Three Years”, Wall Street Journal, May 8th, 2023.

That’s a definite shot in the arm for The New York Times, particularly with the ad revenue model that Web 2.0 delivered slowing down. Will it lower the paywall on their articles? No idea.

This is a little amusing because, just on May 2nd, the New York Times called out Google’s lack of follow-through on defunding advertising that runs alongside climate-change denial content.

Still, it may demonstrate the move to a more solid model for actual trusted sources of news, and that could be a good thing for all of us. Maybe.

Personally, if it reduces paywalled content while allowing the New York Times to be critical of those that hand them money… well. It could be a start.