Google In, Google Out.

Last week, there were a lot of announcements, but really not that much happened. And for some strange reason, Google didn’t think to use the .io ccTLD for their big annual developer event, Google I/O.

It was so full of AI that they should have called it Google AI. I looked over the announcements and the advertorials on websites announcing stuff that could almost be cool except… well, it didn’t seem that cool. In fact, Google’s AI-assisted web search already has workarounds to bypass the AI – but I have yet to see it in Trinidad and Tobago. Maybe it hasn’t been fully rolled out, or maybe I don’t use Google as a search engine enough to spot it.

No one I saw in the Fediverse was drooling over anything Google had at the conference. Most comments were about companies slapping AI on anything and making announcements, which does seem to be what’s happening.

I suppose, too, that we’re all a little bit tired of AI announcements that really don’t say that much. OpenAI, Google, everyone is trying to get mindshare and build momentum, but questions about what they’re feeding their learning models and issues with ethics and law remain… and for most people, knowing that they’ll still have a job they can depend on tomorrow seems a more pressing issue.

The companies selling generative AI like snake oil to cure all the ills of the world seem disconnected from the ills of the world, and I remember that a year ago Sundar Pichai said we’d need more lawyers.

It’s not that generative AI is bad. It’s that it really hasn’t brought anything good for most people except a new subscription, less job security, and an increase in AI content showing up all over, bogging down even Amazon.com’s book publishing.

They want us to buy more of what they’re selling even as they take what some are selling to train their models to… sell back to us.

Really, all I ever wanted from Google was a good search engine. That sentiment seems to echo across the Fediverse. As it is, they’re not as good a search engine as they used to be – I use Google occasionally. Almost by accident.

I waited a week to see if any of the announcements would give me something to write about, and all I read about Google’s stuff was how to work around their search results. That’s telling. They want more subscribers, we want more income to afford the subscriptions. Go figure.

A Quick Look At Perplexity.AI

The Perplexity.AI logo, from their press pack.

In fiddling about on Mastodon, I came across a post linking to an article on Forbes: “‘Like Wikipedia And ChatGPT Had A Kid’: Inside The Buzzy AI Startup Coming For Google’s Lunch”.

Well, that deserved a look, if only because search engine results across the board give spammy first pages, Google included, and Wikipedia is a resource that I like because of one main thing: citations.

ChatGPT is… well, it’s interesting, but it’s… limited because you have to double check everything from your prompt to the results. So the idea of mixing the two is definitely attractive.

Thus, I ended up at Perplexity.ai and did some searches – some tricky ones related to me that I know other search engines often get wrong. Perplexity stumbled in the results, but it cited the sources that pushed it the wrong way as well as the sources that pushed it the right way.

That’s perfect for me right now. It gives the citations above the response, so you know where stuff is coming from. You can then omit citations that are wrong while drilling down into what it is you’re supposed to be looking into. For me, with the amount of research I do, this saves me a whole lot of tabs in my web browser and therefore allows me a mental health bonus in keeping track of what I’m writing about.

Of course, when I find something useful like this, I put it under bright lights and interrogate it, because on impulse I almost subscribed immediately. I’ve held off, at least for now, but so far it has me pondering my ChatGPT 4 subscription, since it’s much more of what I need and much less of what I don’t. When I am researching things to write, I need to be able to drill down and not be subject to hallucinations. I need the sources. ChatGPT can do that, and ChatGPT gives me access to DALL-E, but how many images do I need? How often do I use ChatGPT? Not that much, really.

I’m also displeased with present behemoth search engines, particularly since they collect information. Does Perplexity.ai collect information on users? According to the Perplexity.AI privacy policy, they do not. That’s at least hopeful. In the shifting landscape of user data, it’s hard to say what the future holds with any company. A buyout, change in management or a shift in the wind of a public toilet could cause policies to change, so we constantly have to keep an eye on that, but in the immediate, it is promising.

My other main query was about the Fediverse, which is notoriously not indexed. This is largely because of the nature of the Fediverse. It didn’t have much on that, as I expected.

I’ll be using it anonymously for a while to see how it works for me. If you’re looking for options for researching topics, Perplexity.ai may be worth a look.

Otherwise I would not have written about it.

A Basic Explanation of How AIs Identify Objects.

I’ve been experimenting with uploading images to ChatGPT 4 and seeing what it has to say about them. To me, it’s interesting because I gain some insight into how far things have progressed, as well as how descriptive ChatGPT can be about things.

While having coffee yesterday with a friend, I was showing him the capabilities. He chose this scene.

He, like others I showed here in Trinidad and Tobago, couldn’t believe it. It’s a sort of magic for people. What I like when I use it for this is that it doesn’t look at the picture as a human would, where the subject is pretty obvious. It looks at all of the picture, which is worth exploring in a future post.

He asked me how it could do that – give the details that it did in the next image in this post. I tried explaining it, and I caught that he was thinking of the classic “IF…THEN…ELSE” sequence from the ‘classical’ computer science we had been exposed to in the 1980s.

I tried and failed to explain it. I could tell I failed because he was frustrated with my explanation, and when I can’t explain something, it bothers me.

We went our separate ways, and I went to a birthday party for an old friend. I didn’t get home until much later. With people driving as they do here in Trinidad, my mind was focused on avoiding them, so I didn’t get to think on it as I would have liked.

I slept on it.

This morning I remembered something I had drawn up in my teens, and now I think I can explain it better to my friend, and perhaps even people curious about it. Hopefully when I send this to him he’ll understand, and since I’m spending the time doing just that, why not everyone else?

Identifying Objects.

As a teenager, I drew up on a sketch pad page a way of getting a computer to identify objects. It included a camera connected to the computer – something that wasn’t done commercially yet – and the idea was to rotate an object through all the axes and tell the computer what the object was at every conceivable angle. It was just an idea from a young man passionate about the future, with the beginnings of a grounding in personal computing.

What we’ve all been doing with social media for some time is tagging things. This is how we organized finding things, and the incentive was for people to find our content.

A bat in the bathtub where I was staying in Guyana, circa 2005, while I was doing some volunteer IT stuff. It was noteworthy to me, so I did what I did then – took a picture and posted it to Flickr.

Someone would post something on social media, as I did with Flickr, and tag it appropriately (we would hope). I did have fun with it, tagging things like a bat in a photograph as being naked, which oddly was my most popular photo. Of course it was naked, you perverts.

However, I also tagged it as a bat. And if you search Flickr for a bat, you’ll come up with a lot of images of bats – all different sorts of bats, from all angles. There are even more specific tags for kinds of bats, but overall we humans pretty much know a bat when we see one, so all those images of bats could then be added to a training set to allow a computer to come up with its own algorithmic way of identifying bats.

And it gets better.

The most popular versions of bats on Flickr, as an example, will be the ones that the most people liked. So now, the images of bats are given weight based on their popularity, and therefore could be seen as the best images of bats. Clearly, my picture of the bat in the bathtub shouldn’t be as popular a version.

It gets even better.

The more popular an image is, the more likely it is to be used on the Internet regardless of copyright, which means that it will show up in search engine rankings if you search for images of bats. Search engine ranking then becomes another weight.

The more popular images that we have collectively chosen become the training data for bats. The system learns the pattern of the objects, much as we do but differently, because it has different ways of looking at the same things.
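If it helps to see that as a sketch, here’s a rough bit of Python showing how those popularity and search-ranking weights might be attached to tagged photos before training. The photo structure, the weighting formula, and the numbers are all invented for illustration – nothing here is how Flickr, Google, or anyone else actually does it.

```python
from dataclasses import dataclass

@dataclass
class TaggedPhoto:
    """A photo roughly as it might come from a photo-sharing site."""
    tags: list[str]      # what humans said is in the picture
    likes: int           # how many people favourited it
    search_rank: int     # where it shows up in image search (1 = first)

def training_weight(photo: TaggedPhoto) -> float:
    """Turn popularity and search ranking into a sample weight.

    This formula is made up for illustration; the point is only that
    better-liked, higher-ranked photos count for more during training.
    """
    popularity = 1.0 + photo.likes / 100.0
    ranking = 1.0 / photo.search_rank
    return popularity * ranking

photos = [
    TaggedPhoto(tags=["bat", "naked"], likes=40, search_rank=12),    # my bathtub bat
    TaggedPhoto(tags=["bat", "wildlife"], likes=900, search_rank=1), # everyone's favourite bat
]

for p in photos:
    print(p.tags, round(training_weight(p), 2))
```

A real pipeline would pass those weights to whatever model is being trained, so the popular, highly-ranked bats shape the learned pattern far more than my bathtub bat does.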

If you take thousands – perhaps millions – of pictures of bats and train a system to identify them, it can then go through all of the images available looking for bats. It will screw up sometimes, and you tell it, “Not a bat”. It also finds the bats that people haven’t tagged.

Given the number of tagged images and even tagged text on the Internet, doing this with specific things is a fairly straightforward process, because we don’t have to do anything. We simply correct mistakes.
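Here’s a toy sketch of that “we simply correct mistakes” loop in Python. Everything in it is a hypothetical stand-in – the classifier, the pile of untagged photos, and the step that asks a human – not any real product’s code.

```python
def correction_loop(classifier, untagged_images, ask_human):
    """Let the model label everything; a person is only needed when it's wrong.

    All three arguments are stand-ins: a trained model with a predict()
    method, a pile of photos nobody tagged, and whatever interface shows a
    human the picture and asks whether the label is right.
    """
    corrections = []
    for image in untagged_images:
        guess = classifier.predict(image)         # e.g. "bat" or "not a bat"
        verdict = ask_human(image, guess)         # the person confirms, or says "Not a bat"
        if verdict != guess:
            corrections.append((image, verdict))  # these go back into the training set
    return corrections
```

The point is the division of labour: the system does the tagging at scale, and the humans only supply the corrections.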

Now do that with all the tags of different objects. Eventually, you’ll get to where multiple objects in a picture can be identified.

That’s basically how it works. I don’t know that they used Flickr.com, or search engines, but if I were doing it, that’s probably how I would – and it’s no accident that people have been encouraged to do this a lot more over the years preceding artificial intelligence hitting the mainstream. Now look at who is developing artificial intelligences. Social networks and search engines.

The same thing applies to text.

Then, when you hand it an image with various objects in it, it identifies what it knows and describes them based on the words commonly associated with those objects, and if objects are grouped together, they become a higher-level object. Milk and cookies is a great example.
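A toy version of that grouping step might look like this in Python – the detected objects stand in for whatever an image model pulled out of the photo, and the combination table is made up purely to show how “milk” plus “cookies” becomes “milk and cookies”.

```python
# A made-up lookup of object combinations that read as one higher-level thing.
HIGHER_LEVEL = {
    frozenset({"milk", "cookies"}): "milk and cookies",
    frozenset({"bat", "bathtub"}): "a bat in a bathtub",
}

def describe(detected_objects: set[str]) -> str:
    """Describe a picture from the objects found in it.

    `detected_objects` is whatever an image model (hypothetically) pulled
    out of the photo; the grouping rules above are invented for illustration.
    """
    for combo, phrase in HIGHER_LEVEL.items():
        if combo <= detected_objects:   # all parts of the combination were found
            return phrase
    return ", ".join(sorted(detected_objects)) or "nothing I recognise"

print(describe({"milk", "cookies", "table"}))  # -> "milk and cookies"
```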

And so it goes, stacking higher and higher a capacity to recognize patterns of patterns of patterns…

And this also explains how the revolution of Web x.0 may have carried the seeds of its own destruction.

Google, AI, and Search.

It’s no secret that Google is in the AI “arms race”, as it has been called, and there is some criticism that they’re in too much of a hurry.

“…The [AI] answer is displayed at the top, and on the left are links to sites from which it drew its answer. But this will look very different on the smaller screen of a mobile device. Users will need to scroll down to see those sources, never mind other sites that might be useful to their search.

That should worry both Google’s users and paying customers like advertisers and website publishers. More than 60% of Google searches in the US occur on mobile phones. That means for most people, Google’s AI answer will take up most of the phone screen. Will people keep scrolling around, looking for citations to tap? Probably not…”

Google Is in Too Much of a Hurry on AI Search, Parmy Olson, Bloomberg (via Washington Post), May 12th, 2023.

This could have a pretty devastating effect on Web 2.0 business models, which evolved around search engine results. That, in turn, could be bad for Google’s business model as it stands, which seems to indicate that their business model will be evolving soon too.

Will they go to a subscription model for users? It would be something that makes sense – if they didn’t have competition. They do. The other shoe has yet to drop on this. One thing we can expect from Google is that they have thought this through, and as an 800 lb gorilla that admonishes those that don’t follow standards, it will be interesting to see how the industry reacts.

It may change, and people are already advocating that somewhat.

“…Google Search’s biggest strength, in my opinion, was its perfect simplicity. Punch in some words, and the machine gives you everything the internet has to offer on the subject, with every link neatly cataloged and sorted in order of relevance. Sure, most of us will only ever click the first link it presents – god forbid we venture to the dark recesses of the second page of results – but that was enough. It didn’t need to change; it didn’t need this.

There’s an argument to be made that search AI isn’t for simple inquiries. It’s not useful for telling you the time in Tokyo right now, Google can do that fine already. It’s for the niche interrogations: stuff like ‘best restaurant in Shibuya, Tokyo for a vegan and a lactose intolerant person who doesn’t like tofu’. While existing deep-learning models might struggle a bit, we’re not that far off AIs being able to provide concise and accurate answers to queries like that…”

Cramming AI into search results proves Google has forgotten what made it good, Christian Guyton, TechRadar, May 11th, 2023.

Guyton’s article (linked above in the citation) is well worth the read in its entirety. It has pictures and everything.

The bottom line on all of this is that we don’t know what the AIs are trained on, we don’t know how this is going to affect business models for online publishers, and we don’t know if it’s actually going to improve the user experience.

Google and The New York Times: A New Path?

A few days ago I mentioned the normalization of Web 2.0, and yesterday I ended up reading about The New York Times getting around $100 million over a period of 3 years from Google.

“…The deal gives the Times an additional revenue driver as news publishers are bracing for an advertising-market slowdown. The company posted revenue of $2.31 billion last year, up 11% from a year earlier. It also more than offsets the revenue that the Times is losing after Facebook parent Meta Platforms last year told publishers it wouldn’t renew contracts to feature their content in its Facebook News tab. The Wall Street Journal at the time reported that Meta had paid annual fees of just over $20 million to the Times…”

New York Times to Get Around $100 Million From Google Over Three Years, Wall Street Journal, May 8th, 2023.

That’s a definite shot in the arm for The New York Times, particularly given the ad revenue model that Web 2.0 depended on. Will it lower the paywall to their articles? No idea.

This is a little amusing because just on May 2nd, the New York Times called out Google’s lack of follow-through on defunding advertising that runs alongside climate-change denial content.

Still, it may demonstrate the move to a more solid model for actual trusted sources of news, and that could be a good thing for all of us. Maybe.

Personally, if it reduces paywalled content while allowing the New York Times to be critical of those that hand them money… well. It could be a start.