Last week, there were a lot of announcements, but really not that much happened. And for some strange reason, Google didn’t think to use the .io ccTLD for their big annual developer event, Google I/O.
It was so full of AI that they should have called it Google AI. I looked over the announcements – the advertorials on websites announcing stuff that could almost be cool except… well, it didn’t seem that cool. In fact, workarounds to bypass the AI in Google’s web search are already circulating – but I have yet to see the AI search here in Trinidad and Tobago. Maybe it hasn’t been fully rolled out, or maybe I don’t use Google as a search engine enough to have spotted it.
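For reference, the workaround most people were sharing – assuming it still works, since Google could change or remove the parameter at any point – is appending `udm=14` to a search URL, which requests the plain “Web” results filter without the AI overview:

```
https://www.google.com/search?q=example+query&udm=14
```

Some browsers let you save that as a custom search engine, with `%s` in place of the query.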
No one I saw in the Fediverse was drooling over anything that Google had at the conference. Most comments were about companies slapping AI on anything and making announcements – which does seem to be the case.
I suppose, too, that we’re all a little tired of AI announcements that really don’t say much. OpenAI, Google, everyone is trying to capture mindshare and build momentum, but there are open questions about what they’re feeding their learning models, and issues with ethics and law… and for most people, knowing they’ll have a job they can depend on seems a more pressing issue.
The companies selling generative AI like snake oil to cure all the ills of the world seem disconnected from those ills – and I’ll remember that a year ago Sundar Pichai said we’d need more lawyers.
It’s not that generative AI is bad. It’s that it really hasn’t brought anything good for most people except a new subscription, less job security, and an increase in AI content showing up everywhere, bogging down even Amazon.com’s book publishing.
They want us to buy more of what they’re selling even as they take what some are selling to train their models to… sell back to us.
Really, all I ever wanted from Google was a good search engine. That sentiment seems to echo across the Fediverse. As it is, they’re not as good a search engine as they used to be – I use Google occasionally, almost by accident.
I waited a week for something to write about some of the announcements, and all I read about Google’s stuff was how to work around their search results. That’s telling. They want more subscribers, we want more income to afford the subscriptions. Go figure.
This started off as a baseline post regarding generative artificial intelligence and its aspects, and grew fairly long because information kept coming out even as I was writing it. It’s my intention to do a ’roundup’ like this, highlighting different focuses as needed. Every bit of it is connected, but in social media postings things tend to be written about in silos. I’m attempting to integrate them, since the larger implications are hidden in these details, and will try to stay on top of it as things progress.
It’s long enough that it could have been several posts, but I wanted it all together at least once.
No AI was used in the writing, though some images have been generated by AI.
The two versions of artificial intelligence on the table right now – the marketed and the reality – have various problems that make it seem like we’re wrestling a mating orgy of cephalopods.
The marketing aspect is a constant distraction, feeding us what helps stock prices and goodwill toward those implementing the generative AIs, while the real aspect of these generative AIs is not being addressed in a cohesive way.
To simplify this, this post breaks it down into the Input, the Output, and the impacts on the ecosystem the generative AIs work in.
The Input.
There’s a lot that goes into these systems other than money and water. There’s the information used for the learning models, the hardware needed, and the algorithms used.
The Training Data.
The focus so far has been on what goes into their training data, and that has been an issue – spawning lawsuits and, less obviously, eroding trust in the companies involved.
…The race to lead A.I. has become a desperate hunt for the digital data needed to advance the technology. To obtain that data, tech companies including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, according to an examination by The New York Times…
While some of these actions are of questionable legality, to some they are unquestionably unethical – hence the revolt mentioned last year against AI companies using content without permission. It’s of questionable effect because no one seems to have insight into what the training data consists of, and no one seems to be auditing it.
There’s a need for that audit, if only to allow for trust.
…Industry and audit leaders must break from the pack and embrace the emerging skills needed for AI oversight. Those that fail to address AI’s cascading advancements, flaws, and complexities of design will likely find their organizations facing legal, regulatory, and investor scrutiny for a failure to anticipate and address advanced data-driven controls and guidelines.
While everyone is hunting down data, no one seems to be seriously working on oversight and audits, at least not publicly, though the United States is pushing for global regulations on artificial intelligence at the UN. There seems to have been no update on the status of that, even as artificial intelligence is being used to select targets in at least two wars right now (Ukraine and Gaza).
There’s an imbalance here that needs to be addressed. It would be sensible to have external auditing of the learning data models and their sources, as well as the algorithms involved – and, to get a little ahead, the output too. Of course, these sorts of things should be done with trading on stock markets as well, though that doesn’t seem to have made much headway in all the time it’s been happening either.
There is a new bill being pressed in the United States, the Generative AI Copyright Disclosure Act, that is worth keeping an eye on:
“…The California Democratic congressman Adam Schiff introduced the bill, the Generative AI Copyright Disclosure Act, which would require that AI companies submit any copyrighted works in their training datasets to the Register of Copyrights before releasing new generative AI systems, which create text, images, music or video in response to users’ prompts. The bill would need companies to file such documents at least 30 days before publicly debuting their AI tools, or face a financial penalty. Such datasets encompass billions of lines of text and images or millions of hours of music and movies…”
Given how much information is used by these companies already from Web 2.0 forward, through social media websites such as Facebook and Instagram (Meta), Twitter, and even search engines and advertising tracking, it’s pretty obvious that this would be in the training data as well.
The Algorithms.
The algorithms for generative AI are pretty much trade secrets at this point, but one has to wonder why so much data is needed to feed the training models when better algorithms could require less. Consider that a well-read person can answer some questions, even as a layperson, with a smaller carbon footprint. We have no insight into the algorithms either, which makes it seem as though these companies are simply throwing more hardware and data at the problem rather than being more efficient with the data and hardware they already took.
There’s not much news about that, and it’s unlikely that we’ll see any. It does seem like fuzzy logic is playing a role, but it’s difficult to say to what extent, and given the nature of fuzzy logic, it’s hard to say whether its implementation is as good as it should be.
The future holds quantum computing, which could make all of the present efforts obsolete, but no one seems interested in waiting around for that to happen. Instead, it’s full speed ahead with NVIDIA presently dominating the market for hardware for these AI companies.
The Output.
One of the larger topics that seems to have faded is what some called ‘hallucinations’ by generative AI. Strategic deception was also very prominent for a short period.
As students use generative AI, education itself has been disrupted. The impact is being portrayed as an overall good, which may simply be an acceptance that it’s not going away. It’s interesting to consider that the AI companies have taken more content than students could possibly get or afford through the educational system – something worth exploring.
…For the past year, a political fight has been raging around the world, mostly in the shadows, over how — and whether — to control AI. This new digital Great Game is a long way from over. Whoever wins will cement their dominance over Western rules for an era-defining technology. Once these rules are set, they will be almost impossible to rewrite…
…The headline available to Grok subscribers on Monday read, “Sun’s Odd Behavior: Experts Baffled.” And it went on to explain that the sun had been, “behaving unusually, sparking widespread concern and confusion among the general public.”…
Of course, there’s some levity involved in that one, whereas Grok posting that Iran had struck Tel Aviv (Israel) with missiles seems dangerous, particularly when posted to the front page of X (formerly Twitter). It shows the dangers of fake news with AI, deepening concerns related to social media, and should make us ask why billionaires involved in artificial intelligence wield the influence that they do. How much of that influence is generated? We have an idea how much of it is lobbied for.
Meanwhile, Facebook has been spamming users and restricting accounts without demonstrating a cause. If there were a videotape in a Blockbuster on this, it would be titled “Algorithms Gone Wild!”.
Journalism is also impacted by AI, though real journalists tend to be rigorous with their sources. Real newsrooms have rules, and while we don’t have much insight into how AI is being used in newsrooms, it stands to reason that if a newsroom is to be a trusted source, it will go out of its way to make sure it is: it has a vested interest in getting things right. That has not stopped some websites parading as trusted sources from disseminating untrustworthy information – because even in the Web 2.0 era, when the world had an opportunity to discuss such things at the World Summit on the Information Society, the country with the largest web presence barely participated, if at all, at a government level.
Meanwhile, AI is also apparently being used as a cover for some outsourcing:
Your automated cashier isn’t an AI, just someone in India. Amazon made headlines this week for rolling back its “Just Walk Out” checkout system, where customers could simply grab their in-store purchases and leave while a “generative AI” tallied up their receipt. As reported by The Information, however, the system wasn’t as automated as it seemed. Amazon merely relied on Indian workers reviewing store surveillance camera footage to produce an itemized list of purchases. Instead of saving money on cashiers or training better systems, costs escalated and the promise of a fully technical solution was even further away…
Maybe AI is creating jobs in India by proxy. It’s easy to blame problems on AI, too, which is a larger problem because the world often looks for something to blame and having an automated scapegoat certainly muddies the waters.
And the waters of the Big Picture of AI are muddied indeed – perhaps partly by design. After all, those involved are making money, and they now have even better tools to influence markets, populations, and you.
In a world that seems to be running a deficit when it comes to trust, the tools we’re creating seem to be increasing rather than decreasing that deficit at an exponential pace.
The full article at The New York Times is worth expending one of your free articles, if you’re not a subscriber. It gets into a lot of specifics, and is really a treasure chest of a snapshot of what companies such as Google, Meta and OpenAI have been up to and have announced as plans so far. ↩︎
I imagine that there are some pretty high quality resumes floating around. As far as the tech field goes, Google is probably considered top tier, and landing a position against someone with Google on their resume is going to be tough.
Blizzard was one of those dream jobs I had as a significantly younger developer way back when. They were often late on delivery for a new game, but it was pretty much worth it. I still play Starcraft II.
It’s become an employer’s job market – maybe it was before, but definitely more so now – in an era when artificial intelligence may be becoming more attractive to companies for software development, as well as other things. For all we know, they may have consulted artificial intelligence on some of the layoffs. It wouldn’t be the first time that happened, though that time it was in Russia.
I can’t imagine that Google, Microsoft, Meta and Amazon aren’t using big data and AI for this, at least behind the scenes, but it’s probably not being explained because of the blowback that might cause. ‘Fired by AI’ is not something that people would like to see.
When tech companies axe employees, Wall Street rewards them, so stock prices go up – and there are more unemployed technology folk in a period when AI tools are making so many types of productivity easier. Maybe too much so.
This reminds me so much of the 1990s. The good news is that tech survived the 1990s despite the post-merger layoffs.
Of course, the correction on the NPR article (at the bottom) is something I wish I had caught earlier. “Nearly 25,000 tech workers were laid in the first weeks of 2024. Why is that?” would definitely be an article worth reading.
The reality is that it’s not as much of an advance as some posts and titles may have people believing. Doctors aren’t going to be replaced anytime soon, particularly since the paper’s conclusion was very realistic:
The utility of medical AI systems could be greatly improved if they are better able to interact conversationally, anchoring on large-scale medical knowledge while communicating with appropriate levels of empathy and trust. This research demonstrates the significant potential capabilities of LLM based AI systems for settings involving clinical history-taking and diagnostic dialogue. The performance of AMIE in simulated consultations represents a milestone for the field, as it was assessed along an evaluation framework that considered multiple clinically-relevant axes for conversational diagnostic medical AI. However, the results should be interpreted with appropriate caution. Translating from this limited scope of experimental simulated history-taking and diagnostic dialogue, towards real-world tools for people and those who provide care for them, requires significant additional research and development to ensure the safety, reliability, fairness, efficacy, and privacy of the technology. If successful, we believe AI systems such as AMIE can be at the core of next generation learning health systems that help scale world class healthcare to everyone.
In essence, this is a start, and pretty promising given it was done entirely through a text chat application. Clinicians – real doctors – who took part in the study were at a disadvantage, because they normally have a conversation with the patient.
As I quipped on social media with a friend who is a doctor, if the patient is unresponsive, the best AMIE can do is repeat itself in all caps:
“HEY! ARE YOU UNCONSCIOUS? DID YOU JUST LEAVE? COME BACK! YOU CAN’T DIE UNLESS I DIAGNOSE YOU!”
Interestingly, doctors aren’t always the ones who take patient histories, either. Sometimes it’s nurses; in the Navy it was often Corpsmen. Often, when a doctor walked into the room to see a patient, they already had SOAP notes to work from, verify, and add to.
The take from Psychology Today, though, is interesting, pointing out that AI and LLMs are charting a new course in goal-oriented patient dialogues. However, even that article seemed to gloss over the fact that this was all done in text chat when it pointed out that, in terms of conversation quality, AMIE scored 4.7 out of 5 while physicians averaged 3.9.
There is a very human element to medicine which involves evaluating a patient by looking and listening to them. In my experience as a Navy Corpsman taking medical histories for the doctors, patients can be tricky and unfocused, particularly when in pain. Evaluation often leans more on what one observes more than what the patient says, particularly in an emergency setting. I’ve seen good doctors work magic with patient histories, ordering tests based not on what the patient told them but what they observed, ruling things out diagnostically.
Factor in what I consider the commodification of medicine over my lifetime – doctors can be time-constrained to see more patients per unit of time – and that certainly doesn’t help things; that’s a human-induced error when it crops up. Given the way the study was done, I don’t think it was much of a factor here, but it’s worth considering.
When we go to the doctor as patients, when sitting with the doctor in the uncomfortable uniform of the patient on an examination table that is designed to draw all the heat from your body through your buttocks, we tend to think we’re the only person the doctor is dealing with. That’s rarely the case.
I do think we’re missing the boat on this one, though, because checking patient charts would be a great exercise of what a large language model (LLM) is good at: evaluating text and information and coming up with a diagnosis. Imagine an artificial intelligence evaluating charts and lab tests as they come back, then alerting doctors when necessary while the patient is being treated. Of course, the doctor gets the final say, but the AI’s ‘thoughts’ are entered into the chart as well.
I’m not sure engaging a patient for patient history was a good first step for a large language model in medicine, but of course that’s not all that Google’s research and Deep Mind teams are working on, so it may be part of an overall strategy. Or it might just be the thing that got funding because it was sexy.
Regardless, this is probably one of the more exciting uses of artificial intelligence because it’s not focused on making money. It’s focused on treating humans better. What’s not to like?
The content for training large language models and other AIs is something I have written about before, including being able to opt out of being crawled by AI bots. The New York Times has updated its Terms and Conditions to disallow such crawling – which I’ll get back to in a moment.
In my opinion, they should allow people to opt in rather than this nonsense of having to go through motions to protect one’s content from being used as a part of a training model.
Back to the New York Times.
…The New York Times updated its terms of services Aug. 3 to forbid the scraping of its content to train a machine learning or AI system.
The content includes but is not limited to text, photographs, images, illustrations, designs, audio clips, video clips, “look and feel” and metadata, including the party credited as the provider of such content.
The updated TOS also prohibits website crawlers, which let pages get indexed for search results, from using content to train LLMs or AI systems…
This article was then referenced by The Verge, which added a little more value.
…The move could be in response to a recent update to Google’s privacy policy that discloses the search giant may collect public data from the web to train its various AI services, such as Bard or Cloud AI. Many large language models powering popular AI services like OpenAI’s ChatGPT are trained on vast datasets that could contain copyrighted or otherwise protected materials scraped from the web without the original creator’s permission…
Where does that leave us little folk? Strategically, bloggers have been a thorn in the side of the media for a few decades, driving down costs for sometimes pretty good content. Blogging is the grey area of the media, and no one really seems to want to tackle that.
I should ask WordPress.com what their stance is. People on Medium and Substack should also ask for a stance on that.
Speaking for myself – if you want to use my content for your training model so that you can charge money for a service, hit me in the wallet – or hit the road.
Those of us who create anything – at least without the crutches of a large language model like ChatGPT – are a bit concerned about our works being used to train large language models. We get no attribution, no pay, and the companies that run the models can basically just grab our work, train their models, and turn around and charge customers for access to responses that our work helped create.
No single one of us is likely that important. But combined, it’s a bit of a rip-off. One friend suggested being able to block the bots, which is a nearly insurmountable task because it depends on the bots obeying what is in the robots.txt file. There’s no real reason that they have to.
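For what it’s worth, the opt-out currently on offer looks something like this – a robots.txt sketch using the crawler names the major players have published (GPTBot for OpenAI, Google-Extended for Google’s AI training, CCBot for Common Crawl, whose archives feed many training sets). Again, compliance is entirely voluntary on the bot’s part:

```
# Ask OpenAI's training crawler to stay out
User-agent: GPTBot
Disallow: /

# Ask Google not to use content for AI training
# (Google-Extended does not affect Search indexing)
User-agent: Google-Extended
Disallow: /

# Common Crawl's bot
User-agent: CCBot
Disallow: /
```

A well-behaved crawler reads this before fetching anything else; a badly behaved one ignores it entirely, which is rather the point.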
I think that it does, at least in principle, because I’m of the firm opinion that websites should not have to opt out of being used by these AI bots – rather, websites should opt in as they wish. Nobody’s asked for anything, have they? Why should these companies use your work, or mine, without recompense and then turn around and charge for access to the results?
Somehow, we got stuck with ‘opting out’ when what these companies running the AI Bots should have done is allow people to opt in with a revenue model.
TANSTAAFL – there ain’t no such thing as a free lunch. Except if you’re a large tech company, apparently.
Everyone’s been tapping out on their keyboards – and perhaps having ChatGPT explain – the technological singularity, or artificial intelligence singularity, or the AI singularity, or… whatever it gets repackaged as next.
Wikipedia has a very thorough read on it that is worth at least skimming to understand the basic concepts. It starts with the simplest of beginnings.
The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization… [2][3]
By that definition, we could say that the first agricultural revolution – the neolithic agricultural revolution – was a technological singularity. Agriculture, which many take for granted, is actually a technology, and one we’re still trying to make better with our other technologies. Computational agroecology is one example of this.
I have friends I worked with who went on to apply drone technology to agriculture, circa 2015. Agricultural technology is still advancing, but the difference between it and the technological singularity everyone’s writing about today is that we’re now talking, basically, about a technology that has the capacity to become a runaway technology.
A runaway technology? When we get artificial intelligences performing surgery on their own code to become more efficient and better at what they do, they will evolve in ways that we cannot predict but can hope to influence. That’s the technological singularity that is the hot topic.
Since we can’t predict what will happen after such a singularity, speculating on it is largely a work of imagination. It can be really bad. It can be really good. But let’s get back to present problems and how they could impact a singularity.
…Alignment researchers worry about the King Midas problem: communicate a wish to an A.I. and you may get exactly what you ask for, which isn’t actually what you wanted. (In one famous thought experiment, someone asks an A.I. to maximize the production of paper clips, and the computer system takes over the world in a single-minded pursuit of that goal.) In what we might call the dog-treat problem, an A.I. that cares only about extrinsic rewards fails to pursue good outcomes for their own sake. (Holden Karnofsky, a co-C.E.O. of Open Philanthropy, a foundation whose concerns include A.I. alignment, asked me to imagine an algorithm that improves its performance on the basis of human feedback: it could learn to manipulate my perceptions instead of doing a good job.)..
In essence, this is a ‘yes-man‘ problem: a system gives us what we want because it’s trained to – much like dogs’ eyebrows evolved to give us ‘puppy dog eyes’. We want to believe the dog really feels guilty, and it may, but it also might just be signaling what it knows we want to see. Sort of like a child trying to give a parent the answer they want rather than the truth.
I think AI ‘hallucinations’ are examples of this. When prompted and it has no sensible response, rather than saying, “I can’t give an answer”, the system gives us something it thinks we might want to see. “No, I don’t know where the remote is, but look at this picture I drew!”
Now imagine that happening with an artificial intelligence that communicates with the same words and grammar we do but does not share our viewpoint – the viewpoint that gives things meaning. What would meaning be to an artificial intelligence that doesn’t understand our perspective, only how to give us what we want rather than what we’re asking for?
Homer Simpson parodies humanity in this regard. Homer might ask an AI how to get Bart to do something, and the AI might produce donuts. “oOooh”, Homer croons, distracted, “Donuts!” It’s a red-dot problem – as much our responsibility for being distracted as it is the AI’s (which we created) for ‘hallucinating’ and giving Homer donuts.
But of course, we’ve got Ray Kurzweil predicting a future while he’s busy helping create it as a Director of Engineering for Google.
Of course, he could be right about this wonderful future that seems too good to be true – but it won’t be for everyone, given the state of the world. If my brain were connected to a computer, I’d probably have to wait for all the updates to install before getting out of bed. And what happens if I don’t pay the subscription fee?
Just because we can do something doesn’t always mean we should – the Titan submersible is a brilliant example of this. I don’t think many people will be lining up to plug a neural interface into their brain so they can connect to the cloud. I think we’ll let the lab rats spend their money to beta test that for the rest of humanity.
The rest of humanity is beyond the moat for most technologists… and that’s why those of us who aren’t technologists, or who aren’t just technologists, should be talking about these problems.
The singularity is likely to happen. It may already have – with attention spans down to 47 seconds thanks to ‘smart’ technology – because when it comes to the singularity, technologists generally only look at the progress of technology and its pros for humanity. But there has been a serious downside as well.
The conversation needs to be balanced better, and that is probably going to be my next post here.
We’re all guilty of looking at the world through our own lenses of experience. The person barely making ends meet while working 3 jobs in a thankless economy to support a family is not going to see things the same as a doctor or lawyer, as an example, particularly after they’ve done their internships.
The people who get quoted the most aren’t the majority. In fact, they’re usually a minority that live in a bubble, immune to most problems on the planet, and because of the fact that the bubble is sacred to them, they almost never venture outside.
CEOs live in a different world, blissfully unaware of the day to day issues of people who don’t live their lives. For some reason, these people are often glamorized yet they provide hints of their own biases at times.
Sundar Pichai, CEO of Google, recently demonstrated one. When talking about societal upheaval and jobs, he had an odd go-to but one that a CEO would be very comfortable with.
Lawyers.
“…“I think it’ll touch everything we do,” Pichai said of A.I. in an interview with The Verge’s Nilay Patel published Friday. “I do think there are big societal labor market disruptions that will happen.”
But the tech chief thinks that A.I. could also make some jobs better, if it’s done right. He used the example of the legal profession, which some believe will be the most disrupted by A.I., and said that even with technological developments, the need for some skills and services will not be eliminated altogether.
“So, A.I. will make the profession better in certain ways, might have some unintended consequences, but I’m willing to almost bet 10 years from now, maybe there are more lawyers.”…
I’m not going to put words into his mouth; there’s no need. These are questions he’s likely primed himself for, with answers that minimize the societal upheaval this will cause. He’s the CEO of Google. In 2022, Sundar Pichai made $226 million as CEO of Google, mainly in stock options. He’s vested in the success of Google, and the layoffs in January were… unfortunate for him, I suppose.
And we need more lawyers? Really? Are they planning to make things that much more complicated and expensive? Or does he picture a future where lawyers will charge less money?
Given the nature of how disruptive some of the technologies being dubbed “AI” by the hype cycle are, I might be more interested to hear from collective bargaining organizations than a CEO of Google when it comes to such disruption.
His perspective is implicitly biased; he’s vested in a corporation whose technology interests are not necessarily in line with those of most of its users. He’s not a bad person – I’m not saying that. I’m saying that what he is quoted as saying seems cavalier.
What I am saying is that someone who says, “We’ll have more lawyers” like it’s a good thing might not have thought things through beyond his bubble. Take it for what it’s worth.
There are a lot of people whose ways of life are at stake in all of this, and I’m not sure that they all want to be lawyers. I hope not, anyway. Justice is blind, they say.
It’s no secret that Google is in the AI “arms race”, as it has been called, and there is some criticism that they’re in too much of a hurry.
“…The [AI] answer is displayed at the top, and on the left are links to sites from which it drew its answer. But this will look very different on the smaller screen of a mobile device. Users will need to scroll down to see those sources, never mind other sites that might be useful to their search.
That should worry both Google’s users and paying customers like advertisers and website publishers. More than 60% of Google searches in the US occur on mobile phones. That means for most people, Google’s AI answer will take up most of the phone screen. Will people keep scrolling around, looking for citations to tap? Probably not…”
This could have a pretty devastating effect on Web 2.0 business models, which evolved around search engine results. That, in turn, could be bad for Google’s business model as it stands, which seems to indicate that their business model will be evolving soon too.
Will they go to a subscription model for users? It would be something that makes sense – if they didn’t have competition. They do. The other shoe on this has to drop. One thing we can expect from Google is that they have thought this through, and as an 800 lb gorilla that admonishes those that don’t follow standards, it will be interesting to see how the industry reacts.
It may change, and people are already advocating that somewhat.
“…Google Search’s biggest strength, in my opinion, was its perfect simplicity. Punch in some words, and the machine gives you everything the internet has to offer on the subject, with every link neatly cataloged and sorted in order of relevance. Sure, most of us will only ever click the first link it presents – god forbid we venture to the dark recesses of the second page of results – but that was enough. It didn’t need to change; it didn’t need this.
There’s an argument to be made that search AI isn’t for simple inquiries. It’s not useful for telling you the time in Tokyo right now, Google can do that fine already. It’s for the niche interrogations: stuff like ‘best restaurant in Shibuya, Tokyo for a vegan and a lactose intolerant person who doesn’t like tofu’. While existing deep-learning models might struggle a bit, we’re not that far off AIs being able to provide concise and accurate answers to queries like that…”
Guyton’s article (linked above in the citation) is well worth reading in its entirety. It has pictures and everything.
The bottom line on all of this is that we don’t know what the AIs are trained on, we don’t know how they’re going to affect business models for online publishers, and we don’t know if they’re actually going to improve the user experience.
A few days ago I mentioned the normalization of Web 2.0, and yesterday I ended up reading about The New York Times getting around $100 million over a period of 3 years from Google.
“…The deal gives the Times an additional revenue driver as news publishers are bracing for an advertising-market slowdown. The company posted revenue of $2.31 billion last year, up 11% from a year earlier. It also more than offsets the revenue that the Times is losing after Facebook parent Meta Platforms last year told publishers it wouldn’t renew contracts to feature their content in its Facebook News tab. The Wall Street Journal at the time reported that Meta had paid annual fees of just over $20 million to the Times…”
That’s a definite shot in the arm for The New York Times, particularly as the ad revenue model that Web 2.0 delivered falters. Will it lower the paywall on their articles? No idea.