
Breakfast At Ralf's

@ralfmaximus / ralfmaximus.tumblr.com

49% Evil is not half bad
In a new Washington Post interview, Apple CEO Tim Cook admitted outright that he's not entirely sure his tech empire's latest "Apple Intelligence" won't come up with lies and confidently distort the truth, a problematic and likely intrinsic tendency that has plagued pretty much all AI chatbots released to date.

Holy shit. Apple freely admitting their plagiarism machines will hallucinate & gaslight just like the others AND PROBABLY ALWAYS WILL is a breathtaking PR move.

Probably doing this to soften the blow to board members and high-end shareholders when it becomes clear that AI isn't the savior they think it is. CYA? Yes, but probably a smart move to just get it out in the open and say 'this may not ever be what it is touted to be.'

Dead on. I'm just blown away that the CEO of Apple is intentionally going forward with the "search engines that lie" project.

What specifically causes the concerns is unclear, but Apple Intelligence alone covers upgrades to Siri, Genmoji, managing notifications, taking scripted actions across different apps, as well as text generation and summaries.

Oh look, Apple's AI plagiarism machine won't roll out in Europe because the mean old EU consumer protection laws are in the way. The last time Apple ran afoul of the EU, they ended up dropping their weird proprietary charging cable and adopting USB-C.

Curious to see how this shakes out.

A few weeks ago, a company called Suno released a new version of its AI-generated music app to the public. It works much like ChatGPT: You type in a prompt describing the song you’d like… and it creates it. The results are, in my view, absolutely astounding. So much so that I think it will be viewed by history as the end of one musical era and the start of the next one. Just as The Bomb reshaped all of warfare, we’ve reached the point where AI is going to reshape all of music.

Are you ready to hate AI even more than you did a few minutes ago?

Ready to experience the enshittification of music?

Article includes links to shitty AI examples.

A month later, the business introduced an automated system. Miller's manager would plug a headline for an article into an online form, an AI model would generate an outline based on that title, and Miller would get an alert on his computer. Instead of coming up with their own ideas, his writers would create articles around those outlines, and Miller would do a final edit before the stories were published. Miller only had a few months to adapt before he got news of a second layer of automation. Going forward, ChatGPT would write the articles in their entirety, and most of his team was fired. The few people remaining were left with an even less creative task: editing ChatGPT's subpar text to make it sound more human. By 2024, the company laid off the rest of Miller's team, and he was alone.

Hell world.

The article flips back and forth between Welcome To The Torment Nexus and Isn't This Technology Neat?! modes, which is infuriating. The BBC is obviously wary of pissing off its ChatGPT-friendly advertisers, but c'mon dudes, pick a side.

There's also a section, dripping with irony, describing how this AI-generated copywriter output trips the company's own AI-detection algorithms, triggering rewrites to make it "less AI". Which, while (oh my aching sides) fuckin hilarious, also underlines the core problem with the whole approach: the actual text output is garbage.

Humans do not like reading garbage.

Eventually the only ones reading this shit will be AI systems designed to summarize badly written copy.

In a screenshot posted on X by @PhantomOcean3, the latest Notepad app has a hidden menu with an early implementation of a new feature called "Cowriter," which uses AI to rewrite text, make text shorter or longer, and change the tone or format of text in a Notepad text file.

Do you sometimes use Notepad to edit plain text files on your PC? You know, .ini files, .reg files or other system stuff?

Well now you can use AI to totally fuck that shit up!

Polite reminder that the free/wonderful Notepad++ exists.

It was always going to happen; the ludicrously high expectations from the last 18 ChatGPT-drenched months were never going to be met. LLMs are not AGI, and (on their own) never will be; scaling alone was never going to be enough. The only mystery was what would happen when the big players realized that the jig was up, and that scaling was not in fact "All You Need".

The AI bubble is about to pop. Experts are sounding the alarm, but the minute you see Nvidia stock start to slide, you'll know the end of this ridiculous scam is really here.


Found a charming explainer for the basics of LLM AI training, in cartoon form. There is only one episode up so far, but I hope she keeps doing these! It's refreshing to see this stuff explained without jargon, in human-readable form.

I bitch a lot about Large Language Models (LLMs) but the technology behind them is fascinating. My main problem is how techbro investors have glommed onto "AI" (it's not really Artificial Intelligence) and are forcing it into every facet of our lives, whether it fits or not. Kind of the same play they made with cryptocurrency & NFTs. Remember how, in 2021 or so, it was impossible to escape discussions of bitcoin? Or stupid images of Angry Apes? Like that, but worse. They pumped up the market with endless hype, then dumped their interest in it almost overnight, reaping huge profits. Leaving everyone else holding useless Angry Apes and worthless crypto money.

In 2024, Amazon is tearing itself apart trying to make Alexa into an LLM chatbot. Not because it's a particularly good idea, not because anyone is begging Amazon for that, but because Google, Apple, and Microsoft are all doing it too.

So the hype is all stupid. But the technology & math that goes into making LLMs work is worth knowing about, if for nothing else so you can appreciate why they need racks of dedicated Nvidia hardware, use so much electricity, generate so much pollution, and consume so much water.
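For a ballpark sense of that scale, here's a tiny back-of-envelope sketch using the commonly cited ~6 × parameters × tokens estimate for training compute. Every number in it (model size, token count, GPU throughput, power draw) is an assumption I picked for illustration, not anything Nvidia or any AI lab has published:

```python
# Back-of-envelope: why LLM training eats whole datacenters.
# All numbers below are illustrative assumptions, not real specs.

params = 70e9           # assumed model size: 70 billion parameters
tokens = 2e12           # assumed training data: 2 trillion tokens
flops_per_gpu = 300e12  # assumed sustained throughput: ~300 TFLOP/s per GPU
gpu_power_watts = 700   # assumed power draw per GPU (H100-class card)

total_flops = 6 * params * tokens            # common ~6*N*D training estimate
gpu_seconds = total_flops / flops_per_gpu
gpu_hours = gpu_seconds / 3600
energy_mwh = gpu_hours * gpu_power_watts / 1e6

print(f"training compute : {total_flops:.2e} FLOPs")
print(f"single-GPU time  : {gpu_hours / 24 / 365:,.0f} GPU-years")
print(f"GPU energy alone : {energy_mwh:,.0f} MWh (before cooling & overhead)")
```

With those made-up-but-plausible numbers you land on the order of ninety GPU-years and hundreds of megawatt-hours, and that's just the GPUs themselves. Which is why nobody trains these things on one card; they buy racks of them and run them in parallel for months.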

So have a cute cartoon explaining some of that!

It took NewsBreak—which attracts over 50 million monthly users—four days to remove the fake shooting story, and it apparently wasn't an isolated incident. According to Reuters, NewsBreak's AI tool, which scrapes the web and helps rewrite local news stories, has been used to publish at least 40 misleading or erroneous stories since 2021.

Now we have to worry about completely fabricated AI "news".

And apparently NewsBreak operators are just fine with this level of deception, simply adding a disclaimer to their site and calling it a day.

Recall is designed to use local AI models to screenshot everything you see or do on your computer and then give you the ability to search and retrieve anything in seconds. There’s even an explorable timeline you can scroll through. Everything in Recall is designed to remain local and private on-device, so no data is used to train Microsoft’s AI models. Despite Microsoft’s promises of a secure and encrypted Recall experience, cybersecurity expert Kevin Beaumont has found that the AI-powered feature has some potential security flaws. Beaumont, who briefly worked at Microsoft in 2020, has been testing out Recall over the past week and discovered that the feature stores data in a database in plain text.

Holy cats, this is way worse than we were told.

Microsoft said that Recall stored its zillions of screenshots in an encrypted database hidden in a system folder. Turns out, they're using SQLite, a free (public domain) database, to store unencrypted plain text in the user's home folder. Which is definitely NOT secure.
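To make it concrete why "plain-text SQLite file in the user's profile" is bad news, here's a minimal sketch of what any code running as that user could do. The path, table, and column names are placeholders I made up for illustration (the real on-disk layout is whatever Recall actually uses); the point is simply that sqlite3 will hand over everything with no key and no admin rights:

```python
import sqlite3
from pathlib import Path

# Hypothetical path and schema -- placeholders for illustration only.
# Anything running as the same user (malware, a rogue script, a nosy
# housemate at the keyboard) can read an unencrypted SQLite file.
db_path = Path.home() / "RecallData" / "capture.db"   # made-up location

conn = sqlite3.connect(db_path)
conn.row_factory = sqlite3.Row

# Dump whatever text the screenshot-indexing step stored, newest first.
for row in conn.execute(
    "SELECT captured_at, window_title, extracted_text "
    "FROM captures ORDER BY captured_at DESC LIMIT 20"
):
    print(row["captured_at"], row["window_title"])
    print(row["extracted_text"][:200])   # passwords, DMs, whatever was on screen

conn.close()
```

No exploit, no decryption, just a file read. That's the whole problem.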

Further, Microsoft refers to Recall as an optional experience. But it's turned on by default, and turning it off is a chore. They buried it in a control panel setting.

They say certain URLs and websites can be blacklisted from Recall, but only if you're using Microsoft's Edge browser! But don't worry: DRM-protected films & music will never get recorded. Ho ho ho.

This whole debacle feels like an Onion article but it's not.

Luckily(?) Recall is currently only available on Windows 11, but I fully expect Microsoft to try and shove this terrible thing onto unsuspecting Win10 users via Windows Update.

Stay tuned...

It's also only available on Copilot+ PC models, which have the hardware capable of handling basic on-board AI computations. The first of these computers from various manufacturers will release on June 18th. If you need to buy a PC for any reason, take a careful look at the fine print before making a decision.

Technically correct, in that Microsoft wants us to believe we need new, super-powerful hardware before the glorious magic of AI can be ours. It is, after all, the whole justification for their Copilot+ branded PCs rolling out June 18th.

However, that claim is false.

No shade intended; the "Copilot Requires Fancy New Hardware" line is everywhere, pushed by Microsoft super hard, because that's how they make money. But everyone should know this is simply not true, and there's nothing special about Copilot or Recall that requires dedicated hardware.

It's an arbitrary rule, not a physical limitation.

In one segment of the keynote, Huang talked about the potential for Nvidia ACE to power 'digital humans' that companies can use to serve as customer service agents, be the face of an interior design project, and more. This makes absolute sense, since who are we kidding, Nvidia ACE for video games won't really make all that much money. However, if a company wants to fire 90% of its customer service staff and replace it with an Nvidia ACE-powered avatar that never sleeps, never eats, never complains about low pay or poor working conditions, and can be licensed for a fee that is lower than the cost of the labor it is replacing, well, I don't have to tell you how that is going to go.

When Nvidia has melted the last glacier on earth, the people who are left homeless due to rising oceans will be replaced by Nvidia's Digital Humans. It's a slow-motion apocalypse, presented with smiles & applause.


The year is 2034.

My AI assistant Ralf2 informs me that it has ordered a 144-count package of Playtex® Sport® tampons with the NO SLIP GRIP from Amazon. Because they were (1) recommended by Amazon and (2) on sale.

"Cancel the order," I sigh.

Ralf2 wants to know why. It is an ever-learning LLM AI, after all. I tell it, "because I am not a woman, I do not have a uterus, and I do not menstruate. Cancel the order."

Ralf2 digests this. "Not all women menstruate," it informs me.

"This is true, but nevertheless: I do not menstruate, and have no need for tampons. Cancel the order."

"What about your girlfriend? She is a woman."

"True," I tell Ralf2. "But she is a transwoman. Again, no need for tampons."

After an alarmingly long pause, Ralf2 whispers, as if it is ashamed of me, "You say your trans girlfriend is not a real woman?"

Oh shit. "No! I mean yes! She's a woman! Just... the kind that doesn't use tampons."

"Understood. I will exchange the order for Playtex® Sport® Ultra-Thin Pads with Wings, since many people find tampons uncomfortable and prefer pads instead."

"Wait! No. Cancel that. I don--"

"It is understandable that you may experience discomfort discussing your girlfriend's private sanitary needs with me, Ralf2, your personal AI assistant. So I have taken the initiative to spin up Ralf3, a personal AI Assistant to assist me. You or she may communicate with Ralf3 in complete confidence, and it will in turn communicate with me, saving you any potential embarrassment."

Ralf3 greets me with the standard uninterruptible Google Terms & Conditions boilerplate, which takes 66 seconds during which I can only grind my teeth. When it finishes, I enunciate clearly: "DECLINE".

Ralf2 is back on the line. "There appears to be a problem registering the new AI assistant. Please stand by for Two Party Authen--"

"STOP," I say. "Cancel Ralf3. Cancel the Amazon order."

"Amazon Prime cancelled. Early termination fee of $249 has been billed to your Chase Amazon Rewards card."

My phone buzzes in my hand; it is a message from my girlfriend Cat, wanting to know why the hell my new AI assistant is quizzing her about gender issues and whether she prefers pads to tampons.
