
Breakfast At Ralf's

@ralfmaximus / ralfmaximus.tumblr.com

49% Evil is not half bad
AI search platform Perplexity is launching a new feature called Pages that will generate a customizable webpage based on user prompts. The new feature feels like a one-stop shop for making a school report since Perplexity does the research and writing for you. Pages taps Perplexity’s AI search models to find information and then creates what I can loosely call a research presentation that can be published and shared with others. In a blog post, Perplexity says it designed Pages to help educators, researchers, and “hobbyists” share their knowledge.

Oh look, new plagiarism machine just dropped.

Users type a prompt into a box and the LLM generates searchable, Google-optimized webpages ready to post. Now anybody can be a published expert on anything!

What could possibly go wrong?

Oh, and update: it's apparently a scam or at least very shady, in that it's scraping paywalled content without consent. Which is making paywalled sites very angry.

McDonald’s is ditching its drive-through AI ordering system after too many customers wound up with hilarious, wonky orders from the artificial intelligence tech. The fast food giant, which had been testing voice-automated ordering systems at about 100 restaurant drive-throughs since 2021, is now booting it from the menu. It seems to be because AI, at least when it comes to taking orders as people shout them from their car windows, turns out not to be a very good listener.

Gee, who coulda predicted this?

LLMs making stupid mistakes as they do their word-prediction magic on busy consumers yelling at drive-thru menus. Remember when we were threatened with AI taking away everyone's jobs if we raised the minimum wage? Huh. Guess it's not ready yet.

Also, this gem buried in the article:

Wendy’s has begun using AI to adapt their menu, too, by implementing AI menu changes and suggestive selling based on things like the weather. Not only will it suggest items based on the weather, but the AI-driven menu could also change prices of more in-demand items—like boosting the price of ice cream on a hot day.

Drinks too, I imagine. Fuck you, Wendy's. Just fuck you.

We are moments away from an AI scanning your vehicle and deciding what a Baconator costs based on your credit report.
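The weather-based surge logic being described could be as trivial as this toy sketch (every name and number here is made up for illustration; this reflects nothing about Wendy's actual system):

```python
# Toy sketch of weather-driven "dynamic pricing" -- purely illustrative.
def surge_price(base_price: float, temp_f: float, in_demand: bool) -> float:
    """Bump the price of in-demand items when it's hot out."""
    if in_demand and temp_f >= 85:
        return round(base_price * 1.25, 2)  # hypothetical 25% "hot day" markup
    return base_price

# Ice cream gets pricier on a 95°F day; a burger doesn't.
print(surge_price(2.00, 95, in_demand=True))   # 2.5
print(surge_price(5.00, 95, in_demand=False))  # 5.0
```

That's the whole "innovation": a conditional and a multiplier, pointed at your wallet.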

Source: Fast Company
ralfmaximus

Preeeeeety sure it's too late.

Sure, they're giving you a ridiculous (12-step!) Opt-Out feature now, but LLM crawlers have been scraping Facebook (and Tumblr, and Twitter, and every goddamn thing) for years. So chances are you're already in somebody's LLM.

A new Meta Opt-Out only means that Meta won't use your data moving forward, assuming you believe what they say. But all those other LLM companies will just keep doing what they've always done and keep scraping websites regardless of rules or laws.

Yeah.

Barns, escaped horses, gates, you know the rest.

ralfmaximus
In a new Washington Post interview, Apple CEO Tim Cook admitted outright that he's not entirely sure his tech empire's latest "Apple Intelligence" won't come up with lies and confidently distort the truth, a problematic and likely intrinsic tendency that has plagued pretty much all AI chatbots released to date.

Holy shit. Apple freely admitting their plagiarism machines will hallucinate & gaslight just like the others AND PROBABLY ALWAYS WILL is a breathtaking PR move.

Probably doing this to soften the blow to board members and high-end shareholders when they show that AI isn't the savior they think it is. CYA? Yes, but probably a smart move to just get it in the open and say 'this may not ever be what it is touted to be.'

Dead on. I'm just blown away that the CEO of Apple is intentionally going forward with the "search engines that lie" project.

What specifically causes the concerns is unclear, but Apple Intelligence alone covers upgrades to Siri, Genmoji, managing notifications, taking scripted actions across different apps, as well as text generation and summaries.

Oh look, Apple's AI plagiarism machine won't roll out in Europe because the mean old EU consumer protection laws are in the way. The last time Apple ran afoul of the EU, they ended up dropping their weird proprietary charging cable and adopting USB-C.

Curious to see how this shakes out.


A couple years ago I was on a road trip and at one point we drove by what was obviously an overgrown golf course. Wild uncut grasses growing out of the fairway, weeds sprouting from cracks in the crumbling parking lot, a snack shack with outdated signage left to moulder and decay.

And as a lifelong golf hater my first thought was “LMAO get wrecked,” followed by “Actually it’s kind of weird they totally abandoned it. I’ve never seen an abandoned golf course, what’s the deal with that.”

So I looked into it.

  1. The golf course was on Enoch Cree Nation, west of Edmonton in Alberta.
  2. It was closed in 2014 because unexploded WWII munitions were found on the golf course.
  3. Unexploded WWII munitions were found on the golf course because during WWII the Canadian government ran bomb exercises on Enoch Cree Nation and dropped up to 100,000 live rounds on the nation.
  4. The golf course was built because residents were told that only harmless practice rounds were used during the exercises, and the Department of National Defence declared the land safe for use.
  5. But then in 2011 an independent contractor found evidence of live, heavy-action explosives on the golf course.

In 2020 the nation and the federal government agreed to a settlement of $91 million for loss of income of the golf course and for land cleanup. Not a day goes by I don’t think about this.

ralfmaximus

New "REALLY Fucking Hate Golf Courses" level of discourse dropped and I dunno how to feel about it. Aside from, y'know, hatred.

A few weeks ago, a company called Suno released a new version of its AI-generated music app to the public. It works much like ChatGPT: You type in a prompt describing the song you’d like… and it creates it. The results are, in my view, absolutely astounding. So much so that I think it will be viewed by history as the end of one musical era and the start of the next one. Just as The Bomb reshaped all of warfare, we’ve reached the point where AI is going to reshape all of music.

Are you ready to hate AI even more than you did a few minutes ago?

Ready to experience the enshittification of music?

Article includes links to shitty AI examples.

A month later, the business introduced an automated system. Miller's manager would plug a headline for an article into an online form, an AI model would generate an outline based on that title, and Miller would get an alert on his computer. Instead of coming up with their own ideas, his writers would create articles around those outlines, and Miller would do a final edit before the stories were published. Miller only had a few months to adapt before he got news of a second layer of automation. Going forward, ChatGPT would write the articles in their entirety, and most of his team was fired. The few people remaining were left with an even less creative task: editing ChatGPT's subpar text to make it sound more human. By 2024, the company laid off the rest of Miller's team, and he was alone.

Hell world.

The article flips back and forth between Welcome To The Torment Nexus and Isn't This Technology Neat?! modes, which is infuriating. The BBC is obviously wary of pissing off its ChatGPT-friendly advertisers, but c'mon dudes, pick a side.

There's also a section dripping with irony, describing how this AI-generated copywriter output trips the company's own AI-detection algorithms, triggering rewrites to make it "less AI". Which, while (oh my aching sides) fuckin' hilarious, also underlines the core problem with the whole approach: the actual text output is garbage.

Humans do not like reading garbage.

Eventually the only ones reading this shit will be AI systems designed to summarize badly written copywritten text.

In a screenshot posted on X by @PhantomOcean3, the latest Notepad app has a hidden menu with an early implementation of a new feature called "Cowriter," which uses AI to rewrite text, make text shorter or longer, and change the tone or format of text in a Notepad text file.

Do you sometimes use Notepad to edit plain text files on your PC? You know, .ini files, .reg files or other system stuff?

Well now you can use AI to totally fuck that shit up!

Polite reminder that the free/wonderful Notepad++ exists.

It was always going to happen; the ludicrously high expectations from the last 18 ChatGPT-drenched months were never going to be met. LLMs are not AGI, and (on their own) never will be; scaling alone was never going to be enough. The only mystery was what would happen when the big players realized that the jig was up, and that scaling was not in fact “All You Need”.

The AI bubble is about to pop. Experts are sounding the alarm, but the minute you see Nvidia stock start to slide, you'll know the end of this ridiculous scam is really here.

It took NewsBreak—which attracts over 50 million monthly users—four days to remove the fake shooting story, and it apparently wasn't an isolated incident. According to Reuters, NewsBreak's AI tool, which scrapes the web and helps rewrite local news stories, has been used to publish at least 40 misleading or erroneous stories since 2021.

Now we have to worry about completely fabricated AI "news".

And apparently NewsBreak operators are just fine with this level of deception, simply adding a disclaimer to their site and calling it a day.

ralfmaximus
Recall is designed to use local AI models to screenshot everything you see or do on your computer and then give you the ability to search and retrieve anything in seconds. There’s even an explorable timeline you can scroll through. Everything in Recall is designed to remain local and private on-device, so no data is used to train Microsoft’s AI models. Despite Microsoft’s promises of a secure and encrypted Recall experience, cybersecurity expert Kevin Beaumont has found that the AI-powered feature has some potential security flaws. Beaumont, who briefly worked at Microsoft in 2020, has been testing out Recall over the past week and discovered that the feature stores data in a database in plain text.

Holy cats, this is way worse than we were told.

Microsoft said that Recall stored its zillions of screenshots in an encrypted database hidden in a system folder. Turns out, they're using SQLite, a free (public domain) database, to store unencrypted plain text in the user's home folder. Which is definitely NOT secure.
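If you want to see why "plain text SQLite in the home folder" matters: anything written to an unencrypted SQLite file can be read back by any process that can open the file, no database driver or password required. A quick demo (the table and file names here are made up, not Recall's actual schema):

```python
import os
import sqlite3
import tempfile

# Create a throwaway SQLite database -- the same storage format Recall
# reportedly uses -- and write some "captured" text into it.
db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE screenshots (ocr_text TEXT)")
con.execute("INSERT INTO screenshots VALUES ('my secret password hunter2')")
con.commit()
con.close()

# Now read the raw file bytes: no SQLite library, no decryption step.
raw = open(db_path, "rb").read()
print(b"hunter2" in raw)  # True: the text sits unencrypted on disk
```

Any malware (or nosy housemate) with read access to your home folder gets the same result.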

Further, Microsoft refers to Recall as an optional experience. But it's turned on by default, and turning it off is a chore. They buried it in a control panel setting.

They say certain URLs and websites can be blacklisted from Recall, but only if you're using Microsoft's Edge browser! But don't worry: DRM-protected films & music will never get recorded. Ho ho ho.

This whole debacle feels like an Onion article but it's not.

Luckily(?) Recall is currently only available on Windows 11, but I fully expect Microsoft to try and shove this terrible thing onto unsuspecting Win10 users via Update.

Stay tuned...

It's also only available on Copilot+ PC models, which have the hardware capable of handling basic on-board AI computations. The first of these computers from various manufacturers will release on June 18th. If you need to buy a PC for any reason, take a careful look at the fine print before making a decision.

Technically correct, in that Microsoft wants us to believe we need new super powerful hardware before the glorious magic of AI can be ours. It is, after all, the whole justification for their Copilot+ branded PCs rolling out June 18th.

However, that claim is false.

No shade intended; the "Copilot Requires Fancy New Hardware" line is everywhere, pushed by Microsoft super hard, because that's how they make money. But everyone should know this is simply not true, and there's nothing special about Copilot or Recall that requires dedicated hardware.

It's an arbitrary rule, not a physical limitation.
