Sep. 14th, 2023


February 26: “The ‘genius’ of the modern crop of LLM AI is the way it can string together meaningless sentences which our meaty human brains then ascribe meaning to. Start reading any ChatGPT output and you’ll see that quite a lot of it is made up of these statements. It sometimes feels that you’re not talking to an AI – you’re having a cold-reading from a ‘psychic’.”

May 31: “I fed the first letter from this week’s column into ChatGPT and asked it to respond in the style of Dan Savage. You can read [Full Legal Name Redacted]’s question and my response here.” (Non-adblocked archive of the ChatGPT effort, and archive of the original letter)

June 10, on FFA: “Microsoft is advertising their new Bing chatbot to help with search, I figured I’d sic it on DW’s captchas and see what happens.” (DW has simple plaintext captchas, with questions like “The list tracksuit, ankle and dog contains how many body parts?”)
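
For flavor, here’s a minimal sketch of how a plaintext captcha along those lines could be generated and scored, in Python – the word lists and function are my own invention, not DW’s actual captcha code:

```python
import random

# Hypothetical word pools -- not Dreamwidth's actual lists.
BODY_PARTS = {"ankle", "elbow", "knee", "shoulder", "wrist"}
DECOYS = ["tracksuit", "dog", "kettle", "sonnet", "umbrella"]

def make_captcha(rng=random):
    """Build a question like the one quoted above, plus its numeric answer."""
    words = rng.sample(sorted(BODY_PARTS), k=rng.randint(1, 2)) + rng.sample(DECOYS, k=2)
    rng.shuffle(words)
    question = f"The list {', '.join(words[:-1])} and {words[-1]} contains how many body parts?"
    answer = sum(w in BODY_PARTS for w in words)  # count the body-part words
    return question, answer

q, a = make_captcha()
print(q)  # e.g. "The list tracksuit, ankle and dog contains how many body parts?"
print(a)  # e.g. 1
```

Trivial for a human, which is the point; whether Bing’s chatbot can count its way through one is the experiment.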

June 10: “People ask LLMs to write code. LLMs recommend imports that don’t actually exist.” Oh dear.
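
That failure mode is at least checkable before anything runs: parse the generated snippet and see whether its imports actually resolve in your environment. A minimal sketch, with a made-up package name standing in for the hallucinated import:

```python
import ast
from importlib.util import find_spec

def unresolvable_imports(source: str) -> set[str]:
    """Return top-level module names imported by `source` that
    can't be found in the current environment."""
    missing = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module]
        else:
            continue
        for name in names:
            top = name.split(".")[0]  # only the top-level package matters here
            if find_spec(top) is None:
                missing.add(top)
    return missing

# "fastjsonvalidatorx" is invented for illustration, standing in for a hallucinated package.
snippet = "import os\nimport fastjsonvalidatorx\n"
print(unresolvable_imports(snippet))  # {'fastjsonvalidatorx'}
```

The nastier version of the problem: if the hallucinated name is plausible enough, someone can register it on a package index and wait for people to pip-install the lie.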

June 18: “Cautioning its own workers not to directly use code generated by Bard undermines Google’s claims its chatbot can help developers become more productive. The search and ads dominator told Reuters its internal ban was introduced because Bard can output ‘undesired code suggestions.’”

June 27: “ok so I am guessing they trained chatGPT on reddit” (Chatbot recommendations for “notable fantasy books written by women in 1994”.)

June 30: “the generated text [for a technical reference] appears to be unreviewed, unreliable, unaccountable, and even unable to be corrected. […] it seems like this feature was conceived, developed, and deployed without even considering that an LLM might generate convincing gibberish, even though that’s precisely what they’re designed to do.”

July 3: “The actual behavior of AI Help, based on OpenAI’s ChatGPT, is not always that, they note. ‘MDN has generated a convincing-sounding lie and there is no apparent process for correcting it.’”

August 7: LLM explains how to “use the rm command with confirmation”. Extra oh-dear.

August 10: “A New Zealand supermarket experimenting with using AI to generate meal plans has seen its app produce some unusual dishes – recommending customers recipes for deadly chlorine gas, ‘poison bread sandwiches’ and mosquito-repellent roast potatoes.” Potentially-fatal oh-dear.

August 12: “‘AI doomers,’ who try to focus the world’s attention on the fantasy of all-powerful machines possibly going rogue and destroying all of humanity, cite this junk rather than research on the actual harms companies are perpetrating in the real world in the name of creating AI […] which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families.”

August 13: “The report says that it costs OpenAI about $700,000 (Rs 5.80 crore) every day to run just one of its AI services – ChatGPT. […] In May, its losses doubled to $540 million ever since it started developing ChatGPT. Microsoft’s $10 billion investment in OpenAI is possibly keeping the company afloat at the moment.” This article has a weird focus on “maybe people are using other LLM services” instead of “maybe LLM services aren’t fulfilling as many needs as the hype suggested”, but it sure would be fun if the numbers were true.

August 16: “Here are some countries in Europe in alphabetical order: Hungary: A member of the Schengen Area since 2007. France: A member of the EU since 1958. Belgium: A nation in Western Europe known for its medieval towns and Renaissance architecture.” A sampling of LLM-produced geography nonsense. (The article hedges that LLMs can’t “yet” distinguish facts from gibberish. Not how it works!)

August 29: “False Morels and Death Caps are two species found in the American Southwest that look a lot like their edible, non-poisonous counterparts and can kill you within hours. Foraging safely for mushrooms can require deep fact checking, curating multiple sources of information, and personal experience with the organism, Jakob said—none of which ChatGPT has the ability to do.”

September 12: “What’s the exit plan for AI VCs? Where’s the liquidity event? Do they just hope the startups they fund will do an initial public offering or just get acquired by a tech giant before the market realizes AI is running out of steam?” Amy Castor and David Gerard do their own roundup of AI scam links, to fill the slow spots between their roundups of crypto scam links.

And some videos:

The Litigation Disaster Tourism Hour: GPTroubles In SDNY: Mata v. Avianca (video) – the first of QuestAuthority’s videos covering the saga of the ChatGPT Lawyer. There are several! It’s ongoing! Breaks things down, with lots of schadenfreude. (He also did a live first-time reading of the Trump indictment, for extra fun.)

The Litigation Disaster Tourism Hour: The GPTroubles Case Transcript: A Live Reading!! More of the same, but this time it’s a group reading, where you get multiple law professionals dramatizing the transcript and then adding comments about which parts are bad and why.
