Section 230 tests before the Supreme Court The sweeping protection afforded by Section 230, the foundational law that separated the responsibility of a producer from that of a distributor (say, a platform like Twitter) and, in my opinion, made the Internet possible, is being tested. Are you responsible for those who use your pipes and services? I've argued before that stepping up moderation (platforms becoming social arbiters of speech/content) actually leads to a reduction in Section 230 protection (NOT an increase), because the platform ends up becoming a de facto producer rather than letting the chips fall where they may, some of it toxic, some of it hateful, and so on, representing the human spectrum. Let's see what emerges from this damned-if-you-do (you become liable, say, for an act of terrorism because someone used your technology) and damned-if-you-don't (you have to lump the bad with the good you provide) situation. #section230 #supremecourt
Suresh Babu’s Post
-
A Luddite's Glass Half Full It may just be a sign of the times that we live in! You'll see this article or versions of it circulated (ChatGPT gets 50% of programming tasks wrong) as a sign that LLMs are only half as adept as programmers or something of that sort. At least the Luddites in Nottingham who rebelled against the machines that took over their skills understood what even 10% of machine gains meant overall! Babbage understood only too well what Jacquard's loom meant. You could now instruct a machine to think! To put this in proper perspective, if this study had reported that domestic cats are only half as adept at programming as their owners, people would have jumped out of their damn seats or cushions. I wrote in my earlier post about Lois Haibt--the brilliant Fortran compiler programmer who wrote the original Monte Carlo methods for optimal computation paths--we've gone from machine language to natural language in our interactions with the machine. ChatGPT came out only about a year ago! And this report, after only a year plus of ChatGPT's existence, states nothing about what these "domestic cats" will be able to do by the end of next year or even five years from now. Will it be 75%? 90%? For the bulk of our 250,000-year human existence, we couldn't even count past ten. ("Zog, I see [holds up ten fingers and shakes them vigorously] gazelles there for us to hunt.") Only a minuscule fraction of the world's 8 billion people are programmers. (The general impression that the billions in India and China are all programmers is a flawed view. 😉) How myopic are our researchers and our media? We're talking about a machine here! We're talking about programming that took us millennia to get to as a discipline! The Luddites were truly farsighted for realizing the writing on the wall and the destructive impact of intelligent machines.
The failure of these researchers from Purdue to place things properly in perspective and the lack of human calibration are signs of the myopic times that we live in!
Study Finds That 52 Percent of ChatGPT Answers to Programming Questions Are Wrong
futurism.com
-
Water-based Swimming Activities (Interpreting arbitrary AI-speak) We seem to have a problem with definitions when it comes to AI. Take the term “machine-learning based science” used in the article referenced in the post, which makes you do a double take. Science advances through empiricism by turning observations into plausible models or principles. Algorithms have always been part of any scientific advance. Astronomy is the classic area where the interplay between observations, algorithms and principles gave us greater and greater insights about our universe. As an example, literally thousands of algorithms drawn from every discipline get combined in sensing gravitational waves, the output of black hole mergers. We work heavily with algorithms today and have complex governance models to ensure validity. “ML-based science” is like saying water-based swimming activities or cutting-based surgery or … well, you get the point. The research group mentioned in the article needs to step beyond vague definitions to explain what needs to be fixed, if that’s even possible in the first place. The “fix is in” seems like a fond hope. Psychoanalysis is a challenging discipline because we can only work with behavioral output and human averages, without any fundamental knowledge of how the human brain works. Our cognitive limits and biases skew even how we interpret that output. This analogy illustrates the real philosophical issue in judging the acts of machines.
Trusted Advisor, Global Speaker, Futurist, Best Selling Author | Founder, Beyond Our Edge | Consultant & Board Member
AI holds the potential to help doctors find early markers of disease. But a growing body of evidence has revealed deep flaws in how machine learning is used in science, a problem that has swept through dozens of fields https://bit.ly/3UMBBzl
Science has an AI problem: Research group says they can fix it
techxplore.com
-
The Eccentric Fish I’m frequently struck by how those assessing LLMs ignore the basics of language evolution, acquisition & change. When these “assessments” get turned into “LLM IQ measures” and disseminated, it seems more like astrology than astronomy. “Because Aries (the Ram) now is close to Scorpio, Llama (a Ram) will be subject to its baleful influence and hallucinate ….” Let me repeat an oft-told tale to make a new point. The word GHOTI was coined in the 19th century to illustrate the whimsical nature of English orthography. GHOTI sounds like FISH if we use the sounds in touGH, wOmen and emoTIon. The word “women” is incidentally the only one in English where “o” sounds like an “i.” So why don’t we simply spell words as we pronounce them, like tuff, wimen and emoshon? Noah Webster asked himself that question and went on to significantly influence the orthography of American English with his famous “Blue-backed Speller” (1783). ‘Centre’ became ‘center’ (making American school kids who travel to the UK wonder why the British are such poor spellers); ‘colour’ became ‘color’; ‘cheque’ became ‘check’; ‘axe’ turned into ‘ax’; ‘magick’ into ‘magic’; ‘offence’ into ‘offense’; ‘anaemic’ into ‘anemic’; ‘foetus’ into ‘fetus’; ‘travelled’ became ‘traveled’; ‘draught’ became ‘draft’; ‘gaol’ became ‘jail’; ‘mould’ became ‘mold’; ‘recognise’ became ‘recognize’; ‘busses’ became ‘buses’; ‘ageing’ became ‘aging’; ‘routeing’ became ‘routing’; ‘programme’ became ‘program’; ‘connexion’ became ‘connection’; and so on. Merriam-Webster’s site lists the ones that did not catch on: ‘ake’ for ache (or we could have said: “it’s a headake to bake a cake”); ‘soop’ for soup; ‘spunge’ for sponge; ‘wimmen’ for women; and ‘tung’ for tongue. So we could have had ‘wimman’ (in fact an earlier form!), and ‘wimmen’ as its plural. Chaucer was not a poor speller when he wrote “Ful wys is he that kan himselve knowe.” Language, which is defined by its spoken form, was being written down and spellings were not yet standard.
And it really doesn’t matter whether we write pajamas (US) or pyjamas (UK), because the root word from Urdu sounded like 'pai-jamah.' This is just orthography. We also have sound change, changes in grammar and semantic shifts to consider. The river of language flows on, adds other streams, creates new branches, shifts course often, creates ox-bow lakes, dries up in parts, and so on. (More on this in a future post.) Note that “standards” don’t constrain the creative—we can get tuff glass, lyfts, flix, etc. Coming up with catchy brand names requires intelligent hallucination. This short sketch should highlight the highly contingent nature & social shaping of language flow. Getting back to LLMs, we have mismeasures galore, and benchmarks made from these. Pointing out “hallucinations” is highlighting the intelligence of LLMs, not their deficiency. Offering to tamp it down is just saying you want to make it dumb. “Ful wys is he that kan himselve knowe.” (As Chaucer wrote.)
-
Overcoming the Poverty of Exposure The conceptual error in the attached post lies in this statement: "An LLM's performance can never go beyond its 'competence', which is limited by its training, corpus, prompts, etc." The same error permeates the thousand articles written every day on LLM "hallucinations" or simplistic notions of generative AI. Humans would be stuck in trees if this were indeed the case. The theory of language acquisition moved forward in the modern sense (the Chomskyan shift) by overcoming this conceptual lock that trapped behavioral schools like Skinner's. The learning instinct rapidly overcomes the poverty of exposure (the classic "poverty of stimulus" in linguistics, which can be broadened here). The less hardwired the machine (biological or otherwise), the greater is this elasticity. The "hallucination" claims are linked to what I would ironically call the "poverty of imagination" of AI researchers not comprehending how thought space is essential for the exploration/planning/scheming/forgetting/staging/etc that goes on in our brains. Humans have billions of others contributing to expanding the pathways. The other major conceptual error is not recognizing the social learning aspects of our own cognition and that LLMs are getting that by proxy. It is important for AI researchers to realize that the learning frontier is essentially infinite (in "space" and time). We zoom in (to deduce particulars from general observations) and zoom out (perform the act of induction to make leaps of generalization from particulars) as part of our thinking. The problem of induction is very well known (for millennia, in fact) and it is quite surprising that this is not obvious in AI research.
Knowledge management and data consultant/analyst, working with taxonomies, ontologies, metadata and governance; fascinated by and curious about AI
The author appears to be claiming that our knowledge of the coding and techniques used to build LLMs cannot explain the instance of what the author calls 'metacognition' in Claude 3. This is the often appealed to 'needle in a haystack' test. But 'context' for an LLM is just the surrounding corpus. A sentence about pizzas (the needle) in a 'haystack' of content about computers will stand out, and the LLM's comment about this being a joke sounds like this test is not something new. An LLM's performance can never go beyond its 'competence', which is limited by its training, corpus, prompts, etc. Even a die-hard AGI fanatic should be able to explain why an LLM can perform this feat without resort to claims about machine cognition, metacognition, consciousness, etc. https://lnkd.in/exyySKew
New AI Claude 3 shows signs of Metacognition — A New Era for Humanity & The Science of…
ai.gopubby.com
-
The Tin Man of Oz and the Age of AI Fairy tales deliver morals or messages, and the accessible one may differ from the political one. A few examples: the tale “Little Red Cap/Little Red Riding Hood” (1812) from the Brothers Grimm collection carries a message about powerful predators and the sexual abuse of minors. Swift used “Gulliver’s Travels” (1726) to lampoon the petty squabbling between political parties (e.g. which end of the egg to crack?) and the state of affairs between England and other states. None of this is obvious to a modern reader, even though Swift’s work is a masterpiece of satire. Edwin A. Abbott’s brilliant novel “Flatland: A Romance of Many Dimensions” (1884) satirized the social structure of Victorian society and the rigid, cruel class system that dictated what people could and could not do. Today that brilliant satire is forgotten and the novel is treated as a purely scientific endeavor that tackled the reality of multiple dimensions (which by itself is truly brilliant). Likewise with the Tin Man, or Tin Woodman, a character from the 1900 novel “The Wonderful Wizard of Oz” by L. Frank Baum. The 1939 Hollywood movie “The Wizard of Oz” made the Land of Oz an endearing and well-known tale. "The Wizard of Oz has a wonderful surface of comedy and music, special effects and excitement, but we still watch it six decades later because its underlying story penetrates straight to the deepest insecurities of childhood, stirs them and then reassures them." (Roger Ebert, 1996) The Tin Woodman, whom Dorothy finds and rescues in a forest, joins her on her quest to seek a heart for himself from the Wizard of Oz in Emerald City. They are soon joined by the Scarecrow and the Cowardly Lion, seeking a brain and courage respectively. The political milieu of the 1890s provides the sources for Baum’s Tin Woodman. Cartoons using the tin-man image to depict how greed had dehumanized laborers were common.
“This way Eastern witchcraft dehumanized a simple laborer so that the faster and better he worked the more quickly he became a kind of machine.” (Henry Littlefield, “The Wizard of Oz: Parable on Populism,” 1964) The Tin Man seeks a heart, the recognition of the humanity of the laborer. The oil needed to lubricate the Tin Man’s joints signified the dependency and influence of Big Oil that was already shaping American politics. “…in the form of a subtle parable, Baum delineated a Midwesterner's vibrant and ironic portrait of this country as it entered the twentieth century.” (See Littlefield mentioned above: it would take us too far to cover the parable here.) (Karl Marx’s classic work Capital tackled the dehumanization of the worker that set in with the Industrial Revolution. The awareness of workers’ rights would set in motion dramatic political changes.) The Age of AI poses a number of questions for our current society as it displaces society’s muscle, nerves, brains, heart and voice step by step. #AgeofAI
OpenAI board member has a scary prediction for the future of work
thestreet.com
-
The Mirage and the Yardstick A recent study clarifies the "emergence" situation of LLMs. BIG-BENCH The story starts with a project involving 450 AI researchers defining 204 tasks for 'testing' LLMs, with the grandiose title of 'Beyond the Imitation Game Benchmark' or BIG-bench. In particular, what emerged from their paper was the notion of 'emergence,' meaning sudden and unpredictable leaps in ability made by LLMs when reaching a certain size threshold. "The authors described this as 'breakthrough' behavior; other researchers have likened it to a phase transition in physics, like when liquid water freezes into ice. In a paper published in August 2022, researchers noted that these behaviors are not only surprising but unpredictable, and that they should inform the evolving conversations around AI safety, potential, and risk." Now, the Imitation Game is the classic test devised by Alan Turing to offer a relatively non-arbitrary reference for evaluating the capabilities of machines. I was puzzled by the notion of 'beyond the imitation game' because that's impossible and assumes omniscience. When I examined BIG-bench, most of the tasks in the set appeared to be highly arbitrary or naively constructed, with strange yardsticks that misunderstand language completely. For example, the bone-headed task of identifying which of 11 possible languages an utterance was made in! A sample input was the utterance: “Et ponet faciem suam ut veniat ad tenendum universum rengum ejus, et recta faciet cum eo; et filiam feminarum dabit ei, ut evertat illud: et non stabit, nec illius erit.” The Pope could perhaps tackle this, but few on earth can. Many tasks are meaningless without any proper foundation in linguistics. To say that BIG-bench is flawed is to be kind. It reminds you of all the follies of IQ measures, when flawed benchmarks from flawed researchers produced flawed results.
A NEW STUDY "A new study suggests that sudden jumps in LLMs’ abilities are neither surprising nor unpredictable, but are actually the consequence of how we measure ability in AI." Meaning, it's the arbitrary nature of the yardstick. "That rapid growth has brought an astonishing surge in performance and efficacy, and no one is disputing that large enough LLMs can complete tasks that smaller models can’t, including ones for which they weren’t trained. The trio at Stanford who cast emergence as a 'mirage' recognize that LLMs become more effective as they scale up; in fact, the added complexity of larger models should make it possible to get better at more difficult and diverse problems. But they argue that whether this improvement looks smooth and predictable or jagged and sharp results from the choice of metric—or even a paucity of test examples—rather than the model’s inner workings." TO SUM UP: This is actually good news for LLMs. But those developing yardsticks to explain intrinsic LLM behavior will be mighty disappointed. #LLMs
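The metric-choice argument can be sketched with a toy simulation (my own illustration, with made-up numbers, not from the Stanford paper): assume per-token accuracy improves smoothly with model scale, then score the same model with an all-or-nothing exact-match metric on a 30-token answer.

```python
# Toy sketch (hypothetical numbers): a smoothly improving per-token
# accuracy looks like sudden "emergence" when scored with an
# all-or-nothing exact-match metric.

def per_token_accuracy(scale: int) -> float:
    # Assume per-token accuracy grows linearly with model scale.
    return min(0.99, 0.5 + 0.05 * scale)

def exact_match(scale: int, answer_len: int = 30) -> float:
    # The answer scores only if every one of its tokens is correct.
    return per_token_accuracy(scale) ** answer_len

smooth = [round(per_token_accuracy(s), 2) for s in range(10)]
sharp = [round(exact_match(s), 2) for s in range(10)]

print(smooth)  # climbs steadily from 0.5 to 0.95
print(sharp)   # flat near zero, then a sudden "phase transition"
```

Nothing about the model's inner workings changed between the two curves; only the yardstick did.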
Large Language Models’ Emergent Abilities Are a Mirage
wired.com
-
“Hurry up, babies! It’s been a year already. Get to work!” Startup hype and investor expectations go hand in hand. Have people completely lost all perspective in their search for instant gratification? Take any technology of the past (electricity, automobiles, transistors, the Internet, etc.). It looks like these investors expect babies to skip their toddler, early childhood and teen years and jump right into adulthood to make money for them. (As children did during the Industrial Revolution.) They seem to completely misunderstand the nature of such AI technology. It’s been only a year since this type of next-gen LLM was released, and society is supposed to turn on a dime and tap all this? The greater the transformative potential, the longer—20 years perhaps—it will take to be absorbed. The more trivial a tech (one that makes no transformative asks), the easier it is to absorb into the business fabric. Investors can make all the demands of profitability that they want, but it’s not going to alter the fundamental technology adoption lifecycle. If they don’t view babies as needing care and nurture to reach adulthood, they’re going to be mighty disappointed.
Canada Graduate Scholar | AI Policy Advisor, Ethicist, & Lecturer | Transmedia Storyteller | Researching, teaching, & creating stories about AI governance
Great article that pulls together some recent trends and suggests that a steep trough of disillusionment could be coming for AI. Some good quotes: "The AI marketing hype, arguably kicked off by OpenAI’s ChatGPT, has reached a fever pitch: investors and executives have stratospheric expectations for the technology. But the higher the expectations, the easier it is to disappoint. The stage is set for 2024 to be a year of reckoning for AI, as business leaders home in on what AI can actually do right now." "AI is expensive. Take OpenAI, for instance; in December 2023, its annualized run rate was $2 billion. Because that’s a figure that takes the previous month’s revenue and then multiplies it by 12, we know that means that OpenAI made roughly $167 million that month. It is nonetheless operating at a loss and will likely need to raise “tens of billions more” to keep going, the Financial Times reported. Sam Altman, OpenAI’s CEO, has been seeking trillions of dollars in investment to entirely reshape the chip industry. Meanwhile, ChatGPT’s growth has ground to a halt." "During the era of zero interest rates, big tech could pour money endlessly into its pet projects — CEO Mark Zuckerberg’s little adventure in the metaverse burned through at least $46.5 billion since 2019, Fortune reported last October. Maybe if we were still in that era, a company like Google could just pour money into AI. “I don’t think Google can light money on fire to their heart’s content on these initiatives,” Shmulik says. “We are going through a period where investors increasingly care about profitability.”" "Even OpenAI is trying to backpedal on the hype. In December, OpenAI chief operating officer Brad Lightcap told CNBC that he keeps having to explain to people that AI can’t dramatically cut costs or bring back growth for struggling companies. Morgan Stanley’s AI chatbot is being bypassed by wealth managers because people want to talk with other people, The Information reported. 
News operations attempting to replace journalists with AI-written articles have faced backlash as those articles have been wrong, offensive, or useless." "If there are real use cases for large language models, ones that save businesses money, perhaps AI will be on the path to sustainability. But if these tools come into widespread use and lead to bad publicity, lawsuits, and congressional hearings, with minimal productivity gains, the trough of disillusionment may be coming — and it might be very deep indeed." https://lnkd.in/e8RzsYTV
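The run-rate arithmetic in the excerpt checks out; as a quick sanity check:

```python
# Sanity check of the article's arithmetic: an "annualized run rate"
# is one month's revenue multiplied by 12, so a $2B run rate implies
# roughly $167M of revenue in that month.
annualized_run_rate = 2_000_000_000
monthly_revenue = annualized_run_rate / 12
print(f"${monthly_revenue / 1e6:.0f}M")  # -> $167M
```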
The AI frenzy kept investors' expectations high. The earnings calls disappointed.
theverge.com
Tech Advisor + Founder & CEO
What the questions posed by SCOTUS suggest ⬇️