Section 230 tests before the Supreme Court

The remarkable protection afforded by Section 230, the foundational law that separated the responsibility of a publisher from that of a distributor (say, a platform like Twitter) and, in my opinion, made the Internet possible, is being tested. Are you responsible for those who use your pipes and services? I've argued before that stepping up moderation (platforms becoming social arbiters of speech and content) actually leads to a reduction in Section 230 protection (NOT an increase), because the platform ends up becoming a de facto publisher rather than letting the chips fall where they may: some of it toxic, some of it hateful, and so on, representing the full human spectrum. Let's see what emerges from this situation of damned if you do (you become liable, say, for an act of terrorism because someone used your technology) and damned if you don't (you have to take the bad along with the goods you provide). #section230 #supremecourt
Suresh Babu’s Post
-
The Eccentric Fish

I'm frequently struck by how those assessing LLMs ignore the basics of language evolution, acquisition and change. When these "assessments" get turned into "LLM IQ measures" and disseminated, it seems more like astrology than astronomy. "Because Aries (the Ram) is now close to Scorpio, Llama (a Ram) will be subject to its baleful influence and hallucinate ..."

Let me repeat an oft-told tale to make a new point. The word GHOTI was coined in the 19th century to illustrate the whimsical nature of English orthography. GHOTI would sound like FISH if we used the sounds in touGH, wOmen and emoTIon. The word "women" is, incidentally, the only one in English where "o" sounds like an "i." So why don't we simply spell words as we pronounce them: tuff, wimen and emoshon? Noah Webster asked himself that question and went on to significantly influence the orthography of American English with his famous "Blue-backed Speller" (1783). 'Centre' became 'center' (making American school kids who travel to the UK wonder why the British are such poor spellers); 'colour' became 'color'; 'cheque' became 'check'; 'axe' turned into 'ax'; 'magick' into 'magic'; 'offence' into 'offense'; 'anaemic' into 'anemic'; 'foetus' into 'fetus'; 'travelled' became 'traveled'; 'draught' became 'draft'; 'gaol' became 'jail'; 'mould' became 'mold'; 'recognise' became 'recognize'; 'busses' became 'buses'; 'ageing' became 'aging'; 'routeing' became 'routing'; 'programme' became 'program'; 'connexion' became 'connection'; and so on.

Merriam-Webster's site lists the ones that did not catch on: 'ake' for ache (or we could have said: "it's a headake to bake a cake"); 'soop' for soup; 'spunge' for sponge; 'wimmen' for women; and 'tung' for tongue. So we could have had 'wimman' (in fact the earlier form!) and 'wimmen' as its plural. Chaucer was not a poor speller when he wrote "Ful wys is he that kan himselve knowe." Language, which is defined by its spoken form, was being written down, and spellings were not yet standard.
And it really doesn't matter whether we write pajamas (US) or pyjamas (UK), because the root word from Urdu sounded like 'pai-jamah.' This is just orthography. We also have sound change, changes in grammar and semantic shifts to consider. The river of language flows on, adds other streams, creates new branches, shifts course often, creates ox-bow lakes, dries up in parts, and so on. (More on this in a future post.) Note that "standards" don't constrain the creative: we can get tuff glass, lyfts, flix, etc. Coming up with catchy brand names requires intelligent hallucination.

This short sketch should highlight the highly contingent nature and social shaping of language's flow. Getting back to LLMs, we have mismeasures galore, and benchmarks made from them. Pointing out "hallucinations" is highlighting the intelligence of LLMs, not their deficiency. Offering to tamp it down is just saying you want to make them dumb. "Ful wys is he that kan himselve knowe." (As Chaucer wrote.)
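The GHOTI wordplay above can be rendered as a toy substitution. This is a minimal sketch, not a general respelling scheme: the mapping table contains only the three borrowed spellings named in the post.

```python
# Toy rendering of the GHOTI joke: apply the borrowed letter-to-sound
# mappings (GH as in "touGH", O as in "wOmen", TI as in "emoTIon").
# The mapping table is illustrative only, not a real phonetic model.

MAPPINGS = {"gh": "f", "o": "i", "ti": "sh"}

def pronounce(word: str) -> str:
    """Apply each whimsical spelling-to-sound substitution in turn."""
    out = word.lower()
    for spelling, sound in MAPPINGS.items():
        out = out.replace(spelling, sound)
    return out

print(pronounce("GHOTI"))  # -> fish
```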
-
Overcoming the Poverty of Exposure

The conceptual error in the attached post lies in this statement: "An LLM's performance can never go beyond its 'competence', which is limited by its training, corpus, prompts, etc." The same error permeates the thousand articles written every day on LLM "hallucinations" or on simplistic notions of generative AI. Humans would still be stuck in trees if this were indeed the case.

The theory of language acquisition moved forward in the modern sense (the Chomskyan shift) by overcoming this conceptual lock that trapped behavioral schools like Skinner's. The learning instinct rapidly overcomes the poverty of exposure (the classic "poverty of stimulus" in linguistics, broadened here). The less hardwired the machine, biological or otherwise, the greater this elasticity. The "hallucination" claims are linked to what I would ironically call the "poverty of imagination" of AI researchers who do not comprehend how essential thought space is for the exploration, planning, scheming, forgetting, staging, etc. that goes on in our brains. Humans have billions of others contributing to expanding the pathways. The other major conceptual error is not recognizing the social-learning aspects of our own cognition, and that LLMs are getting those by proxy.

It is important for AI researchers to realize that the learning frontier is essentially infinite (in "space" and time). We zoom in (deduce particulars from general observations) and zoom out (perform induction to make leaps of generalization from particulars) as part of our thinking. The problem of induction has been well known for millennia, and it is quite surprising that this is not obvious in AI research.
The author appears to be claiming that our knowledge of the coding and techniques used to build LLMs cannot explain the instance of what the author calls 'metacognition' in Claude 3. This is the often-invoked 'needle in a haystack' test. But 'context' for an LLM is just the surrounding corpus. A sentence about pizzas (the needle) in a 'haystack' of content about computers will stand out, and the LLM's comment about this being a joke suggests the test is nothing new. An LLM's performance can never go beyond its 'competence', which is limited by its training, corpus, prompts, etc. Even a die-hard AGI fanatic should be able to explain why an LLM can perform this feat without resorting to claims about machine cognition, metacognition, consciousness, etc. https://lnkd.in/exyySKew
New AI Claude 3 shows signs of Metacognition — A New Era for Humanity & The Science of…
ai.gopubby.com
-
The Tin Man of Oz and the Age of AI

Fairy tales deliver morals or messages, and the accessible one may differ from the political one. A few examples. The tale "Little Red Cap/Little Red Riding Hood" (1812) from the Brothers Grimm collection carries a message about powerful predators and the sexual abuse of minors. Swift used "Gulliver's Travels" (1726) to lampoon the petty squabbling between political parties (e.g., which end of the egg to crack?) and the state of affairs between England and other states; none of this is obvious to a modern reader, even though Swift's work is a masterpiece of satire. Edwin A. Abbott's brilliant novel "Flatland: A Romance of Many Dimensions" (1884) satirized the social structure of Victorian society and the rigid, cruel class system that dictated what people could and could not do. Today that satire is forgotten and the novel is treated purely as a scientific endeavor that tackled the reality of multiple dimensions (which by itself is truly brilliant).

Likewise the Tin Man, or Tin Woodman, a character from the 1900 novel "The Wonderful Wizard of Oz" by L. Frank Baum. The 1939 Hollywood movie "The Wizard of Oz" made the Land of Oz an endearing and widely known tale. "The Wizard of Oz has a wonderful surface of comedy and music, special effects and excitement, but we still watch it six decades later because its underlying story penetrates straight to the deepest insecurities of childhood, stirs them and then reassures them." (Roger Ebert, 1996) The Tin Woodman, whom Dorothy finds and rescues in a forest, joins her on her quest to seek a heart for himself from the Wizard of Oz in the Emerald City. They are soon joined by the Scarecrow and the Cowardly Lion, seeking a brain and courage respectively. The political milieu of the 1890s provides the sources for Baum's Tin Woodman. Cartoons using the tin-man image to depict how greed had dehumanized laborers were common.

"This way Eastern witchcraft dehumanized a simple laborer so that the faster and better he worked the more quickly he became a kind of machine." (Henry Littlefield, "The Wizard of Oz: Parable on Populism," 1964) The Tin Man seeks a heart: the recognition of the humanity of the laborer. The oil needed to lubricate the Tin Man's joints signified the dependency on and influence of Big Oil, which was already shaping American politics. "...in the form of a subtle parable, Baum delineated a Midwesterner's vibrant and ironic portrait of this country as it entered the twentieth century." (See Littlefield, mentioned above; it would take us too far afield to cover the parable here.) (Karl Marx's classic work Capital tackled the dehumanization of the worker that set in with the Industrial Revolution. The awareness of workers' rights would set in motion dramatic political changes.)

The Age of AI poses a number of questions for our current society as it displaces society's muscle, nerves, brains, heart and voice step by step. #AgeofAI
OpenAI board member has a scary prediction for the future of work
thestreet.com
-
The Mirage and the Yardstick

A recent study clarifies the "emergence" situation for LLMs.

BIG-BENCH. The story starts with a project involving 450 AI researchers defining 204 tasks for 'testing' LLMs, with the grandiose title of 'Beyond the Imitation Game Benchmark,' or BIG-bench. In particular, what emerged from their paper was the notion of 'emergence': sudden and unpredictable leaps in ability made by LLMs on reaching a certain size threshold. "The authors described this as 'breakthrough' behavior; other researchers have likened it to a phase transition in physics, like when liquid water freezes into ice. In a paper published in August 2022, researchers noted that these behaviors are not only surprising but unpredictable, and that they should inform the evolving conversations around AI safety, potential, and risk."

Now, the Imitation Game is the classic test devised by Alan Turing to offer a relatively non-arbitrary reference for evaluating the capabilities of machines. I was puzzled by the notion of going 'beyond the imitation game,' because that is impossible and assumes omniscience. When I examined BIG-bench, most of the tasks in the set appeared to be highly arbitrary, or naively constructed with strange yardsticks that misunderstand language completely. Take, for example, the bone-headed task of identifying which of 11 possible languages an utterance was made in. A sample input was the utterance: "Et ponet faciem suam ut veniat ad tenendum universum rengum ejus, et recta faciet cum eo; et filiam feminarum dabit ei, ut evertat illud: et non stabit, nec illius erit." The Pope could perhaps tackle this, but few on earth can. Many tasks are meaningless without any proper foundation in linguistics. To say that BIG-bench is flawed is to be kind. It recalls all the follies of IQ measures, when flawed benchmarks from flawed researchers produced flawed results.
A NEW STUDY. "A new study suggests that sudden jumps in LLMs’ abilities are neither surprising nor unpredictable, but are actually the consequence of how we measure ability in AI." Meaning, it's the arbitrary nature of the yardstick. "That rapid growth has brought an astonishing surge in performance and efficacy, and no one is disputing that large enough LLMs can complete tasks that smaller models can’t, including ones for which they weren’t trained. The trio at Stanford who cast emergence as a 'mirage' recognize that LLMs become more effective as they scale up; in fact, the added complexity of larger models should make it possible to get better at more difficult and diverse problems. But they argue that whether this improvement looks smooth and predictable or jagged and sharp results from the choice of metric—or even a paucity of test examples—rather than the model’s inner workings."

TO SUM UP: This is actually good news for LLMs. But those developing yardsticks to explain intrinsic LLM behavior will be mighty disappointed. #LLMs
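The metric argument can be illustrated with a hypothetical sketch (the numbers below are invented for illustration, not taken from the Stanford paper): suppose per-token accuracy improves smoothly with model scale, but the benchmark scores with all-or-nothing exact match over a 10-token answer. The same smooth curve then looks like a sudden "emergent" jump.

```python
# Hypothetical illustration of metric-induced "emergence" (toy numbers):
# per-token accuracy rises smoothly with scale, but exact match over a
# whole answer is p ** length, which hugs zero and then appears to leap
# once p gets high enough. The yardstick, not the model, creates the jump.

def per_token_accuracy(scale: int) -> float:
    """Assumed smooth, gradual improvement with model scale (toy ramp)."""
    return min(0.99, 0.5 + 0.05 * scale)

def exact_match(scale: int, answer_len: int = 10) -> float:
    """All-or-nothing metric: every one of answer_len tokens must be right."""
    return per_token_accuracy(scale) ** answer_len

for scale in range(11):
    print(f"scale {scale:2d}: per-token {per_token_accuracy(scale):.2f}, "
          f"exact-match {exact_match(scale):.3f}")
```

In this toy setup, per-token accuracy climbs gently from 0.50 to 0.99, while exact match crawls from about 0.001 to about 0.9, with most of the gain packed into the last few scale steps: the "jagged and sharp" look is an artifact of the metric.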
Large Language Models’ Emergent Abilities Are a Mirage
wired.com
-
“Hurry up, babies! It’s been a year already. Get to work!”

Startup hype and investor expectations go hand in hand. Have people completely lost all perspective in their search for instant gratification? Take any transformative technology of the past (electricity, automobiles, transistors, the Internet, etc.). It looks like these investors expect babies to skip their toddler, early-childhood and teen years and jump right into adulthood to make money for them. (As children did during the Industrial Revolution.) They seem to completely misunderstand the nature of such AI technology. It has been only a year since this type of next-gen LLM was released, and society is supposed to turn on a dime and tap all this? The greater the transformative potential, the longer (perhaps 20 years) it will take to be absorbed. The more trivial a technology (one that makes no transformative demands), the easier it is to absorb into the business fabric. Investors can make all the demands for profitability they want, but that's not going to alter the fundamental technology-adoption lifecycle. If they don't view babies as needing care and nurture to reach adulthood, they're going to be mighty disappointed.
Great article that pulls together some recent trends and suggests that a steep trough of disillusionment could be coming for AI. Some good quotes:

"The AI marketing hype, arguably kicked off by OpenAI’s ChatGPT, has reached a fever pitch: investors and executives have stratospheric expectations for the technology. But the higher the expectations, the easier it is to disappoint. The stage is set for 2024 to be a year of reckoning for AI, as business leaders home in on what AI can actually do right now."

"AI is expensive. Take OpenAI, for instance; in December 2023, its annualized run rate was $2 billion. Because that’s a figure that takes the previous month’s revenue and then multiplies it by 12, we know that means that OpenAI made roughly $167 million that month. It is nonetheless operating at a loss and will likely need to raise “tens of billions more” to keep going, the Financial Times reported. Sam Altman, OpenAI’s CEO, has been seeking trillions of dollars in investment to entirely reshape the chip industry. Meanwhile, ChatGPT’s growth has ground to a halt."

"During the era of zero interest rates, big tech could pour money endlessly into its pet projects — CEO Mark Zuckerberg’s little adventure in the metaverse burned through at least $46.5 billion since 2019, Fortune reported last October. Maybe if we were still in that era, a company like Google could just pour money into AI. 'I don’t think Google can light money on fire to their heart’s content on these initiatives,' Shmulik says. 'We are going through a period where investors increasingly care about profitability.'"

"Even OpenAI is trying to backpedal on the hype. In December, OpenAI chief operating officer Brad Lightcap told CNBC that he keeps having to explain to people that AI can’t dramatically cut costs or bring back growth for struggling companies. Morgan Stanley’s AI chatbot is being bypassed by wealth managers because people want to talk with other people, The Information reported. News operations attempting to replace journalists with AI-written articles have faced backlash as those articles have been wrong, offensive, or useless."

"If there are real use cases for large language models, ones that save businesses money, perhaps AI will be on the path to sustainability. But if these tools come into widespread use and lead to bad publicity, lawsuits, and congressional hearings, with minimal productivity gains, the trough of disillusionment may be coming — and it might be very deep indeed." https://lnkd.in/e8RzsYTV
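The run-rate arithmetic in the quote is easy to verify: an annualized run rate is the latest month's revenue multiplied by 12, so the reported $2 billion figure implies roughly $167 million for that month. A minimal check, using only the numbers in the quote:

```python
# Sanity-check the annualized-run-rate arithmetic from the quote:
# annualized run rate = latest monthly revenue * 12, so dividing the
# reported $2B run rate by 12 recovers the ~$167M monthly figure.
annualized_run_rate = 2_000_000_000  # reported December 2023 figure, in USD
monthly_revenue = annualized_run_rate / 12

print(f"Implied monthly revenue: ${monthly_revenue / 1e6:.0f}M")  # -> $167M
```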
The AI frenzy kept investors' expectations high. The earnings calls disappointed.
theverge.com
-
The Mimic and the Model

Batesian mimicry is named after the pioneering naturalist Henry Walter Bates, who first observed and studied the phenomenon in the butterflies of the Amazon and outlined his theory of mimicry in a paper presented in 1861. What he noticed was that butterflies with a certain coloration were avoided by predators such as birds and other insectivores. The coloration developed by the mimic (a defenseless butterfly) is patterned on the coloration of a toxic insect (the model) that predators avoid. Batesian mimicry is an anti-predation evolutionary maneuver. In Bates' words: "It is not difficult to divine the meaning or final cause of these analogies (that is, mimicry). When we see a species of Moth which frequents flowers in the daytime wearing the appearance of a Wasp, we feel compelled to infer that the imitation is intended to protect the otherwise defenceless insect by deceiving insectivorous animals, which persecute the Moth, but avoid the Wasp." (From 'Contributions to an Insect Fauna of the Amazon Valley,' read November 21, 1861)

Deepfakes are reversing the Batesian paradigm: the mimic (a toxic entity) patterns itself closely after the model (a benign entity) for the purpose of predation, luring those who are reassured or attracted by the model to their doom. I'm sure you've heard about the attached story; it has gone viral: "As originally reported by CNN, the fraudster created a digitally manipulated impression of the company’s CFO as well as several other staff members. Convinced they were his coworkers, the employee followed the fake CFO’s instructions and remitted the funds to multiple bank accounts across 15 transfers, according to Techspot."

Nature has so many models for us to learn from. We still have a few who stubbornly refuse to acknowledge the sophistication with which AI can simulate or mimic human behavior and alter the dynamics of predation. To think that this can be countered by oversight, regulation or guardrails is naive.
Finance Employee Defrauded for $25M by Deepfake CFO
cfo.com
-
Thoughts on “Are Good Ideas Hard to Find?”

(1) This Will Rogers quote is worth keeping in mind: “Good judgment comes from experience, and a lot of that comes from bad judgment.” A few others have said something similar. Good ideas emerge through experience, and a lot of that experience will be obtained from bad ideas.

(2) With metrics, we end up facing situations similar to the Ultraviolet Catastrophe in physics. When the “law” doesn’t fit, some basic assumptions have to change. The Ultraviolet Catastrophe was a result of human smugness, and Planck’s quantization hypothesis helped resolve that crisis in physics. It’s important to ascertain whether the assumptions behind a particular metric remain valid when it is extended. Also from Will Rogers: “You've got to go out on a limb sometimes because that's where the fruit is.”

(3) The evolutionary phenomenon that Stephen Jay Gould highlighted in his essay on the disappearance of .400 hitters in baseball is that variability diminishes (because pitchers are now better trained and hitters bump up against the reality of individual human limits). A metric of early development is probably not relevant at maturity; a pediatrician is guided by a different set of metrics for obvious reasons. What do you think?
Few would disagree that ideas are important to innovation and productivity growth. They are needed for the conception, implementation, and long-term diffusion of new products, processes, and methods. One challenge is how new ideas fit together to enable positive outcomes. Is the initial idea for the concept the most important, or the ideas for the implementation, or those for the many problems that must be solved over the course of a technology’s lifetime so that the #technology becomes better in any way we define better?

Stanford and MIT researchers try to make sense of these issues. Their paper analyzes the number of researchers needed to achieve improvements in the number of transistors per chip (sometimes called Moore’s Law), in crop yield (usually measured in output per area of farmland), or in a new drug. The paper found that the number of researchers needed to achieve these outputs has increased over the last 50 years, suggesting that researchers are becoming less effective at finding new ideas, whether with the same technology or a new one. The paper is controversial because it restricts ideas to the achievement of these improvements. Many people would claim that our world is filled with ideas, even if they are not associated with the metrics used by the researchers from Stanford and MIT. For instance, funding of startups by venture capitalists reached records each year between 2017 and 2021, not only in the U.S. but in Europe, China, and India. This funding went to a wide variety of product categories, both low-tech and high-tech. Would VCs be giving money to startups if those #startups didn’t have new ideas? Most people involved with the startup ecosystem and university research would insist those startups do have new ideas. A key point of the paper (and my article) is that we need good ideas, not just ideas. Good ideas should lead to positive economic outcomes, and the fact that 90% of Unicorn startups are unprofitable suggests the ideas behind them weren’t very good.

In contrast, improvements in chips, crop yields, and new drugs require many new ideas, and thus cumulatively the improvements represent thousands if not millions of ideas. Most of these ideas were for small, unremarkable improvements, but together they have had a huge cumulative impact on our lives. My forthcoming book addresses these types of improvements as experienced by today’s technologies, and they aren’t very rapid. Virtual and augmented reality devices aren’t getting much smaller despite the importance of electronics. Drones are not becoming better at delivering to high-rise apartments or navigating power and telephone lines. Hallucinations aren’t becoming less frequent for generative #AI, and hyperloop isn’t getting much faster. One could argue about which metric is best, but it is hard to find evidence that these improvements are occurring. #innovation #hype https://lnkd.in/egZHhR7w
Are Good Ideas Hard to Find?
https://mindmatters.ai
-
The Law of Small Numbers

Hope your 2024 is off to a good start! I plan to continue to build on my prior themes, illustrating from different angles what we know about human cognition, to counter the naive, hubristic views of human omniscience offered from a self-erected pedestal. "Who can't do math?" was my recent post tackling the puzzling stances on the mathematical inability of Large Language Models (LLMs); see the link below. This article in Quanta popped up soon after, discussing some interesting research on how our minds juggle numbers. It gives us a glimpse of how our minds work and of the long evolutionary road taken to reach our current levels of cognition.

This from the article is very important to keep in mind, and is often forgotten in the extant blithe views: “There’s not many things in cognition where people have been able to pinpoint very plausible biological foundations.” "Its findings suggest that the brain uses a combination of two mechanisms to judge how many objects it sees. One estimates quantities. The second sharpens the accuracy of those estimates — but only for small numbers." The article rightly states: "Although the new study does not end the debate, the findings start to untangle the biological basis for how the brain judges quantities, which could inform bigger questions about memory, attention and even mathematics."

Let's question views of AI that cannot be tethered firmly to plausible models of cognition, drawn from neuroscience and the social sciences, to combat both hype and skepticism. It's perfectly fine to offer a theory or an opinion while making it clear that facts are few and it's just a view.
Why the Human Brain Perceives Small Numbers Better | Quanta Magazine
quantamagazine.org
-
The Herd Instinct

With the imminent arrival of 2024, I wanted to take the opportunity to discuss conformity within social circles, whether at work or in society. As we discuss the impact of AI on society, the naive takes that brush aside reality, with their heavily laden assumptions of unbending logic, truth-maximization, complete awareness and perfect omniscience, have to give way to realistic models of social choice. Understanding social conformity, the herd instinct, is definitely important for serious discussions.

Mark Twain tackles social conformity in a wonderful essay titled “Corn-Pone Opinions.” He describes how, as a boy of fifteen in Missouri, he was enthralled by “a gay and impudent and satirical and delightful young Black man—a slave—who daily preached sermons from the top of his master's woodpile, with me for sole audience.” The young orator Jerry’s words—"You tell me whar a man gits his corn pone, en I'll tell you what his 'pinions is”—left a deep impression on Twain and form the subject of his essay, which was written in 1901 but discovered and published only in 1923, years after his death. What is a corn pone? Simple rustic cornbread. What the philosopher Jerry is saying: show me where someone gets their bread, and I'll tell you what their opinions are.

Mark Twain’s essay, written in his inimitable style, takes the position that Jerry was right but did not go far enough. This snippet shows the force of his argument: “A political emergency brings out the corn-pone opinion in fine force in its two chief varieties—the pocketbook variety, which has its origin in self-interest, and the bigger variety, the sentimental variety—the one which can't bear to be outside the pale; can't bear to be in disfavor; can't endure the averted face and the cold shoulder; wants to stand well with his friends, wants to be smiled upon, wants to be welcome, wants to hear the precious words, 'He's on the right track!' Uttered, perhaps by an ass, but still an ass of high degree, an ass whose approval is gold and diamonds to a smaller ass, and confers glory and honor and happiness, and membership in the herd. For these gauds, many a man will dump his lifelong principles into the street, and his conscience along with them. We have seen it happen. In some millions of instances.”

This is played out every day in our politics, our work and our social lives. You should read the full essay (linked from this post). Wishing you an exciting 2024!
Mark Twain: Corn-pone Opinions
paulgraham.com