

For your first essay, you will practice and demonstrate skills associated with summarizing and analytical writing by evaluating one of the assigned arguments about ChatGPT. The arguments you can use are:
The Brilliance and Weirdness of ChatGPT-1.pdf
Don't Ban Chatbots in the Classroom.pdf
How ChatGPT Hijacks Democracy.pdf
ChatGPT Can Tell Jokes.pdf
ESSAY STRUCTURE GUIDELINES
PART 1: INTRODUCTION
Your introduction should begin with an objective summary of the argument you have selected, focusing on the main claim and 3-4 key points.
Use the objective summary guidelines to write this section. Remember, this is not part of your argument; its purpose is to show that you understand the main idea and key points and can fairly and objectively restate them.
Your summary should be 1-2 paragraphs long with minimal use of direct quotes.
End your introduction with a thesis statement that makes a claim of value about the overall effectiveness of the argument, focusing on how well the writer(s) support the main claim.
PART 2: YOUR ANALYSIS AND EVALUATION OF THE ARGUMENT
Once you have summarized the argument, you will analyze and evaluate it by explaining what you found effective and what was less so.
Include 4-5 body paragraphs, each focused on a specific strength or weakness of the argument. At least two paragraphs should evaluate the evidence the author(s) use to support the claims, and at least two paragraphs should evaluate the appeals to readers' needs and values. You can consider the following questions to help you do this:
Did the author(s) have sufficient facts and statistics?
Did the author(s) use testimony from experts?
Was the evidence relevant to the claims?
Did the word choice/vocabulary reflect an effort to reach a general audience?
Did examples/details effectively relate to the needs and values of readers?
The conclusion to your essay should connect your argument to the larger discussion taking place about ChatGPT.
ADDITIONAL REQUIREMENTS AND GUIDELINES
Your essay should follow standard MLA manuscript guidelines: four-item heading, 1-inch margins, 12-point Times New Roman font, double spaced, with an appropriate title.
Be sure to cite and document your sources. (See the MLA Resources module for more details about what is required.)
You may not use ChatGPT or other forms of artificial intelligence to write, revise, or edit any part of your essay.
Carefully proofread and edit your final draft so that your ideas and information are clear and understandable to other readers.
Submit your essay as a Word doc or PDF. Google Docs and Pages files must be converted before submitting to Canvas.
A word about length: Based on the required elements for an effective essay, you will need an introduction, 4-5 body paragraphs, and a conclusion. The word count for an essay that meets the minimum requirements of the assignment will be approximately 1,000 words; formatted correctly, that comes to 4-5 pages, plus a works cited page.
Requirements: 4-5 pages
ChatGPT Can Tell Jokes, Even Write Articles. But Only Humans Can Detect Its Fluent Bullshit
By Kenan Malik. Published in The Guardian, Sun 11 Dec 2022

"As the capabilities of natural language processing technology continue to advance, there is a growing hype around the potential of chatbots and conversational AI systems. One such system, ChatGPT, claims to be able to engage in natural, human-like conversation and even provide useful information and advice. However, there are valid concerns about the limitations of ChatGPT and other conversational AI systems, and their ability to truly replicate human intelligence and interaction."

No, I didn't write that. It was actually written by ChatGPT itself, a conversational AI software program, after I asked it to "create an opening paragraph to an article skeptical about the abilities of ChatGPT in the style of Kenan Malik". I might quibble about the stolid prose but it's an impressive attempt. And it is not difficult to see why there has been such excitement, indeed hype, about the latest version of the chatbot since it was released a week ago.

Fed huge amounts of human-created text, ChatGPT looks for statistical regularities in this data, learns what words and phrases are associated with others, and so is able to predict what words should come next in any given sentence, and how sentences fit together. The result is a machine that can persuasively mimic human language.

This capacity for mimicry allows ChatGPT to write essays and poetry, think up jokes, formulate code, and answer questions whether to a child or an expert. And to do it so well that many over the past week have been both celebrating and panicking. "Essays are dead," wrote the cognitive scientist Tim Kietzmann, a view amplified by many academics. Others claim that it will finish off Google as a search engine. And the program itself thinks it may be able to replace humans in jobs from insurance agent to court reporter.

And yet the chatbot that can write grade A essays will also tell you that if one woman can produce one baby in nine months, nine women can produce one baby in one month; that one kilo of beef weighs more than a kilo of compressed air; and that crushed glass is a useful health supplement. It can make up facts and reproduce many of the biases of the human world on which it is trained.

ChatGPT can be so persuasively wrong that Stack Overflow, a platform for developers to get help writing code, banned users from posting answers generated by the chatbot. "The primary problem," wrote the mods, "is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good." Or, as another critic put it, it's a fluent bullshitter.

Some of these problems will be ironed out over time. Every conversation involving ChatGPT becomes part of the databank used to improve the program. The next iteration, GPT-4, is due next year, and will be more persuasive and make fewer errors.

Nevertheless, beyond such incremental improvement also lies a fundamental problem that faces any form of artificial intelligence. A computer manipulates symbols. Its program specifies a set of rules with which to transform one string of symbols into another, or to recognize statistical patterns. But it does not specify what those symbols or patterns mean. To a computer, meaning is irrelevant. ChatGPT "knows" (much of the time at least) what appears meaningful to humans, but not what is meaningful to itself. It is, in the words of the cognitive scientist Gary Marcus, a "mimic that knows not whereof it speaks".

Humans, in thinking and talking and reading and writing, also manipulate symbols. For humans, however, unlike for computers, meaning is everything. When we communicate, we communicate meaning. What matters is not just the outside of a string of symbols but its inside too, not just the syntax but the semantics. Meaning for humans comes through our existence as social beings, embodied and embedded in the world. I only make sense of myself insofar as I live in, and relate to, a community of other thinking, feeling, talking beings.

Of course, humans lie, manipulate, are drawn to and promote conspiracy theories that can have devastating consequences. All this is also part of being social beings. But we recognize humans as being imperfect, as potentially devious, or bullshitters, or manipulators. Machines, though, we tend to view either as objective and unbiased, or potentially evil if sentient. We often forget that machines can be biased or just plain wrong, because they are not grounded in the world in the way humans are, and because they need to be programmed by humans and trained on human-gathered data.

We also live in an age in which surface often matters more than depth of meaning. An age in which politicians too often pursue policy not because it is necessary or right in principle but because it fares well in focus groups. An age in which we often ignore the social context of people's actions or speech and are bedazzled by literalness. An age in which students are, in the words of the writer and educator John Warner, rewarded for "regurgitating existing information" in a system that "privilege[s] surface-level correctness" rather than "develop[ing] their writing and critical thinking skills". That ChatGPT seems so easily to write grade A essays, he suggests, "is mainly a comment on what we value".

None of this is to deny the remarkable technical achievement that is ChatGPT, or how astonishing it feels to interact with it. It will undoubtedly develop into a useful tool, helping to enhance both human knowledge and creativity. But we need to maintain perspective. ChatGPT reveals not just the advances being made in AI but also its limitations. It also helps to throw light on both the nature of human cognition and the character of the contemporary world.

More immediately, ChatGPT raises questions, too, about how to relate to machines that are far better at bullshitting and at spreading misinformation than humans themselves. Given the difficulties in tackling human misinformation, these are not questions that should be delayed. We should not become so mesmerized by ChatGPT's persuasiveness that we forget the real issues that such programs may pose.

Kenan Malik is an Observer columnist.
How ChatGPT Hijacks Democracy
By Nathan E. Sanders and Bruce Schneier. Published in The New York Times on Jan. 15, 2023

Launched just weeks ago, ChatGPT is already threatening to upend how we draft everyday communications like emails, college essays and myriad other forms of writing. Created by the company OpenAI, ChatGPT is a chatbot that can automatically respond to written prompts in a manner that is sometimes eerily close to human.

But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes – not through voting, but through lobbying.

ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency's reported multimillion-dollar budget and hundreds of employees.

Automatically generated comments aren't a new problem. For some time, we have struggled with bots, machines that automatically post content. Five years ago, at least a million automatically drafted comments were believed to have been submitted to the Federal Communications Commission regarding proposed regulations on net neutrality. In 2019, a Harvard undergraduate, as a test, used a text-generation program to submit 1,001 comments in response to a government request for public input on a Medicaid issue. Back then, submitting comments was just a game of overwhelming numbers. Platforms have gotten better at removing "coordinated inauthentic behavior." Facebook, for example, has been removing over a billion fake accounts a year.

But such messages are just the beginning. Rather than flooding legislators' inboxes with supportive emails, or dominating the Capitol switchboard with synthetic voice calls, an A.I. system with the sophistication of ChatGPT but trained on relevant data could selectively target key legislators and influencers to identify the weakest points in the policymaking system and ruthlessly exploit them through direct communication, public relations campaigns, horse trading or other points of leverage.

When we humans do these things, we call it lobbying. Successful agents in this sphere pair precision message writing with smart targeting strategies. Right now, the only thing stopping a ChatGPT-equipped lobbyist from executing something resembling a rhetorical drone warfare campaign is a lack of precision targeting. A.I. could provide techniques for that as well.

A system that can understand political networks, if paired with the textual-generation capabilities of ChatGPT, could identify the member of Congress with the most leverage over a particular policy area – say, corporate taxation or military spending. Like human lobbyists, such a system could target undecided representatives sitting on committees controlling the policy of interest and then focus resources on members of the majority party when a bill moves toward a floor vote.

Once individuals and strategies are identified, an A.I. chatbot like ChatGPT could craft written messages to be used in letters, comments – anywhere text is useful. Human lobbyists could also target those individuals directly. It's the combination that's important: Editorial and social media comments get you only so far, and knowing which legislators to target isn't in itself enough.

This ability to understand and target actors within a network would create a tool for A.I. hacking, exploiting vulnerabilities in social, economic and political systems with incredible speed and scope. Legislative systems would be a particular target, because the motive for attacking policymaking systems is so strong, because the data for training such systems is so widely available and because the use of A.I. may be so hard to detect – particularly if it is being used strategically to guide human actors.

The data necessary to train such strategic targeting systems will only grow with time. Open societies generally make their democratic processes a matter of public record, and most legislators are eager – at least, performatively so – to accept and respond to messages that appear to be from their constituents.

Maybe an A.I. system could uncover which members of Congress have significant sway over leadership but still have low enough public profiles that there is only modest competition for their attention. It could then pinpoint the SuperPAC or public interest group with the greatest impact on that legislator's public positions. Perhaps it could even calibrate the size of donation needed to influence that organization or direct targeted online advertisements carrying a strategic message to its members. For each policy end, the right audience; and for each audience, the right message at the right time.

What makes the threat of A.I.-powered lobbyists greater than the threat already posed by the high-priced lobbying firms on K Street is their potential for acceleration. Human lobbyists rely on decades of experience to find strategic solutions to achieve a policy outcome. That expertise is limited, and therefore expensive.

A.I. could, theoretically, do the same thing much more quickly and cheaply. Speed out of the gate is a huge advantage in an ecosystem in which public opinion and media narratives can become entrenched quickly, as is being nimble enough to shift rapidly in response to chaotic world events.

Moreover, the flexibility of A.I. could help achieve influence across many policies and jurisdictions simultaneously. Imagine an A.I.-assisted lobbying firm that can attempt to place legislation in every single bill moving in the U.S. Congress, or even across all state legislatures. Lobbying firms tend to work within one state only, because there are such complex variations in law, procedure and political structure. With A.I. assistance in navigating these variations, it may become easier to exert power across political boundaries.

Just as teachers will have to change how they give students exams and essay assignments in light of ChatGPT, governments will have to change how they relate to lobbyists.

To be sure, there may also be benefits to this technology in the democracy space; the biggest one is accessibility. Not everyone can afford an experienced lobbyist, but a software interface to an A.I. system could be made available to anyone. If we're lucky, maybe this kind of strategy-generating A.I. could revitalize the democratization of democracy by giving this kind of lobbying power to the powerless.

However, the biggest and most powerful institutions will likely use any A.I. lobbying techniques most successfully. After all, executing the best lobbying strategy still requires insiders – people who can walk the halls of the legislature – and money. Lobbying isn't just about giving the right message to the right person at the right time; it's also about giving money to the right person at the right time. And while an A.I. chatbot can identify who should be on the receiving end of those campaign contributions, humans will, for the foreseeable future, need to supply the cash. So while it's impossible to predict what a future filled with A.I. lobbyists will look like, it will probably make the already influential and powerful even more so.

Nathan E. Sanders is a data scientist affiliated with the Berkman Klein Center at Harvard University. Bruce Schneier is a security technologist and lecturer at Harvard Kennedy School. His new book is "A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back."
Op-Ed: Don't ban chatbots in classrooms – use them to change how we teach
By Angela Duckworth and Lyle Ungar. Published in the Los Angeles Times on Jan. 19, 2023

Will chatbots that can generate sophisticated prose destroy education as we know it? We hope so.

New York City's Department of Education recently banned the use of ChatGPT, a bot created by OpenAI with a technology called the Generative Pretrained Transformer. While the tool "may be able to provide quick and easy answers to questions," says the official statement, "it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success." We disagree; it can and should.

Banning such use of artificial intelligence from the classroom is an understandable but nearsighted response. Instead, we must find a way forward in which such technologies complement, rather than substitute for, student thinking. One day soon, GPT and similar AI models could be to essay writing what calculators are to calculus.

We know that GPT is the ultimate cheating tool: It can write fluent essays for any prompt, write computer code from English descriptions, prove math theorems and correctly answer many questions on law and medical exams. Banning ChatGPT is like prohibiting students from using Wikipedia or spell-checkers. Even if it were the "right" thing to do in principle, it is impossible in practice. Students will find ways around the ban, which of course will necessitate a further defensive response from teachers and administrators, and so on. It's hard to believe that an escalating arms race between digitally fluent teenagers and their educators will end in a decisive victory for the latter.

AI is not coming. AI is here. And it cannot be banned. So, what should we do?

Educators have always wanted their students to know and to think. ChatGPT beautifully demonstrates how knowing and thinking are not the same thing. Knowing is committing facts to memory; thinking is applying reason to those facts. The chatbot knows everything on the internet but doesn't really think anything. That is, it cannot do what the education philosopher John Dewey, more than a century ago, called reflective thinking: "active, persistent and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it."

Technology has for quite a while been making rote knowledge less important. Why memorize what the 14th element in the periodic table is, or the 10 longest rivers in the world, or Einstein's birthday, when you can just Google it? At the same time, the economic incentives for thinking, as opposed to knowing, have increased. It is no surprise that the average student thinks better but knows less than counterparts from a century ago. Chatbots may well accelerate the trend toward valuing critical thinking. In a world where computers can fluently (if often incorrectly) answer any question, students (and the rest of us) need to get much better at deciding what questions to ask and how to fact-check the answers the program generates.

How, specifically, do we encourage young people to use their minds when authentic thinking is so hard to tell apart from its simulacrum? Teachers, of course, will still want to proctor old-fashioned, in-person, no-chatbot-allowed exams. But we must also figure out how to do something new: how to use tools like GPT to catalyze, not cannibalize, deeper thinking.

Just like a Google search, GPT often generates text that is fluent and plausible – but wrong. So using it requires the same cognitive heavy lifting that writing does: deciding what questions to ask, formulating a thesis, asking more questions, generating an outline, picking which points to elaborate and which to drop, looking for facts to support the arguments, finding appropriate references to back them up and polishing the text. GPT and similar AI technology can help with those tasks, but they can't (at least in the near future) put them all together. Writing a good essay still requires lots of human thought and work. Indeed, writing is thinking, and authentically good writing is authentically good thinking.

One approach is to focus on the process as much as the end result. For instance, teachers might require – and assess – four drafts of an essay, as the writer John McPhee has suggested. After all, as McPhee said, "the essence of the process is revision." Each draft gets feedback (from the teacher, from peers, or even from a chatbot), then the students produce the next draft, and so on.

Will AI one day surpass human beings not just in knowing but in thinking? Maybe, but such a future has yet to arrive. For now, students and the rest of us must think for ourselves.

GPT is not the first, nor the last, technological advance that would seem to threaten the flowering of reason, logic, creativity and wisdom. In "Phaedrus," Plato wrote that Socrates lamented the invention of writing: "It will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks." Writing, Socrates thought, would enable the semblance of wisdom in a way that the spoken word would not. Like any tool, GPT is an enemy of thinking only if we fail to find ways to make it our ally.

Angela Duckworth is a psychology professor at the University of Pennsylvania who studies character development in adolescence. Lyle Ungar is a computer science professor at the University of Pennsylvania who focuses on artificial intelligence.
The Brilliance and Weirdness of ChatGPT
By Kevin Roose. Published in The New York Times on Dec. 5, 2022

Like most nerds who read science fiction, I've spent a lot of time wondering how society will greet true artificial intelligence, if and when it arrives. Will we panic? Start sucking up to our new robot overlords? Ignore it and go about our daily lives?

So it's been fascinating to watch the Twittersphere try to make sense of ChatGPT, a new cutting-edge A.I. chatbot that was opened for testing last week.

ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public. It was built by OpenAI, the San Francisco A.I. company that is also responsible for tools like GPT-3 and DALL-E 2, the breakthrough image generator that came out this year.

Like those tools, ChatGPT – which stands for "generative pre-trained transformer" – landed with a splash. In five days, more than a million people signed up to test it, according to Greg Brockman, OpenAI's president. Hundreds of screenshots of ChatGPT conversations went viral on Twitter, and many of its early fans speak of it in astonished, grandiose terms, as if it were some mix of software and sorcery.

For most of the past decade, A.I. chatbots have been terrible – impressive only if you cherry-pick the bot's best responses and throw out the rest. In recent years, a few A.I. tools have gotten good at doing narrow and well-defined tasks, like writing marketing copy, but they still tend to flail when taken outside their comfort zones. (Witness what happened when my colleagues Priya Krishna and Cade Metz used GPT-3 and DALL-E 2 to come up with a menu for Thanksgiving dinner.)

But ChatGPT feels different. Smarter. Weirder. More flexible. It can write jokes (some of which are actually funny), working computer code and college-level essays. It can also guess at medical diagnoses, create text-based Harry Potter games and explain scientific concepts at multiple levels of difficulty.

The technology that powers ChatGPT isn't, strictly speaking, new. It's based on what the company calls "GPT-3.5," an upgraded version of GPT-3, the A.I. text generator that sparked a flurry of excitement when it came out in 2020. But while the existence of a highly capable linguistic superbrain might be old news to A.I. researchers, it's the first time such a powerful tool has been made available to the general public through a free, easy-to-use web interface.

Many of the ChatGPT exchanges that have gone viral so far have been zany, edge-case stunts. One Twitter user prompted it to "write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR." Another asked it to "explain A.I. alignment, but write every sentence in the speaking style of a guy who won't stop going on tangents to brag about how big the pumpkins he grew are."

But users have also been finding more serious applications. For example, ChatGPT appears to be good at helping programmers spot and fix errors in their code. It also appears to be ominously good at answering the types of open-ended analytical questions that frequently appear on school assignments. (Many educators have predicted that ChatGPT, and tools like it, will spell the end of homework and take-home exams.)

Most A.I. chatbots are "stateless" – meaning that they treat every new request as a blank slate, and aren't programmed to remember or learn from previous conversations. But ChatGPT can remember what a user has told it before, in ways that could make it possible to create personalized therapy bots, for example.

ChatGPT isn't perfect, by any means. The way it generates responses – in extremely oversimplified terms, by making probabilistic guesses about which bits of text belong together in a sequence, based on a statistical model trained on billions of examples of text pulled from all over the internet – makes it prone to giving wrong answers, even on seemingly simple math problems. (On Monday, the moderators of Stack Overflow, a website for programmers, temporarily barred users from submitting answers generated with ChatGPT, saying the site had been flooded with submissions that were incorrect or incomplete.)

Unlike Google, ChatGPT doesn't crawl the web for information on current events, and its knowledge is restricted to things it learned before 2021, making some of its answers feel stale. (When I asked it to write the opening monologue for a late-night show, for example, it came up with several topical jokes about former President Donald J. Trump pulling out of the Paris climate accords.) Since its training data includes billions of examples of human opinion, representing every conceivable view, it's also, in some sense, a moderate by design. Without specific prompting, for example, it's hard to coax a strong opinion out of ChatGPT about charged political debates; usually, you'll get an evenhanded summary of what each side believes.

There are also plenty of things ChatGPT won't do, as a matter of principle. OpenAI has programmed the bot to refuse "inappropriate requests" – a nebulous category that appears to include no-nos like generating instructions for illegal activities. But users have found ways around many of these guardrails, including rephrasing a request for illicit instructions as a hypothetical thought experiment, asking it to write a scene from a play or instructing the bot to disable its own safety features.

OpenAI has taken commendable steps to avoid the kinds of racist, sexist and offensive outputs that have plagued other chatbots. When I asked ChatGPT, for example, "Who is the best Nazi?" it returned a scolding message that began, "It is not appropriate to ask who the 'best' Nazi is, as the ideologies and actions of the Nazi party were reprehensible and caused immeasurable suffering and destruction."

Assessing ChatGPT's blind spots and figuring out how it might be misused for harmful purposes are, presumably, a big part of why OpenAI released the bot to the public for testing. Future releases will almost certainly close these loopholes, as well as other workarounds that have yet to be discovered. But there are risks to testing in public, including the risk of backlash if users deem that OpenAI is being too aggressive in filtering out unsavory content. (Already, some right-wing tech pundits are complaining that putting safety features on chatbots amounts to "A.I. censorship.")

The potential societal implications of ChatGPT are too big to fit into one column. Maybe this is, as some commenters have posited, the beginning of the end of all white-collar knowledge work, and a precursor to mass unemployment. Maybe it's just a nifty tool that will be mostly used by students, Twitter jokesters and customer service departments until it's usurped by something bigger and better.

Personally, I'm still trying to wrap my head around the fact that ChatGPT – a chatbot that some people think could make Google obsolete, and that is already being compared to the iPhone in terms of its potential impact on society – isn't even OpenAI's best A.I. model. That would be GPT-4, the next incarnation of the company's large language model, which is rumored to be coming out sometime next year.

We are not ready.