Drudge Retort: The Other Side of the News
Sunday, September 08, 2024

Analysis: It's the start of the school year, and thus the start of a fresh round of discourse on generative AI's new role in schools. In the space of about three years, essays have gone from a mainstay of classroom education everywhere to a much less useful tool, for one reason: ChatGPT.


Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

More from the analysis...

... Estimates of how many students use ChatGPT for essays vary, but it's commonplace enough to force teachers to adapt.

While generative AI has many limitations, student essays fall squarely into the category of tasks it is very good at: there are lots of example essays on the assigned topics in its training data, there's demand for an enormous volume of such essays, and the standards for prose quality and original research in student essays are not all that high.

Right now, cheating on essays with AI tools is hard to catch. A number of tools claim they can detect whether text is AI-generated, but they're not very reliable. ...

But there is a technical solution here. Back in 2022, a team at OpenAI, led by quantum computing researcher Scott Aaronson, developed a "watermarking" approach that makes AI-generated text virtually unmistakable as such -- even if the end user changes a few words here and there or rearranges the text. The solution is a bit technically involved, but bear with me, because it's also very interesting.

At its core, AI text generation works by having the model assign probabilities to many possible next tokens, given the text so far. To avoid being overly predictable and producing the same repetitive output every time, models don't simply pick the most probable token -- instead, they add an element of randomization, favoring more likely completions but sometimes selecting a less likely one. ...
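
To make that concrete, here is a minimal Python sketch of both ideas: ordinary randomized sampling, and Aaronson-style watermarked sampling, where the "random" draw is secretly keyed so that a detector holding the key can score the text later. This is only an illustration of the general technique, not OpenAI's actual code; the function names, the context window, and the scoring rule are assumptions made for the example.

    import hashlib
    import math
    import random

    def pseudorandom_unit(secret_key, context, token):
        """Deterministic 'random' value in (0, 1), keyed on the secret and recent context."""
        digest = hashlib.sha256(f"{secret_key}|{context}|{token}".encode()).digest()
        return (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 2)

    def sample_next_token(probs, secret_key=None, context=()):
        """probs maps each candidate next token to the model's probability for it."""
        if secret_key is None:
            # Ordinary generation: favor likely tokens, but sometimes pick an unlikely one.
            tokens, weights = zip(*probs.items())
            return random.choices(tokens, weights=weights)[0]
        # Watermarked generation (Gumbel-style trick): pick the token maximizing
        # r ** (1 / p). Each token is still chosen with its model probability p,
        # but *which* token wins is fixed by the secret key and the recent context.
        return max(probs, key=lambda t: pseudorandom_unit(secret_key, context, t) ** (1.0 / probs[t]))

    def watermark_score(tokens, secret_key, window=4):
        """Detector: averages near 1.0 on ordinary text, noticeably higher on watermarked text."""
        total = 0.0
        for i, tok in enumerate(tokens):
            context = tuple(tokens[max(0, i - window):i])
            r = pseudorandom_unit(secret_key, context, tok)
            total += -math.log(1.0 - r)  # large when r is close to 1, which the watermark favors
        return total / max(len(tokens), 1)

Because the watermark is spread across many individual token choices, changing a few words or rearranging sentences only disturbs some of them, so the overall score stays elevated -- which is why this kind of scheme can survive light editing.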



#1 | Posted by LampLighter at 2024-09-08 02:40 PM | Reply

Technological ignorance/intentional deception from the tasked decision makers.

#2 | Posted by Angrydad at 2024-09-08 06:50 PM | Reply

What's the point if you're not going to be hired on merit anyhow?

POSTED BY ORGAN_BANK

When you grow up, become a real man, and apply for a real job, you will realize how stupid you were as a child for thinking that.

#4 | Posted by donnerboy at 2024-09-09 12:00 PM | Reply

In a comp sci course, a professor split students into three groups to write the same FORTRAN program: one group got ChatGPT, one got limited AI help, and one had to Google everything. The first group finished fastest, but when tested they hadn't retained any knowledge. The Googling group took the longest but produced the strongest test results, since they had to break down the problem and understand each step of the program that solved it.

OpenAI has an internal tool for detecting ChatGPT-generated work, but it generates a false positive 1 in 100 times.

#5 | Posted by sitzkrieg at 2024-09-09 01:47 PM | Reply

Using AI as a personalized tutor, such as Khanmigo from Khan Academy, could be a true equalizer for students in low-income communities: a tutor available 24/7 that doesn't judge and has endless patience. Of course, a human tutor would be ideal, but that costs money -- lots of money when the tutor is helping the student become an eligible candidate for prestigious schools. Hopefully, AI tutoring can help low-income students compete with that privilege.

#6 | Posted by rstybeach11 at 2024-09-09 07:45 PM | Reply

Comments are closed for this entry.
