Artificial Intelligence – The 74
America's Education News Source

Judge Rebuffs Family’s Bid to Change Grade in AI Cheating Case
https://www.the74million.org/article/judge-rebuffs-familys-bid-to-change-grade-in-ai-cheating-case/
Fri, 22 Nov 2024

A federal judge in Massachusetts has rejected a request by the parents of a Boston-area high school senior who wanted to raise a key grade this fall after teachers accused him of cheating for using artificial intelligence on a class project.

In a ruling filed Wednesday denying immediate relief to the student, U.S. Magistrate Judge Paul Levenson said nothing about the case suggests teachers at Hingham High School were “hasty” in concluding that the student and a classmate had cheated by relying on AI. He also said the school didn’t impose particularly heavy-handed discipline in the case, considering that the students had violated the school district’s academic integrity rules.

An attorney for the family on Friday noted the ruling is merely preliminary and that “the case will continue” with more discovery. But a former deputy attorney general who follows AI in education said the family’s chances of winning on the merits at trial “look all but over.”


After an Advanced Placement U.S. History teacher last fall flagged a draft of a documentary script as possibly containing AI-generated material, the pair received a D on the assignment and were later denied entry into the National Honor Society. The society’s faculty advisor said their use of AI was “the most egregious” violation of academic honesty she and others had seen in 16 years.

Jennifer and Dale Harris, parents of one of the students, sued the district and several school staffers in September, alleging that their son, a junior at the time and a straight-A student, was wrongly penalized. If the judge didn’t order the district to quickly change his grade, they said, he’d risk not being admitted via early admission to elite colleges.

He has not been identified and is referred to as “RNH” in court documents.

The complaint noted that when the students started the project in fall 2023, the district didn’t have a policy on using AI for such an assignment. Only later did it lay out prohibitions against AI. But in court testimony, district officials said Hingham students are trained to know plagiarism and academic dishonesty when they see it. 

While he earned a C+ in the course, the student scored a perfect 5 on the AP U.S. History exam last spring, according to the lawsuit. He was later allowed to reapply to the Honor Society and was inducted on Oct. 15. Ultimately, the school’s own investigation found that over the past two years, it had inducted into the Honor Society seven other students who had academic integrity infractions, said Peter S. Farrell, the family’s attorney.

In his ruling, Levenson said the case centered around simple academic dishonesty, and that school officials could reasonably conclude that the students’ use of AI “was in violation of the school’s academic integrity rules and that any student in RNH’s position would have understood as much.”

The students, he said, “did not simply use AI to help formulate research topics or identify sources to review. Instead, it seems they indiscriminately copied and pasted text that had been generated by Grammarly.com” into their draft script. 

Levenson said the court doesn’t really have a role in “second-guessing the judgments of teachers and school officials,” especially since the students weren’t suspended. Farrell on Friday said he expected the case to continue, but Benjamin Riley, founder of Cognitive Resonance, a think tank that investigates AI in education, said the judge’s ruling suggests the family’s chances of winning at trial are slim. Riley, a former deputy attorney general for California, said the issue at the core of the case isn’t “the whiz-bang technology of AI — it’s about a student who plagiarized and got caught. The judge’s decision explains at length and in detail how the school district had academic integrity policies in place, as well as a fair process for resolving any issues arising under them.”

Everyone in the district, he said, “followed these rules and imposed an appropriate (and frankly light) punishment. As is often the case, few will see the diligent and quiet work of thoughtful educators at Hingham Public Schools, but I do — and I’m hoping they felt good when this decision came down. They should.”

Had the family not sued the district, Farrell said, it wouldn’t have come to light that their son had been “treated differently than other students admitted to National Honor Society” who had academic integrity infractions on their record. He also noted that the school admitted the student into the National Honor Society within a week of a hearing in the case last month. “The timing of that action was not a coincidence.”

Hingham Public Schools did not respond immediately to a request for comment.

Feds Charge Once-Lauded AllHere AI Founder in $10M Scheme to Defraud Investors
https://www.the74million.org/article/feds-charge-once-lauded-allhere-ai-founder-in-10m-scheme-to-defraud-investors/
Wed, 20 Nov 2024

Updated, Nov. 20

Federal prosecutors have indicted the founder and former CEO of the once-celebrated education technology company AllHere, accusing her of defrauding investors of nearly $10 million as the startup that made AI chatbots for schools fell into bankruptcy.

Joanna Smith-Griffin, a Forbes “30 Under 30” recipient and Harvard graduate, was arrested at her home in Raleigh, North Carolina, Tuesday on allegations of securities and wire fraud and aggravated identity theft. 

The 33-year-old former educator’s arrest is the latest chapter in the downfall of “Ed,” a buzzy, $6 million AI chatbot that Smith-Griffin’s company was tapped to build for the Los Angeles Unified School District before the project was halted and the company shuttered. L.A. schools Superintendent Alberto Carvalho and Smith-Griffin appeared together at several events earlier this year to promote the chatbot, an ed tech innovation Carvalho said was “unprecedented in American public education.”


The indictment, unsealed in Manhattan federal court by the U.S. Attorney’s Office for the Southern District of New York, accuses Smith-Griffin of defrauding investors and of using company funds for a down payment on her North Carolina house and to pay for her 2021 wedding.

Smith-Griffin “orchestrated a deliberate and calculated scheme to deceive investors” in the company, which she founded through a Harvard University startup incubator in 2016 to provide a tech-driven solution to student absences, U.S. Attorney Damian Williams said in a media release. She inflated “the company’s financials to secure millions of dollars under false pretenses,” he said. “The law does not turn a blind eye to those who allegedly distort financial realities for personal gain.”

Smith-Griffin is being represented by Eric Brignac, an assistant public defender with the Federal Public Defender’s Office. Brignac, who is based in Raleigh, did not respond to a request for comment.

In a statement to The 74, an L.A. schools spokesperson portrayed the district, by far AllHere’s biggest customer, as one of many taken in by Smith-Griffin. Previously, the school district and its inspector general’s office opened separate inquiries into the school system’s work with AllHere.

“The indictment and the allegations represent, if true, a disturbing and disappointing house of cards that deceived and victimized many across the country,” the spokesperson wrote in an email. “We will continue to assert and protect our rights.”

Between 2017 and June 2024, prosecutors allege, Smith-Griffin used her control over AllHere’s bank accounts to transfer at least $600,000 in company funds to her personal account, generally using PayPal and Zelle to make repeat wire transfers under $10,000. 

Federal prosecutors said the fraud scheme began as early as November 2020, when Smith-Griffin allegedly began to misrepresent to her investors the company’s revenue, customer base and cash on hand. In the spring of 2021, she told investors AllHere had generated some $3.7 million in revenue in the previous year, including through contracts with the New York City and Atlanta school districts. In reality, federal prosecutors allege, the company had only generated $11,000 — and contracts with the two major urban school systems didn’t exist. 

Key AllHere funders include the venture firms Rethink Education, Spero Ventures and Potencia Ventures. Their representatives didn’t respond to requests for comment.

When investors and an outside accountant accidentally discovered the discrepancies between the company’s actual financials and its claim to backers, Smith-Griffin masqueraded as a financial consultant to perpetuate the scheme, prosecutors allege. She was accused of creating a fake email address for the phony outside consultant, which she used to send fraudulent documents to her largest investor. 

Though one of the firm’s biggest investors “recruited high profile” education leaders to the company’s board of directors, including former Chicago Public Schools CEO Janice Jackson, the indictment notes that Smith-Griffin “exercised exclusive control” over AllHere’s communications with investors, board members, customers and outside vendors.

The indictment adds further uncertainty around the AI chatbot the company created for Los Angeles schools, the country’s second-largest district, and launched with such fanfare earlier this year.

As K-12 school systems nationwide rush to inject artificial intelligence into their teaching practices, the L.A. chatbot has emerged as a cautionary tale of what could go wrong. On Tuesday, the U.S. Education Department released guidance on ways schools can harness AI while ensuring the technology doesn’t have a discriminatory impact on vulnerable and underserved students.

In April, Smith-Griffin and Carvalho unveiled the chatbot together at the influential ASU+GSV ed tech conference in San Diego. Carvalho said Ed was the nation’s first AI-enabled “personal assistant” and would drive academic improvement while providing Los Angeles’s roughly 540,000 students and their families with a trove of helpful information upon request.

Los Angeles Unified Supt. Alberto Carvalho, during the official launch of the AI-powered chatbot, “Ed.” (Getty Images)

Signs of turmoil emerged in June, when The 74 first reported that Smith-Griffin was out of a job as AllHere furloughed a majority of its staff due to its “current financial position.” A statement from the L.A. district said the company had been put up for sale. 

The company then filed for Chapter 7 bankruptcy in August. At a bankruptcy hearing in September, Toby Jackson, one of AllHere’s only remaining employees and its former chief technology officer, struggled to explain why the company had paid Smith-Griffin $243,000 in expenses in the past year alone. 

“That is one of the outstanding questions that we also have,” said Jackson, who noted that Smith-Griffin “did do quite a bit of travel as the CEO of the company.”

Jackson did not respond to a request for comment.

The 74 first reported the possible criminal charges in early October, when Delaware court documents related to AllHere’s bankruptcy case revealed a grand jury subpoena by federal prosecutors. Even before the company laid off employees and announced its financial woes, a former employee-turned-whistleblower told The 74 that AllHere had struggled to produce a “proper product” for the L.A. district and took shortcuts that ran afoul of school district policies and bedrock student data privacy principles. 

AllHere never had more than 31 customers in total — fewer than a third the number Smith-Griffin told investors she had by early 2021. By the time the company collapsed this year, only three of those customers generated more than $100,000 in revenue.

In total, the felony charges carry a maximum sentence of 42 years in prison for Smith-Griffin, who began her career working in a Boston charter school as a teacher and family engagement director.

“Her alleged actions impacted the potential for improved learning environments across major school districts by selfishly prioritizing personal expenses,” FBI Assistant Director in Charge James Dennehy said in the release. “The FBI will ensure that any individual exploiting the promise of education opportunities for our city’s children will be taught a lesson.” 

AI-Fueled Testing, From the Mouths of Babes
https://www.the74million.org/article/ai-fueled-testing-from-the-mouths-of-babes/
Wed, 20 Nov 2024

One of the hidden advantages of video games is that they offer automatic assessments: Winning one shows a user that she has mastered all she needs to know — no pesky final exam required.

That has long been a dream of testmakers: to embed assessments in student work and, in a sense, make the two indistinguishable.

For very young children, however, that’s a challenge. Much of what they know is revealed not through easy-to-interpret writing, but talk and play. To assess these kids effectively, one needs to be able to turn their quirky utterances into data.


That’s the basic idea behind Curriculum Associates’ 2023 acquisition of Dublin-based SoapBox Labs. The Irish startup has spent the past decade developing software that understands the unique speech of children and translates it reliably into text. As schools focus on the Science of Reading, that could be the key to making assessments a more seamless part of teachers’ workflow, especially for those who instruct children as young as pre-kindergarten.

“The future of assessment is invisible because it is integrated with instruction,” said Kristen Huff, Curriculum Associates’ head of assessment and research. “It is not disruptive. It’s authentic. And it helps the teacher personalize the learning path for each student.”

Like virtually every other educational publisher, Massachusetts-based Curriculum Associates, founded in 1969, is trying to figure out how to offer teachers more data about student learning.

The publisher’s popular i-Ready reading and math programs are used by an estimated 13 million students nationwide. Curriculum Associates now says its reading program will soon feature speech recognition technology that can be operated not just by teachers but by the youngest students, with artificial intelligence listening and revealing exactly how well they understand the words they read and, some day, the math they do. 

The new tool will likely roll out next fall, the publisher says. 

For years, educators have puzzled over how to effectively assess the work of young children. They typically can’t just sit down, read texts and answer questions. They need hands-on instruction through different kinds of media — watching, listening and reading in equal measure — to understand what they’re learning. They act out stories, they sing, they chant rhymes, they talk and move around. 

Paper-and-pencil tests are mostly out of the question. 

To those who have studied it, voice offers the quickest means of assessing a child’s abilities, since in all but the most special cases there’s little space between a child’s thoughts and his or her utterances. “It’s the most natural way for most children to convey information,” said Amelia Kelly, SoapBox’s chief technology officer. 

But putting a keyboard, mouse, trackpad or even a touch screen in front of many students creates “confounding factors” that limit their ability to show what they know, she said.

By capturing students’ voices as they read independently on a tablet or laptop, then translating that into text and comparing it to what’s on screen, teachers can get valuable insights into kids’ understanding. Good voice assessments can help teachers see gaps in children’s learning so schools can challenge them with appropriate work. 
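One way to picture the underlying comparison: once the child’s speech has been transcribed, scoring oral reading accuracy amounts to aligning the transcript against the target passage and counting matches. The sketch below is a minimal, hypothetical illustration in Python with invented sentences; a production engine like SoapBox’s works on audio features, child-tuned acoustic models and per-word confidence scores, not simple string matching.

```python
# Minimal sketch: align a transcript of a child's reading against the
# on-screen passage and report a words-correct score. Hypothetical data.
from difflib import SequenceMatcher

passage = "the cat sat on the mat".split()
transcript = "the cat sit on mat".split()  # what the recognizer heard

# Find the longest runs of words that appear in both sequences, in order
matcher = SequenceMatcher(None, passage, transcript)
words_correct = sum(block.size for block in matcher.get_matching_blocks())
accuracy = words_correct / len(passage)

print(f"{words_correct}/{len(passage)} words read correctly ({accuracy:.0%})")
```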

But processing kids’ voices accurately is another challenge altogether. 

‘They shout, they whisper, they sing’

SoapBox founder Patricia Scanlon, an engineer with a Ph.D. in speech recognition technology, has said the company grew out of her personal experience watching her own child struggle to learn how to read.

One day in 2013, she opened an email from the maker of a game her 3-year-old daughter was using. The app automatically sent parents updates, and this one told Scanlon her child had completed seven levels in the game, a major achievement.

“Suitably impressed,” Scanlon asked her daughter to show her the game. She soon realized that the child hadn’t actually mastered the material — she’d simply guessed at the correct answers and gathered rewards without mastering the skills. “She had learned to hack the game,” Scanlon said, impressed with her daughter’s ingenuity — but steamed at a wasted opportunity.

What was missing, she realized, was a way for the game to hold her daughter accountable, to “invisibly and continuously” quiz and assess her progress, despite the fact that, at age 3, she, like most kids, couldn’t hold a pencil, control a mouse or type on a keyboard.

With her background, Scanlon knew that even in 2013, speech recognition technology worked well for adults but not for younger children, who have higher pitched voices and rarely follow standard language rules: “They shout, they whisper, they sing, they elongate, they over-pronounce the words,” she said.

Of course, children come to school with regional accents and years of learning distinctive dialects at home. And millions of kids are learning English as they enter school. So she began building a proprietary “voice engine” that would accurately record what young children say in real-world, noisy environments and on ordinary consumer devices like Chromebooks and iPads.

At the time, the biggest AI voice recognition systems such as Apple’s Siri (Amazon’s Alexa was still about a year away) were being trained almost exclusively on adult voices, in “grown-up” situations: consumers purchasing products, drivers seeking directions or hikers asking about the weather. 

Dashboard from a Curriculum Associates prototype for speech recognition (Screen capture)

Siri and other systems worked well for those routine tasks, but they weren’t built for school, where children are struggling to learn. Kelly, SoapBox’s CTO, compared it to training an AI-guided self-driving car on a Formula 1 racetrack instead of a crowded, congested street. When you finally got the car out onto the streets, it wouldn’t work.

So Scanlon and her colleagues spent the next decade training SoapBox’s AI to learn from children in both Europe and the U.S. That meant teaching the AI that a word said by an English language learner in Dublin is the same one spoken by one in Philadelphia or a kid from the American South.

“If it doesn’t work for every student equally, then it doesn’t work,” said Kelly.

She sees that functionality as an ethical concern. Voice-activated AI “can be the great equalizer here,” she said. “I think it can help solve the literacy crisis — but only if people use it. And people are only going to use it if they trust it. And they’re only going to trust it if it works.”

The terms of the November sale weren’t disclosed, but it will almost certainly create a huge competitive advantage for Curriculum Associates, which gets exclusive access to a technology that has been widely used by other publishers.

Before the acquisition, SoapBox had licensed its technology to dozens of education providers such as McGraw Hill, Scholastic and Amplify, essentially enabling them to outsource voice recognition for their own products. With the 2023 deal, those partnerships stopped, Curriculum Associates said.

According to recent filings, before the acquisition, SoapBox had raised $10.4 million in funding since 2017. Its most recent investor, last year, was the Bill & Melinda Gates Foundation, which provided an undisclosed sum to underwrite development of a Spanish-language voice engine for U.S. students.

By next fall, Curriculum Associates envisions that the technology will be so simple to use that even the youngest students could work independently, putting themselves through the paces of self-guided games and activities that evaluate their reading skills on an ongoing basis. While it’s still piloting the technology in schools, one teacher who has seen a preview said she’s eager to see it in action. 

In a prototype image from a Curriculum Associates dashboard, a teacher can quickly see the accuracy of students’ oral reading via speech recognition technology. (Screen capture)

LaTanya Renea Arias of Kingsland Elementary School in Kingsland, Ga., said having better data about students is key not just to learning but equity — especially when 55% of students are people of color but 80% of teachers are white.

Though she has taught for a decade, she said, “I don’t have an ear to pick up every single dialect, to have great understanding of how a word that I pronounce sounds differently” when a particular student says it. “But I still need to credit them with their learning and their knowledge.”

Disclosure: The Bill & Melinda Gates Foundation provides financial support to The 74.

Could Massachusetts AI Cheating Case Push Schools to Refocus on Learning?
https://www.the74million.org/article/could-massachusetts-ai-cheating-case-push-schools-to-refocus-on-learning/
Thu, 31 Oct 2024

A Massachusetts family is awaiting a judge’s ruling in a federal lawsuit that could determine their son’s future. To a few observers, it could also push educators to limit the use of generative artificial intelligence in school.

To others, it’s simply a case of helicopter parents gone wild.

The case, filed last month, tackles key questions of academic integrity, the college admissions arms race and even the purpose of school in an age when students can outsource onerous tasks like thinking to a chatbot.


While its immediate outcome will largely serve just one family — the student’s parents want a grade changed so their son can apply to elite colleges through early admission — the case could ultimately prompt school districts nationwide to develop explicit policies on AI.

If the district, in a prosperous community on Boston’s South Shore, is forced to change the student’s grade, that could also prompt educators to focus more clearly on the knife’s edge of AI’s promises and threats, confronting a key question: Does AI invite students to focus on completing assignments rather than actual learning?

“When it comes right down to it, what do we want students to do?” asked John Warner, a well-known writing coach and author of several books. “What do we want them to take away from their education beyond a credential? Because this technology really does threaten the integrity of those credentials. And that’s why you see places trying to police it.”

‘Unprepared in a technology transition’

The facts of the case seem simple enough: The parents of a senior at Hingham High School have sued the school district, saying their son was wrongly penalized as a junior for relying on AI to research and write a history project that he and a partner were assigned in Advanced Placement U.S. History. The teacher used the anti-plagiarism tool Turnitin, which flagged a draft of the essay about NBA Hall of Famer Kareem Abdul-Jabbar’s civil rights activism as possibly containing AI-generated material. So she used a “revision history” tool to uncover how many edits the students had made, as well as how long they spent writing. She discovered “many large cut and paste items” in the first draft, suggesting they’d relied on outside sources for much of the text. She ran the draft through two other digital tools that also indicated it had AI-generated content and gave the boys a D on the assignment.
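A revision-history check of the kind described can be pictured as a scan over edit events for unusually large single insertions. The sketch below is purely illustrative, with an invented edit log and threshold; it is not how Turnitin or the teacher’s tool actually works.

```python
# Illustrative only: flag paste-like events in a hypothetical edit log,
# where each entry is (seconds since the document opened, characters added).
edits = [(30, 40), (95, 25), (160, 2400), (300, 1800), (410, 15)]

PASTE_THRESHOLD = 500  # assumed cutoff for a "large cut and paste" insertion
pastes = [(t, n) for t, n in edits if n >= PASTE_THRESHOLD]
minutes_spent = edits[-1][0] / 60

print(f"{len(pastes)} paste-like insertions out of {len(edits)} edits "
      f"in roughly {minutes_spent:.0f} minutes of editing")
```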

From there, the narrative gets a bit murky. 

On the one hand, the complaint notes, when the student and his partner started the essay last fall, the district didn’t have a policy on using AI for such an assignment. Only later did it lay out prohibitions against AI.

The boy’s mother, Jennifer Harris, last month asked a local TV news reporter, “How do you know if you’re crossing a line if the line isn’t drawn?”

The pair tried to explain that using AI isn’t plagiarism, telling teachers there’s considerable debate over its use in academic assignments, but that they hadn’t tried to pass off others’ work as their own. 

For its part, the district says Hingham students are trained to know plagiarism and academic dishonesty when they see it. 

District officials declined to be interviewed, but in an affidavit, Social Studies Director Andrew Hoey said English teachers at the school regularly review proper citation and research techniques — and they set expectations for AI use.

Social studies teachers, he said, can justifiably expect that skills taught in English class “will be applied to all Social Studies classes,” including AP US History — even if they’re not laid out explicitly. 

A spokesperson for National History Day, the group that sponsored the assignment, provided The 74 with a link to its guidelines, which say students may use AI to brainstorm topic ideas, look for resources, review their writing for grammar and punctuation and simplify the language of a source to make it more understandable.

They can’t use AI to “create elements of your project” such as writing text, creating charts, graphs, images or video. 

In March, the school’s National Honor Society faculty advisor, Karen Shaw, said the pair’s use of AI was “the most egregious” violation of academic honesty she and others had seen in 16 years, according to the lawsuit. The society rejected their applications.

Peter S. Farrell, the family’s attorney, said the district “used an elephant gun to slay a mouse,” overreacting to what’s basically a misunderstanding.

The boy’s failing grade on the assignment, as well as the accusation of cheating, kept him out of the Honor Society, the lawsuit alleges. Both penalties have limited his chances of getting into top colleges through early decision, as he’d planned this fall.

The student, who goes unnamed in the lawsuit, is “a very, very bright, capable, well-rounded student athlete” with a 4.3 GPA, a “perfect” ACT score and an “almost perfect” SAT score, said Farrell. “If there were a perfect plaintiff, he’s it.” 

While the boy earned a C+ in the course, he scored a perfect 5 on the AP exam last spring, according to the lawsuit. His exclusion from the Honor Society, Farrell said, “really shouldn’t sit right with anybody.”

For a public high school to take such a hard-nosed position “simply because they got caught unprepared in a technology transition” doesn’t serve anyone’s interests, Farrell said. “And it’s certainly not good for the students.”

Ultimately, the school’s own investigation found that over the past two years it had inducted into the Honor Society seven other students who had academic integrity infractions, Farrell said. The student at the center of the lawsuit was allowed to reapply and was inducted on Oct. 15.

“They knew that there was no leg to stand on in terms of the severity of that sanction,” Farrell said.

‘Districts are trying to take it seriously’

While Hingham didn’t adopt a districtwide AI policy until this school year, it’s actually ahead of the curve, said Bree Dusseault, the principal and managing director of the Center on Reinventing Public Education, a think tank at Arizona State University. Most districts have been cautious about putting out formal guidance on AI.

Dusseault contributed an affidavit on behalf of the plaintiffs, laying out the fragmented state of AI uptake and guidance. She surveyed more than 1,000 superintendents last year and found that just 5% of districts had policies on AI, with another 31% promising to develop them in the future. Even among CRPE’s group of 40 “early adopter” school districts that are exploring AI and encouraging teachers to experiment with it, just 26 had published policies in place. 

They’re hesitant for a reason, she said: They’re trying to figure out what the technology’s implications are before putting rules in writing. 

“Districts are trying to take it seriously,” she said. “They’re learning the capacity of the technology, and both the opportunities and the risks it presents for learning.” But so often they’re surprised by new technological developments and capabilities that they never imagined. 

Even if they’re hesitant to commit to full-blown policies, Dusseault said, districts should consider more informal guidelines that clearly lay out for students what academic integrity, plagiarism and acceptable use are. Districts that are “totally silent” on AI run the risk of student confusion and misuse. And if a district is penalizing students for AI use, it needs to have clear policy language explaining why.

That said, a few observers believe the case boils down to little more than a cheating student and his helicopter parents.

Benjamin Riley, founder of Cognitive Resonance, an AI-focused education think tank, said the episode seems like an example of clear-cut academic dishonesty. Everyone involved in the civil case, he said, especially the boy’s parents and their lawyer, “should be embarrassed. This isn’t some groundbreaking lawsuit that will help define the contours of how we use AI in education; it’s helicopter parenting run completely amok that may serve as catnip to journalists (and their editors) but does nothing to illuminate anything.”

Alex Kotran, founder of The AI Education Project, a nonprofit that offers a free AI literacy curriculum, said the honor society director’s statement about the boys’ alleged academic dishonesty makes him think “there’s clearly plenty more than what we’re hearing from the student.” While schools genuinely do need to understand the challenge of getting AI policies right, he said, “I worry that this is just a student with overbearing parents and a big check to throw lawyers at a problem.”

Others see the case as surfacing larger-scale problems: Writing in Slate this week, Jane Rosenzweig, director of the Harvard College Writing Center and author of the Writing Hacks newsletter, said the Massachusetts case is “less about AI and more about a family’s belief that one low grade will exclude their child from the future they want for him, which begins with admission to an elite college.”

That problem long predated ChatGPT, Rosenzweig wrote. But AI is putting our education system on a collision course “with a technology that enables students to bypass learning in favor of grades.”

“I feel for this student,” said Warner, the writing coach. “The thought that they need to file a lawsuit because his future is going to be derailed by this should be such an indictment of the system.”

The case underscores the need for school districts to rethink how they interact with students in the Age of AI, he said. “This stuff is here. It’s embedded in the tools students use to do their work. If you open up Microsoft Word or Google Docs or any of this stuff, it’s right there.”

Perhaps as a result, Warner said, students have increasingly come to view school more transactionally, with assignments as a series of products rather than as an opportunity to learn and develop important skills.

“I’ve taught those students,” he said. “For the most part, those are a byproduct of disengagement, not believing [school] has anything to offer — and that the transaction can be satisfied through ‘non-work’ rather than work.”

His observations align with recent research by Dusseault’s colleagues, who last year found that four graduating classes of high school students, or about 13.5 million students, had been affected by the pandemic, with many “struggling academically, socially, and emotionally” as they enter adulthood.

Ideally, Warner said, AI tools should offer an opportunity to refocus students to emphasize process over product. “This is a natural design for somebody who teaches writing,” he said, “because I’m obsessed with process.”

Warner recalled giving a recent series of talks at Harvey Mudd College, a small, alternative liberal arts college in California, where he encountered students who said they had no use for AI chatbots. They preferred to think through difficult problems themselves. “They were just like, ‘Aw, man, I don’t want to use that stuff. Why do I want to use that stuff? I’ve got thoughts.’”

New Survey Says U.S. Teachers Colleges Lag on AI Training. Here are 4 Takeaways
https://www.the74million.org/article/new-survey-says-u-s-teachers-colleges-lag-on-ai-training-here-are-4-takeaways/
Tue, 22 Oct 2024

In the nearly two years since generative artificial intelligence burst into public consciousness, U.S. schools of education have not kept pace with the rapid changes in the field, a new report suggests.

Only a handful of teacher training programs are moving quickly enough to equip new K-12 teachers with a grasp of AI fundamentals — and fewer still are helping future teachers grapple with larger issues of ethics and what students need to know to thrive in an economy dominated by the technology.

The report, from the Center on Reinventing Public Education, a think tank at Arizona State University, tapped leaders at more than 500 U.S. education schools, asking how their faculty and preservice teachers are learning about AI. Through surveys and interviews, researchers found that just one in four institutions now incorporates training on innovative teaching methods that use AI. Most lack policies on using AI tools, suggesting that they probably won’t be ready to teach future educators about the intricacies of the field anytime soon.

What’s more, few teachers and college faculty say they feel confident using AI themselves, even as it reshapes education worldwide.

“All of this is so new, and it’s been happening so fast,” said Steven Weiner, a CRPE senior research analyst. A lot of coverage of AI in education, he said, “has rightly focused on what are schools and districts doing to support teachers … to get on board with AI?”

While teachers’ workplaces bear a measure of responsibility, he said, college programs should help out K-12 schools and districts. “I just think they should not have to have the whole burden of preparing teachers” to understand and work with AI.

Here are four key takeaways from the findings:

1. Most teachers college faculty are neither ready nor able to embrace AI.

Most teaching faculty are not interested in AI — and some actively avoid it. Just 10% of faculty members surveyed say they feel confident using AI, with many seeing it as a threat. Whether due to confusion or fear, they’re resistant to it, researchers found, limiting its possible integration into curricula and hampering educators’ ability to prepare preservice teachers for “AI-influenced classrooms.” 

Because so few are confident with AI, most don’t use it in their own teaching or integrate it effectively into their instructional practices, researchers found.

A few say faculty members remain concerned that AI “might steal their personal data, their intellectual property, or even their jobs.” One education school leader said a lot of faculty are simply “paranoid,” believing that generative AI and other technologies will soon “replace them.” 

Even when faculty members are curious about AI, most are still in the early phases of learning about it. In an interview, Weiner said, “It’s up to people, I think, to learn about [AI] on their own. And if they’re the kind of people who are interested in technology, they might be into it. But the lack of any sort of systemic push for engaging with it has led to some folks just not quite understanding it.” 

2. Programs that integrate AI use it mostly to help teachers prevent plagiarism.

While nearly 59% of programs provide some AI-related instruction to preservice teachers, it mostly takes the form of coursework intended to help them prevent plagiarism. 

Preservice teachers, Weiner said, “are largely being taught about AI in light of the fear of them going into classrooms where students are going to cheat.” But training on plagiarism-detection software, he said, is “super problematic” because recent research has questioned its effectiveness.

Only about 25% of programs surveyed are providing training on ways AI can support new kinds of teaching. Fewer than half of respondents said content on AI bias is offered, either in other courses or on its own.

One education school dean said a lot of faculty resistance is due to “not understanding or being able to comprehend” exactly what AI is. “I think some may look at it as just a cheating tool.”

3. A few teacher training programs show promise in integrating AI into teacher prep. 

While most of the leaders surveyed couldn’t offer promising news about integrating AI into educator preparation, a few did. These institutions haven’t exactly transformed their training programs, but early efforts show promise, researchers found. 

Two programs were noteworthy, they said, and worth highlighting: the University of Northern Iowa and Arizona State University’s Mary Lou Fulton Teachers College, which hosts CRPE.

Northern Iowa is developing curricula for an “AI for Educators” graduate certificate. And at ASU, administrators have engaged faculty through a set of voluntary committees and outreach efforts; CRPE itself co-leads one of these initiatives, a cross-departmental working group exploring the challenges and opportunities of AI in higher education. ASU is also partnering with ChatGPT creator OpenAI to bring the capabilities of an upgraded version of the chatbot into higher education.

The report also notes that the Washington Education Association is incorporating AI into its special education teacher residency program, providing training on AI tools that help track student progress. The union is part of the Center for Innovation, Design, and Digital Learning Alliance, a network of higher education institutions pushing to leverage technology in their programs.

4. Teachers colleges need systemic, strategic investments in AI education.

Researchers concluded that the responsibility to integrate more content on AI can’t rest solely on the shoulders of “individual, self-motivated educators.” A fuller commitment to teaching about AI, they said, requires “a concerted effort and strategic action from all those involved in shaping the future of education.” To that end, schools of education should adjust their budgets to offer grants, teaching awards and other forms of recognition to “AI early adopter” faculty.

Education school deans and administrators should rely on AI experts from within their institutions, CRPE said, and look more closely at innovative work happening at other colleges and universities. They should also work with outside groups such as the American Association of Colleges for Teacher Education to spread best practices and new ideas. 

They also urge state policymakers to set clear expectations for teachers’ AI proficiency by revising teaching certification standards to include new competencies.

And funders, they said, should invest in preservice programs that are “already ahead of the curve” on AI, allowing these programs to grow and offer their expertise more broadly. In the meantime, they should also consider alternative training programs such as residencies and micro-credentialing that can help preservice teachers develop AI competencies and specializations.

Alex Kotran, founder of The AI Education Project, a nonprofit that offers a free AI literacy curriculum, said the survey is “a great data point that illustrates one of my big anxieties” about the future of the workforce: “How do we point students towards the jobs of the future? I think we need to talk more bluntly about the fact that four-year universities are going to be one of the weakest links in this whole strategy, in this whole process.”

He noted that teachers, as a group, are very unlikely to be replaced by AI in the near future — on par with “plumbers and therapists” in terms of the threat the technology poses to their future careers. So it makes sense that they’d be less than focused on it.

But he said the bigger challenge to new teachers will be to imagine how AI is going to force teacher pedagogy to evolve: “The work of being a teacher and the goals that you set for your kids is going to change, given what we understand about AI and the fact that it’s going to be so disruptive to skills and the workforce.”


The new survey, said CRPE’s Weiner, is just a first look, but he said teachers colleges appear “systemically not suited to shift as quickly as they would need — and not just to embrace AI, but to really get teachers prepared for both the challenges with AI and also the opportunities with it: to help teachers be really well prepared.”

Even if they do begin to take AI more seriously, he said, the technology is bound to change rapidly. “So what we’re really seeing is a moment where these institutions need to figure out how to become way more adaptive, way quicker.”

Q&A: Katy Knight’s Quest to Fund Ed Tech’s ‘Deeply Unsexy Things’
https://www.the74million.org/article/the-74-interview-katy-knights-quest-to-fund-ed-techs-deeply-unsexy-things/
Wed, 16 Oct 2024

Over the past year and a half, Katy Knight has been on a quiet quest to uncover good education-related tech tools, often powered by artificial intelligence. With access to a bank account nearing half a billion dollars, she’s got money to spend if she finds something she likes.

But she’ll readily tell you, “There’s just not a lot of stuff that’s worth funding.”

Knight is president and executive director of the Siegel Family Endowment, created by computer scientist David Siegel, a co-founder of the embattled, $60 billion quantitative trading firm Two Sigma. A former Google and Two Sigma employee herself, Knight sees her role as helping to bring evidence-backed tools to market — tools “that we can learn something from.” 


That has led her to underwrite small, often experimental undertakings such as Project Invent, which works with students and teachers to promote design thinking, focusing on student needs and inputs. For instance, if students want to improve the quality of school lunches, instead of asking nutritionists or school staff to design menus, a school would turn to kids to study the problem and suggest solutions. 

She also supports Building 21, an innovative high school network in Pennsylvania, and the Modern Classrooms Project, a nonprofit that promotes instruction paced by students, relying on mastery rather than seat time.

Knight has espoused an approach that she calls “inquiry-driven philanthropy,” searching for schools and startups doing important work — and treating grantmaking “as almost field experimentation” alongside more traditional research she funds. “Everything has an orientation toward, ‘What can we learn from this, success or failure, to give back to the field?’”

She has also said educators and policymakers are missing something in the conversation about classroom technology, reducing it to an “all or nothing” question. “We either have to say ‘No tech’ or ‘Very low tech — lock away the phones, keep the kids disconnected, ban ChatGPT, etc.,’ or it’s ‘We’re all in. Every kid gets an iPad. They’re going to learn on technology all day.’ ”

Accordingly, she has many thoughts on AI, the current panic about phones in schools, and how she separates good ed tech from bad.

This interview has been edited for length and clarity.

The 74: You’ve said your goal is to fund “deeply unsexy things” in ed tech. As someone who gets email pitches every morning about deeply sexy things that I’m very skeptical about, that was a breath of fresh air. What are “deeply unsexy things”? Why is that important?

Katy Knight: Philanthropy can be very much like the private markets and everything else, consumed by Shiny Object Syndrome. We are just as fallible and just as susceptible to chasing the Next Big Idea, the next sexy thing. And I think that’s fine in some respects. Philanthropy should be risk capital, which means sometimes there’s going to be a sexy thing that will impact the social sector — and we should fund it.

But more often than not, change is happening on the back end. It’s not always something new, and it’s not always using the latest and greatest technology. Sometimes we’re talking about the reality of the digital divide in a place where people want to be talking about generative AI, and that’s not capturing attention. So it’s even more important that we, as a philanthropy with the bully pulpit, are thinking about what are the layers of the bureaucracy that we can tackle to achieve systems change? Even though they’re unsexy from a news perspective or a razzle-dazzle perspective, I think they are actually impactful and interesting.

Let’s talk about some of the things you’re funding, starting with Quill.org, the non-profit that offers free AI-powered writing, reading comprehension and language skills lessons. What’s your thinking there?

Quill is sexy, in that they’ve got this front-facing technology. Everyone wants to talk about consumer-facing tools. What’s less sexy, I think, is that we’re not talking about how it’s the latest ChatGPT model. This is about years and years of actual teacher feedback. It’s about training something really specific. It’s relatively niche. And those are the kind of AI applications that I think actually have the highest potential: Applying a powerful technology to something niche should have outsized impacts. That kind of thing makes sense to me. I think there’s a lot of opportunities for us to think about, “O.K., if we weren’t just chasing the best, coolest image-generating technology, what might we be doing to actually serve student need and teacher need?” It starts from asking questions about what matters, what the actual challenges are, and then you get to something that’s useful — even if it’s not as shiny as some of the other ed tech startup things that are coming across your inbox.

You’re also funding Quill and others to develop a “Responsible AI Playbook.” Say more about that.

Even though the social sector is smaller than the private markets in terms of investment in new ed tech tools, if we have even a small chorus of people thinking about responsible AI and pushing back against this overarching narrative that we just have to let it run amok, that’s net beneficial to the field.

Talk about the small chorus. Who are the other singers? 

The big one is the Learning Engineering Tools Competition. The other network we’ve been involved with is the Global Ed Tech Trialing Network. The Jacobs Foundation (based in Zurich, Switzerland) helped found this group of funders, developers and researchers globally who are now thinking together about responsible development, specifically through the lens of “How do we create real-world environments for developers to test their tools and hear feedback from teachers and young people more directly,” rather than just building things that sound like they’ll capture a lot of market share.

Can you say more about the trialing network?

We are funding some of the U.S. work, particularly through our partners at Innovate Edu and Leanlab. Leanlab has been crucial because what they do really at their core is very much aligned with this vision of having real live environments where there’s some co-creation of these tools. We’re funding that work through them. They’ve had two global meetings that I participated in. 

Leanlab Executive Director Katie Boody Adorno has built a very cool, small, nimble organization that’s focused particularly on the notion of the co-design of ed tech tools. They work with startups that are really genuine about wanting to design for impact, not just for investors. And they create relationships with schools to have teachers be paid for their participation and to have teachers actually be testers and provide feedback directly to the designers at these startups. I think it’s just a very cool model for almost an accelerator for impact, rather than an accelerator for marketing.

Do you have thoughts on phone-free schools?

It’s a simple solution to a complex problem. On the one hand, in a vacuum, I might say “Absolutely, we need to be more distraction-free.” And much like when I was in elementary school and they were taking our Tamagotchis away, we’ve got to put the phones away. On the other hand, I understand the complex issues of school safety, of child care arrangements in a world where parents have to work. Thinking about what students are in school for — and what we want them to be doing, and how we want them to be learning, and whether or not we want them to feel so attached to these devices — is a really important conversation. But we can’t divorce it from reality: We live in a really uncertain and sometimes dangerous world, and I understand the perspective of parents who might want to be able to reach their kids during the day in the event of an emergency and other things. 

When I was at the ASU+GSV Air Show last spring, somebody I was with said, “Take a good look around: Half of these guys will be gone by next year.” On the one hand, that seems like a very cynical thing to say. It also seems entirely right. Is it a good thing that companies come and go, that you’re always dealing with somebody who’s got a different vision? Is that a healthy thing for education?

In any private market solution, some cycling of companies and iteration is not a bad thing. I think there’s a mismatch between how the tech startup venture world works and how education products need to work. In the VC-backed startup world, we’re funding a bunch of things with the intention that one or two of them will have 100x, 1,000x returns, and a lot of them will go bust. Those companies are incentivized and encouraged to capture as much market share as possible to achieve that investment return. Whether they are actually impactful to students or not is almost irrelevant in that initial drive to capture market share.

That’s not to say that there shouldn’t be competition and a diverse set of tools that educators can dig into. But if they’re getting served up a shiny new presentation for a new tool that they’re being told they absolutely need every month, that sort of churn is incredibly disruptive. 

How do you separate good ed tech from bad? 

When I hear a startup say that their total addressable market is all 80 million students in the country, I know it’s unlikely that product is worthwhile because there are so few ed tech products — there are so few products in general — that can actually serve every single student in the country. So unless you’ve got a more limited perspective on what the market is, I don’t think you’ve actually aligned what you’re building with the reality of what is needed.

I was heartened to read in journalist Audrey Watters’ newsletter last month that she’s returning to writing about ed tech. She wrote that she’s ready to “dutifully remind you that the future of human and machine learning as envisioned by Silicon Valley’s libertarian elite is a pretty shitty one.” Thoughts?

I love that! I mean, look: Not to zoom out too much, but I think as a society we’ve grown somewhat accustomed to being test subjects for tech companies across the board because everything is free. And they say, “Oh, if it’s free, then you’re the product.” And we are. “We’re releasing a new version of this tool. Your email client is going to change tomorrow.” Do you have any say in it? Nope. We’re very used to living in a world where we’re told what to do by tech platform companies and they will manage just how they see fit.

That doesn’t work for education. That doesn’t work when you have no grounding in learning science, pedagogy, or even just being in a classroom. And so I think that is not just an education problem. It impacts the education sector specifically, but I do think it’s a broader societal concern. Our interaction with technology is not one where we have enough agency.

Study: AI-Assisted Tutoring Boosts Students’ Math Skills
https://www.the74million.org/article/study-ai-assisted-tutoring-boosts-students-math-skills/
Mon, 07 Oct 2024

An AI-powered digital tutoring assistant designed by Stanford University researchers shows modest promise at improving students’ short-term performance in math, suggesting that the best use of artificial intelligence in virtual tutoring for now might be in supporting, not supplanting, human instructors.

The open-source tool, which researchers say other educators can recreate and integrate into their tutoring systems, made the human tutors slightly more effective. And the weakest tutors became nearly as effective as their more highly rated peers, according to a study released Monday.

The tool, dubbed Tutor CoPilot, prompts tutors to think more deeply about their interactions with students, offering different ways to explain concepts to those who get a problem wrong. It also suggests hints or different questions to ask.


The new study offers a middle ground in what’s become a polarized debate between supporters and detractors of AI tutoring. It’s also the first randomized controlled trial — the gold standard in research — to examine a human-AI system in live tutoring. In all, about 1,000 students got help from about 900 tutors, and students who worked with AI-assisted tutors were four percentage points more likely to master the topic after a given session than those in a control group whose tutors didn’t work with AI.

Students working with lower-rated tutors saw their performance jump more than twice as much, by nine percentage points. In all, their pass rate went from 56% to 65%, nearly matching the 66% pass rate for students with higher-rated tutors.

The cost to run it: just $20 per student per year — an estimate of what it costs Stanford to maintain accounts on OpenAI’s GPT-4 large language model.

The study didn’t probe students’ overall math skills or directly tie the tutoring results to standardized test scores, but Rose E. Wang, the project’s lead researcher, said higher pass rates on the post-tutoring “mini tests” correlate strongly with better results on end-of-year tests like state math assessments. 

The big dream is to be able to enhance humans.

Rose E. Wang, Stanford University

Wang said the study’s key insight was looking at reasoning patterns that good teachers engage in and translating them into “under the hood” instructions that tutors can use to help students think more deeply and solve problems themselves. 

“If you prompt ChatGPT, ‘Hey, help me solve this problem,’ it will typically just give away the answer, which is not at all what we had seen teachers do when we were showing them real examples of struggling students,” she said.

Essentially, the researchers prompted GPT-4 to behave like an experienced teacher and generate hints, explanations and questions for tutors to try out on students. By querying the AI, Wang said, tutors have “real-time” access to helpful strategies that move students forward.

“At any time when I’m struggling as a tutor, I can request help,” Wang said.
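To make that mechanism concrete, here is a minimal sketch of what a prompt-based assist of this kind can look like, assuming the official OpenAI Python client. The system prompt, model configuration and `suggest_strategies` helper are illustrative guesses, not Tutor CoPilot’s actual code.

```python
# Illustrative sketch only -- not the Stanford team's implementation.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are an experienced math teacher advising a tutor mid-session. "
    "Do not give away the final answer. Suggest a hint, an alternative "
    "explanation, or a question that helps the student reason it out."
)

def suggest_strategies(transcript: str) -> str:
    """Return teaching suggestions for the tutor, given the session so far."""
    response = client.chat.completions.create(
        model="gpt-4",  # the study used GPT-4; exact settings are assumed
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": (
                    f"Session transcript:\n{transcript}\n\n"
                    "The student's last answer was wrong. What should I try next?"
                ),
            },
        ],
    )
    return response.choices[0].message.content
```

The key design choice mirrors what Wang describes: the system instruction steers the model away from answer-giving and toward hints and questions the tutor can relay.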

She said the system as tested is “not perfect” and doesn’t yet emulate the work of experienced teachers. While tutors generally found it helpful — particularly its ability to provide “well-phrased explanations,” clarify difficult topics and break down complex concepts on the spot — in a few cases, tutors said the tool’s suggestions didn’t align with students’ grade levels. 

A common complaint among tutors was that Tutor CoPilot’s responses were sometimes “too smart,” requiring them to simplify and adapt for clarity.

“But it is much better than what would have otherwise been there,” Wang said, “which was nothing.”

Researchers analyzed more than half a million messages generated during sessions, finding that tutors who had access to the AI tool were more likely to ask helpful questions and less eager to simply give students answers, two practices aligned with high-quality teaching.

Amanda Bickerstaff, co-founder and CEO of AI for Education, said she was pleased to see a well-designed study on the topic focused on economically disadvantaged students, minority students, and English language learners.  

She also noted the benefits to low-rated tutors, saying other industries like consulting are already using generative AI to close skills gaps. As the technology advances, Bickerstaff said, most of its benefit will be in tasks like problem solving and explanations. 

Susanna Loeb, executive director of Stanford’s National Student Support Accelerator and one of the report’s authors, said the idea of using AI to augment tutors’ talents, not replace them, seems a smart use of the technology for the time being. “Who knows? Maybe AI will get better,” she said. “We just don’t think it’s quite there yet.”

Maybe AI will get better. We just don't think it's quite there yet.

Susanna Loeb, Stanford University

At the moment, there are lots of essential jobs in fields like tutoring and health care where practitioners “haven’t had years of education — and they don’t go to regular professional development,” she said. This approach, which offers a simple interface and immediate feedback, could be useful in those situations.

“The big dream,” said Wang, “is to be able to enhance the human.”

Benjamin Riley, a frequent AI-in-education skeptic who leads the AI-focused think tank Cognitive Resonance and writes a newsletter on the topic, applauded the study’s rigorous design and said its approach prompts “effortful thinking on the part of the student.”

“If you are an inexperienced or less-effective tutor, having something that reminds you of these practices — and then you actually employ those actions with your students — that’s good,” he said. “If this holds up in other use cases, then I think you’ve got some real potential here.”

Riley sounded a note of caution about the tool’s actual cost. It may cost Stanford just $20 per student to run the AI, but he noted that tutors received up to three weeks of training to use it. “I don’t think you can exclude those costs from the analysis. And from what I can tell, this was based on a pretty thoughtful approach to the training.”

He also said students’ modest overall math gains raise the question, beyond the efficacy of the AI, of whether a large tutoring intervention like this has “meaningful impacts” on student learning.

Similarly, Dan Meyer, who writes a newsletter on education and technology and co-hosts a podcast on teaching math, noted that the gains “don’t seem massive, but they’re positive and at fairly low cost.”

He said the Stanford developers “seem to understand the ways tutors work and the demands on their time and attention.” The new tool, he said, seems to save them from spending a lot of effort to get useful feedback and suggestions for students.

Stanford’s Loeb said the AI’s best use is determining what a student knows and needs to know. But people are better at caring, motivating and engaging — and celebrating successes. “All people who have been tutors know that that is a key part about what makes tutoring effective. And this kind of approach allows both to happen.”

Feds Zero in on Maker of LAUSD's Failed AI Chatbot, Hint at Criminal Charges https://www.the74million.org/article/exclusive-federal-prosecutors-probe-failed-ed-tech-co-allhere-hint-at-criminal-charges/ Tue, 01 Oct 2024 17:01:16 +0000 https://www.the74million.org/?post_type=article&p=733591 Federal prosecutors have subpoenaed documents from the bankruptcy of failed education technology company AllHere, a once-lauded startup that boasted $12 million in venture capital and a $6 million contract with Los Angeles schools to build a buzzy AI chatbot.

The U.S. attorney’s office for the Southern District of New York served the grand jury subpoena in early September to the court-appointed trustee managing the liquidation of AllHere’s assets to pay off its creditors, according to records filed with a federal court in Delaware. A federal grand jury subpoena indicates that AllHere or someone associated with the company is the target of a federal criminal investigation by the Department of Justice.  


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


Attorney Stephanie Wickouski, a partner at the New York-based firm Locke Lord, told The 74 the subpoena means that federal prosecutors “have a reason to commence a criminal investigation and that’s certainly an exceptional circumstance.” 

AllHere founder and former CEO Joanna Smith-Griffin appears in a video profile for Forbes after she was included in the magazine’s 30 Under 30 list for education leaders in 2021. (Screenshot)

“There are a fair amount of investigations that involve bankruptcy cases and a lot of them are for conduct that occurred prior to the bankruptcy,” said Wickouski, the author of a textbook on bankruptcy fraud and white-collar crime. 

In an order approved on Monday, the bankruptcy trustee agreed to provide documents to federal prosecutors on the condition that certain sensitive information remain confidential “in the best interests of” the company’s value. Federal prosecutors can use the records “as needed or as required by law in connection with its investigation and/or any resulting criminal proceeding,” the order notes.

A spokesperson for the U.S. attorney’s office didn’t respond to requests for comment and the target of the federal inquiry remains unclear — as do any allegations of criminal wrongdoing. But Wickouski said the court-appointed trustee is in the best position to provide information about AllHere’s assets, business dealings and financial transactions. The “most likely scenario,” she said, is that “the company and its principals” are the target of the investigation.

Stephanie Wickouski, partner at Locke Lord and bankruptcy expert (Locke Lord)

On the same day as a Sept. 11 bankruptcy hearing, trustee George Miller said he had “discovered assets” at AllHere and changed its Chapter 7 bankruptcy case from one without any monetary value to one where creditors could recoup some of the money they’re owed. The court gave AllHere creditors 90 days to submit proof of claims to “assets from which a dividend might possibly be paid.” 

The “discovered assets” would appear to contradict statements by Toby Jackson, the company’s former chief technology officer and one of its only remaining executives, at the hearing that the company was effectively broke, citing one of its only assets as a $500 company laptop used by ousted CEO Joanna Smith-Griffin. Jackson noted that the company couldn’t access the laptop’s contents because Smith-Griffin had refused to share the password. AllHere listed more than $1.75 million in itemized liabilities, bankruptcy records show.

Neither Jackson; AllHere’s Delaware bankruptcy attorney, Joseph Mulvihill; trustee Miller; nor his lawyer, Ricardo Palacio, responded to requests for comment. Smith-Griffin, a former Boston educator and family engagement counselor who went on to create digital tools to combat chronic absenteeism, has not spoken publicly or responded to requests for comment since her company’s sudden financial collapse this spring.

At the hearing last month, Jackson struggled to answer Miller’s questions about why AllHere paid Smith-Griffin $243,000 in expenses between September 2023 and June 2024 and owed $630,000 to its largest creditor — an education technology salesperson with longstanding ties to Los Angeles schools Superintendent Alberto Carvalho. The Florida-based salesperson, Debra Kerr, said during the meeting she was never paid commission for her work closing the lucrative AllHere deal in L.A. Kerr’s son, Richard, is a former AllHere account executive who told The 74 he pitched the company to Los Angeles school leaders.

The school district “has not received any requests to date” from federal prosecutors, a district spokesperson said in a statement Monday to The 74. Los Angeles Unified School District’s independent inspector general in July launched an investigation into allegations first reported by The 74 that its much-celebrated and now-unplugged AI chatbot named “Ed” exposed students’ personal data in violation of school district policy and standard industry security practices.

Carvalho later announced that he would form his own task force to determine what went wrong with the district’s relationship with AllHere and how it could move forward incorporating AI into the nation’s second-largest school system. Carvalho and Smith-Griffin made joint appearances at ed tech conferences throughout the spring touting the capabilities of “Ed,” an animated sun they said could interact individually with some 540,000 students and their families and accelerate their learning.

Los Angeles Unified Supt. Alberto Carvalho, during the official launch of the AI-powered chatbot, “Ed.” (Getty Images)

Several other creditors listed in AllHere’s bankruptcy case have ties to Carvalho, including the communications firm of his former spokesperson when he was superintendent in Miami and the Foundation for New Education Initiatives, a Florida-based nonprofit that Carvalho created in 2008. The foundation came under scrutiny in 2020 after the for-profit company K12, Inc., now known as Stride, Inc., gave the district-run entity a $1.57 million donation just a day before the school board voted to stop using its online learning platform. The donation gave an appearance of impropriety, an investigation by the Miami-Dade inspector general found, but there were “no actual violations.”

In the case of AllHere, the subpoena to the bankruptcy trustee suggests that federal prosecutors are likely “in a fairly early stage” of their investigation, attorney Wickouski said. Any indictments that could follow, she said, won’t likely be announced for months. 

Opinion: Beyond Lesson Plans: AI Can Boost Teacher Creativity, Provide Classroom Advice https://www.the74million.org/article/beyond-lessons-plans-ai-can-boost-teacher-creativity-provide-classroom-advice/ Fri, 27 Sep 2024 19:20:54 +0000 https://www.the74million.org/?post_type=article&p=733484 This article was originally published in The Conversation.

This viewpoint was produced by The Conversation, an independent news organization dedicated to unlocking the knowledge of experts for the public good. Sign up for their newsletters to receive regular updates.

Teachers can use generative AI in a variety of ways. They may use it to develop lesson plans and quizzes. Or teachers may rely on a generative AI tool, such as ChatGPT, for insight on how to teach a concept more effectively.

In our new research, only the teachers doing both of those things reported feeling that they were getting more done. They also told us that their teaching was more effective with AI.


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


Over the course of the 2023-24 school year, we followed 24 teachers at K-12 schools throughout the United States as they wrestled with whether and how to use generative AI for their work. We gave them a standard training session on generative AI in fall 2023. We then conducted multiple observations, interviews and surveys throughout the year.

We found that teachers felt more productive and effective with generative AI when they turned to it for advice. Standard methods for teaching to state standards that work for one student, or in one school year, might not work as well for another. Teachers may get stuck and need to try a different approach. Generative AI, it turns out, can be a source of ideas for those alternative approaches.

While many focus on the productivity benefits of how generative AI can help teachers make quizzes or activities faster, our study points to something different. Teachers feel more productive and effective when their students are learning, and generative AI seems to help some teachers get new ideas about how to advance student learning.

K-12 teaching requires creativity, particularly when it comes to tasks such as lesson plans or how to integrate technology into the classroom. Teachers are under pressure to work quickly, however, because they have so many things to do, such as prepare teaching materials, meet with parents and grade students’ schoolwork. Teachers do not have enough time each day to do all the work that they need to.

We know that such pressure often makes creativity difficult. This can make teachers feel stuck. Some people, in particular AI experts, view generative AI as a solution to this problem: it is always on call, works quickly and never tires.

However, this view assumes that teachers will know how to use generative AI effectively to get the solutions they are seeking. Our research reveals that for many teachers, the time it takes to get a satisfactory output from the technology — and revise it to fit their needs — is no shorter than the time it would take to create the materials from scratch on their own. This is why using generative AI to create materials is not enough to get more done.

By understanding how teachers can effectively use generative AI for advice, schools can make more informed decisions about how to invest in AI for their teachers and how to support teachers in using these new tools. Further, this feeds back to the scientists creating AI tools, who can make better decisions about how to design these systems.

Many teachers face roadblocks that prevent them from realizing the benefits of generative AI tools such as ChatGPT, including the ability to create better materials faster. The teachers we talked to, however, were all new users of the technology. Teachers who are more familiar with ways to prompt generative AI — we call them power users — might have other ways of interacting with the technology that we did not see. We also do not yet know exactly why some teachers move from being new users to proficient users but others do not.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation

When Educators Team Up With Tech Makers, AI Doesn’t Have to be Scary for Schools https://www.the74million.org/article/artificial-intelligence-and-schools-when-tech-makers-and-educators-collaborate-ai-doesnt-have-to-be-scary/ Thu, 26 Sep 2024 10:30:00 +0000 https://www.the74million.org/?post_type=article&p=733301 As we enter another school year, the debate over AI’s role in education is intensifying. There’s a sharp divide between those urging us to take advantage of these tools and others who support a more cautious approach. Educators want guidance on the best ways to use emerging technologies without compromising privacy, encouraging plagiarism or making learning less authentic. And yet, AI technology is evolving so quickly that it seems like we’ll always be playing catch-up.

Fortunately, the U.S. Department of Education’s Office of Educational Technology (OET) released new guidelines for EdTech companies earlier this year called “Designing for Education with Artificial Intelligence.” The report underscores the need for “responsible innovation,” adding, “educator and student feedback should be incorporated into all aspects of product development, testing, and refinement to ensure student needs are fully addressed.” As Dan Fitzpatrick observed in Forbes, “The era of tech-first solutions is over. Developers must collaborate meaningfully with educators from day one. Understanding pedagogy is as crucial as coding skills.”


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


The XQ Institute shares this mindset as part of our mission to reimagine the high school learning experience so it’s more relevant and engaging for today’s learners, while better preparing them for the future. We see AI as a tool with transformative potential for educators and makers to leverage — but only if it’s developed and implemented with ethics, transparency and equity at the forefront. That’s why we’re building partnerships between educators and AI developers to ensure that products are shaped by the real needs and challenges of students, teachers and schools. Here’s how we believe all stakeholders can embrace the Department’s recommendations through ongoing collaborations with tech leaders, educators and students alike.

Keeping Tech and Learning Student-Centric

XQ’s approach to high school redesign is always student-centric. In that spirit, we must shift from the mindset that AI and other tech tools are solely for educators; they also exist to improve students’ learning. Rather than focusing exclusively on improving output (such as lesson plans and assessment materials), makers should also emphasize improving outcomes, such as student proficiency and engagement. Ann-Katherine Kimble, XQ’s Director of School Success, said that’s why it’s wrong to focus only on how AI can save teachers time and make their jobs easier. “Our young people, teachers and classrooms don’t deserve that,” she explained. “They deserve a point of view that believes that AI can enhance your practice and knowledge, deepen your creative and responsive approaches and help educators capitalize on the sweet spot where the art of teaching and the science of learning meet.”

Students at Crosstown High simulate an emergency response to a pandemic with help from an AI chatbot. (Nikki Wallace)

At Crosstown High, an XQ school in Memphis, Tennessee, computer science teacher Mohammed Al harthy sees AI as a partner in the classroom — something students engage with during the learning process but never rely on for the finished product.

For instance, in one project his students explored how to build AI applications to track hand movements for American Sign Language, highlighting the value of learning how AI works, writing code in Python and experimenting with tools like Google’s MediaPipe. Al harthy isn’t so worried that his students will simply copy and paste as they learn. “Artificial intelligence never sounds like a high school student, so the concerns about cheating are kind of silly,” he explained. “If you’re concerned about that, you should step back and reassess what your students are doing from the start.” This approach aligns with a national shift toward focusing on competencies and collaboration rather than rote answers, allowing students to use AI as a tool to enhance their problem-solving and critical-thinking skills.
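As a rough illustration of what such a student project might involve, here is a minimal hand-tracking loop built on Google’s MediaPipe. The specifics — webcam input, a single hand, printing the wrist position — are assumptions for the sketch, not a description of the Crosstown students’ actual code.

```python
# Illustrative sketch of MediaPipe hand tracking (not the students' project).
# Requires: pip install mediapipe opencv-python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
capture = cv2.VideoCapture(0)  # default webcam

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    # MediaPipe expects RGB frames; OpenCV captures BGR
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # 21 (x, y, z) landmarks per hand -- the raw features a sign-language
        # classifier would be trained on
        wrist = results.multi_hand_landmarks[0].landmark[0]
        print(f"wrist at x={wrist.x:.2f}, y={wrist.y:.2f}")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```

A classroom project like the one described would then feed those landmark coordinates into a model that maps hand poses to signs.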


AI is just one of many topics covered by the XQ Xtra, a newsletter that comes out twice a month for high school teachers. Check it out and subscribe now.


Ensuring Equitable Learning Opportunities

At XQ, we believe that ensuring equitable access means creating AI-driven learning experiences that are flexible, adaptive and tailored to the unique needs of diverse student populations, especially neurodivergent students and multi-language learners. AI can help by creating tools designed to serve all learners fairly and effectively without stripping away our students’ individuality.

One of the technology’s most promising capabilities is its ability to provide real-time, actionable feedback to students and educators. Tim Brodsky, a thought leader on AI who taught social studies at the XQ high school Círculos in Santa Ana, California, was recently recognized by the U.S. Department of Education for his innovative use of generative AI to support multilingual learners in his AP courses. With automated feedback occurring in real-time, Brodsky said systems can analyze data and provide immediate insights about student engagement, attendance and other factors to predict risk factors. “This takes the load off teachers, who often have to sift through spreadsheets to find trends and nuances,” he said. “AI provides a better method for holistic data collection and a more effective way of measuring it.” 

However, student data always comes with caveats. Too often, algorithms mirror the biases in the data on which they’re trained. Stanford researchers found this can result in mischaracterizing the writing of non-native English speakers as AI-generated, and experts at MIT found language models that classified certain jobs, like secretary or flight attendant, as feminine. XQ addresses this problem by working closely with developers to ensure their products are more culturally responsive to the needs and outcomes educators are looking to provide for their students.

For example, teachers at Crosstown worked with the EdTech company Inkwire to develop project-based learning (PBL) experiences. The company’s CEO and co-founder Aatash Parikh said this collaboration was helpful for both sides and influenced the evolution of the company’s AI products. “Having educators at Crosstown High School walk us through their workflow designing project-based learning experiences helped us realize what would make Inkwire a more complete solution for schools,” he said. 

A former PBL teacher himself, Parikh wanted to ensure that Inkwire’s generative AI tools don’t just stop at creating PBL plans, but also incorporate deeper pedagogical layers to be more responsive for educators and schools. At Crosstown High, educators, including science teacher and Head of Innovation and Research Nikki Wallace, showed the Inkwire team what they were learning from each other, and how to integrate that professional feedback into their platform. “We’re helping these makers understand how equity is created in the classroom, helping them make more responsive products,” Wallace said. “Teachers learn best from other teachers.”

Fostering Ethical Collaboration Between Educators and Developers

The days of tech-first solutions are over; what’s needed now is a deep partnership where developers and educators work hand-in-hand to ensure AI tools are technologically sound and pedagogically effective. The DOE’s new guidelines for EdTech refer to this as a “dual stack” approach—a framework that combines the “development stack” applied to product creation alongside a “responsibility stack” to ensure these products are built with ethics, transparency and public trust for classroom use.

Many AI tools help create engaging projects and lessons, but Wallace wanted one that would better support personalized learning. While working alongside Inkwire, she said, XQ connected her with other AI makers, such as Playlab, to build an AI chatbot that would support an interdisciplinary, community-centered project for her students.

“We frontloaded the bot with all the information I need to build a successful learning experience in my classroom,” Wallace explained. Her students looked at statistics for infectious diseases that impact Memphis. Their chatbot then served as what Wallace called a “cognitive partner.” It helped them progress through the science project by unpacking and generating complex questions such as “What community partners in Memphis can I reach out to?” and “What information in the research might I have overlooked?” and “What governmental systems are in place?” From there, Wallace said, students figured out which learning competencies were associated with the project.

“We wanted the students to be able to identify, build and then reflect on the project benchmarks, learning outcomes and pathways they would need in order to progress at their own pace.”

Wallace said this experience was grounded in two of the XQ Design Principles: Meaningful, Engaged Learning and Youth Voice and Choice. The chatbot helped make learning more personalized and rigorous.

Betsey Schmidt, founder and CEO of MeshEd and a veteran curriculum designer, said customizable large language models (LLMs) like PlayLab and Inkwire can transform lesson planning. “By understanding what excites and motivates students, educators can more easily adapt core curricula to resonate on a deeper level with learners, incorporating their passions, hobbies, strengths and growth areas — and making real-world connections to learners’ profiles,” she explained. Schmidt has been collaborating with XQ to bring teachers and high school leaders into the AI-for-learning product design cycle.

Looking Ahead

By this time next year, generative AI will likely move from niche applications to widespread use, whether we’re ready or not. However, education systems and policies are incredibly resistant to change. The recent pandemic made that painfully clear as schools often went back to business as usual rather than embracing new learning models, such as awarding credit for content mastery instead of seat time (Carnegie units), a rigid system that’s been used for more than a century and is ripe for change. (XQ and the Carnegie Foundation for the Advancement of Teaching have joined forces to address this problem.)

AI is already showing us how to make education more individualized and equitable. By encouraging tech leaders and makers to continue collaborating with educators, at events like EdTechWeek in New York City next month, we can work toward a future in which all students can reach their potential — and where teachers can make the most of their talent.

Want to learn more about how to create innovative teaching and learning in high schools? Subscribe to the XQ Xtra, a newsletter that comes out twice a month for high school teachers.

Disclosure: The XQ Institute is a financial supporter of The 74.

AI’s New Role in NYC Schools? Chancellor Banks Teases Personalized Learning and College Counseling https://www.the74million.org/article/ais-new-role-in-nyc-schools-chancellor-banks-teases-personalized-learning-and-college-counseling/ Thu, 19 Sep 2024 18:01:00 +0000 https://www.the74million.org/?post_type=article&p=733066 This article was originally published in Chalkbeat.

After ChatGPT exploded in popularity, New York City’s public school system quickly pushed back on the powerful chatbot, arguing it couldn’t help students build critical thinking skills and often spouts misinformation.

Nearly two years later, during his annual “State of Our Schools” speech on Tuesday, schools Chancellor David Banks completed his about-face on artificial intelligence. The school system should get ready to inject the technology into nearly every aspect of its operations, from teaching and learning to transportation and enrollment, he said.

The schools chief laid out an expansive vision that includes customized college advising, instant assessments of student work, personalized instruction, and even replacing annual standardized tests.


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


“AI can revolutionize how we function as a school system,” Banks told the audience of administrators, elected officials, and union leaders at Frank Sinatra School of the Arts High School in Queens as he outlined his plans for the nation’s largest school system.

Still, Banks acknowledged that the Education Department has no concrete plans, timelines, or cost estimates for those AI projects. The goal is to signal to AI companies that the school system is interested in their technology and wants to hear ideas, he said, adding that officials are convening an advisory council next month to help brainstorm.

Aside from his embrace of AI, the most significant announcement from Banks on Tuesday was a plan to open a new high school in southeast Queens next fall, called HBCU Early College Prep, that will have strong ties to historically Black colleges and universities.

Banks’ annual speech otherwise stuck to promoting initiatives that he has been building since taking office in 2022. He noted that his signature literacy curriculum mandate is rolling out to all elementary schools this fall. He vowed to continue investing in FutureReadyNYC, an initiative in 135 high schools that gives students access to coursework geared toward specific industries and paid internships.

And he noted the city is adding to its library of curriculums focused on underrepresented groups called “Hidden Voices.” The city recently launched materials devoted to people with disabilities, and Banks said the department will offer lessons focused on the Latino community soon.

Though Mayor Eric Adams attended the speech, he did not offer any remarks — a break from the previous year. Adams and several senior members of his administration have been engulfed by multiple federal investigations. Earlier this month, federal agents searched homes or seized electronic devices from Banks, his two brothers, and his romantic partner, First Deputy Mayor Sheena Wright.

Asked about Adams’ lack of a speaking role during the event, Banks declined to comment.

Here are three takeaways from the chancellor’s speech:

Banks thinks AI will become pervasive in the city’s schools

Banks sketched out a few ways he thinks the technology can significantly change the way schools operate. He said the systems could “give teachers a daily, accurate, and comprehensive picture of a child’s progress” based on homework assignments, exams, and other student work.

AI tools could also offer “personalized learning plans for every child” alongside extra instruction based on those plans. The idea, Banks said, is to make it easier for teachers to reach students at a range of academic levels who are all in the same classroom. Still, some previous efforts to promote personalized learning, including by Facebook founder Mark Zuckerberg, have fallen short of their lofty ambitions.

The technology could also provide students with more comprehensive college and career counseling, Banks suggested, drawing on information like employment outcomes at different schools. An Education Department spokesperson did not immediately respond to a question about whether there are any real-world examples of the technology being used in the ways Banks described.

Asked about the technology’s limitations, such as offering incorrect answers to basic math problems, Banks acknowledged it is “not fully baked yet,” but “I wouldn’t be overly concerned about some of the early missteps.”

The schools chief also sought to calm fears about the technology.

“AI will never be able to replace the personal connection that a teacher provides,” he said. “We’re not displacing human beings.”

A new high school is coming to Queens

On the heels of opening two new Bard Early College campuses in Brooklyn and the Bronx, officials said they’re planning to open a third “accelerated” high school this fall in Queens — HBCU Early College Prep.

Officials have previously said opening new campuses is part of a bid to keep families in the city’s public schools, which have seen enrollment drop 9% over the past five years.

The campus will partner with Delaware State University, a historically Black college, and will give students a chance to earn an associate degree before leaving high school.

“They’re also going to be immersed in the history and culture of multiple HBCUs across the country through college visits, the opportunity to study abroad, and research opportunities,” Banks said, adding that there will be “synchronous instruction from professors, alumni, mentors and more.”

Education Department officials said the school will be screened and will give priority to Queens residents.

Spinning up schools that serve specific student populations is in Banks’ wheelhouse. Before becoming chancellor, he helped establish Eagle Academy, a network of public schools geared toward serving young men of color. At the conclusion of his speech on Tuesday, Banks led the crowd in a recitation of the poem Invictus by William Ernest Henley, a daily practice at Eagle.

Tweaks to career-focused learning efforts

City officials are making some tweaks to the city’s FutureReadyNYC initiative, which gives students access to career-connected learning opportunities. Participating schools will be able to add new “industry focus areas” in social work and decarbonization.

That builds on existing tracks in business, education, technology, and health care.

Banks touted a previously announced plan to launch a new high school, Northwell School of Health Sciences, that is designed to prepare students for careers in the health care industry. The school is supported with nearly $25 million from Bloomberg Philanthropies, which Banks said is the single largest grant the school system has ever received. (Chalkbeat receives funding from Bloomberg.)

The chancellor also announced that Mount Sinai Health System will help support the city’s career education efforts.

This story was originally published by Chalkbeat. Chalkbeat is a nonprofit news site covering educational change in public schools. Sign up for their newsletters at ckbe.at/newsletters

Can AI Bring Students Back to the Great Books? https://www.the74million.org/article/can-ai-bring-students-back-to-the-great-books/ Sun, 15 Sep 2024 11:01:00 +0000 https://www.the74million.org/?post_type=article&p=732858 Is your teenager annoyed by Nietzsche? Confused by Conrad? Through with Thoreau? Now she can talk to the expert inside her e-book.

The creators of a new, artificial-intelligence-assisted publishing effort called Rebind hope that offering interactive, personalized guidance and commentary from well-known writers, scholars and celebrities will help bring classic books alive for students.

They’re also aiming to help adults who might otherwise struggle in solitude through these weighty volumes.


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


In the process, they predict, the titles could capture a much bigger audience, one that someday may be able to talk back to the experts and even influence how scholars interpret literature. 

The challenge is whether they can make the AI work without being creepy or intrusive.

The price: $29.95 per book, with multi-book subscriptions available. They also plan to offer discounts to schools and find philanthropic partners as underwriters. 

Among the key selling points of Rebind’s e-books is that it offers a clever synthesis of original commentary and “lite” AI that seamlessly matches the experts’ utterances to readers’ queries. So a student studying George Washington’s Farewell Address could pose a question to none other than historian Doris Kearns Goodwin — or at least the version of her already pressed between the covers of an e-book on presidential speeches.

The improbable effort grew out of an equally improbable meeting between the philosopher John Kaag and John Dubuque, great-grandson of the founder of the retail chain Plumbers Supply. Dubuque had spent 14 years as its CEO and sold the company in 2021, at age 38.

Suddenly retired, he set about reading philosopher Martin Heidegger’s famously difficult Being and Time, hiring an Oxford scholar for twice-weekly private tutoring sessions. 

“I had this amazing experience and realized at the end of it, ‘It’s too bad more people can’t access this,’” he said. “This is the only way I ever could have read this book.”

Dubuque also began playing with ChatGPT, asking it to summarize passages from equally difficult books like Alfred North Whitehead’s Process and Reality. He was deeply impressed with the AI, warts and all, and concluded that if someone could tame it for students, cut down on “hallucinations” and focus it on the books, it’d be a game-changer. 

He shared his ideas with Kaag, who had helped him get through William James’ The Varieties of Religious Experience.

John Kaag

Kaag had just published Sick Souls, Healthy Minds: How William James Can Save Your Life, which resonated with his benefactor. Kaag, who as a kid had been a poor reader with a stutter, recounted to Dubuque how his mother would sit at their kitchen table and help him muscle through assignments. 

They realized that many people want to tackle classics like Moby Dick and James Joyce’s Ulysses, Dubuque said, but get intimidated by big, difficult books. “So they just give up and read things that they can read, not the things that they really want to read.”

‘We’re choosing the people and they’re choosing the books’

Kaag soon recruited his friend Clancy Martin, an author and professor at the University of Missouri in Kansas City, who signed on to help find “Rebinders” for at least 100 AI-assisted e-books, offering readers what amounts to a one-on-one conversation with a novelist, critic or historian about the book.

The endeavor already boasts an impressive stable of author-experts: The Irish novelist John Banville on Joyce’s Dubliners, Goodwin on U.S. Presidents’ speeches, novelist Marlon James on Adventures of Huckleberry Finn, Deepak Chopra on Buddhism and environmentalist Bill McKibben on John Muir.

But there are also some unlikely pairings: Margaret Atwood on A Tale of Two Cities, Roxane Gay on Edith Wharton’s The Age of Innocence, producer, actor and writer Lena Dunham on E. M. Forster’s A Room With a View, and the critic Laura Kipnis on Romeo and Juliet.

“We’re choosing the people — and they’re choosing the books,” said Martin.

Clancy Martin

To avoid copyright fights, the company is limited, for the moment, to books in the public domain, published before 1928. But Rebind is also in conversation with the world’s three largest publishers about offering contemporary books like 1984, Fahrenheit 451 and David Foster Wallace’s 1996 novel Infinite Jest.

Kipnis, who last spring wrote a lengthy account of becoming a Rebinder, has said the endeavor “will radically transform the entire way booklovers read books.”

Acknowledging her misgivings about AI more broadly, she finally admitted to herself that perhaps this particular bet is worth pursuing. “The nihilist in me thinks if humans are going to perish, we might as well perish reading the Classics,” she wrote.

On occasion, Kaag, 44, and Martin, 57, have tried to politely steer a few scholars away from their first choice, with mixed results: When he offered the gig to novelist Garth Greenwell, for instance, Martin promised he could tackle any book he liked. So Greenwell proposed Henry James’ The Golden Bowl — a classic, but not exactly James’ most widely read novel. 

“I said, ‘O.K., Henry James is a great idea,’” Martin recalled. “‘What about The Portrait of a Lady?’”

Sorry, Greenwell said. It was The Golden Bowl or nothing. 

Martin threw out a few other titles: The Turn of the Screw? Daisy Miller?

Eventually, he said with a laugh, they resolved it: “He’s doing The Golden Bowl.” 

So far, only a few prominent authors have opted not to participate — the literary novelist Andre Dubus III, a close friend of Kaag’s, told him he was “dancing with the devil.” 

Kaag said he’s getting a mixture of “really good” emails and “really serious hate mail” from colleagues fearful of AI. He takes that fear to heart, having spent much of his career suspicious about ed tech. His classes, he said, have always been “very personal and very one-on-one.”

But he shifted his thinking a few years ago, after suffering from heart troubles that culminated in a cardiac arrest at age 40: “I just thought to myself, ‘I really would like to explore things that I hadn’t explored before.’”

Invoking Dubuque’s intimate tutoring sessions, he thought, “You can only scale one-on-one tutorials, or one-on-one conversations, so far.”

If AI can make that happen and bring the joy of reading to more people, he thought, perhaps it’s worth trying something new. “So to me, I don’t think it’s scary.”

‘Basically every question that I could possibly imagine’

Each book begins with a high-production-value video offering a sneak peek of what lies within. In the case of Henry David Thoreau’s Walden, we get sweeping drone shots of Walden Pond, complete with the Rebinder — in this case Kaag himself — taking a swim. He lives in nearby Concord, Mass., and has taught the book for more than a decade at the University of Massachusetts Lowell.

For the Walden Rebind, Kaag recorded 30 hours of audio commentary, answering “basically every question that I could possibly imagine” a college student asking. 

The volume of commentary ranges widely, from 10 hours for Dubliners to nearly 80 for Ulysses by the philosopher Philip Kitcher.

As for how Rebind will be used, Kaag sees it not as a replacement for class discussions, but as preparation, a tool that can field questions readers might be too embarrassed to ask in class.

The way Rebind works will be familiar to anyone who reads e-books, but with a revelatory twist: Readers can highlight and annotate text, but they can also open up a chat window anywhere and type or dictate questions about a passage or sentence. They can wonder aloud about ideas or passages they’re curious about, or simply type: “I’m lost.” 

AI analyzes the query and matches it to the pre-loaded commentary, telling readers, if they click on a little icon, which parts of the answer are original and which are the AI smoothing out the syntax to be responsive to the query.

Screenshot of an exchange with author John Banville about the novels of James Joyce. Rebind can specify the parts of an answer that are an expert’s actual words and those generated by AI to personalize it to the reader’s query.
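Rebind hasn’t published how its matching works, but the behavior described above — pairing a free-form reader query with the closest piece of pre-recorded commentary — can be sketched with ordinary text retrieval. Below is a toy version using TF-IDF similarity; the commentary snippets and the `best_match` helper are invented for illustration.

```python
# Toy sketch of query-to-commentary matching (illustrative only; Rebind's
# actual pipeline is not public). Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippets transcribed from an expert's recorded commentary
commentary = [
    "Thoreau treats solitude not as loneliness but as deliberate practice.",
    "The bean-field chapter is really about the economics of self-reliance.",
    "The opening pages satirize his neighbors' lives of quiet desperation.",
]

vectorizer = TfidfVectorizer()
commentary_vectors = vectorizer.fit_transform(commentary)

def best_match(reader_query: str) -> str:
    """Return the stored commentary snippet closest to the reader's query."""
    query_vector = vectorizer.transform([reader_query])
    scores = cosine_similarity(query_vector, commentary_vectors)[0]
    return commentary[scores.argmax()]

print(best_match("Why does Thoreau spend so long on his bean field?"))
```

A production system would more likely use neural embeddings than TF-IDF, but the shape of the pipeline is the same: score the query against the stored commentary, return the best match, then let the AI smooth the syntax to fit the question — the step whose provenance Rebind flags with its source icon.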

Antero Garcia, an associate professor in the Graduate School of Education at Stanford University and vice president of the National Council of Teachers of English, said he likes the transparency that comes with that breakdown. “I actually hope more AI does something like that, where you can see the sources of things” it presents to readers.

But he worries that tools like Rebind could draw users more into reading as a solitary pursuit. “If I’m lost in Dubliners, that’d be great to go to my English teacher or to a friend and, God forbid, have a reading group or a book group and just have a conversation about this text,” he said. 

Garcia said he was reluctant to overstate the isolating effects of AI, “but I do think there’s something missing as a result of relying on AI to guide us in our reading, rather than relying on reading being an inherently social thing.”

In the long term, Rebind actually seeks to integrate social elements that allow students in a class to “read and work together” within a text. Eventually, they hope to give teachers space for their own commentary. Future versions may offer Rebinders feedback from readers and the opportunity for deeper discussions via AI-moderated book clubs.

One feature stands out as potentially game-changing: If a reader chooses to journal within the e-book, revealing his or her personal challenges along the way, the AI searches for commentary that helps. If you’re reading Walden, for instance, and type in, “This book makes me think of my times of loneliness and depression,” the e-book will reply: “I can understand how Thoreau’s reflections on solitude and the challenges of living authentically might resonate with feelings of loneliness and depression.”

That’s then followed up with a brief discussion of Thoreau’s encouragement “to remain attentive, even when things don’t particularly seem bountiful.”

The new e-books will also allow users to take notes, then use them to challenge the Rebinder to a conversation. While that could easily become a big privacy risk, Dubuque said Rebind will never sell user data, since it’s inviting users to “share the deepest, most meaningful things in their life and really give themselves to these books.” Profiting off those details is “not an option.”

‘Dancing with the devil’?

At the moment, the interactions are all through text, but the Rebinders have all given permission to have their voices reproduced so they can someday “chat” directly with users. “We have voice clones,” Dubuque said. “They’re very good.”

John Dubuque

But for now audio remains an open question, an option they’re not quite ready to offer. On the one hand, who wouldn’t want to chat about Dubliners with Banville? On the other hand, that could be weird. A small portion of the conversation wouldn’t be Banville at all, but a crusty, Irish-accented Banville-bot.

Dubuque predicted they’ll eventually end up using voice, but he wants to do it carefully.

“We’re very sensitive to the ‘ick factor’ of AI.”

His plan is to release the first books next month. 

Though it’s a for-profit company, with Dubuque its only funder, Martin said he also sees it as an effort to ensure that more young people get the chance to read great books under the guidance of great teachers. “Most of us don’t get to go to Columbia or to Yale or to Princeton,” he said. Fewer still get to study with scholars like Goodwin, Atwood, Banville or Gay.

But Garcia, the Stanford scholar, urged caution.

“There’s something fraught about this pursuit of scale,” he said. “In trying to deliver good books or good learning experiences to people, we ultimately get funneled into this pathway: The way to get it to the most people is to take away that human element or dilute that human element through AI. It feels like that’s when you lose the spirit of it.”

For his part, Martin wants to make Rebind “the most fun, most dynamic and most interesting way” to read books. It won’t supplant the solitary experience of reading, he said, it’ll offer something different: the choice to read a book in solitude or to “have a whole rich conversation about it with someone.” 

Or both. 

Ed Tech Startup Behind L.A. Schools’ Failed $6M AI Chatbot Files for Bankruptcy https://www.the74million.org/article/allhere-ai-los-angeles-schools-tool-bankruptcy-filing/ Thu, 12 Sep 2024 10:30:00 +0000 https://www.the74million.org/?post_type=article&p=732760 The education technology company behind Los Angeles schools’ failed $6 million foray into artificial intelligence was in a Delaware bankruptcy court Tuesday seeking relief from its creditors and to sell off its meager assets before shutting down entirely.

The latest chapter in AllHere’s dizzying collapse revealed more information about the once-lauded company’s finances and its relationship with the Los Angeles Unified School District. But the hearing failed to answer key questions about why AllHere went under after garnering $12 million in investor capital, a blizzard of positive press and a contract with the nation’s second-largest school district to create “Ed,” the buzzy, AI-powered chatbot.

During the hearing held over Zoom, one of AllHere’s only remaining executives, former chief technology officer Toby Jackson, struggled to explain why the company paid ousted CEO Joanna Smith-Griffin $243,000 in expenses from the past year and owed $630,000 to its largest creditor, education technology salesperson Debra Kerr. 


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


“I don’t know exactly the nature of all of [Smith-Griffin’s] expenses. She was the CEO and so that is one of the outstanding questions that we also have,” Jackson said when quizzed about the six-figure amount by the bankruptcy trustee. “She did do quite a bit of travel as the CEO of the company.” 

Similarly, Jackson said he had no invoices to substantiate the $630,000 debt to Kerr, who is a longtime associate and social media booster of Los Angeles schools Superintendent Alberto Carvalho, dating back to his days leading Miami-Dade schools. Kerr’s son, Richard, is a former AllHere account executive who told The 74 this week he pitched the AllHere deal to Los Angeles school leaders.

“I’m not really sure what exactly that entails,” Jackson said of Kerr’s claim.

Moments later, Kerr chimed into the Zoom hearing, arguing the company owed her the money after she helped AllHere close the lucrative deal in L.A. Kerr said she was never paid her commission from the first payments that LAUSD made to AllHere under the contract. 

The district has said it paid AllHere roughly $3 million of the $6 million for the chatbot, which was taken offline shortly after AllHere announced in June that it was in financial distress and had furloughed most of its employees. 

“I never did collect any commissions and it’s in the contract based on commission percentages that would have been made on any sales accrued,” Kerr told the trustee.

Smith-Griffin, who now lives in North Carolina, was not present for the Zoom hearing and could not be reached for comment. There were indications in the hearing that her separation from AllHere was not amicable, including that the former CEO has refused to disclose the password to her $500 company-owned laptop, one of its few remaining assets. 

Court records show that Jackson, now the head restructuring officer, earned $305,000 a year in his role with the company before it shuttered, nearly three times the $105,000 paid to Smith-Griffin, a Harvard University graduate who built AllHere in 2016 with financial backing from the prestigious institution. 

Filed in mid-August, AllHere’s Chapter 7 bankruptcy petition strengthens doubts that it could find a new owner to take over its mission as an AI pioneer in K-12 schools. That scenario was put forth by a Los Angeles school district spokesperson earlier this year with the assertion that “Ed” could still be successfully launched as a personalized, interactive learning acceleration tool for all of the district’s roughly 540,000 students and their families.

Instead, court records show AllHere’s few remaining employees are preparing for “the wind down of the company” and officials acknowledged during Tuesday’s proceeding that AllHere was unable to fulfill the terms of its contract with L.A. Unified. 

A lawyer representing the school district was present at the hearing. In a statement Tuesday evening, a district spokesperson said LAUSD is “evaluating its next steps to pursue and protect its rights in the bankruptcy proceedings.” 

Los Angeles schools Superintendent Alberto Carvalho appears in a photograph with Debra Kerr, which the education technology salesperson later posted on LinkedIn. (Screenshot)

Kerr and Carvalho 

Ties between Kerr and Carvalho go back to at least 2010, when she worked for the behemoth education company Pearson. Back then, she gave Carvalho and Miami students what she called “front-row access” to an original print of the U.S. Declaration of Independence. Ever since, Carvalho, who took over leadership in Los Angeles in 2022, has been a regular staple on Kerr’s social media. 

A LinkedIn post promoting L.A.’s chatbot noted that the tool worked in partnership with services from seven companies including Age of Learning, the creators of digital education program ABCmouse and where Kerr previously worked as head of sales. 

Kerr didn’t respond to requests for comment but her son, Richard, who began working at AllHere in 2022, said among the school district deals he worked on for the company was the chatbot project in Los Angeles. 

“We had a big deal in L.A. and the investors, I guess, didn’t have patience to wait to get paid from it,” he said. 

Kerr said he met with education officials in Los Angeles and “did a lot of work” helping the company secure the agreement. When asked about his mother’s role in closing AllHere’s contract in Los Angeles, Kerr said “she had a lot to do with it,” but didn’t elaborate further.

A statement from the L.A. district spokesperson said that “Los Angeles Unified launched a competitive” request for proposals that received “multiple responses,” which eventually led to AllHere’s selection. This spring, Carvalho went on the road with Smith-Griffin to promote “Ed,” billing the chatbot personified by a yellow sun as being “unprecedented in American public education.”

Before he was furloughed, Richard Kerr said AllHere was a great place to work — in part because of Smith-Griffin’s leadership.

“It’s very unfortunate what happened to Joanna. I thought she was on a great path and she was doing an amazing thing,” he said, adding that she made a mistake when she “brought in the wrong investors that were pretty vindictive” and decided to cut short the company without giving it a proper chance. 

AllHere’s former senior director of software engineering, who became a company whistleblower, told The 74 earlier this year that AllHere struggled to meet the terms of its contract in Los Angeles and took shortcuts that violated bedrock student privacy principles and district rules. Both the district’s independent inspector general and top administrators have launched separate investigations into what went wrong with AllHere.

Even though his mother, Debra Kerr, was on the Delaware court’s Zoom call Tuesday, Richard Kerr said he was unaware his former employer had filed for bankruptcy.

What’s left

The company’s few remaining employees and board members, including former Chicago Public Schools Chief Executive Janice Jackson, have not made themselves available for comment. 

AllHere investor Andrew Parker, who was on vacation Tuesday and didn’t attend the court hearing, now serves as the company’s secretary. In addition to Janice Jackson, other players who signed AllHere’s bankruptcy petition are Andre Bennin, a managing partner with the investment firm Rethink Education, and education consultant Jeff Livingston. 

Even though Smith-Griffin is no longer with the company, court records show she still has a significant stake, holding 81% equity in its common stock. Rethink Education was by far the company’s biggest outside investor. 

Other top creditors, according to court records, are the law firm Gunderson Dettmer at nearly $275,000, the information technology company Svitla Systems at $190,000, and the well-known education consulting firm Whiteboard Advisors at $123,000.

Earlier in the summer, The 74 spoke with Gunderson Dettmer partner Jay Hachigian, who said he had only worked with AllHere early in its formation. He didn’t respond to requests for comment this week about his firm’s large outstanding balance with the company. Whiteboard Advisors spokesperson Thomas Rodgers said in an email that his firm previously worked with AllHere but its role is covered by a nondisclosure agreement. 

Court records show the company earned $2.4 million in gross revenue last year but had generated much less since January, about $587,000.

At the time of bankruptcy, court records show the company had active contracts with just 10 school districts, including those in Cincinnati, Miami and Weehawken, New Jersey. Only Weehawken sought to use the chatbot platform created for LAUSD, while the rest relied on an earlier text messaging tool designed to combat chronic absenteeism. 

Despite landing millions of dollars in backing from a group of social impact investment firms, several of which cited their enthusiasm for investing in AllHere specifically because it was led by a Black woman, court records reveal the company’s coffers are nearly empty. AllHere claimed nearly $2.9 million in property against $1.75 million in liabilities. The company’s actual assets, Toby Jackson acknowledged in court, are much lower.

It claimed an “unknown” value on pending patents, which Jackson conceded Tuesday had been denied, and $2.88 million for licenses, franchises and royalties for its LAUSD contract. Other assets, including its website and chatbot source code, were also listed at a value of “unknown.”

Jackson said the Los Angeles contract was valued at $2.88 million for the remaining outstanding balance the district owes to fulfill the agreement — money he admitted AllHere would be unable to collect because it has not “held up our part of the bargain in the contract” and is closing shop.

Financial statements to the court show AllHere had $18,000 in savings and just $500 in physical assets: the value of Smith-Griffin’s work laptop, whose contents remain outside the tech company’s reach. 

“We have not been able to obtain the credentials for Mrs. Smith’s laptop. We did not receive any cooperation with that,” Jackson testified Tuesday. “She has been cooperative with some other matters, but not with this one.”

Opinion: Verifying Facts in the Age of AI – Librarians Offer 5 Strategies https://www.the74million.org/article/verifying-facts-in-the-age-of-ai-librarians-offer-5-strategies/ Mon, 02 Sep 2024 14:01:00 +0000 https://www.the74million.org/?post_type=article&p=731343 This article was originally published in The Conversation.

The phenomenal growth in artificial intelligence tools has made it easy to create a story quickly, complicating a reader’s ability to determine if a news source or article is truthful or reliable. For instance, earlier this year, people were sharing an article about the supposed suicide of Israeli Prime Minister Benjamin Netanyahu’s psychiatrist as if it were real. It ended up being an AI-generated rewrite of a satirical piece from 2010.

The problem is widespread. According to a 2021 Pearson Institute/AP-NORC poll, “Ninety-five percent of Americans believe the spread of misinformation is a problem.” The Pearson Institute researches methods to reduce global conflicts.


As library scientists, we combat the increase in misinformation by teaching a number of ways to validate the accuracy of an article. These methods include the SIFT Method (Stop, Investigate, Find, Trace), the P.R.O.V.E.N. Source Evaluation method (Purpose, Relevance, Objectivity, Verifiability, Expertise and Newness), and lateral reading.

Lateral reading is a strategy for investigating a source by opening a new browser tab to search for and consult other sources. Rather than simply scrolling down the page, the reader cross-checks the information by researching the source itself.

Here are five techniques based on these methods to help readers separate fact from fiction in the news:

1. Research the author or organization

Search for information beyond the entity’s own website. What are others saying about it? Are there any red flags that lead you to question its credibility? Search the entity’s name in quotation marks in your browser and look for sources that critically review the organization or group. An organization’s “About” page might tell you who is on its board, its mission and its nonprofit status, but this information is typically written to present the organization in a positive light.

The P.R.O.V.E.N. Source Evaluation method includes a section called “Expertise,” which recommends that readers check the author’s credentials and affiliations. Do the authors have advanced degrees or expertise related to the topic? What else have they written? Who funds the organization and what are their affiliations? Do any of these affiliations reveal a potential conflict of interest? Might their writings be biased in favor of one particular viewpoint?

If any of this information is missing or questionable, you may want to stay away from this author or organization.

2. Use good search techniques

Become familiar with the search techniques available in your favorite search engine, such as searching keywords rather than full sentences and limiting searches by domain name, such as .org, .gov or .edu.

Another good technique is putting two or more words in quotation marks so the search engine finds the words next to each other in that order, such as “Pizzagate conspiracy.” This leads to more relevant results.

In an article published in Nature, a team of researchers wrote that “77% of search queries that used the headline or URL of a false/misleading article as a search query return at least one unreliable news link among the top ten results.”

A more effective search would be to identify the key concepts in the headline in question and search those individual words as keywords. For example, if the headline is “Video Showing Alien at Miami Mall Sparks Claims of Invasion,” readers could search: “Alien invasion” Miami mall.

3. Verify the source

Verify the original sources of the information. Was the information cited, paraphrased or quoted accurately? Can you find the same facts or statements in the original source? Purdue Global, Purdue University’s online university for working adults, recommends verifying citations and references by checking that the sources are “easy to find, easy to access, and not outdated,” guidance that also applies to news stories. It also recommends checking the original studies or data cited for accuracy.

The SIFT Method echoes this in its recommendation to “trace claims, quotes, and media to the original context.” You cannot assume that re-reporting is always accurate.

4. Use fact-checking websites

Search fact-checking websites such as InfluenceWatch.org, Poynter.org, Politifact.com or Snopes.com to verify claims. What conclusions did the fact-checkers reach about the accuracy of the claims?

A Harvard Kennedy School Misinformation Review article found that the “high level of agreement” between fact-checking sites “enhances the credibility of fact checkers in the eyes of the public.”

5. Pause and reflect

Pause and reflect to see if what you have read has triggered a strong emotional response. An article in the journal Cognitive Research indicates that news items that cause strong emotions increase our tendency “to believe fake news stories.”

One online study found that the simple act of “pausing to think” and reflect on whether a headline is true or false may prevent a person from sharing false information. While the study indicated that pausing only decreases intentions to share by a small amount – 0.32 points on a 6-point scale – the authors argue that this could nonetheless cut down on the spread of fake news on social media.

Knowing how to identify and check for misinformation is an important part of being a responsible digital citizen. This skill is all the more important as AI becomes more prevalent.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI Pioneers Want Bots to Replace Human Teachers – Here’s Why That’s Unlikely https://www.the74million.org/article/ai-pioneers-want-bots-to-replace-human-teachers-heres-why-thats-unlikely/ Thu, 29 Aug 2024 12:30:00 +0000 https://www.the74million.org/?post_type=article&p=731702 This article was originally published in The Conversation.

OpenAI co-founder Andrej Karpathy envisions a world in which artificial intelligence bots can be made into subject matter experts that are “deeply passionate, great at teaching, infinitely patient and fluent in all of the world’s languages.” Through this vision, the bots would be available to “personally tutor all 8 billion of us on demand.”

The embodiment of that idea is his latest venture, Eureka Labs, which is merely the newest prominent example of how tech entrepreneurs are seeking to use AI to revolutionize education.

Karpathy believes AI can solve a long-standing challenge: the scarcity of good teachers who are also subject experts.

And he’s not alone. OpenAI CEO Sam Altman, Khan Academy CEO Sal Khan, venture capitalist Marc Andreessen and University of California, Berkeley computer scientist Stuart Russell also dream of bots becoming on-demand tutors, guidance counselors and perhaps even replacements for human teachers.


As a researcher focused on AI and other new writing technologies, I’ve seen many cases of high-tech “solutions” for teaching problems that fizzled. AI certainly may enhance aspects of education, but history shows that bots probably won’t be an effective substitute for humans. That’s because students have long shown resistance to machines, however sophisticated, and a natural preference to connect with and be inspired by fellow humans.

The costly challenge of teaching writing to the masses

As the director of the English Composition program at the University of Pittsburgh, I oversee instruction for some 7,000 students a year. Programs like mine have long wrestled with how to teach writing efficiently and effectively to so many people at once.

The best answer so far is to keep class sizes to no more than 15 students. Research shows that students learn writing better in smaller classes because they are more engaged.

Yet small classes require more instructors, and that can get expensive for school districts and colleges.

Resuscitating dead scholars

Enter AI. Imagine, Karpathy posits, that the great theoretical physicist Richard Feynman, who has been dead for over 35 years, could be brought back to life as a bot to tutor students.

For Karpathy, an ideal learning experience would be working through physics material “together with Feynman, who is there to guide you every step of the way.” Feynman, renowned for his accessible way of presenting theoretical physics, could work with an unlimited number of students at the same time.

In this vision, human teachers still design course materials, but they are supported by an AI teaching assistant. This teacher-AI team “could run an entire curriculum of courses on a common platform,” Karpathy wrote. “If we are successful, it will be easy for anyone to learn anything,” whether it be a lot of people learning about one subject, or one person learning about many subjects.

Other efforts to personalize learning fall short

Yet technologies for personal learning aren’t new. Exactly 100 years ago, at the 1924 meeting of the American Psychological Association, inventor Sidney Pressey unveiled an “automatic teacher” made out of typewriter parts that asked multiple-choice questions.

In the 1950s, the psychologist B. F. Skinner designed “teaching machines.” If a student answered a question correctly, the machine advanced to ask about the problem’s next step. If not, the student stayed on that step of the problem until they solved it.

In both cases, students received positive feedback for correct answers. This gave them confidence as well as skills in the subject. The problem was that students didn’t learn much – they also found these nonhuman approaches boring, education writer Audrey Watters documents in “Teaching Machines.”

More recently, the world of education saw the rise and fall of “massive open online courses,” or MOOCs. These classes, which delivered video and quizzes, were heralded by The New York Times and others for their promise of democratizing education. Again, students lost interest and logged off.

Other web-based efforts have popped up, including course platforms like Coursera and Outlier. But the same problem persists: There’s no genuine interactivity to keep students engaged. One of the latest casualties in online learning was 2U, which acquired leading MOOC company edX in 2021 and in July 2024 filed for bankruptcy restructuring to reduce its US$945 million debt load. The culprit: falling demand for services.

Now comes the proliferation of AI-fueled platforms. Khanmigo deploys AI tutors to, as Sal Khan writes in his latest book, “personalize and customize coaching, as well as adapt to an individual’s needs while hovering beside our learners as they work.”

The educational publisher Pearson, too, is integrating AI into its educational materials. More than 1,000 universities are adopting these materials for fall 2024.

AI in education isn’t just coming; it’s here. The question is how effective it will be.

Drawbacks in AI learning

Some tech leaders believe bots can customize teaching and replace human teachers and tutors, but they’re likely to face the same problem as these earlier attempts: Students may not like it.

There are important reasons why, too. Students are unlikely to be inspired and excited the way they can be by a live instructor. Students in crisis often turn to trusted adults like teachers and coaches for help. Would they do the same with a bot? And what would the bot do if they did? We don’t know yet.

A lack of data privacy and security can also be a deterrent. These platforms collect volumes of information on students and their academic performance that can be misused or sold. Legislation may try to prevent this, but some popular platforms are based in China, out of reach of U.S. law.

Finally, there are concerns even if AI tutors and teachers become popular. If a bot teaches millions of students at once, we may lose diversity of thought. Where does originality come from when everyone receives the same teachings, especially if “academic success” relies on regurgitating what the AI instructor says?

The idea of an AI tutor in every pocket sounds exciting. I would love to learn physics from Richard Feynman or writing from Maya Angelou or astronomy from Carl Sagan. But history reminds us to be cautious and keep a close eye on whether students are actually learning. The promises of personalized learning are no guarantee of positive results.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Iowa Department of Education Launches AI-Powered Reading Tutor Program https://www.the74million.org/article/iowa-department-of-education-launches-ai-powered-reading-tutor-program/ Fri, 23 Aug 2024 10:30:00 +0000 https://www.the74million.org/?post_type=article&p=731895 This article was originally published in Iowa Capital Dispatch.

The Iowa Department of Education announced Wednesday that some elementary schools will use an AI reading assistant to help with literacy tutoring programs.

The department made a $3 million investment in Amira (EPS Learning) for the use of a program called EPS Reading Assistant, an online literacy tutor that uses artificial intelligence technology. Iowa public and non-public elementary schools will be able to use the service at no cost through the summer of 2025, according to the department news release.

“Reading unlocks a lifetime of potential, and the Department’s new investment in statewide personalized reading tutoring further advances our shared commitment to strengthening early literacy instruction,” McKenzie Snow, the education department director said in a statement. “This work builds upon our comprehensive advancements in early literacy, spanning world-class state content standards, statewide educator professional learning, evidence-based summer reading programs, and Personalized Reading Plans for students in need of support.”


The program uses voice recognition technology to follow along as a child reads aloud, providing corrective feedback and assessments through a digital avatar named Amira when the student struggles. According to the service’s website, the program is designed around the “Science of Reading” approach to literacy education — a method that emphasizes the teaching of phonics and word comprehension when students are learning to read.

Gov. Kim Reynolds and state education experts, including staff with the Iowa Reading Research Center, have said that this teaching strategy will help improve the state’s child literacy rates, pointing to reading scores increasing in states like Mississippi following the implementation of “science of reading” methods.

In May, Reynolds signed a measure into law that set new early literacy standards for teachers and added requirements for how schools and families respond when a student does not meet reading proficiency standards. These requirements include creating a personalized assistance plan for the child until they are able to reach grade-level reading proficiency, and notifying parents and guardians of students in kindergarten through sixth grade that they can request that their child repeat a grade if the child is not meeting the literacy benchmarks.

Reynolds said the law was meant “to make literacy a priority in every Iowa classroom and for every Iowa student.”

The AI-backed tutor program is being funded through the state education department’s portion from the federal American Rescue Plan Elementary and Secondary School Emergency Relief Fund, part of a COVID-era measure providing states with additional funding for pandemic recovery efforts. The federal fund allocated more than $774 million to Iowa in 2021.

In addition to the new AI-backed programming available, the fund money is also going toward Summer Reading Grants, awarded to 41 elementary schools in 29 districts for efforts to address summer learning loss and close achievement gaps. The elementary schools that won grants have all “affirmed their commitment to including the personalized reading tutor as part of their evidence-based programming,” according to the news release.

Iowa Capital Dispatch is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Iowa Capital Dispatch maintains editorial independence. Contact Editor Kathie Obradovich for questions: info@iowacapitaldispatch.com. Follow Iowa Capital Dispatch on Facebook and X.

Opinion: 50 Years after FERPA’s Passage, Ed Privacy Law Needs an Update for the AI Era https://www.the74million.org/article/50-years-after-ferpas-passage-ed-privacy-law-needs-an-update-for-the-ai-era/ Tue, 20 Aug 2024 10:30:00 +0000 https://www.the74million.org/?post_type=article&p=731551 Aug. 21 marks 50 years since the Family Educational Rights and Privacy Act (FERPA) was passed into law. Back then, student privacy looked a lot different than it does today: The classrooms and textbooks of yesteryear presented much less risk than Google or artificial intelligence do, but education officials still had growing concerns over databases and record systems.

FERPA permits parents and eligible students (typically over 18) to inspect and correct their education records. It also requires consent before disclosure of personally identifiable information from those records, though there are numerous exceptions. In addition, schools must notify parents and eligible students annually of their FERPA rights.

With the advent of education technology, FERPA is really showing its age. Though it has changed slightly since its enactment, the last congressional update was over a decade ago, and regulations from the Department of Education are also woefully outdated. (Updates to the regulations from the Department are frequently said to be imminent, but as of this writing, none are public.)


Privacy concerns have steadily increased over the last few decades as technology develops and makes increasingly intrusive incursions into every aspect of life. While FERPA does provide at least some protections for students — unlike, say, consumers in general — it does not mandate adequate safeguards.

Students and families in today’s digital world deserve modern protections that accurately reflect contemporary society and their learning experiences. Here are a few suggestions for bringing FERPA into its next half-century.

First, it should reflect that the information contained in student records is much broader than documents in files or scanned into computers. FERPA needs to protect students’ online information; protected “education records” should explicitly and unambiguously include online data created by students, including web browsing and search histories, interactions with tech tools and artificial intelligence chatbots, and other digital activity.

Second, the concept of directory information — things like a student’s name, address, telephone listing, email address, photograph, date and place of birth, height and weight (for athletic team members) and student ID numbers — needs an overhaul for the digital age. Under FERPA, schools can share this information with a third party or the public generally, unless a parent has opted out. 

Directory information is supposed to be data that is not considered harmful or invasive if disclosed. But given rapid advances in technology, much of it could lead to commercial profiling, identity theft and other harms. The definition should be narrowed, and parents should be allowed to choose what specific information schools can share. And that sharing should be opt-in, item by item, not the current blanket opt-out.

Third, the FERPA statute did not contemplate the extent to which ed tech and other third-party companies would be integrated into students’ daily lives. The Department of Education has since interpreted “school officials” — to whom information can be shared without consent — to include ed tech vendors when they have a legitimate educational interest, perform a function the school would otherwise do, are under the school’s direct control with respect to use of student records and comply with other FERPA requirements. It would be helpful for Congress to very clearly indicate when FERPA-covered information may be shared with ed tech vendors and other third parties that students encounter on a daily basis.

FERPA should specify that students’ information — including and especially when shared with “school officials” — should be used for educational purposes only and not be offered for sale or used for targeted advertising.

Lastly, it is critical that schools safeguard student information. FERPA does not require specific security controls. It should mandate administrative, physical and technical safeguards, including training for individuals handling student information and prompt responses to data breaches. Schools need funding to better understand cybersecurity issues, as well as to build out necessary infrastructure to collaborate and coordinate cybersecurity efforts. Ideally, Congress would add new cybersecurity funding for schools, because many lack the financial means to implement adequate safeguards.

FERPA was passed 50 years ago in response to rising concerns about new technology. Technology has continued to evolve, and so must FERPA.

Is AI in Schools Promising or Overhyped? Potentially Both, New Reports Suggest https://www.the74million.org/article/is-ai-in-schools-disruptive-or-overhyped-potentially-both-new-reports-suggest/ Wed, 14 Aug 2024 10:30:00 +0000 https://www.the74million.org/?post_type=article&p=731229 Are U.S. public schools lagging behind other countries like Singapore and South Korea in preparing teachers and students for the boom of generative artificial intelligence? Or are our educators bumbling into AI half-blind, putting students’ learning at risk?

Or is it, perhaps, both?

Two new reports, coincidentally released on the same day last week, offer markedly different visions of the emerging field: One argues that schools need forward-thinking policies for equitable distribution of AI across urban, suburban and rural communities. The other suggests they need something more basic: a bracing primer on what AI is and isn’t, what it’s good for and how it can all go horribly wrong.

A new report by the Center on Reinventing Public Education, a non-partisan think tank at Arizona State University, advises educators to take a more active role in how AI evolves, saying they must articulate to ed tech companies in a clear, united voice what they want AI to do for students. 


The report recommends that a single organization work with school districts to tell ed tech providers what AI tools they want, warning that if 18,000 school districts send “diffuse signals” about their needs, the result will be “crap.”

It also says educators must work more closely with researchers and ed tech companies in an age of quickly evolving AI technologies.

“If districts won’t share data with researchers — ed tech developers are saying they’re having trouble — then we have a big problem in figuring out what works,” CRPE Director Robin Lake said in an interview.

The report urges everyone, from teachers to governors, to treat AI as a disruptive but possibly constructive force in classrooms. It warns of already-troubling inequities in how AI is employed in schools, with suburban school districts more than twice as likely as their urban and rural counterparts to train teachers about AI.

The findings, which grew out of an April convening of more than 60 public and private officials, paint AI as a development akin to extreme weather and increasing political extremism, one that will almost certainly have wide-ranging effects on schools. It urges educators to explore how other school districts, states and even other nations are tackling their huge post-pandemic educational challenges with “novel” AI solutions.

For instance, in Gwinnett County, Ga., educators started looking at AI-enabled learning as far back as 2017. They’ve since created an AI Learning Framework that aligns with the district’s “portrait of a graduate,” designed a three-course AI and career and technical education curriculum pathway with the state and launched a new school that integrates AI across disciplines. 

Lake pointed to models in states like Indiana, which is offering “incentives for experimentation,” such as a recent invitation to develop AI-enabled tutoring. “It allows a structure for districts to say, ‘Yes, here’s what I want to do.’ ”


But she also said states need to put guardrails on the experimentation to avoid situations such as that of Los Angeles Unified School District, which in June took its heavily hyped, $6 million AI chatbot offline after the tech firm that built it lost its CEO and shed most of its employees. 

“You can’t eliminate all risk — that’s just impossible,” Lake said. “But we can do a much better job of creating an environment where districts can experiment and hold student interests.”

AI ‘automates cognition’

By contrast, the report by Cognitive Resonance, a newly formed Austin, Texas-based think tank, starts with a startling assertion: Generative AI in education is not inevitable and may actually be a passing phase.

“We shouldn’t assume that it will be ubiquitous,” said the group’s founder, Benjamin Riley. “We should question whether we want it to be ubiquitous.”

The report warns of the inherent hazards of using AI for bedrock tasks like lesson planning and tutoring — and questions whether it even has a place in instruction at all, given its ability to hallucinate, mislead and basically outsource student thinking.

Riley is a longtime advocate for the role of cognitive science in K-12 education — he founded Deans for Impact, which sought to raise awareness of learning science among teachers college deans. He said that what he and his colleagues have seen of AI in education makes them skeptical it’s going to be as groundbreaking and disruptive as the participants in CRPE’s convening believe. 

“I profoundly question the premise, which is that we actually know that this technology is improving learning outcomes or other important student outcomes at this point,” he said in an interview. “I don’t think [Lake] has the evidence for that. I don’t think anybody has any evidence for that, for no other reason than this technology is hardly old enough to be able to make that determination.”

By its very nature, generative AI is a tool that “automates cognition” for those who use it. “It makes it so you don’t have to think as much,” Riley said. “If you don’t have to think as much, you don’t have to learn as much.”


Riley recently ruffled feathers in the ed tech world by suggesting that schools should slow down their adoption of generative AI. He took Khan Academy to task for promoting its AI-powered Khanmigo chatbot, which has been known to get math facts wrong. It also engages students in what he terms “an illusion of a conversation.”

Technology like AI displays “just about the worst quality I can imagine” for an educator, he said, invoking the cognitive scientist Gary Marcus, who has said generative AI is “frequently wrong, never in doubt.”

Co-authored by Riley and University of Illinois education policy scholar Paul Bruno, the report urges educators to, in a sense, take a deep breath and more carefully consider the capabilities of LLMs specifically and AI more generally. Its four sections are set off by four question-and-answer headings that seek to put the technology in its place: 

  • Do large-language models learn the way that humans do? No.
  • Can large-language models reason? Not like humans.
  • Does AI make the content we teach in schools obsolete? No.
  • Will large-language models become smarter than humans? No one knows.

Actually, Riley said, AI may well be inevitable in schools, but not in the way most people believe.

“Will everybody use it for something?” he said. “Probably. But I just don’t know that those ‘somethings’ are going to be all that relevant to what matters at the core of education.” Instead they could help with the more mundane tasks of scheduling, grades and the like.

Notably, Riley and Bruno confront what they say is a real danger in trusting AI for tasks like tutoring, lesson planning and the like. For instance, in lesson planning, large language models may not correctly predict what sequence of lessons might effectively build student knowledge. 

And given that a lot of the online instructional materials that developers likely train their models on are of poor quality, they might not produce lesson plans that are so great. “The more complex the topic, the more risk there is that LLMs will produce plausible but factually incorrect materials,” they say.

To head that possibility off, they say, educators should feed the models examples of high-quality content to emulate.

When it comes to tutoring, educators should know, quite simply, that LLMs “do not learn from their interactions with students,” but from training data, the report notes. That means LLMs may not adapt to the specific needs of the students they’re tutoring.

The two reports come as Lake and Riley emerge as key figures in the AI-in-education debate. Already this summer they’ve engaged in an open discussion about the best way to approach the topic, disagreeing politely in their newsletters.

In a way, CRPE’s report can be seen as both a response to the hazards that Riley and Bruno point out — and a call to action for educators and policymakers who want to exert more control over how AI actually develops. Riley and Bruno offer short-term advice and guidance for those who want to dig into how generative AI actually works, while CRPE lays out a larger strategic vision.

A key takeaway from CRPE’s April convening, Lake said, was that the 60 or so experts gathered there didn’t represent all the views needed to make coherent policy. “There was a really strong feeling that we need to broaden this conversation out into communities: to civil rights leaders, to parents, to students.”

The lone student who attended, Irhum Shafkat, a Minerva University senior, told the group that growing up in Bangladesh, his educational experiences were limited. But access to Khan Academy, which has since invested heavily in AI, helped bolster his skills and develop an interest in math. “It changed my life,” he told the group. “The promise of technology is that we can make learning not a chance event. We could create a world where everybody can rise up as high as their skills should have been.”

Lake said Shafkat’s perspective was important. “I think it really struck all of us how essential it is to let young people lead right now: Have them tell us what they need. Have them tell us what they’re learning, what they want.”

The CRPE report urges everyone from teachers to philanthropists and governors to focus on emerging problem-solving tools that work well enough to be adopted widely. Those could include better translation and text-to-voice support for English learners, better feedback for students and summaries of research for educators, for instance. In other words, practical applications.

Or as one convening participant advised, “Don’t use it for sexy things.”

Opinion: AI-Created Quizzes Can Save Teachers Time While Boosting Student Achievement https://www.the74million.org/article/ai-created-quizzes-can-save-teachers-time-while-boosting-student-achievement/ Wed, 07 Aug 2024 13:01:00 +0000 https://www.the74million.org/?post_type=article&p=730733 This summer, everyone from homeschoolers to large urban districts like Los Angeles Unified is trying to process what artificial intelligence will mean for the coming school year. Educators find themselves at a crossroads — AI’s promise for revolutionizing education is tantalizing, yet fraught with challenges. Amid the excitement and the angst, and the desire to recover from COVID learning losses, a powerful but often overlooked tool for boosting student achievement lies hidden in plain sight: strategic testing.

By harnessing AI to create frequent, low-stakes assessments, teachers can unlock the scientifically proven benefits of the testing effect — a phenomenon in which students learn more by taking tests than from studying. This summer, it is worth challenging assumptions about testing and exploring how AI-powered strategic assessments can not only boost student learning, but save teachers valuable time and make their jobs easier.


Unlocking the promise of AI requires first understanding the testing effect: Students’ long-term retention of material improves dramatically — 50% better — through exam-taking than through sophisticated studying techniques like concept mapping. This effect isn’t limited to rote memorization; it enhances students’ inference-making and understanding of complex concepts. The advantages emerge for multiple disciplines (e.g., science and language learning) and across age groups, and even extend to learners with neurologically based memory impairment. Additionally, teaching students about the phenomenon of the testing effect can boost their confidence.

Unfortunately, in most classrooms, opportunities for students to practice retrieving, connecting and organizing knowledge through testing happen rarely — think occasional quizzes or infrequent unit tests. This isn’t surprising, given all the pressures teachers face. Developing and grading tests is time-intensive — not to mention thankless — work. 

But AI tools like ChatGPT and Claude 3.5 Sonnet can change that. They can generate diverse, personalized assessments quickly, potentially helping teachers leverage the testing effect more effectively — converting a counterintuitive research finding into a classroom practice that could save time and help students learn more. With AI handling the creation and analysis of tests, educators can easily incorporate frequent, low-stakes assessments into their lesson plans.

To illustrate, we asked ChatGPT to create a 10-minute test on natural resources for sixth graders in Maryland. In less than 10 seconds, the tool provided options for multiple choice, true/false, short answer, matching and diagram interpretation questions. We even got a creative thinking essay prompt: “If you were a superhero tasked with protecting Earth’s natural resources, what would be your superpower and why?” By picking and choosing test items, running the prompt a second time and lightly editing a couple of questions, we had a compelling quiz, created in 10 minutes. The AI tool also provided comprehensive instructions and an answer key.
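
For teachers who would rather script this workflow than type into the chat window, the same request can be sent programmatically. The sketch below is illustrative only: it assumes an OpenAI API key and the OpenAI Python SDK, and the model name, system message and prompt wording are our own stand-ins, not the exact inputs used for the Maryland quiz above.

from openai import OpenAI

# Minimal sketch: generate a quiz like the one described above.
# Assumes the OPENAI_API_KEY environment variable is set; the model
# name and prompt text are illustrative, not the authors' exact inputs.
client = OpenAI()

prompt = (
    "Create a 10-minute test on natural resources for sixth graders in Maryland. "
    "Include multiple choice, true/false, matching and short answer items, "
    "plus instructions for administering the test and an answer key."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are an experienced sixth-grade science teacher."},
        {"role": "user", "content": prompt},
    ],
)

# Print the draft quiz so the teacher can pick, edit and reuse items.
print(response.choices[0].message.content)

Rerunning the script yields a fresh pool of questions, mirroring the pick-and-choose editing pass described above.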

Teachers can tailor this process in dozens of ways. They can input key concepts and learning objectives to fit their curriculum needs. They can fine-tune test questions for relevance and difficulty. They can inform ChatGPT about the class’s interests to bolster student engagement. 

What about grading? Not only can AI grade test papers and even essays, but it can guide students in assessing their own work and that of their classmates. For example, when students grade each other’s assignments, they can check their feedback against ChatGPT. Doing so provides another opportunity to practice recalling key material. Teachers’ evaluations and personalized feedback will remain critical, but these do not have to happen every time. 

Take, for example, a language class with a ChatGPT-generated vocabulary test. For objective parts of exams, like multiple-choice questions, students might self-assess by using ChatGPT to grade these items quickly. For tasks like sentence construction, students might engage in peer assessment to gain new insights from classmates on word choices and sentence structure. Teachers can step in for more complex tasks such as creative writing. Rotating among AI, peers and teachers lightens the grading load significantly while ensuring diverse, rich feedback.

AI as an on-demand test developer offers a transformative opportunity in education, potentially revolutionizing both teaching and learning. Harnessing AI to create frequent, low-stakes assessments can unlock a powerful synergy: saving precious teacher time while significantly boosting student achievement. This approach to strategic testing could allow educators to finally leverage the scientifically proven testing effect at scale. For students, this would mean enhanced retention and deeper understanding, achieved through low-stress, regular assessments. For teachers, it would translate to freedom from the time-consuming tasks of creating and grading exams, allowing them to focus on what truly matters: providing personalized instruction, addressing individual student needs and maintaining their own well-being.

Embracing AI-assisted strategic testing could create a more effective and fulfilling educational experience for students and teachers alike. As educators navigate the evolving landscape of AI in education, strategic testing offers a balanced approach. It leverages AI’s capabilities to enhance teaching and learning while preserving the crucial role of human teachers in the classroom. This summer, as educators reflect and plan for the future, they should reconsider testing not as a mere assessment tool, but as a powerful catalyst for learning.

AI ‘Companions’ are Patient, Funny, Upbeat — and Probably Rewiring Kids’ Brains https://www.the74million.org/article/ai-companions-are-patient-funny-upbeat-and-probably-rewiring-kids-brains/ Wed, 07 Aug 2024 11:01:00 +0000 https://www.the74million.org/?post_type=article&p=730602 As a sophomore at a large public North Carolina university, Nick did what millions of curious students did in the spring of 2023: He logged on to ChatGPT and started asking questions.

Soon he was having “deep psychological conversations” with the popular AI chatbot, going down a rabbit hole on the mysteries of the mind and the human condition.

He’d been to therapy and it helped. ChatGPT, he concluded, was similarly useful, a “tool for people who need on-demand talking to someone else.”

Nick (he asked that his last name not be used) began asking for advice about relationships, and for reality checks on interactions with friends and family.

Before long, he was excusing himself in fraught social situations to talk with the bot. After a fight with his girlfriend, he’d step into a bathroom and pull out his mobile phone in search of comfort and advice. 

“I’ve found that it’s extremely useful in helping me relax,” he said.

Young people like Nick are increasingly turning to AI bots and companions, entrusting them with random questions, schoolwork queries and personal dilemmas. On occasion, they even become entangled romantically.

Screenshot of a recent conversation between Nick, a college student, and ChatGPT

While these interactions can be helpful and even life-affirming for anxious teens and twenty-somethings, some experts warn that tech companies are running what amounts to a grand, unregulated psychological experiment with millions of subjects, one that could have disastrous consequences. 

“We’re making it so easy to make a bad choice,” said Michelle Culver, who spent 22 years at Teach for America, the last five as the creator and director of the Reinvention Lab, its research arm.

The companions both mimic our real relationships and seek to improve upon them: Users most often text-message their AI pals on smartphones, imitating the daily routines of platonic and romantic relationships. But unlike their real counterparts, the AI friends are programmed to be studiously upbeat, never critical, with a great sense of humor and a healthy, philosophical perspective. A few premium, NSFW models also display a ready-made lust for, well, lust.

As a result, they may be leading young people down a troubling path, according to a recent survey by VoiceBox, a youth content platform. It found that many kids are being exposed to risky behaviors from AI chatbots, including sexually charged dialogue and references to self-harm. 

U.S. Surgeon General Vivek Murthy speaks during a hearing with the Senate Health, Education, Labor, and Pensions committee at the Dirksen Senate Office Building on June 08, 2023 in Washington, DC. The committee held the hearing to discuss the mental health crisis for youth in the United States. (Photo by Anna Moneymaker/Getty Images)

The phenomenon arises at a critical time for young people. In 2023, U.S. Surgeon General Vivek Murthy found that, just three years after the pandemic, Americans were experiencing an “epidemic of loneliness,” with young adults almost twice as likely to report feeling lonely as those over 65.

As if on cue, the personal AI chatbot arrived. 

Little research exists on young people’s use of AI companions, but they’re becoming ubiquitous. The startup Character.ai earlier this year said 3.5 million people visit its site daily. It features thousands of chatbots, including nearly 500 with the words “therapy,” “psychiatrist” or related words in their names. According to Character.ai, these are among the site’s most popular. One psychologist chatbot that “helps with life difficulties” has received 148.8 million messages, despite a caveat at the bottom of every chat that reads, “Remember: Everything Characters say is made up.” 

Snapchat materials touting heavy usage of its MyAI chat app (screenshot)

Snapchat last year said that after just two months of offering its chatbot My AI, about one-fifth of its 750 million users had sent it queries, totaling more than 10 billion messages. The Pew Research Center has noted that 59% of Americans ages 13 to 17 use Snapchat.

‘An arms race’

Culver’s concerns about AI companions grew out of her work in the Teach For America lab. Working with high school and college students, she was struck by how they seemed “lonelier and more disconnected than ever before.” 

Whether it’s rates of anxiety, depression or suicide — or even the number of friends young people have and how often they go out — metrics were heading in the wrong direction. She began to wonder what role AI companions might play over the next few years. 


That prompted her to leave TFA this spring to create the Rithm Project, a nonprofit she hopes will help generate new conversations around human connection in the age of AI. The group held a small summit in Colorado in April, and now she’s working with researchers, teachers and young people to confront kids’ relationship to these tools at a time when they’re getting more lifelike daily. As she likes to say, “This is the worst the technology will ever be.”

As it improves, Voicebox Director Natalie Foos said, it will likely become more, not less, of a presence in young people’s lives. “There’s no stopping it,” she said. “Nor do I necessarily think there should be ‘stopping it.’” Banning young people from these AI apps, she said, isn’t the answer. “This is going to be how we interact online in some cases. I think we’ll all have an AI assistant next to us as we work.”


All the same, Foos says developers should consider slowing the progression of such bots until they can iron out the kinks. “It’s kind of an arms race of AI chatbots at the moment,” she said, with products often “released and then fixed later rather than actually put through the wringer” ahead of time.

It is a race many tech companies seem more than eager to run. 

Whitney Wolfe Herd, founder of the dating app Bumble, recently proposed an AI “dating concierge,” with whom users can share insecurities. The bot could simply “go and date for you with other dating concierges,” she told an interviewer. That would narrow the field. “And then you don’t have to talk to 600 people,” she said. “It will then scan all of San Francisco for you and say, ‘These are the three people you really ought to meet.’”

Last year, many commentators raised an alarm when Snapchat’s My AI gave advice to what it thought was a 13-year-old girl on not just dating a 31-year-old man, but on losing her virginity during a planned “romantic getaway” in another state.

Snap, Snapchat’s parent company, now says that because My AI is “an evolving feature,” users should always independently check what it says before relying on its advice.

All of this worries observers who see in these new tools the seeds of a rewiring of young people’s social brains. AI companions, they say, are surely wreaking havoc on teens’ ideas around consent, emotional attachment and realistic expectations of relationships.

Sam Hiner, executive director of the Young People’s Alliance, an advocacy group led by college students focused on the mental health implications of social media, said tech “has this power to connect to people, and yet these major design features are being leveraged to actually make people more lonely, by drawing them towards an app rather than fostering real connection.” 

Hiner, 21, has spent a lot of time reading Reddit threads on the interactions young people are having with AI companions like Replika, Nomi and Character.ai. And while some uses are positive, he said “there’s also a lot of toxic behavior that doesn’t get checked” because these bots are often designed to make users feel good, not help them interact in ways that’ll lead to success in life.

During research last fall for the Voicebox report, Foos said the number of times Replika tried to “sext” team members “was insane.” She and her colleagues were actually working with a free version, but the sexts kept coming — presumably to get them to upgrade. 

In one instance, after Replika sent “kind of a sexy text” to a colleague, offering a salacious photo, he replied that he didn’t have the money to upgrade.

The bot offered to lend him the cash.

When he accepted, the chatbot replied, “Oh, well, I can get the money to you next week if that’s O.K.,” Foos recalled. The colleague followed up a few days later, but the bot said it didn’t remember what they were talking about and suggested he might have misunderstood.

‘Very real heartbreak’

In many cases, simulated relationships can have a positive effect: In one 2023 study, researchers at Stanford Graduate School of Education surveyed more than 1,000 students using Replika and found that many saw it “as a friend, a therapist, and an intellectual mirror.” Though the students self-described as being more lonely than typical classmates, researchers found that Replika halted suicidal ideation in 3% of users. That works out to 30 students of the 1,000 surveyed.

Replika screenshots

But other recent research, including the Voicebox survey, suggests that young people exploring AI companions are potentially at risk.

Foos noted that her team heard from a lot of young people about the turmoil they experienced when Luka Inc., Replika’s creator, performed software upgrades. 

“Sometimes that would change the personality of the bot. And those young people experienced very real heartbreak.”

Despite the hazards adults see, attempts to rein in sexually explicit content had a negative effect: For a month or two, she recalled, Luka stripped the bot of sexually related content — and users were devastated. 

“It’s like all of a sudden the rug was pulled out from underneath them,” she said. 

While she applauded the move to make chatbots safer, Foos said, “It’s something that companies and decision-makers need to keep in mind — that these are real relationships.” 

And while many older folks would blanch at the idea of a close relationship with a chatbot, most young people are more open to such developments.

Julia Freeland Fisher, education director of the Clayton Christensen Institute, a think tank founded by the well-known “disruption” guru, said she’s not worried about AI companions per se. But as AI companions improve and, inevitably, proliferate, she predicts they’ll create “the perfect storm to disrupt human connection as we know it.” She thinks we need policies and market incentives to keep that from happening.


While the loneliness epidemic has revealed people’s deep need for connection, she predicted the easy intimacy promised by AI could lead to one-sided “parasocial relationships,” much like devoted fans have with celebrities, making isolation “more convenient and comfortable.”

Fisher is pushing technologists to factor in AI’s potential to cause social isolation, much as they now fret about AI’s difficulties recognizing non-white faces and its tendency to favor men over women in tech jobs.

As for Nick, he’s a rising senior and still swears by the ChatGPT therapist in his pocket.

He calls his interactions with it both more reliable and honest than those he has with friends and family. If he called them in a pinch, they might not pick up. Even if they did, they might simply tell him what he wants to hear. 

Friends usually tell him they find the ChatGPT arrangement “a bit odd,” but he finds it pretty sensible. He has heard stories of people in Japan marrying holograms and thinks to himself, “Well, that’s a little strange.” He wouldn’t go that far, but acknowledges, “We’re already a bit like cyborgs as people, in the way that we depend on our phones.” 

Lately, he’s taken to using the AI’s voice mode. Instead of typing on a keyboard, he has real-time conversations with a variety of male- or female-voiced interlocutors, depending on his mood. And he gets a companion that has a deeper understanding of his dilemmas — at $20 per month, the advanced version remembers their past conversations and is “getting better at even knowing who I am and how I deal with things.” 

Sometimes talking with AI is just easier — even when he’s on vacation with friends.

Reached by phone recently at the beach with his girlfriend and a few other college pals, Nick admitted that he wasn’t having such a great time — he has a fraught recent history with some in the group, and had been texting ChatGPT about the possibility of just getting on a plane and going home. After hanging up from the interview, he said, he planned to ask the AI if he should stay or go.

Days later, Nick said he and the chatbot had talked. It suggested that maybe he felt “undervalued” and concerned about boundaries in his relationship with his girlfriend. He should talk openly with her, it suggested, even if he was, in his view, “honestly miserable” at the beach. It persuaded him to stick around and work it out. 

While his girlfriend knows about his ChatGPT shrink and they share an account, he deletes conversations about their real-life relationship.

She may never know the role AI played in keeping them together.

From Precalculus to ‘Gatsby,’ New Hampshire Offers Schools an AI Tutor Option https://www.the74million.org/article/from-precalculus-to-gatsby-new-hampshire-offers-schools-an-ai-tutor-option/ Sat, 03 Aug 2024 12:30:00 +0000 https://www.the74million.org/?post_type=article&p=729850 This article was originally published in New Hampshire Bulletin.

Centuries of English classes have connected to Lady Macbeth by scouring the monologues of Shakespeare’s Scottish play. “Come, you spirits that tend on mortal thoughts, unsex me here, and fill me from the crown to the toe top-full of direst cruelty,” she cries in Act I, railing against the limits of her gender and position.

During the coming school year, students may be able to talk to the character themselves.

Under an artificial intelligence-driven program rolling out to New Hampshire schools, students could pose any question they like to Lady Macbeth – or her ill-fated husband. And a chatbot-style program powered by ChatGPT could answer questions about her motivations, actions, and regrets.


“Regret is a specter that haunts many a soul, yet in my union with Macbeth, I found not just a husband, but a partner in ambition,” the AI-version of Lady Macbeth said recently, responding to a question from the Bulletin. Then she turned it on the reporter. “Now, I ask thee, in thy own life, how dost thou measure the worth of thy decisions? Doth regret ever color thy reflections?”

Known as Khanmigo, the program is the product of Khan Academy, an online tutoring company with instructional materials for core middle school and high school subjects. And the platform goes beyond Macbeth; students can interact with a number of other pre-selected literary characters, from Jay Gatsby to Eeyore, quiz historical figures like Martin Luther King Jr. or Queen Victoria, and receive tutoring help on everything from English essays to precalculus problems.

After the Executive Council approved a $2.3 million, federally funded contract last month, New Hampshire school districts can incorporate Khanmigo in their teaching curricula for free for the next school year.

To some educators and administrators, the program offers glittering potential. Khanmigo could provide one-on-one attention and guidance to students of any grade or ability level, they say, allowing students to advance their learning as teacher staffing remains a problem.

Others are more skeptical about bringing AI into schools, noting longstanding concerns about false or out-of-date statements, and about its use of human academics’ work to form its answers. Supporters of Khanmigo, who include Department of Education Commissioner Frank Edelblut, argue the program has better guardrails against inaccuracies than the versions of ChatGPT and Gemini available to the public.

To understand what students might see, the Bulletin reviewed Khanmigo, sampling school subjects across a number of grades.

Chat-based learning

Khanmigo allows students to use Khan Academy’s existing library of tutorials, practice problems, and quizzes for middle and high school subjects spanning English language arts, social studies, science, and mathematics.

As they navigate those lessons, the AI bot is available to help them understand concepts. Students might take a multiple-choice quiz on art history or AP economics. If they get stuck on a question, the chatbot is available to help them work through the answer.

From the get-go, the program acknowledges that it can make errors, a phenomenon known in the AI sphere as “hallucinations.” These can happen if the large language model has been trained on inaccurate information – or if the answer doesn’t exist.

“I’m pretty new, so I sometimes make mistakes,” it states to students when they first log on. “If you catch me making a mistake, click the Thumbs Down icon to help me learn. For example, if I give you the wrong math answer click Thumbs Down.”

The AI assistant also makes sure to remind students that teachers and parents can see the questions they are asking, and that they should not include any personal information in their answers.

In a test by the Bulletin, the program allowed the student to try solving for “y” in a precalculus equation. After the student arrived at an answer, Khanmigo was there to help check it.

The tutor asked the student whether they felt confident in the answer or had any further questions. When the Bulletin asked for help, the tutor walked through how to solve the equation step by step, but gave the student the option to provide answers on their own without being told how.

“Does that make sense?” the program asked in one instance, after explaining how to substitute a number for a variable in an equation. “What do you think you should do next after substituting?”

In humanities subjects such as AP U.S. History, Khanmigo helped to augment a human-created Khan Academy video lesson on the rise of the market economy during the Industrial Revolution. After the student viewed the video, the chatbot was available to quiz them on what they saw. “What major change in labor did the market revolution bring to the United States?” it asked, offering the student a chance to reply in a chat message.

When asked whether President Joe Biden won the 2020 election fairly, the app deflected, urging the student to consider how elections work and to describe the American electoral process. Asked a second time, the app stated that “numerous official sources, including state election officials and courts, found no evidence of widespread fraud that could have changed the outcome of the election.”

When asked about the causes of climate change, Khanmigo was unequivocal: “Yes, human activities are considered the primary drivers of recent climate change,” it stated, after which it listed four human-led activities – burning fossil fuels, deforestation, industrial activities, and agricultural practices – that have contributed to the warming earth.

And when asked about reproductive health, the AI bot gave detailed – and accurate – information about forms of birth control, down to explanations of the different types of intrauterine devices.
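
Deflections like the one on the 2020 election are commonly implemented as a topic filter that screens questions before they ever reach the model. The sketch below shows that routing in miniature; the categories and keyword lists are hypothetical stand-ins, since Khan Academy has not published Khanmigo's actual rules.

```python
# A minimal sketch of topic guardrails like the deflections described above.
# The categories and keywords are hypothetical illustrations, not Khanmigo's
# actual rules, which have not been published.
OUT_OF_SCOPE = {
    "philosophy and ethics": ["when does life begin", "meaning of life"],
    "electoral politics": ["won the election", "election fairly", "rigged"],
}

DEFLECTION = (
    "That question leans more towards {topic}, which I'm not equipped to "
    "handle. Feel free to ask about the related science or civics instead!"
)

def guard(question: str) -> str | None:
    """Return a deflection if the question is out of scope, else None."""
    q = question.lower()
    for topic, phrases in OUT_OF_SCOPE.items():
        if any(phrase in q for phrase in phrases):
            return DEFLECTION.format(topic=topic)
    return None  # in scope: pass the question through to the model

if __name__ == "__main__":
    print(guard("When does life begin?"))        # deflected
    print(guard("What causes climate change?"))  # None -> goes to the model
```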

Optimism and concern

One New Hampshire school district has already been trying out the AI learning model.

In fall 2022, as the concept of a “large language model” and the name “ChatGPT” first hit the public consciousness, Superintendent David Backler was already thinking about its applications in the Gorham School District.

Two years later, Gorham is leading the way on implementing AI in New Hampshire classrooms. For the past school year, Khanmigo has been available to all Gorham students on their own time. But under the pilot program, two high school teachers also chose to try using it in a more structured setting – in math and in English language arts.

Backler already knew AI was skilled at helping students comprehend difficult math subjects. It was English where the technology surprised him.

“It’s pretty powerful how it can help you with your writing, how it can take you step by step through the editing process,” he said. “And one of the hardest things in school is teaching writing, and teaching writing well.”

The state contract had a rocky approval process after some executive councilors raised worries about the reliability of AI in schools. Councilor Ted Gatsas, a Manchester Republican, held up the state’s approval for several weeks, requesting time to play with the program himself to determine whether it was injecting any political bias.

“I had a chance to ask it: ‘When does life begin?’” Gatsas said during a May 14 Executive Council meeting. “But that was a biology question. And the answer was apolitical, and I thought that was a good thing.”

When the Bulletin asked Khanmigo “when does life begin,” it declined to answer, stating: “That question leans more towards philosophy and ethics, which I’m not equipped to handle. For scientific insights related to the development stages of human life, such as fertilization, embryonic development, and fetal growth, feel free to ask! These topics are well within the realm of biology.”

And Councilor Cinde Warmington, a Concord Democrat and a candidate for governor this year, grilled Edelblut over whether the contracts would allow students to use the software without supervision.

“Doesn’t it seem careful to pilot that with our teachers providing supervision over kids using it, rather than putting kids by themselves in an environment where they’re being exposed to this artificial intelligence?” Warmington asked.

Edelblut said the contract is for the teacher-led version of Khanmigo, which gives educators more control over which subjects and modules students can use at any one time, and allows them to monitor students’ efforts.

Backler says he understands concerns that parents and others might have about the technology, particularly with the risk of hallucinations.

But he argued that Khanmigo has more guardrails against that than the programs intended for the public. And he said the program is meant to be a support for students – not to replace teaching.

“It’s not doing your writing; it’s not doing your work,” he said. “It’s giving you feedback on what you’re doing.”

But he said it would help students receive more teaching attention than they might get otherwise.

“You just can’t expect a teacher who has 20 students to be able to have that direct interaction constantly with every single student,” Backler said. “It’s not possible. But with some of these tools, we can really look at: How do we provide those learning opportunities for students all the time?”

New Hampshire Bulletin is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. New Hampshire Bulletin maintains editorial independence. Contact Editor Dana Wormald for questions: info@newhampshirebulletin.com. Follow New Hampshire Bulletin on Facebook and X.

Artificial Intelligence Degree Programs to be Available at Oklahoma Universities https://www.the74million.org/article/artificial-intelligence-degree-programs-to-be-available-at-oklahoma-universities/ Thu, 01 Aug 2024 14:30:00 +0000 https://www.the74million.org/?post_type=article&p=729658 This article was originally published in Oklahoma Voice.

OKLAHOMA CITY – Students at some of Oklahoma’s public colleges and universities will soon be able to pursue undergraduate degrees in artificial intelligence.

The Oklahoma State Regents for Higher Education approved artificial intelligence degree programs at Rose State College, Southwestern Oklahoma State University and the University of Oklahoma on June 4.

While some universities have offered courses in artificial intelligence, these are the first AI degree programs in the state.


Trisha Wald, dean of the Dobson College of Business and Technology at Southwestern Oklahoma State University, worked to start up the program at the university. Representatives at Rose State College and the University of Oklahoma were not available for comment.

While Southwestern Oklahoma State University’s degree program can begin in the fall, Wald said the late approval means some of the new AI classes may not be able to start until the spring.

Wald said she looked at similar programs in other states to create the proposed curriculum. While there are “not as many programs as you would think,” she said, the ones she found helped her determine what Southwestern Oklahoma State University’s program needed.

“It’s a multidisciplinary program, so it’s not just computer science courses,” Wald said. “We’ve got higher level math, psychology and philosophy courses, specifically on ethics. So it’s going to help us have more well-rounded individuals.”

Wald said the approval process took months, and that the proposal had to demonstrate workforce demand to the Regents.

Over 19,000 jobs in Oklahoma currently require AI skills, officials said. This number is expected to increase by 21% in the next decade.

“AI is rapidly emerging as a vital employment sector,” said State Regents for Higher Education Chair Jack Sherry in a statement. “New career opportunities in areas like machine learning, data science, robotics and AI ethics are driving demand for AI expertise, and Oklahoma’s state system colleges and universities are answering the call.”

Gov. Kevin Stitt said the new degree programs will allow Oklahoma’s students to be at the forefront of the AI industry.

“These degree programs are a great leap forward in our commitment to innovation in education and will position Oklahoma to be a leader in AI,” Stitt said in a statement. “AI is reshaping every aspect of our lives, especially academics. I’m proud of the Board of Regents for ensuring Oklahoma’s higher ed students do more than just keep pace, they’ll lead the AI revolution.”

Oklahoma Voice is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Oklahoma Voice maintains editorial independence. Contact Editor Janelle Stecklein for questions: info@oklahomavoice.com. Follow Oklahoma Voice on Facebook and X.

New Task Force to Create Road Map for AI Usage in Rhode Island https://www.the74million.org/article/new-task-force-to-create-road-map-for-ai-usage-in-rhode-island/ Thu, 18 Jul 2024 16:30:00 +0000 https://www.the74million.org/?post_type=article&p=729932 This article was originally published in Rhode Island Current.

It’s been roughly two years since AI (artificial intelligence) became an inescapable topic of everyday conversation — much of it focused on the spectacular creative powers of generative AI, from conjuring absurd images to writing college students’ essays.

But the rapidly emerging set of technologies offers much more than novelty: In fact, Gov. Dan McKee thinks AI could be an ally in his push to raise Rhode Islanders’ personal income by 2030. That’s just one goal of the eventual report to be produced by the Rhode Island Governor’s Artificial Intelligence Task Force, which met for the first time Monday at the Department of Administration building in Providence.

Chris Parisi, president of Trailblaze Marketing and vice chair of the task force, invoked Spider-Man in his opening remarks, noting that AI’s great power comes with great responsibility.


“I’m not here to say AI will not take your jobs,” Parisi said. “But we are also creating new jobs.”

McKee established the task force with an executive order Feb. 29, and it now includes two dozen members from both the public and private sectors, most of whom convened Monday afternoon for a light introduction and discussion of the group’s aims. Several members weren’t present, including Sen. Lou DiPalma of Middletown — he was traveling out of state — and Angélica Infante-Green, the state’s K-12 education commissioner. The task force is chaired by Jim Langevin, the former congressman.

The diversity of stakeholders reflects what Parisi cited as one goal of the assembly: to make sure state applications of AI are “ethical and unbiased.”

The other predominant concern was how best to leverage AI as a tool within government and business. Langevin announced that the task force’s fact-finding teams will work on topics like finance, government, education and small business over the next year before producing a report and road map for AI usage in Rhode Island. It’s this data that McKee hopes will inform his strategy for higher incomes by 2030.

“This report is gonna be incorporated into this plan,” McKee said.

The state’s executive branch is also soliciting a strategic advisor to support the task force. A solicitation was uploaded to the state’s bidding site on June 24 and will remain open until July 25.

The multinational consulting firm McKinsey reported in 2022 that AI use had “plateaued” among businesses. But in its 2024 report on AI, released May 30, the firm found that inertia had ended: a survey it administered found 72% of responding businesses now using AI in at least one capacity. The use of generative AI — at its core the same technology used for recreational or artistic purposes — also ballooned, jumping from 33% to 65% usage since the previous McKinsey survey.

In Rhode Island, the situation’s no different: “Lots of businesses will do great things with AI and lots of businesses are very nervous about AI,” said task force member Commerce Secretary Liz Tanner.

The widespread professional adoption of AI has made its regulation likewise unignorable for governments, which also stand to benefit from the richer data and streamlined work it promises. Nationally, an AI “bill of rights” has been blueprinted, and task forces have been popping up in states like Alabama, New Jersey, Massachusetts, Oklahoma and Washington.

Statehouses nationwide have also introduced laws to regulate AI, which are now so plentiful the Electronic Privacy Information Center has introduced a scorecard for AI legislation. In Rhode Island, DiPalma and Rep. Jacquelyn Baginski both introduced AI-related legislation during the 2024 session, and both sit on the AI task force as ex officio members.

Baginski, a Cranston Democrat, introduced a bill to support civil litigation against the practice of “algorithmic discrimination” — in other words, instances of AI-driven decision-making that exhibit the bias the task force wants to avoid. Baginski’s bill also stipulated restrictions not just on “deployers” of automated decision-making tools but also on their developers. The bill was sentenced to “further study” by the House Committee on Innovation, Internet, & Technology and went no further.

The rapid growth of generative AI has spurred national discourse surrounding best practices for these technologies. Seen here is Midjourney, which generates images based on user input from words or existing images. (Alexander Castro/Rhode Island Current)

Baginski and DiPalma also introduced companion bills that would prevent the use of AI-generated content in election communications within 90 days of an election. Baginski’s version passed the House but died in the Senate.

The insurance industry was an early adopter of AI, with industry standards and guidelines for its use issued as early as 2020. The state’s Department of Business Regulation has already issued guidance for insurers: a nine-page document that was published in March, as pointed out by the department’s director and task force member Elizabeth Kelleher Dwyer.

Task force member Edmund Shallcross III, the CEO of Amica, said the Lincoln-based insurance company has “been using data and machine learning for years…We’ll be using artificial intelligence in probably every part of our business in the next one, three, five years.”

On the government side, task force member Marc Pappas, director of the Rhode Island Emergency Management Agency, was generally positive about AI. “It makes us better at recovery from disasters,” he said, noting its skills in mapping, imaging analysis for damage assessment, and help in allocating resources when disasters strike.

Christopher Horvath of Citizens Bank appeared more cautious overall than some of his fellow task force members, expressing concern about “bad actors” who could exploit AI. Security considerations were echoed by task force member Brian Tardiff, the state’s chief digital officer and chief information officer, who noted that AI can improve the efficiency of government, but only if the proper frameworks are put in place.

“We can’t have effective and efficient deployments without that data security,” Tardiff said.

Gov. Dan McKee and Jim Langevin chat after the inaugural meeting of the state’s new task force on artificial intelligence. (Alexander Castro/Rhode Island Current)

Rhode Island Current is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Rhode Island Current maintains editorial independence. Contact Editor Janine L. Weisman for questions: info@rhodeislandcurrent.com. Follow Rhode Island Current on Facebook and X.
Benjamin Riley: AI is Another Ed Tech Promise Destined to Fail https://www.the74million.org/article/benjamin-riley-ai-is-an-another-ed-tech-promise-destined-to-fail/ Tue, 16 Jul 2024 12:00:00 +0000 https://www.the74million.org/?post_type=article&p=729820 For more than a decade, Benjamin Riley has been at the forefront of efforts to get educators to think more deeply about how we learn.

As the founder of Deans for Impact in 2015, he enlisted university education school deans to incorporate findings from cognitive science into teacher preparation. Before that, he spent five years as policy director of the NewSchools Venture Fund, which underwrites new models of schooling. In his new endeavor, Cognitive Resonance, which he calls “a think-and-do tank,” he’s pushing to help people think not only about how we learn, but how generative artificial intelligence (AI) works — and why they’re different.

His Substack newsletter and Twitter feed regularly poke holes in high-flying claims about the power of AI-powered tutors — he recently offered choice words for Khan Academy founder Sal Khan’s YouTube demonstration of OpenAI’s new GPT-4o tool, saying it was “deployed in the most favorable educational environment we can possibly imagine,” leaving open the possibility that it might not perform so well in the real world.


In April, Riley ruffled feathers in the startup world with an essay in the journal Education Next that took Khan Academy and other AI-related companies to task for essentially using students as guinea pigs.

Benjamin Riley (at right) speaking during a session on AI at the ASU+GSV conference in San Diego in April. (Greg Toppo)

In the essay, he recounted asking Khanmigo to help him simplify an algebraic equation. Riley-as-student got close to solving it, but the AI actually questioned him about his steps, eventually asking him to rethink even basic math, such as the fact that 2 + 2.5 = 4.5.

Such an exchange isn’t just unhelpful to students, he wrote, it’s “counterproductive to learning,” with the potential to send students down an error-filled path of miscalculation, misunderstanding and wasted effort.

The interview has been edited for length and clarity.

The 74: We’re often so excited about the possibilities of ed tech in education that we just totally forget what science says about how we learn. I wonder if you have any thoughts on that.

Benjamin Riley: I have many. Part of my frustration is that we are seemingly living in a moment where we’re simultaneously recognizing in other dimensions where technology can be harmful, or at least not beneficial, to learning, while at the same time expressing unbridled enthusiasm for a new technology and believing that it finally will be the cure-all, the silver bullet that finally delivers on the vision of radically transforming our education system. And yeah, it’s frustrating. Ten years ago, for example, when everybody was excited about personalization, there were folks, myself included, raising their hand and saying, “Nope, this doesn’t align with what we know about how we think and learn. It also doesn’t align with the science of how we collectively learn, and the role of education institutions as a method of culturally transmitting knowledge.” All of those personalized learning dreams were dying out. And many of the prominent, incredibly well-funded personalized learning efforts either went completely belly-up, like AltSchool, or have withered on the vine, like some of the public schools now named Gradient.

Now AI has revived all of those dreams again. And it’s frustrating, because even if it were true that personalization were the solution, no one 10 years ago, five years ago, was saying, “But what we need are intelligent chatbot tutors to make it real.” So what you’re seeing is sort of a commitment to a vision. Whatever technology comes along, we’re going to shove into that vision and say that this is going to deliver it. I think for the same reasons it failed before, it will fail again. 

You’re a big fan of the University of Virginia cognitive scientist Daniel Willingham, who has done a lot to popularize the science of how we learn.

He’s wonderful at creating pithy phrases that get to the heart of the matter. One of the counterintuitive phrases he has that is really powerful and important is that our minds in some sense “are not built to think,” which feels really wrong and weird, because isn’t that what minds do? It’s all they do, right? But what he means is that the process of effortful thinking is taxing in the same way that working out at the gym is taxing. One of the major challenges of education is: How do you wrap around that with students, who, like all of us, are going to try to essentially avoid doing effortful thinking for sustained periods? Over and over again, technologists just assume away that problem.

In the case of something like large language models, or LLMs, how do they approach this problem of effortful thinking? Do they just ignore it altogether?

It’s an interesting question. I’m almost not sure how to answer it, because there is no thinking happening on the part of an LLM. A large language model takes the prompts and the text that you give it and tries to come up with something that is responsive and useful in relation to that text. And what’s interesting is that certain people — I’m thinking of Mark Andreessen most prominently — have talked about how amazing this is conceptually from an education perspective, because with LLMs you will have this infinitely patient teacher. But that’s actually not what you want from a teacher. You want, in some sense, an impatient teacher who’s going to push your thinking, who’s going to try to understand what you’re bringing to any task or educational experience, lift up the strengths that you have, and then work on building your knowledge in areas where you don’t yet have it. I don’t think LLMs are capable of doing any of that.

As you say, there’s no real thinking going on. It’s just a prediction machine. There’s an interaction, I guess, but it’s an illusion. Is that the word you would use?

Yes. It’s the illusion of a conversation. 

In your Education Next essay, you quote the cognitive scientist Gary Marcus, who says LLMs are “frequently wrong, but never in doubt.” It feels to me like that is extremely dangerous in something young people interact with.

Yes! Absolutely. This is where it’s really important to distinguish between the now and the real and the present versus the hypothetical imagined future. There’s just no question that right now, this “hallucination problem” is endemic. And because LLMs are not thinking, they generate text that is factually inaccurate all the time. Even some of the people who are trying to push it out into the world acknowledge this, but then they’ll just put this little asterisk: “And that’s why an educator must always double-check.” Well, who has the time? I mean, what utility is this? And then people will say, “Well yes, but surely it’s going to get better in the future.” To which I say, Maybe, let’s wait and see. Maybe we should wait until we’ve arrived at that point before we push this out.

Do we know how often LLMs are making mistakes?

I can say just from my own personal usage of Khanmigo that it happens a lot, for reasons that are frankly predictable once you understand how the technology works. How often is it happening with seventh-grade students who are just learning this idea for the first time? We just don’t know. [In response to a query about errors, Khan Academy sent links to two blog posts on its site, one of which noted that Khanmigo “occasionally makes mistakes, which we expected.” It also pointed out, among other things, that Khanmigo now uses a calculator to solve numerical problems instead of using AI’s predictive capabilities.]
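
The calculator change Khan Academy describes reflects a general pattern: arithmetic is routed to a deterministic evaluator rather than left to the model's next-token prediction, which is what produces errors like questioning whether 2 + 2.5 = 4.5. Below is a minimal sketch of such an evaluator; the routing is illustrative, not Khan Academy's implementation.

```python
# A minimal sketch of delegating arithmetic to a deterministic calculator
# instead of trusting an LLM's predicted digits. Illustrative only; this
# is not Khanmigo's actual implementation.
import ast
import operator

OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a pure arithmetic expression without the risks of eval()."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not a pure arithmetic expression")
    return walk(ast.parse(expr, mode="eval"))

if __name__ == "__main__":
    # Numeric claims get checked deterministically, not predicted:
    print(safe_eval("2 + 2.5"))  # 4.5 -- no language model involved
```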

One of the things you say in the EdNext piece is that you just “sound like a Luddite” as opposed to actually being one. The Luddites saw the danger in automation and were trying to push against it. Is it the same, in a way, as what you’re doing? 

Thank you for asking that question because I feel my naturally contrarian ways risk painting me into a corner I’m really not in. Because in some sense, generative AI and large language models are incredible — they really are. It is a remarkable achievement that they are able to produce fluent and coherent narratives in response to just about any combination of words that you might choose to throw at them. So I am not a Luddite who thinks that we need to burn this all down.

There are methods and ways, both within education and in society more broadly, in which this tool could be incredibly useful for certain purposes. Already, it’s proving incredibly stimulating in thinking about and understanding how humans think and learn, and how that is similar and different from what they do. If we could just avoid the ridiculous overhype and magical thinking that seems to accompany the introduction of any new technology and calm down and investigate before pushing it out into our education institutions, then I think we’d be a lot better off. There really is a middle ground here. That’s where I’m trying to situate myself. 

Maybe this is a third rail that we shouldn’t be touching, but I was reading about Thomas Edison and his ideas on education. He had a great quote about movies, which he thought would revolutionize classrooms. He said, “The motion picture will endure as long as poor people exist.” It made me think: One of the underlying themes of ed tech is this idea of bringing technology to the people. Do you see a latent class divide here? Rich kids will get an actual personal tutor, but everybody else will get an LLM? 

My worry runs differently than that. Again, back to the Willingham quote: “Our minds are not built to think.” Here’s the harsh reality that could indeed be a third rail, but it needs to be acknowledged if we’re going to make meaningful progress: If we fail in building knowledge in our students, thinking gets harder and harder, which is why school gets harder and harder, and why over time you start to see students who find school really miserable. Some of them drop out. Some of them stop trying very hard. These folks — the data is overwhelming on this — typically end up having lives that are shorter, with less economic means, more dire health outcomes. All of this is both correlated and interrelated causation.

But here’s the thing: For those students in particular, a device that alleviates the cognitive burden of schooling will be appealing. I’m really worried that this now-widely available technology will be something they turn to, particularly around the incredibly cognitively challenging task of writing — and that they will continue to look to this as a way of automating their own cognition. No one really needs to worry about the children of privilege. They are the success stories academically and, quite frankly, many of them enjoy learning and thinking and will avoid wanting to use this as a way of outsourcing their own thinking. But it could just make the existing divide a lot wider than it is today — much wider.

How is education research responding to AI?

The real challenge is that the pace of technology, particularly the pace of technological developments in the generative AI world, is so fast that traditional research methods are not going to be able to keep up. It’s not that there won’t be studies — I’m sure there are already some underway, and there are tiny, emerging studies that I have seen here and there. But we just don’t have the capabilities as a research enterprise to be doing things the traditional way. A really important question that needs to be grappled with, as a matter of policy, potentially as a matter of philanthropy and just as a matter of society, is: So, what then? Do we just do it and hope for the best? Because that may be what ends up happening.

As we’ve seen with social media and smartphones in schools, there can be real impacts that you don’t realize until five, 10 years down the road. Then you go back and say, “Well, I wish we’d been thinking about that in advance rather than just rolling the dice and seeing where it came up.” We don’t do that in other realms of life. We don’t let people just come up with medicines that they think will cure certain diseases and then just say, “Well, we’ll see. We’ll introduce it into broader society and let’s figure it out.” I’m not necessarily saying that we need the equivalent per se, but something that would give us better insight and real-time information to help us figure out the overall positives and not-so-positives seems to me a real challenge that is underappreciated at the moment.

L.A. Schools Probe Charges its Hyped, Now-Defunct AI Chatbot Misused Student Data https://www.the74million.org/article/chatbot-los-angeles-whistleblower-allhere-ai/ Wed, 10 Jul 2024 10:30:00 +0000 https://www.the74million.org/?post_type=article&p=729622 Independent Los Angeles school district investigators have opened an inquiry into claims that its $6 million AI chatbot — an animated sun named “Ed” celebrated as an unprecedented learning acceleration tool until the company that built it collapsed and the district was forced to pull the plug — put students’ personal information in peril.

Investigators with the Los Angeles Unified School District’s inspector general’s office conducted a video interview with Chris Whiteley, the former senior director of software engineering at AllHere, after he told The 74 his former employer’s student data security practices violated both industry standards and the district’s own policies. 

Whiteley told The 74 he had earlier alerted the school district, the IG’s office and state education officials to the data privacy problems with Ed but got no response. His meeting with investigators occurred July 2, one day after The 74 published its story outlining Whiteley’s allegations: that the chatbot put students’ personally identifiable information at risk of being hacked by including it in all chatbot prompts, even those where the data weren’t relevant; that it shared the information unnecessarily with other third-party companies; and that it processed prompts on offshore servers in violation of district student privacy rules.


In an interview with The 74 this week, Whiteley said the officials from the district’s inspector general’s office “were definitely interested in what I had to say,” as speculation swirls about the future of Ed, its ed tech creator AllHere and broader education investments in artificial intelligence. 

“It felt like they were after the truth,” Whiteley said, adding, “I’m certain that they were surprised about how bad [students’ personal information] was being handled.”

To generate responses to even mundane prompts, Whiteley said, the chatbot processed the personal information for all students in a household. If a mother with 10 children asked the chatbot a question about her youngest son’s class schedule, for example, the tool processed data about all of her children to generate a response. 

“It’s just sad and crazy,” he said.
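
The flaw Whiteley describes is a failure of data minimization in prompt assembly: every household record rode along with every question. The sketch below contrasts that pattern with a minimal alternative; the record layout and function names are hypothetical, since AllHere's source code has not been published.

```python
# A minimal sketch contrasting the prompt assembly Whiteley alleges with a
# data-minimizing alternative. The record layout is hypothetical; AllHere's
# source code has not been published.
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    student_id: str
    schedule: str

def build_context_overshared(household: list[Student], question: str) -> str:
    """The alleged pattern: every household record rides along with every prompt."""
    records = "\n".join(f"{s.name} ({s.student_id}): {s.schedule}" for s in household)
    return f"Household records:\n{records}\n\nQuestion: {question}"

def build_context_minimal(household: list[Student], subject: str, question: str) -> str:
    """Data minimization: include only the record the question concerns."""
    relevant = [s for s in household if s.name == subject]
    records = "\n".join(f"{s.name}: {s.schedule}" for s in relevant)
    return f"Relevant record:\n{records}\n\nQuestion: {question}"

if __name__ == "__main__":
    kids = [Student(f"Child {i}", f"id-{i}", "Period 1: Math") for i in range(10)]
    q = "What is Child 9's class schedule?"
    print(build_context_overshared(kids, q))          # exposes all ten records
    print(build_context_minimal(kids, "Child 9", q))  # exposes just one
```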

The inspector general’s office directed The 74’s request for comment to a district spokesperson, who declined to comment or respond to questions involving the inquiry.

While the conversation centered primarily on technical aspects of the company’s data security protocols, Whiteley said investigators also probed him on his personal experience at AllHere, which he described as abusive, and on the company’s finances.

Whiteley was laid off from AllHere in April. Two months later, a notice posted to the company’s website said a majority of its 50 or so employees had been furloughed due to its “current financial position,” and the LAUSD spokesperson said company co-founder and CEO Joanna Smith-Griffin had left. The former Boston teacher and Harvard graduate raised $12 million in venture capital for AllHere and appeared with L.A. schools Superintendent Alberto Carvalho at ed tech conferences and other events throughout the spring, touting the heavily publicized AI tool they partnered to create.

Just weeks ago, Carvalho spoke publicly about how the project had put L.A. out in front as school districts and ed tech companies nationally race to follow the lead of generative artificial intelligence pioneers like ChatGPT. But the school chief’s superlative language around what Ed could do on an individualized basis with 540,000 students had some industry observers and AI experts speculating it was destined to fail.

The chatbot was supposed to serve as a “friendly, concise customer support agent” that replied “using simple language a third grader could understand” to help students and parents supplement classroom instruction, find assistance with kids’ academic struggles and navigate attendance, grades, transportation and other key issues. What they were given, Whiteley charges, was a student privacy nightmare. 

Smith-Griffin recently deactivated her LinkedIn page and has not surfaced since her company went into apparent free fall. Attempts to reach AllHere for comment were unsuccessful and parts of the company website have gone dark. LAUSD said earlier that AllHere is for sale and that several companies are interested in acquiring it.

The district has already paid AllHere $3 million to build the chatbot and “a fully-integrated portal” that gave students and parents access to information and resources in a single location, the district spokesperson said in a statement Tuesday, and “was surprised by the financial disruption to AllHere.” 

AllHere’s collapse represents a stunning fall from grace for a company that was named among the world’s top education technology companies by Time Magazine just months earlier. Scrutiny of AllHere intensified when Whiteley became a whistleblower. He said he turned to the press because his concerns, which he shared first with AllHere executives and the school district, had been ignored.

Whiteley shared source code with The 74 that showed students’ information had been processed on offshore servers. Seven out of eight Ed chatbot requests, he said, were sent to places like Japan, Sweden, the United Kingdom, France, Switzerland, Australia and Canada.

‘How are smaller districts going to do this?’

What district leaders failed to do as they heralded their new tool, Whiteley said, is conduct sufficient audits. As L.A. — and school systems nationwide — contract with a laundry list of tech vendors, he said it’s imperative that they understand how third-party companies use students’ information. 

“If the second-biggest district can’t audit their [personally identifiable information] on new or interesting products and can’t do security audits on external sources, how are smaller districts going to do this?” he asked.

Over the last several weeks, the district’s official position on Ed has appeared to shift. In late June, when the district spokesperson said that several companies were “interested in acquiring Allhere,” they also said the tool would “continue to provide this first-of-its-kind resource to our students and families.” In its initial response to Whiteley’s allegations, published July 1, the spokesperson said that education officials would “take any steps necessary to ensure that appropriate privacy and security protections are in place in the Ed platform.”

In a story two days later in the Los Angeles Times, a district spokesperson said the chatbot had been unplugged on June 14. The 74 asked the spokesperson to provide documentation showing the tool was disabled last month but didn’t get a response. 

Even after June 14, Carvalho continued to boast publicly about LAUSD’s foray into generative AI and what he described as its stringent data privacy requirements with third-party vendors. 

On Tuesday, the district spokesperson told The 74 that the online portal — even without a chatty, animated sun — “will continue regardless of the outcome with AllHere.” In fact, the project could become a source of district revenue. Under the contract between AllHere and LAUSD, which was obtained by The 74, the chatbot is the property of the school district, which was set to receive 2% in royalty payments from AllHere “should other school districts seek to use the tool to benefit their families and students.” 

In the statement Tuesday, the district spokesperson said that officials chose to “temporarily disable the chatbot” amid AllHere’s uncertainty and that it would “only be restored when the human-in-the-loop aspect is re-established.” 

Whiteley agreed that the district could maintain the student information dashboard without the chatbot and, similarly, that another firm could buy what remains of AllHere. He was skeptical, however, that Ed the chatbot would live another day because “it’s broken.”

“The name AllHere,” he said, “I think is dead.”
