Artificial Intelligence – The 74
https://www.the74million.org

Was Los Angeles Schools’ $6 Million AI Venture a Disaster Waiting to Happen?
https://www.the74million.org/article/was-los-angeles-schools-6-million-ai-venture-a-disaster-waiting-to-happen/
Tue, 09 Jul 2024 10:01:00 +0000

When news broke last month that Ed, the Los Angeles school district’s new, $6 million artificial intelligence chatbot, was in jeopardy — the startup that created it on the verge of collapse — many insiders in the ed tech world wondered the same thing: What took so long?

The AI bot, created by Boston-based AllHere Education, was launched March 20. But just three months later, AllHere posted on its website that a majority of its 50 or so employees had been furloughed due to its “current financial position.” A spokesperson for the Los Angeles district said company founder and CEO Joanna Smith-Griffin was no longer on the job. AllHere was up for sale, the district said, with several businesses interested in acquiring it.

A screenshot of AllHere’s website with its June 14 announcement that much of its staff had been furloughed (screen capture)

The news was shocking and certainly bleak for the ed tech industry, but several observers say the partnership bit off more than it could chew, tech-wise — and that the ensuing blowup could hurt future AI investments.

Ed was touted as a powerful, easy-to-use online tool for students and parents to supplement classroom instruction, find assistance with kids’ academic struggles and help families navigate attendance, grades, transportation and other key issues, all in 100 languages and on their mobile phones.

But Amanda Bickerstaff, founder and CEO of AI for Education, a consulting and training firm, said that was an overreach.

“What they were trying to do is really not possible with where the technology is today,” she said. “It’s a very broad application [with] multiple users — teachers, students, leaders and family members — and it pulled in data from multiple systems.”

She noted that even a mega-corporation like McDonald’s had to trim its AI sails. The fast-food giant recently admitted that a small experiment using a chatbot to power drive-thru windows had resulted in a few fraught customer interactions, such as one in which a woman angrily tried to persuade the bot that she wanted a caramel ice cream as it added multiple stacks of butter to her order.

If McDonald’s, worth an estimated $178.6 billion, can’t get 100 drive-thrus to take lunch orders with generative AI, she said, the tech isn’t “where we need it to be.”

If anything, L.A. and AllHere did not seem worried about the project’s scale, even if industry insiders now say it was bound to under-deliver: Last spring, at a series of high-profile ed tech conferences, Smith-Griffin and Superintendent Alberto Carvalho showed off Ed widely, with Carvalho saying it would revolutionize students’ and parents’ relationships to school, “utilizing the data-rich environment that we have for every kid.”

Alberto Carvalho speaks at the ASU+GSV Summit in April (YouTube screenshot)

In an interview with The 74 at the ASU+GSV Summit in San Diego in April, Carvalho said many students are not connected to school, “therefore they’re lost.” Ed, he promised, would change that, with a “significantly different approach” to communication from the district.

“We are shifting from a system of 540,000 students into 540,000 ‘schools of one,’” with personalization and individualization for each student, he said, and “meaningful connections with parents.”

Better communication with parents, he said, would help improve not just attendance but reading and math proficiency, graduation rates and other outcomes. “The question that needs to be asked is: Why have those resources not meaningfully connected with students and parents, and why have they not resulted in this explosive experience in terms of educational opportunity?”

Carvalho noted Ed’s ability to understand and communicate in about 100 different languages. And, he crowed, it “never goes to sleep” so it can answer questions 24/7. He called it “an entity that learns and relearns all the time and does nothing more, nothing less than adapt itself to you. I think that’s a game changer.” 

But one experienced ed tech insider recalled hearing Carvalho speak about Ed at the conference in April and say it was already solving “all the problems” that big districts face. The insider, who asked not to be identified in order to speak freely about sensitive matters, found the remarks troubling. “The messaging was so wrong that at that point I basically started a stopwatch on how long it would take” for the effort to fail. “And I’m kind of amazed it’s been this long before it all fell apart. I feel badly about it, I really do, but it’s not a surprise.”

‘A high-risk proposition’

In addition to the deal’s dissolution, The 74 reported last week that a former senior director of software engineering at AllHere told district officials, L.A.’s independent inspector general’s office and state education officials that Ed processed student records in ways that likely ran afoul of the district’s own data privacy rules and put sensitive information at risk of being hacked — warnings that he said the agencies ignored. 

AI for Education’s Bickerstaff said developers “have to take caution” when building these systems for schools, especially those like Ed that bring together such large sets of data under one application.

“These tools, we don’t know how they work directly,” she said. “We know they have bias. And we know they’re not reliable. We know they can be leaky. And so we have to be really careful, especially with kids that have protected data.”

Alex Spurrier, an associate partner with the education consulting firm Bellwether Education Partners, said what often happens is that district leaders “try to go really big and move really fast to adopt a new technology,” not fully appreciating that it’s “a really high-risk proposition.”

While ed tech is rife with stories of overpromising and disappointing results, Spurrier said, districts that take a different approach — starting small, iterating and scaling up — rarely end in disaster.

Richard Culatta, CEO of the International Society for Technology in Education (ISTE), put it more bluntly: “Whenever a district says, ‘Our strategy around AI is to buy a tool,’ that’s a problem. When the district says, ‘For us, AI is a variety of tools and skills that we are working on together,’ that’s when I feel comfortable that we’re moving in the right direction.”

Culatta suggested that since generative AI is developing and changing so rapidly, districts should use the next few months as “a moment of exploration — it’s a moment to bring in teachers and parents and students to give feedback,” he said. “It is not the moment for ribbon cutting.” 

‘It’s about exploring’

Smith-Griffin founded AllHere in 2016 at Harvard University’s Innovation Labs. In an April interview with The 74, she said she originally envisioned it as a way to help school systems reduce chronic absenteeism through better communication with parents. Many interventions that schools rely on, such as phone calls, postcards and home visits, “tend to be heavily reliant on the sheer power of educators to solve system-wide issues,” she said.

A former middle-school math teacher, Smith-Griffin recalled, “I was one of those teachers who was doing phone calls, leaving voicemails, visiting my parents’ homes.” 

AllHere pioneered text messaging “nudges,” electronic versions of postcard reminders to families that, in one key study, improved attendance modestly. 

The company’s successful proposal for L.A., Smith-Griffin said, envisioned extending the attendance strategies while applying them to student learning “in the most disciplined way possible.”

“You nudge a parent around absences and they will tell you things ranging from, ‘My kid needs tutoring, my kid is struggling with math’ [to] ‘I struggle with reading,’” she said. AllHere went one step further, she said, bringing together “the full body of resources” that a school system can offer parents.

The district had high hopes for the chatbot, requiring it to focus on “eliminating opportunity gaps, promoting whole-child well-being, building stronger relationships with students and families, and providing accessible information,” according to the proposal.

In April, it was still in early implementation at 100 of the district’s lowest-performing “priority” schools, serving about 55,000 students. LAUSD planned to roll out Ed for all families this fall. The district “unplugged” the chatbot on June 14, the Los Angeles Times reported last week, but a district spokesperson said L.A. “will continue making Ed available as a tool to its students and families and is closely monitoring the potential acquisition of AllHere.” The company did not immediately respond to queries about the chatbot or its future.

As for the apparent collapse of AllHere, speculation in the ed tech world is rampant.

In the podcast he co-hosts, education entrepreneur Ben Kornell said late last month, “My spidey sense basically goes to ‘Something’s not adding up here and there’s more to the story.’” He theorized a “critical failure point” that’s yet to emerge “because you don’t see things like this fall apart this quickly, this immediately” for such a small company, especially in the middle of a $6 million contract.

Kornell said the possibilities fall into just a few categories: an accounting or financial misstep; a breakdown among AllHere’s staff, board and funders; or “major customer payment issues.”

The district also may have withheld payment for undelivered products, but he said the sudden collapse of the company seemed unusual. “If you are headed towards a cash crisis, the normal thing to do would be: Go to your board, go to your funders, and get a bridge to get you through that period and land the plane.”

Bellwether’s Spurrier said L.A. deserves a measure of credit “for being willing to lean into AI technology and think about ways that it could work.” But he wonders whether the best use of generative AI at this moment will be found not in “revolutionizing instruction,” as L.A. has pursued, but elsewhere. 

“There’s plenty of opportunities to think about how AI might help on the administrative side of things, or help folks that are kind of outside the classroom walls,” rather than focusing on changing how schools deliver instruction. “I think that’s the wrong place to start.”

ISTE’s Culatta noted that just down the road from Los Angeles, in Santa Ana, California, district officials there responded to the dawn of tools like ChatGPT and Google’s Gemini by creating evening classes for adults. “The parents come in and they talk about what AI is, how they should be thinking about it,” he said. “It’s about exploring. It’s about helping people build their skills.” 

‘How are your financials?’

The fate of AllHere’s attendance work in districts nationwide isn’t clear at the moment. In one large district, the Prince George’s County, Maryland, Public Schools, near Washington, D.C., teachers piloted AllHere with 32 schools as far back as January 2020, spokeswoman Meghan Thornton said. The district added two more schools to the pilot in 2022, but AllHere notified the district on June 18 that, effective immediately, it wouldn’t be able to continue its services due to “unforeseen financial circumstances.” 

District officials are now looking for another messaging system to replace AllHere “should it no longer be available,” Thornton said.

Bickerstaff said the field more broadly suffers from “a major, major overestimation of the capabilities of the technology to date.” L.A., she noted, is the nation’s second-largest school district, so even the pilot stage likely saw “very high” usage, raising its costs. She predicted a fast acquisition of AllHere, noting that the company had been looking for outside investment for several months.

As founder of the startup Magic School AI, which offers teachers tools to streamline their workload, Adeel Khan is no stranger to hustling for funding — and to competitors running out of money. But he said the news about AllHere and Ed was bad for the industry more broadly, leaving districts with questions about whether to partner with newer, untested companies.

“I see it as something that is certainly not great for the startup ecosystem,” he said.

I see (AllHere’s failure) as something that is certainly not great for the startup ecosystem.

Adeel Khan, Magic School AI

Even before the news about AllHere broke, Khan attended ISTE’s big national conference in Denver last month, where he talked to school district officials about prospective partnerships. “More than one time I was asked directly, ‘How are your financials?’” he recalled.

Usually technology directors ask about features and what a product can do for students, he said. But they’re beginning to realize that a failed product doesn’t just waste time and money. It damages reputations as well. “That is on the mind of buyers,” he said. 

When school districts invest in new tech, he said, they’re not just committing to funding it for months or even years, but also to training teachers and others, so they want responsible growth.

“There’s a lot of disruption to K-12 when a product goes out of business,” Khan said. “So people remember this. They remember, ‘Hey, we committed to this product. We discovered it at ISTE two years ago and we loved it. It was great — and it’s not here anymore. And we don’t want to go through that again.’ ”

California Teachers are Using AI to Grade Papers. Who’s Grading the AI?
https://www.the74million.org/article/california-teachers-are-using-ai-to-grade-papers-whos-grading-the-ai/
Sun, 07 Jul 2024 12:30:00 +0000

This article was originally published in CalMatters.

Your children could be some of a growing number of California kids having their writing graded by software instead of a teacher.

California school districts are signing more contracts for artificial intelligence tools, from automated grading in San Diego to chatbots in central California, Los Angeles, and the San Francisco Bay Area. 

English teachers say AI tools can help them grade papers faster, get students more feedback, and improve their learning experience. But guidelines are vague and adoption by teachers and districts is spotty. 

The California Department of Education can’t tell you which schools use AI or how much they pay for it. The state doesn’t track AI use by school districts, said Katherine Goyette, computer science coordinator for the California Department of Education. 

While Goyette said chatbots are the most common form of AI she’s encountered in schools, more and more California teachers are using AI tools to help grade student work. That’s consistent with surveys that have found teachers use AI as often as, if not more than, students — news that contrasts sharply with headlines about fears of students cheating with AI.

Teachers use AI to personalize reading material, create lesson plans and handle other tasks in order to save time and reduce burnout. A report issued last fall in response to an AI executive order by Gov. Gavin Newsom mentions opportunities to use AI for tutoring, summarization and personalized content generation, but also labels education a risky use case. Generative AI tools have been known to create convincing but inaccurate answers to questions, and to use toxic language or imagery laden with racism or sexism.

California issued guidance last fall on how educators should use the technology, becoming one of seven states to do so. It encourages critical analysis of text and imagery created by AI models, and conversations between teachers and students about what amounts to ethical or appropriate use of AI in the classroom.

But the guidance makes no specific mention of how teachers should treat AI that grades assignments. Additionally, the California education code states that guidance from the state is “merely exemplary, and that compliance with the guidelines is not mandatory.”

Goyette said she’s waiting to see if the California Legislature passes Senate Bill 1288, which would require state Superintendent Tony Thurmond to create an AI working group to issue further guidance to local school districts on how to safely use AI. Cosponsored by Thurmond, the bill also calls for an assessment of the current state of AI in education and for the identification of forms of AI that can harm students and educators by 2026.

Nobody tracks what AI tools school districts are adopting or the policy they use to enforce standards, said Alix Gallagher, head of strategic partnerships at the Policy Analysis for California Education center at Stanford University. Since the state does not track curriculum that school districts adopt or software in use, it would be highly unusual for them to track AI contracts, she said.

Amid AI hype, Gallagher thinks people can lose sight of the fact that the technology is just a tool, and that it will only be as good or as problematic as the decisions of the humans using it. That’s why she repeatedly urges investing in helping teachers understand AI tools and use them thoughtfully, and in making space for communities to have a voice in how best to meet their kids’ needs.

“Some people will probably make some pretty bad decisions that are not in the best interests of kids, and some other people might find ways to use maybe even the same tools to enrich student experiences,” she said.

Teachers use AI to grade English papers

Last summer, Jen Roberts, an English teacher at Point Loma High School in San Diego, went to a training session to learn how to use Writable, an AI tool that automates grading writing assignments and gives students feedback powered by OpenAI. For the past school year, Roberts used Writable and other AI tools in the classroom, and she said it’s been the best year yet of her nearly three decades of teaching. Roberts said it has made her students better writers — not because AI did the writing for them, but because automated feedback can tell her students how to improve faster than she can, which in turn allows her to hand out more writing assignments.

“At this point last year, a lot of students were still struggling to write a paragraph, let alone an essay with evidence and claims and reasoning and explanation and elaboration and all of that,” Roberts said. “This year, they’re just getting there faster.”

Roberts feels Writable is “very accurate” when grading her students of average aptitude. But, she said, there’s a downside: It sometimes assigns high-performing students lower grades than merited and struggling students higher grades. She said she routinely checks answers when the AI grades assignments, but only checks the feedback it gives students occasionally. 

“In actual practicality, I do not look at the feedback it gives every single student,” she said. “That’s just not a great use of my time. But I do a lot of spot checking and I see what’s going on and if I see a student that I’m worried about get feedback, (I’m like) ‘Let me go look at what his feedback is and then go talk to him about that.’”

Alex Rainey teaches English to fourth graders at Chico Country Day School in northern California. She used GPT-4, an OpenAI language model available through a $20-a-month subscription, to grade papers and provide feedback. After uploading her grading rubric and examples of her written feedback, she used the AI to grade assignments about animal defense mechanisms, allowing GPT-4 to analyze students’ grammar and sentence structure while she focused on assessing creativity.

“I feel like the feedback it gave was very similar to how I grade my kids, like my brain was tapped into it,” she said.
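
Her description — a rubric plus examples of her own feedback, uploaded before grading — maps onto a simple pattern. Below is a minimal, hypothetical sketch of that rubric-guided grading setup using OpenAI’s Python library; the rubric text, example feedback and essay are invented placeholders, and the exact tooling Rainey used may well differ.

```python
# Minimal sketch of rubric-guided grading with OpenAI's chat API.
# The rubric, example feedback and essay are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rubric = "4 = clear claim supported by evidence; 3 = claim with weak evidence; ..."
example_feedback = "Strong opening claim, but add a quote from the text to support it."
essay = "Animals defend themselves in many ways. For example, ..."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a fourth-grade writing teacher. Grade the essay "
                f"against this rubric:\n{rubric}\n"
                "Match the tone and style of this example feedback:\n"
                f"{example_feedback}"
            ),
        },
        {"role": "user", "content": essay},
    ],
)
print(response.choices[0].message.content)  # grade plus written feedback
```

As both teachers describe, a setup like this automates only the first pass; the spot-checking and final judgment stay with the human.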

Like Roberts, she found that it saves time, transforming work that took hours into less than an hour — though she also found that GPT-4 is sometimes a tougher grader than she is. A teacher can assign more writing without delivering feedback, but “then kids have nothing to grow from.”

Rainey said her experience grading with GPT-4 left her in agreement with Roberts: more feedback, and writing more often, produces better writers. She feels strongly that teachers still need to oversee grading and feedback by AI, “but I think it’s amazing. I couldn’t go backwards now.”

The cost of using AI in the classroom

Contracts involving artificial intelligence can be lucrative. 

To launch a chatbot named Ed, Los Angeles Unified School District signed a two-year, $6.2 million contract with the option of renewing for three additional years. Magic School AI, which is used by educators in Los Angeles, costs $100 per teacher per year.

Despite repeated calls and emails over the span of roughly a month, Writable and the San Diego Unified School District declined to share pricing details with CalMatters. A district spokesperson said teachers got access to Writable through a contract with Houghton Mifflin Harcourt for English language learners.

Quill is an AI-powered writing tool for students in grades 4-12 made by the company Quill. Quill says its tool is currently used at 1,000 schools in California and has more than 13,000 student and educator users in San Diego alone. An annual Quill Premium subscription costs $80 per teacher or $1,800 per school.

Quill does not generate writing for students like ChatGPT or grade writing assignments, but gives students feedback on their writing. Quill is a nonprofit that’s raised $20 million from groups like Google’s charitable foundation and the Bill and Melinda Gates Foundation over the past 10 years.

Even if a teacher or district wants to shell out for an AI tool, guidance for safe and responsible use is still getting worked out. 

Governments are placing high-risk labels on forms of AI with the power to make critical decisions about whether a person gets a job or rents an apartment or receives government benefits. California Federation of Teachers President Jeff Freitas said he hasn’t considered whether AI for grading is moderate or high risk, but “it definitely is a risk to use for grading.”

The California Federation of Teachers is a union with 120,000 members. Freitas told CalMatters he’s concerned AI could have a number of consequences in the classroom: He’s worried administrators may use it to justify increasing class sizes or adding to teacher workloads; he’s worried about climate change and the amount of energy needed to train and deploy AI models; he’s worried about protecting students’ privacy; and he’s worried about automation bias.

Regulators around the world wrestling with AI praise approaches where it is used to augment human decision-making instead of replacing it. But it’s difficult for laws to account for automation bias — humans placing too much trust in machines.

The American Federation of Teachers created an AI working group in October 2023 to propose guidance on how educators should use the technology or talk about it in collective bargaining contract negotiations. Freitas said those guidelines are due out in the coming weeks.

“We’re trying to provide guidelines for educators to not solely rely on (AI), he said. “It should be used as a tool, and you should not lose your critical analysis of what it’s producing for you.” 

State AI guidelines for teachers

Goyette, the computer science coordinator for the education department, helped create state AI guidelines and speaks to county offices of education for in-person training on AI for educators. She also helped create an online AI training series for educators. She said the most popular online course is about workflow and efficiency, which shows teachers how to automate lesson planning and grading.

“Teachers have an incredibly important and tough job, and what’s most important is that they’re building relationships with their students,” she said. “There’s decades of research that speaks to the power of that, so if they can save time on mundane tasks so that they can spend more time with their students, that’s a win.”

Alex Kotran, chief executive of the AI Education Project, an education nonprofit supported by Google and OpenAI, said his organization has found that it’s hard to design a language model to predictably match how a teacher grades papers.

He spoke with teachers willing to accept a model that’s accurate 80% of the time in order to reap the reward of time saved, but he thinks it’s probably safe to say that a student or parent would want to make sure an AI model used for grading is even more accurate.

Kotran thinks it makes sense for school districts to adopt a policy that says teachers should be wary any time they use AI tools that can have disparate effects on students’ lives.

Even with such a policy, teachers can still fall victim to trusting AI without question. And even if the state kept track of AI used by school districts, there’s still the possibility that teachers will purchase technology for use on their personal computers.

Kotran said he routinely speaks with educators across the U.S. and is not aware of any systematic studies to verify the effectiveness and consistency of AI for grading English papers.

When teachers can’t tell if they’re cheating

Roberts, the Point Loma High School teacher, describes herself as pro-technology.

She regularly writes and speaks about AI. Her experiences have led her to the opinion that grading with AI is what’s best for her students, but she didn’t arrive at that conclusion easily.

At first she questioned whether using AI for grading and feedback could hurt her understanding of her students. Today she views using AI like a cross-country coach riding alongside student athletes in a golf cart — an aid that helps her assist her students better.

Roberts says the average high school English teacher in her district has roughly 180 students. Grading and feedback can take five to 10 minutes per assignment, she says — 15 to 30 hours of work for a single assignment across all her classes — so between teaching, meetings and other duties, it can take two to three weeks to get feedback back into the hands of students unless a teacher decides to give up large chunks of their weekends. With AI, it takes Roberts a day or two.

Ultimately she concluded that “if my students are growing as writers, then I don’t think I’m cheating.” She says AI reduces her fatigue, giving her more time to focus on struggling students and giving them more detailed feedback.

“My job is to make sure you grow, and that you’re a healthy, happy, literate adult by the time you graduate from high school, and I will use any tool that helps me do that, and I’m not going to get hung up on the moral aspects of that,” she said. “My job is not to spend every Saturday reading essays. Way too many English teachers work way too many hours a week because they are grading students the old-fashioned way.”

Roberts also thinks AI might be a less biased grader in some instances than human teachers, who sometimes adjust their grading to give students the benefit of the doubt — or to be punitive if a student has been particularly annoying in class recently.

She isn’t worried about students cheating with AI, a concern she characterizes as a moral panic. She points to a Stanford University study released last fall which found that students cheated just as much before the advent of ChatGPT as they did a year after the release of the AI. 

Goyette said she understands why students question whether some AI use by teachers is like cheating. Education department AI guidelines encourage teachers and students to use the technology more. What’s essential, Goyette said, is that teachers discuss what ethical use of AI looks like in their classroom, and convey that — like using a calculator in math class — using AI is accepted or encouraged for some assignments and not others. 

For the last assignment of the year, Roberts has one final experiment to run: having students edit an essay written entirely by AI. They must change at least 50% of the text, make it 25% longer, write their own thesis and add quotes from classroom reading material. The idea, she said, is to prepare them for a future where AI writes the first draft and humans edit the results to fit their needs.

“It used to be you weren’t allowed to bring a calculator into the SATs and now you’re supposed to bring your calculator so things change,” she said. “It’s just moral panic. Things change and people freak out and that’s what’s happening.”

For the record: An earlier version of this story misnamed the AI tool made by the company Quill. Quill is both the name of the company and the tool. 

Opinion: New Database Features 250 AI Tools That Can Enhance Social Science Research
https://www.the74million.org/article/new-database-features-250-ai-tools-that-can-enhance-social-science-research/
Wed, 03 Jul 2024 11:00:00 +0000

This article was originally published in The Conversation.

AI – or artificial intelligence – is often used as a way to summarize data and improve writing. But AI tools also represent a powerful and efficient way to analyze large amounts of text to search for patterns. In addition, AI tools can assist with developing research products that can be shared widely. 

It’s with that in mind that we, as researchers in social science, developed a new database of AI tools for the field. In the database, we compiled information about each tool and documented whether it was useful for literature reviews, data collection and analyses, or research dissemination. We also provided information on the costs, logins and plug-in extensions available for each tool.

When asked about their perceptions of AI, many social scientists express caution or apprehension. In a sample of faculty and students from over 600 institutions, only 22% of university faculty reported that they regularly used AI tools.

From combing through lengthy transcripts or text-based data to writing literature reviews and sharing results, we believe AI can help social science researchers – such as those in psychology, sociology and communication – as well as others get the most out of their data and present it to a wider audience.

Analyze text using AI

Qualitative research often involves poring over transcripts or written language to identify themes and patterns. While this kind of research is powerful, it is also labor-intensive. The power of AI platforms to sift through large datasets not only saves researchers time, but it can also help them analyze data that couldn’t have been analyzed previously because of the size of the dataset.

Specifically, AI can assist social scientists by identifying potential themes or common topics in large, text-based data that scientists can interrogate using qualitative research methods. For example, AI can analyze 15 million social media posts to identify themes in how people coped with COVID-19. These themes can then give researchers insight into larger trends in the data, allowing us to refine criteria for a more in-depth, qualitative analysis.
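
To make that concrete, here is a minimal, hypothetical sketch of the theme-finding step using scikit-learn topic modeling; the sample posts, the number of themes and the parameter choices are invented for illustration and are not drawn from the studies described above.

```python
# Minimal sketch: surfacing candidate themes in a collection of posts
# with scikit-learn topic modeling (invented sample data and settings).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

posts = [
    "Daily walks kept me sane during lockdown",
    "Video calls with family helped me cope with isolation",
    "Baking bread became my quarantine stress relief",
    # in practice, millions of posts would be loaded here
]

# Turn the posts into TF-IDF features, dropping common English stop words
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

# Factor the document-term matrix into a small number of latent themes
model = NMF(n_components=3, random_state=0)
model.fit(X)

# Print each theme's top words for human review
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(model.components_):
    top_words = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"Theme {i}: {', '.join(top_words)}")
```

The keyword clusters a script like this prints are a starting point for the in-depth, qualitative analysis the authors describe, not a finished result.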

AI tools can also be used to adapt language and scientists’ word choice in research designs. In particular, AI can reduce bias by improving the wording of questions in surveys or refining keywords used in social media data collection. 

Identify gaps in knowledge

Another key task in research is to scan the field for previous work to identify gaps in knowledge. AI applications are built on systems that can synthesize text. This makes literature reviews – the section of a research paper that summarizes other research on the same topic – and writing processes more efficient.

Research shows that human feedback to AI, such as providing examples of simple logic, can significantly improve the tools’ ability to perform complex reasoning. With this in mind, we can continually revise our instructions to AI and refine its ability to pull relevant literature.

However, social scientists must be wary of fake sources – a big concern with generative AI. It is essential to verify any sources AI tools provide to ensure they come from peer-reviewed journals.

Share research findings

AI tools can quickly summarize research findings in a reader-friendly way by assisting with writing blogs, creating infographics and producing presentation slides and even images.

Our database contains AI tools that can also help scientists present their findings on social media. One tool worth highlighting is BlogTweet. This free AI tool allows users to copy and paste text from an article like this one to generate tweet threads and start conversations. 

Be aware of the cost of AI tools

Two-thirds of the tools in the database cost money. While our primary objective was to identify the most useful tools for social scientists, we also sought to identify open-source tools and curated a list of 85 free tools that can support literature reviews, writing, data collection, analysis and visualization efforts.

In our analysis of the cost of AI tools, we also found that many offer “freemium” access to tools. This means you can explore a free version of the product. More advanced versions of the tool are available through the purchase of tokens or subscription plans. 

For some tools, costs can be somewhat hidden or unexpected. For instance, a tool that seems open source on the surface may actually have rate limits, and users may find that they’ve run out of free questions to ask the AI. 

The future of the database

Since the release of the Artificial Intelligence Applications for Social Science Research Database on Oct. 5, 2023, it has been downloaded over 400 times across 49 countries. In the database, we found 131 AI tools useful for literature reviews, summaries or writing. As many as 146 AI tools are useful for data collection or analysis, and 108 are useful for research dissemination.

We continue to update the database and hope that it can aid academic communities in their exploration of AI and generate new conversations. The more that social scientists use the database, the more they can work toward consensus on ethical approaches to using AI in research and analysis.

Whistleblower: L.A. Schools’ Chatbot Misused Student Data as Tech Co. Crumbled
https://www.the74million.org/article/whistleblower-l-a-schools-chatbot-misused-student-data-as-tech-co-crumbled/
Mon, 01 Jul 2024 10:30:00 +0000

Just weeks before the implosion of AllHere, an education technology company that had been showered with cash from venture capitalists and featured in glowing profiles by the business press, America’s second-largest school district was warned about problems with AllHere’s product.

As the eight-year-old startup rolled out Los Angeles Unified School District’s flashy new AI-driven chatbot — an animated sun named “Ed” that AllHere was hired to build for $6 million — a former company executive was sending emails to the district and others that Ed’s workings violated bedrock student data privacy principles. 

Those emails were sent shortly before The 74 first reported last week that AllHere, with $12 million in investor capital, was in serious straits. A June 14 statement on the company’s website revealed a majority of its employees had been furloughed due to its “current financial position.” Company founder and CEO Joanna Smith-Griffin, a spokesperson for the Los Angeles district said, was no longer on the job. 

Smith-Griffin and L.A. Superintendent Alberto Carvalho went on the road together this spring to unveil Ed at a series of high-profile ed tech conferences, with the schools chief dubbing it the nation’s first “personal assistant” for students and leaning hard into LAUSD’s place in the K-12 AI vanguard. He called Ed’s ability to know students “unprecedented in American public education” at the ASU+GSV conference in April. 

Through an algorithm that analyzes troves of student information from multiple sources, the chatbot was designed to offer tailored responses to questions like “what grade does my child have in math?” The tool relies on vast amounts of students’ data, including their academic performance and special education accommodations, to function.

Meanwhile, Chris Whiteley, a former senior director of software engineering at AllHere who was laid off in April, had become a whistleblower. He told district officials, its independent inspector general’s office and state education officials that the tool processed student records in ways that likely ran afoul of L.A. Unified’s own data privacy rules and put sensitive information at risk of getting hacked. None of the agencies ever responded, Whiteley told The 74. 

“When AllHere started doing the work for LAUSD, that’s when, to me, all of the data privacy issues started popping up,” Whiteley said in an interview last week. The problem, he said, came down to a company in over its head and one that “was almost always on fire” in terms of its operations and management. LAUSD’s chatbot was unlike anything it had ever built before and — given the company’s precarious state — could be its last. 

If AllHere was in chaos and its bespoke chatbot beset by porous data practices, Carvalho was portraying the opposite. One day before The 74 broke the news of the company turmoil and Smith-Griffin’s departure, EdWeek Marketbrief spotlighted the schools chief at a Denver conference talking about how adroitly LAUSD managed its ed tech vendor relationships — “We force them to all play in the same sandbox” — while ensuring that “protecting data privacy is a top priority.”

In a statement on Friday, a district spokesperson said the school system “takes these concerns seriously and will continue to take any steps necessary to ensure that appropriate privacy and security protections are in place in the Ed platform.” 

“Pursuant to contract and applicable law, AllHere is not authorized to store student data outside the United States without prior written consent from the District,” the statement continued. “Any student data belonging to the District and residing in the Ed platform will continue to be subject to the same privacy and data security protections, regardless of what happens to AllHere as a company.” 

A district spokesperson, in response to earlier questioning from The 74 last week, said it was informed that Smith-Griffin was no longer with the company and that several businesses “are interested in acquiring AllHere.” Meanwhile Ed, the spokesperson said, “belongs to Los Angeles Unified and is for Los Angeles Unified.”

Officials in the inspector general’s office didn’t respond to requests for comment. The state education department “does not directly oversee the use of AI programs in schools or have the authority to decide which programs a district can utilize,” a spokesperson said in a statement.

It’s a radical turn of events for AllHere and the AI tool it markets as a “learning acceleration platform,” which were all the buzz just a few months ago. In April, Time Magazine named AllHere among the world’s top education technology companies. That same month, Inc. Magazine dubbed Smith-Griffin a global K-12 education leader in artificial intelligence in its Female Founders 250 list. 

Ed has been similarly blessed with celebrity treatment. 

“He’s going to talk to you in 100 different languages, he’s going to connect with you, he’s going to fall in love with you,” Carvalho said at ASU+GSV. “Hopefully you’ll love it, and in the process we are transforming a school system of 540,000 students into 540,000 ‘schools of one’ through absolute personalization and individualization.”

Smith-Griffin, who graduated from the Miami school district that Carvalho once led before going on to Harvard, couldn’t be reached for comment. Her LinkedIn page was recently deactivated, and parts of the company website have gone dark. Attempts to reach AllHere were also unsuccessful.

‘The product worked, right, but it worked by cheating’

Smith-Griffin, a former Boston charter school teacher and family engagement director, founded AllHere in 2016. Since then, the company has primarily provided schools with a text messaging system that facilitates communication between parents and educators. Designed to reduce chronic student absences, the tool relies on attendance data and other information to deliver customized, text-based “nudges.” 

The work that AllHere provided the Los Angeles school district, Whiteley said, was on a whole different level — and the company wasn’t prepared to meet the demand and lacked expertise in data security. In L.A., AllHere operated as a consultant rather than a tech firm that was building its own product, according to its contract with LAUSD obtained by The 74. Ultimately, the district retained rights to the chatbot, according to the agreement, but AllHere was contractually obligated to “comply with the district information security policies.” 

 The contract notes that the chatbot would be “trained to detect any confidential or sensitive information” and to discourage parents and students from sharing with it any personal details. But the chatbot’s decision to share and process students’ individual information, Whiteley said, was outside of families’ control. 

In order to provide individualized prompts on details like student attendance and demographics, the tool connects to several data sources, according to the contract, including Welligent, an online tool used to track students’ special education services. The document notes that Ed also interfaces with the Whole Child Integrated Data stored on Snowflake, a cloud storage company. Launched in 2019, the Whole Child platform serves as a central repository for LAUSD student data designed to streamline data analysis to help educators monitor students’ progress and personalize instruction. 

Whiteley told officials the app included students’ personally identifiable information in all chatbot prompts, even in those where the data weren’t relevant. Prompts containing students’ personal information were also shared with other third-party companies unnecessarily, Whiteley alleges, and were processed on offshore servers. Seven out of eight Ed chatbot requests, he said, are sent to places like Japan, Sweden, the United Kingdom, France, Switzerland, Australia and Canada. 

Taken together, he argued the company’s practices ran afoul of data minimization principles, a standard cybersecurity practice that maintains that apps should collect and process the least amount of personal information necessary to accomplish a specific task. Playing fast and loose with the data, he said, unnecessarily exposed students’ information to potential cyberattacks and data breaches and, in cases where the data were processed overseas, could subject it to foreign governments’ data access and surveillance rules. 

Chatbot source code that Whiteley shared with The 74 outlines how prompts are processed on foreign servers by a Microsoft AI service that integrates with ChatGPT. The LAUSD chatbot is directed to serve as a “friendly, concise customer support agent” that replies “using simple language a third grader could understand.” When querying the simple prompt “Hello,” the chatbot provided the student’s grades, progress toward graduation and other personal information. 
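
Data minimization, in practice, means attaching to each prompt only the fields the question at hand requires. The sketch below is a hypothetical illustration of that principle, written to contrast with the behavior Whiteley describes; the field names and allow-list are invented for demonstration and are not taken from AllHere’s code.

```python
# Hypothetical sketch of data minimization: a chatbot backend receives
# only the fields a specific question type requires, never the full record.
FIELDS_NEEDED = {
    "attendance_question": {"student_id", "attendance_rate"},
    "grade_question": {"student_id", "math_grade"},
}

def minimize(record: dict, question_type: str) -> dict:
    """Return a copy of the record containing only allow-listed fields."""
    allowed = FIELDS_NEEDED.get(question_type, {"student_id"})
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "student_id": "12345",
    "math_grade": "B+",
    "attendance_rate": 0.93,
    "iep_status": "active",   # special education data: not needed for grades
    "home_address": "...",    # never needed by the chatbot
}

prompt_context = minimize(record, "grade_question")
# prompt_context == {"student_id": "12345", "math_grade": "B+"}
# Only the filtered context accompanies the model prompt. By Whiteley's
# account, Ed instead attached identifying data to every prompt.
```

Under a pattern like this, a bare “Hello” would carry no grades or graduation data at all, because no question type requested them.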

AllHere’s critical flaw, Whiteley said, is that senior executives “didn’t understand how to protect data.” 

“The issue is we’re sending data overseas, we’re sending too much data, and then the data were being logged by third parties,” he said, in violation of the district’s data use agreement. “The product worked, right, but it worked by cheating. It cheated by not doing things right the first time.”

In a 2017 policy bulletin, the district notes that all sensitive information “needs to be handled in a secure way that protects privacy,” and that contractors cannot disclose information to other parties without parental consent. A second policy bulletin, from April, outlines the district’s authorized use guidelines for artificial intelligence, which notes that officials, “Shall not share any confidential, sensitive, privileged or private information when using, prompting or communicating with any tools.” It’s important to refrain from using sensitive information in prompts, the policy notes, because AI tools “take whatever users enter into a prompt and incorporate it into their systems/knowledge base for other users.” 

“Well, that’s what AllHere was doing,” Whiteley said. 

L.A. Superintendent Alberto Carvalho (Getty Images)

‘Acid is dangerous’

Whiteley’s revelations present LAUSD with its third student data security debacle in the last month. In mid-June, a threat actor known as “Sp1d3r” began to sell for $150,000 a trove of data it claimed to have stolen from the Los Angeles district on Breach Forums, a dark web marketplace. LAUSD told Bloomberg that the compromised data had been stored by one of its third-party vendors on the cloud storage company Snowflake, the repository for the district’s Whole Child Integrated Data. The Snowflake data breach may be one of the largest in history. The threat actor claims that the L.A. schools data in its possession include student medical records, disability information, disciplinary details and parent login credentials. 

The chatbot interacted with data stored by Snowflake, according to the district’s contract with AllHere, though any connection between AllHere and the Snowflake data breach is unknown. 

In its statement Friday, the district spokesperson said an ongoing investigation has “revealed no connection between AllHere or the Ed platform and the Snowflake incident.” The spokesperson said there was no “direct integration” between Whole Child and AllHere and that Whole Child data was processed internally before being directed to AllHere.

The contract between AllHere and the district, however, notes that the tool should “seamlessly integrate” with the Whole Child Integrated Data “to receive updated student data regarding attendance, student grades, student testing data, parent contact information and demographics.”

Earlier in the month, a second threat actor known as Satanic Cloud claimed it had access to tens of thousands of L.A. students’ sensitive information and had posted it for sale on Breach Forums for $1,000. In 2022, the district was victim to a massive ransomware attack that exposed reams of sensitive data, including thousands of students’ psychological evaluations, to the dark web. 

With AllHere’s fate uncertain, Whiteley blasted the company’s leadership and protocols.

“Personally identifiable information should be considered acid in a company and you should only touch it if you have to because acid is dangerous,” he told The 74. “The errors that were made were so egregious around PII, you should not be in education if you don’t think PII is acid.” 

Opinion: Generative Artificial Intelligence May Help Teachers. Does It Work for Students?
https://www.the74million.org/article/generative-artificial-intelligence-may-help-teachers-does-it-work-for-students/
Thu, 27 Jun 2024 10:30:00 +0000

The public release of ChatGPT in November 2022 sparked a wave of fear and excitement among educators. While some expressed hesitation about the ability of generative artificial intelligence to make cheating undetectable, others pointed to its potential to provide real-time, personalized support for teachers and students, making differentiated learning finally seem possible after decades of unmet promises.

Today, that potential has begun to come to fruition. Recent national survey data indicate 18% of teachers have used genAI, mostly to support differentiated lesson planning, and 56% of educators believe its use in schools will continue to grow. Increasingly, districts are introducing students to this technology, with products like Khanmigo — which provides individualized tutoring — already being adopted in Indiana, Florida and New Jersey. And students are experimenting with it outside the classroom as well. According to a recent survey, approximately half of 14- to 22-year-olds report having used genAI at some point.

But rapid changes in technology and the speed of adoption are far outpacing the field’s understanding of impacts on teaching and learning. Every day there is a new story about an exciting AI-related development, but given the time it takes to conduct careful evaluation, very limited evidence exists about whether any of these tools have positive benefits for students. As schools start facing hard choices about where to spend their resources in response to continued learning gaps and the ESSER funding cliff, it’s important to take a look at what we know about the impact of genAI on education and what more we need to learn. 

What we know

Educators spend about 46% of their time on tasks that don’t directly involve teaching, ranging from taking attendance and submitting reports to giving written feedback to students. GenAI tools hold promise for speeding up and even automating these tasks, saving time that could be spent building meaningful relationships and deepening learning. For example, researchers from UC Irvine found that teachers in California and North Carolina who used the genAI product Merlyn Mind, which automates test question creation and lesson planning, reported spending less time on administrative tasks and more on teaching and learning after seven weeks of use compared to educators without access to the tool. And about 44% of teachers who have used genAI agree the technology has made their job easier.

To date, however, most of these findings rely on anecdotal reports. To quantify the impact of genAI on time saved, the field needs more rigorous evidence — such as through randomized controlled trials — to not only gauge the impact on administrative burden but to explore whether these tools help improve teaching quality. 

A separate body of research is finding that genAI-based coaching tools, which aim to give regular, impartial, real-time feedback in a cost-effective way, can have small effects on targeted teacher practices. For example, researchers at Stanford and the University of Maryland developed “M-Powering Teachers,” an automated coaching tool that uses natural language processing to give educators feedback. Across two randomized controlled trials, the tool was shown to reduce teacher-directed talk, increase student contributions and improve completion of assignments. Another study found that feedback provided via TeachFX, an app that uses voice AI to assess key indicators of classroom quality, increased teachers’ use of focusing questions that probe students’ thinking by 20%.

Another randomized controlled trial found a genAI-enabled coaching tool that provided targeted feedback increased the quality of math tasks assigned to students and created a more coherent learning environment. Perhaps more impressive, the feedback resulted in a small positive improvement in students’ knowledge of ratios and proportional relationships, the area it focused on. 

These studies show early promise, but the impacts they found have been small. As AI-enabled coaching products for teachers start to expand to more classrooms, more evaluation is needed to better understand the potential of genAI to truly improve teaching and, ultimately, student learning. 

What we need to learn

Despite early evidence that AI has the potential to make teachers’ jobs a bit easier and professional development more effective, the jury is still out on whether having students interact directly with genAI can improve academic and social-emotional outcomes. These technologies, especially in education, are changing rapidly, making rigorous studies challenging. This point was recently made by the Alliance for Learning Innovation in calling on Congress to budget almost $3 billion to address the issue.

While some tools — like Khan Academy’s Khanmigo (which has received funding from Overdeck Family Foundation) — are based on evidence that personalized learning can support better outcomes for some students, and some emerging research indicates that hybrid AI-human tutoring may boost achievement, it is not yet clear whether genAI tools themselves can strengthen and supplement student learning. As these types of products move into classrooms, there is a clear need for families, educators and policymakers to demand proof that they improve outcomes and do not unintentionally harm students most in need of effective support by providing incorrect guidance and feedback. 

This is an exciting moment for education, with transformative technology finding its way into all our lives in a way that hasn’t been seen since the introduction of smartphones. Yet much research on genAI does not consider the types of ed tech products schools are actually buying. Instead, it comes from lab-based studies and tools that are not actually used or tested in the classroom. 

Now is the time — before these technologies become pervasive — to rigorously evaluate what is being sold into and used in schools. The goal of educators should always be to ensure that students have the most effective tools for learning, not merely those with the best sales pitch.

Turmoil Surrounds Los Angeles’ New AI Student Chatbot; Tech Firm Furloughs Staff
https://www.the74million.org/article/turmoil-surrounds-las-new-ai-student-chatbot-as-tech-firm-furloughs-staff-just-3-months-after-launch/
Wed, 26 Jun 2024 23:32:25 +0000

The future of Los Angeles Unified School District’s heavily hyped $6 million artificial intelligence chatbot was uncertain after the tech firm the district hired to build the tool shed most of its employees and its founder left her job.

Boston-based AllHere Education, founded in 2016 by Harvard grad and former teacher Joanna Smith-Griffin, figured heavily in LAUSD’s March 20 launch of Ed, an AI-powered online tool for students and parents designed to supplement classroom instruction and help families navigate attendance, grades and other key issues.

But on June 14, AllHere furloughed the majority of its employees due to its “current financial position,” according to a statement posted on its website. A statement from LAUSD sent to The 74 said AllHere now is up for sale.  

But even before the surprise announcement, AllHere was already having trouble fulfilling its contract with LAUSD, according to one former high-ranking company executive. 

 LAUSD Board materials for the district’s contract with AllHere. 

The company was unable to push back against the district’s timeline, he said, and couldn’t produce a “proper product.”

An LAUSD spokesperson said the district is aware of Smith-Griffin’s departure and that “several educational technology companies are interested in acquiring AllHere.” 

“The educational technology field is a dynamic space where acquisitions are not uncommon,” the spokesperson said via email. “We will ensure that whichever entity acquires AllHere will continue to provide this first-of-its-kind resource to our students and families.”

Smith-Griffin and AllHere did not respond to requests for comment. The former CEO has taken down her LinkedIn profile. Portions of the AllHere website have also disappeared, including the company’s “About Us” page.

James Wiley, a vice president at the education market research firm ListEdTech, said turmoil at AllHere could be a red flag for LAUSD’s AI program if the district hasn’t taken steps to protect itself from changes at the company.   

“It could be a problem,” said Wiley. “It depends on how much of the program the district has been able to bring in-house, as opposed to leaving with the vendor.”

Wiley also expressed surprise that LAUSD contracted with a relatively small and untested firm such as AllHere for its Ed rollout, as opposed to enlisting a major AI company or a larger ed tech firm for the job.

“You have bigger players out there who could have done this thing,” said Wiley.

Outside of Los Angeles, the company has offered districts a text messaging system that allows schools to inform families about weather events and other announcements. 

According to GovSpend, which tracks government contracts with companies, AllHere has already been paid more than $2 million by LAUSD. The company has had much smaller contracts with other districts, according to GovSpend, including a $49,390 payment from Brownsville Independent School District in Texas and a similar-sized payment from Broward County Public Schools in Florida. 

But AllHere’s star had been ascendant. 

With backing from the Harvard Innovation Lab, Smith-Griffin raised more than $12 million to start the new company. AllHere in April was named one of the world’s top ed tech companies by TIME. 

The LAUSD school board last June approved a competitively bid $6.2 million contract for AllHere to plan, design and develop the district’s new AI tool, Ed. The deal began with a two-year agreement ending in July 2025, with options for three subsequent one-year renewals.  

Smith-Griffin appeared with LAUSD superintendent Alberto Carvalho in April to discuss the project, which the district’s leader described as a game-changer for LAUSD that represented the first time a school district had systematically leveraged AI.

The former AllHere executive, who was recently laid off, said in an interview that the company’s work with LAUSD was far more involved than its work for other school district customers.

The small company was being asked to create a far more sophisticated tool than its prior text messaging system and bit off more than it could chew in its contract with the nation’s second-largest district. 

At the same time, he said, AllHere employees operated more as consultants than as a company building its own product and were unable “to say no or to slow things down” with the district.

“So I think because of that, they were unable or unwilling to build a proper product,” he said. 


With reporting and contributions from Mark Keierleber and Greg Toppo

Homeschoolers Embrace AI, Even As Many Educators Keep It at Arms’ Length https://www.the74million.org/article/homeschoolers-embrace-ai-even-as-many-educators-keep-it-at-arms-length/ Tue, 25 Jun 2024 10:30:00 +0000 https://www.the74million.org/?post_type=article&p=727604 Like many parents who homeschool their children, Jolene Fender helps organize book clubs, inviting students in her Cary, North Carolina, co-op to meet for monthly discussions.

But over the years, parents have struggled to find good opening questions. 

“You’d search [the Internet], you’d go on Pinterest,” she said. “A lot of the work had to be done manually, or you had to do a lot more digging around.”


Then came ChatGPT, OpenAI’s widely used artificial intelligence bot. For Fender, it was a no-brainer to query it for help developing deep opening questions.

The chatbot and other AI tools like it have found an eager audience among homeschoolers and microschoolers, with parents and teachers readily embracing them as brainstorming and management tools, even as public schools take a more cautious approach, often banning them outright.

A few observers say AI may even make homeschooling more practical, opening it up to busy parents who might have balked previously.

“Homeschoolers have always been unconstrained in their ability to combine technology — any kind of tech,” said Alex Sarlin, a longtime technology analyst and co-host of the EdTech Insiders podcast. 

Homeschoolers have always been unconstrained in their ability to combine technology — any kind of tech.

Alex Sarlin, co-host of EdTech Insiders

The reasons are readily apparent, he said: Home internet service typically doesn’t block key websites the way most schools do. Families can more easily manage data privacy and get the digital tools they want without fuss. They’re basically able to ignore “all the dozen reasons why everything falls apart when you try to sell to schools,” Sarlin said. 

Persuading homeschoolers to try out new things is also a lot simpler: If a student and parents like a tool, “There’s nobody else you have to convince.”

Indeed, a September survey by the curriculum vendor Age of Learning found that 44% of homeschool educators reported using ChatGPT, compared to 34% of classroom educators.

“Not everyone is using it, but some are very excited about it,” said Amir Nathoo, co-founder of Outschool, an online education platform.

The most interesting uses he has seen are by gifted and neurodiverse homeschoolers, who often use chatbots to explore complex topics like advanced math and science, philosophy and even ethics, which they wouldn’t ordinarily have access to at a young age. They ask it to provide simple explanations of advanced topics, such as relativity and quantum mechanics, then pursue them on their own. “They’re able to go on a relatively unstructured exploration, which is often the best way that kids learn.”

They're able to go on a relatively unstructured exploration, which is often the best way that kids learn.

Amir Nathoo, Outschool

Alternatively, he said, kids whose ability to express themselves is limited can also benefit from what many consider the non-judgmental qualities of tools like ChatGPT. 

Peer-to-peer learning

Tobin Slaven, cofounder of Acton Academy, a self-paced, independent microschool in Fort Lauderdale, said he’s been experimenting with AI tools for the past year or so and is excited by what he’s seen. “This is what the future looks like to me,” he said.

This is what the future looks like to me.

Tobin Slaven, cofounder of Acton Academy

Like many educators, he sees the problems inherent in AI tools like ChatGPT, which on occasion “hallucinate” with incorrect information and can sometimes be downright creepy. These concerns have stopped many families from fully embracing AI.

But Slaven can’t support banning it outright. Instead, he’ll offer a student his own device with ChatGPT loaded in a browser window. Because he can see the queries and results the entire time, he can review the sessions for inappropriate content.

Lately, Slaven and his students have been playing with an AI tool called Pathfinder that helps them create and develop projects. Designed by a small, two-person UK-based startup, it’s set up like a simple chatbot that asks students what they want to learn about. It elicits information, much like a Socratic guide, about their prior knowledge and how they’d like to explore the topic. Then it searches the Internet for appropriate resources and returns suggestions on what to do next. 

Pathfinder uses OpenAI’s GPT-4 large language model and its own algorithm to rank resources based on how relevant they are to an individual learner, said co-founder Amaan Ahmad. That ranking draws on how the learner learns best, what they’re interested in and what they already know.
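
Ahmad didn’t detail the ranking algorithm, but the general approach he describes, scoring each resource against a learner’s interests, prior knowledge and preferred formats, can be sketched in a few lines of Python. The field names, weights and scoring formula below are illustrative assumptions, not Pathfinder’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Learner:
    interests: set          # topics the student cares about
    known_topics: set       # material already covered
    preferred_formats: set  # e.g. {"video", "article"}

@dataclass
class Resource:
    title: str
    topics: set
    fmt: str                # "video", "article", ...
    difficulty: int         # 1 (introductory) to 5 (advanced)

def relevance(resource: Resource, learner: Learner, target_difficulty: int = 2) -> float:
    """Toy relevance score: topical overlap, novelty, format match, difficulty fit."""
    overlap = len(resource.topics & learner.interests)     # matches interests
    novelty = len(resource.topics - learner.known_topics)  # offers something new
    format_bonus = 1.0 if resource.fmt in learner.preferred_formats else 0.0
    difficulty_fit = 1.0 / (1 + abs(resource.difficulty - target_difficulty))
    return 2.0 * overlap + novelty + format_bonus + difficulty_fit

def rank(resources: list, learner: Learner) -> list:
    """Return resources sorted from most to least relevant for this learner."""
    return sorted(resources, key=lambda r: relevance(r, learner), reverse=True)
```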

Amaan Ahmad 

After a number of students in a homeschool group or class have worked with it long enough, it can even begin recommending classmates or friends to consult about how they’re approaching the same topic.

“My AI can talk to your AI and say, ‘Hey, Greg crushed that last week. Why don’t you go speak to him and develop your project together?’” he said. 

Slaven tried out Pathfinder with a group of students recently and found that even during a brief trial run, it allowed them to better conceptualize their projects. 

With the tool asking them questions about their preferred topic, they were able to move from general inquiries about their interests, such as horseback riding or space exploration, to more advanced ones that explore the topics more deeply. That goes a long way toward helping students become more independent and responsible for their own learning, a key goal of microschooling and homeschooling.

A student works on a laptop at Acton Academy, a self-paced, independent microschool in Fort Lauderdale, Fla. (Courtesy of Acton Academy)

Slaven believes, more broadly, that AI co-pilots configured to students’ interests and preferences will enable personalized learning at scale. It’ll become the norm that everyone has a collaborative AI partner that will, in time, understand how each student performs best and under what conditions. “It’s eventually going to become their preferred resource,” he said.

Making homeschooling more accessible

Ahmad, the Pathfinder co-founder, said AI holds the possibility of helping endeavors like microschooling and homeschooling become more practical. Access to reliable, safe AI agents means that an individual student isn’t restricted to what a parent or teacher knows.

Giving that autonomy with a bit of guidance helps make learning much more impactful, he said. “It’s very difficult to do that in real time because with one adult and one kid, you can’t always be by their side. And if you have a microschool with 12 to 16 kids, that’s even more time-consuming.” 

For Fender, the North Carolina homeschooling mother, one of the most helpful aspects of AI is that it helps parents organize what can often be a chaotic, free-form learning environment. 

Fender subscribes to a type of homeschooling known as “unschooling,” which seeks to teach students to be more self-directed and independent than in most public schools. Her kids’ lessons are “very much interest-led” and her small co-op has grown in recent years. 

But she must also persuade state bureaucrats that she’s providing an adequate education. So she and a few other homeschool parents in Cary rely on a website that uses AI to document what activities their kids have done and automatically match them to the relevant North Carolina educational standards. “I thought that was a genius tool,” she said, and one that allows stressed, busy parents to build a comprehensive portfolio for annual state reviews and high school transcripts.

Fender also uses ChatGPT for brainstorming. In a recent case, which she shared on Instagram, Fender asked the AI for 50 real-life applications for the Pythagorean theorem. It generated a list that included designing ramps or stairs, planning optimal pathways in garden design and building efficient roller coasters. 

An image from homeschooling mother Jolene Fender’s Instagram account, in which she queries ChatGPT for real-life applications of the Pythagorean theorem. (Instagram screen capture)

Last year, she recalled, one of her daughters was creating Christmas cards for a homeschool craft fair and “wanted to have fun puns in the cards.” Fender explained how to craft an AI prompt — and how to sift through the chaff. Her daughter eventually asked ChatGPT for 50 different Christmas-themed puns and ended up using about 10 to 15. 

Like most parents, Fender has read about the downsides of AI but believes schools are short-sighted to limit its use. 

“Why are you banning a tool that is definitely here to stay?” she said. “Maybe we don’t understand all the ins and outs, but at the end of the day, our goal is to prepare kids for the jobs of the future. And a lot of these jobs of the future, we don’t even know what they are.”

Opinion: Call to Action: This Summer, Target Deepfakes that Victimize Girls in Schools https://www.the74million.org/article/call-to-action-this-summer-target-deepfakes-that-victimize-girls-in-schools/ Tue, 11 Jun 2024 18:41:49 +0000 https://www.the74million.org/?post_type=article&p=728311 School’s almost out for summer. But there’s no time for relaxing: Kids, especially girls, are becoming victims of fabricated, nonconsensual, sexually explicit images, often created by peers. These imaginary girls are upending the lives of the real ones. The coming summer break provides the opportunity for coordinated action at the state level to disrupt this trend and protect children.

The creation of deepfakes — highly realistic but artificial images, audio and video — used to require high-powered equipment and considerable skill. Now, with advancements in generative artificial intelligence, any kid with a smartphone can make one. 

Adolescents, mostly teenage boys, are exploiting readily accessible deepfake tools to create graphic images and videos of female classmates, causing profound distress and disruption in schools across the country, from Beverly Hills, California, to Westfield, New Jersey. High school students outside Seattle were photographed at a school dance by a classmate, who then “undressed” them on his phone and circulated supposedly nude pictures.


The impact could be significant. Experts report that so-called deepnudes can hurt victims’ mental health and their physical and emotional safety, as well as their college and job opportunities. Comprehensive data is lacking, but documented incidents indicate that this is a troubling trend that demands immediate attention.

While anti-child pornography statutes, Title IX regulations regarding online harassment and revenge-porn laws already exist, these measures were not designed to handle the unique challenges posed by deepfake technology. 

Schools, educators and law enforcement are scrambling to respond to this new phenomenon. In some cases, students have been harshly disciplined, but arresting 13- and 14-year-old boys for engaging in impulsive behavior on phones their parents have handed them is not an appropriate, just or sustainable solution. It is incumbent upon adults to make the technological world safe for children.

The Biden administration has rightly called on technology companies, financial institutions and other businesses to limit websites and mobile apps whose primary business is to create, facilitate, monetize or disseminate image-based sexual abuse. But these steps are largely symbolic and will result in voluntary commitments that are likely unenforceable. 

The U.S. Department of Education is scheduled to release guidance on this matter, but its track record of issuing timely — and, frankly, practical — information is underwhelming. 

It’s also impractical to rely on slow-moving legislative processes that get caught up in arguments about accountability for offending images when students’ well-being is at stake. As any school leader can tell you, laws only go so far in deterring behavior, and the legislation ambling through Congress doesn’t address how K-12 institutions should respond to these incidents.

So, where does that leave us?

Educators need support and guidance. Schools have a critical role to play, but to expect them to invent policies and educational programs that combat the malicious use of deepfakes and protect students from this emerging threat — absent significant training, resources and expertise — is not only a fool’s errand, but an unfair burden to place on educators. 

Communities, districts and schools need statewide strategies to prevent and deter deepfakes. States must use this summer to bring together school administrators, educators, law enforcement, families, students, local technology companies, researchers, community groups and other nonprofit organizations to deliver comprehensive policies and implementation plans by Labor Day. These should, among other things:

  • Recommend curriculum, instruction and training programs for school leaders and teachers about the potential misuses of artificial intelligence and deepfakes in school settings;
  • Update school-based cyber harassment policies and codes of conduct to include deepfakes;
  • Establish discipline policies to clarify accountability for students who create, solicit or distribute nonconsensual, sexual deepfake images of their peers;
  • Update procurement policies to ensure that any technology provider has a plan to interrupt or handle a deepfake incident;
  • Build or purchase education, curriculum and instruction for students and families on digital citizenship and the safe use of technology, including AI literacy and deepfakes;
  • Issue guidance for community institutions, including religious programs, small businesses, libraries and youth sports leagues, to promote prevention by addressing this issue head-on with teens who need to understand the damage deepfakes cause;
  • Issue detailed guidance about how schools must enforce Title IX, the federal law that bans sex discrimination, including sexual harassment, in schools.

Is this too ambitious for state government? Maybe. But there is no choice. As the grown-ups, and as citizens of a democracy, we have a collective responsibility to decide what kind of world we want our children to live in, and to take action, before it’s too late.

Opinion: Will AI Be Your Next Principal? Probably Not. But It’s Here to Stay https://www.the74million.org/article/will-ai-be-your-next-principal-probably-not-but-its-here-to-stay/ Mon, 03 Jun 2024 10:30:00 +0000 https://www.the74million.org/?post_type=article&p=727825 When I was a principal, if you had told me I would be working with artificial intelligence on a daily basis, I would have conjured visions of the Terminator and Skynet in my head. Fortunately, we’re not there (yet?), but the introduction of AI amplifies the risks and opportunities attached to school leaders’ decisions. Education leaders need to have forward-looking conversations about technology and its implications to ensure that public education is responsive both to what students need and to what the world is going to ask of them.

This year at SXSW EDU, I teamed up with The Leadership Academy to facilitate a conversation on the role of AI in education, specifically in relation to the principalship. The panelists discussed the potential benefits and challenges of embedding AI in schools and how it might impact the role of the principal. We also explored the implications of AI for equity and access in education. As education leaders come to terms with integrating AI into our schools, they need to consider these issues:

AI can help principals avoid burnout and focus on the “human” work.  

The role of the principal is currently unsustainable. In 2022, 85% of principals reported experiencing high levels of job-related stress, compared with 35% of the general working adult population. The risk of principal burnout has sweeping implications for the field. Principal turnover has a negative impact on teacher retention and is associated with decreased student achievement. AI can help make principals’ jobs more manageable and sustainable by saving them time and even automating administrative and analytic tasks.


That reclaimed time and those technical assets afford principals more bandwidth to focus on sophisticated, human-centered activities such as building relationships with their faculty and the community, and fostering a positive climate, which is a proven predictor of school effectiveness. AI offers an answer to a vital question posed during the panel by Kentwood, Michigan, Superintendent Kevin Polston: “If time is our most precious commodity, and humans are the most important value that we have in our organizations, how do you then create more time for your people to do those innately human things that change outcomes for kids?”

Education leaders must consider the risk of bias in design.

During our discussion, Nancy Gutierrez, executive director of The Leadership Academy, emphasized the importance of who is at the table in the design process. To illustrate the risks, she referred to sobering examples, such as the initial designs of self-driving cars being more likely to hit people with darker skin tones. In terms of education, she noted that teachers might use AI to design work that inadvertently reflects their biases about a student’s capabilities, based on that child’s identity. Bias in AI is simply a reflection of existing human biases, so district leaders and principals should redouble efforts against bias that might undermine students. Eva Mejia, an expert in design and innovation at IDEO, underscored how involving educators in the design process and increasing transparency could mitigate some of these risks and enhance innovation in schools.

The role of the principal must evolve in line with technological advancements, with a focus on leading change.

Schools must actively learn about and adopt AI, rather than being passive recipients, and principals must be prepared to lead this change effectively. Principals are drivers of school success, and AI is yet another means for them to foster innovation in their schools by modeling an exploratory mindset for students and adults. For example, principals can cultivate spaces where teachers and students feel free to work with AI out in the open, sharing best practices and pitfalls for the benefit of other educators. What might principals and teachers accomplish by testing and leveraging computing power to elevate academic rigor, rather than banning tools that are already integrated into the professional world?

Unfortunately, many school leaders are doing this work at a disadvantage. When I ask principals in urban districts why they have not done more to leverage AI in their schools, the most common answer is, “I just don’t have the time.” Too often, the folks who lead the schools with the greatest needs have the least time to be proactive. They fall behind because they do not have the bandwidth to capitalize on new opportunities or innovative solutions. District leaders must commit to investing in the resources — time and material — that principals need to create the conditions required for schools to remain current and competitive.  

Integrating AI into schools is not just about bringing in new technology. It is about rethinking what leadership looks like. Education leaders have the opportunity to use their expertise in school systems, learning and development to think about how AI can be used to close equity gaps, instead of widening them, and position principals to focus on what matters most — children.

One-Third of Teachers Have Already Tried AI, Survey Finds https://www.the74million.org/article/one-third-of-teachers-have-already-tried-ai-survey-finds/ Thu, 30 May 2024 10:30:00 +0000 https://www.the74million.org/?post_type=article&p=727770 One in three American teachers have used artificial intelligence tools in their teaching at least once, with English and social studies teachers leading the way, according to a RAND Corporation survey released last month. While the new technology isn’t yet transforming how kids learn, both teachers and district leaders expect that it will become an increasingly common feature of school life.

In all, two-thirds of respondents said they hadn’t used AI in their work, including 9 percent who reported they’d never heard of tools and products like OpenAI’s ChatGPT or Google’s Gemini. By contrast, 18 percent of participants said they regularly relied on such offerings, and 15 percent said they had tried them before but don’t intend to use them more regularly.

Melissa Kay Diliberti, a policy researcher at RAND and one of the report’s co-authors, said the current minority of users constitutes a “foothold” in schools that is poised to grow with time — and that has already expanded massively in the 17 months since ChatGPT was first unveiled to an unsuspecting public in November 2022.

“There seem to be a small number of people on the bandwagon, but the bandwagon is moving forward,” Diliberti said.

The poll, incorporating responses from a nationally representative sample of more than 1,000 teachers in 231 public school districts, offers the most recent data from a technological shift that has been trumpeted as revolutionary. The potential of AI to maximize teacher efficiency, individualize instruction for every pupil, and offer support to kids struggling with mental health problems has stoked a growing demand for new products that is quickly being met by major tech players like Google and Khan Academy.

The gleanings of broader public opinion research are somewhat diffuse, but there is reason to think that the level of AI take-up by teachers is comparable to, or even further along than, that of other professionals. In previous polls, similar minorities of lawyers (15 percent), journalists (28 percent), human resources staff (26 percent), and doctors (38 percent) have reported using AI in a variety of tasks. 

OpenAI CEO Sam Altman, whose company developed ChatGPT. (Getty Images)

And teachers’ outlook on the future is suggestive: Nearly all respondents who already use AI tools believe they will use it more in the 2024–25 school year than they do now, while 28 percent of non-users predicted they would eventually try them out. 

Use of artificial intelligence was roughly even across different kinds of schools, whether broken down by student demographics, poverty levels or rural/urban geography. By contrast, middle and high school teachers were almost twice as likely as their counterparts in elementary school to say they used AI (23 percent vs. 12 percent), and English and social studies instructors reported higher use than those in STEM disciplines (27 percent vs. 19 percent).

While cautioning against overinterpreting results in a relatively small sample, Diliberti reasoned that English and social studies teachers are also more likely to create or modify their own curricular materials, or source them from online marketplaces like Teachers Pay Teachers. Outsourcing some of those efforts — along with periodic non-instructional tasks, such as composing emails to parents or letters of recommendation to colleges — to AI could save hundreds of hours over the course of a school year.

“You could see where AI might be a way to ease the burden of a task they’re already doing,” she said. “That might be why these teachers appear to be more inclined to use AI than a math teacher, who could be more tightly focused on a given curriculum that’s used throughout the school.”

Among teachers regularly using AI, close to half said they did so to generate classroom assignments or worksheets (40 percent), lesson plans (41 percent), or assessments for students (49 percent). 

Establishing a ‘foothold’

Amanda Bickerstaff, CEO of AI for Education, a company that advises school districts on the use of artificial intelligence, said the RAND poll is notable for being “the first survey I’ve seen that seems representative of what is happening in schools.”

In training sessions she has conducted for tens of thousands of classroom teachers and administrators since last year, Bickerstaff said she and her colleagues have received a warm reception from audiences but encountered uneven awareness of what AI can accomplish. Early adopters might simply be tech enthusiasts, or they could be special education teachers hoping to make their instruction more accessible.

Curiosity about the new technology “is coming from the bottom-up as well as the top-down,” she observed. “One of the more interesting things is that we’re seeing more teachers using AI in schools than schools and districts teaching them to use it.”

Partly because guidance and professional development still trail teacher interest, a little under 10 percent of all survey respondents said they were seeking out AI tools on their own initiative. At present, the most commonly used products were popular platforms like Google Classroom, adaptive learning systems offered by Khan Academy and i-Ready, and the nearly ubiquitous chatbots.

Diliberti said she wasn’t surprised that incumbent players like Google and OpenAI, powered by billions of dollars in investment and promotion, have gained early primacy in the K–12 arena. But she added it was striking that lesser-known products specifically geared toward activities like lesson planning and assessment generation haven’t attracted the following of more multifunctional alternatives like ChatGPT.

“It’s notable that teachers seem to be using more generic tools instead of dedicated tools that were developed for this purpose,” she said.

Bickerstaff argued that the survey results demonstrated that teachers, increasingly finding their own way to AI, should be provided more training on the use of existing tools. Beyond that, she said, public and private actors should broaden access to more advanced versions of those tools, which are now available at subscription costs averaging about $20 per month, to allow teachers to gain a better understanding of their applications. 

“These tools make mistakes, they’re biased, and they require significant training to be able to use them. You need support on how to use the tools before you can get the best out of them.”

Case Study: How 2 Teachers Use AI Behind the Scenes to Build Lessons & Save Time https://www.the74million.org/article/case-study-how-2-teachers-use-ai-behind-the-scenes-to-build-lessons-save-time/ Mon, 15 Apr 2024 15:00:00 +0000 https://www.the74million.org/?post_type=article&p=725339 FRANKLIN SQUARE, NEW YORK — The sixth-graders learning about ancient Greek vases in their classroom at John Street School looked like students in nearly any other social studies class in the country. Wearing sweatpants and hoodies, they heard a short lesson about what the vases were used for and how they were decorated before breaking into small groups to ponder specific questions and fill out worksheets. 

But behind the scenes, preparing for the lesson was anything but typical for teachers Janice Donaghy and Jean D’Aurio. They had avoided the hours of preparation the lesson might normally have taken by using artificial intelligence to craft a plan that included a summary of ancient Greek vases, exit questions and student activities.

“Classroom preparation goes from hours to seconds” when using AI, said D’Aurio. In the past, the co-teaching pair had created lesson plans by scouring the school’s literacy closet to sift through printed materials, perusing the Teachers Pay Teachers online marketplace and exploring Instagram or TikTok accounts.


For this lesson, the two consulted the county’s curriculum guide but also used Canva, a tool that automatically generated pictures of Grecian vases. The teachers turned to Diffit, another AI application, to craft a reading passage that explained the importance of vases in everyday life in ancient Greece. Diffit also created alternative versions of the text so it would be appropriate for kids reading at different levels, wrote three multiple choice questions to test comprehension and prompted students to draw pictures to show they understood the lesson’s key points. The teachers added short-answer questions that students answered on Google Classroom, and they wrapped up the multi-week lesson by having students paint a design on an actual vase. 

A sixth-grader uses his iPad to study the different types of artwork on Grecian vases during a lesson at John Street School. Students had to choose a design that they would eventually attempt to re-create when they painted their own vase. (Wayne D’Orio)

“This is just touching the surface of what [AI] has the potential to do,” said Jared Bloom, the superintendent of Franklin Square School District, where both teachers work. “The promise really is to personalize learning, not just differentiate it, in a way that’s not taxing or exhausting for teachers. This could revolutionize education a year from now, as the tools get better and better.”

When ChatGPT was unveiled in late 2022, many educators saw the large language model chatbot as a shortcut students might use to complete — or cheat on — their homework. While it’s still unclear how the technology may ultimately affect schools, growing numbers of teachers are using various AI applications to help cut down on the work they do outside the classroom, from creating lessons to grading papers to emailing parents.

Teachers average about eight to 10 hours a week planning and doing administrative work, said Amanda Bickerstaff, CEO and co-founder of AI for Education, a company that advises districts on how to integrate artificial intelligence into their work. AI is a great way to find efficiencies and lessen that workload, she added.

In Franklin Square, a small K-6 district about 9 miles from John F. Kennedy Airport, the impetus to start using AI came from Bloom. Before the current school year, he highlighted various ways teachers could incorporate AI, from generating ideas for lesson plans to allowing students to use tools to enhance their work. In one example, after students studied houses that are shaped like cats, they created their own drawings. The teacher was then able to use AI tools to show the class how these buildings would look if they were constructed.

D’Aurio said she and Donaghy are “tech nerds” who were the first in their school to experiment with the new technology, and she’s noticed more teachers getting on board “a little at a time.” They use a variety of applications, including Diffit, which can create lesson plans from a few prompts. For instance, teachers can type in “ancient Greek vases” and a grade level, and the application takes less than 20 seconds to return an adapted reading passage, a summary, key vocabulary words, and multiple-choice, short-answer and open-ended questions. These elements can be edited and quickly added to activities for students to complete.

Users can also ask the technology to adapt existing text for students reading at different levels. “In one classroom, you could go from second-grade reading level to 10th grade,” Donaghy said.
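
Neither the teachers nor Diffit detailed what happens under the hood, but tools in this category typically wrap a large language model in a structured prompt. Here is a minimal sketch of that pattern using OpenAI’s Python client; the model name and prompt wording are assumptions for illustration, not Diffit’s implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def leveled_materials(topic: str, grade_level: int) -> str:
    """Ask a chat model for a reading passage and questions adapted to a grade level."""
    prompt = (
        f"Write a 200-word reading passage about {topic} at a grade "
        f"{grade_level} reading level. Then list five key vocabulary words "
        f"and three multiple-choice comprehension questions with answers."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(leveled_materials("ancient Greek vases", 6))
```

In a setup like this, changing the single grade_level argument is what lets one lesson serve readers from second grade through 10th.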

John Street School teacher Jean D’Aurio reviews lesson on Greek vases with a small group of students. (Wayne D’Orio)

Other companies help teachers create interactive slideshows, give writing feedback or generate images around various topics.

The inclusion class the pair co-teach contains both general and special education students. Donaghy said AI tools not only help her create materials that meet students’ individual learning plans, they can also track student progress in a variety of areas — a huge time-saver, because each student can have five or more individual goals.

AI really helped when a new student from El Salvador showed up speaking only Spanish, said Donaghy. The teachers used it to translate every classroom lesson for her, allowing her to understand assignments while she worked in both Spanish and English. 

Donaghy said it did take a little trial and error to understand how to best craft queries to get the desired output. But she encourages her peers to try the tools by telling them, “I know tech can be scary, but guys, this is easy.” 

While acknowledging that most teachers in the small district haven’t used these tools yet, Bloom said 87% told him at the beginning of this school year that they were interested in trying them out. “They’re intrigued,” he added. 

Bickerstaff said about 84% of people who use a smartphone or a computer interact with AI every day, often without realizing it. 

John Street School librarian Paige Chambers said she used AI while earning her master’s degree and was eager to see how it could help her at school. Chambers, who teaches media literacy and related lessons to students in her library, said she uses AI tools to help her find ideas for lesson plans. Because results are so quick, she added, it’s easy to modify prompts when they don’t return what she wants. 

She has uploaded YouTube videos to AI applications to get a summary of the videos’ contents as well as questions for the students to answer. The tools can also break down a lesson plan into step-by-step directions for her while offering sample projects for students to complete. 

Because these tools can build on an existing lesson, Chambers said, they can strengthen an idea she has by fleshing it out with additional suggestions.

Donaghy, D’Aurio and Chambers said they were aware that AI can sometimes hallucinate — make up facts — but reading through what the program creates can help avoid this problem. To stop misinformation, Chambers said she specifically asks these tools to let her know if the application doesn’t have any information about a particular topic. This can prevent them from inventing answers to her prompts.
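
Chambers’s safeguard amounts to a standing instruction attached to every prompt. A rough sketch of the idea, with invented wording rather than her exact phrasing:

```python
GUARD = (
    "If you do not have reliable information about this topic, say "
    "'I don't have information about that' rather than guessing."
)

def guarded_prompt(question: str) -> str:
    # Prepend the guard so the model is explicitly told to admit ignorance
    # instead of inventing an answer.
    return f"{GUARD}\n\n{question}"
```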

One area the teachers haven’t used AI for yet is helping to grade student work. This would require teachers to upload students’ writing into AI tools, which could breach the security of student information. 

Bloom said he expects technology upgrades to eventually solve this dilemma by creating tools that keep student work from being uploaded to the entire internet. “We’re not trying to remove teachers [from the grading process]. We just want students to get support in the moment. It could be like having a tutor on your shoulder.”

Donaghy said having a tool that checks whether her grading is accurate and fair and hews to a lesson’s rubric would be a big help.

“This is an exciting time for education,” Chambers said. AI “is getting better every day. It’s worlds different in just the half a year I’ve been using it.”

Texas Will Use Computers to Grade Written Answers on This Year’s STAAR Tests https://www.the74million.org/article/texas-will-use-computers-to-grade-written-answers-on-this-years-staar-tests/ Wed, 10 Apr 2024 12:30:00 +0000 https://www.the74million.org/?post_type=article&p=725110 This article was originally published in The Texas Tribune.

Students sitting for their STAAR exams this week will be part of a new method of evaluating Texas schools: Their written answers on the state’s standardized tests will be graded automatically by computers.

The Texas Education Agency is rolling out an “automated scoring engine” for open-ended questions on the State of Texas Assessment of Academic Readiness for reading, writing, science and social studies. The technology, which uses natural language processing like that behind artificial intelligence chatbots such as GPT-4, will save the state agency about $15 million to $20 million per year that it would otherwise have spent on hiring human scorers through a third-party contractor.

The change comes after the STAAR test, which measures students’ understanding of state-mandated core curriculum, was redesigned in 2023. The test now includes fewer multiple choice questions and more open-ended questions — known as constructed response items. After the redesign, there are six to seven times more constructed response items.


“We wanted to keep as many constructed open ended responses as we can, but they take an incredible amount of time to score,” said Jose Rios, director of student assessment at the Texas Education Agency.

Rios said TEA hired about 6,000 temporary scorers in 2023, but this year, it will need fewer than 2,000.

To develop the scoring system, the TEA gathered 3,000 responses that went through two rounds of human scoring. From this field sample, the automated scoring engine learns the characteristics of responses, and it is programmed to assign the same scores a human would have given.

This spring, as students complete their tests, the computer will first grade all the constructed responses. Then, a quarter of the responses will be rescored by humans.

When the computer has “low confidence” in the score it assigned, those responses will be automatically reassigned to a human. The same thing will happen when the computer encounters a type of response that its programming does not recognize, such as one using lots of slang or words in a language other than English.

“We have always had very robust quality control processes with humans,” said Chris Rozunick, division director for assessment development at the Texas Education Agency. With a computer system, the quality control looks similar.

Every day, Rozunick and other testing administrators will review a summary of results to check that they match what is expected. In addition to “low confidence” scores and responses that do not fit in the computer’s programming, a random sample of responses will also be automatically handed off to humans to check the computer’s work.
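
Taken together, the agency’s description implies a simple routing rule: a machine score stands only if the engine recognizes the response, is confident in its score and the response isn’t pulled for a random human audit. The sketch below is schematic; the threshold, audit rate and engine interface are invented for illustration, since TEA has not published its code.

```python
import random

CONFIDENCE_FLOOR = 0.8  # assumed threshold for "low confidence"
AUDIT_RATE = 0.25       # assumed share of responses rescored by humans

def route(response_text: str, engine) -> str:
    """Return 'machine' if the engine's score stands, 'human' if a person rescores."""
    score, confidence, recognized = engine.score(response_text)  # hypothetical API
    if not recognized:                 # e.g. heavy slang or non-English writing
        return "human"
    if confidence < CONFIDENCE_FLOOR:  # engine unsure of its own score
        return "human"
    if random.random() < AUDIT_RATE:   # random sample to check the engine's work
        return "human"
    return "machine"
```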

TEA officials have been resistant to the suggestion that the scoring engine is artificial intelligence. It may use technology similar to that of chatbots such as GPT-4 or Google’s Gemini, but the agency has stressed that the process will have systematic oversight from humans. It won’t “learn” from one response to the next, but will always defer to its original programming set up by the state.

“We are way far away from anything that’s autonomous or can think on its own,” Rozunick said.

But the plan has generated worry among educators and parents in a world still wary of the influence of machine learning, automation and AI.

Some educators across the state said they were caught by surprise at TEA’s decision to use automated technology — also known as hybrid scoring — to score responses.

“There ought to be some consensus about, hey, this is a good thing, or not a good thing, a fair thing or not a fair thing,” said Kevin Brown, the executive director for the Texas Association of School Administrators and a former superintendent at Alamo Heights ISD.

Representatives from TEA first mentioned interest in automated scoring in testimony to the Texas House Public Education Committee in August 2022. In the fall of 2023, the agency announced the move to hybrid scoring at a conference and during test coordinator training before releasing details of the process in December.

The STAAR test results are a key part of the accountability system TEA uses to grade school districts and individual campuses on an A-F scale. Students take the test every year from third grade through high school. When campuses within a district are underperforming on the test, state law allows the Texas education commissioner to intervene.

The commissioner can appoint a conservator to oversee campuses and school districts. State law also allows the commissioner to suspend and replace elected school boards with an appointed board of managers. If a campus receives failing grades for five years in a row, the commissioner is required to appoint a board of managers or close that school.

With the stakes so high for campuses and districts, there is a sense of uneasiness about a computer’s ability to score responses as well as a human can.

“There’s always this sort of feeling that everything happens to students and to schools and to teachers and not for them or with them,” said Carrie Griffith, policy specialist for the Texas State Teachers Association.

A former teacher in the Austin Independent School District, Griffith added that even if the automated scoring engine works as intended, “it’s not something parents or teachers are going to trust.”

Superintendents are also uncertain.

“The automation is only as good as what is programmed,” said Lori Rapp, superintendent at Lewisville ISD. School districts have not been given a detailed enough look at how the programming works, Rapp said.

The hybrid scoring system was already used on a limited basis in December 2023. Most students who take the STAAR test in December are retaking it after a low score. That’s not the case for Lewisville ISD, where high school students on an altered schedule test for the first time in December, and Rapp said her district saw a “drastic increase” in zeroes on constructed responses.

“At this time, we are unable to determine if there is something wrong with the test question or if it is the new automated scoring system,” Rapp said.

The state overall saw an increase in zeroes on constructed responses in December 2023, but the TEA said there are other factors at play. In December 2022, the only way to score a zero was by not providing an answer at all. With the STAAR redesign in 2023, students can receive a zero for responses that may answer the question but lack any coherent structure or evidence.

The TEA also said that students who are retesting will perform at a different level than students taking the test for the first time. “Population difference is driving the difference in scores rather than the introduction of hybrid scoring,” a TEA spokesperson said in an email.

For $50, students and their parents can request a rescore if they think the computer or the human got it wrong. The fee is waived if the new score is higher than the initial score. For grades 3-8, there are no consequences on a student’s grades or academic progress if they receive a low score. For high school students, receiving a minimum STAAR test score is a common way to fulfill one of the state graduation requirements, but it is not the only way.

Even with layers of quality control, Round Rock ISD Superintendent Hafedh Azaiez said he worries a computer could “miss certain things that a human being may not be able to miss,” and that room for error will impact students who Azaiez said are “trying to do his or her best.”

Test results will impact “how they see themselves as a student,” Brown said, and it can be “humiliating” for students who receive low scores. With human graders, Brown said, “students were rewarded for having their own voice and originality in their writing,” and he is concerned that computers may not be as good at rewarding originality.

Julie Salinas, director of assessment, research and evaluation at Brownsville ISD, said she has concerns about whether hybrid scoring is “allowing the students the flexibility to respond” in a way that lets them demonstrate their “full capability and thought process through expressive writing.”

Brownsville ISD is overwhelmingly Hispanic. Students taking an assessment entirely in Spanish will have their tests graded by a human. If the automated scoring engine works as intended, responses that include some Spanish words or colloquial, informal terms will be flagged by the computer and assigned to a human so that more creative writing can be assessed fairly.

The system is designed so that it “does not penalize students who answer differently, who are really giving unique answers,” Rozunick said.

With the computer scoring now a part of STAAR, Salinas is focused on adapting. The district is incorporating tools with automated scoring into how teachers prepare students for the STAAR test to make sure they are comfortable.

“Our district is on board and on top of the things that we need to do to ensure that our students are successful,” she said.

Disclosure: Google, the Texas Association of School Administrators and Texas State Teachers Association have been financial supporters of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune’s journalism. Find a complete list of them here.

This article originally appeared in The Texas Tribune at https://www.texastribune.org/2024/04/09/staar-artificial-intelligence-computer-grading-texas/.

The Texas Tribune is a member-supported, nonpartisan newsroom informing and engaging Texans on state politics and policy. Learn more at texastribune.org.

A Cautionary AI Tale: Why IBM’s Dazzling Watson Supercomputer Made a Lousy Tutor https://www.the74million.org/article/a-cautionary-ai-tale-why-ibms-dazzling-watson-supercomputer-made-a-lousy-tutor/ Tue, 09 Apr 2024 13:30:00 +0000 https://www.the74million.org/?post_type=article&p=724698

With a new race underway to create the next teaching chatbot, IBM’s abandoned 5-year, $100M ed push offers lessons about AI’s promise and its limits. 

In the annals of artificial intelligence, Feb. 16, 2011, was a watershed moment.

That day, IBM’s Watson supercomputer finished off a three-game shellacking of Jeopardy! champions Ken Jennings and Brad Rutter. Trailing by over $30,000, Jennings, now the show’s host, wrote out his Final Jeopardy answer in mock resignation: “I, for one, welcome our computer overlords.”

A lark to some, the experience galvanized Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, New York. Tasked with figuring out how to apply the supercomputer’s powers to education, he soon envisioned tackling ed tech’s most sought-after challenge: the world’s first tutoring system driven by artificial intelligence. It would offer truly personalized instruction to any child with a laptop — no human required.


“I felt that they’re ready to do something very grand in the space,” he said in an interview. 

Nitta persuaded his bosses to throw more than $100 million at the effort, bringing together 130 technologists, including 30 to 40 Ph.D.s, across research labs on four continents. 

But by 2017, the tutoring moonshot was essentially dead, and Nitta had concluded that effective, long-term, one-on-one tutoring is “a terrible use of AI — and that remains today.”

For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.

It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”

His five-year journey to what was essentially a dead end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.

Some of the leading lights of ed tech, from Google to Microsoft, are trying to pick up where Watson left off, offering AI tools that promise to help teach students. Sal Khan, founder of Khan Academy, last year said AI has the potential to bring “probably the biggest positive transformation” that education has ever seen. He wants to give “every student on the planet an artificially intelligent but amazing personal tutor.”

A 25-year journey

To be sure, research on high-dosage, one-on-one, in-person tutoring is unequivocal: It’s one of the most powerful interventions available, offering significant improvement in students’ academic performance, particularly in subjects like math, reading and writing.  

But traditional tutoring is also “breathtakingly expensive and hard to scale,” said Paige Johnson, a vice president of education at Microsoft. One school district in West Texas, for example, recently spent more than $5.6 million in federal pandemic relief funds to tutor 6,000 students. The expense, Johnson said, puts it out of reach for most parents and school districts. 

We missed something important. At the heart of education, at the heart of any learning, is engagement.

Satya Nitta, IBM Research’s former global head of AI solutions for learning

For IBM, the opportunity to rebalance the equation in kids’ favor was hard to resist. 

The Watson lab is legendary in the computer science field, with six Nobel laureates and six Turing Award winners among its ranks. It’s where modern speech recognition was invented, and home to countless other innovations such as barcodes and the magnetic stripes on credit cards that make ATMs possible. It’s also where, in 1997, Deep Blue beat world chess champion Garry Kasparov, essentially inventing the notion that AI could “think” like a person.

Chess enthusiasts watch World Chess champion Garry Kasparov on a television monitor as he holds his head in his hands at the start of the sixth and final match May 11, 1997 against IBM’s Deep Blue computer in New York. Kasparov lost this match in just 19 moves. (Stan Honda/Getty)

The heady atmosphere, Nitta recalled, inspired “a very deep responsibility to do something significant and not something trivial.”

Within a few years of Watson’s victory, Nitta, who had arrived in 2000 as a chip technologist, rose to become IBM Research’s global head of AI solutions for learning. For the Watson project, he said, “I was just given a very open-ended responsibility: Take Watson and do something with it in education.”

Nitta spent a year simply reading up on how learning works. He studied cognitive science, neuroscience and the decades-long history of “intelligent tutoring systems” in academia. Foremost in his reading list was the research of Stanford neuroscientist Vinod Menon, who’d put elementary schoolers through a 12-week math tutoring session, collecting before-and-after scans of their brains using an MRI. Tutoring, he found, produced nothing less than an increase in neural connectivity. 

Nitta returned to his bosses with the idea of an AI-powered cognitive tutor. “There’s something I can do here that’s very compelling,” he recalled saying, “that can broadly transform learning itself. But it’s a 25-year journey. It’s not a two-, three-, four-year journey.”

IBM drafted two of the highest-profile partners possible in education: the children’s media powerhouse Sesame Workshop and Pearson, the international publisher.

One product Sesame envisioned was a voice-activated Elmo doll that would serve as a kind of digital tutoring companion, interacting fully with children. Through brief conversations, it would assess their skills and provide spoken responses to help kids advance.

One proposed application of IBM’s planned Watson tutoring app was to create a voice-activated Elmo doll that would be an interactive digital companion. (Getty)

Meanwhile, Pearson promised that it could soon allow college students to “dialogue with Watson in real time.”

Nitta’s team began designing lessons and putting them in front of students — both in classrooms and in the lab. In order to nurture a back-and-forth between student and machine, they didn’t simply present kids with multiple-choice questions, instead asking them to write responses in their own words.

It didn’t go well.

Some students engaged with the chatbot, Nitta said. “Other students were just saying, ‘IDK’ [I don’t know]. So they simply weren’t responding.” Even those who did began giving shorter and shorter answers. 

Nitta and his team concluded that a cold reality lay at the heart of the problem: For all its power, Watson was not very engaging. Perhaps as a result, it also showed “little to no discernible impact” on learning. It wasn’t just dull; it was ineffective.

Satya Nitta (left) and part of his team at IBM’s Watson Research Center, which spent five years trying to create an AI-powered interactive tutor using the Watson supercomputer.

“Human conversation is very rich,” he said. “In the back and forth between two people, I’m watching the evolution of your own worldview.” The tutor influences the student — and vice versa. “There’s this very shared understanding of the evolution of discourse that’s very profound, actually. I just don’t know how you can do that with a soulless bot. And I’m a guy who works in AI.”

When students’ usage time dropped, “we had to be very honest about that,” Nitta said. “And so we basically started saying, ‘OK, I don’t think this is actually correct. I don’t think this idea — that an intelligent tutoring system will tutor all kids, everywhere, all the time — is correct.’”

‘We missed something important’

IBM soon switched gears, debuting another crowd-pleasing Watson variation — this time, a touching throwback: It engaged in Oxford-style debates. In a televised demonstration in 2019, it went up against debate champ Harish Natarajan on the topic “Should we subsidize preschools?” Among its arguments for funding, the supercomputer offered, without a whiff of irony, that good preschools can prevent “future crime.” Its current iteration, Watsonx, focuses on helping businesses build AI applications like “intelligent customer care.” 

Nitta left IBM, eventually taking several colleagues with him to create a startup called Merlyn Mind. It uses voice-activated AI to safely help teachers do workaday tasks such as updating digital gradebooks, opening PowerPoint presentations and emailing students and parents. 

Thirteen years after Watson’s stratospheric Jeopardy! victory and more than one year into the Age of ChatGPT, Nitta’s expectations about AI couldn’t be more down-to-earth: His AI powers what’s basically “a carefully designed assistant” to fit into the flow of a teacher’s day. 

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

These notions aren’t news to those who do tutoring for a living. Varsity Tutors, which offers live and online tutoring in 500 school districts, relies on AI to power a lesson plan creator that helps personalize instruction. But when it comes to the actual tutoring, humans deliver it, said Anthony Salcito, chief institution officer at Nerdy, which operates Varsity Tutors.

“The AI isn’t far enough along yet to do things like facial recognition and understanding of student focus,” said Salcito, who spent 15 years at Microsoft, most of them as vice president of worldwide education. “One of the things that we hear from teachers is that the students love their tutors. I’m not sure we’re at a point where students are going to love an AI agent.”

Students love their tutors. I'm not sure we're at a point where students are going to love an AI agent.

Anthony Salcito, Nerdy

The No. 1 factor in a student’s tutoring success is simply showing up consistently, research suggests. As smart and efficient as an AI chatbot might be, it’s an open question whether most students, especially struggling ones, would show up for an inanimate agent or develop a sense of respect for its time.

When Salcito thinks about what AI bots now do in education, he’s not impressed. Most, he said, “aren’t going far enough to really rethink how learning can take place.” They end up simply as fast, spiffed-up search engines. 

In most cases, he said, the power of one-on-one, in-person tutoring often emerges as students begin to develop more honesty about their abilities, advocate for themselves and, in a word, demand more of school. “In the classroom, a student may say they understand a problem. But they come clean to the tutor, where they expose, ‘Hey, I need help.’”

Cognitive science suggests that for students who aren’t motivated or who are uncertain about a topic, only one-on-one attention will help. That requires a focused, caring human, watching carefully, asking tons of questions and reading students’ cues. 

Jeremy Roschelle, a learning scientist and an executive director of Digital Promise, a federally funded research center, said usage with most ed tech products tends to drop off. “Kids get a little bored with it. It’s not unique to tutors. There’s a newness factor for students. They want the next new thing.” 

There's a newness factor for students. They want the next new thing.

Jeremy Roschelle, Digital Promise

Even now, Nitta points out, research shows that big commercial AI applications don’t seem to hold users’ attention as well as top entertainment and social media sites like YouTube, Instagram and TikTok. One recent analysis dubbed the user engagement of sites like ChatGPT “lackluster,” finding that the proportion of monthly active users who engage with them in a single day was only about 14%, suggesting that such sites aren’t very “sticky” for most users.

For social media sites, by contrast, it’s between 60% and 65%. 

One notable AI exception: Character.ai, an app that allows users to create companions of their own among figures from history and fiction and chat with the likes of Socrates and Bart Simpson. It has a stickiness score of 41%.
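For readers curious about the arithmetic, the “stickiness” score cited here is simply a daily-to-monthly active-user ratio. Below is a minimal sketch; the function and user counts are hypothetical, chosen only to echo the percentages in this article.

```python
def stickiness(daily_active_users: float, monthly_active_users: float) -> float:
    """DAU/MAU ratio: the share of a service's monthly users who also
    show up on any given day."""
    return daily_active_users / monthly_active_users

# Hypothetical user counts chosen to echo the article's percentages.
print(f"chatbot-style site: {stickiness(14_000, 100_000):.0%}")  # 14%
print(f"social media site:  {stickiness(62_000, 100_000):.0%}")  # in the 60-65% range
print(f"Character.ai-like:  {stickiness(41_000, 100_000):.0%}")  # 41%
```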

As startups like Synthesis offer “your child’s superhuman tutor,” starting at $29 per month, and Khan Academy publicly tests its popular Khanmigo AI tool, Nitta maintains that there’s little evidence from learning science that, absent a strong outside motivation, people will spend enough time with a chatbot to master a topic.

“We are a very deeply social species,” said Nitta, “and we learn from each other.”

IBM declined to comment on its work in AI and education, as did Sesame Workshop. A Pearson spokesman said that since last fall it has been beta-testing AI study tools keyed to its e-textbooks, among other efforts, with plans this spring to expand the number of titles covered.

Getting ‘unstuck’

IBM’s experiences notwithstanding, the search for an AI tutor has continued apace, this time with more players than just a legacy research lab in suburban New York. Using the latest affordances of so-called large language models, or LLMs, technologists at Khan Academy believe they are finally taking the first halting steps toward an effective AI tutor.

Kristen DiCerbo remembers the moment her mind began to change about AI. 

It was September 2022, and she’d been at Khan Academy for only a year and a half when she and founder Sal Khan got access to a beta version of ChatGPT. OpenAI, ChatGPT’s creator, had asked Microsoft co-founder Bill Gates for more funding, but he told them not to come back until the chatbot could pass an Advanced Placement biology exam.

Khan Academy founder Sal Khan has said AI has the potential to bring “probably the biggest positive transformation” that education has ever seen. He wants to give every student an “artificially intelligent but amazing personal tutor.” (Getty)

So OpenAI asked Khan for sample AP biology questions. He and DiCerbo said they’d help in exchange for a peek at the bot — and a chance to work with the startup. They were among the first people outside of OpenAI to get their hands on GPT-4, the LLM that powers the upgraded version of ChatGPT. They were able to test out the AI and, in the process, become amateur AI prompt engineers before anyone had even heard of the term.

Like many users typing in queries in those first heady days, the pair initially just marveled at the sophistication of the tool and its ability to return what felt, for all the world, like personalized answers. With DiCerbo working from her home in Phoenix and Khan from the nonprofit’s Silicon Valley office, they traded messages via Slack.

Kristen DiCerbo introduces users to Khanmigo in a Khan Academy promotional video. (YouTube)

“We spent a couple of days just going back and forth, Sal and I, going, ‘Oh my gosh, look what we did! Oh my gosh, look what it’s saying — this is crazy!’” she told an audience during a recent appearance at the University of Notre Dame. 

She recounted asking the AI to help write a mystery story in which shoes go missing in an apartment complex. In the back of her mind, DiCerbo said, she planned to make a dog the shoe thief, but didn’t reveal that to ChatGPT. “I started writing it, and it did the reveal,” she recalled. “It knew that I was thinking it was going to be a dog that did this, from just the little clues I was planting along the way.”

More tellingly, it seemed to do something Watson never could: have engaging conversations with students.

DiCerbo recounted a conversation with a high school student Khan Academy was working with, who described an interaction she’d had with ChatGPT around The Great Gatsby. She had asked it about F. Scott Fitzgerald’s famous green light across the bay, which scholars have long interpreted as symbolizing Jay Gatsby’s out-of-reach hopes and dreams.

“It comes back to her and asks, ‘Do you have hopes and dreams just out of reach?’” DiCerbo recalled. “It had this whole conversation” with the student.

The pair soon tore up their 2023 plans for Khan Academy. 

It was a stunning turn of events for DiCerbo, a Ph.D. educational psychologist and former senior Pearson research scientist who had spent more than a year on the failed Watson project. In 2016, Pearson had predicted that Watson would soon be able to chat with college students in real time to guide them in their studies. But it was DiCerbo’s teammates, about 20 colleagues, who had to actually train the supercomputer on thousands of student-generated answers to questions from textbooks — and recruit instructors to rate those answers.

Like Nitta, DiCerbo recalled that at first things went well. They found a natural science textbook with a large user base and set Watson to work. “You would ask it a couple of questions and it would seem like it was doing what we wanted to,” answering student questions via text.

But invariably if a student’s question strayed from what the computer expected, she said, “it wouldn’t know how to answer that. It had no ability to freeform-answer questions, or it would do so in ways that didn’t make any sense.” 

After more than a year of labor, she realized, “I had never seen the ‘OK, this is going to work’ version” of the hoped-for tutor. “I was always at the ‘OK, I hope the next version’s better.’”

But when she got a taste of ChatGPT, DiCerbo immediately saw that, even in beta form, the new bot was different. Using software that quickly predicted the most likely next word in any conversation, ChatGPT was able to engage with its human counterpart in what seemed like a personal way.
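That “most likely next word” mechanism can be shown in miniature with a toy bigram counter. This is a deliberately crude stand-in, not how GPT-4 actually works; the corpus, names and output below are invented purely for illustration.

```python
# Toy next-word predictor: tally which word most often follows each word
# in a tiny corpus, then pick the likeliest follower. Illustrative only;
# nothing like the transformer models that power ChatGPT.
from collections import Counter, defaultdict

corpus = "a good tutor asks a question then a student answers a question well".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("a"))  # "question", which follows "a" twice here
```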

Since its debut in March 2023, Khanmigo has turned heads with what many users say is a helpful, easy-to-use, natural language interface, though a few users have pointed out that it sometimes gets math facts wrong.

Surprisingly, DiCerbo doesn’t consider the popular chatbot a full-time tutor. As sophisticated as AI might now be in motivating students to, for instance, try again when they make a mistake, “It’s not a human,” she said. “It’s also not their friend.”

(AI's) not a human. It’s also not their friend.

Kristen DiCerbo, Khan Academy

Khan Academy’s research shows their tool is effective with as little as 30 minutes of practice and feedback per week. But even as many startups promise the equivalent of a one-on-one human tutor, DiCerbo cautions that 30 minutes is not going to produce miracles. Khanmigo, she said, “is not a solution that’s going to replace a human in your life. It’s a tool in your toolbox that can help you get unstuck.”

‘A couple of million years of human evolution’

For his part, Nitta says that for all the progress in AI, he’s not persuaded that we’re any closer to a real-live tutor that would offer long-term help to most students. If anything, Khanmigo and probabilistic tools like it may prove to be effective “homework helpers.” But that’s where he draws the line. 

“I have no problem calling it that, but don’t call it a tutor,” he said. “You’re trying to endow it with human-like capabilities when there are none.”  

Unlike humans, who will typically do their best to respond genuinely to a question, the way AI bots work — by digesting pre-existing texts and other information to come up with responses that seem human — is akin to a “statistical illusion,” writes Harvard Business School Professor Karim Lakhani. “They’ve just been well-trained by humans to respond to humans.”

Researcher Sidney Pressey’s 1928 Testing Machine, one of a series of so-called “teaching machines” that he and others believed would advance education through automation.

Largely because of this, Nitta said, there’s little evidence that a chatbot will continuously engage people as a good human tutor would.

What would change his mind? Several years of research by an independent third party showing that tools like Khanmigo actually make a difference on a large scale — something that doesn’t exist yet.

DiCerbo also maintains her hard-won skepticism. She knows all about the halting early history of “teaching machines” a century ago, when experimental, punch-card-operated devices guided students through rudimentary multiple-choice lessons, often with simple rewards at the end.

In her talks, DiCerbo urges caution about AI revolutionizing education. As much as anyone, she is aware of the expensive failures that have come before. 

Two women stand beside open drawers of computer punch card filing cabinets. (American Stock/Getty Images)

In her recent talk at Notre Dame, she did her best to manage expectations of the new AI, which seems so limitless. In one-to-one teaching, she said, there’s an element of humanity “that we have not been able to — and probably should not try — to replicate in artificial intelligence.” In that respect, she’s in agreement with Nitta: Human relationships are key to learning. In the talk, she noted that students who have a person in school who cares about their learning have higher graduation rates. 

But still.

ChatGPT now has 100 million weekly users, according to OpenAI. That record-fast uptake makes her think “there’s something interesting and sticky about this for people that we haven’t seen in other places.”

Being able to engineer prompts in plain English opens the door for more people, not just engineers, to create tools quickly and iterate on what works, she said. That democratization could mean the difference between another failed undertaking and agile tools that actually deliver at least a version of Watson’s promise. 

An early prototype of IBM’s Watson supercomputer in Yorktown Heights, New York. In 2011, the system was the size of a master bedroom. (Wikimedia Commons)

Seven years after he left IBM to start his new endeavor, Nitta is philosophical about the effort. He takes virtually full responsibility for the failure of the Watson moonshot. In retrospect, even his 25-year timeline for success may have been naive.

“What I didn’t appreciate is, I actually was stepping into a couple of million years of human evolution,” he said. “That’s the thing I didn’t appreciate at the time, which I do in the fullness of time: Mistakes happen at various levels, but this was an important one.”

189 Innovative School Leaders: Teacher Staffing, AI, Mental Health Top Ed Issues https://www.the74million.org/article/189-innovative-school-leaders-teacher-staffing-ai-mental-health-top-ed-issues/ Tue, 09 Apr 2024 11:15:00 +0000 https://www.the74million.org/?post_type=article&p=725031 A common set of problems is keeping education leaders up at night: Will there be enough teachers to staff America’s schools? Can artificial intelligence enhance learning without deepening inequality? How can educators address the mental health crisis among young people? None of these have easy answers.

New data confirm that these issues are top of mind for school leaders, and that education innovators are working to find solutions. The Canopy project, an ongoing national study of schools that focus on designing student-centered and equitable learning environments — and challenge assumptions about what school must be — just updated its database with survey results from 189 innovative schools. 

In the survey, most participants agreed that teacher workforce issues, AI and the mental health crisis will shape the future of education. They are also working on solutions — but are concerned about having adequate resources to sustain those efforts.




School leaders selected teacher workforce issues as the top factor that they think will transform the education sector. While some respondents said they have struggled to recruit teachers in general, they have particular trouble finding those with skills geared to working with non-traditional instructional models. A leader from Bostonia Global, a charter school that’s part of Cajon Valley Unified School District in California, wrote that credentialing programs need to “shift to meet the needs of our current and future workforce.” The school’s competency-based instructional model requires teachers to implement an individualized approach, not just teach the same content at the same pace to a classroom of 30 kids.

Canopy’s survey data show that many schools are innovating to solve these workforce-related issues: 65% reported they implement some form of flexible or alternative staffing model. For example, the Center for Advanced Research and Technology, a high school that enrolls students from two partner districts in California, brings in industry professionals to work alongside teachers. Several Canopy schools foster collaboration, using staffing models such as Opportunity Culture, which provides mentorship, opportunities for small-group teaching and professional development. 

Artificial intelligence was the second most-selected driver of change. School leaders’ responses showed they want to harness its potential while staying attentive to issues of access, privacy and equity. Only 7% of Canopy school leaders said they have a policy in place governing students’ use of generative AI, but 38% said they’re developing one. Despite the shortage of formal policy, experimentation appeared abundant.

Howard Middle School for Math and Science, based at Howard University, said the school’s policy is to use AI “to enhance educational outcomes, personalize learning experiences and streamline administrative tasks, while ensuring the safety, privacy and well-being of all students and staff.” Anastasis Academy, an independent microschool in Colorado, wrote, “We have trained a GPT on our model, our writings and our curriculum to help personalize learning.”

The mental health crisis claimed the third spot on the list of factors that school leaders believe will transform K-12 education. Four in five leaders reported that their schools are already integrating social and emotional learning into all subject areas and student activities, making it one of the practices most commonly implemented across Canopy schools this year. Additionally, two-thirds of schools surveyed provide mental health services to students, either directly or through a partner like a community-based health organization, and just under half said they support adult wellness, too.

Some responses pointed to an even bigger problem beyond students’ acute mental health needs: battling despair about what the future may hold. One leader wrote, “Students are developing an increasing sense of hopelessness about the world beyond school.” Many lower- and middle-income young people, he said, feel that social mobility is “not possible for them.”

Many schools are working toward solutions that combat that sense of hopelessness. As in previous years of Canopy surveys, most schools reported designing solutions to meet marginalized students’ needs. At BuildUp Community School in Alabama, the school’s mostly Black and economically disadvantaged students split their time between classrooms and work-based learning in construction and real estate, revitalizing their communities and paving a path to homeownership. And 5280 High School, in Colorado, helps students recovering from addiction to reengage in their education and explore their passions in a setting that prioritizes mental health.

A majority of leaders worried about their ability to sustain resources in the coming years. Of those, the top concerns were the availability of local public, private and philanthropic funding. Over a third of those with concerns also said they worried about staffing shortages, inflation and the expiration of federal stimulus funding.

A few leaders pointed out that inadequate funding will not just make it harder to keep the lights on — it will stunt the development of innovative ideas to solve the enormous challenges ahead. Indeed, recent reporting shows reduced philanthropic investment in broader systemic change in the sector. Funding shortfalls in many districts and states will also mean even basic education services may lack adequate resources, making it harder for leaders to defend funding for higher-risk innovation efforts.

Too often, the scale of K-12 sector problems leads education leaders, policymakers and funders to bemoan a lack of bold solutions or flock to attractive but still-theoretical ideas that fail in the implementation stage. School-level innovation efforts are worth watching because they show unconventional ideas in the process of becoming reality — and some may hint at what success can look like. Canopy schools are prime examples of this, whether it’s a New York City charter school accelerating student learning and well-being through summer programming or a North Carolina district school achieving high growth rates with an innovative staffing approach.

The Canopy project will release a full research report later this year. For now, the headlines from this year’s survey should prompt education leaders, policymakers and funders to take note of schools, like those in Canopy’s national dataset, that are working toward bold and unconventional solutions. 

Indeed, one answer to what will drive K-12 transformation in the coming years is that it will arise from innovation not just in ed tech companies and think tanks, but in the nation’s schools.

California Community Colleges are Losing Millions to Financial Aid Fraud https://www.the74million.org/article/california-community-colleges-are-losing-millions-to-financial-aid-fraud/ Fri, 05 Apr 2024 15:01:00 +0000 https://www.the74million.org/?post_type=article&p=724825 This article was originally published in CalMatters.

They’re called “Pell runners” — after enrolling at a community college they apply for a federal Pell grant, collect as much as $7,400, then vanish.

Since fall 2021, California’s community colleges have given more than $5 million to Pell runners, according to monthly reports they sent to the California Community Colleges Chancellor’s Office. Colleges also report they’ve given nearly $1.5 million in state and local aid to these scammers.

The chancellor’s office began requiring the state’s 116 community colleges to submit these reports three years ago, after fraud cases surged.




At the time, the office said it suspected 20% of college applicants were fraudulent. Because of the COVID-19 pandemic, the federal government loosened some restrictions around financial aid, making it easier for students to prove they were eligible, and provided special one-time grants to help keep them enrolled. Once these pandemic-era exceptions ended in 2023 and some classes returned to in-person instruction, college officials said they expected fraud to subside. 

It hasn’t. In January, the chancellor’s office suspected 25% of college applicants were fraudulent, said Paul Feist, a spokesperson for the office. 

“This is getting significantly worse,” said Todd Coston, an associate vice chancellor with the Kern Community College District. He said that last year, “something changed and all of a sudden everything spiked like crazy.”

Online classes that historically don’t fill up were suddenly overwhelmed with students — a sign that many of them might be fake — Coston said. Administrators at other large districts, including the Los Rios Community College District in Sacramento, the Mt. San Antonio Community College District in Walnut, California and the Los Angeles Community College District, told CalMatters that fraudsters are evading each new cybersecurity strategy. 

The reason for the reported increase in fraud is that the chancellor’s office and college administrators are getting better at detecting it, he said. Since 2022, the state has allocated more than $125 million for fraud detection, cybersecurity and other changes in the online application process at community colleges.

The reports the colleges submitted don’t include how much fraud they prevented. 

The rise in suspected fraud coincides with years of efforts, both at the state and local level, to increase access to community college. Schools are reducing fees — or making college free — while legislators have worked to simplify and expand financial aid. Those efforts accelerated during the pandemic, when community colleges saw record declines in enrollment.

It’s not surprising, then, that “bad actors” would take advantage of the system’s good intentions, Feist said. 

Financial aid fraud is not new

College officials suspect most of the fake students are bots and often, they display tell-tale signs. In Sacramento, community colleges started seeing an influx of applications from Russia, China and India during the start of the pandemic. Around the same time, administrators at Mt. San Antonio College saw students using Social Security numbers of retirees. Others had home addresses that were abandoned lots. Uncommon email domains, such as AOL.com, were another red flag.

These scams aren’t new. The federal government has long required colleges to report instances of financial aid fraud. Every year, the federal government closes around 40 to 80 cases, including a recent conviction of three California women who stole nearly a million dollars by collecting fraudulent student loans. California community colleges also say they’ve spotted fraudulent applications from people trying to get an .edu email address in order to receive student discounts.

“If I saw, for example, that a college that only gets 1,000 applications in some time frame gets 5,000, you kind of know something is probably up.”

 VALERIE LUNDY-WAGNER, VICE CHANCELLOR FOR THE COMMUNITY COLLEGE SYSTEM

When the chancellor’s office began requiring community colleges to file monthly reports, it asked for the number of fake applications and the amount of money they gave to fraudsters.

CalMatters submitted a public records request for the data, broken down by campus. After the request was initially rejected, CalMatters appealed and received an anonymized copy of all of the monthly reports, lacking individual campus details. 

The reports show that between September 2021 and January 2024, the colleges received roughly 900,000 fraudulent college applications and gave fraudsters more than $5 million in federal aid, as well as nearly $1.5 million in state and local aid. 

The numbers show that fraud represents less than 1% of the total amount of financial aid awarded to community college students in the same time period. It’s hard to tell how accurate the data is because compliance is spotty, with some months missing reports from as many as half the colleges. 

More fraud, in more places

To understand how fraud is evolving, the chancellor’s office uses several sources of information and data, Feist said. One indicator is an atypical bump in applications. 

“If I saw, for example, that a college that only gets 1,000 applications in some time frame gets 5,000, you kind of know something is probably up,” said Valerie Lundy-Wagner, a vice chancellor for the community college system. 

The chancellor’s office provided CalMatters with anonymous application data for each month from September 2021 to January 2024. CalMatters analyzed the data using two different techniques to identify statistical outliers in the application data and asked the office to verify the methodology. The office repeatedly declined.

East Los Angeles College in Monterey Park on March 14. (Jules Hotz/CalMatters)

According to the analysis, more than 50 of the state’s 116 community colleges saw at least one unusual spike in the number of applications they received during that time frame. In the last year, colleges have seen more unusual spikes than at any point since 2021. Along with fraud, however, outliers could also reflect normal fluctuations in applications or the overall increase in college enrollment last year.
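CalMatters doesn’t name the two outlier-detection techniques it applied, so the following is only a hedged sketch: one common way to flag the kind of sudden application spike Lundy-Wagner describes is the 1.5 * IQR rule, applied here to hypothetical monthly counts.

```python
# Illustrative only: flag months whose application counts sit far above a
# campus's typical range, using the common 1.5 * IQR rule. This is not
# necessarily either technique CalMatters used.
import statistics

def spike_months(counts: list[int]) -> list[int]:
    """Return the indexes of months whose counts exceed Q3 + 1.5 * IQR."""
    q1, _, q3 = statistics.quantiles(counts, n=4)
    threshold = q3 + 1.5 * (q3 - q1)
    return [i for i, c in enumerate(counts) if c > threshold]

# Hypothetical monthly applications for one college: a steady ~1,000,
# then a sudden 5,000 -- the kind of jump described above.
print(spike_months([980, 1010, 995, 1020, 1005, 5000]))  # [5]
```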

“What we’re hearing is that (fraud) is happening more widespread than people are letting on, but people just have their heads in the sand because it looks good to have your enrollment going up,” said Coston with the Kern Community College District. Many college administrators say improvements in artificial intelligence have made it easier for people to attempt fraud on a larger scale. 

Yet clamping down too hard on fraud can have unintended consequences. More than 20% of community college students in California don’t receive Pell grants they’re eligible for. Administrative hurdles — including the verification process — are one reason why, according to a 2018 study by researchers at UC Davis. To help, the federal government is trying to simplify its financial aid application, but in some cases, it’s created more barriers for students during the rollout this year.

“We’ve overcorrected at times, even in policy, and in how stringently we’re verifying students relative to the amount of fraud in the system,” said Jake Brymer, a deputy director with the California Student Aid Commission. As a result, he said, real low-income students get pushed out.

Kicking real students out of class

Sometimes, the fraud detection backfires on actual students, ousting people like Martin Romero.  

In order to graduate from East Los Angeles College, Romero, 20, must take American history, so last fall he enrolled in an online class where students can watch pre-recorded lectures on their own time. 

He said it’s all he had time for. Romero takes four classes at East Los Angeles College each semester and serves as its student body president. He also helps out at his family’s auto body shop, sometimes as much as 15 hours a week. 

On the first day of class last fall, he said the online portal, Canvas, wasn’t working on his computer.

That day, the American history professor did a test through Canvas, asking students to respond to a prompt in order to prove they were not a bot. Romero didn’t answer, so the professor dropped him from the class. 

“I was freaking out,” he said, and wrote to the professor as soon as he found out, begging to be reinstated. The professor told him the class was already full again, so letting him in would mean kicking someone else out. 

“We’re frustrated with the fact that some of these courses are getting filled really quickly. We see it as an access issue for our students.”

LETICIA BARAJAS, ACADEMIC SENATE PRESIDENT AT EAST LOS ANGELES COLLEGE

For the college’s Academic Senate, the faculty group that governs academic matters, the problem of fake students is one of the top three issues, said its president, Leticia Barajas.

“We’re frustrated with the fact that some of these courses are getting filled really quickly,” she said. “We see it as an access issue for our students.”

She said there’s been an uptick in recent months, especially in certain kinds of online classes, that has forced professors to focus on hunting bots instead of teaching. Professors now are expected to test their students in the first weeks, asking them to submit answers to prompts, sign copies of the syllabus, or send other evidence to prove they are real. 

Increasingly, she said, the bots are evading detection, especially with the help of AI. “They’re submitting assignments. It’s gibberish,” she said.

The endless, multimillion-dollar game of combating fraud

Campus and state officials described fraud detection as a game of whack-a-mole. “When we get better at addressing one thing, something else pops up,” said Lundy-Wagner. “That’s sort of the nature of fraud.”

To fight fraud, she said, the chancellor’s office, the 73 independently governed districts and their colleges all must work together, including those who oversee information technology, enrollment and financial aid. Part of the challenge is that the system is so “decentralized,” she said.

The largest reform underway is a new version of CCCApply, the state’s community college application portal, which will offer more cybersecurity, Feist said. He also said there are other “promising” short-term projects. 

One of them, a software tool known as ID.me, launched in February. The contract with the software company, costing more than $3.5 million, gives it permission to check college applicants for identification, including video interviews in certain cases. Privacy experts have warned that the company’s video technology could be racially biased and error-prone.

To mitigate these privacy concerns and avoid creating enrollment barriers, applicants need to opt in to the new verification software. 

In the first few days after its implementation, 29% of applicants opted in to ID.me’s new vetting process. Some applicants started the verification process but never finished, said Feist, while others are ineligible because they’re under the age of 18. The rest chose not to verify their identity for other reasons, including many who are suspected bots.

‘We’re just trying to survive’

In Los Angeles, community colleges have already seen a drop in suspicious applications, said Nicole Albo-Lopez, a vice chancellor with the district. But she’s skeptical the problem is solved. “The lull we see, I don’t believe we’ll be able to sustain,” she said. “They’ll find another way to come in.” 

Her district is now concerned that bots are trying to steal data or intellectual property, not just financial aid. “Say I have 400 sections of English 101 online. There are 400 variations of readings, assignments, peer-to-peer questions that somebody can go in and scrape,” Albo-Lopez said. 

Barajas said faculty at East Los Angeles College are so overwhelmed by bots they haven’t discussed the potential risk to their intellectual property: “We’re at such a level where we’re just trying to survive.”

Meanwhile, students like Romero who are mistaken for bots must develop their own survival skills. When the professor denied the request to re-enroll, he signed up for the same course in the one format that was still available — in-person. The class met every Monday and Wednesday at 7:10 a.m., and the professor deducted points for anyone who was late.

“It was torture,” he said, noting that he missed two classes and was late to around four. He finished the class with a B but said he would have had an A if he had gotten into the class he wanted.

As student body president, he said he’s been outspoken about the issue. While he was able to fulfill his history requirement, he worries that other students may not be so lucky. 

Data reporter Erica Yee contributed to this reporting. 

Adam Echelman covers California’s community colleges in partnership with Open Campus, a nonprofit newsroom focused on higher education.

This story was originally published at CalMatters.

‘Distrust, Detection & Discipline:’ New Data Reveals Teachers’ ChatGPT Crackdown https://www.the74million.org/article/distrust-detection-discipline-new-data-reveals-teachers-chatgpt-crackdown/ Tue, 02 Apr 2024 20:01:00 +0000 https://www.the74million.org/?post_type=article&p=724713 New survey data puts hard numbers behind the steep rise of ChatGPT and other generative AI chatbots in America’s classrooms — and reveals a big spike in student discipline as a result. 

As artificial intelligence tools become more common in schools, most teachers say their districts have adopted guidance and training for both educators and students, according to a new, nationally representative survey by the nonprofit Center for Democracy and Technology. What this guidance lacks, however, are clear instructions on how teachers should respond if they suspect a student used generative AI to cheat. 




“Though there has been positive movement, schools are still grappling with how to effectively implement generative AI in the classroom — making this a critical moment for school officials to put appropriate guardrails in place to ensure that irresponsible use of this technology by teachers and students does not become entrenched,” report co-authors Maddy Dwyer and Elizabeth Laird write.

Among the middle and high school teachers who responded to the online survey, which was conducted in November and December, 60% said their schools permit the use of generative AI for schoolwork — double the number who said the same just five months earlier on a similar survey. And while a resounding 80% of educators said they have received formal training about the tools, including on how to incorporate generative AI into assignments, just 28% said they’ve received instruction on how to respond if they suspect a student has used ChatGPT to cheat. 

That doesn’t mean, however, that students aren’t getting into trouble. Among survey respondents, 64% said they were aware of students who were disciplined or faced some form of consequences — including not receiving credit for an assignment — for using generative AI on a school assignment. That represents a 16 percentage-point increase from August. 

The tools have also affected how educators view their students, with more than half saying they’ve grown distrustful of whether their students’ work is actually theirs. 

Fighting fire with fire, a growing share of teachers say they rely on digital detection tools to sniff out students who may have used generative AI to plagiarize. Sixty-eight percent of teachers — and 76% of licensed special education teachers — said they turn to generative AI content detection tools to determine whether students’ work is actually their own. 

The findings carry significant equity concerns for students with disabilities, researchers concluded, especially in the face of research suggesting that such detection tools are ineffective.

Opinion: AI Can Fine-Tune Teaching With Quicker, More Frequent & More Affordable Feedback https://www.the74million.org/article/ai-can-fine-tune-teaching-with-quicker-more-frequent-more-affordable-feedback/ Sun, 17 Mar 2024 15:30:00 +0000 https://www.the74million.org/?post_type=article&p=723910 It seems counterintuitive to think that artificial intelligence can help teachers reach children in the classroom more effectively. After all, what could be more distinctively human than lighting that flame of learning inside a child’s mind? And who better to coach a teacher on what works than another human? The short answer is no one. But the more nuanced response is that AI can inform teachers in ways that can strengthen the quality of their engagement with their students. We believe in its power and potential.

We lead education nonprofits that work closely with New York City public schools. Teaching Matters provides evidence-based coaching to ensure that all students have equitable access to effective math and reading instruction. The Urban Assembly enables social and economic mobility by innovating in public schools with no admission requirements, including giving teachers the tools they need.

One of those tools is an AI-powered system that instructional coaches will soon use to analyze classroom videos and identify what teachers are doing well to connect with students and where they can improve. Coaches already analyze video, but this system, developed by the American Institutes for Research, will speed up the process, allowing coaches to get meaningful feedback to teachers more quickly and more often in the Urban Assembly network’s 22 schools. Now, a teacher may get feedback as seldom as once or twice a year. AI will pump up the pace.




Teaching Matters piloted a similar system this school year, thanks to a grant from the Bill & Melinda Gates Foundation. This system, tried out in secondary math classes, listens to and analyzes the speech of teachers and students and can measure how often an educator uses certain practices that research has shown to work. For example, it can tell a teacher if a question was good and could help students think through a math problem themselves. Rather than using data to inform instruction by looking at student results, AI can quickly and cheaply collect information on select teacher practices that can lead to improved achievement.

In the past, this required a coach to sit and observe a teacher in the classroom. Now, with AI, schools can capture data on teacher practices more quickly and at lower cost. This is a game changer. In public education, time is both an important and limited resource; these AI systems save time by allowing instructional coaches and schools to see the methods that teachers are using in class more quickly. They provide data on these practices to the teachers, coaches and principals — and all of them can do something very human with it. They can use the data to discuss with one another which instructional practices are working. Collecting the data to support human interaction has now become affordable. 
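As a rough illustration of what capturing data on one teacher practice might involve, here is a toy tally of open-ended question stems in a classroom transcript. The stems, transcript and function are hypothetical; the actual systems described above are far more sophisticated, working from live audio and research-validated rubrics.

```python
# Toy illustration: count how often a teacher opens with an open-ended
# question stem. The real coaching systems analyze classroom speech;
# this sketch and its stems are purely hypothetical.
OPEN_STEMS = ("why", "how", "what do you notice", "can you explain")

def count_open_questions(teacher_lines: list[str]) -> int:
    """Count transcript lines that begin with an open-ended stem."""
    return sum(
        1 for line in teacher_lines
        if line.lower().strip().startswith(OPEN_STEMS)
    )

transcript = [
    "How did you set up the equation?",
    "Turn to page 40.",
    "Why does that step work?",
]
print(count_open_questions(transcript))  # 2
```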

The ability of artificial intelligence to capture evidence of tone and mood can help foster a more supportive classroom environment and enable more effective social-emotional learning. AI can tell the difference between laughter and yelling, provide clues as to what leads to either and even help identify a negative mood that could hamper student learning and belonging. 

It allows educators to find evidence of teaching moves that work, much in the same way that a basketball coach might draw up a game-winning play. 

Still, it’s early days for AI. It’s not perfect. Because AI thus far has trained mostly on the voices of white men, it struggles with accents. It sometimes has trouble distinguishing between the voices of female teachers and students and thus offers incorrect feedback. These are problems that need work. If teachers and coaches are to trust the data, they are problems worth working on.

The potential dividends are enormous. We believe it will become easier to measure the quality of more complex types of teaching, and that the cost will drop for resource-strapped school districts. We believe it will energize teachers where it matters most — in the classroom.

And that’s the point, in the end. Students learn best from strong teachers. Not from programs, not from videos, not from AI. But if AI helps coaches to help teachers improve, that will be a gift given to students. Why withhold that?

Disclosure: The Bill & Melinda Gates Foundation provides financial support to The 74.

How Texas is Preparing Higher Education for AI https://www.the74million.org/article/how-texas-is-preparing-higher-education-for-ai/ Fri, 15 Mar 2024 13:00:00 +0000 https://www.the74million.org/?post_type=article&p=723906 This article was originally published in The Texas Tribune.

When Taylor Eighmy talks to people about the growth of artificial intelligence in society, he doesn’t just see an opportunity — he feels a jolt of responsibility.

The president of The University of Texas at San Antonio said the Hispanic-serving institution on the northwest side of the Alamo City needs to make sure its students are ready for what their future employers expect them to know about this rapidly changing technology.

“It doesn’t matter if you enter the health industry, banking, oil and gas, or national security enterprises like we have here in San Antonio,” Eighmy told The Texas Tribune. “Everybody’s asking for competency around AI.”




It’s one of the reasons the public university, which serves 34,000 students, announced earlier this year that it is creating a new college dedicated to AI, cybersecurity, computing and data science. The new college, which is still in the planning phase, would be one of the first of its kind in the country. UTSA wants to launch the new college by fall 2025.

According to UTSA, Texas will see a nearly 27% increase in AI and data science jobs over the next decade. The U.S. Bureau of Labor Statistics projects data science jobs nationally will increase by 35% over that time period. Leaders at UTSA say they don’t just want students to be competent in the field; they also want to prepare them to be part of the conversation as it grows and evolves.

“We don’t want [students] to spend time early in their careers just trying to figure out AI,” said Jonathon Halbesleben, dean of UTSA’s business school, who is co-chairing a task force to establish the new college. “We’d love to have them be career-ready to jump right into the ability to sort of shape AI and how it’s used in their organizations.”

Over the past year, much of the conversation around AI in higher education has centered around generative AI, applications and search engines that can create texts, images or data based on prompts. The arrival of ChatGPT, a free chatbot that provides conversational answers to users’ questions, sent universities and faculty scrambling to understand how this new technology will affect teaching and learning. It also raised concerns that students might be using the new technology as a shortcut to write papers or complete other assignments.

But many state higher education leaders are thinking beyond that. As AI becomes a part of everyday life in new, unpredictable ways, universities across Texas and the country are also starting to consider how to ensure faculty are keeping up with the new technology and students are ready to use it when they enter the workforce.

“This is a technology that’s clearly here to stay and advancing rapidly,” said Harrison Keller, commissioner of the Texas Higher Education Coordinating Board, the state agency that oversees colleges and universities in Texas. “Having institutions collaborate, share content [and] work with [the] industry so that the content really reflects the state of the art is really critical. It’s moving much faster than anyone anticipated.”

Next month, the state agency plans to start an assessment of AI activity at all community colleges and four-year universities in the state and use it to build a collaborative system that can help all schools get up to speed with AI.

“A majority of institutions are trying to identify what are the skills that are necessary for our faculty to be able to engage with this new evolving technology [and] to provide experiences for our students to get acclimated with skills that are going to be required in the global workforce,” said Michelle Singh, assistant commissioner for digital learning with the coordinating board.

UTSA isn’t the only school coming up with completely new programs. Other schools, including the University of North Texas and the University of Texas at Austin, have launched graduate programs and short-term certificate programs. Houston Community College recently became the first community college in Texas to offer a bachelor’s degree program in AI and robotics.

“As a community college, we’re forging new paths to ensure AI education is accessible and inclusive,” said Margaret Ford Fisher, interim chancellor of HCC, in a press release last fall. “The goal is to cultivate talent that will shape our future in this burgeoning field that has so much promise for good.”

UT-Austin recently declared 2024 the “Year of AI,” highlighting an increased focus on researching the technology and the creation of a new online master’s degree program in AI that launched this year. The program has a price tag close to $10,000, making it one of the most affordable AI graduate programs in the country.

Elsewhere around the state, colleges and universities have created internal committees to see how AI can be used to improve university operations, including identifying at-risk students, increasing retention and boosting student learning. Others are creating resource guides for faculty to help them adapt to how AI is impacting the classroom.

“We just need to jump in and start wrestling with it, and we need to be able to adapt and evolve it and build our understanding of it,” said Marty Alvarado, vice president of postsecondary education and training at Jobs for the Future, a nonprofit that focuses on improving education, the workforce and economic opportunities for students.

While other experts agree that university leaders need to be having these conversations among themselves and with students, they’re concerned adapting to AI might become another burden placed on already exhausted faculty.

“Where does that time and money come from?” said Lance Eaton, director of faculty development and innovation at College Unbound, a national nonprofit, accredited four-year college focused on adult learners. He also writes a newsletter on AI and higher education. “Because they were already overtaxed well before this happened.”

Keller acknowledged that many of the conversations about AI have “glossed over” the need to appropriately support faculty.

This spring, the coordinating board is launching a series of webinars to educate faculty across the state on general AI concepts. Meanwhile, four Texas institutions — UT-Austin, UNT, Austin Community College and San Jacinto Community College — are creating an AI essentials course for Texas faculty that goes beyond the theoretical and provides faculty with direct ways to apply AI in their classrooms, curriculums, lesson plans and assignments. Possible topics include how to use chatbots in the classroom and how to build out class assignments and research topics with AI.

“We have to support faculty at scale across different contexts, small community colleges and large research universities,” Keller said. “The idea is you don’t want every institution to have to reinvent the wheel.”

Keller said any future conversations about AI need to also involve employers and students. Employers need to share with schools how their needs are changing and schools need to acknowledge that students are often more skilled in using AI than faculty and administrators.

“We will all be better off if we are working on this together,” he said.

Eaton said while it’s important for higher education to be having conversations about the future of AI, it’s equally important for universities to make sure they’re not rushing to embrace the new technology too quickly, especially since there are still clear limitations in terms of how it can be used and how it interprets and processes information that is put into it.

“AI has become ubiquitous in a lot of places in a very short amount of time,” he said. “There are ways it’s helpful in a simple way, but there are lots of ways it fails at sophistication … it’s still not something we can really trust people’s lives with.”

For instance, Eaton expressed some skepticism with schools that are creating completely new AI programs.

“Right now, it feels like it’s a money grab,” he said. “If you want to see an institution that’s taking this seriously, it’ll be the ones that are actually looking at the curriculum, looking at their programs and say, ‘what does this curriculum look like if AI is a more ubiquitous tool?’”

As AI develops and spreads, Eaton said critical thinking, analytics, communication and strong reading and writing skills that students learn through traditional liberal arts degrees will be key to navigating the technology and recognizing where it can be useful.

Keller agreed. He said employers have emphasized to him that students will need those skills to learn and adapt to emerging AI technologies.

At UTSA, leaders like Halbesleben say they are trying to both place themselves at the forefront of AI and figure out how to prepare all students for the ripple effects this technology will have on the rest of the workforce.

“It’ll be an important challenge for us to make sure that though we are concentrating our capacity in one college, we still need to maintain our ability to ensure all of our students have that sort of understanding,” he said.

The Texas Tribune partners with Open Campus on higher education coverage.

Disclosure: Houston Community College, Institute for Economic Development – UTSA, University of Texas at Austin, University of Texas at San Antonio and University of North Texas have been financial supporters of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune’s journalism. Find a complete list of them here.

This article originally appeared in The Texas Tribune at https://www.texastribune.org/2024/03/12/texas-higher-education-ai/.

The Texas Tribune is a member-supported, nonpartisan newsroom informing and engaging Texans on state politics and policy. Learn more at texastribune.org.

University of Texas at El Paso To Use Faculty Survey Results For AI Strategy https://www.the74million.org/article/utep-to-use-faculty-survey-results-to-enhance-campus-ai-strategy/ Thu, 14 Mar 2024 16:30:00 +0000 https://www.the74million.org/?post_type=article&p=723865 This article was originally published in El Paso Matters.

A University of Texas at El Paso team plans to conduct a survey this spring and act on the data to offer UTEP instructors the necessary help to address the growing capabilities and complexities of artificial intelligence, including ChatGPT.

Jeff Olimpo, director of the campus’ Institute for Scholarship, Pedagogy, Innovation and Research Excellence, said the goal of this study will be to determine how much instructors know about AI and how comfortable they would be to incorporate the technology into their courses.

Armed with that knowledge, the InSPIRE team will develop a multi-pronged, hybrid effort that builds on every level of understanding, from basic tutorials to in-depth ideas for enhancing instruction, including ways students can use AI in their fields of study.




This effort is the follow-up step to InSPIRE’s spring 2023 workshops that led to the university’s initial ChatGPT guidelines. Since then, the team has incorporated other concepts used at institutions within and beyond the University of Texas System.

“We essentially created a Frankenstein of sorts,” Olimpo said.

Jeff Olimpo, director of UTEP’s Institute for Scholarship, Pedagogy, Innovation and Research Excellence (UTEP)

The latest incarnation included recommendations of what might be appropriate to include in a syllabus, such as whether AI is prohibited, allowed or allowed with restrictions. The team also created a “Teaching with AI Technologies” guide with a Frequently Asked Questions section covering AI restrictions and the procedures to follow if an instructor suspected a student used AI in an assignment without crediting the technology. The information was shared with faculty in January after it was approved by John Wiebe, provost and vice president for Academic Affairs.

Olimpo called the guidelines “brief, digestible and accessible,” and he stressed that instructors ultimately would decide what was best for their classes.

Gabriel Ibarra-Mejia, associate professor of public health sciences, was among the UTEP faculty who responded to the university’s recommendations. He said that, like it or not, ChatGPT (Generative Pre-trained Transformer) is now part of the education equation, and he planned to embrace it, to a point.

The professor said he allows students to use it in assignments as long as they cite its use and the reasons behind it, such as developing an outline or polishing the grammar or the report’s flow. What he does not want is for AI to replace thoughts and knowledge, especially from his students who may be health care professionals someday.

“I’m more concerned about how it might replace critical thinking,” said Ibarra-Mejia, who mentioned how he had received student papers where he suspected AI use because the responses had nothing to do with the question. “I’m concerned that the answers I get from a student might be from ChatGPT.”

Gabriel Ibarra-Mejia, associate professor of public health sciences at UTEP, said that he will allow students to use ChatGPT –with some restrictions — because it is an academic tool, but his concern is that it could lead to diminished critical thinking if used poorly. (Daniel Perez / El Paso Matters)

Melissa Vito, vice provost for Academic Innovation at UT San Antonio, said AI has been around for decades and that ChatGPT is part of the evolution.  She is the lead organizer of an AI conference for UT System institutions this week at her campus.

“The consensus in higher ed is that instructors need to use it, and students need to understand it and be able to use it,” Vito said.

In 2021, members of Forbes Technology Council agreed that AI would influence all industries, but those tech leaders suggested that it would have the most effect on industries such as logistics, cybersecurity, health care, research and development, financial services, advertising, e-commerce, manufacturing, public transportation, and media and entertainment.

A research study released in March 2023 by OpenAI, the creator of ChatGPT, showed that approximately 80% of U.S. workers could have at least 10% of their work affected by GPT, and that 19% could see at least 50% of their jobs affected by it. The projected effects span all wage levels.

Melissa Vito, vice provost for Academic Innovation at the University of Texas at San Antonio (UTSA)

Vito said she is unaware of any UT System mandate to use ChatGPT, but institutions are creating opportunities for faculty to learn about it so they can better explain its uses to their students. She said the best path for higher education is to work with the AI industry to address concerns, such as data privacy, that could restrict access to what is produced and how it is used.

Vito referenced the January announcement of the collaboration between Arizona State University and OpenAI. Among the goals of that relationship is to introduce advanced capabilities to the institution, helping faculty and staff investigate the possibilities of generative AI, which can create text, images and more in response to prompts.

The UTSA official said the purpose of the AI conference is to bring together administrators, faculty, staff and students with the broadest AI competencies to share their experiences and create a strong framework for how the UT System can benefit from the transformative effects of generative AI academically and socially.

Marcela Ramirez, associate vice provost for Teaching, Learning & Digital Transformation at UTSA, helped develop the conference’s workshops and panel discussions with representatives from sister institutions. They will cover ethical use, practical applications and how AI can be used to help students with critical thinking and problem-solving skills.

Ramirez, a two-time UTEP graduate who earned her BBA in 2008 and her MBA five years later, said the content will support faculty who want to update their courses with AI, and help them to be able to explain to students AI’s current limitations and future opportunities.

“What are the lessons learned?” asked Ramirez, who worked at UTEP for more than 10 years. “And what’s next?”

This article first appeared on El Paso Matters and is republished here under a Creative Commons license.

Inspiring: 4 Teen ‘STEM Superstars’ Build Inventions to Address Cancer, Suicide https://www.the74million.org/article/meet-the-stem-superstars-4-inspiring-teen-inventors-who-set-out-to-tackle-cancer-anxiety-suicide-more/ Wed, 13 Mar 2024 21:01:00 +0000 https://www.the74million.org/?post_type=article&p=723833 Thursday is officially Pi Day, offering Americans the annual opportunity to geek out over math, geometry and all things STEM. (It’s also recently become #DressForSTEM Day, celebrating women in science — more on that below.)

In honor of 3.14, we recently canvassed the country, searching out STEM students with noteworthy projects and inventions. You can see all our recent profiles on our STEM Superstars microsite; here are our most recent video profiles of four remarkable teenagers: 

Helping Amputees — Virginia’s Arav Bhargava

The 18-year-old senior at The Potomac School in McLean, Virginia, has developed a universal-fit, 3D-printed prosthetic for amputees missing their forearms. (Read the full story.)

Confronting Depression & Suicide — New York’s Natasha Kulviwat

The 17-year-old from Jericho researched a biomarker to help identify those at risk of suicide. (Read the full story.)

Easing Anxiety — Philadelphia’s Gavriela Beatrice Kalish-Schur

The 18-year-old senior at Pennsylvania’s Julia R. Masterman High School gave fruit flies anxiety to gain a deeper understanding of what makes us anxious — and to pave the path for better treatments. (Read the full story.)

Improving Rural Health Care — Maryland’s William Gao

The 18-year-old from Ellicott City’s Centennial High School created an AI-enabled diagnostic app that could help save rural cancer patients. (Read the full story.)

And in honor of March 14 and Women’s History Month, The 74’s Trinity Alicia explores women’s ongoing impact in STEM and how a hashtag is driving the Pi Day conversation to representation of women in the field:

AI Support Can Prevent College Students from Failing STEM Classes, Study Shows https://www.the74million.org/article/ai-support-can-prevent-college-students-from-failing-stem-classes-study-shows/ Wed, 13 Mar 2024 17:01:00 +0000 https://www.the74million.org/?post_type=article&p=723761 Researchers have found a new way to improve academic scores for college students studying science, technology, engineering and mathematics.

A recently published study from the University of Nebraska-Lincoln found that using artificial intelligence interventions boosted student achievement in STEM courses.

Retention rates in undergraduate STEM majors have fallen below 50%, and graduation rates are roughly 20% lower than in non-STEM majors, according to the study. Researcher Mohammad Hasan, who specializes in big data and artificial intelligence at UNL, said he saw this discouraging trend in his own STEM courses at the university, a campus of nearly 24,000 students.


Hasan said he became unsettled by the number of students who asked how to improve poor grades as the semester ended.

At that late date, “there was not much for me to do,” Hasan said. “Then, I was thinking that maybe I can create some kind of artificial intelligence-based support system which would tell you at the beginning of the semester, ‘Hey, you are doing okay, but if you don’t study well, maybe you will end up getting a poor grade’ or ‘You’re doing really great.’ ”

Hasan partnered with Bilal Khan, former UNL researcher and current professor at Lehigh University in Pennsylvania, to train an AI model on homework and test scores and final grades of 537 students in a computer science class between 2015 and 2018. 

In fall 2019, they tested the model on 65 undergraduates taking the same course. Thirty-two received automated emails six, nine and 12 weeks into the semester containing the AI model’s projection of their success: good, fair, prone-to-risk or at risk of failing.

The remaining 33 students received one message that said “unable to make a prediction.”
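
The study does not describe the model’s internals, but the general shape of such an early-warning system is straightforward to sketch. Below is a minimal illustration in Python; the two features, the classifier choice and the toy data are invented stand-ins, not the researchers’ actual design.

```python
# A minimal sketch of the kind of early-warning classifier the study describes.
# The features, training data and RandomForest choice are illustrative
# assumptions; the actual UNL model is not public here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical past students: early-semester averages and final outcomes
X_train = np.array([
    [0.95, 0.90],   # [homework_avg, test_avg]
    [0.70, 0.65],
    [0.55, 0.40],
    [0.30, 0.25],
])
y_train = ["good", "fair", "prone-to-risk", "at-risk"]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def forecast(homework_avg: float, test_avg: float) -> str:
    """Return the projected status sent in the study's automated emails."""
    return model.predict([[homework_avg, test_avg]])[0]

print(forecast(0.60, 0.50))  # one of the four categories
```

In the study itself, such a forecast would be regenerated at weeks six, nine and 12 as new homework and test scores arrive.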

At the end of the semester, nearly 91% of the first group passed the course, versus 73% of the second group.

Of students surveyed who reported actively checking their status from the AI model, 86% said they increased their effort after seeing the forecast.

Hasan said the study’s promising results helped him secure a $600,000 grant from the National Science Foundation to develop a smartphone app called Messages From a Future You. The original AI model still needs some key components to become a well-rounded intervention for STEM students, he said.

“At that time, I was just using students’ grades basically to forecast their future performance. And I realized that maybe it’s not just the grade that I should be looking at, maybe I should look at other aspects of their lives,” Hasan said. “For instance, are they engaged in their study? Are they motivated to study? Do they think that they can do it? Are they well connected to their peers? Are they getting enough help from the lab instructors, the teachers and so on, so forth? So we designed this app.”

Hasan, Khan and Neeta Kantamneni, director of the university’s counseling psychology program, hope the app will be ready this fall.

The AI model will become more sophisticated by gathering information from each student, based on daily questions about their personality, life and classroom experiences.

Hasan said the app will send targeted messages depending on the undergraduates’ background and progress — interventions that will mirror the type of advice provided in face-to-face counseling. The app might encourage students to participate in mindfulness activities, collaborate with peers or seek extra help during office hours.

“The model is more interesting in a sense that it can tell you not just about your future performance, but it would know exactly why you are going to get a poor grade. Is it because you’re losing your engagement, you’re losing your motivation?” Hasan said. “We are looking at ways to understand what is causing poor performance, right? And if we can identify that, what’s the remedy?”

Messages From a Future You is meant to be a friend for STEM students and support them through difficult classes. The app will even have an avatar that looks like the user.

STEM students “start with a lot of enthusiasm,” Hasan said. “But over time, their motivation degrades and their engagement degrades. I think that we can do something about that.”

Wizard Chess, Robot Bikes and More: Six Students Creating Cool Stuff with AI https://www.the74million.org/article/students-ai-opportunity-while-adults-fret-artificial-intelligence/ Sun, 25 Feb 2024 15:30:00 +0000 https://www.the74million.org/?post_type=article&p=722752 More than a year after ChatGPT’s surprise launch thrust artificial intelligence into public view, many educators and policymakers still fear that students will primarily use the technology for cheating. An October survey found that two-thirds of high school and college instructors are so concerned about AI they’re rethinking assignments, with many planning to require handwritten assignments, in-class writing or even oral exams. 

But a few students see things differently. They’re not only fearless about AI, they’re building their studies and future professional lives around it. While many of their teachers are scrambling to outsmart AI in the classroom, these students are embracing the technology, often spending hours at home, in classrooms and dorm rooms building tools they hope will launch their careers.

In a December survey, ACT, the non-profit that runs the college entrance exam of the same name, found that nearly half of high school students who’d signed up for the June 2023 exam had used AI tools, most commonly ChatGPT. Almost half of those who had used such tools relied on them for school assignments. 

The 74 went looking for young people diving head-first into AI and found several doing substantial research and development as early as high school. 

The six students we found, a few as young as 15, are thinking much more deeply about AI than most adults, their hands in the technology in ways that would have seemed impossible just a generation ago. Many are immigrants to the West or come from families that emigrated here. Edtech podcaster Alex Sarlin, who also writes a newsletter focused on edtech and founded the consultancy Edtech Insiders, isn’t surprised by the demographics. He explained that while U.S. companies typically make headlines in AI, the phenomenon has “truly been a product of global collaboration, and many of its major innovators have been immigrants,” often with training and professorships at top North American universities.

These young people are programming everything from autonomous bicycles to postpartum depression apps for new mothers to 911 chatbots, homework helpers and Harry Potter-inspired robotic chess boards. 

All have a clear message about AI: Don’t fear it. Learn about it.

Isabela Ferrer

Age 17

Hometown Bogota, Colombia

School MAST Academy, Miami, Fla.

What she’s working on: A high school junior at MAST, a public magnet high school focused on maritime studies and science, Ferrer plans to return to Colombia this spring and study computer science in college. She has been working with a foundation called FANA that takes in abandoned and abused children in her home country. She’s developing an AI tool to help the children learn how to read and write Spanish more easily.

“They enter a public school system that expects them to know how to read, but they don’t have these skills,” she said. 

Ferrer is also considering adding more features in the future, such as one that uses AI voice recognition to identify trauma in a student’s voice. 

Once she graduates, she’d like to take a gap year to “get a little more involved in the Colombian startup ecosystem and culture. I also want to travel internationally and possibly keep working on projects like the one I’m working on right now, but on an international scale.” 

What most people misunderstand about AI: “Something I think most people don’t get about AI is that it’s very accessible to everyone,” Ferrer said. “Coding API [application programming interface, which allows two applications to talk to each other] and creating AI models for any specific purpose is very easy and, if done correctly, can be beneficial for different purposes.” 

All the same, she also worries that AI is often used to tackle “very superficial problems” like productivity or data processing. “But I think there’s a huge opportunity to use these technologies to solve real problems in the world … There’s a huge opportunity to close different gaps that exist in emerging markets and in developing countries. And it’s very worth exploring.” 

Shanzeh Haji

Age 16

Hometown Toronto, Canada

School Bayview Secondary School, Richmond Hill, Ontario

What she’s working on: An AI-powered app to help new mothers recognize and cope with postpartum depression. Once she learned about postpartum depression, Haji began talking to new mothers and family members, including her own mother, who had experienced it. “I realized how big the problem was and how closely connected I was to it.” Haji finished coding the AI chatbot for the as-yet unnamed app and is working on the symptom recognition platform. 

What most people misunderstand about AI: “If you look at some of the people who are working in AI and some of the significant impact that AI has made on so many different problems,” she said, “whether it be climate change or medicine or drug discovery, you can just see that AI has significant potential — it can literally transform our lives in a positive way. It really allows for this radical innovation. And I feel like people see more of the negative side of artificial intelligence rather than the positive and the significance that it has on our lives.” 

Aditya Syam

Age 20

Hometown Mumbai, India

School Cornell University

What he’s working on: A math and computer science double major, Syam is part of a longstanding team at Cornell that is developing an AI-powered, self-navigating, autonomous bicycle, basically a robot bike. “The kinds of applications we are thinking of for this are deliveries and basically just getting things from point A to point B without having a human intervene at any point,” he said. Syam, who is working on the bike’s navigation team, has been honing its obstacle avoidance algorithm, which keeps it from hitting things. 
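
Cornell has not published the bike’s navigation code, but one classic family of obstacle-avoidance methods, potential fields, is easy to sketch: the vehicle is pulled toward its goal and pushed away from anything close. The sketch below is a generic illustration of that idea, not the team’s algorithm.

```python
# Toy potential-field steering: attraction toward the goal plus repulsion from
# nearby obstacles. Purely illustrative; not the Cornell team's actual code.
import math

def steering_heading(bike, goal, obstacles, repulse_radius=5.0):
    """Return a heading (radians) that blends goal-seeking with avoidance."""
    ax, ay = goal[0] - bike[0], goal[1] - bike[1]   # pull toward the goal
    for ox, oy in obstacles:
        dx, dy = bike[0] - ox, bike[1] - oy
        dist = math.hypot(dx, dy)
        if 0 < dist < repulse_radius:               # push away when close
            weight = (repulse_radius - dist) / dist
            ax += weight * dx
            ay += weight * dy
    return math.atan2(ay, ax)

# Bike at the origin heading for (10, 0), with an obstacle just off the path:
print(steering_heading((0, 0), (10, 0), [(4, 0.5)]))  # small negative angle: veer away
```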

The project began about a decade ago, he said. “Back then, it was just a theory.” Now they plan to showcase an actual prototype of the bike this spring, probably in March or April, so everyone who has contributed to the project “can see what we’ve built.”

What most people misunderstand about AI: “It’s technology that’s been around for decades,” he said. “It’s just been rebranded in a different way.” ChatGPT, for instance, combines Natural Language Processing and Web access, which results in a kind of “miracle” product. “It seems so great — it can just pull something off the web for you, it can write essays for you, it can edit software code for you. But in its essence, it’s not that different from technologies that have been around before.”

Vinitha Marupeddi

Age 21

Hometown San Jose, Calif.

School Purdue University

What she’s working on: A senior studying computer science, data science and applied statistics, Marupeddi recently led two student teams — one in voice recognition and another in computer vision — developing a robotic, voice-activated AI chess game modeled after Wizard Chess, the 3-D animated game in the Harry Potter books in which the pieces come to life. “We were able to do a lot of high-level robotics using that one project, so I thought that was very cool,” she said. Though the game is still far from being playable, Marupeddi calls it a good use case “to get people interested in robotics and machine learning.” 

Last summer, she interned at a John Deere warehouse in Moline, Ill., where she was set free to work on any project that struck her fancy. Marupeddi looked around the warehouse and saw that Deere had a robot that was being used to track inventory, so she expanded its abilities to cover a wider area. She also worked on a computer vision algorithm that used security camera footage to detect how full certain areas of the warehouse were and determine how much more inventory they could hold.
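
Neither Deere nor Marupeddi has described that algorithm in detail, but a baseline version of the idea is simple: compare each camera frame with a reference image of the empty floor and treat the fraction of changed pixels as a rough fullness estimate. A hypothetical OpenCV sketch follows; the file names and threshold are invented.

```python
# Rough warehouse-fullness estimate: difference a current frame against an
# empty-floor reference from the same fixed camera. Illustrative only; not
# the actual John Deere system.
import cv2

def fullness(frame_path: str, empty_ref_path: str, threshold: int = 40) -> float:
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    empty = cv2.imread(empty_ref_path, cv2.IMREAD_GRAYSCALE)
    diff = cv2.absdiff(frame, empty)        # pixels that changed vs. empty floor
    occupied = diff > threshold             # boolean mask of "something here"
    return float(occupied.mean())           # 0.0 = empty, 1.0 = fully covered

print(f"Area is {fullness('cam1_now.png', 'cam1_empty.png'):.0%} full")
```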

What most people misunderstand about AI: ”Honestly, I think a good chunk of people are just obsessed with the cheating part of it. They’re like, ‘Oh, ChatGPT can just write my essay. It can do my homework. I don’t have to worry about it.’ But they don’t try to actually understand the material. The people that do use ChatGPT to understand the material are actually going to use it as tutors or use it to ask questions if they don’t understand something.” That divide, between those who reject AI and those who learn how to control it, could grow larger if unaddressed. But learning about AI, she said, will “give people the resources, if they have the drive.”

Vinaya Sharma

Age 18

Hometown Toronto, Canada

School Castlebrooke Secondary School, Brampton, Ontario

What she’s working on: Actually, the better question might be: What isn’t she working on? Sharma, a high school senior, writes code like most of us speak. In part, her work is a response to how little challenge she gets in school these days. “After COVID, I feel schools have gone easier on students,” she said. “I skip school as much as I can so I can code in my room.” The result has been a flurry of applications, from an AI-powered chatbot to handle 911 calls to a power grid simulator to a pharmaceutical app to aid in drug discovery. 

The 911 app is still in search of customers, she said, but would be valuable especially in cases where multiple people are calling about the same emergency, such as a car crash. The AI would geolocate the calls and determine if callers were using similar words to describe what they saw. To those who balk at talking to a 911 chatbot, Sharma said the current system in Toronto is often backed up. “It’ll be 100% better than being put on hold and no one assisting you at all.”
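
Sharma did not detail how the matching works, but the core idea, grouping calls that are close in space and similar in wording, can be sketched in a few lines. Everything below (the thresholds, the similarity measure) is an assumption for illustration.

```python
# Illustrative duplicate-call check: two 911 reports are treated as the same
# incident if they originate nearby and share enough descriptive words.
import math

def same_incident(call_a, call_b, max_km=1.0, min_overlap=0.3):
    (lat_a, lon_a, text_a), (lat_b, lon_b, text_b) = call_a, call_b
    # Rough planar distance: ~111 km per degree of latitude, ~80 km per degree
    # of longitude near Toronto. Crude, but adequate at city scale.
    dist = math.hypot((lat_a - lat_b) * 111, (lon_a - lon_b) * 80)
    words_a, words_b = set(text_a.lower().split()), set(text_b.lower().split())
    overlap = len(words_a & words_b) / len(words_a | words_b)  # Jaccard similarity
    return dist <= max_km and overlap >= min_overlap

a = (43.651, -79.383, "two cars crashed at King and Bay")
b = (43.652, -79.381, "two cars crashed near King and Bay")
print(same_incident(a, b))  # True: probably one emergency, not two
```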

The power grid app idea was born after she began talking to engineers and energy policymakers and realized that, in her words, “The engineers were very technical, looking at things on a scale of voltages and currents. And the policymakers had trouble communicating with these grid engineers. And I realized that that was one of the bottlenecks slowing down the process so much.” She used design principles pioneered by one of her favorite video games, SimCity BuildIt, to give the two groups a drag-and-drop simulation that both could understand. 

Sharma got interested in drug discovery after reading that Lululemon founder Chip Wilson has a rare form of muscular dystrophy that makes it difficult to walk. He’s investing $100 million in treatments and research for a cure. Sharma said she “fell down a research rabbit hole” and soon realized that the drug discovery process “is honestly broken. It takes more than a decade to bring a drug to market, and it costs, on average, $1 billion to $2 billion” Canadian, or about $743 million to nearly $1.5 billion in U.S. dollars.

Her app, BioBytes, aims to bring down both the cost and time needed to bring drugs to market. 

What most people misunderstand about AI: “With any new emerging tech, there’s going to be bad actors that will abuse the system or use it for harm,” she said. “But personally I believe the pros outweigh it. Instead of taking these tools away from us in order to prevent these bad things from happening, I think that people need to realize that the tools are here and people are going to use them. So there needs to be a greater focus on education, of how to use the tools and how to use [them] for good and how it can actually support us.” 

Krishiv Thakuria 

Age 15

Hometown Mississauga, Ontario, Canada

School The Woodlands Secondary School, Mississauga

What he’s working on: Thakuria founded a startup called Aceflow.org and is building a set of AI-powered learning tools to help students study more efficiently. The tools let users upload any class materials — study notes, a PDF of a textbook chapter or entire novel or even a teacher’s PowerPoint. From there they can create “an infinite set of practice questions” keyed to the course, Thakuria said. If students get stuck, they can click on an AI tutor customized to the material they uploaded.

The tutoring function is similar to Khan Academy’s AI-powered teaching assistant Khanmigo, but Thakuria said Aceflow’s tool has an advantage: Khanmigo only works, for now, on Khan Academy materials. “In a lot of classes, teachers teach content in very different ways,” he said. “If you can personalize an AI tool to study the material of your teachers, you get learning that’s far more personalized and far more relevant to you, making your studying sessions more effective.” Aceflow users can also create timed study sessions, something neither Khanmigo nor ChatGPT users can currently do.
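
Aceflow’s implementation is not public, but the pattern Thakuria describes, generating questions grounded in whatever a student uploads, is a common use of large language model APIs. Here is a minimal sketch using the OpenAI client as a stand-in; the model name, the prompts and the assumption that Aceflow works this way at all are illustrative guesses, not the startup’s actual code.

```python
# Sketch of material-grounded question generation. Illustrative only; the
# model and prompt choices are invented.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def practice_questions(course_material: str, n: int = 5) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Write practice questions using ONLY the provided notes, "
                        "so they match how this teacher taught the material."},
            {"role": "user",
             "content": f"Notes:\n{course_material}\n\nWrite {n} practice questions."},
        ],
    )
    return response.choices[0].message.content

notes = "Photosynthesis converts light energy into chemical energy stored as glucose."
print(practice_questions(notes))
```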

The new tool is being beta-tested by a focus group of 20, with a 1,400-person waitlist, he said. He and his partners plan to offer it on a “freemium” model, with charges for premium features. Even paying a small amount for unlimited use of the tool makes it available to many families who can’t afford a tutor, Thakuria said, since private tutoring can cost upwards of $10,000 a year. 

What most people misunderstand about AI: That its impact on education will be “binary,” he said. People believe “it’s either a good thing or a bad thing. I think that it can do both. For all the people who worry about AI being a bad thing, I would argue that, well, a hammer can be a bad thing when you give your kid a hammer for the first time to help you out with carpentry work. You have to teach your kid how to use it, right? And without teaching your kid how to use a tool, the tool is not going to be used properly, and that hammer is going to break something.”

It’s the same with AI. “If we can teach kids that smoking is bad for the body, we should teach kids that using AI in certain ways is bad for the brain. But we shouldn’t just focus on the negative effects, because then we’re closing off a future of using AI to solve educational inequity in so many beautiful ways. AI is a technology that can help us scale private tutoring to far more families than can actually afford it now. I think no one should underestimate the positive effects of AI while also safeguarding [against] the negative effects, because two things can be true at once.” 

Artificial Intelligence & Schools: Innovators, Teachers Talk AI’s Impact at SXSW https://www.the74million.org/article/18-ai-events-must-see-sxsw-edu-2024/ Thu, 15 Feb 2024 14:01:00 +0000 https://www.the74million.org/?post_type=article&p=722328 South by Southwest EDU returns to Austin, Texas, running March 3-7. As always, the event offers a wealth of panels, discussions, film screenings and workshops exploring emerging trends in education and innovation.

Keynote speakers this year include Geoffrey Canada of Harlem Children’s Zone, Carol Dweck of Stanford University, who popularized the idea of “growth mindset,” and actor Christopher Jackson, who starred on Broadway as George Washington in Hamilton. Jackson, who has a child on the autism spectrum, will discuss how doctors, parents and advocates are working together to change the ways neurodivergent kids communicate and learn.

But one issue that looms larger than most in the imaginations of educators is artificial intelligence. This year, South by Southwest EDU is offering dozens of sessions exploring AI’s potential and pitfalls. To help guide the way, we’ve scoured the schedule to highlight 18 of the most significant presenters, topics and panels: 

Monday, March 4:

11:30 a.m. — The Creative Frontier: AI & the Making of Immersive Reality: The New School’s Maya Georgieva looks at how AI is ushering in a new era of immersive experiences. Her talk explores worlds that blur the lines between the virtual and real, where human ingenuity converges with intelligent machines. Georgieva will spotlight the next generation of creators shaping immersive realities, sharing emerging practices and projects from her students as well as her innovation labs and design jams. Learn more.

1 p.m. — The Future of Assessment is Invisible: Educators have long sought a better way to demonstrate learning, adapt instruction and build student confidence. Now, advancements in machine learning, natural language processing and data analytics are creating new possibilities for finding out what students know. This session will explore the ways in which AI is rendering assessments invisible, reducing stress and anxiety for students while improving objectivity and generating actionable insights for educators. Learn more.

1 p.m. — How AI Can Help Teachers Simulate Success: Many high-pressure professions (pilots, doctors and professional athletes among others) have access to high-quality simulators to help them learn and improve their skills. Could teachers benefit from hours in a simulator before setting foot in a classroom? In this session featuring presenters from the Relay Graduate School of Education and Wharton Interactive at the University of Pennsylvania, panelists will discuss virtual classrooms they’re piloting. They’ll also address the challenges, successes and possibilities of developing an AI-driven teaching simulator. Learn more.

2:30 p.m. — An Investor Talk on AI: All-In or All-Hype: In just the first half of 2023, venture capital investors poured more than $40 billion into AI startups. Yet big questions loom about how these technologies may impact education and the world of work. How are education and workforce investors separating wheat from chaff? Hear from a trio of venture capital and impact investors as they share the trends they’re watching. Learn more.

3:30 p.m. — AI & the Future of Learning: This session will look at the profound transformations in teaching taking place in classrooms that blend AI with tailored, competency-focused education. Laura Jeanne Penrod of Southwest Career and Technical Academy, Nevada’s 2024 Teacher of the Year, will explore AI’s role in enhancing rather than supplanting quality teaching and what happens when schools embrace the human touch and educators’ emotional intelligence. Learn more.

Laura Jeanne Penrod

3:30 p.m. — Generative AI in Class: Female Leadership Perspectives: In this interactive workshop led by women leaders from the University of Texas at Austin and the Waco (Texas) Independent School District, participants will learn how to design effective lesson plans and syllabi that incorporate AI tools such as ChatGPT and DALL-E to help prepare students to address society’s most pressing needs. Learn more.

4:00 p.m. — Creativity Is the Durable Skill That AI Can’t Replace: If we get AI in education right, it has the power to revolutionize how children learn. But if we get it wrong and fail to nourish children’s creativity (their ability to innovate, think critically and problem-solve), we risk leaving them unprepared for a changing world. Creativity is the durable skill that AI cannot replace. And this panel, comprising educators and industry leaders, will explore the role we play in nurturing children’s innate creativity. Learn more.

4:00 p.m. — Generative AI in Education: Learn from the Pioneers: This panel, featuring early AI-in-education pioneers such as Amanda Bickerstaff, founder of AI for Education, Charles Foster, an AI researcher at Finetune Learning, and Ben Kornell, co-founder of Edtech Insiders, will explore their journeys and what they consider the most exciting future opportunities, and most important challenges, in this emerging space. Learn more.

Tuesday, March 5:

11:30 a.m. — AI’s Impact on Students of Color: Rethinking Digital Wellness: AI’s continued adoption in schools raises concerns about bias, especially toward students of color. This session, hosted by Common Sense Education’s Jamie Nunez, will highlight practical ways AI tools impact engagement for students from diverse racial and ethnic backgrounds. It will also address ethical concerns such as plagiarism and issues with facial recognition tools. And it will feature positive student experiences with AI and practical ways to ensure it remains inclusive. Learn more.

Jamie Nunez

11:30 a.m. — AI Literacy for Educators: What It Is & How to Promote It: In 2024, what defines “AI literacy”? And how can we promote it effectively in schools? Marc Cicchino, innovation director for the Northern Valley Regional High School District in northeastern New Jersey, shares insights on fostering AI literacy through tailored learning experiences and initiatives like the NJ AI Literacy Summit. As part of the session, Cicchino guides attendees through organizing their own summit. Learn more.

11:30 a.m. — The Cusp, a Work Shift Podcast: Leveraging AI to Benefit Learners & Workers: Come watch a live recording of The Cusp, a new podcast hosted by Work Shift’s Paul Fain, exploring AI’s potential to not only enhance how we develop skills and improve job quality but exacerbate inequalities in our education and workforce systems. Leaders from Learning Collider, MDRC and Burning Glass Institute will share their perspectives on how AI can reach learners and workers in innovative ways, bridging the gap to economic opportunity. Learn more.

2:30 p.m. — TeachAI: Empowering Educators to Teach with AI & About AI: While a few school districts have embraced artificial intelligence, neither the technology companies creating the AI nor the governments regulating it have provided guidance on how to integrate the new tech into classrooms. This has left districts wondering how to integrate AI safely, ethically and equitably. This panel of TeachAI.org founders and advisory members will discuss why government and education leaders must align standards with the needs of an increasingly AI-driven world. The panel features Khan Academy’s Kristen DiCerbo, Kara McWilliams of ETS, Hadi Partovi of Code.org and ISTE’s Joseph South. Learn more.

Wednesday, March 6:

11:30 a.m. — ChatECE: How AI Could Aid the Early Educator Workforce: Just as artificial intelligence is gaining momentum in education, the early childhood education workforce is experiencing record levels of burnout. A recent survey found many educators say they’re more likely to remain in their roles if they have access to better support, including high-quality classroom tools and flexible professional development. Could we harness AI to empower our early childhood workforce? This panel, led by the National Association for the Education of Young Children’s Michelle Kang and Isabelle Hau of the Stanford Accelerator for Learning, will explore the possibilities and challenges of AI in early childhood education. Learn more.

1 p.m. — Tomorrow’s Principal Podcast: Will AI Be Your Next Principal? Perhaps no one in education needs to adapt more to AI than principals. This discussion with a principal and consultants from IDEO, The Leadership Academy and the Aspen Institute will explore how principals can lead during this time of swift change. Participants will come away with tangible suggestions for fostering innovation, adaptability and self-awareness. Learn more.

3:30 p.m. — Yes, You CAN Build with AI Too: This interactive session will give educators an opportunity to explore how they might use AI to advance their work, regardless of their background or technical expertise. ​Led by project managers and leadership development specialists with Teach For America, it will help participants create their own AI tools, build a deeper understanding of generative AI and develop a better sense of its promises and risks. Learn more.

Thursday, March 7: 

10 a.m. — AI: Avoiding the Next Digital Divide: This panel discussion, led by The Education Trust’s Dia Bryant and Khan Academy’s Kristen DiCerbo, will look at whether emerging uses of AI in schools could create a new digital divide. It will explore the intersection of AI and education equity and AI’s impact on students of color, as well as those from low-income backgrounds. The session will offer steps that educators and policymakers can take to ensure that schools factor in the culture and neurodiversity of students. Learn more.

Kristen DiCerbo

11 a.m. — Building the Next Gen of Black AI Leaders: This session, led by Alex Tsado of Alliance4ai, will explore what’s required to engage diverse learners to become emerging AI leaders. It’ll also explore how educators can help them build tech and leadership skills and promote an “AI-for-good” worldview. And it’ll examine the challenges that Black communities face in AI development — and propose research and solutions that can be scaled easily. Learn more.

11:30 a.m. — Meaningful & Safe AI: Policy & Research Perspectives: This panel brings together Kristina Ishmael of the U.S. Department of Education’s Office of Educational Technology and Jeremy Roschelle of Digital Promise for an interactive conversation about generative AI that will integrate two distinctive and powerful vantage points — policy and research. They’ll reflect on the listening sessions they’ve conducted, talk about policy and share insights from major research initiatives that address the efficacy, equity and ethics of generative AI. Learn more.

Nebraska Lawmaker Proposes Grant for AI Tools to Combat Dyslexia https://www.the74million.org/article/nebraska-lawmaker-proposes-grant-for-ai-tools-to-combat-dyslexia/ Wed, 14 Feb 2024 20:00:00 +0000 https://www.the74million.org/?post_type=article&p=722285 This article was originally published in Nebraska Examiner.

LINCOLN — When Millard North High School junior Janae Harris was in second grade, she read to a kindergarten class but kept getting stuck on words.

The teacher continually corrected her, Janae said, and told her she “needed to learn how to read” before she read to another class.

“I was embarrassed, and to this day it is terrifying to read out loud, and I continuously struggle to overcome it,” Janae told the Legislature’s Education Committee on Monday. “This moment will replay in my head forever.”


Janae, who is in Millard’s STEM Academy and is captain of her school’s girls lacrosse team, among other involvements, has dyslexia. She testified in support of Legislative Bill 1253 to create a Dyslexia Research Grant Program for new technologies.

Janae Harris of Millard North High School testifies in front of the Legislature’s Education Committee on Monday, Feb. 12, 2024, in Lincoln. (Zach Wendling/Nebraska Examiner)

“I want to do everything in my power to minimize the struggles of dyslexic students,” Janae said.

‘Proficient, capable communicator’

State Sen. Lou Ann Linehan of Elkhorn, who introduced the bill, also has dyslexia and has fought for years to support students with dyslexia. The proposed research program would set aside $1 million for Nebraska businesses researching artificial-intelligence-based writing assistance for individuals with dyslexia.

State Sen. Lou Ann Linehan of Elkhorn, center, talks with State Sens. Fred Meyer of St. Paul and Danielle Conrad of Lincoln. Dec. 7, 2023. (Zach Wendling/Nebraska Examiner)

Linehan told the Nebraska Examiner that some educators have long discredited the lifelong disorder or cast it off as having to do with a student’s IQ or intelligence. She herself has struggled with the disorder, recalling how “horrified” she felt before her 1995 interview to work for then-U.S. Senate candidate Chuck Hagel.

She said she worried about whether she could communicate and whether Hagel would understand.

“I finally just told him, and he said, ‘Well, that’s easy. We’ll just get somebody to proof all your stuff,’ which is what we did,” Linehan said. “Then I got to the point where I was proofing things because somebody showed me how to have a tool so that I could become proficient.”

Linehan later served as Hagel’s campaign manager and his chief of staff in the U.S. Senate and said the impact of such research could be “huge.”

“This program could take a student who was afraid to write, afraid to communicate, struggling through college, and turn them into a proficient, capable communicator,” Linehan said.

It’s estimated that as many as 15%-20% of the world’s population has dyslexia, according to the International Dyslexia Association.

‘Fully partake’ in learning

In the past year, a group of University of Nebraska-Lincoln college students working in this field approached Linehan to discuss their fledgling business, Dyslexico, which they started about two years ago in the UNL Raikes School, based on the experiences of one of its co-founders, Grace Clausen.

Clausen, who has dyslexia, grew up in a school system that didn’t always work for her, according to fellow Dyslexico co-founder Bridget Peterkin of Omaha.

The Kauffman Academic Residential Center at the University of Nebraska-Lincoln. Feb. 9, 2024. (Zach Wendling/Nebraska Examiner)

Peterkin said corrective writing tools — from Word or Google Docs to Grammarly — do not always work, and other AI-based models, such as ChatGPT, may add words a writer didn’t intend.

In one example Peterkin showed to the Examiner, a student wrote “wondering” when they meant “wandering.” In another, the student wrote “I say a figure” instead of “saw.”

“It was never going to catch that because it was spelled correctly,” Peterkin, a senior computer science major at UNL, said of other spelling or grammar programs.

Unlike other programs, the Dyslexico software is powered by AI but finds a middle ground: it does not rewrite sentences the way ChatGPT does. It is designed to grow with users over time and to provide analysis that Peterkin and her team said might be able to help educators.
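
That distinction, catching a correctly spelled but wrong word, is what separates this kind of tool from a dictionary-based checker: it requires judging which word fits the context. One common way to do that, masking the suspect word and asking a language model which of two confusable words it prefers, can be sketched as follows. This is a generic illustration, not Dyslexico’s actual method.

```python
# Context-aware real-word error check: mask the suspect word and compare how a
# language model scores it against a known confusable. Illustrative approach
# only; Dyslexico's technique is not public.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
CONFUSABLES = {"wondering": "wandering", "say": "saw"}  # toy confusion list

def preferred(sentence: str, word: str) -> str:
    """Return whichever of the word or its confusable better fits the context."""
    alt = CONFUSABLES.get(word)
    if alt is None:
        return word
    masked = sentence.replace(word, fill.tokenizer.mask_token, 1)
    scores = {r["token_str"]: r["score"] for r in fill(masked, targets=[word, alt])}
    return max(scores, key=scores.get)

print(preferred("I was wondering around the park for hours.", "wondering"))
# Expected: "wandering", even though "wondering" is spelled correctly
```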

“We want to have a solution for the schools that helps students have the support to get their spelling and grammar correct while they can maintain their original voice and be able to fully partake in the learning process,” Peterkin said.

Another Nebraska-born startup that got its start in the Raikes School has grown to international success: Hudl, which has a goal of “building the future of sports” with video and data entry.

Getting into students’ hands

Tristan Curd of Omaha, Dyslexico project manager and a senior computer science major at UNL, said Dyslexico has gotten into the hands of students and successfully launched with a public beta version last year. Dyslexico is also available online.

In the last year, Dyslexico entered agreements for test runs with two schools, The Pittsburgh New Church School for Children with Dyslexia and Millard Public Schools, which provided developers with some feedback.

“There’s always going to be that distinction between us developing it and the actual users using it,” Curd said. “To get that info, it’s big.”

Members of the Dyslexico team at the University of Nebraska-Lincoln, from left: Tristan Curd, Bridget Peterkin and Nick Lauver on Feb. 9, 2024, in Lincoln. (Zach Wendling/Nebraska Examiner)

The company is also in talks to partner with Services for Students with Disabilities at UNL. Pablo Rangel, a disability specialist with SSD, testified before the Education Committee Monday in support, in his individual capacity.

‘Independence and autonomy’

Rangel, who has dyslexia, said he could have benefited from such a program. He said Dyslexico could better prepare all K-12 students for college.

“The Dyslexico software does for individuals with dyslexia what prosthetics do for people who are missing a part of their body,” Rangel said. “It supports independence and autonomy for a person to move forward where typically they might retreat and give up.”

Colby Coash, on behalf of the Nebraska Association of School Boards and the Educational Service Unit Coordinating Commission, also testified in support of LB 1253.

Nick Lauver of Papillion, a business development associate for Dyslexico and a senior actuarial science and finance double major at UNL, said the company has already heard about some successes from educators, which have been “some of the brightest points” in the software’s development.

‘Redundant and unnecessary’

Megan Pitrat, a 10-year special education teacher in Syracuse, testified in opposition to LB 1253 on behalf of the Nebraska State Education Association. She said that while dyslexia is a problematic disorder, there are already systems in place to help students.

“I believe that allocating funds to research something that is already being serviced within the functioning system is redundant, unnecessary and a waste of precious funds that could instead be used to support teachers and systems that, as always, do the best with what we are given,” Pitrat testified.

State Sens. Lynne Walz of Fremont and Justin Wayne of Omaha tried to draw a distinction between NSEA’s testimony and LB 1253’s intent, which they said is to research and create but not demand the use of such technology. Pitrat said she would probably not use such technology and did not anticipate it being helpful.

“With all due respect, it’s not about you, it’s about the student,” Wayne said.

“OK, but as a practitioner, I’m determining, based on my experiences and my practice, how to deliver special education services to my students,” Pitrat responded.

‘Panacea’ or ‘game changing’

After the hearing, the NSEA referred questions to Tim Royers, president of the Millard Education Association, who clarified that the NSEA’s opposition stems from wanting to raise cautions and warn against framing such technologies as a “panacea.”

“We have concerns about digital, especially AI tools, being put in the driver’s seat to try and work on something that is as challenging to work on as dyslexia,” Royers told the Examiner.

Millard Education Association President Tim Royers, seated, testifies before the Education Committee. Jan. 17, 2024. (Aaron Sanderford/Nebraska Examiner)

He said the time he has lost being trained on tools that ended up shelved after “two months tops” amounted to a net loss for his time working with students.

Curd said Dyslexico wants to work with the Nebraska Department of Education on a large-scale research study in schools next year and aims to help an estimated 295,000 Nebraskans with dyslexia.

“If we can get it in the hands of the youngest people possible and make an impact that lasts throughout their lives, that’d be huge,” Curd said.

Janae Harris said she participated independently with Dyslexico and described the software as “game changing.” She said her “fervent hope” is that LB 1253 helps get Dyslexico into the hands of more students.

“Without the ability to read and write, Nebraska youth cannot be productive citizens and reach success,” Janae said. “Who knows what others could achieve with the help of this grant.”

The committee took no immediate action on LB 1253.

Editor’s note: This article has been updated to clarify Janae Harris’ experiences with Dyslexico and the number of Nebraskans the company aims to help.

Nebraska Examiner is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501c(3) public charity. Nebraska Examiner maintains editorial independence. Contact Editor Cate Folsom for questions: info@nebraskaexaminer.com. Follow Nebraska Examiner on Facebook and Twitter.

Why the Rush toward Generative AI Literacy in K-12 Schools May Be Premature https://www.the74million.org/article/why-the-rush-toward-generative-ai-literacy-in-k-12-schools-may-be-premature/ Tue, 13 Feb 2024 12:00:00 +0000 https://www.the74million.org/?post_type=article&p=722064 The emergence of generative artificial intelligence is driving a movement to rapidly embed genAI literacy — the understanding and skills required to responsibly and effectively utilize these technologies — into the fabric of K-12 education. While this work is well-intentioned, aiming to prepare children for a tech-centric future, the challenge lies in discerning the appropriate timing, speed and manner of integrating genAI literacy, and ultimately, the technology itself into K-12.

One reason widespread genAI literacy in K-12 may be premature is the technology’s current state. The breakneck pace of development, coupled with underlying complexities and unknowns, makes it exceptionally challenging to provide evidence-based education. For instance, how genAI makes decisions remains a mystery, even to its creators. Anthropic CEO Dario Amodei, a leader in the field, recently suggested that genAI is “inherently unpredictable.” What’s more, the technology’s rapid advancement vastly exceeds the more measured pace of curriculum development and associated professional development necessary for high-quality instruction. This imbalance could put schools and districts at risk of constantly having to play catch-up with the skills and understanding needed to teach students how to use genAI responsibly, rather than concentrating on equipping them with a fundamental and lasting base of knowledge.


Second, an urgent push to incorporate genAI literacy in classrooms might lead to a low quality of tools, content and teaching as companies prioritize quickly getting their products to market over ensuring the rigor and educational integrity of their offerings.

Third, genAI literacy — even if focused on responsible adoption — implicitly suggests that the technology is safe for children. Putting aside doomsday hypotheticals — in a recent poll, 50% of genAI researchers said they believe there is a 10% or greater chance that humans will go extinct from our inability to control the technology — existing problems, such as false information and misinformation, deepfakes, bias and phishing, highlight the fact that none of the current major genAI models, including ChatGPT, Claude, Bard and Copilot, are being built specifically with kids’ safety in mind.

Finally, the push for widespread genAI literacy may detract from more pressing priorities. These include, somewhat ironically, systematic preparation for the adoption of genAI to minimize future risks, as well as investments in fundamental subjects like literacy, mathematics, arts and physical education. It might also drain crucial resources supporting students’ social, emotional and mental well-being, which will be especially critical to preserve amid looming budget challenges.

Still, while acknowledging these concerns, it’s also crucial to recognize the potential of genAI. Given its saturation in the public consciousness, completely dismissing genAI literacy would be naive. Responsibly integrating genAI literacy and adopting the technology in K-12 education is the obligation of all stakeholders: parents, teachers, administrators, philanthropists and policymakers. Here are five considerations:

  • K-12 stakeholders should approach rapid classroom-based genAI adoption with deep skepticism: Recognize the potential of genAI but approach its integration into schools with caution. Decades of ed tech underperformance suggest that the notion of “adopt or get left behind” may be misleading. Taking a skeptical stance toward genAI could help identify areas where the technology can benefit students and isolate potential risks, thus fostering a deliberate and principled approach to incorporating genAI literacy in classrooms.
  • Schools should offer genAI literacy to high school students only: Incorporate genAI literacy into regular academic classes rather than free-standing lessons, with a specific focus on its safe use, ethical considerations and development of the skills necessary to evaluate the technology’s effectiveness at real-world problem solving and task completion. The cognitive maturity of students in grades 9 to 12 will allow for a deeper and more nuanced understanding of complex concepts such as ethics and safety in the context of rapidly evolving AI technologies, which younger students — while adaptable and tech-savvy — may not yet possess. Developing appropriate teaching tools for younger students and training educators accordingly will take time.
  • Schools should continue to focus on timeless skills: The shape of future job markets impacted by genAI is largely unknowable — remember predictions about the inevitable death of blue-collar jobs from artificial intelligence? So it is prudent to continue focusing on skills essential in a world where the only predictable constant is change: critical thinking, problem-solving, adaptability and ethical reasoning, as well as newer areas such as computational thinking.
  • Philanthropy should focus on understanding and mitigating risks associated with genAI adoption: Funding should prioritize concerns about safety, privacy and well-being. Answering fundamental questions that can make genAI literacy more robust and rigorous should be the current focus. For instance, how will technology impact children’s sense of self? How will it impact cognition in young people? What systems must be built and data collected to determine the appropriate age to introduce classroom-based genAI tools?
  • Philanthropy and policymakers must empower adults: One clear lesson from the social media experiment over the last decade is that adults must protect young people from the risks associated with new technologies. To do so, they need to understand those risks with nuance. GenAI literacy should be offered to educators, parent-teacher associations and professional organizations, among others, to equip adults with the knowledge necessary to safeguard young people and advocate on their behalf. 

The sensible way forward is to focus on a balanced approach that prepares children for the future without overwhelming or misdirecting their learning experiences in the classroom.

