A comprehensive metaverse might be a long way off, but the industry’s biggest player is trying to plant its flag in the American education industry right now.
This week I got a peek into one of Meta’s signature virtual-reality projects: a round of partnerships with college educators who have been using Meta’s technologies in their classrooms. I joined them for a virtual roundtable using the Meta Quest 3 headset, and then spoke to Meta’s president of global affairs, Nick Clegg, about how it’s all shaking out.
“The absorption of information by students and the way the students embraced this platform was unbelievable,” said Sherman Tippin, an attorney and adjunct professor teaching “Law in the Metaverse” at William & Mary College of Law.
He was among professors from 11 colleges including his own, Carnegie Mellon University, and Utah Valley University, all three of which just joined the program — and they nearly all shared his enthusiasm.
Clegg told me this excitement came organically.
“We haven’t been going around dumping Quest headsets on reluctant educators, this has been us responding increasingly to people who are at the forefront of using technology in education, and if anything I think we’ve been somewhat behind,” Clegg said.
He wrote in April that virtual reality is uniquely suited for educational purposes — allowing for both a kind of immersive social interaction that improves the learning experience, as well as access to simulations or experiences not otherwise available at a brick-and-mortar campus.
He said that VR headsets offered a specialized learning tool for schools in keeping with pedagogical best practices. “The educational systems that always seem to do best are those that have worked out how to tailor educational experiences around the individual needs of the student,” Clegg said, asserting that virtual reality devices allow for uniquely beneficial personal tailoring.
Of course, uptake for such a new technology at a large scale has revealed some institutional speed bumps as well, like time-consuming login processes that could hold up large classes of students. Clegg said that while the tech can be “relatively cumbersome to use” at the moment, he’s eager to pass along findings from the classroom experience to Meta’s engineering team.
While the metaverse classroom experience might be futuristic, there’s a present-day policy concern that could potentially interfere with Clegg’s vision for an educational metaverse: Aggressive American antitrust enforcement, as Meta and the Federal Trade Commission remain locked in an ongoing legal dogfight. I asked Clegg whether he worried at all that Meta could be painting yet another regulatory bullseye on its chest with its expansion into education tech, installing itself as a sort of default operating system for the virtual classroom.
He was skeptical of the idea, noting that some of the participants in yesterday’s meeting were using other companies’ technology to interact with Meta’s educational materials.
“It doesn’t feel to us at all that we’re somehow sweeping into virgin territory where no one else is competing,” Clegg said. He acknowledged that Meta’s educational apps are available through its own Meta Quest Store, but said the company didn’t have “an entirely unopposed monopoly” in education.
The AI Policy Institute shared exclusively with DFD the results of another public poll on AI, this time on the Sam Altman fracas at OpenAI and some of the more esoteric philosophies of Silicon Valley.
The institute asked more than 1,200 Americans how much of the news about Altman they’d even heard at all — with a whopping 50 percent saying they’d heard none. That didn’t prevent them from sharing their opinions about it, however: When asked whether they think Altman’s firing and re-hiring increases or decreases the need for government regulation of AI, 59 percent said “increases” and only 28 percent responded “not sure.” The poll had a margin of error of 3.9 percent (crosstabs here).
AIPI also asked respondents what they think about “effective accelerationism,” a school of thought obscure even within the tech industry that believes AI capabilities should be pushed to their limit at any cost, with no restrictions, and that AI should even succeed humanity as the dominant form of life on the planet. This was not popular: Only 13 percent of respondents said they had a positive view of that idea, while 65 percent were unfavorable and 22 percent not sure.
The poll also found widespread support for restrictions on AI-powered autonomous weapons, including a potential United Nations resolution banning lethal automated drones. AIPI additionally asked Americans whether AI companies should be held liable for various crimes committed with the help of AI, and respondents answered overwhelmingly in the affirmative in every case.
Hear more about the AI Policy Institute and its co-founder Daniel Colson on the POLITICO Tech podcast.
The defense bill Congress agreed on late Wednesday contains some major rules for AI.
POLITICO’s Mohar Chatterjee reported on the National Defense Authorization Act for Pro subscribers, noting that it represents the most serious regulation Washington has yet enacted around the technology. It includes a bug bounty program for foundational AI models at the Pentagon, a competition for software that watermarks AI-generated content, and various reports and studies on agencies’ AI capabilities and knowledge gaps.
It also partially restructures the Pentagon’s AI office, establishing a Chief Digital and Artificial Intelligence Governing Council that will ensure the “responsible, coordinated and ethical” use of AI at the Pentagon.
The bill additionally nixes an environmental carveout that the semiconductor industry had pushed for, as POLITICO’s Christine Mui reported for Pro subscribers — just as Washington prepares to spend billions on microchip manufacturing via the CHIPS and Science Act.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]) and Daniella Cheslow ([email protected]).