@ferrarellim
Teaching and research - teacher education #AlfabetismosAumentados #AlfabetismosFluidos #Postdigital 📣 https://t.me/exploratorIA 🌎 https://linktr.ee/ferrarellim AI in education #AugmentedLiteracies #BorderPractices #EduTequiologies #AIinEDU #IAenEDU
1. The European Parliament’s attempt to ban plant-based foods from being sold as “sausages”, “burgers” etc is a direct response to livestock industry lobbying. This is a short thread on how utterly bleeding ridiculous it is. 🧵1/6
Our book with prof. Stephen Ball @ioe.bsky.social is an invitation to think education differently. Enjoy!
#escuela #educación
🎥 "Is schooling educating? Tensions, dilemmas, and urgent questions"
#Debate between @jordicollet.bsky.social and Mariano F. Enguita this past Monday.
Moderated by @carlosmagro.bsky.social
🔗 www.youtube.com/watch?v=Fn7O...
This one-pager from the French ministry of education contains more useful guidance for teachers than the entire document we recently got in Ireland.
It is understandable that policymakers might not be able to answer all the questions that arise with AI. But how refreshing it is to see them try!
What do The Matrix and educational AI have in common? Unfortunately way too much, as they promote very narrow ideas of what education should be.
Instead, Alberto Romele and I highlight better sci-fi inspirations in this new paper in Educational Theory:
onlinelibrary.wiley.com/doi/10.1111/...
Quote from philosopher Michał Wieczorek: "The recklessness with which we are approaching the adoption of AI in schools is ridiculous. If you think about it, you just have tech companies bringing products into school with no testing, no evidence, no oversight. If you or I tried to do that, we would immediately get into trouble with an ethics board at our university … And yet, for some reason, we have decided that bringing AI as quickly to schools as possible is the way to go"
We're hearing more & more sloppy talk about 'AI ethics' in education - 🎧🎙️ listen to me talk to @michalwieczorek.bsky.social for a philosophical take on the ethics of AI ... and why a lot of current ed-tech is ethically questionable!
www.buzzsprout.com/1301377/epis...
Quote from Mark West: "My own background is history. And I'll tell you what changes history: pandemics change history. People like to forget that. But looking forward 100, maybe 200 years in the future, the COVID-19 pandemic will be seen as a major turning point in education. So we need to be clear about how the pandemic has changed narratives about education … and we also need to be clear about what lessons we can draw from this"
Six years since COVID hit us all, and it is wild how most people in ed-tech now act like *nothing* happened. I got to talk with Mark West about how the pandemic fundamentally changed our dependency on ed-tech, and what we can learn from the COVID experience.
www.buzzsprout.com/1301377/epis...
ijte.net/index.php/ij...
"GenAI chatbots can enhance learning by providing personalized support, immediate feedback, and opportunities for self-directed learning. However, concerns persist regarding over-reliance on AI, reduced critical thinking, and academic integrity."
Pew Research: "AI literacy is on the minds of parents, educators and researchers. Experts are already calling this a crucial skill for teens – including as a way to combat misinformation."
www.pewresearch.org/internet/202...
I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more. Anthropic has also acted to defend America’s lead in AI, even when it is against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage. Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:

Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.

Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot b…
To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date. The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security. Regardless, these threats do not change our position: we cannot in good conscience accede to their request. It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required. We remain ready to continue our work to support the national security of the United States.
WASHINGTON (AP) — Anthropic CEO says AI company 'cannot in good conscience accede' to Pentagon's demands to allow wider use of its tech.
LOL: "After this story was published, Paliwal said he received a cease and desist letter from Instructure, which owns Canvas, and has since taken down Einstein’s website." This was the agentic AI that was going to do your Canvas homework for you.
www.insidehighered.com/news/tech-in...
Caro Elebi:
"It is not enough to ask for and promote individual responsibility. Literacy is necessary, but it must be accompanied by regulations that set clear limits on those who design, deploy, and market these systems"
www.lasillavacia.com/red-de-exper...
Something that an AI agent can't do is tell you what thoughts and questions you actually have when you read something. Here are some of mine about the passage below, and why I value teaching students to think critically: so they can ask their own questions.
Screenshot of an article header from a website. The title reads "Writing As Thinking—By Proxy" in bold serif font. Below it, the author's name "by Jon Ippolito" appears as a red hyperlink, followed by the date "Wednesday, February 18th 2026" in gray monospace font. The article preview shows a photo of a cream-colored t-shirt on a hanger printed with cartoon robots and the text "The Transformers: Writing Instructors in the Age of A.I." alongside an italic abstract that reads: "In this provocation, Jon Ippolito questions what human capabilities AI extends and what capabilities it removes. In doing so, he charts the evolution of human writing processes alongside technology while speculating on what future human writing practices will look like."
Will “writing as thinking” survive the AI age? A provocation from Jon Ippolito, followed by a conversation among the other "Transformers": Mark Marino, @anetv.bsky.social, @mahabali.bsky.social, @marcwatkins.bsky.social, Jeremy Douglass, and me.
preview.electronicbookreview.com/gatherings/t...
I'm working on a post about the Einstein AI agent, which claimed it could do a whole course for you and log into Canvas. It is likely a hoax or a failed vibe-coded app, and it has been taken down. Agentic AI like Perplexity's Comet browser CAN take a course for you. Knowing what is BS will always be valuable.
Generic white dude who programs @westbynoreaster: Why then did you take down the “Einstein” chatbot? (Feb 26, 2026)
Advait Paliwal @advaitpaliwal: Cease and desist
Generic white dude who programs @westbynoreaster: Really? Presumably from Canvas/Instructure, right?
Advait Paliwal @advaitpaliwal: Due to the name Einstein
In utterly DELIGHTFUL news, Advait Paliwal, the desi techbro behind the Einstein AI cheatbot, which claimed it could log into Canvas and do/turn in your assignments for you, has been forced to take down his website.
He'll likely be back, and there are others like him in abundance, sadly.
New OA article just out on "assetizing academic content" led by @jkom.bsky.social with me, @keanbirch.bsky.social & Klaus Beiter, exploring how academic materials are turned into value-generating digital assets by HE institutions, edtech platforms, and AI companies link.springer.com/article/10.1...
«Precisely because technology carries risks, its use must be addressed within formal educational processes. It is a matter of social justice.» @carlosmagro.bsky.social
carlosmagro.substack.com/p/es-la-tecn...
Happy to announce this CfP!
A space to discuss globally the role of ethics washing in educational technologies.
Come and join us!
think.taylorandfrancis.com/special_issu...
Juan Villoro:
Simulacra are replacing real acts
youtube.com/watch?v=xGhb...
I just read this by @tonisolano.bsky.social
Via @jordi-a.bsky.social
www.repasodelengua.com/2026/02/escu...
Tu Nube Seca Mi Río ("Your Cloud Dries Up My River"):
The ecosocial impact of data centers
tunubesecamirio.com
A great conversation with Belén Gopegui about her book "Te siguen"
youtu.be/N3NNTM2S74g?...
Education is political. From Dewey to Freire, from María Zambrano to the MRPs (Movimientos de Renovación Pedagógica, pedagogical renewal movements), pedagogical renewal has insisted that the school is not a neutral space and has shown that education is built around decisions and values that are political.
I brought a friend to help me share why signing up for free 3-K and Pre-K this week matters.
Visit myschools.nyc to get started.
#universitat #UB #GenAI
📢 "Good practices for the use of generative artificial intelligence at the Universitat de Barcelona"
web.ub.edu/web/politiqu...
@ub.edu
📷 Helena Georgiou
#photooftheday #photography
#blueskyphotography #photokind
We have three important rights: the right to be wrong, the right to change our mind, and the right to leave if we so decide. We should exercise them without hesitation whenever necessary.
- Humberto Maturana