I came to education through experience, not entitlement—starting at a community college and later earning a Ph.D. from Princeton, where I was awarded the Whiting Fellowship for Distinguished Work in the Humanities. My work has always explored how media and technology shape cognition, authorship, and pedagogy—research I continued as a Brittain Postdoctoral Fellow in Digital Pedagogy at Georgia Tech, where I paired EdTech research with real-world classroom design.
I’ve worked in educational technology since the early 2000s, long before it was widely recognized as “EdTech.” My work blends systems thinking, networked cognition, and inclusive design—bringing academic rigor into scalable, equity-focused platforms. I’ve led education product strategy and learning experience design at the intersection of UX, pedagogy, and content—developing AI-integrated frameworks, SEL-centered tools, and professional learning experiences for real classrooms.
Alongside 15 years of teaching experience across K–12, community college, and university levels, I’ve spent the past decade leading content and product strategy in fast-scaling business environments. I’ve managed direct reports in content, marketing, and UX; built contributor networks of 200+ specialists; and developed platform-integrated instructional systems serving millions of users. I’ve created scalable content ecosystems and led cross-functional teams across education, pharma, AV, food manufacturing, and design—producing everything from blogs and white papers to instructional video series, product webpages, and multimedia course design.
I specialize in translating complexity into story, aligning instructional integrity with product and business outcomes, and designing learning systems that scale across modalities and users. My approach is shaped by a lifelong commitment to pedagogy, a deep respect for the complexity of learning, and a belief that education products should reflect the minds they serve.
The Cognitive Turn: Locating Cognitive Difference in the Age of AI
Why AI Discourse Needs N. Katherine Hayles’s Theory of Cognition
When Writing Stops Thinking: Automation, Authorship, and the Ethics of Conceptual Rigor from Mark Twain to AI
This essay examines the growing disconnect between language and thought in contemporary discourse, particularly in the context of EdTech, AI, and academic theory. Beginning with reflections on the author’s recent engagement with theoretical dialogue on LinkedIn—a platform marked by performance-driven visibility rather than conceptual depth—it traces how theoretical vocabulary has increasingly come to function as professional shorthand, signaling intellectual alignment while often bypassing the labor of thinking. Drawing on examples from education discourse and the historical figure of Mark Twain—whose engagement with the typewriter and notions of automatic writing challenged humanist ideas of authorship—the essay situates current anxieties around AI-generated language within a longer tradition of mechanical mediation. Rather than framing authorship as a question of human versus machine, the essay argues for conceptual rigor as the true index of intellectual integrity. It calls for a renewed attention to friction, difficulty, and specificity in writing—not as barriers to communication, but as signs that thought is actively being done.
The Cognitive Turn: Locating Cognitive Difference in the Age of AI
There’s a certain bleak ingenuity to the idea that our best response to AI’s unsettling fluency is to manually downgrade its pronouns. A recent Boston Globe op-ed recommends that we stop referring to generative systems as coworkers or collaborators, and instead swap the “o” for a zero: c0workers, c0llaborators. It’s not language that’s the problem here—it’s the use of language to shut down thought just when it’s most needed. Rather than open space for describing what resists classification, this symbolic tweak tries to pin the world back into place with a single keystroke.
The Myth of Prompting (La mitologia del Prompt)
At first glance, prompting an AI seems straightforward: a user poses a question, the model responds. But this framing—human initiator, machine reactor—is less a description than a myth about how knowledge works, especially in education and EdTech.
What kind of ethics does artificial intelligence demand? And are we even having the same conversation?
A contribution on artificial intelligence and ethics, comparing two authors who have written on the topic: Luciano Floridi and Katherine Hayles.