
Watch Philosophy Lectures That Became a Hit During COVID by Professor Michael Sugrue (RIP): From Plato and Marcus Aurelius to Critical Theory

OLDaily by Stephen Downes - Wed, 2024-05-29 18:37
Colin Marshall, Open Culture, May 29, 2024

The summary in Daily Nous says it all, I think: "You might not have heard the philosophy lectures of Michael Sugrue, who died recently, but hundreds of thousands of others have — 'The type of professor you'd ditch class to go and listen to,' says one YouTube commenter." Open Culture leads with, "If we ask which philosophy professor has made the greatest impact in this decade, there's a solid case to be made for the late Michael Sugrue." If impact is defined as reach, then maybe. Though it might be hard to surpass Peter Adamson's monumental History of Philosophy Without Any Gaps. But the main point here - which surely ought to dominate any discussion of online learning - is that the apparatus of colleges, courses, degrees and credentials is only a very small part of the picture, and that real contributions are being made outside the classroom walls, out in society, where such learning surely belongs.

Web: [Direct Link] [This Post]

Towards Fairness and Justice in AI Education Policymaking - NORRAG

OLDaily by Stephen Downes - Wed, 2024-05-29 18:37
Emma Ruttkamp-Bloem, NORRAG, May 29, 2024

This post comes from the larger publication, AI and Digital Inequalities (72 page PDF). Emma Ruttkamp-Bloem argues that such a policy should include "social values such as affirmation of the interconnectedness of all humans with each other, equity and human agency; human rights values such as privacy, transparency and accountability; and research values such as honesty and integrity." She also argues that "three of the biggest obstacles to attaining these goals include digital poverty concerns, the creation of monolithic societies and misinformation." This article is reflective of the publication as a whole (which is definitely worth a read): it is generally policy-based and founded in social justice themes. But the publication as a whole feels a bit lazy, in the sense that it essentially lists important global social justice issues and applies them to AI, with stipulations that AI should mitigate or in some way address these concerns.

Web: [Direct Link] [This Post]

Technology-Integrated Assessment in B.C. Higher Education – BCcampus

OLDaily by Stephen Downes - Wed, 2024-05-29 18:37
Colin Madland, BCcampus, May 29, 2024

This short article summarizes a longer paper in OTESSA on developing the Technology-Integrated Assessment Framework (19 page PDF) that "serves as a starting point to understand how to improve technology-integrated assessment practices in higher education in British Columbia and beyond." It consists of "four components for instructors to consider when planning assessment." Each of these has three or four subcomponents and is based on previous literature on the subject - the purpose of assessment, based on the Bearman et al. model; the duty of care (in law, and also in the sense of communality); technology assessment (the UTAUT model); and assessment design (the five Rs framework). I would have looked for a more critical assessment of each of these (e.g., why UTAUT and not UTAUT2?) but that may have taken more space than the journal allowed.

Web: [Direct Link] [This Post]

Training is not the same as chatting: ChatGPT and other LLMs don’t remember everything you say

OLDaily by Stephen Downes - Wed, 2024-05-29 18:37
Simon Willison, May 29, 2024

This is something I've also noticed while working with ChatGPT. It will appear to remember what you say, but it's not totally reliable. So when I say "do it the same as last time but make it green" sometimes it will work and sometimes it won't. Simon Willison says, "This can be quite unintuitive: these tools imitate a human conversational partner, and humans constantly update their knowledge based on what you say to them." But I don't know about that. Ask a student to write a paragraph. Then give them a green pen and ask them to write the same paragraph. I'm pretty sure the two paragraphs will be different. Whenever we come up with something creative, we make it up from scratch almost every time, unless we have an eidetic memory, which most of us don't.
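Willison's point can be illustrated with a toy stateless model: each call sees only the messages explicitly passed in, so any apparent "memory" exists only because the client re-sends the earlier turns. A minimal sketch in Python (the `fake_llm` function and its keyword-matching behaviour are hypothetical stand-ins, loosely modeled on the message-list format typical of chat APIs, not any real service):

```python
# Toy illustration of a stateless chat model: nothing persists between
# calls; the model "knows" only what is in the messages list it is given.
# (fake_llm is a hypothetical stand-in, not a real API.)

def fake_llm(messages):
    """Answer using only the conversation supplied in this one call."""
    context = " ".join(m["content"] for m in messages)
    if "make it green" in context and "draw a circle" in context:
        return "a green circle"           # prior turn was re-sent, so it "remembers"
    if "make it green" in context:
        return "green... what, exactly?"  # prior turn was lost
    return "a circle"

# Turn 1
history = [{"role": "user", "content": "draw a circle"}]
reply1 = fake_llm(history)

# Turn 2, done correctly: the client re-sends the full history each call
history += [{"role": "assistant", "content": reply1},
            {"role": "user",
             "content": "do it the same as last time but make it green"}]
print(fake_llm(history))

# Turn 2, done wrong: only the new message is sent, so context is gone
print(fake_llm([{"role": "user",
                 "content": "do it the same as last time but make it green"}]))
```

The design point is that "remembering" is the client's job, not the model's: the model weights are frozen after training, and the conversation exists only in what gets re-transmitted on each call.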

Web: [Direct Link] [This Post]

The Reason That Google's AI Suggests Using Glue on Pizza Shows a Deep Flaw With Tech Companies' AI Obsession

OLDaily by Stephen Downes - Wed, 2024-05-29 18:37
Frank Landymore, Futurism, May 29, 2024

I've mentioned the AI suggestion of using glue on pizza in a talk. This article traces the source of the recommendation to an 11-year-old Reddit comment posted by a user with a crass name "that was almost certainly meant as a joke." We are all agreed, I think, that adding glue to pizza is a bad idea. But we are not agreed, I think, on what large language models (LLMs) are supposed to do. They're not encyclopedias. They are language learning systems. The sentence "put glue on pizza to prevent the cheese from sliding off" is a perfectly well-formed sentence. It also happens to be false (or at the very least, bad advice). LLMs are designed to address the first problem, and not so much the second. When in the future we get LLMs that are supposed to be reliable and accurate, we won't use Reddit posts to decide what is good advice and what is not (or, at least, I hope not, though I'd be interested to see what the ethics of the AITA subreddit looks like). Via Michelle Manafy.

Web: [Direct Link] [This Post]

The Rise of Artificial Intelligence and the Implications for School Districts

OLDaily by Stephen Downes - Wed, 2024-05-29 00:37
Tom Vander Ark, Getting Smart, May 28, 2024

This is kind of fun because Tom Vander Ark offers his opinions at a school board meeting about the short and long-term implications of AI and then ChatGPT is asked to do the same thing. Neither is horrible; both are what you would expect if you wrote your testimony from the 'consultant general advice on anything' manual - personalize, improve efficiency, protect privacy and security, draft a plan, involve community, deliver training, etc. etc. etc. In other words, keep doing the same thing you've always done. You don't need to worry, nothing will really change, all your jobs will be safe.

Web: [Direct Link] [This Post]

A futurist decarbonizes his professional travel in 2024: problems and options

OLDaily by Stephen Downes - Wed, 2024-05-29 00:37
Bryan Alexander, May 28, 2024

"How can we travel without contributing to global warming?" asks Bryan Alexander. As an academic and futurist, he finds himself traveling a lot. Trying to be responsible about it proves difficult, though. I read this as an object lesson in how climate change is much more a social problem than an individual problem. Even with the best of intentions, we can't do our jobs in a climate friendly manner. We need social supports and infrastructures our society - for whatever reason - won't put into place. Long term, society will suffer for this.

Web: [Direct Link] [This Post]

Digital Native Déjà Vu: Avoiding unhelpful generational generalisations around AI in education

OLDaily by Stephen Downes - Wed, 2024-05-29 00:37
Linda Corrin, ASCILITE TELall Blog, May 28, 2024

The concern is over things like this: "In a LinkedIn post entitled Our coming AI Natives Prensky suggests that 'young people will grow up understanding how to control Generative AI just as they learn to direct their own bodies and minds, with much of the guidance they need in their pockets or chipped into their bodies'." Dave Cormier wrote on Twitter (now inaccessible unless you log in) "I've now seen the expression 'AI-Native' to describe the children who were 'born in the age of AI' because, I dunno, they're going to grow gills or something." Sure, the whole 'digital native' thing was overblown. But we also make jokes today about young people not knowing what a DVD is or not understanding that phones had wires. It has more to do with exposure than age, obviously, but people do adapt, and it's worth asking how they'll adapt. Probably not with gills, though.

Web: [Direct Link] [This Post]

Microsoft teams with Khan Academy to make its AI tutor free for K-12 educators and will develop a Phi-3 math model

OLDaily by Stephen Downes - Tue, 2024-05-28 21:37
Ken Yeung, Venture Beat, May 28, 2024

This post describes the partnership between Khan Academy and Microsoft. "Launched in 2023, Khanmigo is an experimental AI tutor that offers students personalized guidance in subjects such as math, science, coding and writing. More than 65,000 students are said to be using the chatbot for their studies." Khan's core business is still courses - for now. But the real future is in personal tutoring.

Web: [Direct Link] [This Post]

Risking Ourselves in Education: Qualification, Socialization, and Subjectification Revisited

OLDaily by Stephen Downes - Mon, 2024-05-27 18:37
Gert Biesta, ResearchGate, Educational Theory, May 27, 2024

It's just another taxonomy, but this organization of the objectives of education, presented at a talk today, caught my imagination. Presented by Gert Biesta in 2011, it addresses qualification (the context of the discipline), socialization (the social organization of the discipline), and subjectification (personal capacity building and development). The paper "concerns the shift in educational discourse, policy, and practice toward learners and their learning. This shift is often presented as a response to top-down practices of education that focus on teaching, the curriculum, and the input side of education more generally." Gert Biesta is new to me but looks well worth catching up on.

Web: [Direct Link] [This Post]

AI in Enterprise Learning Systems

OLDaily by Stephen Downes - Sun, 2024-05-26 19:17

This presentation provides a general perspective on the use of AI in enterprise learning systems like Learning Experience Platforms, Learning Management Systems, and Talent Management Systems, and what that might mean for learning.

Summer Institute on Education and AI, Montreal, May 26, 2024 [Link] [Slides]

Oblongification of education

OLDaily by Stephen Downes - Fri, 2024-05-24 21:37
Ben Williamson, Code Acts in Education, May 24, 2024

I think what bothers me most about both proponents and critics of AI in education is that none of them can imagine any other model than an instructor teaching a student. It's like they are frozen by this image of a computer screen - an oblong box - as though no other form of interaction could ever exist. It's Taylorism, not technology. I mean, here's Ben Williamson: "despite the rhetoric of transformation, all these AI tutors really seem to promise is a one-to-one transactional model of learning where the student interacts with a device. It's an approach that might work OK in the staged setting of a promo video recording studio, but is likely to run up hard with the reality of busy classrooms." Well, yeah, if a "busy classroom" is your ideal of learning, AI tutors might fall short. But that's like saying airplanes won't fit through railway tunnels.

Web: [Direct Link] [This Post]

You can't save research without saving universities

OLDaily by Stephen Downes - Fri, 2024-05-24 21:37
James Coe, WonkHe, May 24, 2024

This article questions the thinking behind "contingency plans for universities in the event that they fail which would protect researchers and their research," stipulating a set of conditions (e.g., "there is sufficient funding", "the researcher wishes to move", etc.) needed to make this happen. But while not endorsing the contingency plans, I would suggest that the heading really should be "You can't save university research without saving universities". I mean, I work in a non-university research institute, and I know of many others. Take, for example, the comment that "a building I know only too well cost ~£55M to construct, uses ~£1.5M+ in electricity a year and produces some very important technological advances." Nothing stops it from continuing on its own. There's no reason a full university infrastructure needs to surround that building, and in the case of many stand-alone research institutes world-wide, it doesn't.

Web: [Direct Link] [This Post]

AI in Education: Google’s LearnLM product has incredible potential

OLDaily by Stephen Downes - Fri, 2024-05-24 21:37
Daniel Christian, Learning Ecosystems, May 24, 2024

Daniel Christian summarizes three articles looking at the recently released AI models. Though I think the lead article on LearnLM from AI Supremacy isn't useful (it feels like an AI-assembled jumble of acontextual platitudes) the second makes an important point: "This tool (ChatGPT4o) is ideally positioned not only to instruct but also to sell products, shape minds, and manipulate real world events." The third is also interesting, but in a different way, as it makes the claim that free products are "market distortions". For example, "the entire LMS market of the 2000s sagged under the weight of Google Classroom's $0 price tag." But - importantly - there's no such thing as a non-distorted market. There's no 'natural state' of the market - that's just fantasy. The very concept of the market entails people trying to 'distort' the market by manipulating either supply or demand.

Web: [Direct Link] [This Post]

Why AI Can’t Replace Teachers

OLDaily by Stephen Downes - Fri, 2024-05-24 21:37
John Spencer, May 24, 2024

I really disagree with the main argument in this post, though I imagine many readers (especially those who are educators) would agree: "Whether it's a project-based learning unit or a class discussion, it is the teacher, as the artist and the problem-solver and curator, who sparks innovation." While this reads as though John Spencer thinks that the AI is incapable, it is also saying the same thing about students. This perspective places most of the agency in the hands of the teacher and almost none in the hands of the student. We did it that way because teachers were scarce. But when we can have individual teachers for each person, we need not depend on the teacher to set a direction. And I think we'll find that individual students - especially with AI support - will be able to cope.

Web: [Direct Link] [This Post]

Why publishers are preparing to federate their sites

OLDaily by Stephen Downes - Fri, 2024-05-24 21:37
Sara Guaglione, Digiday, May 24, 2024

The story: "The Verge and 404 Media are building out new functions that would allow them to distribute posts on their sites and on federated platforms – like Threads, Mastodon and Bluesky – at the same time. Replies to those posts on those platforms become comments on their sites." Sounds great, but I've been blocked by a paywall on 404 in the past, and this doesn't really work with an open fediverse. So there's going to need to be some thinking about how all this works (I'm also considering how a similar plan for OLDaily would work, though the number of comments on OLDaily posts is pretty minimal - it's more of a design exercise for me).

Web: [Direct Link] [This Post]

Context matters? Yes and AI has made its move...

OLDaily by Stephen Downes - Fri, 2024-05-24 21:37
Donald Clark, Donald Clark Plan B, May 24, 2024

Overview of the ways AI-supported learning will be able to draw on context. "Where AI has been lacking is in knowing about your immediate context and intent. This nut is now being cracked, as AI's multimodal abilities have now hit the market. By this I mean its ability to 'hear' things, 'see' through your camera, identify things from video or know what's on your screen."

Web: [Direct Link] [This Post]

Open Education Bootcamp 2024 Recordings

OLDaily by Stephen Downes - Fri, 2024-05-24 21:37
Centre for Teaching and Learning, University of Regina, May 24, 2024

Open Education Bootcamp recordings are now available. These include not only the conversation I had with Alec Couros, Valerie Irvine, Brian Lamb and David Wiley but also presentations from Heather Ross (Access, Alignment, and Agency), Cristyne Hébert (Student Engagement and Open Pedagogy) and Catherine Cronin (Higher Education for Good), among others.

Web: [Direct Link] [This Post]

Fair use and the case against the commercialization of nonprofits: The Wikimedia Foundation’s amicus brief in Hachette v. Internet Archive

OLDaily by Stephen Downes - Fri, 2024-05-24 21:37
WikiMedia Foundation, May 24, 2024

I'm mostly posting this for the record, though it serves as an example of the knots we can get tied into when we confuse non-commercial entities, like charities, with non-commercial use, like personal learning. Anyhow: the Wikimedia Foundation's brief (along with Creative Commons and Project Gutenberg) "argues that the Court's interpretation of fair use could wrongly classify nonprofit secondary uses as commercial, impacting all nonprofit organizations' ability to utilize copyrighted material." I think that in the case of Internet Archive, neither the organization nor the use are commercial in nature. But U.S. courts might not be of a similar mind.

Web: [Direct Link] [This Post]

What is it like to dislike data?

OLDaily by Stephen Downes - Fri, 2024-05-24 15:37
Michiko I. Wolcott, Msight Analytics LLC, May 24, 2024

"A lot of what I see is data enthusiasts trying to mandate and enforce adoption," writes the author. "They try to do this through authority and implicit fear rather than by empowering everyone in the organization." But where they fail isn't in the data, it's in understanding the subject. "Our problem-solving nature does not always translate to being good at diagnosing problems. We rarely uncover the root cause of the perceived problems." Seems right to me.

Web: [Direct Link] [This Post]
