Yet another helpful resource from the eCampusOntario Open Library, this pressbook is subtitled "Strategies and Exercises for Meaningful Use in Higher Ed." The first 'chapter' is just a few paragraphs; the second contains a useful listing of AI engines and links to some AI databases. You'll also like the section on developing AI-integrated projects, with ideas for some 17 fields of study. My main criticism is that it's so short - if you're going to take the time and effort to write an ebook, do a proper job of it.
Web: [Direct Link] [This Post]I'll just quote Dave Cormier on this, with approval: "This new regulation takes the most flexible and powerful information retrieval tool of all time and bans it from the places where our kids learn to learn - Ontario schools. Cell phones are super distracting. You might be reading this now when you should be doing something else. Just ignoring the devices is not going to help us get any better at controlling our usage or learning to do a better job finding/evaluating/assembling information. We are going to make our schools into imaginary places that have no connection to how knowledge is made. We need to adapt, not lock ourselves into a box."
Web: [Direct Link] [This Post]The comment culture has all but disappeared, notes Alan Levine. There's no one reason. "Not that attention is my purpose or goal, but really, there is so much more stuff out there, that we are swimming in it. But heck, a small bit of validation goes a long way, and I don't see much by some heart click icon." True. But as I've commented, the platforms drive comments - it used to be that people would link to their blog post on Twitter, and people would follow and comment. The platform drives the comments - but this is much less the case on Mastodon, though I suspect this will change in time.
Web: [Direct Link] [This Post]For AI to support students, education institutions need to ensure employees have the skills and training to leverage new technologies.
Web: [Direct Link] [This Post]
I haven't tried either of these tools - I have a GPT-4 account and use that. But I've been hearing that both of these are a step up. Here's the review: "Both tools have a lot to offer. If you need citations as part of your responses, Perplexity Pro wins hands-down. But be careful to not rule Gemini Advanced out, though. Its integration into Google Workspace means you will be using it in the future, one way or another." Each of them is about $20 a month, as is ChatGPT-4. Your AI costs could add up faster than a cable bill if you're not careful.
Web: [Direct Link] [This Post]I made a comment during a talk earlier this week (slides, audio coming soon) that we cannot become ethical people merely by avoiding risk. That's why I challenge risk-based definitions of ethics in AI research and practice. We have to hope for and work toward something better. That's at the core of a lot of punk-based rebellion, and I will confess, if it has 'punk' in the name, I'm probably on board, because it implies building and doing and making for yourself. With that preamble (emphasis on the 'amble') I introduce this paper, which tries to do better: "cultivating hopepunk and solarpunk attitudes within the field of higher education and educational technology, as well as rewidening and rewilding higher education using utopian imagination, the article (20 page PDF) points towards more hopeful, preferable futures for the people and the planet."
Web: [Direct Link] [This Post]There are two stories here, the main story, and the story I think is written between the lines. The main story is that the small publisher is shutting down because the deluge of AI-generated content submitted is too much to wade through. Also, the content is poorly written: "It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same." Between the lines, though, is the likelihood that it's taking more and more time to distinguish between the AI-written and human-written content. And then what? How does a publisher decide what's worth publishing? Via Paul R. Pival.
Web: [Direct Link] [This Post]This paper describes "MetaMate, an open-access web-based tool leveraging large language models (LLMs) for automated data extraction in educational systematic reviews and meta-analyses." While some may reasonably express scepticism about the use of AI to summarize papers for metastudies, I think it will be a positive if it lets researchers spend more time reading the papers and less time searching through them. Anyhow, the paper evaluates the software's precision and finds it compares well with human coders in most areas, though it's weaker in subjects such as Delivery Mode (81.25%), Intervention Duration (87.5%), and Academic Subject (87.88%).
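The precision figures quoted above are per-field agreement rates between the LLM extraction and human coders. As a rough illustration only (the function name and sample data below are hypothetical, not taken from the MetaMate paper), such a rate can be computed like this:

```python
# Hypothetical sketch: computing a per-field agreement rate between an
# LLM extractor and a human coder. Data values are illustrative only.

def agreement_rate(llm_values, human_values):
    """Fraction of items where the LLM extraction matches the human code."""
    matches = sum(a == h for a, h in zip(llm_values, human_values))
    return matches / len(human_values)

# Illustrative codings for one field, e.g. 'Delivery Mode'
human = ["online", "blended", "in-person", "online"]
llm = ["online", "blended", "online", "online"]

print(agreement_rate(llm, human))  # 3 of 4 codes match -> 0.75
```

A real evaluation would also report chance-corrected agreement (e.g. Cohen's kappa) rather than raw percentages alone.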
Web: [Direct Link] [This Post]As usual, we could replace the term 'journalism' with 'education' and arrive at pretty much the same conclusion. "Twitch is a major player among live video platforms with 1.6 billion hours of content produced monthly, much of it by users age 25-34. That content is mostly livestreamed gameplay, but the app is an increasingly common distributor of news and information." Here's a livestream we did just last week (I livestream using YouTube instead of Twitch). Here's a playlist of 65 videos where I livestream my own experiences learning new software. More and more, I think, we'll just livestream learning experiences directly, and more and more, they won't be in the classroom. (p.s. I often livestream my gaming sessions as well).
Web: [Direct Link] [This Post]This article effectively draws a link between what it means to learn a discipline and the concept of epistemic justice. "In mastering a discipline, learners need to master the 'underlying game' associated with disciplinary epistemes reflecting ways of thinking and practicing within a particular discipline." Quite right. Consequently, "If these disciplinary epistemes are based on epistemic hegemonies from the global north, then they are potentially exclusionary by definition and will ensure that certain learners either never grasp the 'underlying game' or have significant difficulty in doing so." To oversimplify (only a bit), there are two approaches. One is to change the learner. That's colonization (of the person from the South by the values of the North). The other is to change the game. That's decolonization. The challenge, though, lies in how to decolonize a discipline without undermining its factual basis, utility or relevance. The discipline may, for example, devalue "knowledge that is derived from everyday experiences or common-sense ways of thinking," but it may do so for a very good reason. Image: Loring.
Web: [Direct Link] [This Post]This is an interesting question. Companies collect information about you, and you can demand to see what they've collected and correct it if the information is wrong. But what about what a company believes about you? For example, the data might be 'Stephen missed a payment on May 18', while the corresponding belief might be 'Stephen is a bad credit risk'. My rights with respect to the first seem to be different than my rights with respect to the second. And to the extent that these beliefs are increasingly generated by AI, we are developing a new class of information - 'generated information' - which we might not even be allowed to see, much less correct. Image: ChatGPT 4, "a line drawing of an AI with 'opinions'".
Web: [Direct Link] [This Post]This report (28 page PDF) focuses on four major ways the authors believe AI will serve education: personalized learning content and experiences; refined assessment and decision-making processes; optimization of teacher roles through augmentation and automation of tasks; and teaching both with and about AI. I'm not seeing this as especially visionary, despite the urgency of messages like "education systems must adapt to prepare young people for tomorrow's technology-driven economies." The meat of the report, I think, can be found in the nine case studies that form the second half of the report; I think they could all have been developed well before the current flurry of developments in AI (and indeed, probably were).
Web: [Direct Link] [This Post]GÉANT is a "collaboration of European National Research and Education Networks (NRENs)" that delivers "an information ecosystem of infrastructure and services to advance research, education, and innovation on a global scale." Today they announced their "decision to stop activities on the main GÉANT profiles on the X social media platform (Formerly Known As Twitter) as of 2 May 2024." Why? "We have seen Twitter go through radical transformations, changed ownership, morphing into X and into a completely different platform which increasingly amplifies hate speech, fake news, scams, extreme views, and illegal content. Verification badges that were once a symbol of trust have lost all meaning, essential features were dropped or limited to paying users, and costs seem to have been cut at the expense of security, privacy, and content moderation." If you are still using X/Twitter, you should ask, what are you supporting?
Web: [Direct Link] [This Post]This is in response to a contribution to an OAS meeting distributed in my office this morning. It is of course my set of opinions only, and not reflective of any official policy or practice, though I would add that most of these have been undertaken to one degree or another by various levels of Canadian government organizations.
Web: [Direct Link] [This Post]I want to say something like 'this paper describes a core tenet in connectivism', although of course we never conceived of it in anything like the richness and detail of the collective intelligence across 'scales and substrates' described here. This, in particular, is crucial: "collective intelligence is not only the province of groups of animals, and... an important symmetry exists between the behavioral science of swarms and the competencies of cells and other biological systems at different scales." The way networks work is tied up in the way evolution works, and these are tied up in how we describe learning and cognition generally. Or - how we should describe learning and cognition (as most people still labour under the mythology of folk-psychological information-processing pictures such as 'executive function' and 'cognitive load' theories).
Web: [Direct Link] [This Post]Drawing on an emerging new picture of animal consciousness, a group of researchers have signed this declaration recognizing animal consciousness. "Subjective experience requires more than the mere ability to detect stimuli. However, it does not require sophisticated capacities such as human-like language or reason. Phenomenal consciousness is raw feeling—immediate felt experience, be it sensory or emotional—and this is something that may well be shared between humans and many other animals." My similar sentiment is expressed here. It may be thought that an ethics of animal rights and welfare follows immediately, but given the way humans treat each other, we need not fear that the recognition of animal consciousness will force any new behaviours on our part (though it probably should).
Web: [Direct Link] [This Post]I mean, it should be obvious why we don't want advertising in AI chatbots, right? They're supposed to be trusted advisors. Imagine you went to your lawyer for advice on selling a house and they said, "I'd be glad to help you, but first, let me take this opportunity to recommend a McDonald's hamburger." Yeah, no. This applies doubly as the technology, still under development, already has trust issues. "Advertising in these generative AI chatbot experiences won't be a 'sustainable model' long term, according to Jaffe, unless CPMs 'go way up.' Building a subscription model for Ingenio's chatbots also means the publisher will have more control over revenue, the user relationship and distribution, he added."
Web: [Direct Link] [This Post]A recent meeting of UNESCO on digital education futures is notable for the resources distributed: "its AI Readiness Assessment tool to translate the Recommendations on the Ethics of AI into actions, as well as its Guidelines for the Governance of Digital Platforms to enable freedom of expression and inclusion while promoting a healthy information ecosystem in a digital era." Additionally, UNESCO's Guidance for the Use of Generative AI in Education and Research contains "a roadmap for regulating AI in education and strategies to address its profound risks and impact on teaching and learning," and a recent research report "which revealed gender biases and prejudices found in Large Language Models."
Web: [Direct Link] [This Post]I've been doing some work recently looking at task modeling. The time of linear and even circular models (like OODA) has passed. Today we're looking at learning and working in complex environments that require new tools. I started by thinking of circular network diagrams, looked at chord diagrams, but this seems to be getting closer to the reality. "The CL aims to capture the entanglement and dynamics between the Covid-19 crisis and the Swedish food system through data-based visualization. The methodology includes a creation of a constantly evolving database with systemic trends captured through qualitative research methods i.a. interviews with food system actors, or literature review." The tool they use is called Kumu.
Web: [Direct Link] [This Post]I like this a lot. "This e-book is a template for you to use to get started documenting your learning journey (aka an e-Book of One's Own - eBoOO). You are free to copy the book and use the template to make it your own. Instructions on how to do all this are included within. We look forward to seeing the results!" Via Terry Greene.
Web: [Direct Link] [This Post]