The argument that AI violates copyright is being used mostly to eliminate it as competition; the question of whether it actually copies content (it doesn't) is moot. This is clear in this article, which describes an agreement whereby Picsart trains an AI model on Getty's licensed images, creating a new image platform for both companies. The play is that this is legally safe - a pitch risk-averse lawyers will embrace with enthusiasm: "the deal aims to provide "commercially safe AI-generated imagery" for creators, marketers and small businesses (and) it will offer customers commercial rights and indemnity for the images they create." It feels more like a protection racket than a service, even as it makes the hollow promise to develop "new ways to compensate the creators of the images used to train the AI model." See also this deal between Business Insider and OpenAI. The common foe shared by all, of course, is genuinely open access AI, which they would like to shut down as soon as possible.
Web: [Direct Link] [This Post]While most people in the world use Chrome, people who understand the web, I think, use Firefox (and of course I count myself among those). It's more secure, properly blocks advertising (with extensions like uBlock Origin) and won't let Google or Microsoft track you - important if you're working in what you would like to be a secure environment (it's funny how many people complain about surveillance culture but won't even take the basic step of using a more private browser). This post points to a few nifty features - including some that were new to me: editing PDF files and screen capture.
Web: [Direct Link] [This Post]Agency is key to understanding learning. Agency here is thought of as "deliberation, choosing between options, and bringing reasons to bear on what we do." This video interview (no transcript, sorry) asks the core question: can AI exhibit genuine agency? Humans, thought of as machines, certainly can, but what about 'artificial' machines? We can't rule it out without evidence or a good argument. But can a computing machine exhibit agency? Here, the need for evidence is on the other side: there's no good reason to assume such a machine would achieve agency. "Understanding is not producing a string of symbols," which is what we have today. There are questions of, for example, consciousness. And nobody has been able to say with any clarity what the endpoint of an artificial intelligence with agency would look like. It has to act in order to pursue some goal. But what would a system-generated goal look like?
Web: [Direct Link] [This Post]Very short post with the following message: "If you want to know how good a team is, watch them when things are tough. See how they support one another." But I'm here for the image, which, though it's probably AI-generated, really works for me.
Web: [Direct Link] [This Post]You might not like this, but this, I think, is the future of assessment: "Stealth assessment is a learning analytics method, which leverages the collection and analysis of learners' interaction data to make real-time inferences about their learning." According to the authors, the success of this approach depends on four sets of models. The central question is whether these models can be developed algorithmically, that is, by using AI. The full version is behind a paywall, but don't bother - you get the sense from the outline, and most of the relevant work was already openly published here. Image: Rahimi.
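The "real-time inferences" at the heart of stealth assessment can be sketched with a toy Bayesian Knowledge Tracing update, a common evidence-model technique in this literature. The parameters and interaction stream below are illustrative, not taken from the paper:

```python
def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update P(skill mastered) after observing one learner interaction.

    p_slip:  chance a master answers incorrectly anyway
    p_guess: chance a non-master answers correctly anyway
    p_learn: chance the skill is acquired between interactions
    (All values here are illustrative defaults.)
    """
    if correct:
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    # allow for learning between interactions
    return posterior + (1 - posterior) * p_learn

# A stream of observed interactions (True = task completed correctly)
p = 0.3  # prior belief that the skill is mastered
for outcome in [True, True, False, True, True]:
    p = bkt_update(p, outcome)
print(round(p, 3))
```

Each interaction nudges the mastery estimate up or down without any explicit test, which is what makes the assessment "stealth."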
Web: [Direct Link] [This Post]Alex Usher paints a picture of generally declining government support for higher education since 1971. Through to 1996, this resulted in outright funding shortfalls (this was when I was in higher ed). Since then, revenue generation has become the name of the game. But, he says - correctly, I think - this era is coming to an end. There's no support for continued tuition increases. And government has placed limits on international student recruitment. So institutions will have to focus on cutting costs. In the current model, this doesn't work - you can't cut your way to sustainability; each cut makes it more expensive to support the students that remain. The model - as I predicted in (checks) 1998 - has to change.
Web: [Direct Link] [This Post]As the summary states, "The Verifiable Credentials Working Group has just published a Working Group Note of Verifiable Credentials Overview." It's basically a three-part model: you have an issuer, a holder, and a verifier. There are two major approaches, "enveloping proofs or embedded proofs. In both cases, a proof cryptographically secures a Credential (for example, using digital signatures). In the enveloping case, the proof wraps around the Credential, whereas embedded proofs are included in the serialization, alongside the Credential itself."
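The distinction between the two proof styles can be sketched in a few lines, using an HMAC as a stand-in for a real digital signature (actual Verifiable Credentials use JOSE/COSE envelopes or Data Integrity proofs; every name and value below is illustrative):

```python
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # stand-in for the issuer's private key

def sign(payload: bytes) -> str:
    # HMAC stands in for a real digital signature here
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

credential = {
    "issuer": "https://uni.example",
    "holder": "did:example:alice",
    "claim": {"degree": "BSc"},
}
payload = json.dumps(credential, sort_keys=True).encode()

# Enveloping proof: the proof wraps around the credential (cf. a JWT)
enveloped = {"payload": payload.decode(), "signature": sign(payload)}

# Embedded proof: the proof sits alongside the credential's own fields
embedded = dict(credential)
embedded["proof"] = {"type": "toy-hmac", "value": sign(payload)}

# A verifier re-computes the signature over the credential to check it
assert hmac.compare_digest(enveloped["signature"], sign(payload))
```

Either way, the verifier needs only the credential and the issuer's verification key; the holder sits in between, presenting the credential without the issuer being contacted.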
Web: [Direct Link] [This Post]I think this is a good idea, independently of any preconceptions we may have about the outcome. "We need a comprehensive evaluation–a metaphorical "doll test"–that can reveal how AI shapes students' perceptions, attitudes, and learning outcomes, especially when used extensively and at early ages." Such studies should not look for specific things, should not cater to people's fears about AI, but should be broadly designed so as to capture any effects, whatever sort they may be.
Web: [Direct Link] [This Post]It is, of course, no surprise to find three fake philosophy journals making it into the top 10 in Scopus rankings. This sort of fraud is rampant. But note how they accomplish this: "The trick is simple: The Addleton journals extensively cross-cite each other." It's a trick not limited to AI-generated journals, and it results in some very unserious work being taken seriously.
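A toy calculation shows how the cross-citation trick inflates citation-based rankings (the journal names and counts here are invented for illustration):

```python
from collections import Counter

# citations: (citing_journal, cited_journal) pairs
citations = []
ring = ["Addleton A", "Addleton B", "Addleton C"]

# each journal in the ring cites every other ring journal 50 times
for src in ring:
    for dst in ring:
        if src != dst:
            citations += [(src, dst)] * 50

# an independent journal earns 40 genuine citations from outside sources
citations += [("Outside Journal %d" % i, "Honest Journal") for i in range(40)]

# a naive ranking just counts citations received
counts = Counter(dst for _, dst in citations)
print(counts.most_common(4))
```

With no effort beyond mutual citation, each ring journal "receives" 100 citations and outranks the honestly cited journal, which is why rankings that don't discount self- and cross-citation are so easy to game.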
Web: [Direct Link] [This Post]Teachers won't win in a war against AI, writes Deanna Mascle. Nor is it an answer to encourage students to 'feed content into the capitalist techbro maw' (writers using expressions like that are hard to take seriously). "The solution to AI's entry into the writing classroom is not hysteria but a return to the writing workshop. Know your students, know their writing, and create a community where your students know the writing of their peers (because you write together as a community) and slay the AI monster together."
Web: [Direct Link] [This Post]This article describes how AIs are trained using 'the story plot game' whereby the systems are gradually trained to find missing words in a story pattern, leading to their ability to find a compelling story from nothing but pure static. "Just as the children in our game were asked to uncover the hidden plot, the model is instructed to remove the noisy pixels and return a coherent image." The effect of the prompt (or other context) pushes the generated story (or image) in certain directions, based on similarity with the input provided.
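The iterative denoising the article describes can be sketched as a toy: starting from pure static, each step removes a little noise by nudging the output toward a pattern selected by the prompt. Everything below (the patterns, the step rule) is invented for illustration; a real diffusion model learns its denoiser from data rather than being handed the target:

```python
import random

random.seed(0)

# two "learned" patterns the toy model knows how to reconstruct
patterns = {"cat": [1.0, 0.0, 1.0, 0.0], "dog": [0.0, 1.0, 0.0, 1.0]}

def denoise_step(x, target, strength=0.3):
    # nudge each "pixel" slightly toward the target pattern,
    # the way a trained model removes a bit of noise per step
    return [xi + strength * (ti - xi) for xi, ti in zip(x, target)]

def generate(prompt, steps=20):
    # start from pure static, as in the article's analogy
    x = [random.gauss(0, 1) for _ in range(4)]
    target = patterns[prompt]  # the prompt steers the denoising
    for _ in range(steps):
        x = denoise_step(x, target)
    return [round(v, 2) for v in x]

print(generate("cat"))
```

After twenty small steps the static has been pulled almost exactly onto the prompted pattern, which is the core of the "find the hidden plot in the noise" analogy.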
Web: [Direct Link] [This Post]This article describes some of the issues hiding in the weeds slowing the development of a proper federated identity system. In a nutshell, you need public key transparency (that is, everyone's public key is known by everyone else) and this article describes a registry system that would make it possible.
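The registry idea can be sketched as an append-only, hash-chained log in the style of Certificate Transparency. This is a toy, not the system the article proposes; the class and field names are mine:

```python
import hashlib
import json

class TransparencyLog:
    """Toy append-only key registry: each entry commits to the previous
    one, so rewriting any past entry changes every later hash."""

    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis hash

    def register(self, user, public_key):
        record = {"user": user, "key": public_key, "prev": self.head}
        self.head = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        return self.head

    def verify(self):
        # replay the whole chain; any altered entry breaks the hashes
        h = "0" * 64
        for record in self.entries:
            if record["prev"] != h:
                return False
            h = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
        return h == self.head

log = TransparencyLog()
log.register("alice@example.org", "pk-alice-1")
log.register("bob@example.org", "pk-bob-1")
print(log.verify())          # True: history is intact
log.entries[0]["key"] = "pk-evil"
print(log.verify())          # False: history was rewritten
```

Because everyone can replay the log and reach the same head hash, a registry operator can't quietly swap someone's public key, which is the "transparency" the article is after.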
Web: [Direct Link] [This Post]"Open Rights Group has published its six priorities for digital rights that the next UK government should focus on," reports Ben Werdmuller. In this short post he highlights the third priority, which warns against the use of predictive policing. The other priorities include: secure messaging, digital sanctuary for refugees, freedom of expression online, data protection rights, and protection against tracking. It's interesting to me to compare the newfound interest in digital rights and ethics with what I produced in 1999 as the Cyberspace Charter of Rights (today as relevant as it ever was).
Web: [Direct Link] [This Post]Fear, Uncertainty and Doubt (FUD) is the name given to marketing from commercial software companies in an effort to dissuade people from opting for open source. In this post Tom Woodward argues that some FUD is "necessary" in an effort to convince people at the college where he works to use the official software rather than installing something on their own. "Choosing to use unvetted applications is a significant risk," he writes. Not only are there security concerns, questions may also be asked about whether the software preserves privacy, is accessible, and is even going to be around next year.
Web: [Direct Link] [This Post]This post describes how "two competing groups of weird nerds in Silicon Valley have been locked in a cringe philosophical battle over what both sides believe is the inevitable rise of an artificial super intelligence." On the one hand are the Effective Altruists (EA) who believe that AI is the best route to a better society (provided it is in the right hands). On the other side are the effective accelerationists (e/acc) who also believe AI will change society, and aren't really worried about the consequences. Neither group matters, argues Ryan Broderick. "They both result in the same outcome — an entire world run by automations owned by the ultra-wealthy. Which is why the most important question right now is not, 'how safe is this AI model?' It's, 'do we even need it?'"
Web: [Direct Link] [This Post]This article reports on the gradual increase in the number of microcredentials available in Canada and recommends steps to make them more widely accessible and useful. This, though, remains the sticking point in any credential system: "The learner's capabilities are assessed by a qualified assessor who determines if they possess the capabilities or not." This creates costs, and undermines the economies of scale credentials might alternatively offer. AI-based assessment would change that, were it ever to be widely accepted and trusted.
Web: [Direct Link] [This Post]This is a good reference post for people who aren't familiar with how computers work. The CPU is the core of a computer, managing and orchestrating operations involving data and devices. This article surveys terms related to CPUs, describes types of CPUs and some CPU companies, and looks at the 'next wave' of CPUs, including specialized chips and quantum CPUs. Good accessible read.
Web: [Direct Link] [This Post]This is from a column by Kwame Anthony Appiah called 'The Ethicist' that appears in the New York Times. The assertion is that "It's not hypocritical to use A.I. yourself in a way that serves your students well, even as you insist that they don't use it in a way that serves them badly." The reason is that the purposes of the two uses are different: the student is attempting to learn, while the teacher is attempting to support learning.
Web: [Direct Link] [This Post]This article presents a version of what must be an advertiser's dream: an 'authenticated web' where the user must be logged in (and typically a paying subscriber) in order to access online content. This (frankly) is exactly what we don't want a distributed identification system to amount to, but as this article makes clear, publishers and advertisers would very much like to use identity to lock down the web.
Web: [Direct Link] [This Post]The summary in this short post is the key message: "In session after session, attendees at EIC are hearing the message that decentralized identity is the answer to their identity problems." I'm also convinced of this. But making it happen is not trivial.
All content on the site authored by Ulrich Schrader is licensed under a Creative Commons-License. Other licenses may apply for other authors.
Creative Commons explained