Bikepacking - Rimouski

OLDaily by Stephen Downes - Fri, 2024-07-26 20:37
Stephen Downes, YouTube, Jul 26, 2024

You may have noticed that the publishing schedule this week and next is a bit erratic. That's because I'm out on another cycling tour, this time from my home near Ottawa to Rimouski, Quebec, 800 km down the St. Lawrence River. At least, that's my objective. I'm currently writing from Magog, Quebec, but I'm about to get back on the bike. Anyhow, this link will take you to a YouTube playlist of daily videos from my tour, following in the footsteps of the many hiking and bikepacking vloggers who have gone before me (and whose work I am of course drawing upon).

Web: [Direct Link] [This Post]

AI models collapse when trained on recursively generated data

OLDaily by Stephen Downes - Fri, 2024-07-26 20:37
Ilia Shumailov, et al., Nature, Jul 26, 2024

I think this is well-known and well established, but here's an official source: "We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as 'model collapse'." It's a bit like photocopying the same image over and over again - eventually you just end up with static.
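The photocopy analogy can be made concrete with a toy sketch (my own illustration, not the paper's actual experiment): treat a "model" as just a Gaussian fitted to data, and train each generation only on samples drawn from the previous generation's model. Finite-sample refitting lets the fitted spread drift, and once it shrinks it tends to keep shrinking, so the tails of the original distribution disappear.

```python
import random
import statistics

# Toy illustration of "model collapse": each generation's "model" is a
# Gaussian fit to samples drawn from the previous generation's model.
# Because each training set is finite, the fitted parameters drift, and
# the fitted spread tends to collapse over generations -- the tails of
# the original distribution disappear, like photocopying a photocopy.

random.seed(0)

mu, sigma = 0.0, 1.0      # generation 0: the original content distribution
n = 30                    # small synthetic training set per generation
history = [sigma]

for _ in range(200):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.mean(samples)      # refit the model...
    sigma = statistics.stdev(samples)  # ...on its own synthetic output
    history.append(sigma)

print(f"initial sigma: {history[0]:.2f}, final sigma: {history[-1]:.2f}")
```

This is only a caricature of the paper's setting (real models are not Gaussians and real training is not maximum-likelihood refitting), but it captures the core mechanism: indiscriminate training on model-generated data compounds sampling error generation after generation.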

Web: [Direct Link] [This Post]

Flipboard Brings Local News to the Fediverse

OLDaily by Stephen Downes - Fri, 2024-07-26 20:37
Carl Sullivan, Flipboard, Jul 26, 2024

Flipboard hasn't been relevant to me in any way for years, but if they're bringing local news to the fediverse, they are suddenly relevant again. They write, "As part of our commitment to local news, Flipboard is bringing 63 regional and community titles to the fediverse." More, please.

Web: [Direct Link] [This Post]

Image resize and quality comparison

OLDaily by Stephen Downes - Fri, 2024-07-26 20:37
Simon Willison, Jul 26, 2024

What's neat isn't just the tool (which I could use to handle those awful webp images) but also that it was basically created by AI.

Web: [Direct Link] [This Post]

The Five Stages Of AI Grief

OLDaily by Stephen Downes - Tue, 2024-07-23 14:37
Benjamin Bratton, NOEMA, Jul 23, 2024

I completely agree with this point and have even made it in the recent past: "What is today called 'artificial intelligence' should be counted as a Copernican Trauma in the making. It reveals that intelligence, cognition, even mind (definitions of these historical terms are clearly up for debate) are not what they seem to be, not what they feel like, and not unique to the human condition." Except, I didn't call it a "trauma", I called it (as it properly is) a "revolution". Via Bryan Alexander.

Web: [Direct Link] [This Post]

Unpersoned

OLDaily by Stephen Downes - Tue, 2024-07-23 14:37
Cory Doctorow, Pluralistic, Jul 23, 2024

Cory Doctorow sounds the warning about being dependent on platforms. "The platforms have rigged things so that you must have an account with them in order to function, but they also want to have the unilateral right to kick people off their systems. The combination of these demands represents more power than any company should have, and Big Tech has repeatedly demonstrated its unfitness to wield this kind of power." Too true.

Web: [Direct Link] [This Post]

How to use Perplexity in your daily workflow

OLDaily by Stephen Downes - Tue, 2024-07-23 14:37
Michael Spencer, Alex McFarland, AI Supremacy, Jul 23, 2024

Perplexity is an AI chat application that answers questions for you. What makes it really useful is that it finds and cites its sources. I haven't worked with it yet (I certainly will) so I can't say how well it works, but this article is a really good starting point, offering practical applications and a number of examples.

Web: [Direct Link] [This Post]

Dispatches from the media apocalypse

OLDaily by Stephen Downes - Mon, 2024-07-22 14:37
Ben Werdmuller, Werd I/O, Jul 22, 2024

A long, detailed, and fascinating post from Ben Werdmuller on the future of online news. "There are two pivotal facts for every newsroom," he writes. "Their work must reach an audience, and someone must pay for it. The first is a prerequisite of the second: if nobody discovers the journalism, nobody will pay for it. So, reaching and growing an audience is crucial." The same is true for educational institutions, which lag news media by about 10-20 years. And what's crucial in the current environment is that the lock-down of social media and the spectre of the 'dead internet' have made reaching an audience almost impossible.

Web: [Direct Link] [This Post]

Things I was wrong about pt2: The Death of the VLE

OLDaily by Stephen Downes - Mon, 2024-07-22 14:37
Martin Weller, The Ed Techie, Jul 22, 2024

Another in Martin Weller's "I was wrong" series that I like so much. Here again he was not alone. "I think during the late 00s we were all still caught up in web 2.0 fever, and let's face it, naive about the robustness of third party tools." Instead, those tools all died (or became their own sort of silo) and the LMS - which by now is fully entrenched in administrative systems - survived.

Web: [Direct Link] [This Post]

Academic authors 'shocked' after Taylor & Francis sells access to their research to Microsoft AI

OLDaily by Stephen Downes - Mon, 2024-07-22 08:37
Matilda Battersby, The Bookseller, Jul 22, 2024

All I can say is, what did you expect? "Authors claim they have not been told about the AI deal, were not given the opportunity to opt out and are receiving no extra payment." I think it's good that academic papers are being used to train AI - it's far better than using Twitter posts. But deals like this are characteristic of what we will get if we continue to support a closed-access publication system. If we want open access to AI, we have to offer open access to our publications. If you don't like the idea of corporate AI running everything, then you have to be willing to contribute to training the alternative. Simply saying "there should be no AI" is not an alternative. Via Robin DeRosa.

Web: [Direct Link] [This Post]

After Tesla and OpenAI, Andrej Karpathy’s startup aims to apply AI assistants to education

OLDaily by Stephen Downes - Mon, 2024-07-22 08:37
Rebecca Bellan, TechCrunch, Jul 22, 2024

According to this story, "Andrej Karpathy, former head of AI at Tesla and researcher at OpenAI, is launching Eureka Labs, an 'AI native' education platform." While most pundits will criticize the AI component (including the three-handed student in the 'school of the future' image) my own concern is that while the vision of AI may be futuristic, the vision of education doesn't step at all beyond the traditional collegiate model. Why oh why if we had AI in our pockets would we build a cathedral-sized learning institution out of glass?

Web: [Direct Link] [This Post]

DIF announces DWN Community Node

OLDaily by Stephen Downes - Fri, 2024-07-19 17:37
Decentralized Identity Foundation - Blog, Jul 19, 2024

According to this item, "the Decentralized Identity Foundation (DIF) today announced the availability of the Decentralized Web Node (DWN) Community Instance... personal data stores that eliminate the need for individuals to trust apps to responsibly use and protect their data." Now I haven't tried using this yet and don't know exactly how it works. But it does relate to a use case I'm working on - creating a web application that lives entirely in the browser using nothing but local storage. DWN would allow me to use the same credentials on different computers - at least, I think that's how it would work. (Why a browser-based application? Because, as someone once said, "the fediverse lives in the client". More on this in the fall.)

Web: [Direct Link] [This Post]

The Library Is a Commons

OLDaily by Stephen Downes - Fri, 2024-07-19 17:37
Emily Drabinski, In These Times, Jul 19, 2024

A librarian describes the institution of the library in an overtly political frame that is, well, not wrong. "When library workers open the door in the morning, they give the public access to public space. When library workers check out a book or check it back in, they circulate public resources... Library workers understand that we are on the front lines of the movement for public ownership of the public good." Via Robin DeRosa.

Web: [Direct Link] [This Post]

Global cyber outage grounds flights, hits banks, telecoms, media

OLDaily by Stephen Downes - Fri, 2024-07-19 17:37
Reuters, Jul 19, 2024

The short story is that a faulty security update brought down a wide range of services around the world, most notably those dependent on Microsoft products. The technology responsible, the CrowdStrike Falcon platform, "converges security and IT to protect all key areas of risk." As many have noted, the widespread nature of the outage offers a good argument for decentralization.

Web: [Direct Link] [This Post]

Updating OER Foundation Web Services for July 2024 | OERu Technology Blog

OLDaily by Stephen Downes - Fri, 2024-07-19 08:37
Dave Lane, OERu Technology, Jul 19, 2024

This is an update on the tech stack the OER Foundation's Dave Lane has been managing, as well as an update of sorts on the turmoil in New Zealand's polytech and vocational higher education sector. For content: WordPress, Drupal and SilverStripe (which is new to me); for video hosting, PeerTube; for events, Mobilizon; fediverse tools including Mastodon, PieFed and PixelFed; streaming: Owncast; single sign-on: Authentik. And many more things. I've worked with a lot of these systems, with and without guidance from Lane, and maintaining this software is not trivial - but essential, especially if you want to work in an open source world.

Web: [Direct Link] [This Post]

Active reading: how to become a better reader

OLDaily by Stephen Downes - Fri, 2024-07-19 08:37
Anne-Laure Le Cunff, Ness Labs, Jul 19, 2024

This is all pretty basic stuff, but if you're the sort of person who reads by starting with the first sentence and continuing until you get to the last sentence, hoping against hope that you remember some of the content in between, then this guide is for you. This article calls it 'active reading' - I think of it as 'reading analytically'. When you understand that text isn't just 'stories', that it's a complex multi-dimensional artifact, you begin to understand what's actually going on in writing, and (perhaps) in the mind of the person who wrote the text. It's how I regard just about any text I encounter (OLDaily exists because of my practice of paraphrasing anything I read).

Web: [Direct Link] [This Post]

The Model Isn't the Territory, Either

OLDaily by Stephen Downes - Fri, 2024-07-19 08:37
Douglas Rushkoff, Medium, Jul 19, 2024

In 1985 I filled more than 300 pages of my Master's thesis Models and Modality arguing much the same point as is raised here: the model isn't the reality. It got tricky because back then (and still, I think) modal semantics were based on possible-worlds models, and these possible worlds were (according to some philosophers) real. Anyhow, Douglas Rushkoff is making the same case here, first with respect to fractals, and then with respect to AI, in about 1/100th of the space. And he takes it a step further: perhaps these new models can help remind us that the models we create in society - everything from money to traffic to property - are not real. They're things we create, arbitrarily. And "we can't fight over these created models and histories anymore. They cannot be resolved. They are not real. They are models. Games. Rhetoric. Approximations. They are figures, and never ground." P.S. I learned later that Laozi made much the same case about power, culture and virtue; and if we want, we could add Nietzsche to the mix.

Web: [Direct Link] [This Post]

Academic journals are a lucrative scam – and we’re determined to change that

OLDaily by Stephen Downes - Fri, 2024-07-19 08:37
Arash Abizadeh, The Guardian, Jul 19, 2024

Matthew Cheney calls this a "fierce" article, and the title alone makes it hard to disagree. "The commercial stranglehold on academic publishing is doing considerable damage to our intellectual and scientific culture. As disinformation and propaganda spread freely online, genuine research and scholarship remains gated and prohibitively expensive." Or as I say (a lot): democracy dies behind a paywall.

Web: [Direct Link] [This Post]

Language is a Tool for Communication, Not for Thinking

OLDaily by Stephen Downes - Thu, 2024-07-18 20:37
Irving Wladawsky-Berger, Jul 18, 2024

The main point of this post is to introduce the Elemental Cognition (EC) AI platform, "whose architecture follows human biology by separating its natural language components from its reasoning, problem solving engine." In this, argues the author, the architecture functions analogously with the human brain, treating language as a communications tool only, while actual cognitive functions are handled in sub-symbolic neural mechanisms (characterized here as "multiple precise logical and mathematical methods"). I think this is a good way to treat large language models (LLMs) like ChatGPT generally - as communications interfaces, not reasoning devices.

Web: [Direct Link] [This Post]

Will AI Ever Have Common Sense?

OLDaily by Stephen Downes - Thu, 2024-07-18 17:37
Steven Strogatz, Quanta Magazine, Jul 18, 2024

So this is a pretty interesting interview. The answer to the question in the title, I hope, is 'No', but not for the reason you may think. Here's my reasoning. As Yejin Choi says, "it's reasonable to suspect that humans don't necessarily try to predict which word comes next, but we rather try to focus on making sense of the world. So we tend to abstract away immediately." Now that's not quite true - AI also 'abstracts', but can use far more data points than a human, so its abstractions just look like statistical generalizations to us, not common sense generalizations. But like a human, it can also generalize too quickly and inappropriately - that's why, for example, it will respond (like many humans) incorrectly to "If I left five clothes to dry out in the sun, and it took them five hours to dry completely, how long would it take to dry 30 clothes?" (It's not '30 hours', of course, it's 'five'). Human common sense (for example, folk psychology) has more in common with the answer '30 hours' than 'five'. And the thing is: an AI can be trained to avoid such errors. But humans, bless them, will keep on making them. And that is just common sense.

Web: [Direct Link] [This Post]
