After Orthogonality: Virtue-Ethical Agency and AI Alignment
Preface

This essay argues that rational people don't have goals, and that rational AIs shouldn't have goals. Human actions are rational not because we direct them at some final "goals," but because we align our actions to practices [1]: networks of action…
