General · The Gradient · Reliable
After Orthogonality: Virtue-Ethical Agency and AI Alignment

Preface: This essay argues that rational people don’t have goals, and that rational AIs shouldn’t have goals. Human actions are rational not because we direct them at some final ‘goals,’ but because we align actions to practices [1]: networks of action…


This summary was auto-generated by AIMaster.ink from the original article published on The Gradient.

